Friday, November 8, 2019

TECHNOLOGY NEWS


In February of this year, the nonprofit artificial intelligence research lab OpenAI announced that its new algorithm, GPT-2, could write believable fake news in mere seconds. Rather than release the bot to the world, OpenAI deemed it too dangerous for public consumption and held it back. The firm spent months opening up pieces of the underlying technology so it could evaluate how they were used. Citing no “strong evidence of misuse,” OpenAI has now made the full GPT-2 bot available to all.

OpenAI designed GPT-2 to consume text and produce summaries and translations. However, the researchers became concerned when they fed the algorithm plainly fraudulent statements. GPT-2 could take a kernel of nonsense and build a believable narrative around it, going so far as to invent studies, expert quotes, and even statistics to back up the false information. You can see an example of GPT-2’s text generation abilities below.

You can play around with GPT-2 online on the Talk to Transformer page. The site has already been updated with the full version of GPT-2. Just add some text, and the AI will continue the story. 
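If you would rather run the released model yourself than use the web demo, the snippet below is a minimal sketch of the same “continue the story” workflow. It assumes the open-source Hugging Face transformers library, which hosts the full 1.5-billion-parameter weights under the name gpt2-xl; the prompt text is purely illustrative and not taken from the article.

```python
# Minimal sketch: continue a prompt with the full GPT-2 model.
# Assumes the Hugging Face "transformers" library (pip install transformers torch);
# "gpt2-xl" is the model-hub identifier for the 1.5B-parameter release.
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2-xl")
model = GPT2LMHeadModel.from_pretrained("gpt2-xl")
model.eval()

# Hypothetical prompt, used only for illustration.
prompt = "A new study claims that drinking seawater improves memory."
input_ids = tokenizer.encode(prompt, return_tensors="pt")

# Sampling with top-k/top-p keeps the continuation varied instead of
# repeating the single most likely phrase over and over.
output_ids = model.generate(
    input_ids,
    max_length=200,
    do_sample=True,
    top_k=50,
    top_p=0.95,
)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```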

The deluge of fake news was first called out in the wake of the 2016 election, when shady websites run by foreign interests spread misinformation, much of which gained a foothold on Facebook. OpenAI worried that releasing a bot that could pump out fake news in large quantities would be dangerous for society. Some AI researchers, however, felt the firm was just looking for attention. This technology or something like it would be available eventually, they said, so why not release the bot so other teams could develop ways to detect its output?

An example of GPT-2 making up facts to support the initial input.

Now here we are nine months later, and you can download the full model. OpenAI says it hopes that researchers can better understand how to spot fake news written by the AI. However, it cautions that its research shows GPT-2 can be tweaked, or fine-tuned, to take extreme ideological positions that could make it even more dangerous.
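For a sense of what that “tweaking” means in practice, here is a rough sketch of fine-tuning a smaller GPT-2 checkpoint on a custom text corpus. It is a generic illustration, not OpenAI’s procedure: the corpus.txt file, the hyperparameters, and the use of the Hugging Face transformers library are all assumptions made for the example.

```python
# Rough fine-tuning sketch: adapt GPT-2 to whatever tone or viewpoint is in
# the training corpus. "corpus.txt" and all hyperparameters are hypothetical.
import torch
from torch.utils.data import DataLoader
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")   # small checkpoint for brevity
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.train()

# Read the (hypothetical) in-domain corpus and split it into fixed-length blocks.
with open("corpus.txt", encoding="utf-8") as f:
    ids = tokenizer.encode(f.read())
block = 512
examples = [torch.tensor(ids[i:i + block]) for i in range(0, len(ids) - block, block)]
loader = DataLoader(examples, batch_size=2, shuffle=True)

optimizer = torch.optim.AdamW(model.parameters(), lr=5e-5)
for batch in loader:
    # Passing the input ids as labels makes the model compute the standard
    # next-token language-modeling loss internally.
    loss = model(batch, labels=batch).loss
    loss.backward()
    optimizer.step()
    optimizer.zero_grad()

model.save_pretrained("gpt2-finetuned")   # reload later with from_pretrained
```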

OpenAI also says that its testing shows detecting GPT-2 material can be challenging. Its best in-house methods can identify 95 percent of GPT-2 text, which it believes is not high enough for a completely automated process. The worrying thing here is not that GPT-2 can produce fake news, but that it can potentially do it extremely fast and with a particular bias. It takes people time to write things, even if it’s all made up. If GPT-2 is going to be a problem, we’ll probably find out in the upcoming US election cycle.
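The article does not describe OpenAI’s in-house detector, but the general approach to spotting generated text is to run it through a classifier trained to separate machine output from human writing. The sketch below assumes the Hugging Face transformers library and a community-shared RoBERTa-based detector; the model name and the sample passage are assumptions for illustration, not the 95-percent-accurate method mentioned above.

```python
# Sketch of classifier-based detection of machine-generated text.
# The model identifier below is an assumed, publicly shared RoBERTa detector;
# it is not the in-house method the article credits with 95 percent accuracy.
from transformers import pipeline

detector = pipeline("text-classification", model="roberta-base-openai-detector")

passage = "Scientists confirmed today that the recycling crisis has officially ended."
print(detector(passage))  # a predicted label (human vs. generated) with a confidence score
```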

from ExtremeTech https://ift.tt/2CskUla
