AI can now spot fake news generated by AI

This AI is one step ahead of… itself. 

Josh Goldman/CNET

Researchers at Harvard University and the MIT-IBM Watson AI Lab have created a tool to help combat the spread of misinformation. The tool, called GLTR (for Giant Language Model Test Room), uses artificial intelligence to detect the very statistical text patterns that give AI away, according to the team’s June report. 

GLTR highlights each word in a passage based on how predictable it is given the words before it: green marks the most predictable words, yellow and red mark progressively less predictable ones, and purple marks the least predictable.
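Under the hood, the idea is simple: run a language model over the text and check how highly it ranks each word that actually appears among its predictions. Below is a minimal sketch of that scoring loop, not the researchers' own code; it assumes the open-source Hugging Face transformers package and the public GPT-2 model, and uses the rank buckets the GLTR paper describes (top 10, top 100, top 1,000).

```python
# A minimal sketch of a GLTR-style scoring loop, not the researchers' own code.
# Assumes the Hugging Face "transformers" package and the public GPT-2 model;
# each token is scored by the rank the model assigns it given the preceding text.
import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

def gltr_colors(text: str):
    """Return (token, color) pairs using GLTR-style rank buckets."""
    ids = tokenizer(text, return_tensors="pt").input_ids
    with torch.no_grad():
        logits = model(ids).logits  # shape: (1, seq_len, vocab_size)
    results = []
    for i in range(1, ids.shape[1]):
        token_id = ids[0, i].item()
        # Rank = how many vocabulary entries the model scored higher
        # than the token that actually appears (0 = the model's top guess).
        rank = (logits[0, i - 1] > logits[0, i - 1, token_id]).sum().item()
        if rank < 10:
            color = "green"    # among the model's top-10 predictions
        elif rank < 100:
            color = "yellow"
        elif rank < 1000:
            color = "red"
        else:
            color = "purple"   # very surprising to the model
        results.append((tokenizer.decode([token_id]), color))
    return results

for token, color in gltr_colors("The quick brown fox jumps over the lazy dog."):
    print(f"{color:>6} {token!r}")
```

Note that GPT-2 scores subword tokens rather than whole words, but the signal is the same: human-written text tends to mix all four colors, while machine-generated text skews heavily toward green and yellow.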

A tool like this could come in handy for social media sites such as Twitter and Facebook, which have to contend with rampant bot-generated content.

Sebastian Gehrmann, one of the minds behind GLTR, said that as text-generation methods become more sophisticated, malicious actors could abuse them to spread false information or propaganda. 

“Someone with enough computing power could automatically generate thousands of websites with real looking text about any topic of their choice,” Gehrmann said. “While we have not quite arrived at this point of focused generation yet, large language models can already generate text that is indistinguishable from human-written text.”

Gehrmann said his team conducted a study to see whether language-processing students could differentiate “real” text from AI-generated text. Without help, the students’ accuracy was 54%, barely above random guessing. Using GLTR improved their detection rate to 72%. 

“We hope that GLTR can inspire more research toward similar goals, and that it successfully showed that these models are not entirely too dangerous if we can develop defense mechanisms,” Gehrmann said.

GLTR is free and available for anyone to try.

Originally published July 31, 1:43 p.m. PT.
Update, Aug. 2: Adds comments from Harvard researcher Sebastian Gehrmann.
