SALT LAKE CITY — Americans say the spread of made-up news is a bigger problem for the country than terrorism, illegal immigration, racism or sexism, according to a new Pew Research Center poll.
Now, Twitter is turning to artificial intelligence to help solve the problem.
The social media company bought Fabula AI, a London-based startup that uses artificial intelligence to automatically detect fake news, Twitter’s chief technology officer, Parag Agrawal, announced in a June 3 blog post. The financial terms of the deal between Twitter and Fabula have not been disclosed.
“This strategic investment … will be a key driver as we work to help people feel safe on Twitter and help them see relevant information,” Agrawal wrote.
The acquisition comes at a time when fake news and disinformation are frequently being deployed as weapons in political warfare. The U.S. government concluded that Russians led a systematic campaign to interfere with the 2016 presidential election using fake social media posts, advertising and videos that seized on divisive issues like race relations, immigration and gun rights. More recently, an edited video made House Speaker Nancy Pelosi appear impaired or drunk, a doctored image suggested Sen. Elizabeth Warren had a racist doll decorating her kitchen cabinet, and false stories have circulated claiming Sen. Kamala Harris had an affair with a married man and accusing South Bend, Indiana, Mayor Pete Buttigieg of sexual assault.
“The impact of made-up news goes beyond exposure to it and confusion about what is factual,” Amy Mitchell, director of journalism research at Pew Research Center, told the Guardian. “Americans see it influencing the core functions of our democratic system.”
But identifying fake news can be tricky, even for humans. The term has been used to describe everything from one-sided reporting that might be misleading to completely fabricated information spread intentionally for malicious reasons. Fake news could be an edited image or a real photo shared in the wrong context. It could be an article with one inaccurate sentence or an article with no truths at all.
Two years ago, Facebook founder Mark Zuckerberg wrote an open letter that said ending the spread of “fake news” would take “many years” because it would require the development of artificial intelligence “that can read and understand news.”
Fabula, however, has taken a different approach, one that Agrawal calls “novel.” The company’s algorithms analyze how fake news spreads rather than the content itself.
“There is … a mounting amount of evidence that shows that fake news and real news spread differently,” Michael Bronstein, a Fabula co-founder and professor of computing at Imperial College London, told TechCrunch. He pointed to a 2018 study by researchers at the Massachusetts Institute of Technology that found false news spreads “farther, faster, deeper and more broadly” than true news.
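That propagation-based idea can be made concrete with a toy example. The sketch below is not Fabula's system (the company has reportedly applied geometric deep learning to the full diffusion graph, and its model is proprietary); it is a minimal Python illustration, with a hypothetical cascade, of the kind of shape signals the MIT study highlighted — how deep, how wide, and how fast a story's retweet cascade grows.

```python
from collections import defaultdict, deque

def cascade_features(edges, timestamps, root="root"):
    """Summarize the shape of a retweet cascade.

    edges: (parent, child) retweet pairs; timestamps: node -> seconds
    after the original post. Returns (depth, max_breadth, velocity).
    All inputs here are hypothetical illustration data.
    """
    children = defaultdict(list)
    for parent, child in edges:
        children[parent].append(child)

    # Breadth-first walk from the original post: track how many hops
    # the story travels (depth) and its widest single level (breadth).
    depth, max_breadth = 0, 1
    level = deque([root])
    while level:
        next_level = deque()
        for node in level:
            next_level.extend(children[node])
        if next_level:
            depth += 1
            max_breadth = max(max_breadth, len(next_level))
        level = next_level

    # Spread velocity: retweets per second over the cascade's lifetime.
    span = max(timestamps.values()) - min(timestamps.values())
    velocity = len(edges) / span if span else float(len(edges))
    return depth, max_breadth, velocity

# A made-up four-retweet cascade: root -> a, b; a -> c; c -> d.
edges = [("root", "a"), ("root", "b"), ("a", "c"), ("c", "d")]
timestamps = {"root": 0, "a": 10, "b": 20, "c": 30, "d": 40}
print(cascade_features(edges, timestamps))  # (3, 2, 0.1)
```

In a real detector, features like these would feed a trained classifier rather than being read off directly; the point is only that the inputs come from how the story moved, not from what it said.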
Another Fabula co-founder, Damon Mannion, defines fake news as “stories published on social media containing intentionally false information.” To check for accuracy, Fabula relied on data from …
Source: Deseret News – Top stories