Facebook bans deepfake videos and manipulated content from site


FILE PHOTO: Silhouettes of laptop users are seen next to a screen projection of a Facebook logo, March 28, 2018. REUTERS/Dado Ruvic/File Photo

Facebook said in an announcement Monday that deepfake videos and manipulated media will be banned from the social media site.

The company said in a statement that it was taking a multi-pronged approach to address the issue, including investigating deceptive behaviors in AI-generated content and partnering with academia, government, and industry to better identify manipulated content.

“Manipulations can be made through simple technology like Photoshop or through sophisticated tools that use artificial intelligence or ‘deep learning’ techniques to create videos that distort reality – usually called ‘deepfakes,’” the company said in a statement. “While these videos are still rare on the internet, they present a significant challenge for our industry and society as their use increases.”

The company defined the type of “deepfake” content that will be removed from the site as the following:

- It has been edited or synthesized – beyond adjustments for clarity or quality – in ways that aren’t apparent to an average person and would likely mislead someone into thinking that a subject of the video said words that they did not actually say.
- It is the product of artificial intelligence or machine learning that merges, replaces, or superimposes content onto a video, making it appear to be authentic.

The policy does not affect content that is manipulated for the purpose of comedy or satire, the company said in the statement.

Videos that are flagged will be subject to review by third-party fact-checkers to determine whether the content is false. At that point, Facebook will “significantly reduce its distribution” in news feeds, or reject it if it is attempting to run as an ad.

Moreover, people who attempt to share such content before it is fully removed from the site will be shown a warning alerting them that the content is false.

The company added that it has established a partnership with Reuters to “help newsrooms worldwide to identify deepfakes and manipulated media through a free online training course.”

“News organizations increasingly rely on third parties for large volumes of images and video, and identifying manipulated visuals is a significant challenge,” the statement read. “This program aims to support newsrooms trying to do this work.”

In September, Facebook CTO Mike Schroepfer said the company was making its own deepfake content in order to better detect manipulated content for removal, Business Insider’s Alexei Oreskovic reported.

“The goal of the challenge is to produce technology that everyone …”

Source: Business Insider

      
