
Photoshop is the granddaddy of image-editing apps, the O.G. of our airbrushed, Facetuned media ecosystem and a product so enmeshed in the culture that it’s a verb, an adjective and a frequent lament of rappers. Photoshop is also widely used. More than 30 years after the first version was released, professional photographers, graphic designers and other visual artists the world over reach for the app to edit much of the imagery you see online, in print and on billboards, bus stops, posters, product packaging and anything else the light touches.
So what does it mean that Photoshop is diving into generative artificial intelligence — that a just-released beta feature called Generative Fill will allow you to photorealistically render just about any imagery you ask of it? (Subject, of course, to terms of service.)
Not just that, actually: So many AI image generators have been released over the past year or so that the idea of prompting a computer to create pictures already seems old hat. What’s novel about Photoshop’s new capabilities is that they make it easy to merge reality with digital artifice, and that they bring that power to a huge user base. The software allows anyone with a mouse, an imagination and $10 to $20 a month to subtly alter pictures without any expertise, and the results can look so real that they seem likely to erase most of the remaining barriers between the authentic and the fake.
The good news is that Adobe, the company that makes Photoshop, has considered the dangers and has been working on a plan to address the widespread dissemination of digitally manipulated pictures. The company has created what it describes as a “nutrition label” that can be embedded in image files to document how a picture was altered, including whether it has elements generated by artificial intelligence.
The plan, called the Content Authenticity Initiative, is meant to bolster the credibility of digital media. It won’t alert you to every image that’s fake but instead can help a creator or publisher prove that a certain image is true. In the future, you might see a snapshot of a car accident or terrorist attack or natural disaster on Twitter and dismiss it as fake unless it carries a content credential saying how it was created and edited.
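To make the idea concrete, here is a minimal sketch, in Python, of the kind of information such a label might carry. This is an illustration under loose assumptions, not Adobe’s actual format: real content credentials follow the open C2PA specification, which embeds a cryptographically signed manifest inside the image file itself, and every name and field below is hypothetical.

```python
import hashlib
import json
from datetime import datetime, timezone

def make_content_credential(image_bytes: bytes, actions: list[dict]) -> dict:
    """Build a simplified provenance manifest (a 'nutrition label') for an image.

    Conceptual sketch only: the real C2PA standard adds cryptographic
    signatures and embeds the manifest in the file. Here we just record
    the edit history plus a content hash, so any later change to the
    pixels would no longer match the recorded value.
    """
    return {
        "claim_generator": "example-editor/1.0",  # hypothetical tool name
        "created": datetime.now(timezone.utc).isoformat(),
        "actions": actions,  # the edit history, e.g. crops or AI fills
        "content_hash": hashlib.sha256(image_bytes).hexdigest(),
    }

# Usage: record that part of an image was produced by a generative model.
image_bytes = b"<raw image bytes would go here>"  # stand-in for real pixel data
credential = make_content_credential(
    image_bytes,
    actions=[
        {"action": "opened"},
        {"action": "generative_fill", "model": "example-model"},  # AI-edited region
    ],
)
print(json.dumps(credential, indent=2))
```

A verifier that trusted the label’s issuer could recompute the hash over the image it received and compare it with the recorded value; a mismatch would show the picture was changed after the credential was created.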
“Being able to prove what’s true is going to be essential for governments, for news agencies and for regular people,” Dana Rao, Adobe’s general counsel and chief trust officer, told me. “And if you get some important information that doesn’t have a content credential associated with it — when this becomes popularized — then you should have that skepticism.”
The key phrase there, though, is “when this becomes popularized.” Adobe’s plan requires industry and media buy-in to be useful, but the AI features in Photoshop are being released to the public well before the safety system has been widely adopted. I don’t blame the company — industry standards often aren’t embraced before an industry has matured, and AI content generation remains in the early stages.
Source: The Mercury News – Entertainment