This system can sort real pictures from AI fakes — why aren’t platforms using it?

Image: Cath Virginia / The Verge, Chris Strider

As the US presidential election approaches, the web has been filled with photos of Donald Trump and Kamala Harris: spectacularly well-timed photos of an attempted assassination; utterly mundane photos of rally crowds; and shockingly out-of-character photos of the candidates burning flags and holding guns. Some of these things didn’t actually happen, of course. But generative AI imaging tools are now so adept and accessible that we can’t really trust our eyes anymore.

Some of the biggest names in digital media have been working to sort out this mess, and their solution so far is more data — specifically, metadata that attaches to a photo and tells you what’s real, what’s fake, and how that fakery happened. One of the best-known systems for this, C2PA authentication, already has the backing of companies like Microsoft, Adobe, Arm, OpenAI, Intel, Truepic, and Google. The technical standard provides key information about where images originate, letting viewers identify whether they’ve been manipulated.

“Provenance technologies like Content Credentials — which act like a nutrition label for digital content — offer a promising solution by enabling official event photos and other content to carry verifiable metadata like date and time, or if needed, signal whether or not AI was used,” Andy Parsons, a steering committee member of C2PA and senior director for CAI at Adobe, told The Verge. “This level of transparency can help dispel doubt, particularly during breaking news and election cycles.”

But if all the information needed to authenticate images can already be embedded in the files, where is it? And why aren’t we seeing some kind of “verified” mark when the photos are published online?

The problem is interoperability. There are still huge gaps in how this system is being implemented, and it’s taking years to get all the necessary players to adopt it. And if we can’t get everyone on board, the initiative might be doomed to fail.

The Coalition for Content Provenance and Authenticity (C2PA) is one of the largest groups trying to address this chaos, alongside the Content Authenticity Initiative (CAI) that Adobe kicked off in 2019. The technical standard they’ve developed uses cryptographic digital signatures to verify the authenticity of digital media, and the spec itself is already finished. But that progress is still frustratingly inaccessible to the everyday folks who stumble across questionable images online.
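To make the mechanism concrete, here’s a minimal Python sketch of the underlying idea, using the third-party cryptography package: capture details and a hash of the image go into a manifest, which is then signed, so any later change to the image or its metadata breaks the signature. The device name and field names are invented for the example; real C2PA manifests use a richer format and certificate chains rather than a bare key pair.

```python
import hashlib
import json

from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

# Stand-in for the raw image data a camera would produce.
image_bytes = b"...raw image bytes..."

# A simplified provenance manifest: capture details plus a hash of the image.
manifest = {
    "claim_generator": "ExampleCamera/1.0",      # hypothetical device name
    "captured_at": "2024-07-13T18:11:00-04:00",  # hypothetical capture time
    "image_sha256": hashlib.sha256(image_bytes).hexdigest(),
}

# The camera (or an editor) signs the manifest with its private key.
private_key = Ed25519PrivateKey.generate()
payload = json.dumps(manifest, sort_keys=True).encode()
signature = private_key.sign(payload)

# Anyone with the matching public key can later confirm that neither the
# manifest nor the image hash it contains has been altered.
public_key = private_key.public_key()
try:
    public_key.verify(signature, payload)
    print("Manifest signature checks out")
except InvalidSignature:
    print("Manifest or image has been tampered with")
```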

“It’s important to realize that we’re still in the early stage of adoption,” said Parsons. “The spec is locked. It’s robust. It’s been looked at by security professionals. The implementations are few and far between, but that’s just the natural course of getting standards adopted.”

The problems start at the source of the images: the camera. Some camera brands, like Sony and Leica, already embed cryptographic digital signatures based on C2PA’s open technical standard — which records information like the camera settings and the date and location of the shot — into photographs the moment they’re taken.

Only a handful of cameras currently support this, whether in new models like the Leica M11-P or via firmware updates for existing models like Sony’s Alpha 1, Alpha 7S III, and Alpha 7 IV. While other brands like Nikon and Canon have also pledged to adopt the C2PA standard, most have yet to meaningfully do so. Smartphones, the most accessible cameras for most people, are also lacking. Neither Apple nor Google responded to our inquiries about building C2PA support or a similar standard into iPhone or Android devices.

If the camera itself doesn’t record this data, provenance information can still be attached during the editing process. Software like Adobe’s Photoshop and Lightroom, two of the most widely used image editing apps in the photography industry, can automatically embed this data in the form of C2PA-supported Content Credentials, which note how and when an image has been altered. That includes any use of generative AI tools, which could help identify images that have been deceptively doctored.
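Conceptually, each edit appends another entry to the credential’s action history, and the result is re-signed, which is how a label could distinguish a simple crop from a generative fill. The sketch below is a deliberately simplified, hypothetical structure in Python, not Adobe’s or the C2PA’s actual assertion schema.

```python
# Hypothetical, simplified edit history. Real Content Credentials record edits
# as "actions" assertions inside a signed C2PA manifest, not this exact shape.
credential = {
    "actions": [
        {"action": "captured", "when": "2024-07-13T18:11:00-04:00"},
    ]
}

def record_edit(credential: dict, action: str, tool: str, generative_ai: bool) -> None:
    """Append one edit step to the credential's action history."""
    credential["actions"].append(
        {"action": action, "tool": tool, "generative_ai": generative_ai}
    )

record_edit(credential, "cropped", "ExampleEditor 1.0", generative_ai=False)
record_edit(credential, "inpainted background", "ExampleEditor 1.0", generative_ai=True)

# A platform reading the credential could then flag the image only when a step
# involved generative AI, rather than treating every edit the same way.
print(any(step.get("generative_ai") for step in credential["actions"]))  # True
```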

But again, many applications, including Affinity Photo and GIMP, don’t support a unified, interoperable metadata solution that can help resolve authenticity issues. Some members of these software communities have expressed a desire for them to do so, which might bring more attention to the issue. The developers of the popular pro photo editing software Capture One told The Verge that it was “committed to supporting photographers” affected by AI and is “looking into traceability features like C2PA, amongst others.”

Even when a camera does support authenticity data, it doesn’t always make it to viewers. A C2PA-compliant Sony camera was used to take the now-iconic photo of Trump’s fist pump following the assassination attempt, as well as a photo that appears to capture the bullet itself in flight. But that metadata isn’t accessible to the general public, because the platforms where these images circulated, like X and Reddit, don’t display it when images are uploaded and published. Even media websites that back the standard, like The New York Times, don’t visibly flag verification credentials after they’ve used them to authenticate a photograph.

Part of that roadblock, besides getting platforms on board in the first place, is figuring out the best way to present that information to users. Facebook and Instagram are two of the largest platforms that check content for markers like the C2PA standard, but they only flag images that have been manipulated using generative AI tools — no information is presented to validate “real” images.

Unclear labels cause problems, too. Meta’s “Made with AI” labels angered photographers when they were applied so aggressively that they seemed to cover even minor retouching. The labels have since been updated to deemphasize the use of AI. And while Meta didn’t tell us whether it will expand this system, the company said it believes “widespread adoption of Content Credentials” is needed to establish trust.

Truepic, an authenticity infrastructure provider and another member of C2PA, says there’s enough information present in these digital markers to provide more detail than platforms currently offer. “The architecture is there, but we need to research the optimal way to display these visual indicators so that everyone on the internet can actually see them and use them to make better decisions without just saying something is either all generative AI or all authentic,” Truepic chief communications officer Mounir Ibrahim said to The Verge.

A cornerstone of this plan involves getting online platforms to adopt the standard. X, which has attracted regulatory scrutiny as a hotbed for spreading misinformation, isn’t a member of the C2PA initiative and seemingly offers no alternative. But X owner Elon Musk does appear willing to get behind it. “That sounds like a good idea, we should probably do it,” Musk said when pitched by Parsons at the 2023 AI Safety Summit. “Some way of authenticating would be good.”

Even if, by some miracle, we were to wake up tomorrow in a tech landscape where every platform, camera, and creative application supported the C2PA standard, denialism is a potent, pervasive, and potentially insurmountable obstacle. Providing people with documented, evidence-based information won’t help if they simply discount it. Misinformation doesn’t even need a factual basis, as seen in how readily Trump supporters believed accusations that Harris had faked her rally crowds, despite widespread evidence to the contrary. Some people will just believe what they want to believe.

But a cryptographic labeling system is likely the best approach we currently have for reliably identifying authentic, manipulated, and artificially generated content at scale. Alternative pattern-analysis methods, like online AI detection services, are notoriously unreliable. “Detection is probabilistic at best — we do not believe that you will get a detection mechanism where you can upload any image, video, or digital content and get 99.99 percent accuracy in real-time and at scale,” Ibrahim says. “And while watermarking can be robust and highly effective, in our view it isn’t interoperable.”

No system is perfect, though, and even more robust options like the C2PA standard can only do so much. Image metadata can be stripped simply by taking a screenshot, for example — something there’s currently no fix for — and the standard’s effectiveness is otherwise dictated by how many platforms and products support it.
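The screenshot problem comes down to where the credential lives: it’s embedded in the original file’s container (for JPEGs, typically in APP11 segments), while a screenshot writes a brand-new file containing only pixels. The toy check below illustrates that with a rough heuristic; it is not a real C2PA parser, and the sample byte strings are invented.

```python
def looks_like_it_carries_c2pa(data: bytes) -> bool:
    """Rough heuristic: is this a JPEG containing an APP11 (0xFF 0xEB) segment,
    the marker C2PA manifests are typically embedded in? Not a real parser."""
    return data.startswith(b"\xff\xd8") and b"\xff\xeb" in data

# Toy stand-ins: a signed original carrying an APP11 segment, and a screenshot
# of the same image, which is freshly rendered pixels with no provenance data.
signed_original = b"\xff\xd8" + b"\xff\xeb" + b"...JUMBF box with manifest..."
screenshot_copy = b"\xff\xd8" + b"...plain re-encoded image data..."

print(looks_like_it_carries_c2pa(signed_original))  # True
print(looks_like_it_carries_c2pa(screenshot_copy))  # False
```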

“None of it is a panacea,” Ibrahim says. “It will mitigate the downside risk, but bad actors will always be there using generative tools to try and deceive people.”

 

Source: https://www.theverge.com/2024/8/21/24223932/c2pa-standard-verify-ai-generated-images-content-credentials
