Working to help detect AI-generated images

For a brief moment last month, an image purporting to show an explosion near the Pentagon spread on social media, causing panic and a market sell-off. The image, which bore all the hallmarks of being generated by AI, was later debunked by authorities.

But according to Jeffrey McGregor, the CEO of Truepic, it is “truly the tip of the iceberg of what’s to come.” As he put it, “We’re going to see a lot more AI-generated content start to surface on social media, and we’re just not prepared for it.”

McGregor’s company is working to address this problem. Truepic offers technology that it says authenticates media at the point of creation through its Truepic Lens. The application captures data including the date, time, location and device used to make the image, and applies a digital signature to verify whether the image is organic or whether it has been manipulated or generated by AI.
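Truepic has not published the internals of Lens, but the underlying idea – cryptographically binding capture metadata to the exact image bytes – can be sketched briefly. Below is a minimal sketch in Python, assuming an Ed25519 device key; the field names and helper functions are hypothetical illustrations, not Truepic’s actual API.

```python
# Minimal sketch of point-of-capture signing, in the spirit of tools like
# Truepic Lens. Hypothetical fields and helpers; not Truepic's actual API.
import json
import hashlib
from datetime import datetime, timezone

from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey
from cryptography.exceptions import InvalidSignature

# In practice the private key would live in the device's secure hardware;
# generating it in software here keeps the sketch self-contained.
device_key = Ed25519PrivateKey.generate()

def sign_capture(image_bytes: bytes, location: str, device: str) -> dict:
    """Bind capture metadata to the image bytes and sign the bundle."""
    record = {
        "sha256": hashlib.sha256(image_bytes).hexdigest(),
        "captured_at": datetime.now(timezone.utc).isoformat(),
        "location": location,
        "device": device,
    }
    payload = json.dumps(record, sort_keys=True).encode()
    record["signature"] = device_key.sign(payload).hex()
    return record

def verify_capture(image_bytes: bytes, record: dict) -> bool:
    """Re-hash the image and check the device's signature over the metadata."""
    claimed = {k: v for k, v in record.items() if k != "signature"}
    if hashlib.sha256(image_bytes).hexdigest() != claimed["sha256"]:
        return False  # pixels were altered after capture
    payload = json.dumps(claimed, sort_keys=True).encode()
    try:
        device_key.public_key().verify(bytes.fromhex(record["signature"]), payload)
        return True
    except InvalidSignature:
        return False

photo = b"...raw image bytes..."
record = sign_capture(photo, location="38.87,-77.05", device="Pixel 7")
assert verify_capture(photo, record)             # untouched image passes
assert not verify_capture(photo + b"x", record)  # any edit breaks the chain
```

Because the hash of the pixels sits inside the signed record, any edit made after capture invalidates the signature, which is what lets a verifier distinguish an organic image from a manipulated one.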

Truepic, which is backed by Microsoft, was founded in 2015, years before the launch of AI-powered image generation tools like Dall-E and Midjourney. Now McGregor says the company is seeing interest from “anyone that is making a decision based off of a photo,” from NGOs to media companies to insurance firms looking to confirm a claim is legitimate.

Some lawmakers are now calling for tech companies to address the problem. Vera Jourova, vice president of the European Commission, on Monday called for signatories of the EU Code of Practice on Disinformation – a list that includes Google, Meta, Microsoft and TikTok – to “put in place technology to recognize such content and clearly label this to users.”

A growing number of startups and Big Tech companies, including some that are deploying generative AI technology in their products, are trying to implement standards and solutions to help people determine whether an image or video is made with AI. Some of these companies bear names like Reality Defender, which speak to the potential stakes of the effort: protecting our very sense of what’s real and what’s not.

But as AI technology develops faster than humans can keep up, it’s unclear whether these technical solutions will be able to fully address the problem. Even OpenAI, the company behind Dall-E and ChatGPT, admitted earlier this year that its own effort to help detect AI-generated writing, rather than images, is “imperfect,” and warned it should be “taken with a grain of salt.”

“This is about mitigation, not elimination,” Hany Farid, a digital forensic expert and professor at the University of California, Berkeley, told CNN. “I don’t think it’s a lost cause, but I do think that there’s a lot that has to get done.”

“The hope,” Farid said, is to get to a point where “some teenager in his parents’ basement can’t create an image and swing an election or move the market half a trillion dollars.”

A preventative approach

In a different, preventative approach, some larger tech companies are working to integrate a kind of watermark into images to certify media as real or AI-generated when they’re first created. The effort has so far largely been driven by the Coalition for Content Provenance and Authenticity, or C2PA.

The C2PA was founded in 2021 to create a technical standard that certifies the source and history of digital media. It combines efforts by the Adobe-led Content Authenticity Initiative (CAI) and Project Origin, a Microsoft- and BBC-spearheaded initiative that focuses on combating disinformation in digital news. Other companies involved in C2PA include Truepic, Intel and Sony.
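The full C2PA specification is extensive, but its core mechanism – a manifest of assertions about an asset’s origin, bound to the asset’s exact bytes by a hash – is simple to illustrate. The sketch below is loosely modeled on the public spec; the field names are simplified stand-ins rather than the real C2PA schema, and in the standard the manifest is itself cryptographically signed (much as in the capture sketch above) and embedded in the file.

```python
# Illustrative C2PA-style manifest: assertions about how an asset was made,
# bound to its bytes via a hash. Simplified field names, not the real
# C2PA schema; the standard also signs the manifest and embeds it in the file.
import json
import hashlib

def make_manifest(asset: bytes, generator: str | None = None) -> str:
    assertions = [
        {"label": "asset.hash", "sha256": hashlib.sha256(asset).hexdigest()},
        {"label": "actions", "actions": ["created"]},
    ]
    if generator:  # e.g. an AI tool declaring itself at creation time
        assertions.append({"label": "ai.generator", "tool": generator})
    return json.dumps({"assertions": assertions}, sort_keys=True)

def hash_matches(asset: bytes, manifest: str) -> bool:
    """Any edit to the asset after the manifest was issued is detectable."""
    recorded = next(a["sha256"] for a in json.loads(manifest)["assertions"]
                    if a["label"] == "asset.hash")
    return hashlib.sha256(asset).hexdigest() == recorded

image = b"...image bytes..."
manifest = make_manifest(image, generator="ExampleImageModel")
assert hash_matches(image, manifest)
assert not hash_matches(image + b"\x00", manifest)  # tampering detected
```

In the actual standard, each subsequent edit appends a new signed claim, so an asset carries an auditable history of how it was made and modified.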

Not just a private sector solution

While tech companies are trying to tackle concerns about AI-generated images and the integrity of digital media, experts in the field stress that these businesses will ultimately need to work with each other and with the government to address the problem.

“We’re going to need cooperation from the Twitters of the world and the Facebooks of the world so they start taking this stuff more seriously. Stop promoting the fake stuff and start promoting the real stuff,” said Farid. “There’s a regulatory part that we haven’t talked about. There’s an education part that we haven’t talked about.”

Parsons agreed. “This is not a single company or a single government or a single individual in academia who can make this possible,” he said. “We need everybody to participate.”

For now, however, tech companies continue to push more AI tools into the world.
