May 16, 2024, in technology

Generative AI Raises the Stakes for Fake News Content

It has never been easier to create convincing-looking news content: text, photos, and now voices are all increasingly easy to counterfeit. The easy availability of generative AI tools has given new urgency to the fight against misinformation.

Fake news. Deepfakes. The number of “fakes” that media consumers need to beware of is proliferating, and generative AI is making it worse. Now it’s not just consumers who need to keep an eye out for fake news but journalists themselves. As AI tools multiply and improve, it becomes easier for the average reader, and even for journalists, to be tricked by a fake photo, especially in breaking news situations.

Fortunately, as well as empowering the creators of fake news, AI is also beginning to provide both readers and journalists with tools to unmask these fakes.

We look at the future of misinformation in the AI era.

Photo fakes and voice scams

The Future Institute’s “2024 Tech Trends Report” delved into how generative AI tools are transforming the way media organizations approach news. For instance, the report calls attention to “photos” that popped up on the internet after former U.S. President Donald Trump was indicted. The images showed him being taken into custody by the police, but they looked all too real. “They were generated using Midjourney, a generative AI tool,” the report says. “While the images were quickly debunked, they showed how quickly generated media could spread even with obvious imperfections in the image.”

It’s not just images that consumers and media professionals need to be on the lookout for. AI can create audio content that imitates the voices of real people, and in Indonesia, it has even been used to “resurrect” a former leader to deliver a video endorsement. And it’s not just bad actors using AI. New York City Mayor Eric Adams used the technology to make robocalls to voters in their native languages.

Additionally, the rise of news content created entirely by AI means there may be more content circulating that, at best, is not fact-checked by a skilled editor and, at worst, is churned out rapidly with the intention of misleading people. Either way, it can lead to increasing skepticism and distrust of the media.

Slicker, more credible and more dangerous

Walid Saad, an engineering and machine learning expert at Virginia Tech, told Virginia Tech News, “With the advent of AI, it became easier to sift through large amounts of information and create ‘believable’ stories and articles. Specifically, LLMs made it more accessible for bad actors to generate what appears to be accurate information. This AI-assisted refinement of how the information is presented makes such fake sites more dangerous.”

The potential implications of the proliferation of fake news are clear. We are already seeing so-called “pink slime” sites pop up to promote partisan perspectives as local news, some with ties to Russia, and these kinds of efforts can grow exponentially with the help of AI-generated content. Meanwhile, images and videos can spread like wildfire across social media before being exposed as fakes.

Luckily, AI technologies also have the power to help users fight back.

Certifying content authenticity

If AI can create content, it can also detect the work of its fellow AIs, and many experts are finding new ways to root out fakes before they gain traction. As the Future Institute’s report points out, “The Content Authenticity Initiative [CAI] is one cross-disciplinary collaboration focusing on addressing misinformation and content authenticity at scale.” The CAI project is led by Adobe, but its members also include The New York Times, Axel Springer, and The Associated Press.

“One of CAI’s key initiatives was launching a metadata standard called ‘content credentials’ for tracking the ways that images can be edited, manipulated, and enhanced using artificial intelligence,” reports the Future Institute. It takes a four-pronged approach to verifying content:

  • Creation — At this step, content creators like journalists can use “cryptographic asset hashing to provide verifiable, tamper-evident signatures that the image and metadata hasn’t been unknowingly altered.” (A simplified sketch of this idea appears after the list.)
  • Editing — Tools like Photoshop allow secure capture metadata to “be preserved and amended with history data of any alterations to content.”
  • Publishing and sharing — News organizations can integrate the standard with their CMS to ensure that secure capture information and relevant content edits are preserved during publishing. Even when the content is shared on social networks, “the product flow will preserve CAI metadata.”
  • Viewing — Consumers can view historical information about content with CAI metadata through the Verify site.
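
The full CAI/C2PA specification is more elaborate, but the core idea behind “cryptographic asset hashing” can be sketched in a few lines. The Python snippet below is an illustration only, not the actual standard (which uses public-key signatures and a richer manifest): it binds an image and its capture metadata to a single hash plus a signature, so that any later change to either the pixels or the metadata breaks verification. The key name and metadata fields are invented for the example.

```python
# Illustrative sketch only: a simplified stand-in for the kind of
# "cryptographic asset hashing" described above, not the CAI/C2PA spec itself.
import hashlib
import hmac
import json

SECRET_KEY = b"newsroom-signing-key"  # hypothetical key; the real standard uses public-key certificates


def sign_asset(image_bytes: bytes, metadata: dict) -> dict:
    """Bind image bytes and capture metadata together with a tamper-evident signature."""
    payload = image_bytes + json.dumps(metadata, sort_keys=True).encode()
    digest = hashlib.sha256(payload).hexdigest()
    signature = hmac.new(SECRET_KEY, digest.encode(), hashlib.sha256).hexdigest()
    return {"metadata": metadata, "digest": digest, "signature": signature}


def verify_asset(image_bytes: bytes, credentials: dict) -> bool:
    """Recompute the hash and check that the signature still matches."""
    payload = image_bytes + json.dumps(credentials["metadata"], sort_keys=True).encode()
    digest = hashlib.sha256(payload).hexdigest()
    expected = hmac.new(SECRET_KEY, digest.encode(), hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, credentials["signature"])


if __name__ == "__main__":
    photo = b"...raw image bytes..."
    creds = sign_asset(photo, {"camera": "XYZ-1", "captured": "2024-05-16T09:00:00Z"})
    print(verify_asset(photo, creds))            # True: image and metadata untouched
    print(verify_asset(photo + b"edit", creds))  # False: image altered after signing
```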

This effort could have knock-on effects as AI becomes more widely available and is deployed to help with content moderation, even for smaller players in the media landscape who may not have been able to build their own tools.

As the Integrity Institute points out, “AI has long been used for content moderation by social media platforms, who mostly develop their own AI models internally; smaller companies and startups might leverage third-party vendors to detect unwanted content like spam and abuse. Models designed to detect this content might use metadata such as account and post details, the text of the post alone, or a combination of the two.”
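
As a rough illustration of that distinction, and emphatically not any platform’s real system, the sketch below scores a post using its text alone, its account metadata alone, and a weighted blend of the two. A production model would learn these signals from labeled training data; the phrases, thresholds, and weights here are invented for the example.

```python
# Toy moderation scorer: text-only, metadata-only, and combined signals.
SPAM_PHRASES = {"click here", "free money", "act now", "miracle cure"}  # invented examples


def text_score(text: str) -> float:
    """Crude text-only signal: how many known spammy phrases appear."""
    text = text.lower()
    hits = sum(1 for phrase in SPAM_PHRASES if phrase in text)
    return min(hits / 2, 1.0)


def metadata_score(account_age_days: int, posts_last_hour: int) -> float:
    """Crude metadata-only signal: new, high-volume accounts look riskier."""
    age_risk = 1.0 if account_age_days < 7 else 0.2
    rate_risk = min(posts_last_hour / 20, 1.0)
    return 0.5 * age_risk + 0.5 * rate_risk


def combined_score(text: str, account_age_days: int, posts_last_hour: int) -> float:
    """Blend both signals; a real model would learn these weights from data."""
    return 0.6 * text_score(text) + 0.4 * metadata_score(account_age_days, posts_last_hour)


if __name__ == "__main__":
    print(combined_score("Free money!! Click here now", account_age_days=2, posts_last_hour=30))
    print(combined_score("City council meets tonight to vote on the budget", 900, 1))
```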

Curating content and building trust

These tools are not a panacea, however. As PEN America warns, “Existing AI-detection tools have failed to consistently and effectively identify real images vs. AI-generated images.”

If there’s one thing we can be sure of, it’s that humans will continue to find new ways to use AI — for good and for ill — and it is incumbent upon content creators and consumers to stay vigilant. Ultimately, the best solution may be the simplest: journalists must employ their skills to fact-check and accurately source content, and news organizations must build trust with their audiences to become the go-to source for the truth.
