Meta/Facebook to Label “AI Content”

Transparency is the issue as the social media giant shines a spotlight on AI content.

Artificial intelligence, especially so-called AI assistants like OpenAI’s ChatGPT and Google’s Bard, is finding legions of new converts on a daily basis. AI users rave about the speed, efficiency, and engagement opportunities in the Gen AI marketplace.

One nagging issue with AI is starting to pop up in workplaces, newsrooms, and corporate boardrooms—mistrust of AI over the accuracy of its generated content. The AI trust issue is attracting more attention as the technology industry, in particular, begins covering it in surveys, studies, and white papers.

Exhibit “A” is Mozilla’s latest report, “In Transparency We Trust,” which tracks the history of AI content and its accuracy “challenges.” The report hammers home the AI content trust issue, calling for new public policy measures that would establish regulatory guardrails and urging technology companies that use AI to be more transparent about what is real and what is AI-generated content on their platforms.

From the study:

Human-facing (AI content) disclosure methods, such as visible labels and audible warnings, rely heavily on the recipient’s perception and motivation.

Their effectiveness is questioned, given the ease with which bad actors can bypass labeling requirements. In addition, they may not prevent or effectively address harm once it has occurred, especially in sensitive cases.

Our assessment points to a low fitness level for these methods due to their vulnerability to manipulation, constant technological change, and inability to address wider societal impacts. We highlight that while these methods aim to inform, they can lead to information overload, exacerbating public mistrust and societal divides.

This underlines the shortcomings of relying solely on transparency through human-facing disclosure, without accompanying measures to protect users from the complexities of navigating AI-generated content.

Meta Steps Up

In a nod to the pro-AI-transparency movement, Meta, owner of Facebook and Instagram, says it’s rolling out a new initiative to label AI-generated content on all of its social media platforms.

“We will begin labeling a wider range of video, audio, and image content as ‘Made with AI’ when we detect industry-standard AI image indicators or when people disclose that they’re uploading AI-generated content,” Monika Bickert, vice president of content policy, said in an April 5 note on the platform.

The company’s content policy team agreed with the Oversight Board’s argument that Meta’s “existing approach” is too narrow “since it only covers videos created or altered by AI to make a person appear to say something they didn’t say.”

Bickert also noted that Meta’s manipulated media policy was written in 2020, when realistic AI-generated content was rare and videos were the overarching concern.

“In the last four years, and particularly in the last year, people have developed other kinds of realistic AI-generated content like audio and photos, and this technology is quickly evolving,” she said. “As the Board noted, it’s equally important to address manipulation that shows a person doing something they didn’t do.”

As for the specific labels, Meta said it will monitor “a broader range of content” in addition to the “manipulated content” the company’s Oversight Board recommended covering.

“If we determine that digitally-created or altered images, video or audio create a particularly high risk of materially deceiving the public on a matter of importance, we may add a more prominent label so people have more information and context,” Bickert said. “This overall approach gives people more information about the content so they can better assess it and have context if they see the same content elsewhere.”

Meta says it will commence labeling AI-generated content in May. To date, it’s the first major social media platform to commit to labeling AI-generated content, a move that should satisfy transparency-minded regulators and platform users alike.
