Users of Facebook and Instagram will soon see labels on AI-generated photographs that appear in their feeds, part of a broader tech industry effort to distinguish between what is real and what is fake.
On Tuesday, Meta announced that it is developing technical standards with industry partners to make it easier to identify images, and eventually audio and video, produced by artificial intelligence systems.
How well this will work remains to be seen, at a time when it is easier than ever to create and spread AI-generated imagery that can cause harm, from nonconsensual fake nudes of celebrities to election propaganda.
“It’s kind of a signal that they’re taking seriously the fact that generation of fake content online is an issue for their platforms,” said Gili Vidan, an assistant professor of information science at Cornell University. The approach probably won’t catch everything, she said, but it could be “quite effective” at flagging a large share of AI-generated content made with commercial tools.
Nick Clegg, Meta’s president of global affairs, said Tuesday that the labels will roll out in multiple languages “in the coming months,” without giving a specific date. “A number of important elections are taking place around the world,” he wrote in a blog post. “As the difference between human and synthetic content gets blurred, people want to know where the boundary lies.”
Meta already labels photorealistic images created with its own AI tool as “Imagined with AI,” but most of the AI-generated content flooding its social media platforms comes from elsewhere.