
Meta moves against scams, to label AI-generated content on Facebook, Instagram

Social media platform Meta is working to detect and label AI-generated images on Facebook, Instagram and Threads as the company pushes to call out “people and organisations that actively want to deceive people.”

Nick Clegg, Meta's president of global affairs, said in a blog post that the social media giant will begin labeling AI-generated images on Facebook, Instagram and Threads "in the coming months" as it works with "industry partners" to develop common detection standards for AI content.

Meta, whose platforms include Facebook, Instagram, Threads and WhatsApp, already applies "Imagined with AI" labels to images created using its Meta AI feature, but Clegg said the company also wants to be able to label content created by other companies.

Meta, whose Facebook platform currently has an advertising audience of 36.2 million in Nigeria, said in a statement that the move is intended to differentiate between human and AI-generated content and to promote transparency on its platforms. Clegg said Meta already applies several measures when photorealistic images are created using its AI feature.

According to him, these measures include visible markers on the images, as well as invisible watermarks and metadata embedded within the image files.

The Meta global affairs president said the company is introducing the labels because people need to know where the boundary between human and AI-generated content lies.

“As the difference between human and synthetic content gets blurred, people want to know where the boundary lies. People are often coming across AI-generated content for the first time and our users have told us they appreciate transparency around this new technology.


“So, it’s important that we help people know when photorealistic content they’re seeing has been created using AI. We do that by applying ‘Imagined with AI’ labels to photorealistic images created using our Meta AI feature, but we want to be able to do this with content created with other companies’ tools too.

“That’s why we’ve been working with industry partners to align on common technical standards that signal when a piece of content has been created using AI. Being able to detect these signals will make it possible for us to label AI-generated images that users post to Facebook, Instagram, and Threads.

“We’re building this capability now, and in the coming months, we’ll start applying labels in all languages supported by each app. We’re taking this approach through the next year, during which several important elections are taking place around the world. During this time, we expect to learn much more about how people are creating and sharing AI content, what sort of transparency people find most valuable, and how these technologies evolve. What we learn will inform industry best practices and our approach going forward,” he said.

Clegg said Meta is also building industry-leading tools that can identify invisible markers at scale – specifically, the “AI generated” information in the C2PA and IPTC technical standards. He said this would enable the company to label images from Google, OpenAI, Microsoft, Adobe, Midjourney, and Shutterstock as those companies implement their plans for adding metadata to images created by their tools.
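Meta has not published its detection tooling, but the IPTC vocabulary Clegg refers to is public: images produced by generative AI are tagged in their embedded metadata with the "digital source type" value trainedAlgorithmicMedia. The Python sketch below is only an illustration of that idea, not Meta's system; the file name is hypothetical, and a real pipeline would parse and verify the full C2PA manifest rather than scan raw bytes.

```python
# Minimal illustration (not Meta's tooling): look for the IPTC NewsCodes
# "digital source type" URI that the C2PA/IPTC standards use to mark
# images created by generative AI.
from pathlib import Path

# IPTC NewsCodes value for media produced by a trained algorithm (generative AI).
AI_SOURCE_TYPE = b"http://cv.iptc.org/newscodes/digitalsourcetype/trainedAlgorithmicMedia"

def looks_ai_generated(image_path: str) -> bool:
    """Return True if the file's embedded metadata advertises an AI source type."""
    data = Path(image_path).read_bytes()
    # Crude substring check over the embedded XMP/IPTC block; a production
    # system would verify the signed C2PA manifest instead.
    return AI_SOURCE_TYPE in data

if __name__ == "__main__":
    print(looks_ai_generated("example.jpg"))  # hypothetical file name
```

A check like this only catches tools that write the standard metadata; invisible watermarks of the kind Meta describes require separate, model-specific decoders.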
