Meta says it will introduce technology that can detect and label images generated by other companies’ artificial intelligence (AI) tools.
It will be deployed on its platforms Facebook, Instagram and Threads.
Meta already labels AI images generated by its own systems. It says it hopes the new tech, which it is still building, will create “momentum” for the industry to tackle AI fakery.
But an AI expert told the BBC such tools are “easily evadable”.
In a blog post written by senior executive Sir Nick Clegg, Meta says it intends to expand its labelling of AI fakes “in the coming months”.
In an interview with the Reuters news agency, he conceded the technology was “not yet fully mature” but said the company wanted to “create a sense of momentum and incentive for the rest of the industry to follow”.
But Prof Soheil Feizi, director of the Reliable AI Lab at the University of Maryland, suggested such a system could be easy to get around.
“They may be able to train their detector to be able to flag some images specifically generated by some specific models,” he told the BBC.
“But those detectors can be easily evaded by some lightweight processing on top of the images, and they also can have a high rate of false positives.
“So I don’t think that it’s possible for a broad range of applications.”
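The “lightweight processing” Prof Feizi refers to typically means small, visually imperceptible changes such as adding faint noise or re-compressing an image. As a purely illustrative sketch – not Meta’s or Prof Feizi’s code, with file names, noise level and JPEG quality chosen as assumptions – such a transformation might look like this:

```python
# Hypothetical sketch only: one example of the kind of "lightweight processing"
# described above - faint noise plus re-encoding - which leaves an image looking
# unchanged to a person but can alter the signals a detector relies on.
import numpy as np
from PIL import Image

def lightly_process(in_path: str, out_path: str, noise_std: float = 2.0) -> None:
    """Add imperceptible Gaussian noise and re-save as JPEG (assumed parameters)."""
    pixels = np.asarray(Image.open(in_path).convert("RGB"), dtype=np.float32)
    noisy = pixels + np.random.normal(0.0, noise_std, pixels.shape)  # tiny pixel jitter
    noisy = np.clip(noisy, 0, 255).astype(np.uint8)
    Image.fromarray(noisy).save(out_path, format="JPEG", quality=90)  # re-compression

# Placeholder file names:
# lightly_process("ai_generated.png", "processed.jpg")
```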
Meta has acknowledged its tool will not work for audio and video – despite these being the media on which much of the concern about AI fakes is focused.