We have a new method for detecting deepfakes, one based on how we measure galaxies

The main photo of Scarlett Johansson is real. The one on the right is not: it is an AI-generated deepfake, and although it is remarkably well done, it was possible to detect that it had been created by a machine. The method behind that detection is especially curious.

The Secret in Their Eyes. Researchers at the University of Hull have developed a novel method for detecting images created by generative AI models. The key, they say, is in the eyes of the people in these images, and specifically in the way they reflect light.

An astronomical technique. The method for detecting these fake images is, surprisingly, based on tools astronomers use to study galaxies. In this case, those techniques make it possible to analyze the consistency of the light reflected in the eyeballs.

Failure. According to the study, led by student Adejumoke Owolabi and supervised by astrophysics professor Dr. Kevin Pimbblet, studying light reflections in the eyes helps detect deepfakes. In a real photo, both eyes reflect the same light sources in a consistent way, but AI-generated images do not account for this, and there is often an inconsistency between the reflections in each eye.

Spot the difference. Although in many cases the differences in the light reflected in each eye are easy to see with the naked eye, astronomical techniques help find and quantify these inconsistencies. Owolabi developed a technique to detect them automatically, analyzing the morphological characteristics of the reflections and using indices to compare the similarity between the left and right eyeballs.

Gini coefficient. The tool makes use of the so-called Gini coefficient, which has traditionally been used to measure how light is distributed in images of galaxies, and which makes it possible to assess the uniformity of the reflections. In those studies, as Pimbblet explained, the shapes of galaxies are measured: how compact and symmetrical they are, and how their light is distributed.
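To make the idea concrete, here is a minimal sketch in Python of how a Gini coefficient could be computed over the pixel intensities of each eye's reflection and then compared. This is an illustration of the statistic, not the study's actual pipeline: the sample patches and the 0.2 threshold are invented for the example.

```python
import numpy as np

def gini(values):
    """Gini coefficient of non-negative pixel intensities.

    0 means the light is spread perfectly evenly across the pixels;
    1 means all the light is concentrated in a single pixel.
    """
    x = np.sort(np.asarray(values, dtype=float).ravel())
    n = x.size
    if n < 2 or x.sum() == 0:
        return 0.0
    i = np.arange(1, n + 1)  # ranks of the sorted intensities
    return float(np.sum((2 * i - n - 1) * x) / (x.mean() * n * (n - 1)))

# Hypothetical intensity patches cropped from the reflection in each eye.
left_eye = np.array([0.1, 0.1, 0.9, 0.1, 0.1, 0.1])   # one bright highlight
right_eye = np.array([0.3, 0.3, 0.3, 0.3, 0.3, 0.3])  # flat, diffuse glow

# A real photo should yield similar coefficients for both eyes; a large
# gap is a possible deepfake cue. The 0.2 threshold is purely illustrative.
inconsistent = abs(gini(left_eye) - gini(right_eye)) > 0.2
```

With these patches, the left eye's concentrated highlight gives a much higher Gini value than the right eye's uniform glow, so the comparison flags the pair as inconsistent.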

Useful… for now. The tool seems useful, and it joins other techniques that have emerged in recent months to help us detect deepfakes. The problem is that once it becomes known that generative AI models have this flaw, their creators will modify them to correct it, making these deepfakes even harder to detect.

Watermarks as an alternative. Given those limits, the most promising option at the moment seems to be invisible watermarks that identify AI-generated images as such. There are several initiatives moving in this direction, and it remains to be seen whether they end up becoming the norm.

Image | Adejumoke Owolabi

At Xataka | AI has advanced so much that the problem is not just deepfakes. It’s that we distrust even real photos