Computer scientists at the University at Buffalo have developed a tool that automatically identifies deepfakes by analyzing reflections of light in the eyes.
The tool proved 94 percent effective with portrait-like photos in experiments described in a paper accepted at the IEEE International Conference on Acoustics, Speech and Signal Processing, held in June in Toronto.
“The cornea is almost a perfect sphere and it is highly reflective, so anything that comes to the eye, illuminated by light from these sources, will have an image on the cornea,” says Siwei Lyu, SUNY Empire Innovation Professor in the Department of Computer Science and Engineering and the paper’s lead author.
“The two eyes should have very similar reflective patterns because they see the same thing, which is something we don’t usually notice when we look at a face,” added Lyu, a multimedia and digital forensics expert.
However, most AI-generated imagery, including images produced by generative adversarial networks (GANs), fails to render these reflections accurately or consistently.
The new tool exploits this shortcoming by detecting small discrepancies in the light reflected in the eyes of fake images.
To conduct the experiments, the research team obtained real photos, along with fake photos from a website that serves lifelike faces generated by artificial intelligence.
All of the photos were portrait-style images of real and fake people looking directly at the camera under good lighting, at a resolution of 1024 x 1024 pixels.
The tool works by mapping each face, cropping the eyes, locating each eyeball, and then analyzing the light reflected in each one, comparing the two reflections for differences in shape, light intensity, and other features.
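The comparison step can be illustrated with a minimal sketch. This is not the authors' published code; it assumes that the two eye regions have already been cropped and converted to grayscale, and it uses a simple bright-pixel threshold and an intersection-over-union (IoU) score as a stand-in for the paper's similarity measure.

```python
# Illustrative sketch only: compare specular highlights in two eye crops.
# Assumes eye patches are grayscale numpy arrays with values in [0, 1].
import numpy as np

def highlight_mask(eye_patch: np.ndarray, thresh: float = 0.8) -> np.ndarray:
    """Keep only the brightest pixels, approximating the corneal highlight."""
    return eye_patch >= thresh

def reflection_similarity(left_eye: np.ndarray, right_eye: np.ndarray) -> float:
    """Intersection-over-union of the two highlight masks.
    Real portraits should score high; mismatched reflections score low."""
    left, right = highlight_mask(left_eye), highlight_mask(right_eye)
    inter = np.logical_and(left, right).sum()
    union = np.logical_or(left, right).sum()
    return inter / union if union else 0.0

def looks_fake(left_eye: np.ndarray, right_eye: np.ndarray,
               cutoff: float = 0.5) -> bool:
    """Flag the image when the two eyes' reflections disagree too much."""
    return reflection_similarity(left_eye, right_eye) < cutoff

# Toy demo: identical highlights vs. a displaced highlight.
eye = np.zeros((32, 32)); eye[10:14, 10:14] = 1.0          # consistent
shifted = np.zeros((32, 32)); shifted[20:24, 20:24] = 1.0  # mismatched
print(looks_fake(eye, eye.copy()))  # -> False (reflections agree)
print(looks_fake(eye, shifted))     # -> True  (reflections disagree)
```

In practice the eye crops would come from a face-landmark detector, and the real system compares richer features than a single brightness threshold, but the overall logic, aligning the two corneal reflections and scoring their agreement, is the same.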
While the technique is promising, it has limitations: it requires a reflected source of light, and mismatched reflections can be manually corrected in the eyes when an image is edited.
In addition, the technique considers only the individual pixels reflected in the eyes, not the shape of each eye, the shapes within the eyes, or the nature of what is reflected in them.
Finally, because the technique compares the reflections within both eyes, it fails if the subject is missing an eye or if an eye is not visible.
Lyu, who has researched machine learning and computer vision projects for more than 20 years, previously demonstrated that deepfake videos tend to have inconsistent or nonexistent blink rates for their subjects.
In 2020, Lyu helped Facebook with its global deepfake detection challenge, and he also helped create the Deepfake-o-meter, an online resource that helps the average person test whether a video they have watched is a deepfake.
Deepfakes are being used for a host of nefarious purposes, from disinformation campaigns to inserting people into pornography without their consent, and they are becoming harder and harder to detect.