Want to spot a deepfake? Look for the stars in their eyes, says a team of astronomers at the University of Hull. They suggest that AI-generated fakes can be spotted by analyzing human eyes in the same way that they study pictures of galaxies: if the reflections in a person’s eyeballs match, the image is likely to show a real human; if they don’t, the image is probably a deepfake.

In this image, the person on the left (Scarlett Johansson) is real, while the person on the right is AI-generated. Their eyeballs are depicted underneath their faces. The reflections in the eyeballs are consistent for the real person, but incorrect (from a physics point of view) for the fake person. Image credit: Adejumoke Owolabi / CC BY 4.0.

“The reflections in the eyeballs are consistent for the real person, but incorrect (from a physics point of view) for the fake person,” said University of Hull’s Professor Kevin Pimbblet.

Professor Pimbblet and his colleagues analyzed reflections of light on the eyeballs of people in real and AI-generated images.

They then employed methods typically used in astronomy to quantify the reflections and checked for consistency between left and right eyeball reflections.

Fake images often lack consistency in the reflections between each eye, whereas real images generally show the same reflections in both eyes.

“To measure the shapes of galaxies, we analyze whether they’re centrally compact, whether they’re symmetric, and how smooth they are. We analyze the light distribution,” Professor Pimbblet said.

“We detect the reflections in an automated way and run their morphological features through the CAS (concentration, asymmetry, smoothness) and Gini indices to compare similarity between left and right eyeballs.”

“The findings show that deepfakes have some differences between the pair.”
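In outline, that procedure can be sketched in code. The snippet below is an illustrative Python example, not the team’s own software: the `eye_asymmetry` helper, its simplified rotational-asymmetry formula, and the decision threshold are assumptions introduced here, standing in for the CAS-style morphological features and the left/right similarity comparison described in the quote.

```python
import numpy as np

def eye_asymmetry(eye_crop):
    """Simplified CAS-style asymmetry of an eyeball reflection crop.

    Compares a grayscale 2D crop with its 180-degree rotation;
    0 means a perfectly symmetric light distribution.
    (Illustrative stand-in for the features used in the study.)
    """
    img = np.asarray(eye_crop, dtype=float)
    rotated = np.rot90(img, 2)                      # 180-degree rotation
    total = np.abs(img).sum()
    return np.abs(img - rotated).sum() / (2 * total) if total > 0 else 0.0

def looks_like_deepfake(left_eye, right_eye, threshold=0.15):
    """Flag an image when the two eyes' reflection morphologies disagree.

    The threshold is a made-up illustrative value, not one from the paper.
    """
    return abs(eye_asymmetry(left_eye) - eye_asymmetry(right_eye)) > threshold
```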

The Gini coefficient is normally used to measure how the light in an image of a galaxy is distributed among its pixels.

This measurement is made by ordering the pixels that make up the image of a galaxy in ascending order by flux and then comparing the result to what would be expected from a perfectly even flux distribution.

A Gini value of 0 corresponds to a galaxy whose light is evenly distributed across all of the image’s pixels, while a value of 1 corresponds to a galaxy with all of its light concentrated in a single pixel.
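As a rough illustration of that definition, a Gini index over pixel fluxes can be computed as below. This is a generic sketch of the standard galaxy-morphology formula applied to an eyeball crop, not code released by the researchers; in their approach, a value like this would be computed from the reflections in each eye and the two results compared.

```python
import numpy as np

def gini(pixel_fluxes):
    """Gini coefficient of a pixel-flux distribution.

    Pixels are sorted in ascending order of flux and compared with a
    perfectly even distribution: 0 = light spread evenly over all pixels,
    1 = all light concentrated in a single pixel.
    """
    x = np.sort(np.abs(np.asarray(pixel_fluxes, dtype=float).ravel()))
    n = x.size
    if n < 2 or x.sum() == 0:
        return 0.0
    ranks = np.arange(1, n + 1)
    return np.sum((2 * ranks - n - 1) * x) / (x.mean() * n * (n - 1))
```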

The team also tested CAS parameters, a tool originally developed in astronomy to measure the light distribution of galaxies and determine their morphology, but found they were not a successful predictor of fake eyes.

“It’s important to note that this is not a silver bullet for detecting fake images,” Professor Pimbblet said.

“There are false positives and false negatives; it’s not going to get everything.”

“But this method provides us with a basis, a plan of attack, in the arms race to detect deepfakes.”

The researchers presented their work July 15 at the Royal Astronomical Society’s 2024 National Astronomy Meeting (NAM 2024) at the University of Hull.

_____

Kevin Pimbblet et al. Detection of Deepfakes using Astronomy Techniques. NAM 2024