July 16th 2025
We asked Gemini 2.5 Flash to use everything it knows (including the latest research and the common limitations of current generative AI) to tell us how to spot deepfakes that are too good for the human eye to detect.
Gemini had a three-second think and then said that the giveaways often lie in subtle, systemic inconsistencies in physiological and environmental details that betray a lack of genuine understanding of physics and human biology.
Here are its findings:
The reason these are often the “last bastions” of detection for advanced deepfakes is that generating them requires not just replicating pixels, but accurately simulating complex real-world physics, biological processes, and nuanced human behaviour – something current generative AI still finds challenging. Dedicated AI detection tools are trained to spot these specific, often microscopic, anomalies that are invisible to the naked eye.
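One concrete, research-backed example of the "microscopic anomalies" dedicated detectors look for is spectral artifacts: the upsampling layers in many generative models can leave periodic, grid-like peaks in an image's 2D frequency spectrum that natural photographs lack. The sketch below is illustrative only, not a production detector; the function names and the 0.25 radius threshold are our own assumptions.

```python
# Illustrative sketch: probe an image's frequency spectrum for the kind of
# high-frequency artifacts some generative models leave behind.
# NOT a deepfake detector - just a demonstration of the underlying idea.
import numpy as np

def log_spectrum(gray: np.ndarray) -> np.ndarray:
    """Centred log-magnitude spectrum of a 2D grayscale image."""
    f = np.fft.fftshift(np.fft.fft2(gray))
    return np.log1p(np.abs(f))

def high_freq_energy_ratio(gray: np.ndarray, radius_frac: float = 0.25) -> float:
    """Fraction of spectral energy outside a central low-frequency disc.
    The 0.25 radius fraction is an arbitrary assumption for illustration;
    real detectors learn which spectral regions matter from training data."""
    spec = np.abs(np.fft.fftshift(np.fft.fft2(gray))) ** 2
    h, w = gray.shape
    yy, xx = np.ogrid[:h, :w]
    r = np.hypot(yy - h / 2, xx - w / 2)
    mask = r > radius_frac * min(h, w)
    return float(spec[mask].sum() / spec.sum())
```

For instance, a smooth natural gradient concentrates almost all of its energy at low frequencies, while a period-2 checkerboard (an extreme caricature of an upsampling artifact) pushes roughly half its energy to the highest frequency.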
(1) What is the source of the content? Is it from a reputable, known source, or an unfamiliar website or social media account?
(2) Does the context in which the image is presented seem plausible or sensationalist? Are there any accompanying claims?
(3) Are there visible inconsistencies in lighting, shadows, or reflections within the image, particularly around the subject’s face or body compared to the background?
(4) Do the edges of the person or object in question look unnaturally sharp, blurry, or pixelated compared to the rest of the image?
(5) Are there any unusual distortions or artifacts in facial features, such as eyes, teeth, ears, or hair? Do they look symmetrical or natural?
(6) Does the skin texture look overly smooth, waxy, or patchy? Are there any inconsistencies in skin tone or blemishes?
(7) If it’s a known person, does their expression, pose, or the situation depicted align with their known behaviour or public persona?
(8) Are there any oddities in the background details? Do objects appear distorted, or are there any illogical elements present?
(9) Ask: have I seen this image elsewhere? Or can I find other sources corroborating or debunking it using a reverse image search (e.g., Google Images, TinEye)?
(10) Are there any subtle digital artifacts, such as unusual patterns in textures (e.g., hair, fabric), inconsistencies in focus or resolution between different parts of the image, or tell-tale signs of digital “stitching” or “inpainting” that suggest a generative AI process was used?
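Item (10)'s "inconsistencies in focus or resolution between different parts of the image" can also be probed numerically rather than by eye: compare a sharpness proxy across tiles of the image, and treat wildly mismatched scores within what should be a single photograph as grounds for a closer look. This is a minimal sketch; the tile size and the choice of variance-of-Laplacian as the sharpness proxy are our own assumptions.

```python
# Illustrative sketch: flag focus/resolution inconsistencies by comparing
# a per-tile sharpness score (variance of a 4-neighbour Laplacian, a
# common sharpness proxy) across the image.
import numpy as np

def laplacian_variance(tile: np.ndarray) -> float:
    """Variance of a 4-neighbour Laplacian over the tile's interior."""
    lap = (-4 * tile[1:-1, 1:-1]
           + tile[:-2, 1:-1] + tile[2:, 1:-1]
           + tile[1:-1, :-2] + tile[1:-1, 2:])
    return float(lap.var())

def tile_sharpness_map(gray: np.ndarray, tile: int = 32) -> np.ndarray:
    """Grid of per-tile sharpness scores for a 2D grayscale float image.
    Tiles that score far below their neighbours may indicate a pasted-in
    or separately generated (and differently blurred) region."""
    h, w = gray.shape
    rows, cols = h // tile, w // tile
    scores = np.zeros((rows, cols))
    for i in range(rows):
        for j in range(cols):
            block = gray[i * tile:(i + 1) * tile, j * tile:(j + 1) * tile]
            scores[i, j] = laplacian_variance(block)
    return scores
```

A region that was blurred or synthesised separately from the rest of the frame tends to stand out as a tile whose score differs sharply from its neighbours.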