
“Super-recognisers” are super distractions when it comes to safety

February 24th 2026

A recent study published by the British Psychological Society (BPS) tested whether individual differences in facial recognition ability explain how well people can tell AI-generated faces from real ones.

The recent BPS study highlights two points that deserve attention beyond the headline:

  1. Human expertise varies – a small group of “super‑recognisers” can spot AI‑generated faces slightly better than average participants (≈57% accuracy).
  2. Synthetic faces occupy a central region of “face‑space” – generative models tend to create hyper‑average, statistically smoother faces that cluster near the centre of that space, unlike the wider spread of real human variation.

The authors make a valuable contribution to cognitive science by showing that face‑identity expertise can be repurposed for deepfake detection, and that the “hyper‑average” signature is detectable at the algorithmic level. For the public, however, the practical message is modest: even the best human detectors perform only slightly above chance, and their advantage disappears when the faces become less extreme or when detection is required under time pressure.

What this means for anyone concerned about deepfakes—parents, students, professionals—is that relying on personal intuition or on a small cadre of experts will not provide the robustness needed in everyday life.


Why a “super‑AI‑face‑detector” alone is not enough

The phrase super‑AI‑face‑detector often conjures a plug‑and‑play shield that instantly blocks every synthetic image. In practice:

Super‑recognisers illustrate that human perceptual cues can complement algorithmic signals, but scaling that expertise to billions of users is unrealistic. The study’s “wisdom‑of‑the‑crowd” simulation shows that aggregating many highly trained observers can improve performance, yet it also underscores the cost of assembling such crowds in real time.
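To see why aggregating observers helps but does not scale, consider a simplified model (an illustration, not the study’s actual simulation): independent observers who are each right with probability ≈0.57 (the super‑recogniser figure above), combined by majority vote.

```python
import math

def majority_vote_accuracy(p: float, n: int) -> float:
    """Probability that a majority of n independent observers,
    each correct with probability p, reaches the right verdict.
    Ties (possible when n is even) are split at random."""
    acc = 0.0
    for k in range(n + 1):
        # Probability that exactly k of n observers are correct
        prob_k = math.comb(n, k) * p**k * (1 - p) ** (n - k)
        if 2 * k > n:
            acc += prob_k          # clear majority correct
        elif 2 * k == n:
            acc += 0.5 * prob_k    # tie broken at random
    return acc

# A lone expert stays at 57%; accuracy climbs only as the
# crowd grows into the hundreds of simultaneous observers.
for n in (1, 11, 101, 1001):
    print(n, round(majority_vote_accuracy(0.57, n), 3))
```

The model makes the trade‑off concrete: individual accuracy barely above chance can be amplified, but only by convening unrealistically large expert crowds for every image, which is exactly the cost the study’s simulation underscores.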

Practical steps to stay safe:

  1. Activate provenance features – many smartphones, cameras and social apps now offer options to store a hash or metadata tag with a photo. Turn these on, especially for images you post of your children or for professional headshots.
  2. Verify before you trust – for a video call, a profile picture, or a claimed news clip, run the file through a trusted verification service (e.g., our mobile or browser Deepfake Detector tool). Do not assume a “real‑looking” face equals a real person.
  3. Limit personal data exposure – AI models need training data; the more you feed platforms with high‑resolution selfies, the easier it is to generate convincing fakes. Use privacy settings to hide unnecessary details.
  4. Create a family “deepfake checklist”:
    • Is the source known and contactable?
    • Does the file carry a verification badge?
    • Do any visual cues (asymmetry, uncanny lighting) look off?
    • Can we confirm the claim via a direct, unscripted interaction?
  5. Install a browser extension – extensions that flag media lacking a verification hash can give an extra heads‑up; VerifyLabs.ai offers a lightweight, privacy‑first add‑on for Chrome.
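Steps 1 and 5 both rest on the same idea: a content hash that travels with the file. As a minimal sketch of how such a check works (hypothetical code, not VerifyLabs’ actual implementation), a verifier fingerprints the file bytes and compares them against the hash the source published:

```python
import hashlib

def content_hash(path: str) -> str:
    """SHA-256 fingerprint of a media file; any re-edit or
    regeneration of the image changes this value."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        # Read in chunks so large videos do not load into memory at once
        for chunk in iter(lambda: f.read(8192), b""):
            h.update(chunk)
    return h.hexdigest()

def matches_published_hash(path: str, published: str) -> bool:
    """Compare a downloaded file against the hash its source
    published alongside it (e.g. in a provenance record)."""
    return content_hash(path) == published.lower()
```

Real provenance systems such as the C2PA standard go further, embedding cryptographically signed manifests rather than bare hashes, but the principle is the same: a match says the bytes are unaltered since publication, while a mismatch means the file deserves the scrutiny in the checklist above.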

For parents, the most reliable safety net is early digital literacy. Children who learn to treat visual media as “claims that need evidence” are far less likely to be duped by a synthetic portrait, even if the portrait looks flawless.


© VerifyLabs.AI 2025. All rights reserved.