
Young people v AI deepfakes

December 16th 2025


Humanity has always invented and commoditised first, then made things safe later. Take the car: nearly a century passed between the first widely used models and the UK legislation that made seatbelts compulsory.

We can’t afford to repeat that mistake with AI Deepfakes.

Today's deepfakes are indistinguishable from reality, multi-modal across video, images and voice, and non-binary, mixing real and fake elements to help evade detection. For the first time in history, seeing or hearing is no longer believing, and a person's identity can no longer be taken at face value.

Deepfake apps are already everywhere, invading every realm of digital life, from news to social media, from corporate vetting to university applications. Data show an exponential year-on-year rise in AI deepfakes and crime associated with them.

For young people, exposure to harmful synthetic content is now part of the fabric of life: the apps used to make deepfakes carry no parental-consent controls or age restrictions. Gamified in design and literally child's play to use, they make deepfake generation both easy and fast. Our own testing has shown that even image generators that purport to have a strong anti-deepfake policy can be subverted relatively easily to produce deepfake images indistinguishable from the real thing.

Children and young people are more vulnerable to deepfake attacks than adults. They are digitally literate, quick to master new technology and spend much of their lives online, but that technical fluency isn't matched by risk awareness, which often exacerbates the consequences of deepfake abuse.

The specific risks to children are significant. They include grooming and exploitation, non-consensual explicit content, blackmail and coercion, identity theft and fraud, social reputational damage, educational disruption, emotional trauma and ongoing distress.

Consequently, an alarming rise in cyber-bullying using non-consensual sexual material has violated a whole generation of young people. The fear felt by parents and educators is real: new research from VerifyLabs.AI reveals that over a third (35%) of Brits said deepfake nudes (non-consensual intimate imagery) or videos of themselves or their child were what they feared most about deepfakes.

Another survey from Censuswide found more than a quarter of children have seen a sexualised deepfake of a celebrity, friend, teacher or themselves. Just under half of young people think more needs to be done to ensure their online safety.

Current legislation hasn't begun to tackle the issue. The UK still has no single, overarching law specifically targeting deepfakes. Instead it relies on a patchwork of existing and new legislation to address specific harms caused by AI misuse, particularly non-consensual sexual content, fraud and harassment. This reactive, archaic stance continues to put individuals and society at great risk.

There's an urgent need for legislation aimed both at the companies producing AI-generated deepfake content and at the digital platforms hosting it. There's a concurrent need for legislation that supports and empowers victims in the digital space, including automatic reporting mechanisms and processes, a right to immediate and permanent deletion, and access to compensation and support.
