The number of deepfake applicants targeting universities is increasing, with education software providers scrambling to mitigate AI fraud in admissions.
AI technology can mimic, replace or augment people, allowing fraudulent applicants to appear fluent, knowledgeable and confident in subjects they may barely understand. The deception is increasingly not all-or-nothing: genuine, human-made material can be spliced with deepfake elements.
Fraudulent impersonation or augmentation isn’t the only deepfake threat educators face. Deepfake attacks can undermine the integrity and reputation of academic institutions by fabricating research data. Deepfakes can be combined with plagiarised or machine-generated academic submissions to disadvantage genuine students, infringe copyright and intellectual-property law and expose institutions to costly legal penalties. Deepfakes can also cause havoc with government requirements for academic institutions. In the UK, institutions risk losing their student sponsorship licence through UK Visas and Immigration (UKVI) if the Home Office refuses more than 10% of their applicants in a year.
Deepfake technology has given new tools to people who threaten or harass others, and those who behave in abusive or intrusive ways. Deepfakes can be easily weaponised as a means of coercive control and are being applied in a growing spectrum of settings that include domestic violence, celebrity stalking, partner impersonation and “digital rape”.
Studies indicate that a disproportionate percentage of victims of digital abuse are female, with up to 58% of women having experienced technology-assisted abuse. Non-consensual sexual imagery makes up 96% of deepfakes online, with 99.9% of it depicting women.
Professionals who manage these threats often work reactively, triaging the damage caused by synthetic-media attacks. The absence of accountability from technology platforms, and of legislation specifically targeting deepfake crime, makes fixated-risk management laborious territory. Acting quickly to label AI-generated images as such is essential to break the momentum of online abuse.
The threats deepfakes pose to financial institutions are significant and evolving.
Deepfakes can exploit weaknesses in authentication mechanisms, impersonate institutional leaders and manipulate or influence markets. Losses from deepfake and AI-generated fraud are expected to reach tens of billions of dollars in the next few years.
Financial institutions are being targeted in a number of ways by criminals using deepfake technology, from C-suite impersonation to third-party imitation.
Financial organisational attacks include vishing (voice-based phishing), deepfake meeting fraud, deepfake-enabled coercion, online biometric identity impersonation, theft of employee data, deepfake tool misuse, non-compliance, misinformation and disinformation.
Technology-level attacks on the financial landscape include data poisoning, model inversion, software supply-chain attacks, weak or inappropriate models and adversarial-AI tampering.
Deepfakes pose a multi-layered threat to the jobs and recruitment industries. From false credentials and impersonations to fraudulent applications and racially manipulated appearances, recruiters today must navigate a barrage of potential AI-driven crime.
Jobseekers are also at risk, as bad actors use deepfake job postings and interviews to harvest personal data for identity fraud and other crimes.
A common misapprehension that puts recruiters and jobseekers at risk is that deepfakes are relatively easy to spot. In fact, the opposite is true: even security experts have been duped. Security company KnowBe4 hired a deepfake software engineer, “Kyle”, for its internal AI team. The company sent “him” a laptop, only discovering he was a deepfake after “he” tried to download malware onto it. KnowBe4 had put “Kyle” through four separate video-conference interviews and completed background and pre-hiring checks, passing “him” at every hurdle.
Amid deepfake attacks from both states and individuals, the onus is on leaders to upgrade security technology well beyond standard background-checking and recruitment protocols.
Deepfakes pose an increasing threat to journalists and news professionals, who must verify the authenticity of sources and take steps against misinformation. While journalists have traditionally had to combat “shallowfakes” (repurposed, mis-contextualised or edited media intended to deceive), the growth of deepfake technology is now of paramount concern. High-profile journalists, especially women, are popular targets for deepfake criminals: investigative journalist Rana Ayyub, for example, was targeted with a non-consensual sexual deepfake video intended to silence her.
As synthetic media becomes more widespread, journalists must identify and respond to it at speed. Forensic techniques such as pulse detection and blockchain-based hashing are time-consuming and can be unreliable.
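To illustrate why hash-based provenance checks are brittle in practice, here is a minimal sketch, in Python, of the comparison a newsroom might run against a digest published by a media file’s original source. The file name and the placeholder reference digest are hypothetical, and this is an illustration rather than a recommended workflow.

```python
import hashlib
from pathlib import Path

def sha256_of_file(path: str, chunk_size: int = 1 << 20) -> str:
    """Compute the SHA-256 digest of a media file in streaming chunks."""
    digest = hashlib.sha256()
    with Path(path).open("rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()

# Hypothetical comparison against a digest published by the original source,
# for example in a provenance manifest or on a ledger.
PUBLISHED_HASH = "0" * 64  # placeholder, not a real digest

if __name__ == "__main__":
    local_hash = sha256_of_file("suspect_clip.mp4")  # hypothetical file name
    if local_hash == PUBLISHED_HASH:
        print("Exact match: bit-for-bit identical to the published original.")
    else:
        print("No match: altered, re-encoded, or simply not the original file.")
```

The limitation is visible in the code itself: any re-encoding, resizing or platform re-compression changes the digest entirely, so a mismatch proves nothing about manipulation, which is part of why journalists find such checks slow and unreliable in practice.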
The growth of deepfakes is forcing journalists to spend both time and money verifying images, audio and video. This burden falls disproportionately on under-resourced journalists, adding a further layer of disadvantage.
AI-driven crime is now so sophisticated that it can undermine fundamental security protocols across legal services, including anti-money laundering (AML), Know Your Customer (KYC), client protection and fraud-detection systems.
In the UK, the Solicitors Regulation Authority is intensifying scrutiny of standard practices such as Client Matter Risk Assessments (CMRAs), and increasingly hefty fines are being applied for lapses in regulatory functions. The onus is now firmly on law firms to ditch “template” risk assessments in favour of tailored, ongoing approaches that address real-world, practice-specific scenarios.
Deepfakes can also be used to create or alter evidence, conjure false identities, manipulate legal proceedings and deceive executives into divulging sensitive information. In fact, the applications of deepfakes are limited only by the imaginations of cyber criminals—prompting the Law Society to issue guidance on threat mitigation.
Judges are increasingly grappling with evidentiary issues caused by synthetic media. Who is responsible for proving the authenticity of images, video, text or audio? Can digitally enhanced images alter proportions to the extent that they become “fake”? At what point should a broken or concealed software trail be interpreted as a sign of inauthenticity?
Deepfakes cause serious damage to open-source intelligence (OSINT) by spreading misinformation, corrupting data analysis and calling into question the veracity of intelligence reports.
OSINT teams and communities have a critical need to identify media authenticity and information provenance—often at speed.
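One way teams triage at speed is perceptual hashing, which, unlike cryptographic hashing, tolerates resizing and re-compression. The sketch below uses the open-source Pillow and imagehash Python libraries to compare a circulating image against a trusted reference; the file names and the distance threshold are illustrative assumptions, not calibrated values.

```python
# Requires: pip install Pillow imagehash
from PIL import Image
import imagehash

# Hypothetical inputs: a reference image from a trusted archive and a copy
# circulating online.
reference = imagehash.phash(Image.open("reference_photo.jpg"))
suspect = imagehash.phash(Image.open("circulating_copy.jpg"))

# imagehash overloads subtraction to return the Hamming distance between hashes.
distance = suspect - reference
print(f"Hamming distance: {distance}")

# A small distance suggests the circulating image is a resized or re-compressed
# copy of the reference; a large one suggests different or manipulated content.
THRESHOLD = 8  # illustrative assumption, not a calibrated value
print("Likely derived from the reference" if distance <= THRESHOLD
      else "Likely different or altered content")
```

Even then, perceptual hashing only flags candidates for closer review; it cannot by itself establish whether a piece of media is synthetic.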