VerifyLabs.AI uses several sophisticated neural network architectures, chosen according to the type and content of the media being checked. Some are better at detecting AI-manipulated faces; others are better at determining whether an entire image was generated by AI.
Different neural networks will give you slightly different results. Each algorithm measures different characteristics of a piece of content, and some algorithms will produce stronger results than others, depending on which tool created the original content.
Results for images or videos can change if they have been manipulated. For example, an image sent via WhatsApp is almost always processed to create a smaller file for easier transmission over the internet. This changes the characteristics of the file and can result in variations in scores. If you get a grey or red result, you’ll need to treat the image or other media with caution.
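A rough sketch of why recompression matters: lossy compression quantizes pixel values, so the file a detector sees after a messaging-app transfer is no longer identical to the file that was originally created. The snippet below is purely illustrative (it is not VerifyLabs.AI's pipeline, and the pixel values and quantization step are made up); it just shows how many values a single coarse re-encode can alter.

```python
# Illustrative only: simulate lossy recompression by quantizing
# pixel intensities to a coarser grid, as JPEG-style compression does.

original = [12, 47, 131, 200, 255, 63, 90, 18]  # sample 8-bit pixel values

def recompress(pixels, step=16):
    """Snap each value to the nearest multiple of `step`, clamped to 0-255."""
    return [min(255, round(p / step) * step) for p in pixels]

received = recompress(original)
changed = sum(1 for a, b in zip(original, received) if a != b)

print("sent:    ", original)
print("received:", received)
print(f"{changed} of {len(original)} pixel values altered by recompression")
```

Even this toy quantization alters most of the values, which is why the same image can score differently before and after being sent through a platform that recompresses it.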
COMING SOON: If you suspect that an image is a deepfake but it still returns a ‘Human Made’ result, you will be able to run it through a separate detector within the app for a deeper check.