Optic

Deepfake-detecting startup Optic boldly claims it can identify AI-generated images created by Stable Diffusion, Midjourney, DALL-E, and Gan.ai. Users simply drag and drop the image in question into Optic’s website or submit it to a bot on Telegram, and the company spits out a verdict.
Optic presents a percentage showing how confident its system is that the submitted image is AI-generated, and it also attempts to determine which particular model was used to create the image. Though Optic claims it correctly identifies AI-generated images 96% of the time, it’s not foolproof: when Gizmodo asked Optic to judge the authenticity of the widely debunked image depicting a supposed explosion near the Pentagon, Optic determined it was “generated by a human.”