
Can you tell AI-generated images from real ones? A recent study by Joe Youngblood suggests that people still aren’t easily fooled. While artificial intelligence continues to advance in image creation, most Americans can still tell the difference between real and AI-generated pictures. Well, at least for now.
The research involved 4,016 U.S. consumers between the ages of 18 and 65. Participants were shown pairs of images and asked to identify which one was created using generative AI. Overall, they got it right 71.63% of the time. Pretty impressive, huh?
[Related Reading: Think you can tell portraits from AI? Try this quiz!]
Study Reveals Americans Aren’t Easily Fooled
The researchers conducted the experiment using the Eureka survey system and a Typeform questionnaire. Participants were shown randomized side-by-side comparisons of AI-generated and real images. Each AI image was picked from five candidates generated with tools like Midjourney, DALL-E, and Grok 3, with the most realistic one used in the test.
“Consumers are highly likely to be able to detect if your images are generated by AI,” the study concluded. But you may be wondering about all those Facebook comments where people genuinely mistake AI-generated images for real ones. The study did touch on that weak spot.
“Consumers on social media confusing AI and real imagery are possibly in the lowest quartile of detection showing there might be some other issues there such as vision impairment, technical prowess, or intelligence levels,” the study reads.
What Were the Results?
The study found that most generative AI images stood out to viewers as obviously fake. The Italian countryside, Scarlett Johansson as Black Widow, a baby peafowl, and Jupiter were all identified correctly over 83% of the time. This suggests that AI still struggles to match the realism of human-taken photographs.
However, some images, such as the Eiffel Tower, really confused the participants: only 18.05% of them correctly picked out the AI image. “Perhaps it is because this is one of the most famous landmarks on the planet or the training data for various generative AI systems include lots of photos of this structure,” the study reads, “but most humans were unable to correctly identify the generative AI image.”

When Are AI Images Accepted?
Interestingly, the study also indicates that AI-generated content may be more accepted in specific contexts, such as memes, cartoons, video game graphics, or stylized diagrams — areas where realism isn’t the main concern.
However, when trust or authenticity plays a key role in consumer decisions — such as product endorsements or historical representations — passing off AI-generated content as real may backfire.
What’s Next?
The research team plans to repeat the experiment with OpenAI’s newer image generation models, including GPT-4o, which showed improved performance in early tests. But even then, inconsistency remains a challenge. “It might generate a prompt near perfectly one day and the next completely wrong,” the study notes.
So, although generative AI continues to improve, most Americans still seem equipped to spot its flaws and tell AI from reality. Those Facebook comments might tell you otherwise, though. And something tells me we’re in for way more of them as generative AI evolves.
By the way, can you tell which of the two pictures in the lead image is AI-generated? Honestly, if I hadn’t made it myself, I wouldn’t be able to tell.
[via Phoblographer]