AI Systems Now Fall for the Same Optical Illusions as Humans—Here's What That Reveals
- Jan 16

Scientists discover artificial intelligence can be tricked by visual illusions, unlocking groundbreaking insights into how our brains actually work.
Artificial intelligence has just proven it can be fooled by the same optical tricks that have puzzled humans for centuries. In a fascinating development, researchers have discovered that some deep neural networks (DNNs)—the technology powering today's most advanced AI systems—fall for optical illusions just like we do, opening revolutionary new pathways for understanding human cognition.
The breakthrough matters because it gives scientists a new tool for studying the brain without the ethical constraints that govern human experimentation. While experimenting directly on human brains raises serious moral concerns, AI models offer a sandbox where researchers can simulate, manipulate, and analyze how visual perception actually works.
We've all experienced it: the Moon appears dramatically larger when hanging near the horizon compared to when it sits high in the night sky, even though its actual size and distance from Earth remain constant. This classic optical illusion demonstrates that what we perceive isn't always reality.
But these visual tricks aren't simply mistakes—they reveal the ingenious shortcuts our brains use to process the overwhelming flood of information constantly bombarding our eyes. Our brains can't possibly analyze every detail in our busy visual environments, so they've evolved to extract only what's essential, filtering and prioritizing information with remarkable efficiency.
This is where AI enters the picture with a game-changing twist.
Dr. Eiji Watanabe, an associate professor of neurophysiology at Japan's National Institute for Basic Biology, led groundbreaking research using an AI system called PredNet to study motion-based illusions like the famous "rotating snakes" pattern—a static image of colorful circles that appears to spin when you stare at it.
"Using DNNs in illusion research allows us to simulate and analyze how the brain processes information and generates illusions," Watanabe explains. "Conducting experimental manipulations on the human brain raises serious ethical concerns, but no such restrictions apply to artificial models."
PredNet operates on a fascinating principle called predictive coding, which mirrors a leading theory about human vision. Rather than passively recording what our eyes see, our visual system actively predicts what it expects to encounter based on experience, then processes only the differences between prediction and reality. This allows us to see and react much faster—a crucial evolutionary advantage.
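The core idea of predictive coding can be sketched in a few lines of code. This is a deliberately simplified toy, not PredNet itself: the "model" predicts each frame as a copy of the previous one and propagates only the prediction error, which stays near zero for a stable scene and spikes when something moves.

```python
import numpy as np

def prediction_errors(frames):
    """Predict each frame as a copy of the previous one and return
    the per-frame mean absolute prediction error (the 'surprise')."""
    errors = []
    prev = frames[0]
    for frame in frames[1:]:
        errors.append(float(np.abs(frame - prev).mean()))  # surprise signal
        prev = frame  # update the internal model with what was just seen
    return errors

# A static scene produces zero error; a moving pattern produces surprise.
static_scene = [np.zeros((4, 4)) for _ in range(3)]
moving_scene = [np.eye(4) * t for t in range(3)]

print(prediction_errors(static_scene))  # -> [0.0, 0.0]
print(prediction_errors(moving_scene))
```

The efficiency gain is visible in the static case: when prediction matches reality, there is almost nothing left to process, which is the speed advantage the theory describes.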
Watanabe trained PredNet using roughly one million frames of natural landscape videos captured from head-mounted cameras, simulating human visual experience. Critically, the AI never saw any optical illusions during training.
When Watanabe presented the trained AI with variations of the rotating snakes illusion, something remarkable happened: the machine was fooled by the same images that trick humans. An altered version that humans correctly perceive as static also appeared static to the AI.
"After processing around a million frames, PredNet learns certain rules of the visual world," Watanabe says. "It extracts and remembers the essential rules, and among these, it may have also learned characteristics of moving objects."
The AI's susceptibility to the same illusion supports the predictive coding theory. Both human brains and PredNet identify visual patterns indicative of motion and trigger prediction systems that assume movement is occurring—even when nothing actually moves.
Despite these striking similarities, important differences remain. When humans fix their gaze on one rotating circle in the illusion, it appears to stop while peripheral circles continue spinning. PredNet, however, always perceives all circles moving simultaneously.
"This is likely because PredNet lacks an attention mechanism," Watanabe explains. Unlike humans, who can focus on specific details, the AI processes the entire image uniformly—a limitation that highlights we're still far from replicating human vision completely.
Currently, no deep neural network experiences all the illusions humans do. As Watanabe notes, while systems like ChatGPT might converse like humans, their underlying architecture functions very differently from biological brains.
Some researchers are pushing even further by combining AI with quantum mechanics to simulate how we perceive ambiguous illusions like the Necker cube—a wireframe cube that randomly flips between two orientations in our minds.
Ivan Maksymov, a research fellow at Australia's Charles Sturt University, developed a quantum-enhanced neural network that mimics this perceptual switching. His system uses quantum tunneling to process information and, remarkably, switches between interpretations at intervals similar to humans.
"It's quite close to what people see in tests," Maksymov says. This work doesn't necessarily suggest our brains operate on quantum principles, but it demonstrates that quantum theory can better model certain aspects of human thought and decision-making.
These discoveries extend beyond theoretical neuroscience. AI systems excel at spotting patterns and details humans miss, making them invaluable for detecting early disease signs in medical scans. Understanding where AI perception aligns with—or diverges from—human vision helps improve these diagnostic tools.
The research also has implications for space exploration. Studies of astronauts on the International Space Station (ISS) reveal that spending three months in orbit changes how they perceive optical illusions, likely because depth perception relies partly on gravity. Quantum-enhanced AI could simulate these perceptual changes, helping prepare future space travelers.
"While it's a narrow field of research, it's quite important because humans want to go to space," Maksymov notes.
For Ghana and other developing nations investing heavily in AI and technology education, this research demonstrates that cutting-edge innovation doesn't always require massive computing resources. Understanding fundamental principles of how AI processes information can inform more efficient, locally relevant applications.
African tech innovators working on AI-powered health diagnostics, agricultural monitoring, or financial services can apply these insights about machine perception to build more reliable, human-centered systems that account for the differences between artificial and human vision.
As AI systems become increasingly integrated into daily life—from autonomous vehicles to medical diagnostics to content moderation—understanding exactly how they "see" the world becomes critical. These optical illusion studies reveal both the remarkable similarities and crucial differences between silicon and biological intelligence.
The fact that some AI can be fooled by the same visual tricks that puzzle humans doesn't make the technology less reliable—it makes it more understandable. And understanding our tools, whether they're made of neurons or circuits, is the first step toward using them wisely.
The research continues, with scientists working to create AI that experiences a broader range of human-like illusions while maintaining the detail-oriented precision that makes machine vision so valuable. It's a journey that promises to teach us as much about ourselves as about the machines we're building.