
AI Falls for Optical Illusions, Offering Clues About Human Brain



The Moon appears larger near the horizon, though its size remains constant. Such optical illusions reveal how our brains take shortcuts to process the world efficiently. Artificial intelligence can be deceived in the same ways, and that shared weakness is shedding light on human perception.

Why Illusions Happen

Optical illusions are often seen as errors in our visual system, but they actually highlight how the brain prioritizes key details over exhaustive data. Processing every visual input would overwhelm us, so the brain filters information, sometimes leading to misperceptions.

Artificial intelligence, particularly deep neural networks (DNNs), excels at detecting patterns humans miss, such as early signs of disease in medical scans. Yet some AI systems are vulnerable to the same illusions that trick people, offering researchers a new way to study brain function.

AI as a Brain Model

Eiji Watanabe, an associate professor of neurophysiology at Japan's National Institute for Basic Biology, explains that DNNs let scientists simulate how the brain processes information without the ethical concerns tied to human experiments.

"Using DNNs in illusion research lets us analyze how the brain generates illusions," Watanabe says. "Artificial models face no ethical restrictions, unlike human studies."

While theories abound, no single explanation fully accounts for why humans perceive illusions. Studies of individuals who regained sight later in life suggest motion perception is more resilient to sensory deprivation than shape recognition, possibly because infants learn to process movement earlier.

Predictive Coding Theory

Watanabe's team tested whether an AI model called PredNet, built on predictive coding theory, would react to the "rotating snakes" illusion, a static image that appears to move. The theory posits that the brain predicts what it expects to see and then adjusts for discrepancies, enabling faster perception.
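
The core loop of predictive coding is easy to illustrate. The sketch below is a hypothetical toy, not the team's model: a single unit maintains a prediction of its input and corrects that prediction by a fraction of the prediction error at every step, so expected input is quickly "explained away" and only surprises produce large error signals.

```python
import numpy as np

def predictive_coding_demo(signal, learning_rate=0.1):
    """Toy predictive-coding loop: keep a running prediction of the input
    and nudge it toward each observation by a fraction of the error."""
    prediction = 0.0
    errors = []
    for observation in signal:
        error = observation - prediction      # mismatch between expectation and input
        prediction += learning_rate * error   # adjust the internal model
        errors.append(error)
    return prediction, errors

# A constant input is soon predicted almost perfectly: errors shrink toward zero.
signal = np.full(50, 3.0)
final_prediction, errors = predictive_coding_demo(signal)
print(round(final_prediction, 2), round(errors[-1], 4))
```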

PredNet was trained on videos of natural landscapes, learning to anticipate future frames. When shown the illusion, it perceived motion, just like humans. However, unlike people, who see motion stop when focusing on a single circle, PredNet processed the entire image uniformly due to its lack of an attention mechanism.
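
The experimental logic can be sketched in a few lines. This is an assumption-laden outline, not the researchers' code: predict_next_frame stands in for a trained PredNet, OpenCV is assumed to be available for the optical-flow estimate, and a synthetic gradient pattern stands in for the rotating-snakes image. The idea is simply that if a next-frame predictor "expects" a static image to move, dense optical flow between the input and the prediction will be nonzero.

```python
import numpy as np
import cv2  # OpenCV, assumed installed; used only for the optical-flow estimate

def predict_next_frame(frame: np.ndarray) -> np.ndarray:
    """Placeholder for a trained next-frame predictor such as PredNet.
    Here it returns a slightly shifted copy so the script runs end to end."""
    return np.roll(frame, shift=1, axis=1)

def illusory_motion_score(static_image: np.ndarray, steps: int = 5) -> float:
    """Feed the same static image in repeatedly, compare each predicted frame
    with the input via dense optical flow, and report the mean flow magnitude.
    Nonzero flow on a truly static input indicates 'perceived' motion."""
    magnitudes = []
    for _ in range(steps):
        predicted = predict_next_frame(static_image)
        flow = cv2.calcOpticalFlowFarneback(static_image, predicted, None,
                                            0.5, 3, 15, 3, 5, 1.2, 0)
        magnitudes.append(np.linalg.norm(flow, axis=-1).mean())
    return float(np.mean(magnitudes))

# Stand-in for the rotating-snakes image: a repeating grayscale gradient.
image = np.tile(np.arange(64, dtype=np.uint8), (64, 1))
print(illusory_motion_score(image))
```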

Quantum AI and Perception

Some researchers are merging AI with quantum mechanics to simulate human perception. Ivan Maksymov, a research fellow at Australia's Charles Sturt University, developed a quantum-inspired DNN to model how people perceive ambiguous illusions such as the Necker cube (a wireframe cube that flips between two orientations) and the Rubin vase (an image seen as either a vase or two facing profiles).

The AI, using quantum tunneling, switched between interpretations at intervals similar to human observers. Maksymov suggests this reflects how quantum theory might model decision-making, though he doesn't claim the brain itself is quantum.
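
The switching behaviour itself can be mimicked with a much simpler, purely classical toy, which is useful only for showing what "intervals similar to human observers" means in practice. The sketch below is not Maksymov's quantum-tunnelling network: it just flips between two interpretations with a small per-step probability, the way noise occasionally pushes a bistable system over the barrier between its two states, and records how long each interpretation lasts.

```python
import random

def simulate_bistable_switching(steps: int = 10_000,
                                switch_probability: float = 0.01,
                                seed: int = 0) -> list[int]:
    """Toy model of bistable perception: the current interpretation
    (0 = vase, 1 = faces) flips with a small probability at each step,
    mimicking noise-driven escapes over a perceptual barrier.
    Returns the dwell times (steps spent in each interpretation)."""
    random.seed(seed)
    state, dwell, dwell_times = 0, 0, []
    for _ in range(steps):
        dwell += 1
        if random.random() < switch_probability:
            dwell_times.append(dwell)
            state, dwell = 1 - state, 0
    return dwell_times

dwells = simulate_bistable_switching()
print(f"switches: {len(dwells)}, mean dwell: {sum(dwells) / len(dwells):.1f} steps")
```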

Space and Perception

Astronauts on the International Space Station (ISS) experience altered perception of illusions like the Necker cube. On Earth, they favor one perspective, but after months in orbit, they see both equally. Scientists attribute this to gravity's role in depth judgment.

"This research is vital as humans venture into space," Maksymov says. "Astronauts must trust their vision."

Limitations of AI

Despite progress, AI still falls short of replicating human vision. Watanabe notes that no DNN experiences all human illusions, and systems like ChatGPT, while conversational, function differently from biological brains. The shared use of neuron-like structures masks fundamental differences in processing.

As AI advances, it may bridge gaps in our understanding of perception, but for now the human brain remains uniquely complex.
