When we walk down a street and observe the world around us, our brain performs a remarkable feat: distinguishing objects that are stationary from those in motion. Consider the challenge of telling a parked car from one zipping past at high speed. It might seem trivial, but the mechanisms that make this perception possible are highly intricate. The difficulty arises primarily because the movements of our own eyes sweep the image of the entire visual scene across the retina, a phenomenon long regarded as visual “noise” that the brain must filter out to perceive true object motion.
Traditional neuroscience has held that the visual system must subtract out the retinal motion generated by eye movements in order to isolate the motion of objects relative to the environment. This longstanding notion, however, has been challenged by new research from the University of Rochester. The groundbreaking investigation reveals that the visual motion caused by our eye movements is far from meaningless interference. Instead, these specific patterns of image motion are valuable clues that the brain actively analyzes to decipher how objects move and, critically, how they occupy three-dimensional space.
Leading this innovative inquiry is Professor Greg DeAngelis, a distinguished figure in brain and cognitive sciences, neuroscience, and biomedical engineering. According to DeAngelis, the assumption that image motion produced by eye movements is merely a nuisance variable to be discarded is a misconception. The team's findings show that the brain harnesses these global patterns of image flow to infer how the eyes are moving relative to the surrounding scene. This insight reshapes our understanding of visual processing by framing eye movement-induced image motion as an essential ingredient of depth and motion interpretation, not a problem to be erased.
To investigate these dynamics systematically, the research team devised a theoretical framework that predicts how humans perceive motion and depth under different eye movement conditions. The model accounts for the interplay between the motion of a target object, the observer's direction of gaze, and the accompanying displacement of the retinal image. By simulating scenarios with varying object trajectories and gaze directions, the team derived precise predictions of the perceptual errors observers should make when judging depth and motion.
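The paper's specific model is not reproduced here, but the classic optic-flow decomposition of Longuet-Higgins and Prazdny captures the geometric fact such a framework builds on: the image motion produced by an eye rotation is identical at every depth, whereas the component produced by translation shrinks with distance. The short Python sketch below illustrates that decomposition; the function name, parameters, and example values are illustrative rather than taken from the study.

```python
# A minimal sketch, not the study's actual model: the standard optic-flow
# decomposition (Longuet-Higgins & Prazdny, 1980) splitting image motion into
# a depth-dependent translational part and a depth-independent rotational part.
import numpy as np

def image_velocity(x, y, Z, T, omega, f=1.0):
    """Image velocity of a point at (x, y) with depth Z, for eye translation T
    and eye rotation omega (all quantities in arbitrary consistent units)."""
    Tx, Ty, Tz = T
    wx, wy, wz = omega
    # Translational flow scales with 1/Z, so it carries information about depth.
    u_t = (-f * Tx + x * Tz) / Z
    v_t = (-f * Ty + y * Tz) / Z
    # Rotational flow is the same at every depth: a global pattern from which
    # the eye's own movement can, in principle, be inferred.
    u_r = (x * y / f) * wx - (f + x**2 / f) * wy + y * wz
    v_r = (f + y**2 / f) * wx - (x * y / f) * wy - x * wz
    return np.array([u_t + u_r, v_t + v_r])

# Two points at different depths under the same eye rotation: the rotational
# component is identical, only the translational component differs.
print(image_velocity(0.1, 0.05, Z=1.0, T=(0.0, 0.0, 0.2), omega=(0.0, 0.05, 0.0)))
print(image_velocity(0.1, 0.05, Z=4.0, T=(0.0, 0.0, 0.2), omega=(0.0, 0.05, 0.0)))
```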
The team validated these predictions through controlled experiments in immersive 3D virtual reality environments. Participants maintained fixation on a stable point while target objects moved through the scene. In one task, subjects adjusted a dial to align a secondary object's motion direction with the perceived trajectory of the target. In a second, depth-oriented task, participants reported whether the target appeared closer or farther than the fixation point. In both tasks, observers showed consistent, systematic perceptual biases that matched the theoretical predictions remarkably well, underscoring the model's robustness.
Importantly, this body of work demonstrates that the brain integrates multiple streams of information—especially the image motions generated by eye movements—when constructing its representation of the three-dimensional world. Rather than suppressing these retinal signals as noise, the visual system evaluates their spatial patterns to infer the real-world layout accurately. This nuanced understanding challenges canonical perspectives in vision science that have dominated for decades.
The implications of these findings extend beyond basic neuroscience, touching on real-world applications such as technological interfaces and virtual reality. DeAngelis points out that current VR systems largely ignore the dynamic relationship between eye movements and the visual scene when rendering images. This disconnect may produce visual conflicts causing discomfort or motion sickness among users, as the artificial image motion does not align with the brain’s expected sensory input during eye movements.
By incorporating models of how the brain processes eye movement-induced image motion, future VR technologies could render more naturalistic and stable visual environments. Such advancements have the potential not only to enhance user comfort and reduce motion sickness but also to improve immersion and accuracy in virtual spaces. This line of research opens pathways toward a new generation of visually intelligent systems that harmonize with the brain’s perceptual strategies.
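As a purely illustrative sketch of that idea, and not an implementation of any existing VR system or SDK, a gaze-aware renderer could compare the image motion it is about to present with the global flow pattern that a tracked eye rotation would normally produce, treating the mismatch as a rough proxy for the sensory conflict described above. All names and numbers here are hypothetical.

```python
# Hypothetical gaze-aware conflict check; no real VR SDK or eye-tracker API is assumed.
import numpy as np

def rotational_flow(points_xy, omega, f=1.0):
    """Depth-independent image flow expected at each point for a pure eye rotation omega."""
    wx, wy, wz = omega
    x, y = points_xy[:, 0], points_xy[:, 1]
    u = (x * y / f) * wx - (f + x**2 / f) * wy + y * wz
    v = (f + y**2 / f) * wx - (x * y / f) * wy - x * wz
    return np.stack([u, v], axis=1)

def conflict_score(rendered_flow, points_xy, tracked_eye_velocity):
    """Mean mismatch between the image motion being rendered and the motion
    expected from the measured eye rotation (an illustrative metric only)."""
    expected = rotational_flow(points_xy, tracked_eye_velocity)
    return float(np.mean(np.linalg.norm(rendered_flow - expected, axis=1)))

# Example with made-up numbers: sample image points, the per-frame flow the
# renderer is producing, and an eye velocity reported by an eye tracker.
pts = np.array([[0.1, 0.0], [-0.2, 0.1], [0.0, -0.15]])
rendered = np.array([[-0.05, 0.0], [-0.05, 0.0], [-0.05, 0.0]])
print(conflict_score(rendered, pts, tracked_eye_velocity=(0.0, 0.05, 0.0)))
```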
Furthermore, these discoveries inform our understanding of neurological disorders affecting visual perception and motion processing. Conditions that impair the brain’s ability to integrate eye movement signals might underlie difficulties in spatial navigation, object recognition, or depth perception. By elucidating how the healthy brain solves these challenges, this research sets the stage for targeted therapies and diagnostic tools.
The study involved contributions from graduate and postdoctoral researchers, reflecting a collaborative endeavor across multiple domains of expertise. Zhe-Xin Xu, formerly a doctoral student and now a postdoc at Harvard, and Jiayi Pang, currently continuing graduate studies at Brown University, brought critical insights. Akiyuki Anzai, a research associate at Rochester, also played a key role, underscoring the multidisciplinary nature of the investigation within neuroscience and visual cognition.
Supported by the National Institutes of Health, this research underscores the value of integrating theoretical modeling with immersive experimental paradigms to unravel complex brain functions. The fusion of computational and behavioral approaches emerges as a powerful tool for deciphering perception mechanisms that govern human experience of space and motion.
Ultimately, this paradigm-shifting work not only redefines our conception of how eye movements affect visual perception but also paves the way for innovations across health sciences and technology. By revealing that eye movement-induced image motion serves as an informative signal rather than unwanted noise, this study illuminates the sophisticated strategies the brain employs to interpret and navigate the three-dimensional world around us.
Subject of Research: Neuroscience, Visual Perception, Eye Movement, 3D Spatial Interpretation
Article Title: The Brain’s Use of Eye Movement-Induced Image Motion to Interpret 3D Space and Object Motion
Web References:
- University of Rochester: http://www.rochester.edu/
- Greg DeAngelis Lab: https://www.sas.rochester.edu/bcs/people/faculty/deangelis_greg/index.html
- Nature Communications article: https://www.nature.com/articles/s41467-025-67857-4
- DOI: https://doi.org/10.17605/OSF.IO/ZY8W6
References:
DeAngelis, G.C., Xu, Z.-X., Pang, J., Anzai, A. (2025). Patterns of visual motion produced by eye movements inform the brain’s perception of 3D motion and depth. Nature Communications. https://doi.org/10.17605/OSF.IO/ZY8W6
Image Credits: John Schlia Photography, University of Rochester
Keywords: Neuroscience, Visual Perception, Eye Movements, 3D Vision, Depth Perception, Motion Perception, Virtual Reality, Cognitive Psychology, Brain and Cognitive Sciences

