With a brain-inspired chip, robots can see faster and in real time
Researchers mimic the brain's filtering architecture to reduce optical flow delays
Robots are beginning to see and react in real time. A new study published in the journal Nature Communications by researchers at Beihang University in China describes a vision system that processes motion four times faster than today’s leading optical flow methods. The advance could sharpen the reflexes of autonomous vehicles, industrial robots, and surgical machines.

The breakthrough is based on neuromorphic engineering, a field that designs hardware modeled on the human brain. Unlike traditional processors, which separate memory and computation, neuromorphic chips integrate both functions, enabling faster and more energy-efficient data processing. This bio-inspired approach has long been seen as a promising way to bridge the gap between machine and human cognition.
The research team, led by roboticist Shuo Gao, took inspiration from a lesser-known brain structure: the lateral geniculate nucleus (LGN). Located between the retina and the visual cortex, the LGN acts as both a relay and a filter. Its sensitivity to temporal and spatial changes allows the human visual system to focus its processing power on fast-moving or rapidly changing objects, such as a cyclist cutting through traffic or a changing traffic light. Gao’s team set out to replicate this selective attention mechanism in silicon.
In a typical robotic vision system, cameras capture static frames, and optical flow algorithms calculate motion by tracking changes in pixel brightness from frame to frame. While this method is reliable, it is slow: processing a single frame can take more than half a second. Such a delay matters for an autonomous vehicle traveling at highway speed, where every fraction of a second translates into meters traveled blind.
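As a rough illustration of the conventional pipeline described above, the sketch below estimates dense optical flow between consecutive grayscale frames with OpenCV's Farnebäck method. The camera source and parameter values are illustrative assumptions, not details from the study; the point is that motion is recomputed over the entire frame every time, whether or not anything in it moved.

```python
import cv2
import numpy as np

# Illustrative sketch of conventional frame-based optical flow (not the study's code).
# Motion is estimated from how pixel brightness patterns shift between frames.
cap = cv2.VideoCapture(0)                 # assumed camera index; any video source works
ok, prev = cap.read()
prev_gray = cv2.cvtColor(prev, cv2.COLOR_BGR2GRAY)

while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)

    # Dense Farneback flow: one (dx, dy) vector per pixel, computed over the
    # whole frame regardless of where the motion actually is.
    flow = cv2.calcOpticalFlowFarneback(prev_gray, gray, None,
                                        0.5, 3, 15, 3, 5, 1.2, 0)
    speed = np.linalg.norm(flow, axis=2)  # per-pixel motion magnitude
    print(f"mean motion: {speed.mean():.2f} px/frame")

    prev_gray = gray

cap.release()
```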

The Beihang researchers developed a custom neuromorphic module that detects changes in light intensity over time. This design lets the system identify regions of movement in real time and direct computing resources only to the parts of a scene where change is occurring.
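The paper's hardware is not described in code, but the gating idea can be sketched in software: compute a cheap temporal change map first, and run the expensive motion estimation only inside the region that changed. The threshold value and bounding-box logic below are assumptions for illustration, not the authors' implementation.

```python
import cv2
import numpy as np

def changed_region(prev_gray, gray, threshold=20):
    """Return a bounding box (x, y, w, h) around pixels whose brightness changed
    by more than `threshold`, or None if nothing changed.
    A software stand-in for the LGN-like change filter described above."""
    diff = cv2.absdiff(prev_gray, gray)
    mask = diff > threshold
    if not mask.any():
        return None
    ys, xs = np.nonzero(mask)
    x, y = xs.min(), ys.min()
    return x, y, xs.max() - x + 1, ys.max() - y + 1

def gated_flow(prev_gray, gray):
    """Run dense optical flow only inside the changed region
    (assumed gating scheme for illustration)."""
    box = changed_region(prev_gray, gray)
    if box is None:
        return None                       # no change detected: skip the expensive step
    x, y, w, h = box
    return cv2.calcOpticalFlowFarneback(prev_gray[y:y+h, x:x+w],
                                        gray[y:y+h, x:x+w], None,
                                        0.5, 3, 15, 3, 5, 1.2, 0)
```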
In tests, including simulated driving scenarios and tasks with a robotic arm, the prototype reduced processing delays by about 75 percent and doubled the accuracy of motion tracking during complex maneuvers.
The system still relies on conventional optical flow algorithms for final image interpretation and struggles in visually crowded environments where multiple movements overlap. Nevertheless, its performance is a significant improvement over traditional hardware configurations and points to a future where machines could achieve perceptual speeds that match or even surpass those of humans.
Researchers familiar with the work say the results could expand the range of environments in which robots can operate safely, from public streets to private homes. In home settings, where robots must detect subtle visual cues such as gestures or changing facial expressions, faster visual reaction times could make human-robot interaction feel less mechanical and more natural.
The next challenge for engineers designing autonomous systems will be scaling neuromorphic hardware and integrating it into existing AI systems without sacrificing speed or accuracy. If successful, biologically inspired vision systems could redefine not only what robots see, but also how quickly they understand the moving world.