There was a time when computers and the human brain inhabited parallel universes, unable to truly talk to each other. The former, highly efficient at calculation but blind to the complexity of the real world; the latter, an evolutionary miracle at interpreting visual chaos but limited in raw computing power. Today this gap is closing, thanks to the neuromorphic chip developed at RMIT University. This microscopic device does more than just see: it interprets and memorizes, much as our brains do. Without relying on external computers, it detects movement, creates visual memories and processes information with an energy efficiency that puts traditional digital systems to shame.
At the heart of this neuromorphic chip revolution is molybdenum disulfide, a material so thin it is essentially two-dimensional, yet capable of behaving like the neurons in our brain.
The structure of the artificial brain
The device takes a radically different approach from traditional artificial vision systems. Instead of capturing and analyzing every single frame (which requires enormous computational resources), the neuromorphic chip detects only significant changes in the surrounding environment, a process known as “edge detection.”
As Professor Sumeet Walia, director of the Centre for Optoelectronic Materials and Sensors (COMAS) at RMIT, explains:
“This test device mimics the human eye’s ability to capture light and the brain’s ability to process visual information.”
This allows the device to perceive changes in its environment instantly and to form memories without consuming huge amounts of data and energy.
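To make the contrast with frame-by-frame processing concrete, here is a minimal software sketch of change detection. It is a loose analogy only, not the RMIT chip's actual mechanism (which happens in the device physics): instead of analyzing every pixel of every frame, only the handful of pixels that actually changed are passed downstream.

```python
import numpy as np

def detect_changes(prev_frame: np.ndarray, frame: np.ndarray, threshold: float = 0.1):
    """Return coordinates of pixels whose brightness changed significantly.

    A conventional pipeline processes every pixel of every frame; here only
    the (usually sparse) changed pixels are reported, which is the
    event-driven idea the chip realizes directly in hardware.
    """
    diff = np.abs(frame.astype(float) - prev_frame.astype(float))
    return np.argwhere(diff > threshold)   # sparse list of "events"

# Toy usage: a 64x64 scene where only a small region changes.
rng = np.random.default_rng(0)
prev_frame = rng.random((64, 64))
frame = prev_frame.copy()
frame[30:34, 30:34] += 0.5                 # a small moving object

events = detect_changes(prev_frame, frame)
print(f"{len(events)} changed pixels out of {frame.size}")   # 16 out of 4096
```

In this toy case, downstream processing touches 16 values instead of 4,096, which is where the data and energy savings come from.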
The technology is based on spiking neural networks (SNNs), which work like real neurons by activating through discrete signals, or “spikes.” At the heart of the device is molybdenum disulfide (MoS₂): a compound of molybdenum and sulfur with atomic-scale defects that can detect light and convert it into electrical signals, much as neurons do in the human brain.
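The “integrate-and-fire” behavior these networks rely on can be illustrated with a simplified software model. This is only a sketch: in the real device the integration and firing emerge from the physics of the MoS₂ layer, not from code. The neuron accumulates incoming signal, emits a spike once a threshold is crossed, and then resets.

```python
def integrate_and_fire(inputs, threshold=1.0, leak=0.9):
    """Simplified leaky integrate-and-fire neuron.

    Accumulates input with a small leak each step and emits a spike (1)
    whenever the membrane potential crosses the threshold, then resets.
    """
    potential = 0.0
    spikes = []
    for x in inputs:
        potential = leak * potential + x    # integrate with leak
        if potential >= threshold:
            spikes.append(1)                # fire
            potential = 0.0                 # reset after the spike
        else:
            spikes.append(0)
    return spikes

# A weak input never reaches threshold; a stronger, sustained input
# produces periodic spikes.
print(integrate_and_fire([0.1] * 10))   # no spikes
print(integrate_and_fire([0.4] * 10))   # spikes every few steps
```

The key point is that output only occurs when enough input has accumulated, so a quiet scene produces almost no activity at all.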
Revolutionary applications
The implications of this technology are vast and disruptive. In autonomous vehicles, vision systems with neuromorphic chips could detect changes in the scene almost instantly, enabling immediate reactions that could save lives.
For advanced robotics, this technology offers the possibility of more natural interactions. “For robots that work closely with humans in manufacturing or as personal assistants, neuromorphic technology could enable more natural interactions,” says Professor Akram Al-Hourani, deputy director of COMAS and co-author of the study published in Advanced Materials Technologies.

The team is now scaling up the single-pixel prototype to a larger array of MoS₂-based devices, thanks to funding from the Australian Research Council. As reported by Australian Manufacturing, doctoral student Thiha Aung, first author of the study, demonstrated that atomically thin MoS₂ can precisely replicate the behavior of an “integrate-and-fire” neuron, a fundamental element of spiking neural networks.
The Future of Vision and Neuromorphic Chips
“Although our system mimics some aspects of the brain’s neural processing, particularly in vision, it is still a simplified model,” Walia admits. The team sees their work as a complement to traditional computing, not a replacement. Conventional systems excel at many tasks, while neuromorphic technology offers significant advantages in visual processing where energy efficiency and real-time operation are key.
Researchers are also exploring materials other than MoS₂ that could extend the technology into the infrared, opening up new possibilities for monitoring global emissions and intelligently detecting contaminants such as toxic gases, pathogens, and chemicals.
One day soon, these more efficient and effective computer vision systems could make traditional digital vision technologies obsolete. And, as we have suggested, this is only the beginning: a new era in which artificial intelligence autonomously designs its own neural chips, leading us towards a creative symbiosis between the human mind and artificial intelligence.