A Brain Model Learns To Drive – Neuroscience News

Researchers at the Institute of Biophysics of the National Research Council (IBF-CNR) in Palermo, Italy, have used their knowledge of the neuronal architecture and connections in the brain’s hippocampus to create a robotic platform that can learn from its experiences as it moves through an environment, just like humans do.

Connected to the car-like virtual robot, the simulated hippocampus modifies its synaptic connections as the robot moves. Notably, this means the system needs to reach a given destination only once before it can recall the route.

This is a considerable advance over existing autonomous navigation techniques, which depend on deep learning and must evaluate thousands of potential routes.

Michele Migliore and Simone Coppolino of the IBF-CNR have made a breakthrough: it is the first time the function and design of the hippocampus, including single neurons and their interconnections, have been replicated. Their findings were published in the journal Neural Networks. This is significant because the hippocampus plays an important role in the brain's working memory.

Drawing from the examples provided by biology, the team devised a new set of procedures for navigation that differed from those employed by deep learning-based systems. This was accomplished by using fundamental components and characteristics described in the literature, such as neurons that encode for objects, specific connections and synaptic plasticity.

A deep-learning system identifies the most cost-effective route from one place to another by running numerous simulations and assessing their costs. Although conceptually simple, this technique has taken many years of research to reduce the computation it requires; the Human Brain Project has been a major contributor to that effort.
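The cost-based planning described above can be sketched in a few lines. This is a minimal, illustrative toy (not the project's actual code): enumerate every loop-free route on a small hypothetical map, score each by its edge costs, and keep the cheapest.

```python
# Hypothetical map: nodes A-D with directed edges and per-edge costs.
GRAPH = {"A": ["B", "C"], "B": ["D"], "C": ["D"], "D": []}
COSTS = {("A", "B"): 1, ("B", "D"): 5, ("A", "C"): 2, ("C", "D"): 1}

def all_routes(graph, node, goal, path=()):
    """Yield every simple (loop-free) route from node to goal."""
    path = path + (node,)
    if node == goal:
        yield path
    for nxt in graph[node]:
        if nxt not in path:
            yield from all_routes(graph, nxt, goal, path)

def cheapest_route(graph, costs, start, goal):
    """Assign each candidate route a cost and select the least expensive."""
    return min(all_routes(graph, start, goal),
               key=lambda r: sum(costs[edge] for edge in zip(r, r[1:])))
```

Even on this four-node map the planner must enumerate routes before choosing; on realistic maps that enumeration is what makes the deep-learning approach computationally expensive.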

The pair says:

“Our system, on the contrary, bases its calculation on what it can actively see through its camera,”

“When navigating a T-shaped corridor, it checks for the relative position of key landmarks (in this case, coloured cubes). It moves randomly at first, but once it is able to reach its destination, it reconstructs a map rearranging the neurons into its simulated hippocampus and assigning them to the landmarks. It only needs to go through training once to be able to remember how to get to the destination.”
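The landmark-based strategy the researchers describe can be illustrated with a toy simulation. Everything here is an assumption for illustration (the corridor layout, landmark names, and helper functions are hypothetical, not the IBF-CNR code): the agent wanders randomly, and once it reaches the goal it keeps only the loop-free path, pairing each step with its landmark, which stands in for assigning simulated neurons to landmarks.

```python
import random

# Hypothetical T-corridor: each position shows a coloured-cube landmark
# and offers moves to neighbouring positions.
CORRIDOR = {
    "start":    {"landmark": "red",    "next": ["junction"]},
    "junction": {"landmark": "green",  "next": ["left_end", "goal"]},
    "left_end": {"landmark": "blue",   "next": ["junction"]},
    "goal":     {"landmark": "yellow", "next": []},
}

def explore_once(corridor, start, goal, seed=0):
    """Single training run: wander randomly until the goal is reached,
    then return the loop-free path with its landmark at each step."""
    rng, pos, path = random.Random(seed), start, [start]
    while pos != goal:
        pos = rng.choice(corridor[pos]["next"])
        if pos in path:                       # drop loops from the remembered route
            path = path[: path.index(pos) + 1]
        else:
            path.append(pos)
    return [(p, corridor[p]["landmark"]) for p in path]

def navigate(route):
    """After one training run, replay the remembered landmark sequence."""
    return [landmark for _, landmark in route]
```

A single call to `explore_once` is enough: later navigation just replays the stored landmark sequence, mirroring the one-trial learning quoted above.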

This resembles the behavior of humans and animals: when you visit a museum, your first instinct is to explore and wander around, but if you later need to find your way back to a particular display, you can easily recall each step you took.

For comparison, a deep-learning system was implemented through EBRAINS, a digital research infrastructure, which also allowed the researchers to build and test a physical robot in a real corridor with the robotic platform and hippocampal simulation. The deep-learning system calculates possible paths on a map, assigns them costs, and selects the least expensive path to its destination.

They went on to say:

“Object recognition was based on visual input through the robot’s camera, but it could in theory be calibrated on sound, smell or movement: the important part is the biologically inspired set of rules for navigation, which can be easily adapted to multiple environments and inputs.”

Giuseppe Giacopelli from Migliore’s laboratory is developing the system to adapt it for industrial purposes, enabling it to recognize particular figures.

Migliore says:

“A robot working in a warehouse could calibrate itself and be able to remember the position of shelves in just a few hours,”

“Another possibility is helping the visually impaired, memorizing a domestic environment and acting as a robotic guide dog.”

Mammals find navigating a complex environment to be an uncomplicated task: it generally does not take them long to learn how to get out of a maze by following various cues. An artificial intelligence approach based on the hippocampal circuitry can help explain this process.

In most cases, just one or two visits to a novel environment are enough for an individual to learn how to get out of any part of the maze. This starkly contrasts with the difficulty deep-learning algorithms face when learning a route through a series of items.

Current AI techniques cannot yet imitate how the human brain performs such tasks: teaching an AI system a lengthy sequence of objects and commands would require an excessively long training period.

Previously, we developed a proof-of-principle model, SLT (Single Learning Trial), which illustrates how hippocampal circuitry can learn an identifiable sequence of objects in only one trial.

In this study, we build upon the existing model and dub the new version e-SLT. It enables single-trial navigation of a classic four-arm maze, allowing the learner to identify and steer clear of dead ends while finding the exit.

Under the conditions presented, the e-SLT network, encoding locations, head direction, and objects, can effectively perform a basic cognitive task. These findings shed light on the possible circuitry and operation of the hippocampus and could form the basis of novel artificial intelligence algorithms for spatial navigation.
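The single-trial maze behavior described above can be sketched as a toy simulation. This is an illustrative sketch only, not the e-SLT model itself: one random exploration labels every visited arm after a single visit, and later trials simply recall the stored exit.

```python
import random

def single_trial(arms, exit_arm, seed=0):
    """One exploration of a four-arm maze: try arms in random order and
    label each arm after a single visit (dead end or exit)."""
    rng = random.Random(seed)
    unexplored = list(arms)
    memory = {}
    while unexplored:
        arm = unexplored.pop(rng.randrange(len(unexplored)))
        if arm == exit_arm:
            memory[arm] = "exit"
            break                        # goal reached; training is over
        memory[arm] = "dead_end"         # one visit suffices to remember it
    return memory

def recall_exit(memory):
    """On any later trial, avoid known dead ends and go straight to the exit."""
    return next(arm for arm, label in memory.items() if label == "exit")
```

After the single trial, `recall_exit` reaches the goal directly, with no further training, which is the contrast with deep-learning systems the article draws.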

Source: Neuroscience News
