AI Tech That Reads Brain Scans And Re-Creates What Is Seen

AI is making remarkable progress in replicating the human capacity to turn what the eyes perceive into mental images, even as scientists are still working to understand this complex process.

A new study, set to be presented at an upcoming computer vision conference, shows that with the aid of AI, brain scans can be analyzed to reconstruct approximations of the images a person has seen.

The technology could lead to a range of applications, from exploring how various animal species perceive the world and potentially recording human dreams to aiding communication in people with paralysis.

Using an AI algorithm called Stable Diffusion, developed by a German group and publicly released in 2022, researchers have for the first time reconstructed images of human faces and landscapes that a subject had recently viewed, working only from scans of the subject's brain.

Like other generative AI systems such as DALL-E 2 and Midjourney, Stable Diffusion is a text-to-image program trained on billions of images paired with text descriptions, allowing it to produce novel images from textual prompts.
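As a rough illustration of what such a text-to-image system does (this is not part of the study itself), the sketch below invokes the publicly released Stable Diffusion weights through the open-source diffusers library; the model identifier, prompt, and settings are only examples.

```python
# Minimal text-to-image sketch using the open-source diffusers library.
# The model ID and prompt are illustrative; a GPU is assumed for speed.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
)
pipe = pipe.to("cuda")

# Turn a textual prompt into a novel image.
image = pipe("a photograph of a mountain landscape at sunset").images[0]
image.save("landscape.png")
```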

The new study, by a Japanese group, used an enhanced version of the Stable Diffusion system: participants viewed thousands of photos along with their associated text descriptions, and the resulting brain activity patterns were recorded during these viewing sessions.

Previous AI algorithms for interpreting brain scans had to be trained on large data sets, but by incorporating photo captions, Stable Diffusion got more out of fewer training samples for each person.
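The paper's exact decoding pipeline is not spelled out here, but the general idea of learning a mapping from brain activity to a text-style representation can be sketched with a simple linear model. All names, array shapes, and the ridge penalty below are hypothetical stand-ins, not the study's actual values.

```python
import numpy as np
from sklearn.linear_model import Ridge

# Hypothetical shapes: 9,000 training scans, 5,000 voxels per scan,
# and a 768-dimensional embedding of each photo's caption.
n_scans, n_voxels, embed_dim = 9000, 5000, 768
rng = np.random.default_rng(0)
fmri = rng.standard_normal((n_scans, n_voxels))                  # voxel activity per scan
caption_embeddings = rng.standard_normal((n_scans, embed_dim))   # target text embeddings

# A regularized linear map from brain activity to the caption-embedding space.
decoder = Ridge(alpha=1000.0)
decoder.fit(fmri, caption_embeddings)

# A new scan can then be projected into the same space and passed to a
# text-conditioned image generator as its conditioning signal.
predicted_embedding = decoder.predict(fmri[:1])
```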

Ariel Goldstein, a cognitive neuroscientist at Princeton University who was not involved with the project, described it as a novel approach that “deciphers the brain” by fusing visual and textual information.

In the experiment, led by neuroscientist Yu Takagi of Osaka University, an AI algorithm integrates data from brain regions involved in image perception, such as the temporal and occipital lobes.

Functional Magnetic Resonance Imaging (fMRI) scans, which measure changes in blood flow to the brain’s active areas, are the information source for this system.

The temporal and occipital lobes register different types of information when people view a photo. The temporal lobe is most active in identifying the elements within the image, such as people and objects, while the occipital lobe processes layout, perspective, and scale.

fMRI captures brain activity by detecting these peaks in blood flow, and with the aid of AI, the data can be translated into a visual image. The technique is also widely used to observe which brain regions become more active during specific tasks.

The researchers gave the Stable Diffusion algorithm additional training using a data set of brain scans from four individuals, each of whom viewed 10,000 pictures. To test the AI system, a portion of these participants' brain scans was held back and not used for training.
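The article does not give the exact split, but the hold-out idea can be illustrated as follows, using entirely synthetic stand-in data; the 10% test fraction and the choice of decoder are assumptions made only for this sketch.

```python
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.model_selection import train_test_split

# Synthetic data for one participant: 10,000 scans and matching image embeddings.
rng = np.random.default_rng(1)
fmri = rng.standard_normal((10_000, 5000))
image_embeddings = rng.standard_normal((10_000, 768))

# Hold back a portion of the scans purely for testing, as the study did.
X_train, X_test, y_train, y_test = train_test_split(
    fmri, image_embeddings, test_size=0.1, random_state=0
)

decoder = Ridge(alpha=1000.0).fit(X_train, y_train)
pred = decoder.predict(X_test)

# One simple check: average correlation between predicted and true embeddings.
corrs = [np.corrcoef(p, t)[0, 1] for p, t in zip(pred, y_test)]
print(f"mean correlation on held-out scans: {np.mean(corrs):.3f}")
```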

Overall, the ability of AI to recreate what people see by reading their brain scans is a fascinating development that holds tremendous promise for the future. As this technology continues to evolve, it will be exciting to see how it is applied in different fields and transforms our understanding of the human brain and how we perceive the world.

Source: Science
