Researchers have achieved something that sounds like it belongs in a science fiction novel: training an A.I. system to reconstruct pictures of objects people are looking at from their brain scans. The A.I. accurately generated images of a teddy bear, a clock tower, and an airplane after participants viewed similar pictures.
Iris Groen, a neuroscientist from the University of Amsterdam who was not associated with the study, spoke to Science’s Kamal Nahas.
Iris Groen says:
“The accuracy of this new method is impressive,”
Researchers believe refining this brain-scan-to-image A.I. technology could lead to various beneficial applications in the future, ranging from helping people with paralysis communicate to interpreting dreams or understanding how other species experience the world. Though still far from being ready for public use, this concept may one day prove useful for gaining insights into the inner workings of people’s minds.
Researchers from Osaka University in Japan adapted the text-to-image generator Stable Diffusion, which debuted in August 2022, to interpret brain scans. Their model is comparatively simple, requiring only thousands of trained parameters rather than the millions typically learned during training.
The team shared more details in an unpublished paper posted on the preprint server bioRxiv, and they intend to present their findings at an upcoming computer vision conference, as reported by Science.
Typically, when using Stable Diffusion or other A.I. tools such as DALL-E 2 and Midjourney, a user enters a word or phrase that the software turns into an image. This works because the model has been trained to recognize patterns across many existing images and their accompanying text captions; with that knowledge, it can generate new images that match the prompt.
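For a sense of what that ordinary workflow looks like, here is a minimal sketch using the open-source Hugging Face diffusers library; the checkpoint name and prompt are illustrative choices, not details from the study.

```python
# A minimal sketch of ordinary text-to-image generation with Stable Diffusion,
# via the open-source Hugging Face diffusers library. The checkpoint and prompt
# are illustrative; this is not the researchers' code.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",  # a publicly released checkpoint
    torch_dtype=torch.float16,
).to("cuda")

# The prompt is matched against patterns learned from image-caption pairs.
image = pipe("a photo of an airplane against a blue sky").images[0]
image.save("airplane.png")
```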
The researchers went beyond this standard training by teaching an A.I. model to connect functional magnetic resonance imaging (fMRI) data with images. To do this, they used fMRI scans from four individuals who had each viewed 10,000 pictures of people, scenes, and objects as part of a separate study. A second A.I. model was trained to associate the brain activity in the fMRI data with text descriptions of the pictures the participants saw.
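Since the paper describes these as simple models with relatively few parameters, one plausible reading is a linear mapping from voxel activations to latent vectors. Below is a hedged sketch of that idea using ridge regression from scikit-learn; all file names, array shapes, and variable names are hypothetical.

```python
# A hedged sketch of the kind of simple mapping the paper describes: linear
# (ridge) regression from fMRI voxel activations to an image-latent vector and
# to a text-embedding vector. File names and shapes are hypothetical.
import numpy as np
from sklearn.linear_model import Ridge

X_voxels = np.load("fmri_voxels.npy")            # (n_images, n_voxels)
Z_image_latents = np.load("image_latents.npy")   # (n_images, latent_dim)
C_text_embeds = np.load("text_embeddings.npy")   # (n_images, embed_dim)

# One linear model predicts the image latent (layout and perspective); a second
# predicts the text embedding (what object is being viewed).
latent_decoder = Ridge(alpha=1.0).fit(X_voxels, Z_image_latents)
text_decoder = Ridge(alpha=1.0).fit(X_voxels, C_text_embeds)

# Given a new scan, decode both representations from brain activity alone.
new_scan = X_voxels[:1]
z_pred = latent_decoder.predict(new_scan)   # fed to the image side of the pipeline
c_pred = text_decoder.predict(new_scan)     # fed to the text side of the pipeline
```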
Working with Stable Diffusion, the two models could convert fMRI data into imitations of images that were not in the A.I.'s training set. The first model reproduced the perspective and layout of what the participant had seen, but its generated pictures were cloudy and lacked detail. Then the second model came into play: drawing on the text descriptions from the training images, it could recognize which object a person was viewing.
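One way to picture how the two stages might fit together, assuming (the article does not specify this) that the refinement step behaves like diffusers' image-to-image pipeline, is sketched below; the blurry layout file and decoded caption are hypothetical stand-ins for the first and second models' outputs.

```python
# A hedged illustration of combining the two stages with diffusers'
# image-to-image pipeline: a blurry layout decoded from the brain scan is
# refined under a caption also decoded from the scan. This approximates the
# workflow the article describes; it is not the authors' implementation.
import torch
from diffusers import StableDiffusionImg2ImgPipeline
from PIL import Image

pipe = StableDiffusionImg2ImgPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

blurry_layout = Image.open("decoded_layout.png").convert("RGB")  # stage-one output (hypothetical)
decoded_caption = "an airplane"                                  # stage-two text (hypothetical)

# strength controls how far the model may drift from the blurry starting image.
result = pipe(prompt=decoded_caption, image=blurry_layout, strength=0.75).images[0]
result.save("reconstruction.png")
```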
For example, if the tool received a brain scan resembling one from its training that was recorded while a participant looked at an airplane, it would insert an airplane into the generated image, following the first model's perspective, with roughly 80 percent accuracy.
The recreated images bear an uncanny resemblance to the originals, though with some obvious differences. For instance, the A.I.-made version of a locomotive is engulfed in a murky gray fog rather than set against its original cheerful blue sky, while the A.I.'s clock tower looks more like an abstract painting than a photograph.
The technology has potential, but it still has notable downsides. It can only generate visuals of the kinds of items included in its training data. And because the A.I. was trained on brain scans from just four people, extending it to others would require training the model on each new person's fMRI scans, an expensive and lengthy endeavor. Consequently, this technology probably won't be readily accessible to the public in its present form.
Carissa Wong of New Scientist spoke with Sikun Lin, a computer scientist from the University of California Santa Barbara who had no connection to the project.
Sikun Lin says:
“This is not practical for daily use at all,”
A.I. technologies raise broader worries: whether they detract from human creativity or infringe copyright, whether law enforcement will apply them unfairly, and whether they will spread false information or compromise privacy. As scientists continue to find new uses for A.I., engineers and ethicists will need to keep debating these questions for the foreseeable future.
Demis Hassabis, CEO of the A.I. research laboratory DeepMind, spoke to Time magazine’s Billy Perrigo in 2023.
Demis Hassabis says:
“When it comes to very powerful technologies—and obviously A.I. is going to be one of the most powerful ever—we need to be careful,”
“It’s like experimentalists, many of whom don’t realize they’re holding dangerous material.”
The development of this technology is an exciting step forward in neuroscience and A.I. As researchers continue to explore its potential and push the boundaries of what is possible, it will be important to approach the technology with care and thoughtfulness, ensuring it is used in ways that benefit everyone.
Source: Smithsonian Magazine