Japanese Researchers Develop An Artificial Intelligence Model That Can Read Brain Activity From MRI Images
Using the Stable Diffusion AI model developed in Germany, Japanese researchers have developed an artificial intelligence model that can read brain activity from MRI images. The research raised concerns about the risks posed by Artificial Intelligence in general.
REPRODUCED IMAGES CLOSE TO THE ORIGINALS
Neuroscientist Yu Takagi of Osaka University and his team analyzed the brain scans of test subjects who were shown up to 10,000 images while inside an MRI machine, using Stable Diffusion (SD), a deep-learning Artificial Intelligence model released in Germany in 2022. After Takagi and his research partner Shinji Nishimoto built a simple model to translate brain activity into a format Stable Diffusion could read, the system was able to generate high-quality images that closely resembled the originals.
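For readers curious about what "translating brain activity into a format Stable Diffusion could read" might look like in practice, here is a minimal, hypothetical sketch. It assumes fMRI responses have been flattened into feature vectors and paired with the Stable Diffusion latents of the viewed images; the variable names, array shapes, and the use of ridge regression are illustrative assumptions, not the researchers' published code.

```python
# Hypothetical sketch: linearly mapping fMRI responses to Stable Diffusion
# image latents. Shapes, names, and the ridge-regression choice are
# illustrative assumptions, not the researchers' actual pipeline.
import numpy as np
from sklearn.linear_model import Ridge

rng = np.random.default_rng(0)

# Stand-in data: one row per viewed image.
#   X: flattened fMRI voxel responses recorded while the subject saw an image
#   Z: the Stable Diffusion latent of that same image (obtained offline by
#      encoding the image with SD's autoencoder)
n_train, n_voxels, latent_dim = 1000, 1000, 4 * 64 * 64
X_train = rng.standard_normal((n_train, n_voxels))
Z_train = rng.standard_normal((n_train, latent_dim))

# Fit a regularized linear decoder from brain activity to latent space.
decoder = Ridge(alpha=100.0)
decoder.fit(X_train, Z_train)

# At test time: predict a latent from a new brain scan, then pass the
# predicted latent to Stable Diffusion's image decoder (not shown here)
# to render a reconstruction of what the subject was looking at.
X_test = rng.standard_normal((1, n_voxels))
z_pred = decoder.predict(X_test).reshape(4, 64, 64)
print(z_pred.shape)
```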
“WE DO NOT HAVE THE ABILITY TO DECODE IMAGININGS OR DREAMS”
The AI was able to do this without being shown the images in advance or being trained specifically to produce these results. “We really didn’t expect this kind of outcome,” Takagi said. Takagi emphasized that, despite this important step, the technology does not amount to mind reading at this stage: the AI can only reproduce images a person has actually seen. “This is not mind reading. Unfortunately, there are many misconceptions about our research. We don’t have the ability to decode imaginings or dreams; that would be too optimistic a reading. But, of course, there is potential for the future,” Takagi said.
“STOP” CALL FROM TECHNOLOGY GIANTS
However, as part of the broader debate about how this type of technology could be used in the future, the development has heightened concerns about the risks posed by AI in general. Last month, tech leaders, including Tesla chief Elon Musk and Apple co-founder Steve Wozniak, called for a pause on the development of advanced AI systems, citing “profound risks to society and humanity.”
CAN BE MISUSED
Despite his excitement, Takagi concedes that concerns about mind-reading technology are justified, given the possibility that it could be misused or applied without consent by malicious actors. “For us, privacy issues are paramount. If a government or agency can read people’s minds, it’s a very sensitive issue,” Takagi said. The scientist concluded, “There must be high-level discussions so that this does not happen.”
GENERATED EXCITEMENT IN THE WORLD OF TECHNOLOGY
Takagi and Nishimoto’s research has drawn great interest in the tech community, where excitement has surged amid rapid advances in Artificial Intelligence, including the release of ChatGPT, which produces human-like text in response to a user’s prompts.