[This is a transcript of the video embedded below.]

Human communication works by turning thoughts into movement. Body language, speech, or writing: we use muscles in one way or another to convey the information we want to share. But sometimes it would be really handy if we could communicate directly from our brains, either with each other or with a computer. How far has this technology progressed? How does it work? And what comes next? That’s what we’ll talk about today.

Scientists currently have two ways to find out what’s going on in your brain. One possibility is functional magnetic resonance imaging; the other is the use of electrodes.

Functional magnetic resonance imaging, or fMRI for short, measures blood flow to different regions of the brain. Blood flow is correlated with neural activity, so an fMRI scan will tell you which parts of the brain are active during a given task. I previously made a video about magnetic resonance imaging, so if you want to know how the physics works, check that out.

The problem with fMRI is that people have to lie in a large machine. Not only is this machine expensive to use, an fMRI scan also takes some time, which means that the temporal resolution is not great, typically a few seconds. So fMRI cannot say much about fast and transient processes.

The other way to measure brain activity is electroencephalography, or EEG for short, which measures the tiny currents induced in electrodes placed on the scalp. The advantage of this method is that the temporal resolution is much better. The big disadvantage, however, is that you only have a rough idea of which region of the brain the signal is coming from. A much better option is to place the electrodes directly on the surface of the brain, but that requires surgery.

Elon Musk has the idea that one day people might be ready to have electrodes implanted in their brains, and he has put some money into this with his “Neuralink” project. But getting a research project approved is difficult when it involves drilling holes in other people’s heads. Hence, most of the current studies use fMRI – or people who already have holes in their heads for one reason or another.

Before we talk about the results of recent studies, I would like to briefly thank our Tier 4 supporters on Patreon. Your support is of great help in keeping this channel running. And you too can be part of the story: visit our Patreon page; the link is in the info below.

Then let’s look at what scientists have found out.

Researchers from Carnegie Mellon and other American universities have done a very interesting series of experiments with fMRI. In the first, they put eleven participants in the MRI machine and showed them a word on a screen. The participants were asked to think about the concept behind a noun, such as an apple, a cat, a refrigerator, and so on. Then the researchers gave the brain scans of 10 of these people to artificially intelligent software, along with the words that had prompted them. The AI looked for patterns in the brain activity that correlated with the words, and then guessed what the eleventh person was thinking from the brain scan alone. The program guessed correctly about three-quarters of the time.
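To give a feeling for the leave-one-subject-out idea described here, below is a minimal sketch in Python. The data shapes, the per-word averaging, and the correlation-based matching are assumptions for illustration only; they are not the actual pipeline used in the study.

```python
# Hypothetical sketch: train on 10 subjects' scans, guess the held-out subject's word.
import numpy as np

def decode_word(train_scans, train_words, test_scan):
    """Guess which word a held-out brain scan corresponds to.

    train_scans: array of shape (n_examples, n_voxels) from the training subjects
    train_words: list of n_examples word labels ("apple", "cat", ...)
    test_scan:   array of shape (n_voxels,) from the held-out subject
    """
    words = sorted(set(train_words))
    # Average the training scans for each word to get a per-word "signature".
    signatures = {
        w: np.mean([s for s, lbl in zip(train_scans, train_words) if lbl == w], axis=0)
        for w in words
    }
    # Pick the word whose signature correlates best with the held-out scan.
    def corr(a, b):
        return np.corrcoef(a, b)[0, 1]
    return max(words, key=lambda w: corr(signatures[w], test_scan))

# Toy usage with random data, just to show the call pattern.
rng = np.random.default_rng(0)
train_scans = rng.normal(size=(60, 500))          # 60 scans, 500 voxels each
train_words = ["apple", "cat", "refrigerator"] * 20
print(decode_word(train_scans, train_words, rng.normal(size=500)))
```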

That’s not great, but it is better than chance, so it’s a proof of principle. The researchers also made a very interesting discovery. The study had participants whose first language was either English or Portuguese, but the brain signatures were independent of the language. In fact, the researchers found that in the brain, the concept encoded by a word doesn’t have much to do with the word itself. Instead, the brain encodes the concept by assigning it various attributes. They identified three of these attributes:

1) Food-related. This brain pattern is activated by words like “apple”, “tomato” or “salad”.

2) Protection-related. This pattern is activated, for example, by “house”, “closet” or “screen”. And

3) Body-object interaction. For example, if the concept is “tongs,” the brain activates the part that represents your hand, because that is how you would use tongs.

In this way, the computer can, to a certain extent, predict what the signal of a concept will look like even if it has not yet seen any data for that concept. The researchers checked this by combining different concepts into sentences, such as “The old man threw the stone into the lake”. Out of 240 possible sentences, the computer was able to select the right one in 83 percent of the cases. The computer cannot reconstruct the whole sentence word for word, but it knows its basic components, the semantic elements.
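Here is a minimal sketch of that idea: each concept is represented by a few attribute scores, the predicted pattern for a sentence is a combination of its concepts, and decoding picks the candidate sentence whose predicted pattern is closest to the measured one. The attributes, numbers, and the way concepts are combined are made up for illustration and are not taken from the study.

```python
# Hypothetical attribute scores: (food-related, protection-related, body-object interaction)
import numpy as np

CONCEPTS = {
    "man":   np.array([0.1, 0.3, 0.6]),
    "stone": np.array([0.0, 0.1, 0.8]),
    "lake":  np.array([0.0, 0.5, 0.2]),
    "apple": np.array([0.9, 0.0, 0.4]),
    "house": np.array([0.0, 0.9, 0.1]),
}

def predicted_pattern(sentence_concepts):
    # Predict the pattern for a sentence as a combination of its concepts' attributes.
    return sum(CONCEPTS[c] for c in sentence_concepts)

def pick_sentence(measured, candidates):
    # Choose the candidate whose predicted pattern is closest to the measured one.
    return min(candidates, key=lambda s: np.linalg.norm(predicted_pattern(s) - measured))

candidates = [("man", "stone", "lake"), ("man", "apple", "house")]
measured = predicted_pattern(("man", "stone", "lake")) + 0.05  # slightly noisy "measurement"
print(pick_sentence(measured, candidates))
```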

The basic finding of this experiment, that the brain identifies concepts through a combination of attributes, was confirmed by other experiments. For example, another 2019 study, which also used fMRI, asked participants to think about different animals and found that the brain broadly classifies them by attributes such as size, intelligence, and habitat.

There have also been several attempts over the past decade to work out from brain activity what a person is seeing. In 2017, for example, a team from Kyoto University published a paper in which they used deep learning – again artificial intelligence – to reconstruct what someone sees from their fMRI signal. They trained the software to recognize general aspects of the image, like shapes, contrast, faces, and so on. You can judge the results yourself. Here you see the actual images the test participants looked at, and here the reconstruction by the artificial intelligence. I find it really impressive.

What about speech or text? In April 2019, UCSF researchers published a paper in Nature reporting that they had successfully converted brain activity directly into speech. They worked with epilepsy patients who already had electrodes on the surface of their brains for treatment. What the researchers looked for were the motor signals that correspond to producing the sounds of speech, that is, the movements of the tongue, jaw, lips, and so on. Here, too, they had a computer figure out how to map the brain signals to speech. What you will hear in a moment is first one of the participants reading a sentence, and then the software’s reconstruction of it from the brain activity.

That’s pretty good, isn’t it? Unfortunately, it took weeks to decode the signals to this quality, so it is rather useless in practice. But a new study that came out just a few weeks ago has made a huge leap forward for brain-to-text software by looking not at the movements associated with producing sound, but at the movements that go along with handwriting.

The person they worked with is paralyzed from the neck down and has electrodes implanted in his brain. He was asked to imagine writing the letters of the alphabet, and the software was trained on the resulting signals. Afterwards, the AI was able to reproduce text from brain activity when the subject imagined writing entire sentences, and it could do so in real time. This allowed the paralyzed man to write at a rate of about 90 characters per minute, which is fairly close to the roughly 135 characters per minute that able-bodied people manage when typing text messages. The AI was able to identify characters with an accuracy of over 94 percent, and with autocorrection even up to 99 percent. So, as you can see, research on analyzing these signals has made quite rapid progress in recent years. But the problem for applications of this technology is that fMRI is impractical, EEG is not precise enough, and not everyone wants to connect a USB port to their brain. Are there other possibilities?
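The gap between 94 and 99 percent comes from a second, language-based cleanup step. Below is a rough sketch of that two-stage idea: a classifier guesses each imagined character, and an autocorrection step then fixes the raw character stream against a vocabulary. The classifier here is only a stand-in (the study used a neural network decoder), and the vocabulary and matching rule are simplified assumptions.

```python
# Hypothetical sketch of character decoding followed by vocabulary-based autocorrection.
from difflib import get_close_matches

VOCABULARY = ["hello", "world", "brain", "computer", "interface"]

def autocorrect(raw_word):
    # Replace the raw decoded word with the closest vocabulary entry, if one is close enough.
    matches = get_close_matches(raw_word, VOCABULARY, n=1, cutoff=0.6)
    return matches[0] if matches else raw_word

def decode_sentence(raw_characters):
    # raw_characters: the per-character guesses from the neural decoder,
    # e.g. "hellp wprld" at ~94% character accuracy.
    return " ".join(autocorrect(w) for w in raw_characters.split())

print(decode_sentence("hellp wprld"))  # -> "hello world"
```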

Well, one thing researchers have done is to genetically engineer zebrafish larvae so that their neurons fluoresce when they are active. This allows brain activity to be measured non-invasively. That’s nice, but even if you could do the same to people, the skull would still be in the way, so this doesn’t seem very promising.

A more promising approach, pursued by NASA, is to develop an infrared system for monitoring brain activity. That still requires users to wear sensors around their heads, but it’s non-invasive. And several teams of scientists are trying to monitor brain activity by combining different non-invasive measurements: electrical, ultrasound, and optical. For example, the US military has invested $104 million in the Next-Generation Nonsurgical Neurotechnology program, or N3 for short, which aims at controlling military drones.

We are living in a momentous period of human history: the period in which we leave behind the notion that conscious thought is beyond the reach of science, and can suddenly develop technologies that assist the conversion of thought into action. I find that incredibly interesting. I expect much to happen in this area in the coming years and will keep you updated from time to time.


