Most of us have seen the sci-fi films where a leader of the resistance is strapped into a brain-scanning machine. However hard he fights it, he cannot hold out. Eventually the oppressors learn where the enemy base is, and the jig is up. This scenario is great for cranking up the tension in a story and moving a plot along. In real life, however, we are quick to dismiss such devices.
Can We Really Read Minds?
But just how close are we, really, to an actual mind-reading machine? Believe it or not, researchers are getting very close. They have started to decipher brain patterns in the visual cortex by pairing an fMRI machine with advanced artificial intelligence.
Scientists at Purdue University have recently used an advanced form of A.I. to “read” the minds of humans. How in the world are they doing it? To begin with, three people were placed in an fMRI machine. Inside, they watched videos depicting people, animals, and natural scenes while their brains were being scanned. The scientists gathered about 11.5 hours of fMRI data, which was then used to train a deep learning program.
Forecasting the Thoughts of a Person
After that, the A.I. learned to forecast the brain activity that would occur in a subject’s visual cortex for each scene depicted. Over time, the deep learning program decoded all of the fMRI data and categorized every image signature. This allowed the scientists to pinpoint the regions of the brain responsible for each visual. Finally, the program could tell exactly which object a person was viewing from the brain scan data alone.
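The decoding step described above, predicting which category of object a person is viewing from a pattern of brain activity, can be sketched as a simple classification problem. The snippet below is only an illustration with simulated data, not the Purdue team’s method: the voxel counts, “signature” patterns, and use of logistic regression are all assumptions made for the sake of a runnable example.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Hypothetical setup: 3 stimulus categories (people, animals, scenes),
# each evoking a distinct "signature" pattern across 200 simulated voxels.
n_voxels, n_trials = 200, 300
signatures = rng.normal(size=(3, n_voxels))
labels = rng.integers(0, 3, size=n_trials)
# Each trial's scan = the viewed category's signature + measurement noise.
scans = signatures[labels] + rng.normal(scale=1.0, size=(n_trials, n_voxels))

X_train, X_test, y_train, y_test = train_test_split(
    scans, labels, test_size=0.3, random_state=0)

# Train a decoder on some trials, then test it on held-out scans.
clf = LogisticRegression(max_iter=1000).fit(X_train, y_train)
accuracy = clf.score(X_test, y_test)
print(f"decoding accuracy: {accuracy:.2f}")
```

With clean simulated signatures the decoder scores far above the 33% chance level, which is the basic logic behind telling “which object was being viewed” from a scan.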
The scientists were then able to have the A.I. program reconstruct the videos without any access to them, from the brain scans of the subjects alone. And it performed this task flawlessly. These discoveries are giving us a much better idea of the brain’s inner workings, and they will also lead to even more advanced versions of the A.I.
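Reconstruction goes one step further than classification: instead of naming a category, the model maps brain activity back to the image itself. A minimal sketch of that idea, again with simulated data and a plain ridge regression standing in for the team’s deep network, looks like this (the forward model, voxel counts, and noise level are all assumptions):

```python
import numpy as np
from sklearn.linear_model import Ridge

rng = np.random.default_rng(1)

# Hypothetical forward model: each of 300 voxels responds to a random
# linear mixture of an 8x8 "image" (64 pixels), plus noise.
n_pixels, n_voxels, n_samples = 64, 300, 500
mixing = rng.normal(size=(n_pixels, n_voxels))
images = rng.normal(size=(n_samples, n_pixels))
scans = images @ mixing + rng.normal(scale=0.5, size=(n_samples, n_voxels))

# Fit the inverse mapping, voxels -> pixels: the "reconstruction" step.
model = Ridge(alpha=1.0).fit(scans[:400], images[:400])
recon = model.predict(scans[400:])

# How closely do reconstructed held-out images match the originals?
corr = np.corrcoef(recon.ravel(), images[400:].ravel())[0, 1]
print(f"reconstruction correlation: {corr:.2f}")
```

The point of the sketch is the direction of the mapping: the model never sees the held-out images, only the scans they evoked, yet its output correlates strongly with them.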
Zhongming Liu, presently an assistant professor in the Weldon School of Biomedical Engineering and the School of Electrical and Computer Engineering at Purdue, has been studying the brain for almost two decades. But his major progress did not come until he began using this A.I. Dr. Liu recently said, “Reconstructing someone’s visual experience is exciting because we’ll be able to see how your brain explains images.”
A doctoral student named Haiguang Wen, the lead researcher, explained the team’s approach in a little more detail. “A scene with a car moving in front of a building is dissected into pieces of information by the brain. One location in the brain may represent the car; another location may represent the building.”
Wen continued, “Using our technique, you may visualize the specific information represented by any brain location, and screen through all the locations in the brain’s visual cortex. By doing that, you can see how the brain divides a visual scene into pieces, and re-assembles the pieces into a full understanding of the visual scene.”
This particular kind of “deep learning” algorithm is called a convolutional neural network, the same technique that powers modern object-recognition software. Before this study, it had been used mainly to isolate brain patterns associated with stationary visual images. This was the first time the technique was applied to videos and natural scenery, bringing scientists even closer to their ultimate goal: evaluating the human brain in a real-time, dynamic situation.
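The core operation that gives a convolutional network its name is simple enough to show in a few lines: a small filter slides over an image, and the output records how strongly each patch matches the filter. The toy example below (a hand-written convolution and a made-up vertical-edge filter, not anything from the study) shows how such a filter picks out where an image changes:

```python
import numpy as np

# The core operation of a convolutional layer: slide a small filter over
# an image and record how strongly each patch matches the filter.
def convolve2d(image, kernel):
    kh, kw = kernel.shape
    out_h = image.shape[0] - kh + 1
    out_w = image.shape[1] - kw + 1
    out = np.empty((out_h, out_w))
    for i in range(out_h):
        for j in range(out_w):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

# Toy image: dark left half, bright right half.
image = np.zeros((6, 6))
image[:, 3:] = 1.0

# A vertical-edge filter: responds where brightness jumps left-to-right.
edge_filter = np.array([[-1.0, 1.0]])

response = convolve2d(image, edge_filter)
print(response)  # nonzero only in the column where the edge sits
```

Stacking many such filters, and learning their values from data, is what lets the network recognize cars, buildings, or faces, whether in a photograph or, as here, in patterns of brain activity.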