Mind-reading with a brain scan
Brain activity can be decoded using magnetic resonance imaging.
Scientists have developed a way of ‘decoding’ someone’s brain activity to determine what they are looking at.
“The problem is analogous to the classic ‘pick a card, any card’ magic trick,” says Jack Gallant, a neuroscientist at the University of California in Berkeley, who led the study. But while a magician uses a ploy to pretend to ‘read the mind’ of the subject staring at a card, now researchers can do it for real using brain-scanning instruments. “When the deck of cards, or photographs, has about 120 images, we can do better than 90% correct,” says Gallant.
The technique is a step towards being able to see the contents of someone’s visual experiences. “You can imagine using this for dream analysis, or psychotherapy,” says Gallant. Already the results are helping to provide neuroscientists with a more accurate model of how the human visual system works.
If the work can be broadened to develop more general models of how the brain responds to stimuli other than images, such brain scans could help to diagnose disease or monitor the effects of therapy.
Predicting responses
There have been previous efforts at brain-reading using functional magnetic resonance imaging (fMRI), but these have been quite limited. In most such attempts, volunteers’ brain responses were first recorded while they looked at a small, fixed set of pictures; the scans could then be used to determine which picture from that set a person was looking at. This works only when there is a limited number of simple pictures, and when the subject’s response to each of them is already known.
In the new report, Gallant and his team instead used fMRI to build a model of a subject’s brain responses to various types of pictures, and then used that model to predict responses to novel images1.
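The report does not describe the model itself, so the sketch below is only an illustration of the general idea: fit one linear ‘encoding model’ per voxel that maps image features (for example, the outputs of simple visual filters) to that voxel’s measured response, so that responses to pictures the subject has never seen can be predicted. The function names, the choice of ridge regression and the feature representation are assumptions for illustration, not details taken from the study.

```python
import numpy as np

def fit_encoding_models(features, responses, alpha=1.0):
    """Fit a linear encoding model for every voxel at once, via ridge regression.

    features:  (n_images, n_features) image features for the training pictures
    responses: (n_images, n_voxels)   measured fMRI responses to those pictures
    Returns W: (n_features, n_voxels) weights such that features @ W
    approximates the responses.
    """
    n_features = features.shape[1]
    gram = features.T @ features + alpha * np.eye(n_features)  # X'X + alpha*I
    return np.linalg.solve(gram, features.T @ responses)       # ridge solution

def predict_responses(features, weights):
    """Predict voxel responses for images the model has never seen."""
    return features @ weights
```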
“It’s definitely a leap forward,” says John-Dylan Haynes of the Max Planck Institute for Human Cognitive and Brain Sciences in Leipzig, Germany, who also works on decoding the brain’s activity. “Now you can use a more abstract way of decoding the images that people are seeing.”
In the experiment, the brain activity of two subjects (two of Gallant’s team members, Kendrick Kay and Thomas Naselaris) was monitored while they were shown 1,750 different pictures. The team then selected 120 novel images that the subjects hadn’t seen before, and used the previous results to predict their brain responses. When the test subjects were shown one of the images, the team could match the actual brain response to their predictions to accurately pick out which of the pictures they had been shown. With one of the participants they were correct 72% of the time, and with the other 92% of the time; by chance alone they would have been right only 0.8% of the time.
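The article does not spell out how the ‘matching’ works; one plausible reading, continuing the hypothetical sketch above, is that a response is predicted for each of the 120 candidate images and the candidate whose prediction is most similar to the measured response (here, by correlation) is chosen. The actual study may use a different similarity measure or an explicit noise model; this is only an illustration, but it makes the quoted chance level explicit: guessing among 120 candidates succeeds 1/120 of the time, roughly 0.8%.

```python
import numpy as np

def identify_image(measured, predicted_set):
    """Guess which candidate image the subject was viewing.

    measured:      (n_voxels,)              actual fMRI response to the shown image
    predicted_set: (n_candidates, n_voxels) model-predicted response per candidate
    Returns the index of the candidate whose predicted response correlates
    best with the measurement.
    """
    scores = [np.corrcoef(measured, pred)[0, 1] for pred in predicted_set]
    return int(np.argmax(scores))

# Chance level with 120 candidates: 1/120, i.e. about 0.8%.
print(f"chance level: {100 / 120:.1f}%")
```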
Complex response
The next step is to interpret what a person is seeing without having to select from a set of known images. “That is in principle a much harder problem,” says Gallant. You’d need a very good model of the brain, a better measure of brain activity than fMRI, and a better understanding of how the brain processes things like shapes and colours seen in complex everyday images, he says. “And we don’t really have any of those three things at this time.”
Previous attempts have simply modelled the brain’s response to simple geometric shapes, says Gallant. It’s much harder to understand the brain’s response to more complex, realistic images.
A decoding device that can read out the brain’s activity could be used in medicine to assess the results of a stroke or the effect of a particular drug treatment, or to help diagnose conditions such as dementia, by seeing how the function of the brain changes as a result of illness or intervention.
Creating a model of how the brain responds to various stimuli might also be useful for studying other types of neural processing. “It’s interesting to see how this could extend,” says Haynes, who showed last year that it is possible to predict which of two sums a person was computing in their head2. But it will be a long time yet before it applies to his own work, he says, because “we don’t have a good enough model for intentions”.
Comments
Nice technology! Once fine-tuned, we can at last get along well with criminals, salesmen, lawyers, politicians, and husbands/wives. When will its prototype become available in the market? - Caezar AE. Arceo
It could be outstanding and ground-breaking if it could detect the thought in almost every case. But the modelling must be utterly complicated, as our brain responds in a multidimensional way. So, to model all the aspects in an instant is a challenge. To develop the model by taking almost all the predictable images, and then building a model from them, is a formidable piece of work.
A very interesting technology, but a bit scary too. Imagine that someone else can read your mind and know what you are thinking! It'll be very useful in some cases, but it should be used with caution, and we should not believe in it 100%.
Several years ago, in 1998-1999, I saw an article about using SQUID sensor array systems for technology like this, but measuring the brain's natural magnetic fields. I think that if you apply strong magnetic fields to the brain, there will be an effect: the H2O dipoles in the brain will interact with each other differently, so the neurons will have interconnections different from those we would expect in a "free" brain. That's why, if we want relevant information, we should use methods with minimal influence on the brain. Strong magnetic fields can produce higher-resolution images than a SQUID array can, but why should we use distorted information? If we used a natural-magnetic-field tomograph with an increased SQUID array density and more powerful computers and software, we would get images of comparable resolution. I think such devices are installed in some Finnish clinics and universities. In any case, we could then test some hypotheses about quantum calculations in the brain: with increased resolution we would see a different macro-image of the brain's magnetic fields, and we could observe the collapse of the probability function of one or another brain state. This test would be valid if we summed the signals from the elementary detectors (SQUIDs) with a switch and made some other important modifications to the physical level of the tomograph's probe device. I think the goal would be worth it!
In "Until the End of the World" (Wim Wenders 1991) a man has invented a device which allows you to record your dreams and vision and he has invented a special camera that will enable the blind to see. A very interesting film...
This may help us tomorrow in investigating crime.
The work of Kay KN et al “Identifying natural images from human brain activity,” Nature. 2008 Mar 5, is an interesting confirmatory study, building upon prior breakthrough concepts and enabling research by DH Marks et al (Multidimensional Representation of Concepts as Cognitive Engrams in the Human Brain. The Internet Journal of Neurology [peer-reviewed serial on the Internet]. 2007. Volume 6, Number 1). This conceptual work of DH Marks 2007 envisioned a veritable Rosetta Stone, allowing two-way movement between actual imaging data and a database of activation maps from neuroimaging studies. A wide range of faces, objects, places and concepts have unique activation map correlates, which are termed Cognitive Engrams (www.Cognitive-Eng.org). The presence of specific Cognitive Engrams within neuroimaging data should allow the identification of the actual thought which led to that brain activation – a form of applied mind reading.
"In the new report, Gallant and his team instead (sic) fMRI to model a subject’s brain responses to various types of pictures, and used this to predict responses to novel images" I presume the word 'used' was meant but even that doesn't make too much sense. fMRI merely yields pictures. It cannot predict. An algorithm by which pictures are processed in a computer might yield predictions. "In the experiment, the brain activity of two subjects ... was monitored while they were shown 1,750 different pictures. The team then selected 120 novel images that the subjects hadn’t seen before, and used the previous results to predict their brain responses." I would have appreciated some words on the key issue: HOW did they use "the previous results to predict .. brain responses" to novel images? How did they combine the brain activity pattern from cat-seeing with the brain activity pattern from bicycle-seeing to deduce what the brain activity for, say, cat-on-bicycle-seeing would be? Surely they did not just add them! What principle motivated the interpretation of brain scan activity so as to produce 'prediction'?
I find this piece of news pretty interesting and I am waiting for it to be published. Anyway, I think that it won't be too easy to predict one's own imagination without deepening our knowledge of which neural processes subserve visual imagery. I should note that, as far as I know, vision and visual imagery (that is, 'seeing with the mind's eye') share some but not all of the same neural pathways and physiological correlates. For this reason it won't be possible to directly extend the findings from the present work to the prediction of visual imagery. Furthermore, picture decontextualization can only be considered a first, and obviously necessary, attempt to predict object perception. We all know that the scenes we see every day in our lives include many complex objects of several different shapes, far more complex than a cat, a telephone, a bowl of fruit or a bicycle can be. I can also say that decontextualization may help to predict the shape of an object, but not the object itself with its features, colours and emotional valence (that is, what it means or how it feels to me). Many objects share the same shape but hold different content: two books may probably share the same shape but have nothing in common as regards their content. Moreover, one of them could be my family's photo album. I think we should treat the emotional aspects of object perception as a fundamental characteristic of object representation in our mind. Finally, I would like to point out that many of the scenes we imagine or see, and many of the dreams we dream, include motion and action. This could have additional implications for future studies attempting to predict object vision.
This will be a very useful technology, especially for functional neurosurgery when we are trying to remove a glioma that is very close to an eloquent area of the brain. Recently, awake surgery has become a very useful technique for dissecting glioma. But it would be very useful if someday we could combine awake neurosurgery with mind-reading by brain-scan technology.
(A Nature News report)