Friday, September 23, 2011

Mind-Reading Tech Reconstructs Videos From Brain Images



[Image: mind-reading video reconstruction, Jack Gallant]
A year and a half ago, we published a great feature on the current state of the quest to read the human mind. It included some then-in-progress work from Jack Gallant, a neuroscientist at UC Berkeley, who was attempting to reconstruct a video by reading the brain scans of someone who had watched that video--essentially pulling experiences directly from someone's brain. Now, Gallant and his team have published a paper on the subject in the journal Current Biology.
This is the first taste we've gotten of what the study actually produces. Here's a video of the reconstruction in action:
The reconstruction (on the right, obviously) was, according to Gallant, "obtained using only each subject's brain activity and a library of 18 million seconds of random YouTube video that did not include the movies used as stimuli. Brain activity was sampled every one second, and each one-second section of the viewed movie was reconstructed separately."
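That quoted description maps onto a simple library-based decoding loop: predict how the brain would respond to each clip in the library, find the clips whose predicted responses best match the measured activity for a given second, and blend them into a reconstruction. The sketch below is only a toy illustration of that idea, assuming a linear encoding model, made-up array sizes, and a top-K frame-averaging step; none of these details are taken from the paper itself.

```python
# Hypothetical sketch of library-based neural decoding: for each one-second
# window of measured brain activity, score every clip in a video library by how
# well an (illustrative, randomly initialized) encoding model predicts that
# activity, then average the best-matching clips into a reconstructed frame.
import numpy as np

rng = np.random.default_rng(0)

N_VOXELS = 500         # number of recorded voxels (assumed)
N_FEATURES = 64        # per-clip visual features (assumed)
LIBRARY_SIZE = 10_000  # stand-in for the 18-million-second YouTube library
FRAME_SHAPE = (32, 32)
TOP_K = 100            # number of best-matching clips averaged per second

# Hypothetical library: a feature vector and a representative frame per clip.
library_features = rng.normal(size=(LIBRARY_SIZE, N_FEATURES))
library_frames = rng.random(size=(LIBRARY_SIZE, *FRAME_SHAPE))

# Hypothetical linear encoding model mapping clip features -> voxel responses.
# In practice this would be fit to training data; here it is random.
encoding_weights = rng.normal(size=(N_FEATURES, N_VOXELS))
predicted_responses = library_features @ encoding_weights  # (LIBRARY_SIZE, N_VOXELS)

def reconstruct_second(measured_activity: np.ndarray) -> np.ndarray:
    """Reconstruct one second of video from one second of brain activity.

    Scores each library clip by the correlation between its predicted voxel
    response and the measured response, then averages the top-K frames.
    """
    z_measured = (measured_activity - measured_activity.mean()) / measured_activity.std()
    z_predicted = (predicted_responses - predicted_responses.mean(axis=1, keepdims=True)) \
        / predicted_responses.std(axis=1, keepdims=True)
    scores = z_predicted @ z_measured / N_VOXELS  # one score per library clip

    best = np.argsort(scores)[-TOP_K:]        # indices of best-matching clips
    return library_frames[best].mean(axis=0)  # blend them into one frame

# Example: decode 3 seconds of (simulated) brain activity, one second at a time.
brain_activity = rng.normal(size=(3, N_VOXELS))
reconstruction = np.stack([reconstruct_second(sec) for sec in brain_activity])
print(reconstruction.shape)  # (3, 32, 32): one reconstructed frame per second
```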
Don't forget to check out our original feature on this work for some more background into what the researchers would really prefer we call "neural decoding" rather than "mind-reading."
