According to the Gizmodo article “Scientists Reconstruct Brains’ Visions Into Digital Video In Historic Experiment” scientists at Berkeley have found a way to record signals from the brain and translate them into images.
The research team, led by Shinji Nishimoto, demonstrated that the pattern of brain activity captured by fMRI while a subject watched a short video could be “translated” back into remarkably similar images. Their compositing technique combined 100 YouTube clips into a single video that most closely matched the pattern of activity in the viewer’s mind.
What is really amazing here, I think, is that these scientists have worked out a way to translate brain signals into images.
As quoted in the Berkeley press release:
“Our natural visual experience is like watching a movie,” said Shinji Nishimoto, lead author of the study and a post-doctoral researcher in Gallant’s lab. “In order for this technology to have wide applicability, we must understand how the brain processes these dynamic visual experiences.” …
“We built a model for each voxel that describes how shape and motion information in the movie is mapped into brain activity,” Nishimoto said. …
Reconstructing movies using brain scans has been challenging because the blood flow signals measured using fMRI change much more slowly than the neural signals that encode dynamic information in movies, researchers said. For this reason, most previous attempts to decode brain activity have focused on static images.
“We addressed this problem by developing a two-stage model that separately describes the underlying neural population and blood flow signals,” Nishimoto said.
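To make the clip-matching idea above concrete, here is a minimal toy sketch of that kind of reconstruction pipeline. Everything in it is an assumption for illustration: the feature vectors, the linear per-voxel encoding weights, and the library size are stand-ins, not the study’s actual model or data. The idea is simply: predict the brain activity each library clip *should* evoke, rank clips by how well those predictions correlate with the observed activity, and average the best matches.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-ins (assumptions, not the study's real data):
# each library "clip" is summarized by a feature vector, and a
# linear encoding model maps features to predicted voxel activity.
n_clips, n_features, n_voxels = 100, 20, 50
clip_features = rng.normal(size=(n_clips, n_features))
encoding_weights = rng.normal(size=(n_features, n_voxels))

# Predicted brain activity for every clip in the library.
predicted = clip_features @ encoding_weights  # shape (n_clips, n_voxels)

# Simulated observed activity while the subject "watched" clip 0:
# the true clip's predicted response plus measurement noise.
target = 0
observed = predicted[target] + rng.normal(scale=0.5, size=n_voxels)

def corr(a, b):
    """Pearson correlation between two activity patterns."""
    a = a - a.mean()
    b = b - b.mean()
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

# Rank library clips by how well their predicted activity
# matches the observed activity.
scores = np.array([corr(p, observed) for p in predicted])
top = np.argsort(scores)[::-1][:10]

# "Reconstruction" = average of the best-matching clips' features.
reconstruction = clip_features[top].mean(axis=0)
print("best-matching clip:", top[0])
```

With low noise, the true clip should rank first; the averaged features of the top matches play the role of the blurry composite video described above.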
Here, movies become a control condition for understanding how brains process information in everyday life. Based on V.S. Ramachandran’s work on mirror neurons, it seems that we have similar patterns of neural activity whether we are actually perceiving an activity or doing it ourselves. Are similar patterns taking place when we perceive or imagine an activity? If so, then we really could be en route to mind reading.
While the Gizmodo journalist is excited about the possibility of recording our dreams, I can’t help but think of the second half of Wim Wenders’s Until the End of the World, when all the characters lose themselves in a device that lets them watch their dreams. It also reminds me of Memo and Luz plugging into one another’s nodes in Sleep Dealer, and experiencing hyperpresent connection.
And, of course, I can’t help but imagine what real-time mind recording would do for psychotherapy!