Friday Fun Link – Scientists Create Video From Brain Waves

This is one of the coolest things I’ve ever seen.  Scientists have figured out a way to create (blurry, impressionistic) video of what a person is watching by scanning their brain waves!

This was accomplished by having the researchers themselves watch a bunch of YouTube videos, building a database of which parts of their brains “lit up” while watching different images, and then having a computer match that brain activity against another library of video clips.

(It’s fitting that they used Google’s YouTube service because, in many ways, their technique sounds not unlike the process Google uses to match your search query against its massive database of potential web sites you might be looking for.)

Here’s how it works, taken directly from the linked article:

Three volunteers, all neuroscientists on the project, watched hours of video clips while inside an fMRI machine. Outside volunteers weren’t used because of the amount of time and effort involved, and because the neuroscientists were highly motivated to focus on the videos, ensuring better brain images.

Using the brain-imaging data, Gallant and his colleagues built a “dictionary” that linked brain activity patterns to individual video clips — much like their 2008 study did with pictures. This brain-movie translator was able to identify the movie that produced a given brain signal 95 percent of the time, plus or minus one second in the clip, when given 400 seconds of clips to choose from. Even when the computer model was given 1 million seconds of clips, it picked the right second more than 75 percent of the time.

With this accurate brain-to-movie-clip dictionary in hand, the researchers then introduced a new level of challenge. They gave the computer model 18 million seconds of new clips, all randomly downloaded from YouTube videos. None of the experiment participants had ever seen these clips.

The researchers then ran the participants’ brain activity through the model, commanding it to pick the clips most likely to trigger each second of activity. The result was a from-scratch reconstruction of the person’s visual experience of the movie. In other words, if the participants had seen a clip that showed Steve Martin sitting on the right side of the screen, the program could look at their brain activity and pick the YouTube clip that looked most like Martin sitting on the right side of the screen.
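The quoted procedure is essentially an encoding model plus a giant nearest-neighbor lookup: learn how video features map to brain activity, predict the activity every candidate clip *would* produce, then keep the candidates whose predictions best match what was actually recorded. Here’s a minimal Python sketch of that idea using random stand-in data. Every name and number in it is hypothetical, and the actual study’s encoding models were far more sophisticated, but the matching logic has the same shape.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy dimensions, all made up: seconds of training video, candidate
# YouTube seconds to search through, video-feature size, voxel count.
n_train, n_candidates = 400, 5000
n_features, n_voxels = 64, 200

# Simulate the training phase: volunteers watch clips (each second
# summarized as a feature vector) while fMRI records voxel activity.
train_feats = rng.standard_normal((n_train, n_features))
true_weights = rng.standard_normal((n_features, n_voxels))
train_brain = train_feats @ true_weights + 0.5 * rng.standard_normal((n_train, n_voxels))

# "Dictionary" step: fit a linear encoding model mapping video
# features to the brain activity they tend to evoke.
weights, *_ = np.linalg.lstsq(train_feats, train_brain, rcond=None)

# New, never-seen clips: predict the brain response each would produce.
candidate_feats = rng.standard_normal((n_candidates, n_features))
predicted_brain = candidate_feats @ weights

def zscore(a):
    return (a - a.mean(axis=1, keepdims=True)) / a.std(axis=1, keepdims=True)

def reconstruct_second(observed_voxels, top_k=100):
    """Correlate one second of observed activity against every predicted
    response, then average the features of the best matches, much as the
    study blended its best-matching YouTube clips."""
    sims = zscore(predicted_brain) @ zscore(observed_voxels[None, :])[0] / n_voxels
    best = np.argsort(sims)[-top_k:]
    return candidate_feats[best].mean(axis=0)

# Simulate viewing one unseen second and try to reconstruct it.
unseen = rng.standard_normal(n_features)
observed = unseen @ true_weights + 0.5 * rng.standard_normal(n_voxels)
approx = reconstruct_second(observed)
print(f"correlation with the real clip's features: {np.corrcoef(unseen, approx)[0, 1]:.2f}")
```

The blurriness of the reconstructions falls out naturally from that last averaging step: blending many near-miss clips smears fine detail but preserves the gross layout, which is why you get a vaguely Steve-Martin-shaped figure on the right side of the screen rather than Steve Martin himself.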

Here’s a video of what the reconstructed clips look like:

Sometimes I think about the difference between my life and that of my grandparents and great-grandparents – the Internet, man on the moon, heart transplants, etc.  Then I try to guess what life might be like for my grandchildren and great-grandchildren.  And stuff like this makes that future feel *really* close!

