Tuesday 6 September 2011

Think Of A Number Between 1 and 20... (post 2 of 2)

I left the last post with the tantalising idea of being able to read people’s minds by looking at the electrical signals produced in the areas of the brain that govern speech. I would now like to consider the process of speech itself. When you want to say anything, you must first form the idea and meaning of what you wish to say. You must then choose the words you wish to use to convey that meaning before finally moving all the different muscles needed to produce the sounds of speech. These three processes are controlled by different areas of the brain, each producing characteristic signals that we may be able to intercept and interpret.

The third and final stage of speech lies with the movement of muscles in the tongue, the jaw and the whole of the facial area. Simply say a word slowly and think about all the movements that are occurring. These movements are controlled by the motor cortex, which produces characteristic electrical signals for any type of movement. So in terms of mind reading, we may be able to focus on the motor cortex and look at the facial movements that are about to be made, seconds before speech occurs, in order to work out what will be said, an approach used by Philip Kennedy and his team in 2008. Their work was conducted on a patient with locked-in syndrome by recording the electrical signals produced in the motor cortex when the patient thought of specific words. They were able to decipher three vowel sounds based on the electrical activity recorded from the motor cortex, and then put these signals through a voice synthesiser in real time to produce the sounds. This is a modest start, but Kennedy believes that within 5 years they may be able to produce a working vocabulary of over 100 words that locked-in patients are most likely to use (yes, no, etc.). This would no doubt be a fantastic achievement; however, the approach is specific to a single patient and relies on penetrating the brain with electrodes, which carries a high level of risk. It may therefore be better to look further back in the speech process, at events which are universal rather than patient-specific, and see if the research can be done in a less invasive way.
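To make the shape of that real-time decode-and-speak loop a little more concrete, here is a minimal sketch of the general idea. It is not Kennedy’s system: the recording is simulated, the decoder is a toy rule, the vowel labels are invented for the example and print() stands in for the voice synthesiser.

import random
import time

# A minimal, hypothetical sketch of a decode-and-synthesise loop: grab a short
# window of activity, decode it into one of a few sounds, and pass the result
# to a synthesiser. Everything here is a stand-in; the recording is simulated,
# the decoder is a toy threshold rule, and print() plays the synthesiser.

VOWELS = ["oo", "ah", "ee"]                   # illustrative labels only

def read_window():
    """Stand-in for reading the latest window of samples from the implant."""
    return random.uniform(0.0, 1.0)           # pretend this is a decoded feature

def decode_vowel(feature):
    """Stand-in decoder: map the feature onto one of the vowel classes."""
    return VOWELS[min(int(feature * len(VOWELS)), len(VOWELS) - 1)]

def synthesise(vowel):
    """Stand-in for the voice synthesiser."""
    print("speaking:", vowel)

for _ in range(5):                            # a few turns of the real-time loop
    synthesise(decode_vowel(read_window()))
    time.sleep(0.2)                           # pace the loop roughly in real time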

Using the motor cortex as the basis of 'mind reading' therefore comes with certain problems. Not only is it patient-specific, but attempting to read someone's mind seconds before they speak a word hardly classes as mind reading in the true sense (although it undoubtedly has an application for those suffering from locked-in syndrome). The motor cortex is merely the climax of the neurological process of speech. There are two other areas of the brain, known as 'staging areas' of speech, which send signals to the motor cortex before it transmits them on to the facial muscles. The first of these two areas is ‘Wernicke’s Area’, which deals with the meaning and ideas that are then passed to ‘Broca’s Area’, which produces the words necessary to convey them. One of the best times to study the brain is when things go wrong, and this was the case in the discovery of Wernicke’s and Broca’s areas (this is a link to some detail about their discovery).

So we know the regions of the brain that are the major players in the production of speech. All that is left to do is look at the electrical signals produced in these areas, identify the words they correspond to, and put them through a voice synthesiser. It sounds so simple when said like that… but obviously it is not.

Let’s start with reading electrical signals from the brain. The classical approach for measuring electrical activity was to place a net of electrodes on a subject's scalp and record the signals from there (electroencephalography, or EEG). While this may be safe, the skull interferes with the signals, making it difficult to get the fine detail from small brain areas that is needed for deciphering the processes of language production. The best way to get fine detail is to penetrate the brain with electrodes, as Kennedy et al. did; however, as discussed above, this carries a high level of risk. The halfway house is a method known as electrocorticography (ECoG), whereby the skull is opened up and a net of electrodes is placed directly over specific areas of the brain.
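As a rough idea of what “looking at the electrical signals” means in practice, here is a small sketch of one common first step: turning a window of recorded voltage into power estimates in a few frequency bands. It is only an illustration: the sampling rate, band edges and simulated signal are my own assumptions, not values from any of the studies mentioned here.

import numpy as np

# A rough sketch of turning one channel of recorded voltage into numbers a
# decoder could work with: estimate how much power the signal carries in a few
# frequency bands. The sampling rate and band edges are assumptions made for
# this illustration only.

FS = 1000                                    # assumed sampling rate, in Hz
BANDS = {"alpha": (8, 12), "beta": (13, 30), "gamma": (70, 110)}

def band_powers(window):
    """Return the summed spectral power in each band for one channel."""
    spectrum = np.abs(np.fft.rfft(window)) ** 2
    freqs = np.fft.rfftfreq(len(window), d=1.0 / FS)
    return {name: spectrum[(freqs >= lo) & (freqs <= hi)].sum()
            for name, (lo, hi) in BANDS.items()}

# One second of simulated recording: a 90 Hz oscillation buried in noise.
rng = np.random.default_rng(0)
t = np.arange(FS) / FS
window = np.sin(2 * np.pi * 90 * t) + rng.normal(scale=0.5, size=FS)
print(band_powers(window))                   # the "gamma" figure dominates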

ECoG provides a good mix of safety and fidelity when it comes to measuring the electrical signals of the brain, and it was put to good use in 2010 by Bradley Greger and his team. In their work on the motor cortex and ‘Wernicke’s Area’, they were able to use ECoG to detect the signatures of 10 words: yes, no, hot, cold, thirsty, hungry, hello, goodbye, more and less. Most of the data came from the motor cortex, but this was the first sign of researchers looking to the staging areas in an attempt to mind read.
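To give a feel for what detecting the ‘signature’ of a word might involve, here is a toy sketch of one simple approach: average previous recordings of each word into a template, then label a new recording with whichever template it sits closest to. Only the word list comes from the study above; the simulated recordings and the nearest-template rule are assumptions made for the sake of the example.

import numpy as np

# A toy word decoder: average earlier trials of each word into a "template",
# then classify a new trial by the template it lies closest to. The recordings
# below are simulated; this is an illustration, not the method used in the study.

WORDS = ["yes", "no", "hot", "cold", "thirsty",
         "hungry", "hello", "goodbye", "more", "less"]

rng = np.random.default_rng(1)
N_FEATURES = 32                                # e.g. band powers on a few electrodes

# Each word gets a hidden "true" pattern; a trial is a noisy copy of it.
true_patterns = {w: rng.normal(size=N_FEATURES) for w in WORDS}

def simulate_trial(word):
    return true_patterns[word] + rng.normal(scale=0.5, size=N_FEATURES)

# Build a template for each word from 20 simulated training trials.
templates = {w: np.mean([simulate_trial(w) for _ in range(20)], axis=0)
             for w in WORDS}

def classify(trial):
    """Return the word whose template is nearest to the new trial."""
    return min(WORDS, key=lambda w: np.linalg.norm(trial - templates[w]))

print(classify(simulate_trial("thirsty")))     # usually prints "thirsty"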

As if safely reading the electrical signals of the brain wasn’t hard enough, next we have the issue of words. While it is pretty difficult to give an accurate count of how many words there actually are in a language (take “dog”: is that one word or two, the noun for the animal and the verb meaning to follow persistently?), the estimates seem to range from 171,476 to over 1 million. This means we might need to find over one million distinct electrical signals and program them into a speech synthesiser, which is no small task! And when you consider that a new word is estimated to be added roughly every 98 minutes, the task would only continue to grow.
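For a sense of scale, here is the back-of-the-envelope arithmetic, using only the figures quoted above.

# Back-of-the-envelope arithmetic using only the figures quoted above.
minutes_per_year = 365 * 24 * 60               # 525,600 minutes in a year
new_words_per_year = minutes_per_year / 98     # one new word roughly every 98 minutes
print(round(new_words_per_year))               # about 5,363 extra signatures to find each year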

There is, however, a shortcut to programming all these words into a speech synthesiser and measuring the electrical signals. Think of a word like “school”, which is made up of four distinct sounds: “S”, “K”, “OO”, “L”. These sounds are four examples of the building blocks of language. In English there are about 44 of these distinct sounds, known as phonemes. The hope is to find the electrical signatures for these 44 sounds, either in Wernicke’s or Broca’s Area, and then construct the intended word from the findings. This hope is based on work by Schalk et al., published earlier this year, which used ECoG on the motor cortex and Wernicke’s Area and was able to detect the phonemes “oo”, “ah”, “eh” and “ee”. These weren’t the only findings: it was also discovered that when looking at the motor cortex there was little difference between spoken and imagined words, whereas in Wernicke’s Area there was a much larger signal when words were only imagined rather than spoken, giving tantalising evidence of the inner voice I mentioned earlier and suggesting that it may be generated in this area of the brain.
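To show why this is such an attractive shortcut, here is a tiny sketch: once a decoder can output a sequence of phonemes, turning it into a word is just a lookup in a pronunciation table, rather than a hunt for hundreds of thousands of whole-word signatures. The phoneme spellings and the ‘decoded’ sequence below are invented purely for the example.

# A tiny, hypothetical sketch: if a decoder could emit phonemes, recovering the
# intended word becomes a lookup in a pronunciation table rather than a search
# over hundreds of thousands of whole-word signatures. The phoneme spellings
# here are informal and for illustration only.

PRONUNCIATIONS = {
    ("S", "K", "OO", "L"): "school",
    ("Y", "EH", "S"): "yes",
    ("N", "OH"): "no",
    ("H", "EH", "L", "OH"): "hello",
}

def phonemes_to_word(phonemes):
    """Look up the word spelt out by a decoded phoneme sequence."""
    return PRONUNCIATIONS.get(tuple(phonemes), "<unknown>")

decoded = ["S", "K", "OO", "L"]     # pretend the decoder just produced these
print(phonemes_to_word(decoded))    # prints "school"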

Mind reading is a long way off, not least because it effectively still requires brain surgery. However, with the way things are developing, there is already a strong hope that people suffering from severe paralysis may soon be able to achieve at least a basic level of communication. Questions no doubt have to be raised as to how far we should take this technology. If it could be used on anyone, without the need for surgery, would it be abused? Where might it eventually stop? I’m sure you have had thoughts you’d rather other people never heard…

By the way, that number you thought of - was it 17?
