Wednesday, 21 September 2011

In the beginning evolution created the eukaryotic cell

If you take the book of Genesis literally, then 6,000 years ago God took all of six days to create the earth and everything on it, including man. Now I’m pretty sure most people reading this blog will agree with me when I say that we shouldn’t take the book of Genesis literally. The earth is in fact closer to 4.6 billion than 6,000 years old, and it wasn’t until about 2.5 million years ago that the genus Homo (to which we – Homo sapiens – belong) first appeared. However, unlike in the book of Genesis, humans did not just appear: there were many tiny evolutionary steps along the way. The biggest, and arguably the most important, was the formation of the building blocks of all complex life.

A stylized Eukaryotic cell
The average human body is estimated to contain between 50 and 100 trillion cells (that’s potentially as high as 100,000,000,000,000). The strange thing about these trillions upon trillions of cells is that, at their most basic level, they are in essence pretty much all the same. This isn’t just a characteristic of our cells either; the same is true for all complex life on the planet. At the most basic level there is little difference between me, you, a giraffe or a whale. These building blocks of complex life are known as ‘eukaryotic’ cells (from the Greek meaning ‘a true kernel’ – having a nucleus) and, due to their ubiquity among all complex life, Eukaryota (organisms made of eukaryotic cells) are termed a domain of life (or an Urkingdom). Eukaryota are one of the three domains to which all life on the planet can be classified, and the youngest of the three. What I want to look at with you here is where the Eukaryota domain came from.

A stylized Prokaryotic cell
Alongside the Eukaryota Urkingdom, the two others are known as the Prokaryota (again from Greek, this time meaning ‘before the kernel’ – having no nucleus) and the Archaea (‘ancient things’). These two Urkingdoms are thought to be of a similar age and date back to between 3.5 and 3.8 billion years ago – the birth of cellular life on earth. Both kingdoms are mostly made up of simple, single-celled organisms. Prokaryotes are bacteria such as Escherichia coli, Staphylococcus aureus, Neisseria meningitidis and so on. Archaea were once thought to be bacteria, and it wasn’t until the late 1970s that this was shown to be incorrect. Before the 70s the best way to look at differences between cell types was down a microscope, considering the morphology of the cells, and Archaea and Prokaryotes appear very similar: bacteria do not have a membrane-bound nucleus and neither do Archaea (whereas our cells do). In the 70s, however, Carl Woese and George Fox looked at molecular rather than morphological differences in what had classically been thought of as bacteria, and they found striking differences in some organisms, which they termed the Archaea. Many Archaea are notable for their ability to survive in extreme conditions (for example halophiles, which live in extremely salty environments), and they are thought to be the oldest form of life, in part because of the harsh conditions present on Earth around 3.5 billion years ago.

So around 3.5 billion years ago the Earth was inhabited by simple, single-celled life forms, the Archaea and Prokaryotes. These life forms dominated the planet for about the next 1.5 billion years, until a giant evolutionary leap was made. This giant leap was of course the formation of the eukaryotic cell. It is proposed that around 2 billion years ago, an archaeal cell adept at phagocytosis (the process of engulfing smaller particles or cells) took in a smaller bacterial cell that was adept at using oxygen to produce energy. These two cells then entered a symbiotic relationship (a partnership in which both parties benefit), with the archaeal cell providing nutrients for the bacterial cell, which in turn acted as a power source for the archaeal cell. Thanks to this new internal source of energy, the cell was able to grow much larger and become more complex. The formation of this symbiotic union was first proposed by Lynn Margulis in 1966 and is known as the ‘endosymbiotic theory’. The proposal was that, as a result of the energy supplied by this bacterium, the cell was able to undergo a 200,000-fold increase in its genome size, allowing it to rapidly evolve and become more complex. It didn’t take long (in the grand scheme of things) for single cells to become multicellular (fossil records put the first multicellular eukaryotes at around 1.8 billion years old), and from then on the only way was forward, life becoming ever more complex until, around 2.5 million years ago, the Homo genus appeared. The rest, as they say, is history.

As I mentioned, all the cells that make up our bodies are eukaryotic cells, and you may be wondering what happened to the bacteria that entered that ancient cell to drive the evolution of complexity. In fact, we still have them. All the cells in our bodies (with a few exceptions) contain powerhouses known as mitochondria. Mitochondria are the site of respiration, where the food we eat and the oxygen we breathe react together to produce the energy we need to survive. These tiny energy producers are the descendants of the bacterial cell engulfed by an archaeal cell around 2 billion years ago. This theory is backed up by the fact that the mitochondria in our cells have a genome of their own – that’s right, our cells contain two genomes: our human one and a mitochondrial one which resembles a bacterial genome (highlighted by the fact that the mitochondrial genome is circular, as many bacterial genomes are, not linear like ours). Analysis of the mitochondrial genome has of course been done, and it was found to share an ancient relative with bacteria of the α-proteobacterial class, supporting the chimeric origin of our cells.
A mitochondrion

Further evidence to support the endosymbiotic theory for the origin of eukaryotic cells can be found through molecular comparisons of Archaea, Prokaryotes and Eukaryotes. An example can be seen in the formation of proteins. Protein formation uses molecules known as RNA, which are thought to be among the oldest molecules, if not actually the oldest – some believe the earliest form of life was a simple RNA strand (a hypothesis gaining ever more support). Eukaryotes use a form of RNA, known as Met-tRNA, to start the production of a protein strand. Prokaryotes, on the other hand, add a small chemical (formyl) group, giving formyl-Met-tRNA. Archaea have Met-tRNA, like eukaryotic cells, indicating the close link between archaeal and eukaryotic cells.
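For the comparison-minded, the same molecular argument can be laid out as data. Here is a minimal Python sketch – purely illustrative, it simply encodes the initiator-tRNA facts above – showing how this one marker groups Archaea with Eukaryota rather than with Prokaryota:

```python
# Illustrative only: the initiator-tRNA comparison as a tiny dataset.
INITIATOR_TRNA = {
    "Eukaryota":  "Met-tRNA",
    "Archaea":    "Met-tRNA",
    "Prokaryota": "fMet-tRNA",  # carries the extra formyl group
}

def share_initiator(domain_a: str, domain_b: str) -> bool:
    """Do two domains use the same initiator tRNA?"""
    return INITIATOR_TRNA[domain_a] == INITIATOR_TRNA[domain_b]

print(share_initiator("Archaea", "Eukaryota"))   # True  -> close link
print(share_initiator("Archaea", "Prokaryota"))  # False
```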

So around 2 billion years ago, an event so monumental occurred that it shaped the planet from that moment until this, and will continue to do so for many more years to come. This event was the beginning of complex life and it all stemmed from one simple moment, the engulfment of a small cell by a larger one (something that occurs throughout our body on a daily basis). I think we will be hard pressed to find such a small event that has had such a giant evolutionary consequence as this one has.

Sunday, 18 September 2011

Give Us A Hand (post 2 of 2)

I left the last post with a question: what work is being done to make transplantation safer and to improve the availability of organs?

One of the major hopes for future years is the ability to engineer transplantable material in a laboratory, which may well reduce the need for organ donation and, better yet, reduce the risk of rejection, since the organs can be engineered to carry the recipient’s cells (and thus the correct HLA). Being able to make an organ in a lab using a patient’s own cells sounds like something out of science fiction, but it is fast becoming science fact. Early last year a team led by Korkut Uygun was able to engineer a rat liver in a lab. This was achieved by taking a liver and stripping it of all its living cells. This left a framework with the correct structure of a liver, to which the team were able to add a whole new set of hepatocytes (the major cell type that makes up the liver). Think of this like drawing a stick man and then adding detail to draw a cartoon person. The hepatocytes that were added were produced from stem cells taken from the recipient rat, meaning the team produced a (semi) functional liver with the correct HLA for the recipient rat. Impressive as this work is, it still has some way to go before being of real clinical use – for starters, livers aren’t only made of hepatocytes, and the work was only on a small rat liver – but it is no doubt an incredibly exciting prospect. Another approach to engineered organs was successfully used only a few months ago, in which a patient received a windpipe transplant. The interesting thing about this transplant was that the windpipe framework was made of a completely synthetic material, which was then coated in the recipient’s cells. Hats off to the surgeon Paolo Macchiarini and the producer of the synthetic material Alexander Seifalian (a researcher at UCL, I’d like to add) for this pioneering work.

Not only could we one day produce organs in a lab, but we may also be able to use animal organs, a procedure known as xenotransplantation. This isn’t without its complex issues, as you can imagine, not least in terms of different organ sizes, but some organs do hold hope (pig hearts and our hearts, for example, are of very similar size and structure). Being able to use animal organs would certainly avoid the shortage of available organs, but whether people will accept the idea of receiving a pig heart remains to be seen. The safety of such procedures is also unclear, as there may be a risk of zoonosis (the link is back to an older blog I wrote about HIV, in which I discuss zoonosis).

Another bright hope for the future is the ability to induce immunological tolerance (another topic I will write a blog on in the near future) to the new tissue. As I mentioned above, our immune system is highly tuned to detect our specific HLA, but when the cells of our immune system are first produced they each carry a different receptor for binding to HLA, and obviously not all of them will bind correctly. The cells must therefore learn to bind to the correct HLA, and any cells unable to do so are destroyed. If we could find a way to make immune cells recognise new tissue as self (make the cells tolerant to the new tissue) it would avoid the whole issue of rejection.

One final thought is regenerative medicine (a link to a small article with more detail on regenerative medicine). This field is in its early days but centres on the potential use of stem cells to repair damaged tissues, which could one day remove the need for transplantation entirely.

Transplantation is not without its issues, but work in the area has an incredibly bright future and the power to have an astonishing impact on many people’s lives. One problem that may be encountered, however, is funding. A major source of funding in the medical realm comes from the big pharma companies, yet these companies make huge profits from transplant patients, who have to continue taking (and buying) drugs for the rest of their lives. There may therefore be some reluctance to fund ways of avoiding rejection altogether, which could make money for this research hard to come by (something I hope will not be the case). Still, the future looks bright for the world of transplantation.

Thursday, 15 September 2011

Give Us A Hand (post 1 of 2)

57 years ago, Joseph Murray and his team at the Peter Bent Brigham Hospital in Boston performed a landmark surgery. For the first time it was possible to take a kidney from one individual and successfully transplant it into another, thus beginning the age of transplantation. Transplantation is now commonplace in many countries; last year in the UK, for example, nearly 4,000 organ transplants were carried out. However, transplantation is not without its complications and risks. Even when successful, a patient is still not out of the woods: for the rest of their life they are required to take a cocktail of immunosuppressive drugs, and they may still need a new transplant within 10 years. As an example, the 10-year survival rate for a first-time transplant of a deceased-donor kidney is only 48%. What I’d like to look at with you here is why it is necessary for a patient to continue to take so many drugs, and what the future may hold for the field of transplantation.

Any patient who receives a transplanted organ must take a vast array of immunosuppressive drugs for the rest of their life. These drugs are vital for the survival of both the patient and the organ, because the immune system is finely tuned to attack anything it sees as foreign, such as someone else’s organ. Our immune system must be able to recognise and attack invading microbes that look to cause damage (known as pathogens) while at the same time not attacking our own cells (self-cells). The immune system must therefore be able to distinguish between self and non-self (pathogens) in order to be of any use. Detection of anything by the immune system is achieved through the binding of immune cells to molecules on other cells. The molecules the immune system binds to on pathogens are simply known as antigens, while the (main) molecule used to detect self is known as the HLA (human leukocyte antigen). The HLA is a wonderfully complex molecule and I will probably write a whole blog on it at some later date, but for now the extent of the detail I will go into is to say that unless two people are genetically identical (e.g. identical twins) they will not have the same HLA; your HLA is as unique as you are. You may now be starting to see where I’m going with this. Since your HLA is unique to you and stops your immune system attacking your cells, if you receive an organ from someone not genetically identical, its cells will have a totally different HLA – to the immune system this appears no different from an antigen on a pathogen, giving the immune system licence to attack. If it weren’t for immunosuppressive drugs, all transplanted material would simply be destroyed by the recipient’s immune system, a phenomenon known as rejection. Rejection is the main reason transplant operations fail, and it can occur anywhere from minutes after the operation to years later, hence the need for lifelong immunosuppression.
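To make the self/non-self logic concrete, here is a minimal Python sketch of the comparison the immune system effectively performs. It is purely illustrative: real HLA typing spans many loci and thousands of alleles, and the allele names below are simplified stand-ins.

```python
# Illustrative sketch of 'self vs non-self' as an HLA comparison.
# Real HLA typing involves many loci and thousands of alleles; the
# allele names here are simplified stand-ins, not real typing data.

def hla_mismatches(recipient: dict, donor: dict) -> list:
    """Return the loci at which the donor's HLA differs from the recipient's."""
    return [locus for locus, allele in recipient.items()
            if donor.get(locus) != allele]

recipient = {"HLA-A": "A*02:01", "HLA-B": "B*07:02", "HLA-DRB1": "DRB1*15:01"}
donor     = {"HLA-A": "A*02:01", "HLA-B": "B*44:02", "HLA-DRB1": "DRB1*15:01"}

mismatched = hla_mismatches(recipient, donor)
if mismatched:
    # Any mismatch looks 'foreign', like a pathogen antigen, and
    # invites an attack on the graft (rejection).
    print(f"Mismatched loci: {mismatched} -> rejection risk; immunosuppression needed")
else:
    print("Full match (e.g. identical twins) -> minimal rejection risk")
```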

Transplantation of organs is no doubt a fantastic achievement, but it isn’t without its complications and dangers. Nor are these the only issues facing transplantation: there is also a distinct lack of donor tissue. In the US, for example, it is estimated there is a deficit of 4,000 livers each year. Without going into the debate over whether organ donation should be an opt-out system (something I believe it should be), there is clearly a need for improvement in both availability and safety. But what work is being done to tackle these issues?

As with my other blogs I’ll leave you hanging on that note and post the next half in a day or two, so stay tuned!
Some slightly outdated stats but a nice image to demonstrate the lack of organs (and where organs are in the body)

Tuesday, 6 September 2011

Think Of A Number Between 1 and 20... (post 2 of 2)

I left the last post of this blog with the tantalising idea of being able to read people’s minds by looking at the electrical signals produced in the areas of the brain that govern speech. I would now like to consider the process of speech itself. When you want to say anything you must first produce the idea and the meaning of what you wish to say. Following that, you must choose the words you wish to use to convey this meaning, before finally moving all the different muscles needed to produce the sounds of speech. These three processes are controlled by different areas of the brain and have characteristic signals that we may be able to intercept and interpret.

The third and final stage of speech lies with the movement of muscles in the tongue, the jaw and the whole of the facial area. Simply say a word slowly and think about all the movements that are occurring. These movements are controlled by the motor cortex, which produces characteristic electrical signals for any type of movement. So in terms of mind reading we may be able to focus on the motor cortex and look at the facial movements that are about to be made, seconds before speech occurs, in order to work out what will be said – an approach used by Philip Kennedy and his team in 2008. Their work was conducted on a patient with locked-in syndrome, recording the electrical signals produced in the motor cortex when the patient thought of specific words. They were able to decipher three vowel sounds based on the electrical activity recorded from the motor cortex and then put these signals through a voice synthesiser in real time to produce the sounds. This is a modest start, but Kennedy believes that within 5 years they may be able to produce a working vocabulary of over 100 of the words that locked-in patients are most likely to use (yes, no, etc.). This would no doubt be a fantastic achievement; however, the approach is specific to a single patient and relies on penetrating the brain with electrodes, which carries a high level of risk. It may therefore be better to look further back in the speech process, at events which are universal rather than patient-specific, and to see if the research can be done in a less invasive way.

Using the motor cortex as the basis of 'mind reading' therefore comes with certain problems. Not only is it patient-specific, but attempting to read someone’s mind seconds before they speak hardly counts as mind reading in the true sense (although it undoubtedly has an application for those suffering from locked-in syndrome). The motor cortex is merely the climax of the neurological process of speech. There are two other areas of the brain, known as the 'staging areas' of speech, which send signals to the motor cortex before it transmits to the facial muscles. The first of these two areas is known as ‘Wernicke’s Area’, which deals with meaning and ideas; these are then passed to ‘Broca’s Area’, which produces the words necessary to convey them. One of the best times to study the brain is when things go wrong, and this was the case in the discovery of Wernicke’s and Broca’s areas (this is a link to some detail about their discovery).

So we know the regions of the brain that are the major players in the production of speech; all that is left to do is to look at the electrical signals produced in these areas, identify the words they correspond to, and put them through a voice synthesiser. It sounds so simple when said like that… but obviously it is not.


Let’s start with reading electrical signals from the brain. The classical approach to measuring electrical activity is to place a net of electrodes on a subject's scalp and record the signals from there. While this may be safe, the skull interferes with the signals, making it difficult to get the fine detail from small brain areas that is needed for deciphering the processes of language production. The best way to get fine detail is to penetrate the brain with electrodes, as used by Kennedy et al.; however, as discussed above, this carries a high level of risk. The halfway house is a method known as electrocorticography (ECoG), whereby the skull is opened up and a net of electrodes is placed directly over specific areas of the brain.

ECoG provides a good method that mixes safety with fidelity when it comes to measuring the electrical signals of the brain, and it was put to good use in 2010 by Bradley Greger and his team. In their work on the motor cortex and Wernicke’s Area using ECoG, they were able to detect signatures of 10 words: yes, no, hot, cold, thirsty, hungry, hello, goodbye, more and less. Most of the signal came from the motor cortex, but this was the first sign of looking at the staging areas in an attempt to mind read.
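To give a flavour of what ‘detecting signatures’ might involve, here is a minimal Python sketch of template matching over a small vocabulary. It is an assumption-laden illustration, not the team’s actual method: the feature vectors are invented, and real decoding involves many electrodes and careful signal processing.

```python
# Toy template-matching decoder for a small vocabulary, in the spirit
# of the 10-word ECoG study. Feature vectors are invented for
# illustration; real decoding uses far richer features.
import math

TEMPLATES = {            # average ECoG feature vector per word (made up)
    "yes":   [0.9, 0.1, 0.3],
    "no":    [0.2, 0.8, 0.4],
    "hello": [0.5, 0.5, 0.9],
}

def nearest_word(features: list) -> str:
    """Return the vocabulary word whose template is closest (Euclidean distance)."""
    return min(TEMPLATES, key=lambda word: math.dist(features, TEMPLATES[word]))

print(nearest_word([0.85, 0.15, 0.35]))  # -> 'yes'
```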

As if safely reading the electrical signals of the brain weren’t hard enough, next we have the issue of words. While it is pretty difficult to give an accurate count of how many words there actually are in a language (take ‘dog’: is it one word or two – a noun, the animal, and a verb, to follow persistently?), the estimates range from 171,476 to over 1 million. This means we may need to find over a million distinct electrical signals and program them into a speech synthesiser, which is no small task! And when you consider that a new word is estimated to be added every 98 minutes, the task would only continue to grow.
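As a quick back-of-the-envelope check of how fast that word-by-word task would grow (using the 98-minute figure above):

```python
# Back-of-the-envelope: if a new English word appears every 98 minutes,
# how many new word signatures would a decoder need to learn each year?
MINUTES_PER_YEAR = 365 * 24 * 60        # 525,600 minutes
new_words_per_year = MINUTES_PER_YEAR / 98
print(f"~{new_words_per_year:.0f} new words per year")  # ~5,363
```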


There is, however, a shortcut to programming all these words into a speech synthesiser and measuring their electrical signals. Think of a word like “school”, which is made up of four distinct sounds: “S”, “K”, “OO”, “L”. These sounds are four examples of the building blocks of language. In English there are about 44 of these distinct sounds, known as phonemes. The hope is to find the electrical signatures for these 44 sounds, either in Wernicke’s or Broca’s Area, and then construct the intended word from the findings. This hope is based on work by Schalk et al., published earlier this year, which used ECoG on the motor cortex and Wernicke’s Area and was able to detect the phonemes “oo,” “ah,” “eh,” and “ee.” These weren’t the only findings: it was also discovered that in the motor cortex there was little difference between spoken words and imagined words, whereas in Wernicke’s Area a much larger signal was seen when words were only imagined rather than spoken – giving tantalising evidence of the inner voice I mentioned earlier, and suggesting that it may perhaps be generated in this area of the brain.
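The appeal of the phoneme route can be sketched in a few lines of Python: learn roughly 44 signatures once, then look decoded sequences up in a pronunciation dictionary. The phoneme spellings below are simplified illustrations rather than a standard phonetic alphabet.

```python
# Toy phoneme-based decoder: map a decoded phoneme sequence back to a
# word. A real system would use a full pronunciation dictionary (and
# probably a language model); these entries are simplified illustrations.

PRONUNCIATIONS = {
    ("S", "K", "OO", "L"): "school",
    ("Y", "EH", "S"):      "yes",
    ("N", "OH"):           "no",
}

def decode(phonemes: tuple) -> str:
    """Look up a sequence of decoded phonemes in the dictionary."""
    return PRONUNCIATIONS.get(phonemes, "<unknown>")

# ~44 phoneme signatures can cover every English word, versus learning
# hundreds of thousands of separate per-word signatures.
print(decode(("S", "K", "OO", "L")))  # -> school
```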

Mind reading is a long way off – for one thing, it still effectively requires brain surgery. However, with the way things are developing, there is already a strong hope that people suffering from severe paralysis may soon be able to achieve at least a basic level of communication. Questions no doubt have to be raised as to how far we should take this technology. If it could be used on anyone without the need for surgery, would it be abused? Where might it eventually stop? I’m sure you have had thoughts you’d rather other people never heard…

By the way, that number you thought of - was it 17?

Monday, 5 September 2011

Think Of A Number Between 1 and 20... (post 1 of 2)

Now, if you have thought of a number between 1 and 20, then the chances are you said it to yourself in your head. This inner voice is also at work when you read these words on your screen; I am using it myself while typing. When we think of words in our inner voice we generate an internal dialogue, a conversation with ourselves. The question I would like to consider is this: what if we could hack into someone else's internal dialogue and understand it?


Besides the ethical questions this undoubtedly raises, there are profound scientific applications for the ability to read a person’s internal dialogue. Take for instance a person suffering from locked-in syndrome, who is awake and aware but completely unable to communicate in any way more complex than blinking their eyes. If we could read the minds of patients suffering from this terrible syndrome, we would be able to provide them with a much higher quality of life. The key reason it is believed we may indeed one day be able to hack into the internal dialogue is that the brain produces a characteristic electrical signal when we think of any word. All we need to do is work out which words correspond to which electrical signals. No small task…

So the key to mind reading lies in the electrical signals produced by the brain. These signals were first recorded in 1924 by Hans Berger (the chap on the right), who invented the electroencephalograph (EEG) by placing electrodes on the head of a subject and measuring the electrical output of the brain's neurones. The ability to read these electrical signals progressed over the next 70 years, and by the mid-1990s it was possible to have them processed by a computer to make a cursor move on a screen. This “brain–computer interface” (BCI) worked by training the computer to recognise two distinct electrical signals and respond to each in the appropriate way: one signal would correspond to left and another to right. The signals themselves were produced by thinking of specific movements (i.e. using electrical signals generated in the motor cortex); for instance, thinking of hitting a tennis ball produced a signal that the computer interpreted as left, while thinking of kicking a football produced a different signal, interpreted as right.
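The control loop described here is simple enough to sketch. The following Python fragment is a hedged illustration, not any particular lab’s system: the ‘feature’ values and the 0.5 threshold are invented stand-ins for a real signal-processing pipeline.

```python
# Minimal sketch of the two-class BCI described above: classify a
# motor-cortex feature as 'tennis' (imagined swing) or 'football'
# (imagined kick) and nudge the cursor accordingly. The feature values
# and the 0.5 threshold are invented for illustration.

def classify(feature: float) -> str:
    """Crude two-class decoder over a single made-up signal feature."""
    return "tennis" if feature > 0.5 else "football"

def step_cursor(x: int, feature: float) -> int:
    """Tennis -> move left; football -> move right."""
    return x - 1 if classify(feature) == "tennis" else x + 1

x = 0
for feature in [0.8, 0.9, 0.2, 0.7]:  # a made-up stream of decoded features
    x = step_cursor(x, feature)
print(f"cursor position: {x}")  # -2 after three 'tennis' and one 'football'
```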


Early EEG recording by Berger

While moving a cursor on a screen is no doubt a fantastic achievement, having to communicate solely by moving a cursor around a screen would be pretty difficult and slow. Just imagine steering a mouse around an image of a keyboard on a screen, let alone having to focus on precise imagined movements for a computer to interpret. If we really want to read minds, we need to be able to focus in on the brain areas governing speech and decipher the electrical signals produced there in real time. If we can do that, we may then be able to produce a speech synthesiser that converts these electrical signals into sounds – in essence, giving patients a voice.

The question is: which areas of the brain do we need to look at, and what work is being done to study them?


Stay tuned to find out...