Wednesday, 30 November 2011

Much A-Flu About Nothing (Post 3 of 2 - a little add-on)


I’d just like to add a little addendum to my previous post, as I’ve just read about some very interesting influenza work that hasn’t even been published yet and is currently under review. The work is causing a huge amount of controversy because of the implications it could have if published. It was conducted by Ron Fouchier of the Erasmus Medical Centre in the Netherlands, who managed to create an H5N1 flu virus capable of host-to-host transmission, which, if you’ve read my previous post, you’ll know is the main thing stopping that strain of Avian Flu from becoming a pandemic.

Fouchier’s work set out to create a virus capable of transmission between hosts; obviously he couldn’t use humans, so he used ferrets, which are the closest available model for studying the virus. He first attempted to introduce mutations that he thought would allow host-to-host transmission, but this approach was unsuccessful. He then moved on to the tried and tested method of simply passing the virus from one ferret to another, over and over again. This passaging process causes the virus to adapt and become better at transmitting between hosts. After only 10 generations, the virus had become airborne and could pass from one ferret to another simply by their being in close proximity to one another. Subsequent analysis of this new mutant form of the virus showed that only five changes had been made to its genome, affecting just two proteins. What’s more worrying is that all five of these mutations have been seen in nature, just never all five in the same virus.

Herein lies the controversy: this man-made virus, in theory, has the ability to spread from person to person, and it is estimated that it could kill over half the global population as a result! If the work is published and the mutations are freely available for anyone to view, it would not be too difficult for someone to re-create this virus and use it, for instance, in a bioterrorist attack. This work has started the ball rolling on an intense debate over scientific freedom and whether so-called ‘dual-use’ research (research which has the potential for good and evil, for want of better terms) should be published. The good that comes from this work is twofold: firstly, we will be able to look for these mutations in the wild and mount a very rapid response if it ever looks like such a virus may be forming, thereby potentially limiting its death toll; and secondly, by having the virus in the lab we will be able to study it and look for novel ways to treat it if it ever does emerge. The ‘evil’, as mentioned above, is the possibility of bioterrorism.

Personally, I feel that the work should be published. With the information about the key mutations that allow it to spread made available, we will be better able to understand the virus and find novel ways to deal with it, should we ever have to. Yes, there is a risk of the information getting into the wrong hands, but if we are able to find ways to deal with a natural form of this virus then we will also be able to deal with a man-made form. Science doesn’t move forward without a risk or two along the way.

Tuesday, 29 November 2011

Much A-Flu About Nothing (Post 2 of 2)

I left the last post touching on the fact that we would look into 3 historic pandemics in more detail. We’ll start with the furthest back, and the most deadly: the Spanish Flu pandemic of 1918. In the 2 years this pandemic lasted, it managed to infect an estimated 500 million people (about a third of the global population at the time); furthermore, it killed between 50 and 100 million of those it infected, meaning that between 3% and 6% of the global population died of this virus. Additionally, instead of targeting those of advanced age, as seasonal flu does, it caused deaths in younger, healthier individuals, making the burden of disease far greater. The Spanish Flu pandemic was the result of an avian virus managing to infect humans through mutation, not re-assortment. This virus was an H1N1 virus.
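For those who like to see the arithmetic, here is a quick back-of-the-envelope check of those figures (the 1918 world population of roughly 1.8 billion is my assumption, not a figure from the post):

```python
# Back-of-the-envelope check of the Spanish Flu figures quoted above.
# The 1918 world population of ~1.8 billion is an assumption on my part.
world_pop = 1.8e9
infected = 500e6
deaths_low, deaths_high = 50e6, 100e6

attack_rate = infected / world_pop        # fraction of humanity infected
mortality_low = deaths_low / world_pop    # global death rate, low estimate
mortality_high = deaths_high / world_pop  # global death rate, high estimate

print(f"infected: {attack_rate:.0%} of the world")           # ~28%, about a third
print(f"died: {mortality_low:.1%} to {mortality_high:.1%}")  # ~2.8% to ~5.6%
```

The numbers line up with the 3-6% range quoted above.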
A pretty famous image from the Spanish Flu pandemic. All are patients.

Fast-forward to 2009 and we hear from H1N1 again, something I’m sure many readers will remember. This was the Swine Flu pandemic, which lasted a little over a year. This H1N1 virus arose from a quadruple re-assortment of strains from swine, avian and human flu viruses. So couple the terror of a possible second Spanish Flu outbreak with a quadruple re-assortment and you can hopefully understand why there was plenty of fear surrounding this outbreak. Swine Flu had more than just letters and a number in common with Spanish Flu, in that it was more contagious than seasonal flu and was seen to target the young more than the old. Fortunately, however, the virus seemed to cause only a mild illness; of the 375,000 lab-confirmed cases there were only 4,500 deaths.
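Putting the two pandemics side by side, using only the figures quoted in these posts, shows just how mild Swine Flu turned out to be by comparison:

```python
# Case-fatality comparison using only the figures quoted in these posts.
swine_cfr = 4_500 / 375_000      # Swine Flu deaths per lab-confirmed case
spanish_cfr_low = 50e6 / 500e6   # Spanish Flu, low death estimate
spanish_cfr_high = 100e6 / 500e6 # Spanish Flu, high death estimate

print(f"Swine Flu:   {swine_cfr:.1%} of confirmed cases died")              # 1.2%
print(f"Spanish Flu: {spanish_cfr_low:.0%}-{spanish_cfr_high:.0%} of those infected died")  # 10%-20%
```

Roughly an order of magnitude difference, on these figures at least.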

Our final virus to consider is Bird Flu (aka Avian Flu), another outbreak which many people will probably remember hitting the headlines. This virus is yet to reach pandemic level, unlike Swine Flu; however, a close eye is being kept on it, as there is a high potential that it could become pandemic. The Bird Flu virus is an H5N1 virus that can pass from birds to humans, and from 2007 to 2010 there were 26 outbreaks resulting in human infection. The reason that H5N1 has failed to live up to the media hype of being a huge global killer virus is that its transmission from human to human is very limited, so any outbreaks rapidly burn out.

The reason for this lack of transmission lies in our lungs. As a reminder, flu viruses have HA on their surface, which binds to cells and allows the virus to enter them for infection. All HA molecules target a receptor on cells known as the sialic acid receptor, which is (in part) made of sugar molecules. Avian HA targets sugar molecules which are linked in a certain way, known as an α2,3 linkage, whereas human HA targets α2,6-linked sugar molecules. Both of these types of linkage are found in our lungs, but not with an even distribution: in our upper respiratory tract we have many α2,6 receptors, whereas in the lower respiratory tract we have α2,3 receptors. Flu is an upper respiratory tract infection, and a virus bound deep in the lower respiratory tract will not be coughed or sneezed out, as it is too far down, thus stopping human-to-human transmission. Herein lies the fear, though: in order to switch from targeting α2,3 receptors to targeting α2,6 receptors, the H5N1 HA only needs to pick up 2 mutations. Mutations are a rare event, but picking up 2 is not beyond the realms of possibility and is something that is constantly being watched for.

Hopefully now you’ll never consider flu in the same way. For so many people it is essentially just a severe cold, but spare a thought for those who suffer much more from the threat of flu. And the next time the media pick up on a global pandemic that will “kill us all”, hopefully you’ll be able to fully understand the aspects of the virus in question and the real dangers it may or may not pose.

Monday, 21 November 2011

Much A-Flu About Nothing (Post 1 of 2)


Everyone knows of flu, or to give it its full name, influenza. For most people it’s nothing more than a passing illness characterised by symptoms such as fever, fatigue, coughing, sneezing, aches and pains. However, for some people, it can be a much more serious disease, with the World Health Organisation estimating that it causes 250,000 to 500,000 deaths a year. Those people at the highest risk have probably already been contacted about receiving the flu vaccine, an annual occurrence around this time of year (in the northern hemisphere). What I’d like to talk to you about in this blog is what flu is and why every couple of years there seems to be a huge scare about a pandemic outbreak, the most recent of these being Bird Flu and Swine Flu. I don’t want this to be a scare-mongering blog about how there could be a pandemic flu outbreak that could kill us all (as the papers like to report it) but just to help people understand what is going on next time there are reports of a potential outbreak.

First things first: flu is caused by a virus. One of the defining features of viruses is that, unlike bacteria, they are completely unable to replicate without a host, and the aim of any living thing is to replicate. So without something to infect, a virus would just sit there as a ball of genetic material, proteins and fats, doing nothing. The influenza virus has 3 main hosts: humans, pigs and birds. The virus must therefore infect one of these animals, replicate inside their cells and then spread to new hosts. In humans, the virus infects cells in the upper respiratory tract, which is why we cough and sneeze a lot when we have flu.



Now for a bit of virology. A virus is unable to replicate without a host because, in general, all a virus consists of is genetic material (either DNA, like us, or RNA) and a particle which transports this material, made up of proteins and fats. This particle is known as an envelope. Embedded in this envelope are proteins which stick out (see the picture), known as haemagglutinin (HA) and neuraminidase (NA). Both of these have very important roles for the virus. HA allows entry into cells, and without entry the virus cannot live. NA, on the other hand, allows the virus to leave cells and spread to more of them. And yes, if you were wondering, this is where the names of flu viruses such as H1N1 come from: the notation refers to which distinct type of HA and NA is on the surface of that flu virus. We currently know of 16 HA variants (numbers 1-3 infect humans) and 9 NA variants (only numbers 1 and 2 infect humans).

The last bit of background regards the influenza genome. The genetic material of influenza is RNA and inside each influenza particle there are 8 different segments of RNA. This is like having a jigsaw of 8 different pieces, all of which have different detail to them and all of which are necessary to have the final image. Each of these 8 RNA molecules will produce different proteins that enable the virus to replicate inside host cells and then spread.

Now that we’ve got the background down, things start to get interesting. As long as the flu virus has all 8 of the RNA segments it needs in the genome, it doesn’t care where they all come from. So let us think of a typical human flu virus with its 8 RNA segments. If segments 4 and 5 (for example) are exchanged with a flu virus that infects birds, we have a whole new virus that may be capable of infecting humans. We can take this idea further and include pig viruses as well and get what is known as triple re-assortments, where we form a completely new virus with RNA segments from human, pig and bird viruses. This is what scares virologists and epidemiologists the most.
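As a toy sketch of re-assortment (the 8 segment names are those of the real influenza genome, but the simulation itself is purely illustrative and assumes nothing about real co-infection rates):

```python
import random

# Toy sketch of re-assortment: a progeny virus picks each of its 8 RNA
# segments from either of two co-infecting parent viruses.
SEGMENTS = ["PB2", "PB1", "PA", "HA", "NP", "NA", "M", "NS"]

human_virus = {seg: "human" for seg in SEGMENTS}
avian_virus = {seg: "avian" for seg in SEGMENTS}

def reassort(parent_a, parent_b):
    """Each segment of the progeny comes from one parent or the other."""
    return {seg: random.choice([parent_a[seg], parent_b[seg]])
            for seg in SEGMENTS}

progeny = reassort(human_virus, avian_virus)
print(progeny)  # e.g. HA from the avian parent, the rest human - a novel mix
```

With 8 segments and 2 parents there are 2⁸ = 256 possible progeny combinations from a single co-infection, which gives a feel for how quickly novel viruses can appear.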

Most people never get anything worse than a severe cold from flu infection, thanks to the ability of our immune system to fight it off. This system of cells and molecules can detect the flu virus by means of molecules known as antigens. Our immune system is essentially blind and has to fumble around feeling for things it recognises as invaders before it can destroy them. The main antigens that we use to detect flu are the HA and NA proteins I described earlier. Now let us consider re-assortment: if a flu virus exchanges the segments of its genome which code for HA and NA, then there is the potential to produce a flu virus with completely different HA and/or NA, which in turn will be completely unrecognisable to our immune system; the system won’t know what the virus ‘feels’ like, so will ignore it.

Given that this new flu virus has a whole new set of antigens, no-one in a population will have immunity to it, which allows for the very rapid spread of a highly infectious virus. This will start as an epidemic and spread from there to eventually become a pandemic. A small caveat: re-assortment is not always necessary for a pandemic to occur; sometimes flu viruses which infect animals can simply mutate to acquire the ability to infect humans instead, bringing with them a whole new set of antigens.

So the question is: what, potentially, can the consequences of all this be? And why are we so scared? In the next blog we will look in detail at one of the worst global pandemics in history and also at two more recent examples of a pandemic and near pandemic that never really lived up to the hype, and why they didn’t. So come back soon to learn more…

Wednesday, 26 October 2011

Have a read of this

For people who read my most recent post please have a read of this... http://www.sciencedaily.com/releases/2011/10/111025122615.htm

The viral particles I spoke about in my last post would appear to make us human!

(as a side note - I recently broke my laptop so won't be posting for a couple of weeks until it's fixed)

Tuesday, 18 October 2011

More Than Just Junk (post 2 of 2)


In the first half of this blog I discussed how a mere 2.5% of our genome actually codes the genes that make us who and what we are. A further 22.5% of our genome is in some way related to these genes, either through pseudogenes or as small fragments of larger, functional, genes. Here we are going to look at the remaining 75% of our genome, the so-called ‘extragenic DNA’, and discuss the fact that about 50% of our genome isn’t even human.

The vast majority of the extragenic DNA in our genome can be classed as human transposable elements (HTEs) or ‘jumping genes’. These are relatively small portions of DNA which sit in our genome and then, seemingly on a whim, move to another place within it, like riding on a bus and moving seats halfway through. So the first question to tackle is: how are these DNA elements able to jump around our genome? To understand this we must re-visit the process of making protein. From the DNA in our cells we make RNA; RNA is not connected to our genome, and in order to serve its purpose of producing protein it leaves the nucleus (where the genome is found) and moves to the rest of the cell. RNA is therefore free to move around our cells. Jumping genes take advantage of this by converting themselves into RNA and then reversing the process to make new DNA that is inserted elsewhere. To go back to the bus analogy, it is in actual fact more like sitting in one seat and cloning yourself, then moving the clone to another seat: the initial DNA remains in the genome while new DNA is made, via the RNA intermediate, and inserted elsewhere.

These jumping genes can be sub-divided into two main classes, depending on their size. These classes are known as long and short interspersed nuclear elements (LINEs and SINEs). The most abundant of the SINE family are known as Alu repeat elements, while the most abundant of the LINE family is known as L1.
The Alu repeat family has over 1.1 million copies in our genome, accounting for about 10% of the mass of the genome (that’s four times the amount taken up by our genes!). The repeats are short, only about 300 bases (bases being the units that make up DNA) in length, and repeat approximately every 4000 bases of our overall genome. This is the current estimate, but as I mentioned, these genes can freely jump around our genome and add new copies; it is estimated that a new Alu element is added about once in every 200 births. An interesting fact about these Alu repeats is that they are only found in primates, meaning they must have first formed at the point when primates split from other mammals in our evolutionary origins. This knowledge can then be used to infer when primates split from other animals on the evolutionary tree. While Alu repeats are classified as SINEs due to their short length, they can also be classified as non-viral HTEs because they are unable to produce the protein that is necessary to convert their RNA into DNA. In order to do this, they must borrow the protein from the viral HTEs, which takes us nicely onto L1.
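A quick check of the Alu arithmetic, using the 3.2 billion base-pair genome size from the earlier post:

```python
# Sanity check of the Alu figures: copies x length as a share of the genome.
genome_size = 3.2e9   # base pairs in the human genome
alu_copies = 1.1e6    # estimated Alu copies
alu_length = 300      # bases per Alu element

alu_fraction = alu_copies * alu_length / genome_size
print(f"Alu elements: {alu_fraction:.0%} of the genome")  # ~10%
print(f"vs genes at ~2.5%: {alu_fraction / 0.025:.1f}x")  # roughly 4x
```

The quoted figures hang together nicely: 1.1 million copies of 300 bases each is almost exactly 10% of a 3.2 billion base-pair genome.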

L1 is the most abundant LINE in our genome and, as I stated above, it can be classed as a viral HTE due to its ability to code for the enzyme which converts RNA back into DNA, allowing the jumping genes to jump. This enzyme is known as reverse transcriptase (RT) (transcription is the conversion of DNA to RNA; here it is being reversed, and all enzymes have -ase at the end) and is not human, or even eukaryotic (those who follow my blog will know about eukaryotes, but for those who don’t, the previous blog explains them); it is a protein coded for predominantly by viruses, in particular retroviruses. Retroviruses code for this protein because their genome is made of RNA and they must convert that RNA into DNA and insert it into their host’s genome in order to replicate. The best known of these retroviruses is undoubtedly HIV. Since L1 can produce RT, it must in some way have a viral origin, which, again, leads us nicely into our next discussion piece.

Retroviruses are a viral family which can convert their RNA genome into DNA and then insert that DNA into our genome. This happens to anyone infected with HIV or any other retrovirus. Usually the DNA will be inserted into ‘somatic cells’, which are the cells that make up you, from your skin to your hair to your intestines to your little toe. However, these are not the only cells in the body - we also have ‘germline cells’, which make up the next generation of you: sperm and egg cells. Consider this: if a retrovirus inserts its genome into a germline cell, which is then used to make a child, that retrovirus will be part of the child’s genome. This sounds strange and highly unlikely; it is, but that hasn’t stopped it happening on numerous occasions. The process forms what are known as endogenous retroviruses (ERVs, or HERVs - with the H being for human). Approximately 8% of our genome is made up of intact HERVs, providing a signature of our long co-existence with viruses. It is important to point out that these viruses would once have caused disease by infecting our ancestors, but the ones that kept a place in our genome lost their disease-causing potential, otherwise they would not still be here today. The vast majority of the HERVs in our genome are ‘silent’, meaning they have no function, as they do not produce protein. However, some are functional and, more amazingly, some are now vital to us!

Take for instance the placenta, an organ without which a fetus would be unable to survive, as it allows nutrients and oxygen to pass from the mother’s blood to her unborn baby. The placenta is made of cells fused together into a structure known as a ‘syncytium’. In the year 2000 it was found that a HERV known as HERV-W was almost exclusively expressed in human placenta. Further study revealed that HERV-W produced a protein whose function was to fuse cells together in order to make the placenta; this protein is known as ‘syncytin’. Later work found a second protein capable of this function (‘syncytin-2’), and this was also found to be produced by a HERV, this time HERV-FRD.

Let’s just take a step back and think about that; what were once two viruses managed to infect the cells of an ancient animal (or possibly just a simple cell) and become part of the genome, from there they were able to be passed from generation to generation and eventually found a role in producing a protein that could fuse cells in order to produce a placenta, one of the key features for the development of a human child. Boggles the mind a little bit!

So there you have it, the mysteries of our genome. A so-called ‘human’ genome in which only a tiny proportion of the DNA can actually be called human: alongside the mere 2.5% of the genome which gives us our genes, we have gene fragments and pseudogenes derived from functional genes, together making up just 25% of the genome. Approximately 50% of the remainder is accounted for by viral elements such as fully functional HERVs and fragments thereof, along with viral jumping genes such as L1. The rest is non-viral jumping genes such as the Alu family, which alone accounts for four times as much DNA as our genes. Makes you think, doesn’t it: can we really call ourselves human..?

Thursday, 6 October 2011

More Than Just Junk (post 1 of 2)


Back in 2003, after 13 long years, the Human Genome Project was completed and, for the first time, we were able to read the full genetic code of a human being. This publication was a landmark and resulted in a rapid increase in our understanding of many aspects of human biology. However, when first released, the genome also raised questions, the biggest of which was simply: where are the genes? It had been estimated that there would be around 100,000 genes in the genome to account for our complexity and genome size; however, the actual number falls well short of this. It is estimated that we have between 20,000 and 25,000 genes, close to the number of genes a mouse has. Now consider this: our genome is approximately 3.2×10⁹ base pairs (these are the chemical units which make up DNA) in length, and the longest of all the 20,000-25,000 genes is about 2.4×10⁶ base pairs, meaning the biggest gene in our genome takes up a mere 0.075% of it. So what is all the rest?
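The arithmetic behind that 0.075% figure, for anyone who wants to check it:

```python
# The biggest human gene as a fraction of the whole genome,
# using the figures quoted in the post above.
genome_bp = 3.2e9        # base pairs in the human genome
biggest_gene_bp = 2.4e6  # base pairs in the longest known gene

fraction = biggest_gene_bp / genome_bp
print(f"{fraction:.3%}")  # 0.075%
```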


I’ve already alluded to the fact that only a small amount of the genome actually comprises the genes that make us what and who we are. Of the whole of our genome, a mere 2.5% is genes that are expressed somewhere in our bodies at some time in our life. In order for DNA to be expressed it must first be converted into a molecule called mRNA, which is in turn used to make protein; the protein then functions in our bodies to make us what and who we are. The remaining 97.5% was once considered to be just ‘junk’ DNA that had no use. We now know there is a lot more to this remaining DNA than ‘junk’.

Let us first consider the small part of our genome that is actually human genes, or in some way related to human genes, which makes up a mere 25% of our genome. Of this 25%, only 10% is actually ‘coding’. This means (as I stated above) that only 2.5% of our entire genome gives rise to molecules that have a function in our bodies. The remaining 90% (of the 25%) is composed of ‘non-coding’ DNA.
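The nested percentages are easy to trip over, so here they are spelled out:

```python
# 25% of the genome is gene-related, and only 10% of that slice is coding.
gene_related = 0.25
coding_share_of_that = 0.10

coding = gene_related * coding_share_of_that
noncoding_gene_related = gene_related * (1 - coding_share_of_that)

print(f"coding: {coding:.1%} of the whole genome")              # 2.5%
print(f"pseudogenes/fragments: {noncoding_gene_related:.1%}")   # 22.5%
```

Which is how the 2.5% and 22.5% figures from the start of this pair of posts fit together.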

There are two main types of non-coding DNA, known as pseudogenes and gene fragments.
As the prefix pseudo- implies, the pseudogenes are essentially false genes because they cannot be made into proteins. There are 4 different types of pseudogene:

Processed pseudogenes
For DNA to have any effect it must first be converted to mRNA prior to being made into a protein. When DNA is made into mRNA, certain changes are made. One of the biggest changes is known as splicing. Consider buying a watch with a metal link strap and finding it is too big. You would have links taken out of the strap and then put the whole strap back together at a shorter length so that it fits. This is sort of what happens to the DNA. When the DNA is made into mRNA, certain parts known as introns are removed and the molecule is then put back together to be functional as mRNA. A processed pseudogene is created when this functional mRNA (so, no introns) is converted back into DNA instead of into protein as it should be. This new bit of DNA is then re-inserted into the genome. Due to the lack of introns, this DNA cannot be made into new mRNA, so it is no longer functional - a false gene.

Non-processed pseudogenes
As the name implies, this is DNA that is non-functional but not as a result of the process I outlined above. Non-processed pseudogenes are simply genes that were once functional but are no longer expressed. These can also be known as fossilised genes as they were once necessary, but due to lack of use they no longer function, a case of ‘use it or lose it’. An example of this is a gene with the catchy name of OR7D4. This gene (like many non-processed pseudogenes) is part of our sense of smell and in approximately 30% of people it is no longer functional, meaning these individuals are unable to smell a chemical known as androstadienone.

Disabled pseudogenes
These are genes which have been disabled by some external factor such as a protein which may have mutated and become irreversibly bound to the DNA, impeding its expression.

Expressed pseudogenes
These are genes which are expressed but have no known function. It is thought that these may be young genes whose proteins will evolve over time to gain a function in future generations.

Alongside pseudogenes, the other main non-coding DNA are gene fragments. These fragments are small parts of a gene that are not expressed and simply sit there as extra bits of DNA. For instance the HLA gene in the genome has what is known as a ‘leader’ followed by α1, α2 and α3 exons. Exons are in essence the opposite of introns in that they are the bits of DNA left in the mRNA when introns are removed (they can be thought of as the parts of the watch strap left in, to use that analogy again). Sitting next to the HLA gene in our genome there are gene fragments; one is made of the leader, α1 and α2, and the other is made of just α2. These are not expressed and simply take up space in the genome.

So that covers the 25% of the genome that is either genes or in some way related to genes. In the next post I will consider the remaining 75% and the fact that over 50% of our genome isn’t even human…

Wednesday, 21 September 2011

In the beginning evolution created the eukaryotic cell

If you take the book of Genesis literally, then 6000 years ago God took all of 6 days to create the earth and everything on it, including man. Now I’m pretty sure most people reading this blog will agree with me when I say that we shouldn’t take the book of Genesis literally. The earth is in fact closer to 4.6 billion than 6000 years old, and it wasn’t until about 2.5 million years ago that the genus Homo (to which we belong - Homo sapiens) first appeared. However, unlike in the book of Genesis, humans did not just appear - there were many tiny evolutionary steps along the way. The biggest, and arguably the most important, was the formation of the building blocks of all complex life.

A stylized Eukaryotic cell
The average human body is estimated to contain between 50 and 100 trillion cells (that’s potentially as high as 100,000,000,000,000). The strange thing about these tens of trillions of cells is that, at their most basic level, they are in essence pretty much all the same. This isn’t just a characteristic of our cells either; the same is true for all complex life on the planet. At the most basic level there is little difference between me, you, a giraffe or a whale. These building blocks of complex life are known as ‘eukaryotic’ cells (from the Greek meaning ‘a good kernel’ – having a nucleus) and, due to their ubiquity among all complex life, Eukaryota (organisms made of eukaryotic cells) are termed a domain of life (or an Urkingdom). Eukaryota are one of the three domains into which all life on the planet can be classified, and are the youngest of the three. What I want to look at with you here is where the Eukaryota domain came from.

A stylized Prokaryotic cell
Alongside the Eukaryota Urkingdom, the two others are known as the Prokaryota (again taken from Greek, this time meaning having no nucleus) and Archaea (‘ancient things’). These two Urkingdoms are thought to be of a similar age and date back to between 3.5 and 3.8 billion years ago – the birth of cellular life on earth. Both kingdoms are mostly made up of simple, single-celled organisms. Prokaryotes are bacteria such as Escherichia coli, Staphylococcus aureus, Neisseria meningitidis and so on. Archaea were once thought to be bacteria, and it wasn’t until the late 1970s that this was shown to be incorrect. Before the 70s, the best way to look at differences between cell types was down a microscope, considering the morphology of the cells. Archaea and Prokaryotes appear very similar; for instance, bacteria do not have a membrane-bound nucleus and neither do Archaea (whereas our cells do). However, in the 70s, Carl Woese and George Fox looked at molecular differences instead of morphological differences in what had classically been thought of as bacteria, and they found striking differences in some organisms, which they termed the Archaea. Archaea are best known for their ability to survive in extreme conditions (for example halophiles, which live in extremely salty environments) and are thought to be the oldest form of life, in part because of the harsh conditions present on Earth around 3.5 billion years ago.

So around 3.5 billion years ago the Earth was inhabited by simple, single-celled life forms, the Archaea and Prokaryotes. These life forms dominated the planet for about the next 1.5 billion years, until a giant evolutionary leap was made: the formation of the eukaryotic cell. It is proposed that around 2 billion years ago, an archaeal cell adept at phagocytosis (the process of taking smaller particles into itself) took in a smaller bacterial cell that was adept at using oxygen to produce energy. These two cells then entered a symbiotic relationship (a partnership in which both parties benefit), with the archaeal cell providing nutrients for the bacterial cell, which in turn acted as a power source for the archaeal cell. Due to this new internal source of energy, the cell was able to grow much larger and become more complex. The formation of this symbiotic union was first proposed by Lynn Margulis in 1966 and is known as the ‘endosymbiotic theory’. The proposal was that, as a result of the energy supplied by this bacterium, the cell was able to undergo a 200,000-fold increase in its genome size, allowing it to rapidly evolve and become more complex. It didn’t take long (in the grand scheme of things) for single cells to become multi-cellular (fossil records indicate the first multi-cellular eukaryotes to be around 1.8 billion years old) and from then on the only way was forward, life becoming ever more complex, giving rise to all of today’s complex life and, around 2.5 million years ago, the Homo genus. The rest, as they say, is history.

As I mentioned, all the cells that make up our bodies are eukaryotic cells, and you may be wondering what happened to the bacteria that entered that ancient cell to drive the evolution of complexity: in fact, we still have them. All the cells in our bodies (with a few exceptions) contain powerhouses known as mitochondria. Mitochondria are the site of respiration, where the food we eat and the oxygen we breathe react together to produce the energy we need to survive. These tiny energy producers are the descendants of the bacterial cell engulfed by an archaeal cell around 2 billion years ago. This theory is backed up by the fact that the mitochondria in our cells have a genome of their own – that’s right, our cells contain two genomes, our human one and a mitochondrial one which resembles a bacterial genome (highlighted by the fact that the mitochondrial genome is circular, as with many bacterial genomes, not linear like ours). Analysis has of course been done on the mitochondrial genome, and it was found to have an ancient relative among the α-proteobacteria, supporting the chimeric origin of our cells.
A mitochondrion

Further evidence to support the endosymbiotic theory for the origin of eukaryotic cells can be found through molecular comparisons of Archaea, Prokaryotes and Eukaryotes. An example can be seen in the formation of proteins. Protein formation uses molecules known as RNA, which are thought to be some of the oldest molecules in biology, if not actually the oldest – some believe the earliest form of life was a simple RNA strand (a hypothesis gaining ever more support). Eukaryotes use a form of RNA known as Met-tRNA to start the production of a protein strand. Prokaryotes, on the other hand, have the addition of a small chemical group, giving formyl-Met-tRNA. Archaea have Met-tRNA, like eukaryotic cells, indicating the close link between archaeal and eukaryotic cells.

So around 2 billion years ago, an event so monumental occurred that it shaped the planet from that moment until this, and will continue to do so for many more years to come. This event was the beginning of complex life and it all stemmed from one simple moment, the engulfment of a small cell by a larger one (something that occurs throughout our body on a daily basis). I think we will be hard pressed to find such a small event that has had such a giant evolutionary consequence as this one has.

Sunday, 18 September 2011

Give Us A Hand (post 2 of 2)

I left the last post with this question: what work is being done to make transplantation safer and to improve the availability of organs?

One of the major hopes for future years is the ability to engineer transplantable material in a laboratory, which may well reduce the need for organ donation and, better yet, reduce the risk of rejection, since the organs can be engineered to carry the recipient’s cells (and thus the correct HLA). Being able to make an organ in a lab using a patient’s own cells sounds like something out of science fiction, but it is fast becoming science fact. Early last year a team led by Korkut Uygun was able to engineer a rat liver in the lab. This was achieved by taking a liver and stripping it of all its living cells. This left a framework with the correct structure of a liver, to which the team were able to add a whole new set of hepatocytes (the major cell type that makes up the liver). Think of this like drawing a stick man and then adding detail to turn it into a cartoon person. The hepatocytes that were added were produced from stem cells taken from the recipient rat, meaning the team produced a (semi-)functional liver with the correct HLA for the recipient rat. Impressive as this work is, it still has some way to go before being of real clinical use – for starters, livers aren’t made only of hepatocytes, and the work was only on a small rat liver – but it is no doubt an incredibly exciting prospect. Another approach to engineered organs was successfully used only a few months ago, in which a patient received a windpipe transplant. The interesting thing about this transplant was that the windpipe framework was made of a completely synthetic material, which was then coated in the recipient’s cells. Hats off to the surgeon Paolo Macchiarini and the producer of the synthetic material, Alexander Seifalian (a researcher at UCL, I’d like to add), for this pioneering work.

Not only could we one day produce organs in a lab, but we may also be able to use animal organs, a procedure known as xenotransplantation. This isn’t without its complex issues, as you can imagine, not least in terms of different organ sizes, but some organs do hold promise (pig hearts and our hearts, for example, are of very similar size and structure). Being able to use animal organs would certainly avoid the shortage of available organs, but whether people will accept the idea of receiving a pig heart remains to be seen. The safety of such procedures is also unclear, as there may be a risk of zoonosis (the link is back to an older blog I wrote about HIV in which I discuss zoonosis).

Another bright hope for the future is the ability to induce immunological tolerance (another topic I will write a blog on in the near future) to the new tissue. As I mentioned above, our immune system is highly tuned to detect our specific HLA, but when the cells of our immune system are first produced they each carry different receptors for binding HLA, and obviously not all of them will bind correctly. The cells must therefore learn to recognise the correct HLA, and any cells unable to do so are destroyed. If we could find a way to make the immune cells recognise the new tissue as self (that is, make the cells tolerant to the new tissue) it would avoid the whole issue of rejection.

One final hope lies with regenerative medicine (a link to a small article with more detail on regenerative medicine). This field is in its early days but centres around the potential use of stem cells to repair damaged tissues, which could one day remove the need for transplantation entirely.

Transplantation is not without its issues, but work in the area has an incredibly bright future and the power to have an astonishing impact on many people’s lives. One problem that may be encountered, however, is funding. A major source of funding in the medical realm comes from the big pharma companies, yet these companies make huge profits from transplant patients, who have to continue taking (and buying) drugs for the rest of their lives. There may therefore be reluctance to fund ways around the issue of rejection, which in turn may make money for this research hard to come by (something I hope will not be the case). Still, the future looks bright for the world of transplantation.

Thursday, 15 September 2011

Give Us A Hand (post 1 of 2)

57 years ago, Joseph Murray and his team at the Peter Bent Brigham Hospital in Boston performed a landmark surgery. For the first time it was possible to take a kidney from one individual and successfully transplant it into another, thus beginning the age of transplantation. Transplantation is a procedure that is now commonplace in many countries; for example, last year in the UK nearly 4,000 organ transplants were carried out. However, transplantation is not without its complications and risks. Even when successful, a patient is still not out of the woods: for the rest of their life they are required to take a cocktail of immunosuppressive drugs, and they may still need a new transplant within 10 years. As an example, the 10-year survival rate for a first-time transplant of a deceased-donor kidney is only 48%. What I’d like to look at with you here is why it is necessary for a patient to continue to take so many drugs and what the future may hold for the field of transplantation.

Any patient who receives a transplanted organ must take a vast array of immunosuppressive drugs for the rest of their life. These drugs are vital for the survival of both the patient and the organ, because the immune system is finely tuned to attack anything it sees as foreign, such as someone else’s organ. Our immune system must be able to recognise and attack invading microbes that look to cause damage (known as pathogens) while at the same time not attacking our own cells (self-cells). The immune system must therefore be able to distinguish between self and non-self (pathogens) in order to be of any use. Detection of anything by the immune system is achieved through the binding of immune cells to molecules on other cells. The molecules the immune system binds to on pathogens are simply known as antigens, while the (main) molecule used to detect self is known as HLA (human leukocyte antigen). HLA is a wonderfully complex molecule and I will probably write a whole blog on it at some later date, but for now the extent of the detail I will go into is to say that unless two people are genetically identical (e.g. identical twins) they will not have the same HLA; your HLA is as unique as you are. You may now be starting to see where I’m going with this. Since your HLA is unique to you and stops your immune system attacking your cells, if you receive an organ from someone not genetically identical, its cells will have a totally different HLA – to the immune system this appears no different to an antigen on a pathogen, giving the immune system licence to attack. If it weren’t for immunosuppressive drugs, all transplanted material would simply be destroyed by the recipient’s immune system, a phenomenon known as rejection. This is the main reason for the failure of any transplant operation and can occur anywhere from minutes after the operation to years later, hence the need for lifelong immunosuppression.
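
The self/non-self logic described above can be caricatured in a few lines of code. This is a deliberately crude sketch: the allele names are just examples, and real HLA matching involves many genes and thousands of allele variants.

```python
# Crude sketch of self/non-self discrimination (illustrative only).
# The immune system "tolerates" any cell whose HLA it recognises as
# self, and attacks anything else -- including a transplanted organ.

SELF_HLA = {"HLA-A*02:01", "HLA-B*07:02"}  # example alleles for one recipient

def immune_response(cell_hla):
    """Return the (hugely simplified) response to a cell bearing this HLA."""
    return "tolerate" if cell_hla in SELF_HLA else "attack"

print(immune_response("HLA-A*02:01"))  # own cell -> tolerate
print(immune_response("HLA-A*24:02"))  # donor cell -> attack (rejection)
```

In this cartoon, immunosuppressive drugs amount to switching off the `attack` branch entirely, which is exactly why they also leave patients vulnerable to genuine pathogens.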

Transplantation of organs is no doubt a fantastic achievement, but it isn’t without its complications and dangers. Rejection is not the only issue facing transplantation, as there is also a distinct lack of donor tissue: in the US, for example, it is estimated there is a deficit of 4,000 livers each year. Without getting into the debate over whether organ donation should be an opt-out system (something I believe it should) there is clearly a need for improvement in availability and safety. But what work is being done to tackle these issues?

As with my other blogs I’ll leave you hanging on that note and post the next half in a day or two, so stay tuned!
Some slightly outdated stats but a nice image to demonstrate the lack of organs (and where organs are in the body)

Tuesday, 6 September 2011

Think Of A Number Between 1 and 20... (post 2 of 2)

I left the last post of this blog with the tantalising idea of being able to read people’s minds by looking at the electrical signals produced in the areas of the brain that govern speech. I would now like to consider the process of speech. When you want to say anything you must first produce the idea and the meaning of what you wish to say. Following that, you must choose the words you wish to use to convey this meaning before finally moving all the different muscles needed to produce the sounds of speech. These three processes are all controlled by different areas of the brain and have characteristic signals that we may be able to intercept and interpret.

The third and final stage of speech lies with the movement of muscles in the tongue, the jaw and the whole of the facial area. Simply say a word slowly and think about all the movements that are occurring. These movements are controlled by the motor cortex, which produces characteristic electrical signals for any type of movement. So in terms of mind reading, we may be able to focus on the motor cortex and look at the facial movements that are about to be made, seconds before speech occurs, in order to work out what will be said – an approach used by Philip Kennedy and his team in 2008. Their work was conducted on a patient with locked-in syndrome by recording the electrical signals produced in the motor cortex when the patient thought of specific words. They were able to decipher three vowel sounds based on the electrical activity recorded from the motor cortex and then put these signals through a voice synthesiser in real time to produce the sounds. This is a modest start, but Kennedy believes that within 5 years they may be able to produce a working vocabulary of over 100 of the words that locked-in patients are most likely to use (yes, no, etc.). This would no doubt be a fantastic achievement; however, the approach is specific to a single patient and relies on penetrating the brain with electrodes, which carries a high level of risk. It may therefore be better to look further back in the speech process, at events which are universal rather than patient-specific, and to see if the research can be done in a less invasive way.

Using the motor cortex as the basis of 'mind reading' therefore comes with certain problems. Not only is it patient-specific, but attempting to read someone's mind seconds before they speak a word hardly counts as mind reading in the true sense (although it undoubtedly has an application for those suffering from locked-in syndrome). The motor cortex is merely the climax of the neurological process of speech. There are two other areas of the brain, known as the 'staging areas' of speech, which send signals to the motor cortex, which in turn transmits to the facial muscles. The first of these two areas is known as ‘Wernicke’s Area’, which deals with the meaning and ideas that are then passed to ‘Broca’s Area’, which produces the words necessary to convey them. One of the best times to study the brain is when things go wrong, and this was the case in the discovery of Wernicke’s and Broca’s areas (this is a link to some detail about their discovery).

So we know the regions of the brain that are the major players in the production of speech; all that is left to do is to look at the electrical signals produced in these areas, identify the words they correspond to, and put them through a voice synthesiser. It sounds so simple when said like that… but obviously it is not.


Let’s start with reading electrical signals from the brain. The classical approach for measuring electrical activity is to place a net of electrodes on a subject's scalp and record the signals from there. While this may be safe, the skull interferes with the signals, making it difficult to get the fine detail from small brain areas that is needed for deciphering the processes of language production. The best way to get fine detail is to penetrate the brain with electrodes, as used by Kennedy et al.; however, as discussed above, this carries a high level of risk. The halfway house is a method known as electrocorticography (ECoG), whereby the skull is opened up and a net of electrodes is placed directly over specific areas of the brain.

ECoG provides a good method that mixes safety with fidelity when it comes to measuring the electrical signals of the brain, and it was put to good use in 2010 by Bradley Greger and his team. In their work on the motor cortex and ‘Wernicke’s Area’ using ECoG, they were able to detect the signatures of 10 words: yes, no, hot, cold, thirsty, hungry, hello, goodbye, more and less. Most of the data came from the motor cortex, but this was the first sign of looking at the staging areas in an attempt to mind read.

As if safely reading the electrical signals of the brain weren’t hard enough, next we have the issue of words. While it is pretty difficult to give an accurate count of how many words there actually are in a language (take dog: is it one word or two – a noun, the animal, and/or a verb, to follow persistently?), the estimates range from 171,476 to over 1 million. This means we might need to find over one million distinct electrical signals and program them into a speech synthesiser, which is no small task! And when you consider that it is estimated a new word is added every 98 minutes, the task would only continue to grow.


There is however a short cut to programming all these words for a speech synthesiser and measuring the electrical signals. Think of a word like “school”, which is made up of four distinct sounds. “S” “K” “OO” “L.” These sounds are four examples of the building blocks of language. In English there are about 44 of these distinct sounds, known as phonemes. The hope is to find the electrical signatures for these 44 sounds, either in Wernicke’s or Broca’s Area, then construct the intended word from the findings. This hope is based on work by Schalk et al. published earlier this year which used ECoG on the motor cortex and Wernicke’s Area and was able to detect the phonemes “oo,” “ah,” “eh,” and “ee.” These weren’t the only findings, as it was also discovered that when looking at the motor cortex there was little difference between spoken words and imagined words, whereas when looking at Wernicke’s Area there was a much larger signal seen when words were only imagined as opposed to spoken – giving tantalising evidence of the inner voice I mentioned earlier and suggesting that it may perhaps be generated in this area of the brain.
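
As a toy illustration of why phonemes shrink the problem: imagine a classifier has already labelled each burst of brain activity as one of the ~44 English phonemes. Turning those labels into words is then just a dictionary lookup. Everything below is hypothetical and hugely simplified (the hard part, classifying noisy ECoG signals into phonemes, is assumed away):

```python
# A toy lexicon mapping decoded phoneme sequences to words. With ~44
# phoneme detectors you can in principle cover any word, instead of
# training a separate detector for each of a million words.
LEXICON = {
    ("s", "k", "oo", "l"): "school",
    ("y", "eh", "s"): "yes",
    ("n", "oh"): "no",
}

def decode(phoneme_sequence):
    """Look up a sequence of decoded phonemes in the lexicon."""
    return LEXICON.get(tuple(phoneme_sequence), "<unknown>")

print(decode(["s", "k", "oo", "l"]))  # prints: school
```

The pay-off is combinatorial: 44 reliably detectable phonemes compose into an effectively unbounded vocabulary, and new words cost nothing but a lexicon entry.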

Mind reading is a long way off, not least because it effectively still requires brain surgery. However, with the way things are developing, there is already strong hope that people suffering from severe paralysis may soon be able to achieve at least a basic level of communication. Questions no doubt have to be raised as to how far we should take this technology. If it could be used on anyone without the need for surgery, would it be abused? Where might it eventually stop? I’m sure you have had thoughts you’d rather other people never heard…

By the way, that number you thought of - was it 17?

Monday, 5 September 2011

Think Of A Number Between 1 and 20... (post 1 of 2)

Now if you have thought of a number between 1 and 20, the chances are you said it to yourself in your head. This inner voice is also used when reading these words on your screen; I myself am using it while typing. When we think of words in our inner voice we generate an internal dialogue, a conversation with ourselves. The question I would like to consider is this: what if we could hack into someone else's internal dialogue and understand it?


Besides the ethical questions this undoubtedly raises, there are profound scientific applications for the ability to read a person’s internal dialogue. Take for instance a person suffering from locked-in syndrome, whereby they are awake and aware but completely unable to communicate in any way more complex than the blinking of the eyes. If we could read the minds of patients suffering from this terrible syndrome, we would be able to provide them with a much higher quality of life. The key reason it is believed we may indeed one day be able to hack into the internal dialogue is that the brain produces a characteristic electrical signal when we think of any word. All we need to do is work out which words correspond to which electrical signals. No small task…

So the key to mind reading lies in the electrical signals produced by the brain. These signals were first recorded in 1924 by Hans Berger, who invented the electroencephalograph (EEG) by placing electrodes on the head of a subject and measuring the electrical output of all the brain's neurones. The ability to read these electrical signals progressed over the next 70 years, and by the mid-1990s it was possible to have them processed by a computer to make a cursor move on a screen. This “brain-computer interface” (BCI) worked by training the computer to recognise two distinct electrical signals and respond to each in the appropriate way: one signal would correspond to left and the other to right. The signals were produced by thinking of specific movements (i.e. using electrical activity generated in the motor cortex); for instance, thinking of hitting a tennis ball produced a signal that the computer interpreted as left, while thinking of kicking a football produced a different signal, interpreted as right.
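
The control loop of such a two-class BCI can be sketched in a few lines. This is purely illustrative: a real system classifies noisy EEG feature vectors, whereas here the classification step is replaced by ready-made labels, and all names are made up.

```python
# Minimal sketch of two-class BCI cursor control (illustrative only).
# Each imagined movement, once classified, maps to a cursor command.

SIGNAL_TO_COMMAND = {
    "imagine_tennis_swing": "left",
    "imagine_football_kick": "right",
}

def update_cursor(x, signal):
    """Move the cursor one step left or right according to the decoded signal."""
    command = SIGNAL_TO_COMMAND.get(signal)
    if command == "left":
        return x - 1
    if command == "right":
        return x + 1
    return x  # unrecognised signal: don't move

x = 0
for s in ["imagine_tennis_swing", "imagine_football_kick", "imagine_football_kick"]:
    x = update_cursor(x, s)
print(x)  # net movement: -1 + 1 + 1 = 1
```

The design point is that the vocabulary of the interface is only as rich as the set of distinguishable signals, which is exactly why two-class cursor control is so painfully slow for spelling out words.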


Early EEG recording by Berger

While moving a cursor on a screen is no doubt a fantastic achievement, having to communicate solely by moving a cursor around a screen would be pretty difficult and slow. Just imagine moving a mouse around an image of a keyboard on a screen, let alone having to focus on precise movements in your head, which a computer has to interpret. If we really want to read minds, we need to be able to focus in on the brain areas governing speech and decipher the electrical signals produced there in real time. If we can do that, we may then be able to produce a speech synthesiser to convert these electrical signals into sounds – in essence, giving the patients a voice.

The question is, which areas of the brain do we need to look at, and what work is being done to study these areas?


Stay tuned to find out...

Tuesday, 30 August 2011

On The Origins Of HIV (part 3 of 3)

There may be some who have read my recent blog posts and disagreed with some of the things I stated. HIV, like many other things, is surrounded by conspiracy theories, so I will now consider a few of these and give the evidence for why they deserve to be called conspiracy theories.

The first of these theories is the one that had the most support for a time: the idea that HIV was produced in a lab in America rather than being a product of the natural evolution of SIV following its zoonosis. This theory was touted very strongly by some individuals. The idea that HIV was man-made goes hand in hand with a further conspiracy that it was made for biological warfare against black and homosexual people. A door-to-door survey conducted in California by Klonoff and Landrine in 1999 found that 27% of the black Americans they spoke to believed that HIV had been made in a government lab. A further study, this time a telephone survey conducted by Bogart and Thorburn, found that 20% of men and 12% of women strongly agreed with the statement “AIDS is a form of genocide against blacks.” They also found that 30% of men and 27% of women agreed with the statement “HIV was produced in a government lab.”


It was claimed that HIV was produced under the auspices of the Special Virus (Cancer) Program, which ran in the US between 1962 and 1978, and was tested on human subjects, thereby starting the pandemic. At the time the claim was made, the dates of the evolution of SIV into HIV had not been fully elucidated, and the earliest known examples of HIV were those found in America in 1983. Since then it has been calculated that HIV is much older than that (arising between 1884 and 1924, as I explained in post 2) and the first known sample of HIV was found in the Belgian Congo in 1959. I suppose we can’t rule out for certain that HIV was produced in a lab between 1962 and 1978, but this could not have been the start of the pandemic spread.

A second conspiracy theory centres on the production of the hepatitis B vaccine. This vaccine began pilot testing in the US in the early 70s and was given to gay men to help stop the spread of the hepatitis B virus (HBV). In order to produce the vaccine it was necessary to grow it in chimpanzee cells. It is therefore argued that the vaccine may have become contaminated with SIV, which then entered humans through injection of the vaccine, allowing the mutation to HIV. For a time the dates seemed to fit: the vaccine began production in the early 70s and the first known case of HIV was in 1983. However, as with the theory of its production in a lab, much earlier examples of HIV were later found, which laid to rest the idea that the HBV vaccine was to blame for the existence of HIV.

The last theory to consider involves another vaccine, this time the oral polio vaccine (OPV). In 1955, Jonas Salk produced the inactivated polio vaccine (IPV), which quickly made a massive impact on the number of cases of polio worldwide. Later, in 1962, Albert Sabin produced OPV which, instead of a dead form of the polio virus, used a much-weakened version (an attenuated form), giving much more specific (and therefore better) immunity. In order to attenuate the poliomyelitis virus so that it did not cause symptoms, it was necessary to grow it in primate cells. As had been argued for the HBV vaccine, Edward Hooper, in his book The River, put forward the idea that the cells used were chimp cells that may have been infected with SIV, thus giving SIV the chance to enter humans and mutate. This idea was strongly refuted, and it was argued that the cells could not have contained SIV because measures would have been taken to avoid such contamination. Further evidence against OPV as the cause of HIV can be found in the fact that, as the name implies, OPV is given orally, and the stomach and intestines are highly adapted to prevent the entry of infectious agents, making it highly unlikely that, had the vaccine been contaminated with SIV, the virus would have been able to enter the human bloodstream. And finally, once again, the dates just don’t fit, given the finding of an HIV-positive patient from 1959.

So the origins of HIV were once shrouded in mystery and conspiracy, with many people believing it to have been man-made and some going as far as believing it to be made for use as a biological weapon. However, as is often the case, the more science looked into the true origins of the virus, the more the evidence piled up against these conspiracies. It is now generally accepted that HIV was formed through natural evolution following the transmission of SIV to a human host, most likely through the preparation of bush meat in west central Africa sometime in the late 19th or early 20th century. There are those who still won’t fully accept these facts, but I hope that I have managed at least to show that the science behind the origin of HIV is far more compelling than any covered-up wrong-doing, deliberate creation or accidental contamination of a vaccine.

So that concludes the first of my science blog posts. I hope I have at least provided some information and education, if nothing more. Stay tuned for more of my blog – I'm not sure what will be next; I’ll see what takes my fancy.

Wednesday, 24 August 2011

On The Origins Of HIV (part 2 of 3)

So the question is this: what discovery was made in 1999 that painted the picture of the evolutionary origins of HIV? It all started with a frozen sample of SIV found in the chimpanzee subspecies Pan troglodytes troglodytes. This sample of SIVcpz (the cpz standing for chimpanzee) was discovered and analysed by Gao et al., who published their findings in the journal Nature. In their paper they listed 5 criteria which provide strong evidence of a link between SIV and HIV. These were…

1) Similarities in viral genome organisation
2) Phylogenetic relatedness (a way of looking at how closely related two things are in an evolutionary sense)
3) Prevalence in natural host
4) Geographical coincidence
5) Plausible routes of transmission

SIVcpz satisfied all of these criteria. In particular, this strain of SIVcpz showed very close genetic similarity to HIV-1 (the pandemic strain of the virus) indicating that HIV may have stemmed from SIV (it is thought to be that way round, as SIV has been known of for much longer and is now believed to be around 32,000 years old, much older than the human counterpart).

Once strong evidence of a link between HIV and SIV had been found, the next mystery to solve was how SIV mutated to produce HIV. Many theories have been proposed, the most plausible of these being known as the “hunter theory”. This theory centres on the premise that SIV was able to get into humans through the preparation of bush meat in Africa. The killing and eating of chimpanzee meat would have allowed the blood of the chimps to come into contact with any wounds or cuts the hunters may have suffered. This mixing of blood gave SIV the chance to enter a new human host. Once inside humans, the SIV would have come under attack from our immune system, as with any virus. This in turn would have applied selection pressure to the SIV, forcing it to evolve rapidly in order to survive. This rapid evolution eventually made the virus perfectly adapted to its new host and less well adapted to its former host – thus completing the transition from SIV to HIV. And so it began…

From this humble beginning, the infection of a few hunters in Africa, HIV was able to spread worldwide and infect some 33.3 million people. However, the question still remained of when SIV jumped from chimps to man (a process known as zoonosis) and subsequently evolved into HIV. The earliest samples of HIV date back to just over 50 years ago. HIV was found in a plasma sample from an adult male in what is now the Democratic Republic of the Congo, dating to 1959 (Zhu et al. 1998), and later in a lymph node sample from an adult female from the same area, dating to 1960 (Worobey et al. 2008). Using these early samples, Worobey et al. estimated the zoonosis to have occurred sometime between 1884 and 1924, a date calculated by using the mutation rate between the two early samples of HIV and then comparing that rate of change with the total change from the SIV genome. Think of this in terms of a car journey – if you know the distance between points A and B on a particular journey is 30 miles and that this took an hour, you can easily work out the speed in mph. If you also know where you started the journey and how far you’ve travelled in total, let’s say 120 miles, you can work out that you started the journey 4 hours ago (assuming a constant speed). In this metaphor, the genomes of the two dated HIV samples are points A and B, and the SIV genome is the starting point.
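
The car-journey arithmetic can be made explicit in a couple of lines. Note the numbers here are the ones from the analogy, not the actual genetic distances used by Worobey et al.:

```python
# Molecular-clock logic via the car-journey analogy (illustrative numbers).
# Step 1: rate = distance between two dated samples / time between them.
# Step 2: time since origin = total distance from the start / that rate.

dist_A_to_B = 30.0       # "miles" between the two dated samples (genetic distance)
time_A_to_B = 1.0        # hours between them (the samples were a year apart)
speed = dist_A_to_B / time_A_to_B          # 30 mph (mutation rate)

dist_from_start = 120.0  # total "distance" travelled from the SIV starting point
journey_time = dist_from_start / speed     # 4 hours, assuming a constant rate

print(speed, journey_time)  # -> 30.0 4.0
```

The key assumption, as in the analogy, is a roughly constant speed: molecular clocks only date an origin reliably if the mutation rate between the dated samples reflects the rate over the whole journey.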

So that concludes what is generally accepted as the origins of HIV. In my next post we will have a look at some of the conspiracy theories surrounding HIV and why they deserve the title “conspiracy” in order to round off this 3 part blog post.  

Monday, 22 August 2011

On The Origins Of HIV (part 1 of 3)

It was estimated in 2009 that a staggering 33.3 million people were infected with HIV. To put this into some context, that’s more than half the population of Britain infected with a virus that will most likely result in their death. HIV infection leads to acquired immunodeficiency syndrome (AIDS), which leaves sufferers at the mercy of a plethora of diseases that are often harmless to a healthy individual. What I’d like to look at with you for my first three posts is where HIV came from. Is it some blight set upon us by a vengeful God? Was it designed in an American research lab? Is it a biological warfare attack on black and homosexual people? I think it’s pretty safe for me to say no, no and no to all of those options, though there are many people who believe at least one, if not more, of them. Over these 3 posts I’ll take you through the reasons why I believe I can safely say no to those options, before touching on some of the conspiracy theories surrounding HIV for a little amusement.

Firstly, a quick bit of history. HIV (of a sort) first came to the forefront of the scientific community in 1981, when five homosexual men in Los Angeles were found with rare infections and fevers of “unknown origin” (Gottlieb et al. 1981). Later studies of these men found them to have highly deficient immune systems, consistent with an acquired immunodeficiency. The virus responsible for this condition was isolated in 1983 (Barre-Sinoussi et al. 1983) and later termed the human immunodeficiency virus (HIV). As more and more people were diagnosed with HIV, minds began to turn to the origins of the virus.

It had been known for some time that monkeys and apes were susceptible to a similar disease, caused by a slightly different virus known as simian immunodeficiency virus (SIV). Genetic analysis of different SIV strains showed striking similarities between the two viruses, far too strong to be down to chance alone. This led the scientific community to believe there must be an evolutionary link between HIV and SIV. But while the link was hypothesised, it wasn’t until 1999 that credible evidence was found.

Stay tuned to find out what changed in 1999…

Thursday, 11 August 2011

To Christen the blog

So, a brief introduction before getting into the science. I'm currently a student at UCL on my summer break, wanting to share some of the fantastic world of science with you. On this blog I will post random bits of this world that I find interesting enough to write about. Chances are there will be a large focus on HIV, as this is an area of real interest to me, but other things like circadian rhythms, infectious diseases, the immune system and so on may well come up. Some science-based sport topics may also appear, depending on what floats my boat (if I get cramp playing football, chances are I'll write an article on cramp).


Feel free to keep checking back in to see the things that are amusing my mind each week and hopefully to learn a thing or two (or at least kill a few minutes while the kettle boils).


First post will follow soon, to be titled "On The Origins Of HIV".


Take it easy!