The Compatibility Gene
At 6:13 a.m. on 3 December 1967, after an operation lasting four and a half hours, Darvall’s heart began beating in Washkansky. Barnard later wrote that ‘from birth [my life] had built towards one moment in an operating theatre when a blue heart turned red with life and a man was reborn. At that moment two lives had been fused into one.’8 The grandiosity of Barnard’s sentiment was deserved; his transplant success was momentous. Kidney transplantation had been achieved many years earlier but the public were captivated by the transfer of the most symbolic organ of all, the heart. After all, nobody writes poems about kidneys.
Immediately after the transplantation, Barnard became a celebrity. ‘On Saturday, I was a surgeon in South Africa, very little known. On Monday I was world-renowned.’ And two years later he divorced his wife. He embarked on a string of relationships with glamorous and famous women, marrying twice more – the third time, when in his mid-sixties, to an eighteen-year-old model. He became outspoken on a range of issues, from protesting against South African apartheid to suggesting that Princess Diana could have survived her 1997 car-crash had the emergency services in Paris acted differently.
Many surgeons around the world had raced to perform the first successful heart transplantation, and nobody suspected Barnard would be first. In the same week as Barnard’s operation, a Brooklyn hospital tried and failed to give a new heart to a nineteen-day-old child.9 In London, meanwhile, a physician attempted to persuade the National Heart Hospital to attempt a heart transplant but, according to the physician, the hospital delayed the operation because of complex ethical and legal problems and the patient died waiting.10 It was perhaps inevitable that the first successful heart transplant would take place in a country with less red tape, because the ethical and legal problems were so intractable.
To transplant a heart, it has to be fresh. But how could anyone take a beating heart out of somebody when to do so would kill them? The answer required establishing the concept of ‘brain death’ or ‘irreversible coma’ – and this was precisely the term used by the committee assembled at Harvard in 1968, to deal with the problem that Barnard’s successful transplant had made pressing.
The committee deliberated from January to August 1968 before agreeing on four criteria as defining this new state of human existence. The patient’s body had to manifest: 1) no response to pain, 2) no spontaneous movement, 3) no reflexes, and 4) no electrical activity in the brain.11 These conditions had to persist for at least twenty-four hours, in the absence of any nervous-system depressants. Behind these criteria lies the subtle idea that not all our tissues and cells die at the same time. So someone’s nervous system can be dead – and the person is truly unable to function in any way – but other organs could still be ‘alive’, able to work fine for a while longer. This was already agreed in the medical profession – albeit unofficially – and probably the biggest impact of the Harvard Committee’s report was to propel the issue of brain death to the forefront of public and academic scrutiny.
Philosophical, ethical, legislative and religious literature erupted in response. Just three months after Barnard’s success, US Senator Walter Mondale opened a subcommittee hearing on the questions raised by transplantation and said: ‘These advances and others yet to come raise grave and fundamental ethical and legal questions for our society – who shall live and who shall die; how long shall life be preserved and how shall it be altered; who shall make decisions; how shall society be prepared.’12 Popular magazines Reader’s Digest and Time also discussed the issues and questioned whether it was appropriate for a doctor to unilaterally decide when death occurs – where do the family’s wishes fit in?13
On the other hand, some medical professionals felt the criteria were already too restrictive and proposed, in the mid-1970s, that brain death be defined by a loss of higher-brain functions, rather than requiring all electrical signals in the brain to be inactive. When the brain loses its higher functions, so the argument ran, psychological traits of that person are lost: death, in other words, has really occurred. It’s easy to see from here how defining brain death is highly problematic. Compounding the difficulties, patients fulfilling the criteria for brain death may not appear dead. They might still be breathing, even if being helped to do so artificially. There is trauma for everyone involved in this ultimate decision.
In making such judgements, people often look to religion for guidance. Considerable resistance to designating somebody as ‘brain dead’ when seeking transplant donors has come, for example, from some branches of Orthodox Judaism, which insist that an irreversible end to cardiac and respiratory activity must remain the only criterion for death. Other branches of Judaism – Conservative and Reform Judaism, for instance – have been more accommodating. As far as the Catholic Church is concerned, Pope Pius XII paved the way forward in 1957 when he clarified that responsibility for giving a clear and precise definition of death lay with the doctor. For Islam, the Qur’an itself does not precisely define death, but some Islamic countries, such as Turkey, now have explicit laws that accept the definition of brain death. Hinduism, Protestantism and many other world religions similarly do not have any formal resistance to the idea of brain death.14
Still, the debate will only ever become more difficult as new medical advances will inevitably complicate the situation further. Brain parts cannot be transplanted today, but what if this were to become possible in the future? Such an idea may seem monstrous, but in the late 1960s many claimed it was monstrous to transplant a heart. While we have long since stopped considering the heart as the body’s emotional, soulful core, higher-brain function really could contain something of our identity. Rather than a robust and all-encompassing philosophical definition of death that can endure through future medical advances, we have more of a practical consensus amongst medical practitioners that works within current technology.15
Today, someone certified as brain dead can pass on their heart, lungs, liver, kidneys, pancreas, intestine and tissues, including corneas, skin and bone. One donor can save or transform the lives of up to nine others. Nevertheless, every day, three people in the UK and seventy-seven in the US die while waiting for the right donor.16 As Medawar made us realize back in the 1950s, the problem is the immune reaction against somebody else’s tissues and cells. If the right match can be made between donor and recipient, this obstacle can be overcome. But what exactly is it that needs to be matched between people? What are the big things that vary in cells and tissues from different people – things that the immune system is especially reactive to?
Long before compatibility genes – in fact, long before we knew anything at all about genes – the first human variation to cause immune reactions was discovered: blood groups. Blood – a liquid suspension of different types of cells – is exquisitely complex. Red blood cells carry oxygen, white blood cells fight disease, and the liquid portion carries small protein molecules, including those that cause clotting. For blood transfusions, the primary issue of compatibility concerns the different sugar molecules that different people have at the surface of their red blood cells. The names of the well-known blood groups – A, B, AB and O – denote the fact that some of us have sugar A, some B, some both (blood group AB), while others have neither (blood group O).
The Austrian scientist Karl Landsteiner discovered these blood groups in 1901 at the age of thirty-two. Until that time, Landsteiner hadn’t had an easy time in his career as a scientist. He had received his medical doctorate ten years earlier from the University of Vienna and, after spending time in Munich, he returned there in November 1897 only on a voluntary basis. An application to the medical faculty at Trieste was turned down in 1898 and he finally managed to obtain a paid research position in the University of Vienna’s Institute of Pathological Anatomy – officially as an assistant to the institute’s head, although in practice he pursued his own ideas.17 Not until after his discovery of blood groups would he become a full member of the University’s medical faculty.
Having read about experiments showing that human blood can react to animal blood, Landsteiner wondered if a reaction might occur when blood from different people was mixed together. He reasoned that, since blood from different species must vary, perhaps blood from individuals within the same species also differs. In 1901, he set about testing this with a characteristically simple – yet rigorous and brilliant – experiment. He began by taking blood from six women who had recently given birth – perhaps because these samples were easy to obtain – and separated their red blood cells from the serum (the liquid part of blood, without cells or clotting factors). He then mixed cells from one person with the serum from another, in all possible combinations, and drew up a table detailing whether or not a reaction – indicated by the red blood cells clumping together – occurred in each instance.
He couldn’t explain what caused the cells to clump (we now know that it’s part of a specific immune response) but he looked for patterns in whose cells and serum reacted. Nobody’s serum reacted when recombined with their own cells. But cells combined with serum from different people did react in some cases. According to the predominant view of the time, which held that problems with blood transfusions reflected one’s history of disease, this reaction might have been ascribed to the presence of disease within some of the samples that caused cells from others to clump.18 Landsteiner, however, had an epiphany. What if, he thought, his results indicated not disease, but simply a natural incompatibility between different people’s blood?
He immediately tried the same experiment with six people from his own research lab, including himself. The importance of this second experiment was that he knew these people weren’t ill in any obvious way, so any reactions seen this time would be unlikely to be caused by disease. He again found that some people’s cells and serum would react. He became confident that his hunch was right – that there was a natural variation in different people’s blood. But was there any pattern to the reactions? It was a huge leap forward for science when he realized that he could account for the pattern of reactions by proposing that humans have three different types of blood – he called them A, B and C (C being what we now call O, the blood type that lacks both A and B sugars).
At first, not even Landsteiner – never mind anybody else – quite realized how supremely important this discovery was. When reporting these experiments in his paper ‘Agglutination of normal people’s blood’, published in German in 1901, he concluded by saying: ‘I hope this will be of some use.’19 Landsteiner always spoke humbly of his achievements. Even as late as 1930, in a very rare public interview that he gave to the Austrian daily newspaper Der Wiener Tag soon after the announcement that these experiments had won him the Nobel Prize, he said that his discovery of blood groups would surely not interest a layperson.20 Of course, the truth is that this is of immense importance to us – it paved the way for successful blood transfusions, and, fundamentally, these experiments revealed the first molecular mark of our individual differences.
Medawar would later call Landsteiner’s work one of the great triumphs of modern clinical biology.21 But at the time it took many years for Landsteiner’s discoveries to become widely known; he wasn’t one for actively seeking to promote his work and he made no effort to make his papers easy to read – telling the truth was all he aspired to in his writing.22 It wasn’t until doctors had to explain why their blood transfusions often didn’t work at the start of the First World War that the enormous importance of Landsteiner’s discovery became widely recognized. Even then, long after the practical importance of his work was clear, a letter that Landsteiner wrote to his student on 12 February 1921 indicates he was being criticized by some scientists in the US because his original paper in 1901 only described three blood groups, whereas, in fact, there are four.23 It had simply taken another year for the fourth, much rarer, group (AB) to be identified by one of Landsteiner’s students.24
Science meant everything to Landsteiner, and he worked long hours for these experiments while living in seclusion with his mother, with whom he had a close relationship following his father’s death, when Landsteiner was aged seven. He lived with his mother until her death in April 1908, by which time Landsteiner was aged thirty-nine, and it was only after this that he married. He kept a cast of his mother’s face on his bedroom wall for the rest of his life. In the lab, he was a good mentor to some but he also soured relationships with others by complaining often about his working conditions. He found long-term friendships difficult.25
When going to Stockholm in 1930 to collect his Nobel Prize, it’s curious that he didn’t take his wife or son with him. In fact, when he heard that he had won the prize he didn’t even tell his wife or son about it at first – they only found out later that evening when a friend came round to visit.26 Also strange is that in the group photos of the 1930 Nobel Prize winners, everyone is facing the front except for Landsteiner – he turned his chair to face sideways, deliberately looking in a different direction to everyone else around him. Landsteiner, it can safely be said, was nothing if not eccentric.
At the turn of the twentieth century, around the time that Landsteiner discovered blood groups, the work of another Austrian burst into view – even though this Austrian was long dead. The Austrian monk Gregor Mendel had died in 1884, aged sixty-one, with his life-time’s work remaining unknown – his seminal discovery, written in 1865, was published in forty-four pages entitled ‘Experiments on plant hybrids’ in the journal of the local Brünn Biological Society. On 8 May 1900, the British zoologist William Bateson took a train from Cambridge to London. With him, he had Mendel’s seminal paper, which had been published thirty-five years earlier. Mendel had spent his time in his monastery’s greenhouse, cross-pollinating pea plants to learn how the colour and shape of peas passed from one generation to the next. He discovered that wrinkled and smooth pea plants, for example, didn’t blend this trait to produce offspring with just slightly wrinkled peas. Rather, the next generation were either smooth or wrinkled, just like one of the parents. The huge importance of this horticultural observation was that it implied that traits are inherited in discrete units, which we now call genes.
Mendel’s paper had been recently cited by Dutch and German scientists and, as his train steamed along, Bateson settled back in his seat and was instantly gripped.27 According to recollections of Bateson’s wife, Beatrice, he quickly realized the impact of Mendel’s work – that traits must be inherited in discrete units – and henceforth championed the discovery. In 1905, he coined the term ‘genetics’ and so began a new branch of science.
Like everyone at the time, Landsteiner was not aware of Mendel’s work until Bateson and others rediscovered it. So Landsteiner had not initially considered that blood groups could be inherited, but when he did, probably one or two years after Bateson’s train ride, he immediately realized that blood groups could help settle paternity disputes,28 which was in effect an early use of the power of genetic fingerprinting. We now know that your blood group is determined by a gene involved in adding sugar molecules to the surface of red blood cells. This gene comes in A, B and O forms. So, if you inherited the A version of this gene from your mother and the B version from your father, then your red blood cells will have both A and B sugars and you’ll be blood group AB; inherit two A genes and you’ll be blood group A, and so on.
As a foetus, your immune system learns to be tolerant to your own blood-group sugars. This, of course, is Burnet’s and Medawar’s theory of acquired tolerance in action. But your blood serum contains antibodies able to attack the versions of those sugars that you don’t possess. Somebody who is blood group A cannot, for example, accept blood from a group B donor, because a huge immune reaction will occur, causing fever, shock, kidney failure, and possibly even death.29
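The rules just described – inherit a pair of gene versions, display the sugars they encode, and react against any sugar you lack – can be captured in a few lines. The sketch below is purely illustrative (the function names are my own, and it is not clinical guidance):

```python
# Illustrative sketch of the ABO logic described above.
# Red cells carry the sugars a person's genotype encodes (the O version adds none);
# serum carries antibodies against whichever sugars that person lacks.

def phenotype(genotype: tuple) -> set:
    """Map an inherited pair of gene versions to the sugars on red cells."""
    return {g for g in genotype if g in ("A", "B")}

def reaction(donor_cells: set, recipient_cells: set) -> bool:
    """A reaction occurs if donor cells carry a sugar the recipient lacks,
    because the recipient's serum holds antibodies against that sugar."""
    return bool(donor_cells - recipient_cells)

group_a = phenotype(("A", "O"))   # blood group A
group_o = phenotype(("O", "O"))   # blood group O: neither sugar

print(reaction(phenotype(("B", "O")), group_a))  # True: group A cannot take group B blood
print(reaction(group_o, group_a))                # False: O cells carry neither sugar
```

This also shows why group O donors are compatible with everyone in the ABO system: their cells carry no sugar for any recipient’s serum to attack.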
Showing the foresight of a truly great scientist, Landsteiner even tried to relate his discoveries in blood group compatibility to problems in skin grafting.30 But this had to wait for Medawar and Burnet. While working in his lab, Landsteiner suffered a heart attack on 24 June 1943 and reluctantly went to hospital, where he died two days later. Every scientist dies with the frustration of problems being unsolved.
Blood-group genes come in just three versions – such human differences are simple compared to the immense variation in our compatibility genes that control skin graft success. Recall that these genes vary the most from person to person. There are three compatibility genes that encode for proteins found on nearly all cells in your body (formally called class I compatibility genes) and these are termed A, B and C. You inherit a set of these three genes from your mother and another from your father. In this way, we each carry six class I compatibility genes: two As, two Bs and two Cs. It’s theoretically possible to inherit identical versions from Mum and Dad, but that’s unlikely, because in all there are 1,243 versions of the A gene, 1,737 different B genes and 884 Cs found so far.31 So the number of combinations in these genes that we can each inherit is mind-boggling.
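Just how mind-boggling can be checked with back-of-envelope arithmetic on the version counts quoted above. The sketch assumes the three gene versions combine freely; in reality they sit close together on one chromosome and tend to be inherited as a set, so the practical number is smaller:

```python
# Counting possible combinations of the class I compatibility genes,
# using the version counts given in the text.
A_VERSIONS, B_VERSIONS, C_VERSIONS = 1_243, 1_737, 884

# Possible sets of one A, one B and one C gene from a single parent
# (assuming the versions can combine freely):
sets_per_parent = A_VERSIONS * B_VERSIONS * C_VERSIONS
print(f"{sets_per_parent:,}")  # 1,908,636,444 - nearly two billion

# Each of us carries one such set from each parent, and order doesn't
# matter, so the number of possible pairings is n(n+1)/2:
n = sets_per_parent
pair_combinations = n * (n + 1) // 2
print(f"{pair_combinations:.2e}")  # on the order of 10**18
```

Even under the more realistic, linked-inheritance picture, the variation dwarfs the three versions of the blood-group gene.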
In Landsteiner’s experiments mixing serums with red blood cells, compatibility genes weren’t important. That’s because human red blood cells are short-lived cells with the relatively straightforward task of carrying oxygen around the body and – unlike most human cells – they don’t have the proteins encoded by compatibility genes.32 But matching compatibility genes is the major factor for the long-term success of organ transplants.
For most organ transplants, hospitals will assess the match between recipient and potential donors across two class I compatibility genes – A and B – and one of the class II compatibility genes called DR. If a perfect match isn’t available, they will check directly whether or not the recipient has an acute immune reactivity against the donor’s cells. For bone marrow transplants, they also check the match of two other compatibility genes, C and DQ.33 For a kidney transplant in which no genes were matched, the half-life of the graft would be about seven years – but if the most important six can be matched, the graft’s half-life rises to between twelve and twenty years. This is, of course, fantastic, but it also leads to dilemmas.
