On April 18, scientists at Sun Yat-sen University in Guangdong, China, published an article in the obscure open-access journal Protein & Cell documenting their attempt at using an experimental new method of gene therapy on human embryos. Although the scientific significance of the results remains open to question, culturally the article is a landmark, for it has reanimated the age-old debate over human genetic improvement.
The Chinese scientists attempted to correct a mutation in the beta-globin gene, which encodes a crucial blood protein. Mutations in this gene lead to a variety of serious blood diseases. But the experiments failed. Although theoretically the new method, known as CRISPR (short for “clustered regularly interspaced short palindromic repeats”) is extremely precise, in practice it often produces “off-target” mutations. In plain English, it makes a lot of changes in unintended locations, like what often happens when you hit “search/replace all” in a word-processing document. The principal conclusion from the paper is that the technique is still a long way from being reliable enough for the clinic. Nevertheless, the science media and pundits pounced on the story, and for a while “#CRISPR” was trending on Twitter.
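The “search/replace all” analogy can be made concrete with a short sketch. The snippet below is illustrative only, not real genome-editing software, and the DNA string is a made-up stand-in (loosely patterned on the start of a beta-globin-like coding sequence), but it shows the underlying point: a long, specific target matches one site, while a short target matches, and therefore “edits,” unintended sites too.

```python
# Illustrative sketch only (not real CRISPR software): naive
# "search/replace all" on a DNA string, the article's analogy
# for off-target edits. The sequence below is a hypothetical
# stand-in, not an actual gene.

genome = "ATGGTGCACCTGACTCCTGAGGAGAAGTCTGCCGTTACTGCCCTGTGGGGCGAGGAG"

def edit_all(seq: str, target: str, replacement: str) -> tuple[str, int]:
    """Replace every occurrence of `target`, returning the edited
    sequence and the number of sites changed."""
    count = seq.count(target)  # non-overlapping matches
    return seq.replace(target, replacement), count

# A long target is specific: it matches exactly one site.
edited, hits = edit_all(genome, "GACTCCTGAGGAG", "GACTCCTGTGGAG")
print(hits)  # 1 -- only the intended site is edited

# A short target behaves like the off-target problem: it also
# matches elsewhere, so the "edit" lands in unintended locations.
edited, hits = edit_all(genome, "GAGGAG", "GAGGAT")
print(hits)  # 2 -- the intended site plus an unintended one
```

The more context you give the search string, the fewer spurious matches you get, which is roughly why CRISPR's short guide sequences can misfire in a three-billion-letter genome.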
CRISPR is the fastest, easiest, and most promising of several new methods known collectively as “gene editing.” Using them, scientists can edit the individual letters of the DNA code, almost as easily as a copy editor would delete a stray comma or correct a spelling error. Advocates wax enthusiastic about its promise for correcting the mutations behind serious genetic diseases like cystic fibrosis and sickle-cell anemia. Other applications might include editing HIV out of someone’s genome or lowering genetic risks of heart disease or cancer. Indeed, every week brings new applications: CRISPR is turning out to be an extraordinarily versatile technique, applicable to many fields of biomedical research. I’m pretty immune to biomedical hype, but gene editing has the marks of a genuine watershed moment in biotechnology. Once the kinks are worked out, CRISPR seems likely to change the way biologists do experiments, much as the circular saw changed how carpenters built houses.
The timing of the paper was provocative. It was submitted on March 30 and accepted on April 1; formal peer review was cursory at best. Two weeks before, scientists in the United States and Europe had called for a moratorium on experiments using CRISPR on human “germ-line” tissue (eggs, sperm, and embryos), which pass alterations on to one’s descendants, in contrast to the “somatic” cells that compose the rest of the body. The embryos in the Chinese experiments were not implanted and in fact could not have become humans: They were the unviable, discarded products of in vitro fertilization. Still, the paper was a sensational flouting of the Westerners’ call for restraint. It was hard not to read its publication as an East Asian Bronx cheer.
The circumstances of the paper’s publication underline the fact that the core of the CRISPR debate is not about the technological challenge but the ethical one: that gene editing could enable a new eugenics, a eugenics of personal choice, in which humans guide their own evolution individually and in families. Commentators are lining up as conservatives and liberals on the issue. Conservatives, such as Jennifer Doudna (one of CRISPR’s inventors) and the Nobel laureates David Baltimore and Paul Berg, have called for cautious deliberation. They were among those who proposed the moratorium on using CRISPR on human embryos. “You could exert control over human heredity with this technique,” said Baltimore. George Q. Daley, of Boston Children’s Hospital, said that CRISPR raises the fundamental issue of whether we are willing to “take control of our genetic destiny.” Are we ready to edit our children’s genomes to perfection, as in the movie Gattaca? Could the government someday pass laws banning certain genetic constitutions or requiring others?
The CRISPR liberals are optimists. They insist that we should proceed as rapidly as possible, once safety can be assured—for example, that an “edit” wouldn’t inadvertently cause cancer while treating thalassemia. Some, such as the Oxford philosopher Julian Savulescu, insist that we have a “moral imperative” to proceed with engineering our genomes as fast as our sequencers can carry us. Savulescu believes it would be unethical to have the technology to produce better children and not use it. (For once, I’m with the conservatives.)
This debate is very familiar to a historian. Thus far, CRISPR is following the classic arc of breakthrough methods in genetics and biotech. First come millennialist debates over the new eugenics; then, calls for caution. A few cowboys may attempt rash experiments, which often fail, sometimes tragically. Finally, the technology settles into a more humdrum life as another useful tool in the biologist’s kit.
Each instance of this pattern, however, occurs in a different context, both scientifically and culturally. And while scientists, philosophers, and other commentators have been discussing the scientific risks and merits of CRISPR ad nauseam, no one seems to be placing the debate itself in this broader historical setting. Over the last 150 years of efforts to control human evolution, the focus on the object of control has tightened, from the population, to the individual, to the gene—and now, with CRISPR, to the single letters of our DNA code. Culturally, during this period, the pendulum has swung from cooperative collectivism to neoliberalism. The larger question, then, is: With the emergence of gene editing during an era of self-interested free-market individualism, will eugenics become acceptable and widespread again?
Until relatively recently, the only way to create genetically better humans was to breed them. In 1865, Charles Darwin’s half-cousin Francis Galton sought both to inspire society’s richest, wisest, and healthiest to breed like rabbits and to persuade the sick, stupid, and poor to take one for the empire and remain childless. In 1883, he named the plan “eugenics,” from the Greek eugenes, meaning “well-born” or “well-bred.” In Galton’s mind, eugenics was a much kinder approach to population management than ruthless Malthusian efforts to eliminate charity and public services. However misguided eugenics may seem today, Galton saw it as a humane alternative to simply letting the disadvantaged freeze, starve, and die.
In early-20th-century America, Galton’s plan suddenly seemed far too passive and slow. A new generation of eugenicists, spurred by novel experimental methods in genetics and other sciences, sought to take a firmer hand in controlling the reproduction of the lower classes, people of color, and the insane or infirm. Can-do Americans passed laws restricting marriage and immigration to prevent the degradation of an imagined American “stock.” Some, such as the psychologist Henry Goddard and the biologist Charles Davenport, sought to round up the so-called feebleminded (those with a mental age below 12) and institutionalize them as a sort of reproductive quarantine—adult swim in the gene pool. But others pushed for laws to simply sterilize those seen as unfit. That way, they could then marry or have sex with whomever they wanted without endangering the national germ plasm. Altering the body seemed more humane than confining it.
Involuntary sterilization soon lost any veneer of benevolence. In the United States, thousands of people were sterilized against their will, under eugenic laws passed in more than 30 states. For the most part, educated middle- and upper-class white Protestant men decided who was fit to reproduce, and naturally they judged fitness in their own image. In Germany, a decades-old program of Rassenhygiene or “race hygiene” took a cue from the vigorous American eugenics movement. The fingerprints of Davenport and other American eugenicists are on the infamous 1933 Nazi sterilization law. Controlling bodies was not so humane after all.
Around midcentury, many American scholars and scientists turned to environmental and cultural solutions for social problems, including poverty, mental illness, and poor education. However, some thinkers—biologists and others—advocated for more and better biotechnology. How much cleaner and more rational it would be, they argued, to separate sex from reproduction and make babies in the laboratory, using only the highest-quality sperm and eggs. In his 1935 book Out of the Night, the geneticist Hermann Joseph Muller called this “eutelegenesis.” He and others painted sunny pictures of free love and sperm banks. But three years earlier, in Brave New World, the English novelist Aldous Huxley had taken a much darker view of the scientific control of evolution. “Bokanovsky’s Process”—test-tube human cloning—was a “major tool of social stability!” said his director of hatcheries and conditioning. It was the biotechnical core of “Community, identity, stability,” the motto of the One World State.
Since then, each step in the development of biotechnology has seemed to bring Bokanovsky’s Process closer to realization. In 1969, the Harvard biologist Jonathan Beckwith and colleagues discovered how to isolate, or “clone,” a gene. At about the same time, Dan Nathans and Hamilton Smith at Johns Hopkins discovered how to use a type of molecular scissors called restriction enzymes to snip, insert, and reattach DNA strands in the lab. Each enzyme cuts the DNA at a specific site. (CRISPR, too, is based on naturally occurring bacterial enzymes.) In the 1970s, researchers discovered more than 100 different restriction enzymes, forming a battery of tools to cut DNA almost anywhere one wished. The new research enabled genes to be recombined—cut and pasted at will, even between species. To techno-optimists, genetic engineering would make the old, inhumane eugenics unnecessary. There would be no need to prevent people with bad genes from reproducing if one could simply repair those genes.
Public outcry. Eugenic angst. Predictions of enzymatic Armageddon. The city of Cambridge, Massachusetts—home to Harvard and MIT—banned recombinant DNA research outright. (Some of the schools’ top scientists promptly decamped for New York, Maryland, and California.) In 1974, fearing a massive clampdown from on high, scientists self-imposed a moratorium on recombinant DNA research. Ten months later, at a meeting at the Asilomar Conference Center near Monterey, California, David Baltimore, Paul Berg, James Watson, and other scientific luminaries agreed on a set of guidelines for laboratory safety: how to prevent, for example, a lethal bacterium from escaping the lab and causing epidemics or massive agricultural or ecological disaster. Within five years, fears had subsided and recombinant DNA had become a standard laboratory technique—forming the basis of a burgeoning biotech industry, whose early triumphs included synthetic insulin, the cancer drug interferon, and exogenous erythropoietin, a hormone that regulates the production of red blood cells.
But Asilomar is not the only—or even the best—historical comparison for CRISPR. Since the early 1960s, visionary scientists had imagined an era of “genetic surgery,” in which defective genes could simply be repaired or replaced. Rather than curing diseased patients, or segregating them from the “healthy” population, researchers said they would cure diseased molecules. In 1980, the UCLA researcher Martin Cline made the first primitive attempt at using engineered molecules therapeutically. Like the CRISPR researchers, he targeted the beta-globin gene. Cline, however, ignored more than just his colleagues’ recommendations: Flouting National Institutes of Health regulations, he went overseas and injected a live virus containing the beta-globin gene into the bone marrow of two young women. Fortunately, the dosage was too small to have any effect; the women were not helped, but neither were they harmed. Cline, on the other hand, suffered: He was publicly censured and had his federal funding restricted.
By the late ’80s, gene therapy seemed poised for a breakthrough. Led by NIH researcher W. French Anderson, starry-eyed biologists anticipated cutting and pasting their way to the end of genetic disease. Hundreds of grant applications were filed for gene-therapy research. In 1990, Anderson and colleagues conducted the first approved trial, on an exceedingly rare disease called adenosine deaminase deficiency, in which the loss of a single enzyme wipes out the entire immune system. The trial appeared to be a success. But the gene-therapy cowboys were humbled in 1999, when Jesse Gelsinger, a teenager suffering from a rare liver disorder, died of massive organ failure from the engineered virus used to ferry a gene into his cells. Then, in 2002, a French gene-therapy trial to correct immune-system failure was a success—at least until the subjects of the experiment developed leukemia, because the virus used as a delivery vehicle disrupted a gene required for normal cell growth. The FDA then suspended retroviral gene-therapy trials on bone-marrow cells until regulatory measures could be implemented. Unintended consequences killed the gene-therapy hype.
In the succeeding years, gene therapy has quietly returned. Old methods have been improved, new methods have been developed, and researchers have had limited success with treatments for a variety of cancers, AIDS, and several eye diseases. Hope remains high among the optimists, but even they acknowledge that the promise remains greater than the results.
The gene-therapy craze of the 1990s yielded two fundamental ethical distinctions. First, researchers distinguished engineering the germ line from engineering somatic cells. Germ-line modifications are not used to treat disease in an individual, but to prevent it (or lower the risk) in future individuals. Unlike preventive public-health measures such as quarantine, however, meddling with the genome has a high risk of unintended consequences. The genome is like an ecosystem, with every element ultimately connected to every other. Inadvertently damaging alterations could thus be seen as harming the genomes of others without their consent. Yet Anderson was willing to consider germ-line modifications should somatic gene therapy eventually prove safe. (Scientists like Harvard’s George Church make similar arguments about CRISPR today.) The second distinction was that gene therapy should only be used to treat disease—not to enhance or alter normal traits. In short, gene therapists considered therapeutic applications ethical but enhancement not—and creating a master race was right out. (Anderson was more principled about some things than others; he is currently serving time for child molestation.)
Parallel to the development of genetic engineering, advances in reproductive technology made Muller’s and Huxley’s vision of test-tube babies a reality. On July 25, 1978, Louise Brown was born through in vitro fertilization, a technique developed by Patrick Steptoe and Robert Edwards. Combining IVF with new genetic-screening technologies made it technically possible to reject embryos with undesirable traits—or select those with desirable ones. “You do not need the still distant prospect of human cloning to begin to get worried,” wrote Anthony Tucker in The Guardian. James Watson, who had recently recanted his conservative position on recombinant DNA research, nevertheless predicted: “All hell will break loose.”
An even braver new world dawned in 1996, when the Roslin Institute in Scotland announced the birth of Dolly the sheep—the first “cloned” large mammal. The technique, formally known as somatic-cell nuclear transfer, revived the debate over designer babies. The US National Bioethics Advisory Commission launched an investigation and, in 1997, published a report that led to the unusual step of restricting a procedure that did not exist. The NIH prophylactically prohibited cloning human beings with federal funds.
Researchers promptly announced plans to attempt it with private money. One was Brigitte Boisselier, who was supported by Clonaid, the research arm of the transhumanist group the Raëlians. Its leader, Raël (né Claude Vorilhon), claimed to have been contacted by extraterrestrials. On December 27, 2002, Boisselier announced that a cloned baby, called Eve, had been born, although Clonaid wouldn’t reveal any data or produce Eve for inspection. The mainstream scientific community rolled its collective eyes. Once again, though, the dust eventually settled, and somatic-cell nuclear transfer remains a legitimate laboratory technique. Clonaid claims to be cloning away still. But no armies of Hitlers have stormed across Europe, and to date, no genetically optimized Superman has communed with the groovy dudes from the next galaxy.
One important result from the cloning debate was that the kibosh on genetic enhancement began to relax. In 2001, Julian Savulescu started to argue for “procreative beneficence,” a principle that holds that people are morally obligated to have the best children possible—including through genetic-enhancement technologies. (Savulescu’s Enhancing Human Capacities, published in 2011, continues the campaign.) The eugenically named, self-proclaimed visionary Gregory Stock published Redesigning Humans in 2002; it rosily envisioned writing “a new page in the history of life, allowing us to seize control of our evolutionary future.” What could go wrong?
CRISPR, then, is the latest chapter in a long, darkly comic history of human genetic improvement. Like whole-gene engineering in the 1970s, gene editing is proving remarkably versatile in basic science research: New applications appear weekly. But conservative researchers such as Doudna, Baltimore, and Berg insist that the taboos against germ-line engineering and enhancement remain in place. However, notwithstanding Baltimore’s and Berg’s reassurance that eugenics is “generally considered abhorrent,” some commentators are actively and publicly advocating what they consider a new kind of eugenics. Their argument is couched in technology, but it rests on politics.
The eugenics movement of the early 20th century was rooted in a spirit of collectivism. Ideals of progress and perfection dominated American culture. Across the political spectrum, Americans sought social improvement through a variety of reforms, ranging from public health to food production to workplace environments and education. Such a project required collective effort. Government legislation was broadly accepted as a tool of positive change. Cooperation for the good of society was a sign of good citizenship. And science, epitomizing rationality, efficiency, and mastery over nature, was society’s most potent tool of progress.
Eugenics, often referred to as “racial hygiene,” was associated with the Progressive hygiene movement in public health. In 1912, Harvey Ernest Jordan, who later became dean of the University of Virginia’s medical school, addressed a conference of eugenicists on the importance of their field for medicine. He asserted, with the buoyancy of the era, that the country was emerging from a benighted period of selfish individualism—which Mark Twain had dubbed the “Gilded Age”—into an enlightened phase of concern for one’s fellow man. Eugenics was of vital interest to medicine, he wrote, because it sought to prevent disease and disability before it occurred:
Modern medicine, yielding to the demands of real progress, is becoming less a curative and more a preventive science…. This represents the medical aspect of the general change from individualism to collectivism.
Progressives had faith in government as an instrument of social—and biological—change. By the 1960s, that faith had eroded. The Cold War had sparked an anti-authoritarian New Left that criticized state control as a corruption of the collectivist spirit. Left-wing biologists sometimes found themselves in an awkward position. When Beckwith, a staunch leftist, cloned the first gene, he held a press conference warning against the dangers of his own research. “The work we have done may have bad consequences over which we have no control,” he said. His graduate student James Shapiro commented, “The use by the Government is the thing that frightens us.”
During the 1970s, New Deal liberalism began to give way to neoliberalism. At the turn of the 21st century, biotech and info tech had grown as dominant as Big Oil and Big Steel had been in 1900. The Internet has become our railroad system. The last 30 years have seen Jordan’s “general change from individualism to collectivism” reversed: Elites justify increasing inequality with a libertarian rhetoric of individual freedom.
Individualism, say the biotech cheerleaders, immunizes us against the abuse of reproductive genetics. In their free-market utopia, control over who gets to be born would be a matter of personal choice, not state order. Couples should have the freedom to undergo in vitro fertilization and to select the healthiest embryos—even the best ones, because “best” is no longer a matter of official mandate. For those who define eugenics as state control over reproduction, this is not eugenics.
Others adopt a definition closer to that of the 1921 International Eugenics Congress: Eugenics is “the self-direction of human evolution.” Many critics over the years have argued that eugenics wasn’t wrong; rather, it was done badly and for the wrong reasons. So it goes today. In 2004, Nicholas Agar published Liberal Eugenics, a philosophical defense of genetic enhancement. In a nutshell, he argues that genetic enhancements ought not to be treated differently from environmental enhancements: If we are allowed to provide good schools, we must be allowed to provide good genes. Like Savulescu, Agar insists that it is immoral to prohibit parents from producing the best children they can, by whatever means. In 2008, in the foreword to a reissue of Charles Davenport’s 1912 Heredity and Eugenics, Matt Ridley, a viscount, zoologist, science writer, and Conservative member of the House of Lords, argued that the problem with eugenics was its underlying collectivist ideology. Selfishness would save the human race. “There is every difference between individual eugenics and Davenport’s goal,” he wrote. “One aims for individual happiness with no thought to the future of the human race; the other aims to improve the race at the expense of individual happiness.” Similarly, Gregory Stock wrote that a “free-market environment with real individual choice” was the best way to protect us from eugenic abuse. Liberal eugenics is really neoliberal eugenics.
In which case, it’s hard to see how individual choice and the invisible hand will defang the dangers that eugenics still poses. In a 2011 Hastings Center Report, the Australian bioethicist Robert Sparrow showed how libertarian, individualist eugenics would lead to the same ends as good old-fashioned Progressive eugenics. Savulescu’s “best possible children” must naturally have the most opportunities to flourish and the fewest impediments to a happy, fulfilling life. Accordingly, parents should select the traits that society privileges. But in our current society, who has the most opportunities? Of course: a tall, white, straight, handsome man. If neoliberal genetic enhancement were to proceed unregulated, then social convention, cultural ideals, and market forces would drive us toward producing the same tired old Aryan master race.
Further, the free market commodifies all. Neoliberal eugenics creates a disturbing tendency to regard ourselves, one another, and especially our children as specimens to be improved. The view that “the genome is not perfect,” as John Harris, another pro-enhancement philosopher, puts it, perpetuates the notion of genetic hygiene. Even cautious reports, like one that appeared recently in the International Business Times, propagate this idea: “any hope that [CRISPR] will help physicians ensure spotless genomes,” they write, remains distant. Not to put too fine a point on it, but whatever the time line, the goal of a spotless genome implies genetic cleansing.
Scholars of disability have mounted a vigorous critique of the pursuit of genetic perfection. Call it the Gattaca defense: By granting individuals the power and permission to select against difference, we will be selecting for intolerance of difference. But Sparrow notes that enforcing diversity is itself morally problematic. Should gene editing become a safe and viable option, it would be unethical to prohibit parents from using it to correct a lethal genetic disease such as Tay-Sachs, or one that causes great suffering, such as cystic fibrosis or myotonic dystrophy.
Where, then, does one draw the line—and how easy would it be to enforce? Harris, Savulescu, Agar, and others say that once you let in any modifications, you have to allow them all.
What gets all too easily lost in this debate is that it takes place in a genocentric universe. Even those opposed to genetic enhancement presume DNA to be the ultimate determiner of all that is human, and biotechnology the most effective tool for solving social problems. Such genetic determinism is inherently politically conservative—whatever one’s personal politics.
Here’s why: Sci-fi genetic fantasies, whether hand-waving or hand-wringing, divert our attention from other, more important determinants of health. Studies by the World Health Organization, the federal Office of Disease Prevention and Health Promotion, the Centers for Disease Control and Prevention, and academic researchers leave no doubt that the biggest factors in determining health and quality of life are overwhelmingly social. Genetics plays a role in disease, to be sure, but decent, affordable housing; access to real food, education, and transportation; and reducing exposure to crime and violence are far more important. In short, if we really wanted to engineer better, happier, healthier humans, we would focus much more on nurture than on nature.
The reason we don’t is obvious: the very selfishness that neoliberals proclaim as the panacea for eugenic abuse. Genetic engineering primarily benefits industry and the upper classes. In vitro fertilization and genetic diagnosis are expensive; genetic therapy would be even more so. Genetic medicine is touted as the key to ending “one-size-fits-all” medicine, instead tailoring care to the idiosyncrasies of each individual. President Obama’s Precision Medicine Initiative, announced in his last State of the Union address, extolled a vision of individualized care for all. But historians of medicine have shown that the rhetoric of individualized medicine has been with us at least since the days of Hippocrates. The reality is that for 25 centuries, individualized treatment has been accessible to the rich and powerful, while lower-status people in every era—be they foreigners, slaves, women, or the poor—have received one-size-fits-all care, or no care at all.
The idealists and visionaries insist that costs will drop and that technologies now accessible only to the rich will become more widely available. And that does happen—but new technologies continually stream in at the top, leading to a stable hierarchy of care that follows socioeconomic lines. Absent universal healthcare, ultra-high-tech biomedicine depends on a trickle-down ideology that would have made Ronald Reagan proud.
Further, problems framed in molecular terms invite molecular solutions. The eternal eugenic targets—disease, IQ, social deviance—are overdetermined; one can explain them equally well as social or as biomedical problems. When they’re defined as social problems, their solutions require reforming society. But when we cast them in molecular terms, the answers tend to be pharmaceutical or genetic. The source of the problem becomes the individual; the biomedical-industrial complex, along with social inequities, escapes blame.
In short, neoliberal eugenics is the same old eugenics we’ve always known. When it comes to controlling our evolution, individualism and choice point toward the same outcomes as authoritarian collectivism: a genetically stratified society resistant to social change—one that places the blame for society’s ills on individuals rather than corporations or the government.
I’ll be excited to watch the workaday applications of techniques like CRISPR unfold, in medicine and, especially, basic science. But sexy debates over whether reproductive biotechnology will permit us to control our genetic evolution merely divert us from the cultural evolution that we must undertake in order to see meaningful improvement in human lives.