The chemical weapon attack in Syria that killed at least 70 people employed the nerve agent sarin. And it is believed that the nerve agent VX was used to assassinate Kim Jong-nam in a public airport. These uses of nerve agents violate the international Chemical Weapons Convention (CWC). While the Syrian government signed the CWC in 2013, it never ratified the treaty, and of course, signatory agreement does not guarantee compliance. Nor do such treaties among nation states necessarily provide any security against the development and use of biological and chemical weapons by non-state actors. These events are disturbing and, we believe, portend a larger and ever-growing issue of how such neurological agents could be used, altered, or developed anew as weapons.
International advances in brain science over the past decade are enabling ever greater capabilities to control neurological processes of thought, emotion and behavior. So, while the CWC and the Biological and Toxin Weapons Convention (BTWC) prohibit development of drugs, microbes and toxins that can be made into weapons, these prohibitions are not absolute: many of these substances can be, and are, used in basic neuroscience research, or in research programs that seek to develop defenses against biochemical weapons. What's more, new tools and methods with which to edit genes, such as CRISPR/Cas9, can make it easier to modify bacteria, viruses or certain toxins to be weaponized. Until recently, these approaches were regarded as not yet ready for human applications; but the use of CRISPR-modified cells in humans by scientists in China has established both a new timetable and a new level of risk for such possibilities. Gene editing kits are commercially available and not excessively expensive. Thus, real concerns arise about the ability of both state and non-state actors to bio-engineer agents, including those that act on the nervous system. In light of this, last year, then Director of National Intelligence James Clapper identified the very real potential to use gene editing to create lethal or highly disruptive biological agents, a warning that was seconded by the President's Council of Advisors on Science and Technology (PCAST).
In its military response to the events in Syria, the United States government has strongly communicated that continued use of these agents "…crosses a line". Indeed it has. The time to "wait and see" whether neuroweapons will be developed and used has passed. The specter of available agents has been realized, and with it should come recognition that the tools that make science easier to execute and more accessible should also prompt revision of the ways such methods and products are regarded and regulated. Recognizing the risk and growing threat of neuroweapons is important, but we believe it is insufficient. There is a profound ethical obligation to acknowledge that science and technology can be used to harm as well as heal. As we make further strides to explore and affect the brain, it is critical to pay close attention to the directions that each and every step may lead. Thus, it will be essential to pursue and obtain a deeper and fuller understanding of the ways brain science can be harnessed to create weapons, and to establish more comprehensive ethical guidelines and oversight policies.
A working group of the European Union's Human Brain Project (HBP) is focusing efforts on a thorough review of what constitutes 'dual use' applications of brain science, both within the HBP programs and more broadly, recommending more stringent policies for regulation of neuroscientific research that can be employed in such ways. This is laudable and noteworthy, even if only as a first step. But perhaps the more imposing issues remain: such research will still likely be conducted by individuals and groups that do not heed proposed guidelines or policies; and while it may be possible to regulate research (at least to some extent), the use of neuroweapons by state and non-state actors is far more difficult to address and control. Let these challenges serve as opportunities for action. We suggest that the scientific executive committees of both the BTWC and CWC could be utilized as forums for acknowledging and assessing the potential risks and threats posed by current and near-term capabilities in brain science, and that the international community of brain scientists and ethicists further proactive discourse and engagement toward informing and developing policies and regulations to govern dual-use neuroscientific research and its applications. We believe that such action would represent a necessary response to a real and growing danger.
ABOUT THE AUTHORS
Dr. Diane DiEuliis is Professor at National Defense University, in the Center for the Study of Weapons of Mass Destruction. Her research areas focus on emerging biological technologies, biodefense, and preparedness for biothreats. Dr. DiEuliis also studies issues related to dual use research, disaster recovery research, and behavioral, cognitive, and social science as it relates to important aspects of deterrence and preparedness.
Dr. James Giordano is Professor in the Departments of Neurology and Biochemistry, Chief of the Neuroethics Studies Program in the Pellegrino Center for Clinical Bioethics, and Co-director of the O’Neill-Pellegrino Program for Brain Sciences and Global Health Law and Policy at the Georgetown University Medical Center. He serves as a Task Leader and Researcher of the EU Human Brain Project’s Working Group on Dual-Use, and is an appointed member of the US Department of Health and Human Services Secretary’s Advisory Council for Human Research Protections.
The views expressed in this blog do not necessarily represent those of the EU Human Brain Project, the US Department of Health and Human Services, or the United States Department of Defense.
Genome editing, or the ability to manipulate the DNA of an organism, has been facilitated by gene function studies made possible by the progress and affordability of genome sequencing (Gaj T, et al 2013; Ding Y, et al 2016). To make ethical decisions regarding evaluation and regulation of new genome editing technologies, it is important to gain an understanding of their mechanisms of action and potential applications.
The scientific community has developed several commonly used genome-editing techniques: zinc finger nucleases (ZFNs), transcription activator-like effector nucleases (TALENs), and clustered regularly interspaced short palindromic repeats (CRISPR). ZFNs, TALENs, and CRISPR all use nucleases (proteins or enzymes that can cleave DNA), which can be customized to recognize a particular sequence of base pairs in DNA. More recently, Gao et al (2016) demonstrated the utility of yet another nuclease, Argonaute, for genome editing. In general, once recognition of the target DNA sequence occurs, the system's nuclease binds to the DNA and creates breaks in both strands of the DNA, also known as a double-strand break (DSB).
The DSB can then be repaired within the cell through either of two DNA repair mechanisms: error-prone nonhomologous end joining (NHEJ) or homology-directed repair (HDR) (Wyman and Kanaar 2006). NHEJ is usually the default repair mechanism for DSBs in cells and is useful in research, but is prone to producing errors in the repaired DNA. When DSBs need to be repaired with greater precision (e.g. for therapeutic purposes), use of the more accurate repair mechanism, HDR, is recommended (Ding Y, et al 2016; Cortez 2015). HDR requires the addition of a fragment of DNA that is identical to the original, unbroken DNA sequence. The proteins involved in HDR use the DNA fragment as a template to ultimately repair the DSB, restoring the original DNA sequence. In genome editing applications, the DNA template can be manipulated such that HDR will introduce a new mutation, or change in the base pair sequence, into a particular gene (Cortez 2015). Depending upon the purpose of the experiment, the outcome of such gene editing could be to inactivate or activate a gene, or to induce or repress expression of a gene. This technology has potential applications in areas such as disease modeling, disease treatment and prevention, and agriculture.
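The logic of template-directed repair can be illustrated with a toy sketch in Python. This is plain string manipulation, not a biological model; the `hdr_edit` helper and all sequences are invented for illustration. The point is only that HDR, in effect, copies a donor template flanked by homology arms across the break, so any change carried between the arms ends up in the repaired gene.

```python
# Illustrative toy model of HDR-mediated editing (not a biological simulator).
# A donor template carries homology arms flanking a desired edit; the cell's
# HDR machinery, in effect, copies the template across the break. Here we
# mimic only the outcome, with plain string operations.

def hdr_edit(genome: str, left_arm: str, right_arm: str, edit: str) -> str:
    """Replace whatever lies between the two homology arms with `edit`."""
    i = genome.index(left_arm) + len(left_arm)   # end of left arm
    j = genome.index(right_arm, i)               # start of right arm
    return genome[:i] + edit + genome[j:]

original = "AAAATTTGCGCAAAA"   # hypothetical locus; GCGC sits between the arms
edited = hdr_edit(original, left_arm="TTT", right_arm="AAAA", edit="GTG")
print(edited)  # AAAATTTGTGAAAA
```

In a real experiment the "arms" are hundreds of base pairs of true homology and the outcome is probabilistic; this sketch only captures why manipulating the template lets researchers write a chosen mutation into the repaired gene.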
Zinc Finger Nucleases
Zinc finger nucleases (ZFNs) were the first genome editing technology to be popularized in the scientific community (Pabo CO, et al 2001). Zinc finger proteins, major components of ZFNs, were first identified in a species of African aquatic frog, X. laevis (Miller J, McLachlan AD, and Klug A 1985). They are approximately 30 amino acids long and are characterized by the spacing of two particular types of amino acids: cysteine and histidine. In a ZFN, each zinc finger binds to three base pairs on the DNA strand (Pabo CO, et al 2001).
After determining the structure of the zinc finger-binding domain, researchers recognized the utility of the zinc finger framework for the design and selection of new DNA-binding proteins. Kim, Cha, and Chandrasegaran (1996) were the first to fuse zinc finger proteins with the cleavage domain of another protein, Fok I. This fusion created a hybrid protein that could be customized to cleave DNA at any particular site. Still, zinc finger interactions with DNA were complex and unpredictable. To improve target specificity, more zinc fingers were added to the hybrid. Each zinc finger protein is capable of recognizing three DNA base pairs on the target DNA site; adding more zinc fingers increases the length of the recognition sequence, and thus, target specificity (Bartsevich VV and Juliano RL 2000; Laity JH, Dyson HJ, and Wright PE 2000).
Zinc finger nucleases (ZFNs) are composed of two parts: zinc finger proteins and a DNA cleavage domain (derived from Fok I). An active ZFN (see Figure 1 below) requires two different ZFN complexes—one for each strand of the double-stranded target DNA. This requirement expands the length of recognition sites, further increasing target specificity. Researchers can alter the DNA binding domain of zinc finger proteins to customize them to recognize a genomic target of choice (Desjarlais and Berg 1992). Additionally, ZFNs are relatively small in size, allowing for easy delivery into cells compared to other genome editing techniques (Lee J et al 2015). Despite such advantages and improvements to the performance of ZFNs, challenges with efficiency, target availability, off-target effects, and specificity remain (Bae K, et al 2003; Kim et al 2009; Ramirez et al 2008; Cornu et al 2008).
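A rough sense of why adding fingers matters can be had from a back-of-the-envelope calculation. The sketch below (in Python) is ours, not from the literature: it assumes each finger reads exactly 3 bp, that an active pair uses two ZFN complexes, and that genomic base composition is uniform, ignoring spacer tolerance and imperfect finger-DNA binding.

```python
# Back-of-the-envelope estimate of ZFN target specificity.
# Simplified model: 3 bp per finger, two ZFNs per active pair,
# uniform random base composition in the genome.

def zfn_recognition_length(fingers_per_zfn: int) -> int:
    """Total base pairs read by a ZFN pair (3 bp per finger, two ZFNs)."""
    return 2 * 3 * fingers_per_zfn

def expected_sites(recognition_bp: int, genome_bp: int = 3_200_000_000) -> float:
    """Expected chance matches in a random genome (both strands counted)."""
    return 2 * genome_bp * 0.25 ** recognition_bp

for n in (3, 4, 6):
    bp = zfn_recognition_length(n)
    print(f"{n} fingers per ZFN -> {bp} bp site, "
          f"~{expected_sites(bp):.2e} chance matches in a human-sized genome")
```

Even under these idealized assumptions, three fingers per ZFN (an 18 bp combined site) is roughly the point at which a site becomes expected to be unique in a human-sized genome, which is why longer arrays buy specificity.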
Transcription Activator-Like Effector Nucleases (TALENs)
Transcription activator-like effector nucleases (TALENs) are similar to ZFNs in that they consist of a DNA-binding domain and a DNA cleavage domain (also derived from Fok I) (Miller JC, et al 2011). The DNA-binding domains of TALENs are proteins called transcription activator-like effectors (TALEs), derived from plant-pathogenic bacteria. TALEs are composed of 33-35 amino acid repeats, each of which recognizes a single base pair on the DNA (Deng D, et al 2012). Usually, TALENs are used in pairs. See Figure 2 below for the structure of TALENs.
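This one-repeat-per-base modularity can be sketched in code. The mapping below uses the four repeat-variable diresidues (RVDs) most commonly cited in the TALEN literature (NI for A, HD for C, NN for G, NG for T); real designs must also weigh alternative RVDs and context effects, so treat this as an illustration rather than a design tool.

```python
# One TALE repeat per target base: the repeat-variable diresidue (RVD)
# at positions 12-13 of each ~34 aa repeat determines which base is read.
# Commonly cited code: NI -> A, HD -> C, NN -> G, NG -> T.
RVD_FOR_BASE = {"A": "NI", "C": "HD", "G": "NN", "T": "NG"}

def tale_repeats_for(target: str) -> list:
    """Return the RVD for each base of the target site, 5' to 3'."""
    return [RVD_FOR_BASE[base] for base in target.upper()]

print(tale_repeats_for("GATTC"))  # ['NN', 'NI', 'NG', 'NG', 'HD']
```

The contrast with ZFNs is visible here: because each repeat reads a single base rather than a triplet, a TALEN for an arbitrary user-defined sequence can be written out directly, repeat by repeat.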
TALENs are relatively easy to design and construct. Large-scale, systematic studies indicate that TALE repeats can be combined to recognize any user-defined sequence (Reyon et al 2012). The assembly of these relatively large protein complexes has been facilitated by the development of systems such as FLASH (fast ligation-based automatable solid-phase high-throughput). FLASH is both a rapid and cost-effective way to facilitate large-scale assembly of TALENs for use in genome editing. Cermak et al (2015) used another system, the Golden Gate method, demonstrating that it could be used to construct TALENs within five days. On the other hand, delivery and optimization of these constructs can be complicated by their large size and repetitive nature (Holkers, et al 2013). Targeting multiple sites on DNA with TALENs may also be hindered by the large size of the nuclease, although this issue could be ameliorated through diversification of the TALEs (Yang, et al 2013).
Clustered Regularly Interspaced Short Palindromic Repeat (CRISPR)
Ishino et al (1987) were the first to observe CRISPR clustered repeats in E. coli. More research followed, with Francisco Mojica characterizing CRISPR as a microbial immune system that adapts to and eliminates foreign genetic material (Mojica FJ, et al. 2005; Reiss A, et al 2014). The novel Cas9 (CRISPR-associated protein-9) gene, found to code for a nuclease, was discovered next by Bolotin, et al (2005) in the bacterium S. pyogenes. The team also found that a particular sequence, the PAM (protospacer adjacent motif), of approximately two to five nucleotides is required for target recognition. The PAM sequence itself may differ depending upon the type of Cas9 being used, but it must always be present in proximity to the DNA target site.
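The PAM constraint is easy to make concrete: a candidate Cas9 site is simply a stretch of DNA that sits immediately 5' of a PAM. The Python sketch below scans one strand of a sequence for the canonical S. pyogenes Cas9 PAM (5'-NGG-3') and reports each 20-nt protospacer; the function name and example sequence are ours, and a real guide-design tool would also scan the reverse complement and score off-target risk.

```python
import re

# Minimal sketch: find candidate SpCas9 target sites on one strand.
# Assumes the canonical S. pyogenes PAM (5'-NGG-3') and a 20-nt
# protospacer immediately 5' of the PAM.

def find_cas9_sites(seq: str, protospacer_len: int = 20):
    seq = seq.upper()
    sites = []
    # Zero-width lookahead so overlapping PAMs are all reported.
    for m in re.finditer(r"(?=[ACGT]GG)", seq):
        pam_start = m.start()
        if pam_start >= protospacer_len:
            protospacer = seq[pam_start - protospacer_len:pam_start]
            pam = seq[pam_start:pam_start + 3]
            sites.append((pam_start - protospacer_len, protospacer, pam))
    return sites

example = "ACGT" * 6 + "TGG" + "ACGT"   # one PAM (TGG), 24 nt in
for pos, spacer, pam in find_cas9_sites(example):
    print(f"protospacer {spacer} at position {pos}, PAM {pam}")
```

The scan makes the limitation discussed later tangible: however good the guide RNA, Cas9 can only cut where a suitable PAM happens to occur.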
Over the next decade, researchers sought to understand the role of CRISPR and how it interferes with invading genetic material (Makarova et al 2006), focusing mostly on the CRISPR type II system that requires only one Cas protein to introduce DSBs (Barrangou et al 2007). Brouns et al (2008) found that small RNAs (CRISPR RNAs or crRNAs) guide Cas proteins to target DNA sites. Further research (Marraffini and Sontheimer 2008; Garneau et al 2010) informed Emmanuelle Charpentier’s work that identified yet another guide RNA: trans-activating CRISPR RNA (tracrRNA) (Deltcheva et al 2011), which forms a duplex with crRNA to guide the Cas9 nuclease to its DNA targets. In June 2012, Charpentier and Doudna published findings confirming the role of Cas9 as an endonuclease (enzyme that cleaves DNA) and demonstrating the fusion of crRNA and tracrRNA to generate a single guide RNA (sgRNA) (Jinek et al 2012), significantly increasing the simplicity and specificity of the gene-editing system. In 2013, a team led by Feng Zhang successfully demonstrated targeted genome cleavage using Cas9 in both human and mouse cells, in addition to showing the system’s utility in targeting multiple DNA sites and driving HDR DNA repair (Cong et al 2013). See Figure 3 below for the Cas9 mechanism of action.
The targeting efficiency of CRISPR/Cas9 relative to other techniques such as ZFNs or TALENs is high, in part due to the guidance of the sgRNA and the PAM requirement. Still, there has been much work to improve the system’s performance, including increasing the efficiency of HDR DNA repair for use with CRISPR—specifically by suppressing factors associated with the NHEJ pathway (Chu et al 2015). Furthermore, other variants of Cas9 (i.e. Cas9s from other species or synthetically-developed mutant Cas9s) have been identified and can be used in different contexts to better serve the needs of a particular experiment (Jiang et al 2013). The process of targeting multiple DNA sites with CRISPR has been facilitated by the development of a single plasmid (fragment of double-stranded DNA separate from a cell’s chromosomal DNA; plasmids are replicable and are frequently used to deliver genes into a cell) that can contain multiple sgRNAs that can guide Cas9 to different DNA target sites (Sakuma et al 2014; Ma et al 2014; Guo et al 2015). Polstein et al (2015) have also developed a method to activate the CRISPR-Cas9 system at particular points in time via light stimulation, giving researchers more control over when genes are expressed.
While the CRISPR system has its benefits, it also has limitations. There is concern over off-target effects, although efforts have been made to reduce such effects through improved design of guide RNAs and the development of Cas9 variants (Cong et al 2013). Additionally, mismatches of guide RNAs to unintended target sites remain obstacles to the effectiveness of CRISPR/Cas9-based targeted genome editing. Lastly, delivery of the necessary components of the system may be difficult due to their relative size.
Despite such limitations, CRISPR remains the most affordable and versatile genome editing technique available. Thus, the CRISPR/Cas9 system has seen huge investment and wide publicity, owing to the system's relative ease of application and versatility for targeted genome editing in multiple species and for many research, therapeutic, diagnostic, public health, and agricultural applications. Some of these applications have proved more controversial than others. Ousterout, et al. (2015) used CRISPR/Cas9 to correct multiple mutations associated with Duchenne muscular dystrophy, a genetic disorder that leads to muscle degeneration over time (Muscular Dystrophy Association 2016). CRISPR has also been used to facilitate the development of a cheaper Zika virus diagnostic test (Gregory 2016).
On the more controversial side, a total of four proof-of-concept studies (in yeast, fruit flies, and two species of mosquitoes) have been published, demonstrating successful development of gene drives in the laboratory (Committee on Gene Drive Research 2016). Gene drives are a method by which the odds of passing a particular gene on to the next generation of offspring are increased well beyond the roughly 50-percent chance of ordinary inheritance (Ledford and Callaway 2015). Developments in agriculture include CRISPR/Cas9-edited mushrooms that can be cultivated and sold without further oversight from the US Department of Agriculture (USDA) (Waltz 2016). Adding to the list of controversial developments with CRISPR, UK studies involving editing human embryos with CRISPR technology were approved by the Human Fertilisation and Embryology Authority in early 2016 (Callaway 2016). More recently, proposals have been made for human trials that would use CRISPR/Cas9 genome editing in cancer treatment for patients (Begley 2016).
Argonautes are another family of endonucleases that are also important in defending the cell against foreign genetic material (Gao et al 2016). These endonucleases use single-stranded nucleic acids as guides, and exist in nearly all organisms. Some Argonautes have been shown to bind single-stranded DNAs and to cleave target DNAs (Swarts et al 2014). Gao, et al (2016) demonstrate that NgAgo (Natronobacterium gregoryi Argonaute) can be programmed with single-strand DNA guides to serve as a "precise and efficient tool for genome editing in mammalian cells."
In response to the Gao et al (2016) study, some have speculated that NgAgo may pose a challenge to proponents of the CRISPR/Cas9 genome editing system. In the CRISPR/Cas9 system, Cas9 requires that the guide RNA exhibit a specific structure for correct binding to occur. In the Argonaute system, however, the guides need not have any specific structure for binding. The CRISPR system is also limited by the PAM sequence—such that the DNA can only be cleaved if it is in proximity to the PAM sequence (Garneau et al 2010). Argonautes, on the other hand, do not require the presence of any specific sequence on targets, and researchers suggest that these endonucleases have a broad targeting range and low tolerance for mismatches (Gao et al 2016).
Our previous posts discussed recent breakthroughs in genomics, and provided some recommendations on how to address the difficult questions that these scientific advancements raise. Here, we have attempted to provide our readers with an overview of the technologies in question to emphasize the importance of context as the scientific and political communities grapple with regulating these gene-editing technologies.
ABOUT THE AUTHORS
Sam Wu, BS is a research associate at the Pellegrino Center for Clinical Bioethics at Georgetown University Medical Center.
Kevin T. FitzGerald, SJ, PhD is a research associate professor at the Pellegrino Center for Clinical Bioethics, GUMC.
Scientists perceive the public as both critical of and influential on the progress of science. Yet at the same time, scientists fear or distrust public input into the scientific research arena, perceiving the public as likely to impede or misdirect the research process because of its limited understanding of science and research.
Recent advances in science demonstrate the field’s far-reaching societal implications, from industry to medicine to what it means to be human. As Yankelovich stated, “to move ahead on important national issues without public support is to invite being undermined in the long run.” This statement makes the supposed “public mistrust crisis” all the more disturbing, and the need to do something about it all the more imperative. We are at a point where many experts would agree that there is a need to build public trust, but recommendations on how to do so vary considerably.
Some call for more inclusive discussions to help develop alternative solutions. Others blame the media and point to the need for the scientific community to take responsibility for communicating effectively with the public about their research, which can mean determining risks, benefits, and goals without public input. Still others blame educational shortcomings, calling for changes to science curricula that will foster a greater understanding of the research process and its inherent degree of uncertainty. Various models of science communication and public engagement have been described, attempting to bring some order to the growing list of possible options (for example, see Lewenstein and Rowe and Frewer). Despite the abundance of recommended solutions to the "public mistrust crisis," there remain significant obstacles in choosing, designing, and implementing good public engagement mechanisms.
To move forward and improve these efforts, we need first to address the prevalent and recurring opinion that most of the public is anti-science and uninformed, and therefore, incapable of participating well in science policy-making. Such narrow thinking has encouraged the development of public engagement events that focus primarily on informing the public of relevant technical information and/or benefits of specific technologies in question, with the goal of fostering public acceptance or understanding (i.e. the deficit model of science communication). Allum, et al. (2008), however, find that scientific knowledge is only one factor, and a weak one at that, among many that determine public attitudes and policy preferences on particular issues. Knowledge, whether technical or lay, is filtered by way of an individual's social and political identities. Hence, while aiming to increase public scientific understanding may be an important element of public engagement, it is by no means sufficient and should not be the only aim.
The next step forward would be to increase our understanding of the complex network of factors that precipitate controversy and public mistrust in particular contexts – in this case, in discussions regarding emerging biotechnologies. Such insight may, in turn, inform the selection and design of public engagement strategies that aim to address disagreements specific to that particular scientific debate. Notably, not all public engagement events should seek to achieve the same goals or to address the same issues. Different technologies can raise different questions of ethical and practical importance, and can be subject to a variety of regulations depending on the nature of the technology. These differences provide the context within which the dilemmas inherent to the design of any engagement event must be resolved: upstream v. downstream, deliberation v. decisiveness, inform v. elicit, top-down v. bottom-up, and commissioning research v. involving civil society groups (Nuffield Bioethics, Emerging Biotechnology 2014). The purpose and design of a public engagement event will, therefore, depend upon the particular technology or scientific advance in question and the factors, stakeholders, and communities involved.
Additionally, public engagement mechanisms can be designed, not merely with the goal of fostering public understanding or acceptance, but also with the goal of making diverse interests and values explicit and creating room for disagreement and consensus to inform policy-making. Rather than highlighting consensus and glossing over disagreements, both points of agreement and disagreement should be emphasized as valuable outcomes of engagement. Decision-makers can, for instance, use persistent points of disagreement as jumping-off points to refine proposals or to develop new alternatives altogether (O’Doherty, et al. 2009).
Incorporating diverse community interests throughout the decision-making process will encourage more innovative and ethical policy-making that leaves room for deliberation. One could even frame the design of public engagement methods within an ethics of public discourse, which would enhance the values of equity, solidarity, and sustainability, helping to prevent biased decision-making that favors the goals of science alone.
Emerging biotechnologies pose new and challenging ethical questions, fraught with uncertainty and ambiguity as to how they should be answered. As stated earlier, a complex network of factors influences public attitudes and preferences, and to suggest that addressing only some public knowledge “deficit” is the best way forward ignores the dynamic, interconnected nature of our society. It is, therefore, crucial that we re-evaluate our policy-making processes and extend these processes to include stakeholders beyond science, policy, and industry. Employing this approach, we may better integrate our rapidly developing technologies into a future envisioned by our entire society, and not merely the science and technology community.
Controversial breakthroughs, newly proposed guidelines, a private meeting of experts, and a lack of engagement mechanisms to include the public. Article two in a series on emerging biotechnology.
Nature reported that two research teams have sustained human embryos in vitro for twelve to thirteen days, coming closer to the widely used 14-day limit than ever before. A potential benefit of this scientific advance is that researchers may be able to study early human development with "unprecedented precision"; on the other hand, this research once again raises ethical and practical questions of where to set limits on human embryo research. The Ethics Advisory Board of the US Department of Health, Education, and Welfare originally proposed the 14-day limit in 1979. Twelve countries have since encoded this limit into law and others have written it into guidelines, limiting almost all in vitro research to within those first 14 days of development.
Hyun, et al. suggest that the 14-day rule has been successful in our pluralistic society because it provides space for scientific inquiry and advancement, but also takes into consideration other views that stress the moral status of human embryos. In other words, the rule’s success is due to the fact that it protects the two chief goals that any rule covering human embryo research should uphold: “supporting research and accommodating diverse moral concerns.” The authors further suggest that by viewing established limits in research as “policy tools” rather than as moral truths, “it becomes clear that, as circumstances and attitudes evolve, limits can be legitimately recalibrated.”
The International Society for Stem Cell Research (ISSCR) has attempted to do just that, with the recent release of its updated guidelines on embryonic stem cell research and clinical translation. The guidelines include several new recommendations, and promote self-regulation of the stem cell research community. One such recommendation suggests that both human embryonic stem cell research and human embryo research undergo review by Embryo Research Oversight committees. While the ISSCR holds that the 14-day limit should remain, it supports more research involving induced pluripotent stem (iPS) cells. Specifically, it recommends that generation of iPS cells be excluded from stem cell research oversight. The ISSCR blames incomplete and inaccurate representations of scientific advancements, such as exaggeration of potential benefits and challenges, for public mistrust in science. It therefore advocates for improved communication strategies with the public.
Indeed, to "recalibrate" policies regularly in any rapidly advancing field (embryology, genomics, neuroscience, etc.) is good practice. To question current regulations is to create space and opportunity to clarify uncertainties, define goals, evaluate and improve processes, and engage more inclusively. Outcomes of these types of discussions may, in turn, inform ethical policy development that aligns more closely with societal needs and sociocultural contexts at a given time. Hyun et al. call for increased discussion and collaboration in the debate on whether to set a new limit on human embryo research, but like the ISSCR and many others in the field, they place an emphasis on the role and interests of scientists and experts. Hyun et al. propose that any rule must uphold the two chief goals of "supporting research" and "accommodating diverse moral concerns." Do "support" and "accommodate" suggest the same level of consideration? Similarly, authors of the ISSCR guidelines have "Integrity of the Research Enterprise" as the first in a list of fundamental ethical principles intended to frame guideline development and implementation. They further state that the "primary goals of stem cell research are to advance scientific understanding and to generate evidence for addressing unmet medical and public health needs."
Emphasis on the goals of the research enterprise, combined with common conceptions of the public as anti-science or too poorly educated to make adequate decisions, encourages the use of engagement models that are more about public indoctrination than public empowerment. Both Hyun, et al. and the ISSCR guidelines emphasize the need for researchers to communicate with and "engage the public about what they are doing and why it matters." From their perspective, examples of good engagement practices include the International Summit on Gene Editing, which FitzGerald, Wu, and Bouchard have described as problematic, and public comment periods, which have also proved problematic in cases such as the Notice of Proposed Rulemaking (NPRM) for Revisions to the Common Rule.
Hyun et al. and many others in the scientific community advocate for what is known as the "attitudinal deficit" model: essentially a reframing of the widely criticized "information deficit" model that has been used historically in encounters between science communities and the public. A deficit model is characterized by a one-way conversation with the aim of fostering public support for science. When the model was first introduced, the focus was on increasing public knowledge of technical scientific information; in more recent years, it has turned to increasing public knowledge of the potential benefits of science to society. Laudably, the ISSCR guidelines attempt to address some of the potential pitfalls of such an approach (namely 'science hype'), calling for researchers to "promote accurate, balanced, and responsive public representations of stem cell research." However, the ISSCR guidelines still fail to move away from the deficit model, advocating for increased transparency and information resources intended to inform rather than engage the public about stem cell research.
Use of the deficit model makes the following assumptions: that the public is at a "deficit" and needs to be informed (of technical information and/or of potential benefits to society); that a reduced deficit will increase support for science and lead to better policy-making; and, that the advancements of science are for the common good. Public engagement events designed using this unidirectional model are often without space for public deliberation or leverage to effect change. Further, by structuring engagement with the aim of fostering public acceptance of scientific research (or, as Hyun et al. put it, to "prevent a public backlash and the implementation of reactive, more restrictive limits on research"), publics are often presented with options in a take-it-or-leave-it fashion. What's more, the available options are often decided upon by experts behind closed doors (e.g. "Scientists Talk Privately About Creating a Synthetic Genome", 2016). This leaves little room for questioning or "recalibration" of policies, processes, or institutions – exactly contrary to what Hyun, et al. and other scientists purport to want as an outcome.
Despite such criticisms, discussions and decisions regarding emerging biotechnologies continue to take place amongst experts in private. Endy and Zoloth call for “pluralistic, public, and deliberative discussions,” rightfully pointing out that closed-door discussions, such as the private meeting on synthesizing the human genome, do not allow for broader consideration of important ethical questions, potential alternatives, and unintended consequences. Our recommendations on how to operationalize the numerous calls (see also: “Human Germline Genome Editing Debate”) for improved public engagement in ongoing debates will be the subject of our next blog.
ABOUT THE AUTHORS
Sam Wu, BS is a research associate at the Pellegrino Center for Clinical Bioethics at Georgetown University Medical Center.
Kevin T. FitzGerald, SJ, PhD is a research associate professor at the Pellegrino Center for Clinical Bioethics, GUMC.
BY KEVIN T. FITZGERALD, SJ, PhD, SAMANTHA (SAM) WU, BS, and Fr. CHARLES BOUCHARD, OP
In February 2016, it was reported that the Human Fertilisation and Embryology Authority (HFEA) granted limited permission for researchers in the UK to genetically modify human embryos, with the hope of elucidating which genes are necessary for successful embryological development. Although Dr. Kathy Niakan and her team at the Francis Crick Institute are only allowed to use the embryos for 14 days, and may not implant a modified embryo in the womb, this permission crossed a frontier in genetic research. It was the first time human embryonic genetic modification had been authorized. This followed the publication of the controversial paper by Liang, et al. (2015) that detailed the researchers’ attempt to modify genes that cause β-thalassaemia in non-viable human embryos using the gene-editing technique CRISPR. The paper, published in April 2015, kicked off a heated ethical debate.
Now, Frederik Lanner at the Karolinska Institute in Sweden, who got the go-ahead on a project that will also involve gene editing in human embryos, is making preparations to begin those experiments. Earlier this month, it was also reported that another team in China, led by Yong Fan, attempted to use CRISPR to generate HIV-resistant human embryos via the introduction of precise genetic modifications. While this project involved non-viable embryos, much like the research conducted by Liang, et al., “the purpose of this study was to evaluate the technology and establish principles for the introduction of precise genetic modifications in early human embryos.” The ethics committee of Guangzhou Medical University in China approved Fan’s work, and has reported that it has since approved two more similar projects.
The rapid rate of investment of both time and money in new projects involving gene editing and CRISPR makes it clear why the novel gene-editing technique was named Science’s “2015 Breakthrough of the Year.” Indeed, the technique and the research it facilitates have the potential to lead not only to treatments, but also to the elimination of some genetic mutations from the human genome altogether. Other novel biotechnologies, such as next-generation sequencing (NGS), have contributed to the revolution in gene-editing, making sequencing of the genome faster and cheaper than ever before. Clearly, these new technologies are altering, and will alter, medicine in ways that were science fiction only a few years ago.
Scientists have hailed the advancement of these projects with enthusiasm, convinced that the recent approval of the human embryo gene-editing research by funding agencies and IRBs is indicative of wider societal approval. Lanner, for instance, is hopeful that his work will be received with more optimism and less heated debate than the paper by Liang, et al. published just a year ago. At the same time, it is worth noting that many of the ethical and practical questions, which made and continue to make the genome-editing debate controversial, remain unanswered:
Who should guide the potential impacts of these developments in clinical practice and in broader society? What applications are ethically permissible? Who will own these new technologies and the information resultant from their use? How should we regulate and oversee the technology in a way that such advances in science are not prioritized at the expense of public health or to the disadvantage of the poor or marginalized?
We argue that experts in research, medicine, industry, and policy currently dominate and guide the conversation about R&D and regulation of gene-editing technology – leaving it up to the “experts” to answer the aforementioned questions of ethical and practical importance. But one might also ask, do the “experts” have the appropriate knowledge to know what is in the public’s best interest? What is the “appropriate” or relevant knowledge to make such a judgment? Are we moving forward with these projects because we have answered those questions?
CRISPR, NGS, and many other biotechnologies are all pieces of the broader discussion regarding what the future of science, medicine, and society will look like. The outcomes of such a discussion may affect our conceptualizations of “disease” and what it means to be a “normal” human being – things that impact every human in society, not just a gathering of experts from a particular social stratum. By restricting the debate to only the “experts” or those who have a vested interest in the technology, society risks maintaining the status quo, or worse, exacerbating existing socioeconomic and health inequalities that disproportionately affect marginalized communities.
Recent Attempts at “Public Engagement”
Many scientists and policymakers have recognized that we are at a pivotal moment in research and health care. They acknowledge the benefits of collaboration and communication in the effort to achieve improved health. The FDA, for instance, launched precisionFDA, an online portal that enables “scientists from industry, academia, government and other partners to come together to foster innovation and develop the science behind…[NGS].”
With CRISPR, the international research community was prompted by growing ethical concerns to host the International Summit on Human Gene Editing in Washington, DC in December 2015. The summit brought together experts from across disciplines and continents charged with the task of discussing the “scientific, ethical, and governance issues” central to the debate on human gene-editing research. That the scientific community organized the conference suggests that it recognizes the need to incorporate a variety of perspectives in the debate. But, recognition alone is insufficient to ensure that such diverse interests are represented in the resulting policy recommendations.
At the conclusion of the summit, Chair of the Organizing Committee, David Baltimore, proposed guidelines for regulating human gene-editing research. The guidelines, however, were crafted to reflect the perspectives of scientists and academics, along with a few members of the general public who were invited and present. The guidelines lacked input from other public groups for whom human genome editing could have many implications. Unfortunately, the inadequate model of “public engagement” employed at the Summit is actually quite common across science policy issues.
Dr. Ruha Benjamin, Princeton University professor and author of People’s Science: Bodies & Rights on the Stem Cell Frontier, notes that the processes currently used to gauge public opinion on new scientific developments often create only an “illusion of opening up the science.” Institutions acknowledge, to some degree, the importance of gauging public interests and goals when it comes to scientific progress. The list of terms and mechanisms used in an attempt to determine such interests and goals is long: “public conferences,” “public meeting,” “public comment period,” “public forum,” “public engagement,” and so on. Closer inspection of these mechanisms and their outcomes reveals several issues with the scientific community’s current “public engagement” efforts: 1) there is a lack of evidence and consensus to suggest which mechanism is most appropriate and when, 2) the public’s diversity of values and goals is underrepresented by a handful of public representatives, often selected by the host organization itself, 3) differences are often resolved with more “expert” or “technical” information, rather than deeper discussion of differences in values or goals (expert knowledge is prioritized over lay knowledge), 4) the mechanism’s design provides the public with insufficient leverage to effect change in research or policy development, and 5) more often than not, there are insufficient evaluation measures in place to reinforce accountability. While these mechanisms have the potential to inform and engage the public in policy discussions that bear directly on the common good, these obstacles often lead them to fall short and leave voices unheard.
Our ethical tradition of fostering the common good says we need broader discussions that go beyond the one-way discussion that focuses on informing the public to foster “public acceptance” — beyond this “illusion of opening up the science.” We need deliberative, two-way discussions that make values and goals explicit, and that actively involve the public as community members and participants in research, clinical practice and decision-making. With genome editing, it is imperative that these discussions happen now, before boundaries are crossed, even inadvertently, causing harm to many people that cannot be readily remedied.
Moving forward, we suggest that academic institutions – increasingly home to interdisciplinary efforts and collaborations – make a much greater effort to research and design processes that engage public stakeholders in the discussion around genome editing and, more broadly, emerging biotechnologies. Several organizations have already implemented biomedical research tools and platforms created with participant preferences, values, and communities in mind that can serve as a good starting point. We also call for greater collaboration between natural science and social science researchers, which will increase our understanding of the assumptions and factors that influence policy-making and inform the design of mechanisms intended to generate a more open, deliberative public dialogue.
Sustained public engagement is essential to ensuring that science and medicine’s advancements continue to address broad public needs, and not only those of the few. Indeed, there have been calls for greater public involvement in the genome-editing debate. Despite these calls, however, the field continues to proceed with controversial experiments while ethical and practical questions remain unanswered, with scientists presuming that IRB approval of their controversial projects means that the public is becoming more accepting with “the passage of time.” It is, thus, time to increase our efforts to “open up” the science, not just to the experts, but also to those whose lives will be impacted and whose voices have yet to be heard.
Recently, the Food and Drug Administration (FDA) solicited input to guide ways that the agency regards and handles “Clinical Considerations for Investigational Device Exemptions (IDEs) for Neurological Devices Targeting Disease Progression and Clinical Outcomes”, in accordance with good practices regulation (21 CFR 10.115). The FDA will use this draft guidance to “…assist sponsors who intend to submit an IDE to the FDA to conduct clinical trials on medical devices targeting neurological disease progression and clinically meaningful patient centered outcomes”, and “… aid industry and FDA staff in considering the benefits and risks of medical devices that target … the cause or progression of neurological disorders or conditions” (such as movement disorders, like Parkinson’s disease and dystonia; as well as other pathologies, like Alzheimer’s dementia, Tourette’s syndrome, chronic pain, and psychiatric conditions such as depression).
The goal of the FDA regulation process is to establish that drugs and devices provided for medical care are safe and technically sound, and the general constructs of Investigational Device Exemption (IDE) and Humanitarian Device Exemption (HDE) are aligned with such aims. But like any policies that tend to entail broad concepts, the real-world utility, viability and value of these programs are contingent upon: (1) the relative appropriateness to the context(s) in which any device is employed; and (2) if and how use-in-practice reflects and is supported by the scope of regulatory oversight and control.
In recent years, IDE and HDE application, review and approval have become easier and more efficient; this is a notable improvement – and a step in the right direction. However, it may be that aspects of the overall structure and certain specifics of the IDE and HDE are not well suited to meet the contingencies (and exigencies) of actual clinical use of certain neurotechnologies, like deep brain stimulation (DBS). For example, the current regulatory framework necessitates filing and securing an IDE as a first step in investigator-initiated research (IIR) and/or other off-label use of DBS in those cases where other approaches have been shown to be ineffective or untenable, and for which DBS may prove to be viable as “humanitarian care”. In such instances, it may be that the proverbial cart precedes the horse, and the HDE might be more practical and valuable given both the nature of the disorder and treatment, and the value of the HDE in establishing a basis for further (and/or expanded) application, as supportable by an IDE.
Moreover, while both IDE and HDE establish parameters for using DBS in practice, neither regulatory mechanism creates or enforces a basis for provision of the economic support necessary for right and good use-in-practice. As our recent work has demonstrated, non-payment of insurance costs for pre-certified DBS interventions has been, and remains, a problem of considerable concern. Absent the resources to provide: 1) DBS as a demonstrably important or necessary treatment option for those patients with conditions that are non-responsive to, or not candidates for, other therapeutic options, and 2) continuity of clinical services as required, the sustainability of this neurotechnology may become questionable (Rossi, Okun, and Giordano, 2014). This is contrary and counter-productive to recent federal incentives to maximize benefits of translating extant and new neurotechnologies into clinically-relevant and affordable care, and to implementing precision medicine. This was the focus of much discussion at the fourth National Deep Brain Stimulation ThinkTank held last month in Gainesville, FL.
In the main, actions taken by the FDA to streamline the IDE and HDE process should be applauded. Yet, while certain aspects of the IDE and HDE mechanisms may be in order and valuable for regulating use of DBS, others may require re-examination, revision, or replacement, so as to remain apace with the rising tide of developments in the field, and the needs and necessities of patients and clinicians in practice. In this vein, we recommend further study of IDE and HDE mechanisms to determine what works, what doesn’t, and what can – and should – be done to improve these practices. It is our hope that doing this will fortify regulatory, policy and legal processes to ensure that they are aligned with, directive toward, and supportive of concomitant changes in standard of care guidelines and federal insurance structure.
Important to this endeavor would be both the development of a governmental-commercial enterprise to guide industrial efforts in neurotechnology (e.g., a National Neurotechnology Initiative; NNTI), as well as the establishment and enactment of federal laws (e.g., a neurological information non-discrimination act; NINA) to govern potential use(s) of information obtained through DBS and related neurotechnologies that are elements of novel big data initiatives. This might be something of a sea-change, and effecting such change will demand that the constituent currents flow in the same direction. If programs such as the BRAIN initiative and agendas of precision medicine and big data are to function as a “translational estate”, and work in ways that enable technically apt and ethically sound patient care, then what is needed is coordination of the institutions, organizations, resources and activities involved. Without doubt, this will entail considerable effort, which might make waves in the status quo; but we believe that it represents a worthwhile endeavor to achieve genuine and durable progress in the development and – right and good – use of neurotechnology in clinical practice.
ABOUT THE AUTHOR
James Giordano, PhD
James Giordano, PhD is Chief of the Neuroethics Studies Program at the Pellegrino Center for Clinical Bioethics, and is Professor in the Department of Neurology at Georgetown University Medical Center. Follow more of Professor Giordano’s work at explore.georgetown.edu, and http://www.neurobioethics.org.
On March 2, 2016, Dr. G Kevin Donovan testified at the “Bioethics and Fetal Tissue” hearing before the Select Investigative Panel of the Committee on Energy and Commerce of the US House of Representatives. Dr. Donovan was one of six witnesses to present testimony.
Chairman Blackburn, and members of the panel, I thank you for the opportunity to present testimony regarding the bioethical considerations in the harvesting, transfer, and use of fetal tissues and organs.
I am a physician trained in both pediatrics and clinical bioethics. I have spent my entire professional career caring for infants and children. It was this interest and concern that led me to further study in bioethics, because I have always been concerned about the most vulnerable patients, those who need others to speak up for them, both at the beginning and at the end of life. I also have significant familiarity with research ethics, having spent 17 years as the chair of the IRB, a board that monitors the rightness and the wrongness of medical research in order to protect human subjects. We took this aspect of our duties so seriously that I renamed our IRB the Institutional Research Ethics Board. Four years ago, I was called by my mentor, Dr. Edmund Pellegrino, to take his place as director of the Center for Clinical Bioethics at Georgetown University. Our duties include ethics education for medical students and resident physicians, ethics consultation for patients and doctors at the hospital, as well as the promulgation of scholarly papers and public speaking. We focus on both clinical ethics, that which directly involves the good of patients, as well as addressing normative questions, those which involve right and wrong actions.
This is what we want young physicians to know: medicine is a moral enterprise. Our actions have consequences that can be good or bad for patients, and we must always focus on the patient’s good and avoid doing harm. So what does this mean for the topic at hand? We’re talking about bioethics and the fetus. In order to make any moral judgments, we would have to be clear on the moral status of the fetus. Obviously, this is an area in which society has not reached a consensus, but that does not mean we cannot make sound judgments on the topic. In a question of biomedical ethics, it is good to start with solid science. What do we know about the fetus with certainty? Well, first of all, we know that it is alive, that it represents growing, developing cells, tissues, and organs, all of which develop increasing complexity and biologic sophistication, resulting in an intact organism, a human baby. Of course, this growth and development does not cease with the production of the baby, but continues for many years afterwards. As can be seen by this description, the fetus is not only alive, but is demonstrably human. I’m not talking about a “potential human” in the way that some parents talk about their teenagers as potential adults. I am referring to the scientific fact that a fetus constitutes a live human, typically 46XX or 46XY, fully and genetically human. In fact, it is the irrefutable humanness of these tissues and organs that has made them be of interest to researchers and scientists.
So, if a fetus is clearly both alive and human, can we justify taking these tissues and organs for scientific experimentation? If so, under what circumstances, and what sort of consent or authorization should be required? In the past century, medicine has made incredible progress resulting from scientific studies involving human tissues and organs, resulting in the development of medications, vaccines, and the entire field of transplantation medicine. Is there any difference between these accomplishments and those that would require the harvesting of body parts and tissues from the fetus? First, we would have to admit that not all scientific experimentation has been praiseworthy. Studies done by Dr. Mengele in Germany, and by American researchers in Guatemala and Tuskegee, were morally abhorrent, and any knowledge gleaned from these would be severely tainted. No one would want to associate our current scientific studies involving the human fetus with such egregious breaches of research ethics. All that it takes to avoid such a comparison is a consensus on the moral status of the fetus.
Those who have proceeded with experimentation and research on embryonic and fetal cells, tissues, and organs typically have obtained them as the result of an abortion. It is this stark fact that makes such scientific endeavors controversial, because they have proceeded without the aforementioned consensus on the moral status of the fetus. Because we know that the fetus is alive, and human, we must find some explanation for why it should not be treated with the same dignity that we accord all other human lives. The most frequent argument offered is that, although it is a human life, it is not a human person. Various criteria are offered for a definition of personhood, but none have been found universally acceptable. We thus have a standoff between those who would protect this early vulnerable human life and those who would deny that it deserves protection. In order to resolve such an ethical dilemma, the guiding principle is this: one is morally permitted to take such a life only once one can demonstrate with moral certainty that the life is not human. It is a concept that can be exemplified by the situation faced by a hunter when he sees a bush shaking. He may sincerely believe that it is a deer in the bush, but if he kills it prior to determining with certainty what it is that he is killing, he will be morally responsible (as well as legally) if he has in fact killed the farmer’s cow, or worse yet, the farmer. As we can see, two deeply held but opposing viewpoints need not be resolved unless someone intends to act upon them. Then, the one who intends to take the action resulting in the death of the disputed entity must not do so unless they can first show with moral certainty that their perception of its moral worth is irrefutable. Those who would not disturb the normal progression of its life bear no such burden. It is my contention that such proof does not exist, and deliberate fetal destruction for scientific purposes should not proceed until it does.
Moreover, without disputing the arguable necessity of research on fetal tissue, I would also point out that harvesting it in such a way is unnecessary. Not only do cell lines already exist that were produced in such a fashion, but new cell lines could be obtained from fetal tissues harvested from spontaneous miscarriages. This is not a theoretical alternative. Georgetown University has a professor who has patented a method of isolating, processing, and cryopreserving fetal cells from second trimester (16-20 week gestation) miscarriages. These have already been obtained and are stored in Georgetown freezers.
Moreover, the present practices of obtaining fetal tissues and organs would seem to go against the procedures that have been approved for others who harvest tissues and organs donated for transplantation. First, we follow a strict rule, the dead donor rule. It states that vital unpaired organs cannot be obtained unless the donor has died a natural death. This obviously is not the case in an induced abortion. Moreover, such tissues or organs cannot be harvested without consent of the patient or their proper surrogate. In pediatrics, parents are considered the normal proper surrogate. However, this interpretation rests on the presumption that the parent is acting in the best interests of the individual. It is difficult to sustain such an interpretation when it is the same parent who has just consented to the abortive destruction of the individual from whom those tissues and organs would be obtained.
We are at a difficult time in our nation’s history. We demonstrate much moral ambiguity in our approach to the human fetus. We have decided that we can legally abort the same fetus that might otherwise be a candidate for fetal surgery, even using the same indications as justification for acts that are diametrically opposed. We call it the fetus if it is to be aborted and its tissues and organs transferred to a scientific lab. We call it a baby, even at the same stage of gestation, when someone plans to keep it and bring it into their home. Language has consequences, but it can also reflect our conflicts. We are a nation justly proud of the progress and achievements of our biomedical research, but lifesaving research cannot and should not require the destruction of life for it to go forward. If we cannot act with moral certainty regarding the appropriate respect and dignity of the fetus, we cannot morally justify its destruction. Alternatives clearly exist that are less controversial, and moral arguments exist that support our natural abhorrence at the trafficking of human fetal parts. Surely we can, and surely we must, find a better way.
Written testimony from Dr. G Kevin Donovan at a joint hearing of the House Health & Government Operations Committee and the Judiciary Committee of Maryland on February 19, 2016, regarding the proposed end-of-life bill.
Thank you for the opportunity to address this proposed legislation. I am Dr. Kevin Donovan, a physician, and the director of the Pellegrino Center for Clinical Bioethics at Georgetown University Medical School. Much of my work in the hospital setting involves consultation on patients who are nearing the end of their lives, so I have a real interest in this bill.
You will hear from others about the problems that have arisen with identical bills. You will hear about the dangers such a bill can pose to patients, particularly the disabled or chronically ill, to the medical profession, and to society at large. So I will not tell you about these things. I will tell you that I oppose this bill for two reasons: it is discriminatory, not progressive, and it is deceitful.
Okay, what are we talking about? Aren’t progressive people in favor of this bill? Perhaps some are, but if they look deeply into it, they shouldn’t be. Years ago, Sen. Hubert Humphrey said the real worth of any society can be found in the way that it cares for its most vulnerable members. That would be progressive, but we live in a society that increasingly worships autonomy, freedom, and productivity, and pushes those that don’t fit the picture to the margins. Therefore, it should come as no surprise that marginalized people, the poor, blacks, Latinos, and virtually every disability rights group are afraid of this bill. It creates by law a class of people whose lives no longer should be preserved. Of course, creating separate classes of people is discriminatory, but isn’t that what the supporters of this bill want? Yes, and we should look and see who the supporters of this bill are. It is favored by the same classes of people that have taken advantage of it in places like Oregon. Who’s the typical proponent, and patient? As published statistics show, the typical patient is a white male, usually with cancer, educated and financially comfortable. This is someone who is used to thinking that they are in control, and wants to maintain the illusion of control near the end of their lives. And make no mistake – the data from the Oregon health department makes it clear that this is not an issue of avoiding pain. The stated reasons for seeking a fatal prescription are primarily loss of autonomy, loss of ability to engage in enjoyable activities, or fear of being a burden to others. Pain is low on the list, because with good palliative care, pain is controllable. We’ve just described legislation that would favor the white elite, not the sort of thing that progressives usually want to get behind.
And as one commentator asked, “Is anyone ashamed that we live in a culture where people believe that if they aren’t autonomous, or might be a burden on others, that they should ingest drugs and die?” I would think that Maryland should be ashamed of promoting assisted suicide for patients before palliative care is universally available.
Okay, if this bill is discriminatory, not progressive, why should it be seen as deceitful? First of all, our antennae should always go up when people start using euphemisms. What started out as a movement for physician-assisted suicide became physician-assisted death, and now just wants to be thought of as death with dignity. Really? This can’t be the only path to dignity, and it clearly is suicide that we’re talking about. After all, the Centers for Disease Control and Prevention defines suicide as, and I quote, “Death caused by self-directed injurious behavior with an intent to die.” What we’re talking about in this bill is clearly suicide, whether or not we’re allowed to put that on the death certificate, and by the way, suicide is already legal. And when we talk about the protections in this bill, we’re not being entirely forthright either. The ones who are really being protected are the physicians, once again, members of the power elite. We may say we are placing restrictions for the patients involved, but that’s not entirely true, because they’re not entirely coherent. After all, once we redefine supporting or encouraging a patient’s death as a good thing, how can we defend limiting that benefit, that good thing, to only those who will be dead in six months? Why not 12 months? Why only terminal illness – why not suffering from chronic disease? Why only assisted suicide? What if the patient can’t lift the toxic mixture to their lips – shouldn’t someone else be able to give them a lethal dose? Shouldn’t we be willing to end the suffering in this way, even if you’re under 18, even if you’re only a child? Proponents will say that this is not what the bill says, nor what they intend, but this is the natural, logical, and really inevitable result of this redefinition of death as a medical treatment. In fact, this has already happened in Belgium and the Netherlands, countries that have been doing this longer than we have.
Finally, this brings us to an area which makes legislators truly sit up and take notice: it is just not good public policy. Suggesting that suicide or euthanasia are legitimate tools of the state is frightening. Throughout history, we have learned that granting the state legal authority to kill innocent individuals has had dreadful consequences. In Maryland, if you are a convicted murderer, and in no way innocent, you would be protected from the killing power of the state. Why would we want to turn that power to kill on our patients, with their physicians’ help? A Dutch cardiologist, who has experience with this in his own country, recently stated, “the fundamental question about this is whether it is a libertarian movement for human freedom and the right of choice, or an aggressive drive to exterminate the weak, the old, and the different – this question can now be answered. It is both.” I have no doubt that some patients with great independence will choose an early death at the end of their lives. They should not be judged, and they should never be prosecuted for the attempt. As a society, we should be offering patients loving support until the very end. We shouldn’t change the law in a way that might encourage their deaths, we shouldn’t take their hands and lead them towards that, and we certainly shouldn’t make them feel that they are being subtly pushed into it. The sick, the vulnerable, the suffering and dying deserve so much better, and a truly caring society will provide them with no less.
On March 4, the bill SB 418 was withdrawn by its sponsor, Senator Ron Young, for lack of supporting votes, killing it for the legislative year.
A February 4, 2016 editorial in the Boston Globe addressed the recent Food and Drug Administration (FDA) approval of the opiate analgesic oxycodone (brand name OxyContin) for use in children. This has raised concerns about the relative safety and possible effects of such compounds, as well as the roles of industry and the federal government in establishing guidelines and policies for the use of drugs – or any medical intervention. Pediatric pain can incur a host of lifelong neuro-biopsychosocial effects. Moreover, pediatric pain care is complicated by practical and legal issues of long-term and often escalated dosing of opioids, and there is a paucity of safety data and information about potential long-term risks to the developing brain associated with commonly used analgesics in this fragile population.
Both the dictates of the Brain Research through Advancing Innovative Neurotechnologies (BRAIN) initiative and the invocations of the Presidential Commission for the Study of Bioethical Issues speak to the imperative to translate brain research into viable clinical uses. In light of this, it becomes important to ask if and how novel neurotechnologies can meet the challenges and opportunities of assessing and treating pediatric pain. Research to date has shown promise: for example, neuroimaging studies have sought to identify and establish brain phenotypes for pain. As well, neurogenomics and proteomics may afford an understanding of pediatric pain syndromes and sensitivities to various pharmacotherapeutics. Such studies support the capability and potential clinical utility of neurotechnologically-based assessments. Interventional neuroscientific and neurotechnological techniques – including transcranial and in-dwelling approaches to neuromodulation, such as transcranial magnetic and electrical stimulation (Moreno-Duarte et al.; Moisset et al.; Avery et al.; Fagerlund et al.) and forms of deep brain stimulation (Russo et al.; Gosset et al.; Boccard et al.), as well as highly specific analgesic ligands and novel pharmaceutical delivery preparations (Tseng et al.; Healy et al.; Molet et al.) – may each and all have value in augmenting, or in certain circumstances perhaps replacing, other methods of pain control.
But any such view to improved approaches to pediatric pain care must also acknowledge a host of neuroethical issues. Thus, while the use of assessment neurotechnologies may be seen as relatively low risk, we must still consider potential burdens and harms of over- or mis-reliance upon perceived objectivity, misdiagnosis, and bias and stigma (of predisposition to pain, and in the subsequent provision of care and social regard). And while interventional neurotechnologies offer great potential to effectively mitigate certain types of pain, we must acknowledge the intersection of unknowns arising from a tentative understanding of the brain, nascent neurotechnology, and the possible longitudinal effects of altering brain structure and function during development.
On the one hand, the need to address pediatric pain prompts calls for rapid translation of pain research to clinical assessment and care, to lessen the burden of the suffering child. On the other, consideration, if not caution, must be taken to avoid burdens and harms that may occur as a result of heightened expediency from bench to bedside. Tension between these motives could impede the scope and progress of neurotechnologically-based approaches to the assessment and treatment of pediatric pain. How then to proceed? We suggest:
Funding the development and articulation of prospective, longitudinal research in pediatric pain management, focusing on the benefits of various types of assessment and intervention, and the long-term bio-psychosocial consequences incurred by implementing – or not implementing – particular approaches;
Sustained discourse and deliberation focusing on the neuroethical issues associated with pain care in children;
Ongoing examination – and possible revision – of guidelines, policies and laws to ensure the probity of pediatric pain research and clinical care.
Our group is fond of the adage “measure twice and cut once”, and we unapologetically re-assert this summons here. But it is important to note that we do not advocate this as an “either/or” proposition, but instead as a “both/and” obligation: to meet the neuroethical opportunities and challenges afforded by advancing brain science in the clinical care of those most vulnerable, and to sustain the right and good use of neuroscience and its technologies in society across generations.
James Giordano PhD is Chief of the Neuroethics Studies Program at the Pellegrino Center for Clinical Bioethics, and is Professor in the Department of Neurology at Georgetown University Medical Center. Follow more of Professor Giordano’s work at explore.georgetown.edu and http://www.neurobioethics.org.
Connect with PCCB for more updates on our research and events: