Neuroethics Fifteen Years On

Adina L. Roskies explains how new discoveries are changing the philosophical landscape

In 2002 I wrote a piece entitled “Neuroethics for the New Millenium” that described a new research area, neuroethics, as comprising the ethics of neuroscience and the neuroscience of ethics. In the 15 years since, neuroscience has come a long way, and neuroethics, too, has evolved. There are courses in neuroethics at many institutions of higher learning, and national and international societies whose meetings convene hundreds of researchers to discuss neuroethical issues. Here I highlight a few of the questions and issues that advances in neuroscience have raised for neuroethics. Although the questions themselves are not new, they are in a sense newly animated by advances in neuroscientific capabilities.

One of the central neuroethical questions concerns enhancement, our ability to improve upon our natural mental capacities. The medical sciences aim to treat disease and dysfunction, but often these treatments can improve functionality beyond baseline, or can be used by those without a disorder to enhance normal function. What are the ethical issues surrounding the enhancement of our cognitive abilities? Neuroscience provides a number of avenues to enhance cognition or to augment other abilities. The most common is pharmacological: consider, for example, the use of Ritalin or Adderall by college students without ADHD to improve performance on tests. However, enhancement is not confined to the administering of short-acting medications. Neural enhancement could potentially involve noninvasive or invasive brain stimulation, the implantation of neural prosthetics, or other more recherché methods such as targeted gene editing. Arguments about the ethics of enhancement are not new, and blanket arguments about the wrongness of enhancing our natural abilities seem doomed to fail – after all, we all want to educate ourselves and our children, yet education is just one method of cognitive enhancement.

To my mind no argument about the “unnaturalness” of neural enhancement holds water as a reason not to pursue it. The best arguments against neural or cognitive enhancement involve the harms that are likely to accrue directly because of the interventions, or indirectly because of the larger effects such changes will have upon society. It is also unlikely that any arguments about the ethics of enhancement will fit all cases, since the details of each enhancement technique and its consequences will likely be different. What is certain is that more methods for and types of enhancement will be possible as our understanding of the brain improves, and our ways of manipulating it expand and become more targeted. Whether and for what purpose they should be employed or made available are questions that we will increasingly have to answer, as a society, as policy-makers, and as individuals.

The rise of neuroethics coincides with the rapid development and spread of neuroimaging technologies. Prior to the development of functional MRI (fMRI), our ability to measure neural activity in behaving healthy humans was quite limited: we were restricted to measures of surface electrical activity on the scalp, which provide poor spatial resolution, or to somewhat invasive and restricted measures of blood flow with positron emission tomography, available only at a few well-endowed medical research centres. fMRI has changed all that, enabling researchers almost anywhere to noninvasively scan normal participants doing cognitive tasks.

As neuroimaging has developed, it has become a tool for correlating brain activity signals with neural function, and even content. Early worries about the prospect of mindreading with neuroimaging techniques seemed overblown, even quaint, to many working in the field, myself included. After all, the signal from fMRI is noisy and has limited spatial and temporal resolution. It will never afford an understanding of the activity of individual neurons, but only measures of the aggregate activity of many millions of neurons.

However, as time has gone on, a number of developments have made the prospect of mindreading more realistic. fMRI technology has improved, with stronger and more stable magnets allowing for higher resolution imaging than was previously possible. More importantly, however, a number of novel analytical approaches have altered the landscape of what kind of information can be extracted from fMRI data. The application of multivariate techniques allows patterns of brain activation to be classified in ways that provide good predictive power for identifying complex mental and emotional states.
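To make the decoding idea concrete, here is a minimal sketch of multivariate pattern classification, written in Python with scikit-learn over synthetic data; the voxel counts, noise levels, and the two “mental states” are invented for illustration, not drawn from any real study.

```python
# Minimal MVPA sketch: train a linear classifier on many-voxel activation
# patterns and test it on held-out trials. All data here are synthetic
# stand-ins for real fMRI voxel responses.
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import LinearSVC

rng = np.random.default_rng(0)
n_trials, n_voxels = 200, 500

# Two hypothetical mental states, each with a faint characteristic pattern
# distributed across many voxels and buried in trial-by-trial noise.
labels = rng.integers(0, 2, n_trials)               # state 0 or 1 per trial
state_patterns = rng.normal(0, 1, (2, n_voxels))    # one pattern per state
X = 0.3 * state_patterns[labels] + rng.normal(0, 1, (n_trials, n_voxels))

# Cross-validated accuracy well above 50% means the aggregate pattern
# carries decodable information about which state the "subject" was in,
# even though no single voxel is informative on its own.
decoder = make_pipeline(StandardScaler(), LinearSVC())
scores = cross_val_score(decoder, X, labels, cv=5)
print(f"decoding accuracy: {scores.mean():.2f}")
```

The point of the multivariate approach is visible in the construction: the signal in any single voxel is weak relative to the noise, but a classifier that weighs hundreds of voxels jointly can still separate the two states reliably.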

For example, recent work has shown that negative affect can be identified as distinct from pain; that objects of perception can be reconstructed from brain data with reasonable resolution; and that when people imagine objects belonging to certain semantic categories, those categories can be identified at rates well above chance. Other analytical techniques enable researchers to compensate for individual differences in brain size, shape and functional organisation, allowing better pattern classification across people. The combination of these approaches has enabled researchers to make significant progress in classifying complex thoughts. As an example, a network trained to classify brain responses to sentences in one language was able to correctly classify the same sentences presented in a second language it had not previously seen. This suggests that the brain represents semantic content independently of linguistic vehicle, and that this content is similarly represented across individuals and encoded in ways compatible with the limits of fMRI.
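The inference from cross-language transfer to shared semantic representation can be illustrated with a toy simulation; this is a sketch of the logic, not the actual study, and every quantity in it is made up. If the response to a sentence is a shared semantic pattern plus a language-specific component, a classifier trained on one language should still identify the same sentences in a language it has never seen.

```python
# Toy illustration of cross-language decoding: sentence responses share a
# semantic component across languages, so a classifier trained on language
# A transfers to language B. All data are synthetic.
import numpy as np
from sklearn.svm import LinearSVC

rng = np.random.default_rng(1)
n_sentences, n_voxels, n_reps = 10, 300, 20

semantic = rng.normal(0, 1, (n_sentences, n_voxels))  # shared content code
lang_a = rng.normal(0, 1, n_voxels)                   # language-specific
lang_b = rng.normal(0, 1, n_voxels)                   # components

def trials(lang_component):
    """Noisy repetitions of each sentence's pattern in one language."""
    X = np.repeat(semantic, n_reps, axis=0) + lang_component
    X += rng.normal(0, 2.0, X.shape)
    y = np.repeat(np.arange(n_sentences), n_reps)
    return X, y

X_train, y_train = trials(lang_a)   # sentences presented in language A
X_test, y_test = trials(lang_b)     # same sentences, unseen language B

decoder = LinearSVC().fit(X_train, y_train)
print(f"cross-language accuracy: {decoder.score(X_test, y_test):.2f} "
      f"(chance = {1 / n_sentences:.2f})")
```

If the semantic component were not shared, test accuracy would fall to chance; transfer well above chance is what licenses the conclusion that content is encoded independently of the linguistic vehicle.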

These recent advances have raised the prospect of mindreading with new urgency. While it is still not possible to “read” mental content from brain scans in the sense that one can unambiguously discern the contents of propositional thought, the prospect of gleaning substantial information about thought content is no longer in the realm of science fiction. At least currently, however, this cannot be done without the knowledge and implicit consent of the subject.

On the philosophical side, the issue of mental privacy is surprisingly undertheorised, perhaps because realistic prospects for mindreading have until recently been nonexistent. In US law two constitutional amendments have been taken to be relevant to mindreading. The Fourth Amendment, which protects against unreasonable searches and seizures, could be taken to protect individuals against incursions into mental privacy by the state. The Fifth Amendment protects people from self-incrimination in criminal proceedings. This protection has been interpreted as extending to testimonial but not physical evidence. However, the status of brain imaging data is unclear: assuming that mental states just are a result of brain activity, there is no clear distinction between physical and testimonial evidence, for evidence of mental content is both. And self-incrimination is a rather restricted context.

More importantly, such protections only extend to the relationship between individuals and the state, and not, for example, individuals and other individuals, or companies. The United States has lagged far behind Europe in protecting data privacy on the internet, and has thus far failed to clearly articulate the philosophical and legal basis for personal privacy in the information age. These lacunae will also pose a risk for mental privacy. Theorising about the value and scope of mental privacy should be part and parcel of protecting freedoms in the future.

Philosophers have long discussed the nature and importance of agency, of being an autonomous being acting in the world. Although we lack a settled analysis of agency, there are a variety of dimensions or capacities that we enjoy, perhaps in varying degrees, that intuitively bear on our agency. One aspect of agency is our personal identity, that which makes us the same person over time. There are a variety of philosophical theories about what it is that makes us the same person over time; many of them depend on psychological factors, such as our memories, our personality, or our self-conception. There is another concept in this realm, which I will call self-identity: what a person self-identifies with (e.g. passions, or religious or gender affiliations). Both may be valuable, but they are often conflated in the literature under the heading of personal identity.

Recent developments in techniques for intervening in the brain open up the possibility that we can alter the people we are (personal identity) or take ourselves to be (self-identity) via neurotechnologies. Fairly commonplace examples involve old neurotechnologies, such as pharmacological interventions, which in addition to their therapeutic value may, as a side effect, lead to changes in mood or personality that some have argued affect a patient’s personal identity. Novel techniques on the horizon may more dramatically affect these sorts of factors.

For example, Deep Brain Stimulation (DBS), which involves implanting an electrode in subcortical structures and chronically stimulating neural tissue, is an approved treatment for Parkinson’s Disease (PD) and an experimental technique for other disorders. Over 100,000 people currently undergo DBS as treatment for PD. DBS can be life-changing for those for whom other treatments are ineffective, and it reliably improves motor functioning for the vast majority of patients. However, a small proportion of patients report side effects of treatment that can include the development of obsessive-compulsive behaviours, hypersexuality, changes in mood or personality, gambling addictions, and psychosis. Although these effects are diverse, many of these behaviours can be understood as playing some part in making up who a person is.

In one famous case study, a man with broad and eclectic musical tastes suddenly developed a strong and focused preference for the music of Johnny Cash, forgoing all his previous musical interests. The strong desire for Cash’s music abated when stimulation was interrupted, and returned upon resumption of DBS. One might ask in what way this “desire” was one that could be attributed to the subject, or whether the treatment constituted an instance of “desire insertion”, a preference not attributable to the agent himself.

The ability to directly intervene on brain function, and in some cases possibly to alter aspects of who a person is, has alarmed some ethicists. Are these kinds of changes morally problematic in a special way? What sorts of things are constitutive of personal identity? Do some characteristics of people have a special status because of their role in constituting who someone is, or because of the way in which they figure in someone’s self-conception? Are there changes that are especially harmful, or perhaps absolutely prohibited? Does it matter whether they are desired effects or inadvertent side effects of a treatment? After all, some therapeutic treatments actually aim to alter things like a person’s mood or desires. If changing oneself were itself morally prohibited, then many things we currently take to be valuable, such as certain types of self-improvement, would be morally wrong. The answers are unlikely to be so simple.

But even theoretically unproblematic questions, such as when the benefits of a treatment are outweighed by its harms, are unlikely to be practically straightforward, or even objective. Consider, for example, a patient who was bedridden and hospitalised without DBS, but who with DBS regained his mobility yet became psychotic and needed to be institutionalised. Should he undergo DBS? What if his views on the matter differ with and without stimulation? If personal identity really is altered, which person should decide? The more we are able to change core features of a person’s mood, cognition, and function by intervening in the brain, the more pressing it will be to answer these kinds of questions.

Severe brain damage can leave people in a persistent vegetative state (PVS) or a minimally conscious state (MCS). It is estimated that there are approximately 14,000-35,000 people in PVS in the US. PVS patients, even though they have periods of apparent sleep and wakefulness, show no evidence of awareness of internal or external stimuli. It has been argued on this basis, and on the almost nonexistent prospects for recovery after a year or so of PVS, that there is no ethical obligation to keep such patients alive. However, about a decade ago scientists put a number of PVS patients in an fMRI scanner and asked them to imagine playing tennis or navigating through their house. Researchers had already shown that different and highly distinguishable networks of brain regions are active in normal people engaged in these two tasks, allowing the task a person is performing to be reliably identified from their brain scans. Asking this of PVS patients was an unprecedented, risky and expensive undertaking, since fMRI is enormously costly and these patients had been outwardly unresponsive to verbal and other stimuli for years. It was accepted that these patients had no mental life. The results shocked the neuroscience community: a small percentage of the PVS patients tested showed distinguishable brain signatures in response to these two commands for mental imagery, in brain areas similar to those of normal people.

Further work showed that at least one of these patients was able to use these mental imagery tasks to indicate yes and no answers to questions they were posed. Although not everyone is convinced that these results indicate that the patients who produced the different brain signatures are conscious, the evidence weighs heavily in that direction. It seems that at least some people who consistently show no outward sign of any mental life at all are nonetheless conscious and sufficiently cognitively intact to understand instructions, execute a relatively demanding cognitive task, and stay on task for a significant amount of time.
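The detection logic in these studies can be sketched in miniature. The sketch below is hypothetical and heavily simplified: it reduces each scan to mean activity in two regions of interest, whose names merely reflect the networks reported for the two tasks, and all the numbers are invented; real analyses work on full activation patterns.

```python
# Toy sketch of the tennis/navigation paradigm: each imagery task engages
# a distinct network, so comparing activity in two regions of interest
# (ROIs) suffices to tell which task the patient is performing. Synthetic
# data; ROI names are placeholders for the reported networks.
import numpy as np

rng = np.random.default_rng(2)

def simulated_scan(task: str) -> dict:
    """Mean ROI activity during one imagery block (invented numbers)."""
    scan = {"supplementary_motor_area": rng.normal(0.0, 1.0),
            "parahippocampal_gyrus": rng.normal(0.0, 1.0)}
    if task == "tennis":
        scan["supplementary_motor_area"] += 2.0   # motor imagery network
    else:
        scan["parahippocampal_gyrus"] += 2.0      # spatial navigation network
    return scan

def decode(scan: dict) -> str:
    return ("tennis" if scan["supplementary_motor_area"]
            > scan["parahippocampal_gyrus"] else "navigation")

# With an agreed code (imagine tennis for "yes", navigation for "no"),
# the decoded task becomes a one-bit communication channel.
answer = "yes" if decode(simulated_scan("tennis")) == "tennis" else "no"
print(answer)
```

Once the two imagery states are reliably distinguishable, any question with a yes/no answer can in principle be put to the patient, which is how the follow-up work proceeded.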

These results raise a number of pressing ethical and neuroscientific issues: How can we more affordably and quickly screen PVS patients to determine whether they fall into the small minority of patients with evidence of preserved function? How should such patients be treated? At the very least it seems that we should determine whether they feel pain, and take steps to treat it if they do. More difficult will be determining the extent of their preserved capacities, and what those entail. Should they be able to take part in decision-making about their own futures – i.e. are they sufficiently competent to weigh in on matters of life and death? Can we develop imaging prosthetics that will improve their ability to communicate? And can we help the families of PVS patients understand that these capacities are rare in PVS patients, and for the vast majority, their unresponsiveness is indeed due to the absence of awareness? The case of PVS patients is one in which neuroscience has shown that an entire class of people has been mistakenly diagnosed by clinical practice relying on behavioural and not brain data, with ethically troubling results.

The previous examples all concern the ethics of neuroscience. One thing that distinguishes neuroethics from general bioethics is that it also encompasses the neuroscience of ethics. That is, it is concerned with understanding the neural basis of moral cognition, and with how such understanding will bear upon ethical thought. Although we still have a long way to go to really understand moral cognition, it is clear that emotional processing plays an important role in making some moral judgements. It has been argued that we ought to privilege rational over emotional processes in moral deliberation, on the grounds that emotional processes are heuristics ill-adapted for use in today’s complex world. Others have argued (I think erroneously) that neuroscience has shown that we lack free will, and they conclude that moral responsibility is thus an illusion. Although I disagree with both of these claims, they illustrate how understanding how we think and act at a neural level may affect the way we conceive of morality. That is perhaps the most distinctive aspect of neuroethics, and the one perhaps most likely to change the way that we see ourselves.

Adina L. Roskies is the Helman Distinguished Professor of Philosophy and Chair of Cognitive Science at Dartmouth College.