Neurolaw
Jeffrey Rosen has an interesting piece about neurolaw in the New York Times. He walks us through various research projects that, he thinks, might eventually force lawyers to reconsider some of their deeply held assumptions about the law.
Like aeroman, I think some of the neurolaw boosters Rosen profiles may be overselling the near-term legal applications of their research. However, the author hears out both knowledgeable skeptics and serious enthusiasts.
Rosen participates in one series of experiments in which subjects are asked to do moral reasoning inside an MRI scanner. Scientists record their brain activity as they decide what punishment would fit each hypothetical crime. The researchers are trying to figure out what goes on in our brains as we reason about justice and punishment. They're trying to learn what's different about the brain of a calculating, rational decision-maker compared with someone who follows their gut. This research could be useful for developing methods of persuasion geared towards specific types of decision-makers. Advertisers and market researchers are already exploring the possibilities of targeted persuasion. I'm sure the neurolaw people aren't far behind.
[The two scientists] talked excitedly about the implications of their experiments for the legal system. If they discovered a significant gap between people’s hard-wired sense of how severely certain crimes should be punished and the actual punishments assigned by law, federal sentencing guidelines might be revised, on the principle that the law shouldn’t diverge too far from deeply shared beliefs. [NYT]
A) I'm not sure the world would be a better place if we let the lizard brain dictate our criminal sentencing guidelines. B) On a practical level, why do we need brain scans to ascertain how people feel about crime and punishment? Why not just skip the MRI and go straight to public opinion polling?
Rosen also worries about whether a more sophisticated understanding of the brain will force us to abandon our traditional conception of free will. I don't see why understanding the brain would be any more or less threatening to the concept of free will or personal responsibility than our prior understanding of determinism and indeterminism. We understood the underlying logical problems a long time ago; now we're just working out more of the details of the causal chain inside the head.
There's lots of interesting stuff in this article, especially the sections about measuring implicit biases. What's lacking in the piece is a sense of why the relatively new neuro side of this research is so important for the law. Psychologists have been pursuing similar research programs by studying outwardly observable behaviors for years.
It's neat that our imaging techniques have evolved to the point where we can watch the fireworks in the brain, but these imaging studies don't seem to be telling the legal profession very much over and above what the law and psychology specialists have been studying for years.
That's not to denigrate these brain imaging research programs. Studying the brain is worthwhile for its own sake. However, some brain imaging scientists seem tempted to oversell the practical applications of their work in order to get grants for the very expensive equipment they need.
A lot of research in cognition has been funded by marketers. Understanding the deviations from ideal rationality gives marketers a significant edge.
Posted by: togolosh | March 13, 2007 at 12:44 PM
earlier this year, i recently started a whole new field of neuro science.
as someone who works with marketing jargon all the time, i recently came up with a whole new potential branch of neurosciences for worldwide study and application.
try plugging the following terms into a search engine...
neuropsychaiatrology
the person who studies this field:
neuropsychaiatroligist
earlier this year, on one of my blogs i wrote the first published study on psychaiatrology. it is my professional academic goal in life to become the first neuropsychaiatroligist that is state approved.
i doubt anyone in an academic, medical or legal field would take my brainstorming research seriously.
Posted by: revenantive | March 13, 2007 at 01:03 PM
I have been reading Chris Hedges's pieces on the Christian right as well as Robert Altemeyer's "The Authoritarians" on the web. It seems that some people make decisions and develop morality in a completely different way than I do. I consider myself a skeptic and part of the rational community. I am sure these people are wired differently; our brain scans would be quite different. But I also believe they have developed tremendous rationalizations, justifications, and many other defense mechanisms through their development. Their brains are going to "fire" differently because of their own unique development. Not sure if this is a significant point.
Posted by: Matt | March 13, 2007 at 01:09 PM
I don't see why understanding the brain would be any more or less threatening to the concept of free will or personal responsibility than our prior understanding of determinism and indeterminism.
Perhaps our greater understanding of the brain will remove hiding room, or break down compartmentalization of beliefs? Someone might agree that the universe is (mostly?) deterministic while avoiding thinking about what that means for us, or holding out hope that we'll be different somehow. (Quantum! The modern version of Epicurus's swerve.) When the causal chains become more explicit it may become harder to avoid thinking about their implications on the grounds of "it's too abstract" or "it's not proven".
Or in other words, yes, (some) philosophers have struggled with this for up to 2600 years, but now ordinary people will have to think about it too.
And actually I think all this might take us *away* from "lizard-brain" (limbic/mammal-brain might be more accurate) judgements. That brain thinks in terms of punishment and retribution, but compassionate logic informed by determinism may end up thinking in terms of deterrence and prevention.
Posted by: Damien | March 13, 2007 at 01:45 PM
I think you're right that there's not much neuroscience can say about free will that philosophers haven't already said (except for mechanisms), and that psychologists have been doing similar research for a while. But I think you're too dismissive of the potential impact of neuroscience, just because it seems much flashier and more scientific than either of the other disciplines. A neuroscientist who shows up with a fancy picture of a brain firing away and who points to synapses and whatnot will be privileged in any public conversation, and will have a lot more impact than any philosopher. Just look at the relative coverage in the popular press of "hard science" breakthroughs versus philosophical innovation.
Posted by: dan | March 13, 2007 at 02:33 PM
I remember seeing on TV some brain-scanning thing which could detect recognition. So if a rapist denies guilt, hook him up to the machine, show him 15 different women, and see if his brain registers recognition of the woman's face; if it does, he's seen her before. Or show him pictures of the crime scene, etc. That to me is more scary. I've never seen anything on that sort of technology since, though.
Apparently the CIA will also report crimes to the FBI they detect through lie detectors. So if somebody lies about embezzlement at his last employer during a background check, the CIA will report that to the FBI. So be careful what questions you answer during your background check.
The machines are getting into our heads, we can't even lie any more.
Posted by: citycrimes | March 13, 2007 at 02:45 PM
Psychologists have been pursuing similar research programs by studying outwardly observable behaviors for years.
yeah, but discussions about the brain are much sexier. plus neuro stuff sounds more like "real" science. i think that many people don't get either a.) what psychology has to offer in terms of understanding human behavior, or b.) the limitations of imaging research.
Posted by: Heather | March 13, 2007 at 07:34 PM
As I'm sure has been said elsewhere, the article fails to recognize the substantial weight of the right against self-incrimination embedded within western legal systems. The fact that it might be perfectly feasible to measure whether a person has seen a face or location before does not mean it will be justifiably deployed in the legal or law-enforcement setting. I imagine there would be pretty significant outcry if such systems were seriously introduced. This assumes, of course, that society doesn't atavistically reanimate trial by ordeal in a spasm of authoritarian groupthink -- extracting a confession through pain and extracting one through involuntary mind-reading differ in means, but they have the same result. If a system of mind-reading comes into play, we might as well retire the court system completely; courts are fora of judgments based upon fact-finding, after all. Why have a trial when someone's guilt is written all over her brain?
Riffing on what Damien wrote, showing the mechanisms of thought will bring the issue of the problem of mind into the realm of the popular imagination in a way philosophy cannot (due to the latter's requirement of critical reasoning and abstract thought).
Also, there is an enormously important implication for the concept of free will touched upon in the article. A lot of neurological research is suggesting that what we think of as our "self" is really just a special effect. In this construction of mind, our self is the phenomenon that coheres the decisions made "elsewhere" -- that is, decisions arrived at outside what we consider to be our intent. In effect, we are the last to know what we think, and we really don't have much say in the matter after all.
The law -- especially criminal law -- is permeated with issues concerning "directing mind", "mens rea", &c. If this emerging model of consciousness continues to gain purchase, this will all have to be revisited in a big way. The implications are staggering, and I suspect the general public would be taken totally unawares by the result. People do still tend to think in terms of the mind being a sort of self-contained motivator of action. We tend not to find people guilty of acts when they didn't "intend" them, but under this construction we never "intend" them in that sense, because "we" are simply the sum total of whatever neural activity occurs, and that is highly contextual and dependent upon purely biological factors. There is no "me" beyond the components of me, and if those components are altered, I become someone else. It's hard to attach true culpability to such an ephemeral concept.
Heather: It seems to me that both neurology and psychology have contributions to make here. Psychology's weakness is the inability to observe anything other than effects, and imaging seems to provide that background. I don't think one will trump the other, but psychology will definitely have to align itself with the realities of real-time neural observations. It could be that various parts of psychological theories of minds will turn out to be corroborated by neural imaging. And surely you can't mean that discussions of the mind are not also discussions of the brain? The brain is the mind. Without the brain, psychology is literally meaningless. It seems to me that having imaging could only be helpful to psychologists.
Posted by: Tim Bailey | March 14, 2007 at 01:03 AM
So far as I can see, the proscription against self-incrimination in the modern justice system is primarily to remove the possibility of coercing a confession through torture or other means of persuasion (monetary, familial, pharmaceutical).
It doesn't matter if torture uses fingernail pulling, waterboarding, burns, or just sleep deprivation. A confession obtained under duress isn't worth the paper it's signed on because everybody has a breaking point, sooner or later, where they will admit to whatever the interrogator suggests just to make the torture stop. Under the right conditions, the subject can even convince themselves the false confession is true. Torture and coercion don't just make for bad justice however, they are also an affront to a person's fundamental human right to be safe from undue persecution.
However, if you could have a reliable and non-damaging method of detecting whether somebody believed they were telling the truth, and it was possible to differentiate an unprompted truth from a "truth" implanted through coercive measures, then I think there would be no reason not to allow that as a tool for justice. However, its use would have to be narrowly limited to prime suspects and the specific crimes they are accused of, and disallowed for use in "fishing expeditions" in line with existing "unreasonable search and seizure" safeguards.
I have big doubts that something derived from MRIs can ever be that tool. I expect we would need something that requires much more individualized and fine-grained understanding and recording of a brain's response than can be obtained with an MRI. Perhaps a mapping and dynamic model of the interconnections of neural stacks in the brain using massively parallel, molecular/nanotechnological interstitial machinery? I think it will take quite a few more decades before the development of something like that will require completely revamping the current criminal justice system. But I don't think it's necessarily impossible.
Posted by: Pennant | March 14, 2007 at 03:42 AM
Why not just skip the MRI and go straight to the public opinion polling?
Because saying it comes from MRI studies is so much more sciency. I mean, there's a pretty well-developed idea that you shouldn't let people's knee-jerk reactions determine sentencing policy, and there's now quite a lot of empirical evidence that harsh sentencing doesn't help from an overall social perspective, so if you're a string-em-up type, you've got to find another way to justify your pathological craving for revenge and other people's suffering. Wrapping it up in a cloak of scienciness is just the way to go.
Posted by: Dunc | March 14, 2007 at 08:09 AM
People have been studying the art of manipulating people for centuries. Some people have a natural talent at it. While this is all very interesting research, I think it just reveals a mechanism behind something we already know about and use - how people come to decisions about things, which tells you how you can manipulate them (in general). I doubt there will be much direct legal effect (says the lawyer).
Posted by: Disgusted Beyond Belief | March 14, 2007 at 09:45 AM
Disgusted Beyond Belief writes;
I doubt there will be much direct legal effect (says the lawyer).
Doyle;
The implications are engineering-like. The speed and power of direct brain-like information production supersedes the role of a lawyer in public or private relations. The whole legal system, based as it is on the 'law', is really a crude attempt to regulate human intercourse at the rough scale of written text, which is too slow and ponderous to match the speed and volume of modern computing.
Doyle
Posted by: Doyle Saylor | March 14, 2007 at 10:46 AM
Doyle, I'm not sure what it is you suggest as a replacement for the legal system. I'm speaking both as a lawyer and as a (former) professional computer programmer when I say, "huh?"
Posted by: Disgusted Beyond Belief | March 14, 2007 at 10:49 AM
I doubt there will be much direct legal effect (says the lawyer).
I agree if you're saying that there won't be an MRI scanner on the witness stand. However, I don't think the indirect effects will be insignificant. If it comes to be that the predominant model of the mind is that of an effect, rather than a cause, the doctrine and law surrounding intent will have to be adjusted. If it is believed that people's minds, and their personalities, are entirely contingent on unconscious activity, then in a very real sense nobody can "help" what they are doing. We all become subject to the automatism defence. The law will have to adapt its construction of the directing mind in order to demonstrate that people are culpable in the sense that is presently deployed in legal discussions of guilt.
Another way to look at it: If (when, really) we have a sufficient phenomenal and causal model of brain function, present evidence suggests that part of this will necessarily show the direct effects of any particular neural event. This means that if anyone experiences this event, most anyone would do the particular thing that the accused did. This falls outside the present construction of culpability (at least the one they taught me at law school) where someone must have a guilty mind (mens rea) at the time of the bad action (actus reus). In other words: a bad thought leads to (and coexists with) a bad act. The problem really arises around the question of bad thoughts, which are still parsed as being some sort of evil impulse, or malignancy of character. These characterizations in turn flow from magical thinking, which held sway in past ideas about why people did anything, including within the courts.
As the early cases of tumors show, behaviour is entirely the result of the neural state at the time of the crime, and has little to do with the nature of the accused's soul, or the "character" of his or her "mind," insofar as one views the mind as some sort of indivisible miasma of consciousness. Present theory suggests that the mind is highly divisible: what we perceive as the cohesive and coherent nature of human identity is largely a special effect, and almost anyone can be rendered insane rather easily.
If a certain set of precursor events necessarily leads to a subsequent action, proving guilt -- guilt that elicits punitive judgment -- then becomes a matter of punishing not "badness" but a particular set of neural states; states that anyone would experience under the same circumstances. At present, judges and juries might be sympathetic to an accused's motives, but will still be bound to find him or her guilty if the case is made out; under this new understanding of the mind, they would need to find that the accused was even capable of directing his or her actions at all. This may be harder than it sounds if most of what we do is decided outside our self. We will have to revisit the issue of free will.
Posted by: Tim Bailey | March 14, 2007 at 10:40 PM
I have a hard time believing that any of these technologies will fundamentally alter the way the legal system does business.
Whether it's a polygraph, an MRI, or some future imaging technology, there's always going to be a further question of interpretation. We know with polygraphs that sympathetic nervous system responses are correlated with lying in the average person. I.e., the average person, when telling an average lie, will breathe a little faster, sweat a little more, etc. On the other hand, some psychopaths can literally lie without breaking a sweat, and some people just get nervous on some topics despite having no guilty knowledge, per se.
So, I'm glad that polygraphs aren't admissible in most courtrooms. I think the average brain mounted in an average human body in the role of a judge or jury is probably at least as sensitive to the subtle cues that a witness (or an attorney) is lying.
In effect, most of our intricately complex social brain is a lie-detector. It's neat that scientists have been able to create machines that crudely approximate some of what our natural "wetware" does for us every day. But that's not necessarily a good reason to outsource these decisions.
At least when we rely on our own wetware we aren't dazzled by our own abilities to read character or divine intent. We've got our gut feelings, but if we're honest with ourselves, we also know in our guts that we're always guessing.
We even accept certain restrictive rules of evidence in court because we know that our gut feelings, however sophisticated (in a hardware sense), aren't always reliable, and therefore that we need brute algorithms to make sure that all the evidence is laid out before decision-makers to give them the best possible chance of making a truly informed decision.
Posted by: Lindsay Beyerstein | March 14, 2007 at 11:13 PM
I don't think it will ever make an impact because, as Lindsay says, it will be too uncertain. That's why lie detector tests can't be used. Though interestingly, they really can't detect lies at all, only truth. I sat in on a fascinating legal lecture from one of the foremost polygraph experts in my state, and he explained that the test is really very, very accurate when it gives a 'truth' result -- in other words, if you take the test and it says you were honest, it is usually right. On the other hand, it absolutely stinks at detecting lies -- in other words, if the result is 'lie', then there is a very significant chance that the 'lie' result is wrong and the person is telling the truth. So in truth, the only thing the test is any good at saying is either "this person is probably telling the truth" or "the test could not tell us if this person was telling the truth." But even on the truth side, there is still uncertainty -- too much uncertainty for it to ever be allowed as evidence, because the 'scientific' nature of the test would likely trump any caveats about how accurate it is with a jury.
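The asymmetry described above falls out of simple base-rate arithmetic: when most examinees are truthful, even a modest false-positive rate swamps the true positives, so a 'lie' verdict is unreliable while a 'truth' verdict is usually right. Here's a minimal sketch of that Bayesian calculation. The numbers (10% of examinees lying, 80% of liars caught, 90% of truth-tellers cleared) are hypothetical assumptions for illustration, not figures from the lecture:

```python
# Base-rate sketch of why a 'truth' verdict can be reliable while a
# 'lie' verdict is not. All rates below are hypothetical assumptions.

def verdict_reliability(base_rate_lying, sensitivity, specificity):
    """Return P(truthful | 'truth' verdict) and P(lying | 'lie' verdict)."""
    p_lie = base_rate_lying
    p_truth = 1.0 - p_lie
    # 'truth' verdict: a truthful person passes, or a liar slips through
    p_pass = p_truth * specificity + p_lie * (1.0 - sensitivity)
    p_truth_given_pass = p_truth * specificity / p_pass
    # 'lie' verdict: a liar is caught, or a truthful person is falsely flagged
    p_fail = p_lie * sensitivity + p_truth * (1.0 - specificity)
    p_lie_given_fail = p_lie * sensitivity / p_fail
    return p_truth_given_pass, p_lie_given_fail

# Assume 10% of examinees lie; the test catches 80% of liars and
# clears 90% of truth-tellers (illustrative numbers).
npv, ppv = verdict_reliability(0.10, 0.80, 0.90)
print(f"P(truthful | 'truth' verdict) = {npv:.2f}")  # high
print(f"P(lying    | 'lie' verdict)   = {ppv:.2f}")  # close to a coin flip
```

With these assumed numbers, a 'truth' verdict is right about 98% of the time, but a 'lie' verdict is right less than half the time -- exactly the pattern the polygraph expert described.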
Sadly, there is a lot that is already wrong with our criminal justice system. What really needs to be fixed is the system. But what needs to be done will never be done because of the way our political system works.
Posted by: Disgusted Beyond Belief | March 15, 2007 at 09:06 AM
Let's say the police have five suspects. Four agree to look at pictures of the crime scene while hooked up to the machine in order to clear their names, and one refuses. Who are the police going to focus their attention on?
Posted by: citycrimes | March 15, 2007 at 10:44 AM
I would be curious to hear what you think about how the processes of neuroplasticity and self-directed neuroplasticity are a part of neurolaw. I blog about it here:
http://westallen.typepad.com/idealawg/2007/03/neurolaw_leavin.html
Seems like a rather large piece of the issue of responsibility?
Posted by: StephanieWestAllen | March 15, 2007 at 11:09 PM
I have a hard time believing that any of these technologies will fundamentally alter the way the legal system does business.
That turns entirely upon the vigilance shown by supporters of the right against self-incrimination. These technologies may be unconvincing, but the next generation of devices (based on much more extensive modelling and study) might be a different matter.
I don't think it will ever make an impact...
It will take a long time to refine, but never say never. Hopefully we'll have enough lead-time to develop proactive policy and law.
Let's say the police have five suspects ... Who are the police going to focus their attention on?
The police must never be allowed to make that offer. Polygraphs, as cheesy as they are, are bad enough.
Posted by: Tim Bailey | March 15, 2007 at 11:13 PM
What do you have to hide Tim? If you're not guilty you've got nothing to worry about.
Posted by: citycrimes | March 16, 2007 at 11:38 AM
I am totally guilty -- I ate all those corndogs, and never paid! But you don't get to read my mind to find that out.
Posted by: Tim Bailey | March 16, 2007 at 12:00 PM