
Frederica Saylor, "Scientist explores morality’s roots in the brain" (2005)

"Science & Theology News" September 26, 2005; http://www.stnews.org/news-1630.htm

Scientist explores morality’s roots in the brain


Joshua Greene outlines the significance of being able to see how the brain responds to moral questions.

By Frederica Saylor
(September 26, 2005)


(Photo: Josh Greene)


A philosopher and cognitive neuroscientist, Joshua Greene is a post-doctoral researcher in Princeton University’s psychology department and the Center for the Study of Brain, Mind and Behavior. His experimental research uses behavioral methods coupled with functional magnetic resonance imaging to examine the neural basis of morality.

His dissertation examines the foundations of ethics based on recent psychological and neuroscientific research and evolutionary theory. In 2006, he will join Harvard University’s psychology department.

Greene recently spoke with Science & Theology News’ Frederica Saylor to outline the significance of pinpointing how the brain responds to moral questions.

How did your background lead you to neuroscience?

My background is a bit unusual. As an undergraduate, I was a philosophy major and took courses in neuroscience and psychology. I ended up going to graduate school in philosophy and got a doctorate in philosophy. While I was working on my doctorate, I started doing experimental work in the psychology department with Jonathan Cohen, who is a neuroscientist and a cognitive psychologist, and John Darley, who is a social psychologist at Princeton. That’s when we started doing our brain-imaging studies of moral judgment.

What do the brain imaging studies you’ve conducted indicate?

The idea is to try to understand the respective roles of emotion and reasoning in moral judgment. Our strategy was to take moral dilemmas that philosophers have been arguing about for a long time and present them to people while they’re having their brain[s] scanned.

In particular, I’ve focused on questions that seem to bring out the tension between two of the dominant strands in Western moral philosophy: the utilitarian or consequentialist strand, which emphasizes cost-benefit reasoning, and the Kantian or deontological [strand], which emphasizes rights, obligations and duties.

My thinking about these things, which seems to be confirmed by the results of the brain-imaging studies, is that there’s a link between what we think of as cognitive processes and utilitarian judgments, and between emotional processes and deontological judgments, at least in certain contexts.

How do you define morality?

My approach is to not come up with a rigid definition of morality, but rather to focus on questions that everyone would agree are moral questions. I think ultimately, a moral issue is whatever we apply moral thinking to. So, something that we don’t think of as a moral issue, someone else might.

Generally speaking, moral issues concern the rights or well-being of other people. But that’s not always the case, at least according to some moral outlooks. There are some things you can do all by yourself that are morally wrong but that wouldn’t necessarily harm anybody, at least not in any observable way.

I don’t think that there are clear boundaries to the domain of morality, but we can nevertheless study morality by studying things that everyone agrees are moral issues, like: When is it OK to sacrifice one life to save several others?

Can you give some examples of moral issues most people may agree on?

This is part of a broader philosophical problem that ethicists have been arguing about for at least a couple of decades. Dilemma No. 1 goes like this. I call this the “trolley case”: A train is headed toward five people. It’s going to run them over if nothing is done. But you are at a switch, and you can hit the switch so that it will divert the train onto a side track where there’s only one person. If you hit the switch, only one person will die instead of five. The question is: Is it OK to hit the switch so you minimize the number of deaths from this runaway trolley? And most people that we’ve asked say that this is OK.

A variation on this question [is the “footbridge case”]: The train is headed toward five people, and this time you’re standing on a footbridge over the tracks in between the oncoming train and the five people. It happens that you’re standing next to a large individual, much larger than you, and the only way you can save these five people is to push this big guy off of the bridge so that he’ll land on the tracks and stop the train with his heavy body, and the five people will live. So the question again: Is it OK to sacrifice this one person in order to save five others? And most people say in this case that it’s not all right.

These are very much the same — at least in consequential terms. It’s trading one life for five. The research question is: Why do people say that it’s OK in one case but not in the other case? What’s going on here? My hypothesis has been that there’s something about that up-close-and-personal violence — pushing the guy off the bridge with your bare hands — that triggers an emotional response in us that makes us say, “No, that’s wrong.” And it overrides any cost-benefit analysis that we’re likely to make, at least in most cases. Whereas, in the original case, where you’re just diverting the train, we don’t have that emotional response so we think about it in more abstract, cost-benefit sort of terms.

What occurs in the brain with the different responses?

We looked at a whole bunch of questions, some of which were like the footbridge case with this up-close-and-personal violence, and some of them were more like the trolley case. When we compared the brain activity between these two sets of cases, we found that the cases with sort of up-close-and-personal violations elicited increased activity in brain regions associated with emotion and what you might call social cognition.

Are these responses instinctive in most people or can they be trained?

We don’t really know. My theory about this is there’s at least a strong genetic component. More specifically, the idea is that up-close-and-personal violence is an ancient part of our evolutionary history. It’s something that our ancestors have been dealing with for a long time, and it makes sense that they would have instincts that respond to it. Whereas more distant kinds of harm — like redirecting a trolley with a switch, or even more distant, something like tax evasion — those are things that our ancestors never had to deal with, so it’d be surprising if they had instincts that were tailored to deal with those things.

My guess is that the difference between the trolley case and the footbridge case is really the difference between moral violations that we have a somewhat innate, biologically predisposed emotional response to and those that we don’t.

What are some more common, day-to-day examples of moral decisions people may have to make?

Suppose you’re walking by a lake and there’s a child who’s drowning in the lake. You could rush in and save this child very easily: just a few feet off shore, shallow water, little kid. But you’d ruin your leather shoes in the process of doing this. You’d be a terrible person if you said, “I’m not going to save the kid because I’m worried about my shoes.”

In a different case, but in some ways very similar, we all get things in the mail saying: “Please send us $100. This is a letter from a reputable relief organization. We can save starving children in Africa who are badly in need of food and medicine. A small donation can save a life; please do your share.” And we say, “Well, I’d like to help these kids, but I was thinking of buying some leather shoes.” If you do that, we don’t think that you’re a terrible person. Maybe you’re not a saint for wanting to buy yourself an extra nice pair of shoes when you could be spending the money on saving starving children in Africa. But we don’t think that you’re a terrible person if you don’t do that in the way that we think you’d be a pretty terrible person if you let the kid drown because you were thinking about your shoes.

What is the role of religion in terms of making these moral judgments?

I think it’s really complicated. I think that the two interact, at least in people who are religious. I think to some extent, religions endorse what they endorse and reject what they reject because of pre-existing intuitions about what’s right and what’s wrong. But there’s no question that religions can also shape people’s sense of right and wrong. For example, Judaism and Islam play some role in making the devout followers of those religions not want to eat pork. That’s not something that just happens automatically. There has to be some type of cultural inculcation that religion plays a large part in.

What are some important elements for the future of this research?

I think we’re going to see a lot more cross-cultural work to try to get a sense of how moral judgments vary or how they’re the same for different people with different cultural experiences. I think that in terms of more basic psychological work, there’s a lot to be done in trying to understand what kinds of factors affect our moral judgment.

In terms of the neuroscience, I think that the technology is going to get increasingly better, and we’re going to make better use of it. Hopefully, we’re going to start understanding these judgments in terms of brain circuits using tools with better temporal resolution, and not just in terms of general regions of the brain.

Which fields not directly related to psychology or neuroscience might this research most affect?

I think the behavioral sciences more broadly can certainly be affected. Economists, for example, study decision-making and are working with the model that claims people are highly rational and purely self-interested. Understanding moral psychology, I think, is going to make it clear that that model is inadequate. Aspects of our moral thinking are going to have to be incorporated into any comprehensive model of social decision-making.

Science & Theology News will continue this conversation with Joshua Greene tomorrow.
