
Chhavi Sachdev, "Herbert Gintis: The flip side of altruism" (2006)

"Science & Theology News" January 16, 2006; http://www.stnews.org/altruism-2544.htm

Herbert Gintis: The flip side of altruism


Everybody’s an altruist, according to economist Herbert Gintis. Is it socialization or genetics?

By Chhavi Sachdev

(January 16, 2006)

 

Economist Herbert Gintis studies why people do the things they do using game theory and mathematical models. An emeritus professor at the University of Massachusetts, Gintis has also taught at the Santa Fe Institute and Barnard College. The author of Game Theory Evolving and co-author of the forthcoming title, The Cooperative Species: Human Sociality and its Evolution, Gintis believes that most people are predisposed to cooperate and be altruistic, and they aren’t even aware of it. Altruistic tendencies have a nastier flip side as well, but that’s all part of what makes society work, according to Gintis.

How did you get interested in studying altruism?

I studied all of the behavioral sciences — biology, psychology, sociology, political science, economics, and anthropology — and I thought that it was crazy that these fields could have completely different theories of how people behave. They’re not only different, they’re contradictory. We’re part of the Network of the Nature and Evolution Process. About 18 people work in it — economists, biologists, anthropologists, psychologists. We use experimental game theory to see what people want to do, what their preferences are.

What have you found about human beings and their preferences?

Economists have a model of choice that’s called the rational actor model. It generally assumes that people are selfish. In fact, that’s a very important part of it. And one of the things we wanted to do is test whether or not that is the case. We found out that it is not the case. It’s a very interesting phenomenon: People tend to be predisposed to cooperate with others at a cost to themselves as long as others will also cooperate. And people are willing to punish others when they do not cooperate.
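
A minimal sketch of the kind of laboratory setup behind this claim, assuming a standard linear public goods game (the endowment of 20 and the multiplier of 1.6 are illustrative values, not figures from the interview): under the rational actor model, contributing nothing always pays more, which is exactly the prediction the experiments contradict.

```python
# Illustrative only: a linear public goods game of the kind used in
# experimental game theory. Endowment and multiplier are assumed values.

def payoff(my_contribution, others_contributions, endowment=20, multiplier=1.6):
    """Payoff to one player: keep what you don't contribute, plus an
    equal share of the multiplied common pot."""
    group = [my_contribution] + list(others_contributions)
    pot = multiplier * sum(group)
    return endowment - my_contribution + pot / len(group)

# Even when everyone else contributes fully, a purely selfish player earns
# more by contributing nothing; the selfish prediction is zero contributions.
print(payoff(0, [20, 20, 20]))   # free riding:      44.0
print(payoff(20, [20, 20, 20]))  # full cooperation: 32.0
```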

The reason humans are so successful is normally attributed to the fact that they’re smart. The reason they’re smart is because humans operate in complex groups. The reason they can operate in complex groups is that they have strong reciprocity: Not only do they share, but they’re willing to punish non-sharers. If you look at the whole range of social species, you find that punishing is very important.

Take bees. You always think of the hive as the big social collective, where everybody does what they’re supposed to do. But that’s not true. Workers often try to lay eggs, even though only the queen is supposed to lay eggs. If workers lay eggs, other workers run around, eat the eggs, and then punish the workers that laid them. Wherever you find cooperation, you’ll also find punishment. Think of your own body. Each cell has its own self-interest to multiply. Why don’t they go berserk? How do you get cells to cooperate? The answer is, you punish cells that don’t cooperate. As far as we know, no other vertebrate species punishes the way humans do. Humans are by far the most social vertebrate species, and we argue that punishment is why humans are so cooperative.

How do you define altruism? In your work, you speak of reciprocity. What is that?

An act is altruistic if it benefits another at a cost to yourself, where there is no possible mechanism whereby you could gain, even in the long run: a long-term benefit to someone else at a long-term cost to yourself. We call that altruism. By the way, it could be a long-term benefit to a group at a long-term cost to yourself. We want a definition of altruism that isn’t subjective and that also extends to animals. There is altruism in animals. It almost always depends on kin groups — that is, you’re nice to your kids, which serves a biological objective.

There are really two types of reciprocity. Generally, before the work we did, reciprocity meant I help you if you help me. And that, of course, is not altruism. What we study — “strong reciprocity” — is a predisposition to cooperate, even when it’s costly, and a predisposition to punish violators, free riders.

The problem with the term “altruism” is that there are many forms of altruism. For instance, unconditional altruism is where I help others no matter what. I just help. That’s altruism, but it’s not strong reciprocity. Mostly people think altruism is goody-goody or warm and fuzzy. But the biggest part of making society work is the willingness to retaliate, the willingness to hurt people who hurt you. It’s much more important than the predisposition to cooperate, because if you don’t have punishment, you can’t get cooperation. Strong reciprocity combines conditional cooperation with conditional punishment.

So, we believe the heart of altruism is not only the willingness to cooperate and help — empathy and caring for others — but also this negative side of human nature: retaliation or retribution.
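
A toy simulation of the claim that cooperation cannot be sustained without punishment. This is my own sketch, not the authors’ published model: agents play repeated public goods rounds and imitate whoever earns more, and all parameters (group size, fine, fee) are illustrative assumptions.

```python
# Toy simulation: repeated public goods rounds in which agents imitate
# higher earners. Cooperation collapses without punishment and persists
# with it. All parameters below are illustrative assumptions.
import random

def run(punish, rounds=300, n=20, endowment=20, multiplier=1.6,
        fine=12, fee=4, seed=1):
    random.seed(seed)
    coop = [i % 2 == 0 for i in range(n)]  # start with half cooperators
    for _ in range(rounds):
        contribs = [endowment if c else 0 for c in coop]
        share = multiplier * sum(contribs) / n
        pay = [endowment - contribs[i] + share for i in range(n)]
        if punish:
            cooperators = sum(coop)
            defectors = n - cooperators
            for i in range(n):
                if coop[i]:
                    pay[i] -= fee * defectors      # cost of punishing free riders
                else:
                    pay[i] -= fine * cooperators   # fines received from punishers
        # One randomly chosen agent imitates another agent who earned more.
        j, k = random.randrange(n), random.randrange(n)
        if pay[k] > pay[j]:
            coop[j] = coop[k]
    return sum(coop) / n

print("share of cooperators without punishment:", run(punish=False))
print("share of cooperators with punishment:   ", run(punish=True))
```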

Let me give you an example that you would not even think is altruistic normally, but is: road rage.

What exactly do you mean by road rage and how is that altruistic?

If you don’t drive the proper way, some guy honks his horn and you feel humiliated; you’ve done a bad thing and you got caught. But he didn’t do it because he cared about keeping people honest. He honked his horn because he was pissed at you. Subjectively, there’s no altruism there. But honking your horn or yelling at someone for doing a bad thing is an altruistic act. It might have cost you something, not much. But it keeps the rules of the road going. It keeps people honest, so it’s an altruistic act.

You’re upholding the norm of fairness by hurting someone who was unfair. But you didn’t do it because you wanted to uphold a norm for the group. You did it because you were angry at the guy.

What about when people do something spontaneous and show genuine feelings of empathy?

Our argument is that everybody is altruistic. But for most people if the cost gets sufficiently high, they stop being altruistic.

We argue that everyday life has little bits of altruism all over the place. They’re generally not that costly, but they’re extremely important. For instance, when I go on an airplane, everyone is nice to each other, even though they’re never going to see each other again. Why be polite? You can imagine that if you put chimps on an airplane, it would be a total disaster. Why go through these little amenities: “Can I help you with your bag?” “Let me move for you.” “Let me get up so you can get out and go to the bathroom.” Think about it. If you put a bunch of sociopaths on an airplane, it would be a disaster. But through these little amenities of everyday life, we tend to help each other even though it costs us something, if not much. This makes society work.

It may be that you really care about the other guy, and very often that’s the case. Human beings’ notion of empathy is very strong. And that’s what altruism is. It’s wanting to help people at a cost to yourself — but also punish people at a cost to yourself when they’re behaving in an anti-social manner. And then the question is, how can human beings be this way? No other species is like this. And that’s where the biology comes in. You have to show that groups in which you have a strong reciprocator, an altruist, will do better than groups that don’t have altruism.

Is that how game theory fits in?

The main thing game theory does is provide a methodology for experiments. After that, you’ve got all the biology, genetics, sociology. But game theory pervades everything we do. There’s a whole theory on that called “behavioral game theory.”

Tell us about your recent findings on how altruism is transmitted.

We’re basically very hard-nosed mathematical behaviorists. We take the economic model of the rational actor and we put it through the hoops.

Sociologists have a concept called socialization, which means the internalization of norms, and which is completely foreign to every other behavioral science. There’s no such thing in biology or economics or political science or anthropology.

It seems to me that socialization was one of the established universal principles of behavior in academic sociology. What we propose is that human beings have this capacity to be programmed. Humans are the only species that can want things just because they were socialized to want them — want to be fair, want to share, want to help your group, want to be patriotic, want to be honest, want to be trustworthy, want to be cheerful — even when these things are costly to ourselves. If you’re honest as a matter of principle, that’s good for everybody else and it costs you. So being honest is part of strong reciprocity.

Where does this come from?

Within a complex society, the general approach to this is gene-culture co-evolution. In biology, you get genetic information. In sociology or anthropology, you get cultural information. But really, in human society, they go together. Genetic evolution leads to culture. In that culture, given strong reciprocity, you can be rewarded for being nice or for cooperating. So cultural evolution can lead to genetic evolution. Human beings become nicer and more reciprocal and more honest. So, you get this whole dialectic back-and-forth between cultural evolution and genetic evolution and the product is human beings.
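
A deliberately crude sketch of that feedback loop, my own illustration rather than any published gene-culture model: the frequency of a hypothetical “reciprocator” gene and of a cultural norm of cooperation each grow faster when the other is common, so the two ratchet each other up.

```python
# Toy gene-culture coevolution loop (illustrative assumption, not a
# published model): the gene spreads faster when the cultural norm is
# common, and the norm spreads faster when gene carriers enforce it.

def coevolve(gene=0.05, norm=0.05, steps=300, a=0.5, b=0.5):
    """Return final frequencies of the gene and the cultural norm.

    a: how strongly the norm boosts selection for the gene.
    b: how strongly gene carriers boost transmission of the norm.
    """
    for _ in range(steps):
        gene += a * norm * gene * (1 - gene)  # genetic selection, aided by culture
        norm += b * gene * norm * (1 - norm)  # cultural transmission, aided by genes
    return round(gene, 3), round(norm, 3)

print(coevolve())  # both frequencies rise together toward fixation
```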

As a result, you get this very complex society, where if individuals don’t learn to share or learn to be honest, the system punishes them. People who are programmable, then, are more fit than people who are not programmable.

So in that context of programmability, socialization — the internalization of norms — becomes fitness enhancing. If you brush your teeth because your parents told you to, or you turn the other cheek because in the long run that pays off, then you are more fit than your neighbors who are sociopaths.

The interesting thing is, once you get people who are programmable, you can program them to do a lot of things, even things that aren’t in their interest. You can program people to be honest, even when it’s sometimes in their self-interest not to be honest. A lot of people won’t be honest. Through programmability, you can get people to be more altruistic than they would ever be if they were purely self-interested. Take all the suicide bombers we have now: that’s the obvious case. You get people who are programmable, and you can program them to be willing to commit suicide. You could never do that to an animal, not willingly. Our argument is that this is one more kind of collective social mechanism that allows us to cooperate.

People don’t like to see others suffer. After 9/11, you saw the whole city of New York rise up, people helping each other. It’s just spontaneous empathy. Empathy itself is the result of gene-culture coevolution, because, again, people are empathetic: they help in the short run because they feel like it; they feel good helping. But in the long run it helps them, because people help back or some other phenomenon allows empathy to be fitness enhancing. I think that’s more important than what they were taught by their parents. But some of that can come in too.

Chhavi Sachdev is international editor at Science & Theology News.