We are genetically justice seekers, and this defies the “homo economicus” model of human rationality

http://en.wikipedia.org/wiki/Ultimatum_game

Ultimatum game experiments have shown that people are generally willing to sacrifice monetary rewards when offered low allocations, thus behaving inconsistently with simple models of self-interest. Economic experiments have measured how this deviation varies across cultures.
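The game's payoff structure can be sketched in a few lines; the `min_acceptable` threshold below is a hypothetical parameter standing in for the responder's fairness norm, not something measured in the experiments themselves.

```python
# Minimal ultimatum game: the proposer offers a share of the pot; the
# responder accepts (both get paid) or rejects (both get nothing).
def ultimatum(pot, offer, min_acceptable):
    """Return (proposer_payoff, responder_payoff)."""
    if offer >= min_acceptable:
        return pot - offer, offer
    return 0, 0

# A purely self-interested responder accepts any positive offer...
print(ultimatum(100, 1, min_acceptable=1))    # -> (99, 1)
# ...but real responders often reject low offers, e.g. below 30% of the pot,
# leaving both players with nothing.
print(ultimatum(100, 10, min_acceptable=30))  # -> (0, 0)
```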

Explanation

The highly mixed results (along with similar results in the Dictator game) have been taken to be both evidence for and against the so-called “Homo economicus” assumptions of rational, utility-maximizing, individual decisions. Since an individual who rejects a positive offer is choosing to get nothing rather than something, that individual must not be acting solely to maximize his economic gain, unless one incorporates economic applications of social, psychological, and methodological factors (such as the observer effect).[citation needed] Several attempts have been made to explain this behavior. Some suggest that individuals are maximizing their expected utility, but money does not translate directly into expected utility.[6] Perhaps individuals get some psychological benefit from engaging in punishment or receive some psychological harm from accepting a low offer. It could also be the case that the second player, by having the power to reject the offer, uses such power as leverage against the first player, thus motivating him to be fair.[citation needed]

The classical explanation of the ultimatum game as a well-formed experiment approximating general behaviour often leads to the conclusion that the assumed rational behavior is accurate to a degree, but must encompass additional vectors of decision making.[citation needed] However, several competing models suggest ways to bring the cultural preferences of the players within the optimized utility function of the players in such a way as to preserve the utility-maximizing agent as a feature of microeconomics. For example, researchers have found that Mongolian proposers tend to offer even splits despite knowing that very unequal splits are almost always accepted.[7] Similar results from players in other small-scale societies have led some researchers to conclude that “reputation” is seen as more important than any economic reward.[7] Others have proposed that the social status of the responder may be part of the payoff.[8] Another way of integrating the conclusion with utility maximization is some form of inequity aversion model (preference for fairness). Even in anonymous one-shot settings, the outcome suggested by economic theory (minimum money transfer and acceptance) is rejected by over 80% of the players.[citation needed]
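One well-known inequity-aversion formulation is the Fehr–Schmidt utility function, in which a player's utility is their own payoff minus penalties for disadvantageous and advantageous inequality. A sketch for the two-player case, with illustrative (not empirically estimated) parameter values:

```python
def fehr_schmidt(own, other, alpha=0.8, beta=0.3):
    """Fehr-Schmidt inequity-averse utility for a two-player split.
    alpha penalizes disadvantageous inequality (the other player gets
    more), beta penalizes advantageous inequality. The parameter
    values here are illustrative assumptions."""
    return own - alpha * max(other - own, 0) - beta * max(own - other, 0)

# With alpha = 0.8, a responder offered 10 out of 100 has utility
# 10 - 0.8 * 80 = -54, so rejecting (utility 0) maximizes utility,
# while a 40:60 split still yields positive utility and is accepted.
assert fehr_schmidt(10, 90) < 0 < fehr_schmidt(40, 60)
```

This keeps the utility-maximizing agent intact: rejection of a stingy offer is itself the optimal choice once fairness enters the utility function.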

An explanation which was originally quite popular was the “learning” model, in which it was hypothesized that proposers’ offers would decay towards the subgame-perfect Nash equilibrium (almost zero) as they mastered the strategy of the game; this decay tends to be seen in other iterated games.[citation needed] However, this explanation (bounded rationality) is less commonly offered now, in light of subsequent empirical evidence.[9]

It has been hypothesised (e.g. by James Surowiecki) that very unequal allocations are rejected only because the absolute amount of the offer is low.[citation needed] The concept here is that if the amount to be split were ten million dollars, a 90:10 split would probably be accepted rather than spurning a million-dollar offer. Essentially, this explanation says that the absolute amount of the endowment is not significant enough to produce strategically optimal behaviour. However, many experiments have been performed where the amount offered was substantial: studies by Cameron and Hoffman et al. have found that higher stakes cause offers to approach closer to an even split, even in a 100 USD game played in Indonesia, where the average per-capita income for all of 1995 was 670 USD. Rejections are reportedly independent of the stakes at this level, with 30 USD offers being turned down in Indonesia, as in the United States, even though this equates to two weeks’ wages in Indonesia.[10]

Neurological explanations

Generous offers in the ultimatum game (offers exceeding the minimum acceptable offer) are commonly made. Zak, Stanton & Ahmadi (2007)[11] showed that two factors can explain generous offers: empathy and perspective taking.[clarification needed] They varied empathy by infusing participants with intranasal oxytocin or placebo (blinded). They affected perspective-taking by asking participants to make choices as both player 1 and player 2 in the ultimatum game, with later random assignment to one of these. Oxytocin increased generous offers by 80% relative to placebo. Oxytocin did not affect the minimum acceptance threshold or offers in the dictator game (meant to measure altruism). This indicates that emotions drive generosity.

Rejections in the ultimatum game have been shown to be caused by adverse physiologic reactions to stingy offers.[12] In a brain imaging experiment by Sanfey et al., stingy offers (relative to fair and hyperfair offers) differentially activated several brain areas, especially the anterior insular cortex, a region associated with visceral disgust. If Player 1 in the ultimatum game anticipates this response to a stingy offer, they may be more generous.

An increase in rational decisions in the game has been found among experienced Buddhist meditators. fMRI data show that meditators recruit the posterior insular cortex (associated with interoception) during unfair offers and show reduced activity in the anterior insular cortex compared to controls.[13]

People whose serotonin levels have been artificially lowered will reject unfair offers more often than players with normal serotonin levels.[14]

This is true whether the players are on placebo or are infused with a hormone that makes them more generous in the ultimatum game.[15][16]

People who have ventromedial frontal cortex lesions were found to be more likely to reject unfair offers.[17] This was suggested to be due to the abstractness and delay of the reward, rather than an increased emotional response to the unfairness of the offer.[18]

===========================================================

http://en.wikipedia.org/wiki/Trust_game#Trust_game

If individuals were only concerned with their own economic well-being, proposers (acting as dictators) would allocate the entire good to themselves and give nothing to the responder.

===================================================

http://en.wikipedia.org/wiki/Public_goods_game

The group’s total payoff is maximized when everyone contributes all of their tokens to the public pool. However, the Nash equilibrium in this game is simply zero contributions by all; if the experiment were a purely analytical exercise in game theory it would resolve to zero contributions, because any rational agent does best contributing zero, regardless of what anyone else does.[1]
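The payoff structure behind this claim can be sketched as follows; the endowment and multiplier values are illustrative, chosen so that the multiplier exceeds 1 but is less than the number of players.

```python
def payoffs(contributions, endowment=20, multiplier=1.5):
    """Linear public goods game: pooled tokens are multiplied and split
    equally among all n players. Since multiplier/n < 1, each contributed
    token returns less than one token to its contributor. The parameter
    values here are illustrative."""
    n = len(contributions)
    share = multiplier * sum(contributions) / n
    return [endowment - c + share for c in contributions]

# Full contribution maximizes the group total (each player ends with 30)...
full = payoffs([20, 20, 20, 20])
# ...but a lone free-rider earns 42.5 while contributors earn only 22.5,
# so contributing zero is the dominant strategy for a self-interested agent.
mixed = payoffs([0, 20, 20, 20])
```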

In fact, the Nash equilibrium is rarely seen in experiments; people do tend to add something into the pot.

The empirical fact that subjects in most societies contribute anything in the simple public goods game is a challenge for game theory to explain via a motive of total self-interest, although the theory fares better with the ‘punishment’ or ‘iterated’ variants, because some of the motivation to contribute then becomes purely “rational”: players may assume that others will act irrationally and punish them for non-contribution.
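A rough sketch of why punishment changes the calculus, assuming the linear public goods game above and a hypothetical expected-punishment cost:

```python
def net_gain_from_free_riding(contribution=20, multiplier=1.5, n=4,
                              expected_punishment=0.0):
    """Withholding a token saves (1 - multiplier/n) of its value, minus
    any punishment the free-rider expects from the other players.
    All parameter values here are illustrative assumptions."""
    savings = (1 - multiplier / n) * contribution
    return savings - expected_punishment

# Without punishment, free-riding pays; with a sufficiently large
# expected punishment, contributing becomes the self-interested choice.
assert net_gain_from_free_riding() > 0
assert net_gain_from_free_riding(expected_punishment=15) < 0
```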

=============================================================

http://en.wikipedia.org/wiki/Homo_economicus

Homo economicus attempts to maximize utility as a consumer and economic profit as a producer.[1] This theory stands in contrast to the concept of homo reciprocans, which states that human beings are primarily motivated by the desire to be cooperative and to improve their environment.

http://en.wikipedia.org/wiki/Homo_reciprocans

The homo reciprocans model states that human beings are primarily motivated by the desire to be cooperative and to improve their environment.

homo sociologicus

Introduced by the German sociologist Ralf Dahrendorf in 1958. Hirsch et al. say that homo sociologicus is largely a tabula rasa upon which societies and cultures write values and goals; unlike homo economicus, homo sociologicus acts not to pursue selfish interests but to fulfill social roles.

=============================================================

Amir: We have to define “rational agent” differently instead of being surprised by the above findings

“a rational agent is an agent which has preferences, models uncertainty via expected values, and always chooses to perform the action with the optimal expected outcome for itself (considering the happiness gain from a fair setting) from among all feasible actions.”
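A minimal sketch of this amended definition: the agent still maximizes expected utility, but the utility includes a fairness term rather than money alone. The `fairness_weight` and the candidate actions are hypothetical illustrations.

```python
def choose(actions, utility):
    """A rational agent: pick the feasible action with the best
    expected utility for itself."""
    return max(actions, key=utility)

def responder_utility(offer, pot=100, fairness_weight=0.8):
    """Utility of accepting/rejecting an ultimatum-game offer, where
    utility = money minus a penalty for an unfair split. The weight
    is an illustrative assumption."""
    def u(action):
        if action == "reject":
            return 0.0
        # monetary gain minus disutility from the split's unfairness
        return offer - fairness_weight * max(pot - 2 * offer, 0)
    return u

# The same maximizing machinery now predicts the observed behavior:
# stingy offers are rejected, near-even offers are accepted.
assert choose(["accept", "reject"], responder_utility(10)) == "reject"
assert choose(["accept", "reject"], responder_utility(45)) == "accept"
```

Under this definition, rejecting an unfair offer is no longer a puzzle: it is exactly the action with the optimal expected outcome for the agent.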