Attention is thought to play a key role in the computation of stimulus values at the time of choice, which suggests that attention manipulations could be used to improve decision-making in domains where self-control lapses are pervasive. We used an fMRI food choice task with non-dieting human subjects to investigate whether exogenous cues that direct attention to the healthiness of foods could improve dietary choices. Behaviorally, we found that subjects made healthier choices in the presence of health cues. In parallel, stimulus value signals in ventromedial prefrontal cortex were more responsive to the healthiness of foods in the presence of health cues, and this effect was modulated by activity in regions of dorsolateral prefrontal cortex. These findings suggest that the neural mechanisms used in successful self-control can be activated by exogenous attention cues, and provide insights into the processes through which behavioral therapies and public policies could facilitate self-control.
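One way to make the reported attention effect concrete is to think of stimulus value as a weighted sum of a food's taste and health attributes, with the exogenous health cue shifting weight toward healthiness. The Python sketch below is purely illustrative; the attribute values, weights, and function name are assumptions, not quantities reported in the study.

```python
# Hypothetical attribute-weighting sketch: a health cue increases the weight
# placed on a food's healthiness when its stimulus value is computed.
def stimulus_value(taste, health, health_cue=False):
    w_taste = 1.0
    w_health = 0.8 if health_cue else 0.2  # illustrative weights
    return w_taste * taste + w_health * health

# A tasty but unhealthy food (taste = +2, health = -2):
print(stimulus_value(2, -2, health_cue=False))  # 1.6 -> more likely chosen
print(stimulus_value(2, -2, health_cue=True))   # 0.4 -> less likely chosen
```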
Learning occurs when an outcome deviates from expectation (prediction error). According to formal learning theory, the defining paradigm demonstrating the role of prediction errors in learning is the blocking test. Here, a novel stimulus is blocked from learning when it is associated with a fully predicted outcome, presumably because the occurrence of the outcome fails to produce a prediction error. We investigated the role of prediction errors in human reward-directed learning using a blocking paradigm and measured brain activation with functional magnetic resonance imaging. Participants showed blocking of behavioral learning with juice rewards as predicted by learning theory. The medial orbitofrontal cortex and the ventral putamen showed significantly lower responses to blocked, compared with nonblocked, reward-predicting stimuli. In reward-predicting control situations, deactivation in orbitofrontal cortex and ventral putamen occurred at the time of unpredicted reward omissions. Responses in discrete parts of orbitofrontal cortex correlated with the degree of behavioral learning during, and after, the learning phase. These data suggest that learning in primary reward structures in the human brain correlates with prediction errors in a manner that complies with principles of formal learning theory.
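The blocking effect follows directly from error-driven learning rules such as Rescorla-Wagner, in which all cues present on a trial share a single prediction error. The sketch below (cue labels, learning rate, and trial counts are illustrative assumptions, not the study's design parameters) shows that a cue introduced alongside an already-predictive cue acquires almost no associative strength, whereas a control cue does.

```python
def rescorla_wagner(trials, alpha=0.2, lam=1.0):
    """Error-driven updating of associative strengths V.

    trials: list of (cues, rewarded) pairs, where cues is a set of cue
    labels; the prediction error is the reward minus the summed
    prediction of all cues present on the trial.
    """
    V = {}
    for cues, rewarded in trials:
        prediction = sum(V.get(c, 0.0) for c in cues)
        delta = (lam if rewarded else 0.0) - prediction  # prediction error
        for c in cues:
            V[c] = V.get(c, 0.0) + alpha * delta
    return V

# Pretraining: cue A alone predicts reward. Compound phase: AX is rewarded
# (X should be blocked), while control compound BY is rewarded without
# pretraining (Y should acquire value).
trials = [({"A"}, True)] * 20 + [({"A", "X"}, True), ({"B", "Y"}, True)] * 20
V = rescorla_wagner(trials)
print(f"V(X) = {V['X']:.2f} (blocked), V(Y) = {V['Y']:.2f} (not blocked)")
```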
When deciding between different options, individuals are guided by the expected (mean) value of the different outcomes and by the associated degrees of uncertainty. We used functional magnetic resonance imaging to identify brain activations coding the key decision parameters of expected value (magnitude and probability) separately from uncertainty (statistical variance) of monetary rewards. Participants discriminated behaviorally between stimuli associated with different expected values and uncertainty. Stimuli associated with higher expected values elicited monotonically increasing activations in distinct regions of the striatum, irrespective of different combinations of magnitude and probability. Stimuli associated with higher uncertainty (variance) elicited increasing activations in the lateral orbitofrontal cortex. Uncertainty-related activations covaried with individual risk aversion in lateral orbitofrontal regions and with risk seeking in more medial areas. Furthermore, activations in expected value-coding regions in prefrontal cortex covaried differentially with uncertainty depending on the risk attitudes of individual participants, suggesting that separate prefrontal regions are involved in risk aversion and risk seeking. These data demonstrate distinct coding of the two basic and crucial decision parameters, expected value and uncertainty, in key reward structures.
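For binary monetary stimuli of the kind used in such designs, expected value and variance can be computed directly, and the key point is that the same expected value can arise from magnitude-probability combinations with very different variance. The numbers below are illustrative, not the stimuli used in the experiment.

```python
# Binary lottery: magnitude m with probability p, otherwise 0.
def expected_value(p, m):
    return p * m

def variance(p, m):
    ev = expected_value(p, m)
    return p * (m - ev) ** 2 + (1 - p) * (0 - ev) ** 2  # = p * (1 - p) * m**2

# Same expected value (10), very different uncertainty:
for p, m in [(1.0, 10.0), (0.5, 20.0), (0.25, 40.0)]:
    print(f"p = {p:.2f}, m = {m:4.1f}: EV = {expected_value(p, m):.1f}, "
          f"Var = {variance(p, m):.1f}")
```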
A basic tenet of microeconomics holds that the subjective value of financial gains decreases with increasing assets of individuals ("decreasing marginal utility"). Using concepts from learning theory and microeconomics, we assessed the capacity of financial rewards to elicit behavioral and neuronal changes during reward-predictive learning in participants with different financial backgrounds. Behavioral learning speed during both acquisition and extinction correlated negatively with the assets of the participants, irrespective of education and age. Correspondingly, response changes in midbrain and striatum measured with functional magnetic resonance imaging were slower during both acquisition and extinction with increasing assets and income of the participants. By contrast, asymptotic magnitudes of behavioral and neuronal responses after learning were unrelated to personal finances. The inverse relationship of behavioral and neuronal learning speed with personal finances is compatible with the general concept of decreasing marginal utility with increasing wealth.
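Decreasing marginal utility can be illustrated with any concave utility function; the logarithmic form below is a standard textbook choice, not the model fitted in this study, and the wealth levels are hypothetical.

```python
import math

# With concave utility u(w) = log(w), the same monetary gain adds less
# utility the larger the existing assets are.
def utility_gain(wealth, gain=100.0):
    return math.log(wealth + gain) - math.log(wealth)

for wealth in (1_000, 10_000, 100_000):
    print(f"assets = {wealth:>7}: utility of an extra 100 = {utility_gain(wealth):.4f}")
```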
A negative outcome can have motivational and emotional consequences on its own (absolute loss) or in comparison to alternative, better outcomes (relative loss). The consequences of incurring a loss are moderated by personality factors such as neuroticism and introversion. However, the neuronal basis of this moderation is unknown. Here we investigated the neuronal basis of loss processing and personality with functional magnetic resonance imaging in a choice task. We separated absolute and relative financial loss by sequentially revealing the chosen and unchosen outcomes. With increasing neuroticism, activity in the left lateral orbitofrontal cortex (OFC) preferentially reflected relative rather than absolute losses. Conversely, with increasing introversion, activity in the right lateral OFC preferentially reflected absolute rather than relative losses. These results suggest that personality affects loss-related processing through the lateral OFC and point to a dissociation of personality dimension and loss type at the neuronal level.
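In a sequential-reveal design, the two loss types correspond to distinct quantities derived from the chosen and unchosen outcomes. The toy outcomes below are hypothetical and only illustrate how the absolute and relative components come apart.

```python
# Absolute loss: the chosen outcome itself is negative.
# Relative loss: the chosen outcome is worse than the unchosen alternative.
def loss_components(chosen, unchosen):
    absolute_loss = min(chosen, 0.0)
    relative_loss = min(chosen - unchosen, 0.0)
    return absolute_loss, relative_loss

# Losing 5 while the alternative would have won 10: both components present.
print(loss_components(-5.0, 10.0))  # (-5.0, -15.0)
# Winning 5 while the alternative would have won 20: relative loss only.
print(loss_components(5.0, 20.0))   # (0.0, -15.0)
```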
Reward probability crucially determines the value of outcomes. A basic phenomenon, defying explanation by traditional decision theories, is that people often overweigh small and underweigh large probabilities in choices under uncertainty. However, the neuronal basis of such reward probability distortions and their position in the decision process are largely unknown. We assessed individual probability distortions with behavioral pleasantness ratings and brain imaging in the absence of choice. Dorsolateral frontal cortex regions showed experience-dependent overweighting of small, and underweighting of large, probabilities, whereas ventral frontal regions showed the opposite pattern. These results demonstrate distorted neuronal coding of reward probabilities in the absence of choice, stress the importance of experience with probabilistic outcomes, and contrast with linear probability coding in the striatum. Input of the distorted probability estimates to decision-making mechanisms is likely to contribute to well-known inconsistencies in preferences formalized in theories of behavioral economics.
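The over- and underweighting pattern is commonly captured with a nonlinear probability weighting function; the one-parameter Prelec form below is one standard choice, and the curvature value is illustrative rather than estimated from these data.

```python
import math

# Prelec (1998) one-parameter weighting function: w(p) = exp(-(-ln p)**gamma).
# With gamma < 1, small probabilities are overweighted and large ones
# underweighted.
def prelec_weight(p, gamma=0.6):
    return math.exp(-((-math.log(p)) ** gamma))

for p in (0.01, 0.25, 0.50, 0.75, 0.99):
    print(f"p = {p:.2f} -> w(p) = {prelec_weight(p):.2f}")
```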
People experience relief whenever outcomes are better than they would have been, had an alternative course of action been chosen. Here we investigated the neuronal basis of relief with functional magnetic resonance imaging in a choice task in which the outcome of the chosen option and that of the unchosen option were revealed sequentially. We found parametric activation increases in anterior ventrolateral prefrontal cortex with increasing relief (chosen outcomes better than unchosen outcomes). Conversely, anterior ventrolateral prefrontal activation was unrelated to the opposite of relief, increasing regret (chosen outcomes worse than unchosen outcomes). Furthermore, the anterior ventrolateral prefrontal activation was unrelated to primary gains and increased with relief irrespective of whether the chosen outcome was a loss or a gain. These results suggest that the anterior ventrolateral prefrontal cortex encodes a higher-order reward signal that lies at the core of current theories of emotion.
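Relief and regret can be read off as the positive and negative parts of the same counterfactual comparison between the chosen and unchosen outcomes; the outcome values below are hypothetical illustrations.

```python
# Counterfactual comparison: chosen outcome minus unchosen outcome.
def relief_and_regret(chosen, unchosen):
    difference = chosen - unchosen
    relief = max(difference, 0.0)   # chosen outcome better than the alternative
    regret = max(-difference, 0.0)  # chosen outcome worse than the alternative
    return relief, regret

# Relief can occur even when the chosen outcome is itself a loss:
print(relief_and_regret(-2.0, -10.0))  # relief = 8.0, regret = 0.0
print(relief_and_regret(5.0, 20.0))    # relief = 0.0, regret = 15.0
```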
What decisions should we make? Moral values, rules, and virtues provide standards for morally acceptable decisions, without prescribing how we should reach them. However, moral theories do assume that we are, at least in principle, capable of making the right decisions. Consequently, an empirical investigation of the methods and resources we use for making moral decisions becomes relevant. We consider theoretical parallels between economic decision theory and moral utilitarianism and suggest that moral decision making may tap into mechanisms and processes that originally evolved for nonmoral decision making. For example, the computation of reward value occurs through the combination of probability and magnitude; a similar computation might also be used for determining utilitarian moral value. Both nonmoral and moral decisions may resort to intuitions and heuristics. Learning mechanisms implicated in the assignment of reward value to stimuli, actions, and outcomes may also enable us to determine moral value and assign it to stimuli, actions, and outcomes. In conclusion, we suggest that moral capabilities can employ and benefit from a variety of nonmoral decision-making and learning mechanisms.
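The suggested parallel between reward value and utilitarian moral value rests on the same probability-times-magnitude computation; the scenario and numbers below are entirely hypothetical and serve only to illustrate that parallel.

```python
# Expected-value computation applied to a hypothetical moral choice:
# utilitarian value = probability of success x magnitude of the good outcome.
options = {
    "risky rescue": {"p_success": 0.4, "lives_saved": 10},
    "safe rescue":  {"p_success": 0.9, "lives_saved": 3},
}
for name, option in options.items():
    value = option["p_success"] * option["lives_saved"]
    print(f"{name}: expected lives saved = {value:.1f}")
```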