Risk-taker or risk-averse? It’s all about context
There are three common ways of measuring individual risk attitudes: the choice list procedure, the ranking procedure and the allocation procedure. If individual risk attitudes are to help explain and predict other economic decisions (such as the choice of investments, insurance policies or pension schemes), we should expect different procedures to lead, at least on average, to the same results. For example, the same individual should be classified as risk-averse, risk-neutral or risk-taking irrespective of the procedure used.
Several studies have compared elicited risk attitudes across different procedures, with mixed results. Dave et al. (2010) and Harrison and Rutstrom (2008) compare the choice list procedure and a method closely related to the ranking procedure, and generally find that standard expected utility theory with constant relative risk aversion explains the data well in both procedures. In contrast, Deck et al. (2008) observe significant differences in the obtained coefficients of risk aversion between the choice list procedure and a method closely related to the ranking procedure, and account for these inconsistencies using participants’ personality traits.
Our recent study (Loomes and Pogrebna 2014) extends the range of comparisons between the choice list procedure and the ranking procedure, and allows further comparisons with the allocation procedure. We conducted an experiment designed to provide data both about the amount of within-procedure variability and about the extent of consistency or inconsistency between the three procedures under examination. Choi et al. (2007), Andreoni and Sprenger (2012a, 2012b), Charness and Gneezy (2012) and Hey and Pace (2011) have advocated the use of an allocation procedure for measuring individual attitudes towards risk and uncertainty. However, with the exception of Cheung (2013), we are not aware of any systematic comparison between the allocation procedure and other methods of eliciting risk attitudes.
Three procedures for measuring risk attitudes
We consider three popular procedures in the research literature:
- The choice list (sometimes called the multiple price list) procedure (e.g. Cohen et al. 1987, Tversky and Kahneman 1992, Holt and Laury 2002) presents a table of binary choices designed so that as a respondent works through the table she can be expected to switch at some point from one ‘side’ to the other. When the choices are between risky alternatives, the switching point is assumed to be indicative of the individual’s risk attitude (a worked sketch of this inference appears after the list).
- The ranking procedure (e.g. Binswanger 1980 and 1981, Eckel and Grossman 2002) presents a set of options and asks the respondent to identify which option she ranks top. When applied to a set of risky prospects that have different combinations of spread and return, the idea is to identify the individual’s risk attitude as reflected in her most-preferred balance between mean and variance. We extended this procedure to obtain a full ranking over the whole set of options.
- The allocation procedure (e.g. Loomes 1991) provides the respondent with a budget and allows her to distribute it between different state-contingent claims. When applied to risk, the chosen allocation – in conjunction with information about the rate of exchange between claims – should allow the individual’s risk attitude to be inferred.
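To make the first of these procedures concrete, the sketch below shows how a choice-list switching point can be translated into a range of risk-aversion estimates. It is a minimal illustration, not code from our study: it assumes the payoffs of the Holt and Laury (2002) design and a constant relative risk aversion (CRRA) utility function, and the function names are purely illustrative.

```python
# Illustrative sketch only: inferring a CRRA risk-aversion interval from the row
# at which a respondent switches from the 'safe' to the 'risky' lottery in a
# Holt-Laury (2002) style choice list. Payoffs and utility form are assumptions.
import math
from scipy.optimize import brentq

A_HIGH, A_LOW = 2.00, 1.60   # 'safe' Option A payoffs
B_HIGH, B_LOW = 3.85, 0.10   # 'risky' Option B payoffs

def crra(x, r):
    """Constant relative risk aversion utility; r = 1 corresponds to log utility."""
    return math.log(x) if abs(r - 1.0) < 1e-9 else x ** (1 - r) / (1 - r)

def eu_gap(r, p):
    """Expected utility of A minus that of B when the high payoff has probability p."""
    eu_a = p * crra(A_HIGH, r) + (1 - p) * crra(A_LOW, r)
    eu_b = p * crra(B_HIGH, r) + (1 - p) * crra(B_LOW, r)
    return eu_a - eu_b

def indifference_r(p):
    """CRRA coefficient at which a respondent is indifferent between A and B at probability p."""
    return brentq(lambda r: eu_gap(r, p), -5.0, 5.0)

# Example: a respondent chooses A in rows 1-6 (p = 0.1, ..., 0.6) and B from row 7 onwards.
# Preferring A at p = 0.6 but B at p = 0.7 brackets the implied CRRA coefficient.
lower, upper = indifference_r(0.6), indifference_r(0.7)
print(f"Implied CRRA interval: [{lower:.2f}, {upper:.2f}]")   # roughly [0.41, 0.68]
```

The ranking and allocation procedures rest on the same kind of inference in reverse: each observed response (a most-preferred gamble, or a chosen split of the budget between state-contingent claims) is mapped back to the set of risk-aversion coefficients that would rationalise it.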
Are people consistent within the same procedure?
Except in cases where individuals follow an available rule of thumb, most individuals’ responses to different questions within a particular procedure exhibit a degree of variability which appears to increase as the structure and/or parameters of the questions become more dissimilar. Thus it may be unsafe to expect that just one or two questions of any kind can provide a reliable measure at the individual level.
Even when we hold constant the parameters and the type of task, significant differences in patterns of response can still be induced by something as seemingly innocuous as the way in which a task table is presented. This is consistent with decision-making having a potentially influential procedural component.
Are people consistent between different procedures?
The overall picture is that most individuals exhibit a good deal of variability between different procedures intended to elicit their risk attitudes. There is some rank correlation between risk attitudes elicited by different questions, but the imprecision of most people’s preferences may make them susceptible to considerable procedural effects.
The fact that some individuals display high degrees of consistency within a particular type of task does not necessarily mean either that they have highly articulated underlying preferences or that the task is particularly good at detecting preferences that will transfer to other contexts. In fact, the opposite might be the case: people who are quite uncertain about their preferences may find it appealing to use a simple heuristic that ‘solves’ the problem for them in that particular procedure. However, such a heuristic may have little or no predictive power in other tasks where it is not so readily available.
Does the typical individual display the same risk attitude consistently?
No, not typically.
So, how should we react to these findings? In the short run, one recommendation is that researchers who wish to take account of, or adjust for, risk attitude in their studies should take care to pick an elicitation procedure as similar as possible to the type of decision they are studying. Ideally, they should use several different questions and/or at least two different procedures in order to check the sensitivity of the risk attitude parameter estimates they generate.
In the longer run, the challenge is to engage with the inherently stochastic nature of human decision-making and develop models of the processes that produce people’s responses. Deterministic models of behaviour which do not allow for imprecision in preferences may be analytically more tractable, but they are not realistic – and adding some more or less arbitrary random error term to a deterministic core will not make them so.
If the variability in human judgement is a reflection of decision-making as a cognitive process, we need to try to gain a better understanding of how contextual or procedural factors interact with that process. Wishing such influences away and assuming that decision processes are reducible to one-size-fits-all sets of axioms has not produced, and will not produce, a descriptively adequate account of human behaviour under risk and uncertainty.
Published in collaboration with Vox.
Authors: Graham Loomes, professor of behavioural science at Warwick Business School, University of Warwick and Ganna Pogrebna, University of Warwick
Image: A woman plays on the edge of a cliff on a rainy winter afternoon at Bronte beach in Sydney August 8, 2013. REUTERS/Daniel Munoz