Picture under CC-Licence 2.0, BY-NC-ND by European Parliament.

I thought I’d share a few of the ideas I’ve been thinking about recently, ahead of a workshop at the London School of Economics and a conference at the University of Tilburg, where I will be speaking on these topics.

These ideas all concern the role of experts, especially experts in economics, in evaluating climate change. By expertise, I mean something very specific: knowing the consequences of particular judgments, in both practical and theoretical ways. For instance, the kind of economic expertise I am interested in could be shown by understanding the practical implications of introducing a particular form of climate regulation, or by knowing the theoretical consequences of changing the value of an economic parameter in a model. In practice, we usually determine such expertise through a proxy, such as looking for highly cited, peer-reviewed papers. This is only a proxy: not everyone who has such expertise writes such papers, and some people who do write them may have less expertise than others who do not manage to.

In this post, I’ll introduce two major worries about explicitly involving the judgments or preferences of economic experts in climate policy evaluation. The first concerns the situations in which such involvement is appropriate: does it threaten democratic values? I will point out that there are unrecognized domains in which it is always appropriate, namely those where the moral theory is fixed. The second worry is that appealing to economic experts raises psychological and sociological problems. I will argue that while these need to be acknowledged, they are less likely to be a problem when experts are considered in the manner I have described.

Suppose you are looking for ethical expertise or guidance. There are many situations in which this might be appropriate. For instance, you could be deciding whether to undergo a particular risky medical treatment, or whether to give it to a relative who for some reason cannot decide herself. There are different kinds of information that an ideal expert would have. The ideal expert should have relevant normative understanding, such as knowledge of the relevant legal norms or of your value system (whether that includes religion or not). The ideal expert should also have non-normative information, such as information about your prognosis, about the potential contraindications of that treatment, and about the success rates of various programs.

In certain circumstances, when the moral theory is fixed, the first kind of expertise (normative understanding) is unhelpful. In those contexts, the ideal expert is the one who knows the most relevant factual information. I claim that climate economics is such a context, since the normative assumptions are built into the economic framework. The moral theory is fixed in the sense that asking particular questions, especially questions about discounting, already assumes substantive moral judgments, such as a consequentialist framework. In other words, if we accept economic evaluations of climate programs, many moral questions are already answered, and the relevant information concerns implementation and what can occur. This is what I will argue in Tilburg.

The other worry is that experts are subject to the same kinds of biases and heuristics as everyone else. For instance, there might be disciplinary anchoring effects, such that members of a given discipline tend to give similar parameter assignments (particular values to mathematical variables) because they are used to those given by their cohort. Luckily, this is an empirical assertion which can be, and is in the process of being, tested. Robert Pindyck has been comparing estimates of particularly important variables across disciplines to see whether they vary, although the results are not out yet. [H/t to Paul Kelleher for the pointer to this talk!]

Another potentially relevant worry is that economists, when asked to forecast, may be subject to overconfidence (Angner 2006; Martini and Boumans 2014). This is a limitation that I think should be recognized, but I argue that even if there is overconfidence in these contexts, experts are unlikely to be worse than laypeople, and there may be ways in which forecasting can be improved (Tetlock and Gardner 2015). More specifically, there is little reason to think that having the kind of expertise I defined above makes overconfidence more likely than lacking it, although this too is an empirical claim that should be subjected to scrutiny.

I am very excited to discuss these ideas with my colleagues in September in the Netherlands and England. I think that the role of experts in climate policy evaluation is an area ripe for discussion and debate. I am still working these ideas out, so I am more than interested in any comments you could leave below.


Angner, E. (2006). Economists as experts: Overconfidence in theory and practice. Journal of Economic Methodology, 13(1), 1–24. Link: http://www.tandfonline.com/doi/abs/10.1080/13501780600566271

Martini, C., & Boumans, M. (Eds.). (2014). Experts and Consensus in Social Science. Springer: Switzerland. Link: https://www.springer.com/us/book/9783319085500

Tetlock, P. E., & Gardner, D. (2015). Superforecasting: The Art and Science of Prediction. Crown: New York.