June 28, 2011

Risk, uncertainty, and value judgements in science policy

Yesterday my colleagues and I at Michigan State University and Kellogg Biological Station held a reading group to discuss Roger Pielke Jr.'s The Honest Broker. For this session we read chapters 4 ("Values"), 5 ("Uncertainty"), and 6 ("How Science Policy Shapes Science in Policy and Politics").

We talked about whether science is a good tool for making decisions. Certainly it can be good for informing decisions, such as when a tornado is coming and you need to know whether you should evacuate. Unfortunately, as we saw in one of my previous posts, scientific assessments of risk and uncertainty sometimes do NOT translate well into action. Pielke agrees with this perspective. He argues that science often just adds smoke and mirrors to debates that are really about core values. So unless the situation under debate is one with low uncertainty and highly shared values (a tornado is coming; we should evacuate), we need more recognition of the values underlying a debate (see: the climate change debate).

Pielke repeatedly refers to two works by Dan Sarewitz, who is one of my professors at Arizona State and regarded by many as a science policy guru. The first is "How science makes environmental controversies worse" (2004). The second is "Science and Environmental Policy: An Excess of Objectivity" (2000). Both are worth a thorough reading: one thing I've discovered in grad school is that I sometimes read the same article months or even a year apart and find revelatory new nuggets of knowledge each time.
The "Excess of Objectivity" book chapter is an insightful commentary on how science can actually impede the political process by focusing on perpetually disputable and uncertain facts while ignoring the underlying value conflicts in highly politicized environmental issues. The "excess of objectivity" refers to the way multiple fields of science can each supply their own legitimate, "objective" facts about an issue; because those bodies of facts are often incompatible, the competing claims to objectivity end up driving controversy and muddying the political waters.

"How science makes environmental controversies worse" makes the same core argument, using a different set of examples, from the 2000 election results to climate change to genetically modified food (another good case study is the debate over nuclear waste: see this editorial). This discussion reminded me of an article I read during my first weeks of grad school, "Value Judgments and Risk Comparisons: The Case of Genetically Engineered Crops" (2003) by Paul Thompson, who is an environmental and agricultural philosopher at MSU.

I wrote up an analysis of it that I think highlights the issues of value, risk, and uncertainty in environmental controversies pretty well: Thompson focuses on the inherent value judgments that scientists make about genetically engineered (GE) crops and environmental risk. He aims to identify the values behind the GE debate, rather than taking a philosophical or scientific position in the debate. He focuses on a relatively narrow aspect of this debate: claims for and against a comparative evaluation of the environmental risk of GE versus traditional (non-GE) crops, which is the standard metric used by scientists and federal agencies to assess the risk of GE crops. Thompson's argument is that risk assessments are inherently based on value-laden judgments; the science itself cannot settle a claim about environmental risks.

He shows that the current regulatory system ironically puts the burden of proof on anti-GE activists, who are "in the position of needing to justify special treatment for this class of plants" (emphasis added, Thompson, 2003, p. 11). This gap charges a largely non-scientific public with demonstrating the scientific credibility of their value system, against the grain of the values held by the scientific community, which of course causes further problems on multiple levels. Thompson identifies several other challenges in the regulation of GE crops under the current framework.

Risk assessments, especially environmental risk assessments, depend on value-based judgments about how much and what types of risk are "acceptable," despite attempts to quantify that risk scientifically. The scientists' and the activists' definitions of risk are essentially incompatible, whether for comparing the risks of GE vs. non-GE crops or even for defining the concept of environmental risk itself. This highlights very clearly that science, rather than aiding the decision-making process, can complicate and add uncertainty to political debates.

On a related note, I'm headed to Boston today to attend the Science and Democracy Network conference! I'm really excited to talk to like-minded scholars about our work, and make some great connections.


  1. Dennis and I had a conversation about risk and uncertainty in decision-making the other day. I attempted to explain why I think it is a flaw that scientific reasoning is commonly seen as the only valid way of justifying legislation. But I also thought about all of the token individuals and families who are singled out to prove a point politically (e.g. the soldier's mom). It was an interesting discussion to have with a modeler, his argument being that you are constantly updating models to make them better reflect reality. However, there is inequity with regard to who controls these instruments, and to alternative ways of knowing the world. How do we make sound decisions based on models that may or may not be correct, while still taking into account high likelihood coupled with personal narratives? In other words, how can we make unquantifiable or uncertain accounts count? I think an issue lies with responsibility, too. It seems that humans have to be unquestionably linked to climate change before governments are willing to make deals. The trouble is, regardless of the major cause, we still need to take measures to adapt, now! Speaking of which, today on Democracy Now! there was an author speaking about his new book on the exacerbation of conflict in the face of climate change (among other related issues).

    Wish I had the link to the interview.

    So much more to write. Have fun at the conference!


  2. Thanks for the comment, Jen! There is definitely something to be said about any type of modeling and the built-in assumptions that seem to be taken for granted. The main question I'm left with, as related to the earthquake thing as well, is: who defines risk, and who determines how much risk is acceptable?

    Also, I think this means we need to take Dan's class on uncertainty and decision-making :P

