Sunday, October 18, 2009

How much of a say should the public have in the direction of science (and how much should be left to the experts?)

This is part of a longer answer I wrote to a recent inquiry on how much of a say the public should have in the direction of science (and how much should be left to the experts):

"... The survey data we collected at the University of Wisconsin and at Arizona State (Scheufele & Corley, 2008) show that the public trusts scientists to do a good job on the science behind emerging technologies. But some applications in the area of nanotechnology, for instance, have also raised ethical concerns about human enhancement or the creation of synthetic life that have more to do with how we use emerging technologies than with the science behind them.

The Public:
So who should shape societal debates about the science and its applications? On the one hand we have a chronically underinformed public that shows limited interest in scientific issues (or political issues, for that matter). As a result, people often make decisions or form policy stances about emerging technologies with little information about the science behind them (Scheufele, 2006b). And this is a description, not a criticism. In fact, we all use information shortcuts or heuristics every day when faced with the need to make choices with incomplete information. Should we be worried about the suspicious-looking guy lingering outside our apartment? And what toothpaste should we buy, given virtually unlimited choices in the supermarket? Eventually, we find answers to all of these questions without collecting all available information. We trust certain brands, we rely on previous experience, and we make gut decisions.

Why is that? The answer is simple. We are all cognitive misers or satisficers to varying degrees (Fiske & Taylor, 1991). We use as little information as we think we can get away with or only as much as we think we need to make a decent decision. That is just human nature. And we’re all miserly for different reasons and for different issues. Why don’t most scientists follow Miley Cyrus’s personal life? Probably because they don’t care, and because they see no payoff from learning more about B-list celebrities for either their personal or professional lives. Many citizens, of course, feel the same way about science. Why would they spend time learning about emerging technologies, as long as they feel that they can trust regulatory agencies and universities to produce and manage scientific discoveries responsibly?

But this is exactly the problem that science communicators have grappled with for a long time. The concern is not that audiences know little about specific technologies, but that they know little about science itself. Only one in four (25%) members of the general public understands the concept of a scientific study, and only about two in five can correctly describe a scientific experiment (42%) or the scientific process more broadly (41%) (National Science Board, 2008). And most empirical studies suggest that this won’t change anytime soon. As a result, my colleague Dominique Brossard here at Wisconsin has argued for a long time that a key variable in well-functioning scientific societies is what she calls “deference toward scientific authority” (Brossard & Nisbet, 2007; Brossard, Scheufele, Kim, & Lewenstein, 2009; Lee & Scheufele, 2006), i.e., the ability to negotiate personal value systems and beliefs with a willingness to defer to scientific expertise for factual information about emerging technologies. This has nothing to do with blindly trusting scientists. In fact, our work at Wisconsin has shown that values are a critical component of how people make decisions about science, and justifiably so (Brossard, et al., 2009; Ho, Brossard, & Scheufele, 2008). Concerns about destroying unborn life as part of embryonic stem cell research, for instance, can’t be addressed with more science. They can only be resolved in a comprehensive societal debate that deals with values and scientific facts at the same time.

The Scientists:
This brings us to the second group – scientists – and their role in guiding scientific progress. In short, the input that scientists can provide into societal debates surrounding emerging technologies is critical. In fact, I have argued many times before that scientists have not participated in societal debates as much as they should have (Nisbet & Scheufele, 2007, forthcoming; Scheufele, 2006a, 2007; Scheufele et al., 2009), and that science and society are worse off as a result.

What we need is not just feedback from the most vocal or most opinionated scientists in a given field, but rather a systematic understanding of what the leading experts in that field consider prudent approaches to scientific development. The problem with that approach is the U.S. media system. U.S. journalists tend to cover scientific issues by showing “both sides.” This misguided understanding of objectivity often creates science journalism that pits a vast majority of scientists against a small number of vocal dissenters. The recent (and ongoing) debate about global warming is a good example of that pattern.

So is there a better approach to determining scientific consensus on an issue? The answer is “yes.” Elizabeth Corley in the School of Public Policy at Arizona State and I recently published a series of papers from a systematic survey of leading U.S. scientists in the field of nanotechnology (Corley, Scheufele, & Hu, 2009; Scheufele, et al., 2009; Scheufele et al., 2007). We asked these scientists about their views on public-scientist interactions, about their recommendations for regulations, and about their perceptions of the potential risks and benefits surrounding nanotechnology. The scientists’ insights are invaluable for societal decision making about these new technologies, including their recommendations for regulatory frameworks at the international level and for risk assessments in specific areas (Corley et al., 2009).

But our survey also showed that scientists sometimes rely on information shortcuts and heuristics, just like everyone else. We found that scientists, when asked for policy recommendations about emerging technologies, do rely on their professional judgments about the risks and benefits connected to nanotechnology. But our data also showed that – after controlling for their professional judgments – scientists’ personal ideologies have a significant impact on their support for regulations.

These findings, of course, say less about scientists and their expertise than they do about the lack of conclusive data about risks related to nanotechnology. Policy makers need to realize that when they ask scientists to give them advice about inconclusive findings, they will get both their professional judgment and their personal views.


National Science Board. (2008). Science and Engineering Indicators 2008 (Chapter 7). National Science Foundation. Retrieved January 21, 2008, from
Scheufele, D. A., Brossard, D., Dunwoody, S., Corley, E. A., Guston, D. H., & Peters, H. P. (2009). Are scientists really out of touch? The Scientist. Retrieved from


G.A.Veltri said...

I think one important dimension to consider when discussing public involvement in science policy decisions is risk perception.
To involve the public is to promote a pre-emptive risk containment strategy. Willing citizens are prepared to run higher risks than those who simply have to live with them.

In addition, there are the well-known social and political consequences of scientific and technological change that require democratic deliberation (and discussion).