Thursday, October 22, 2009

German nano news wave fed by pseudo-events


The news wave following yesterday's consumer warning by Germany's Umweltbundesamt is in full swing. Adding momentum is a series of mediated events and pseudo-events following the original UBA announcement, including a statement from Annette Schavan, federal minister of education and research, a rebuttal by industry, and speculation among bloggers that the new swine flu vaccine may contain untested nanoparticles.

Wednesday, October 21, 2009

Germany's Umweltbundesamt will issue nano consumer warning



Germany's Umweltbundesamt (UBA) [Federal Environment Agency] will release a new study today advising consumers to avoid products containing nanoparticles as long as their effects on the environment and human health remain largely unknown. The federal agency is also calling for labeling and reporting requirements for products containing nanomaterials. This would affect the more than 800 German companies that use the new technology in their products.

See here for the full story from news magazine Der Stern, based on an initial report in the Sueddeutsche Zeitung.

The wave of coverage surrounding the UBA report also drew renewed attention to the August 2009 story linking the deaths of Chinese factory workers to exposure to high doses of nanoparticles.

Sunday, October 18, 2009

How much of a say should the public have in the direction of science (and how much should be left to the experts?)

This is part of a longer answer I wrote to a recent inquiry by scienceandreligiontoday.com on how much of a say the public should have in the direction of science (and how much should be left to the experts):


"... Many of the survey data we collected at the University of Wisconsin and at Arizona State (Scheufele & Corley, 2008) show that the public trusts scientists to do a good job on the science behind emerging technologies. But some applications in the area of nanotechnology, for instance, have also raised ethical concerns about human enhancement or the creation of synthetic life that have more to do with how we use emerging technologies than the science behind them.

The Public:
So who should shape societal debates about the science and its applications? On the one hand, we have a chronically underinformed public that shows limited interest in scientific issues (or political issues, for that matter). As a result, citizens often make decisions or form policy stances about emerging technologies with little information about the science behind them (Scheufele, 2006b). And this is a description, not a criticism. In fact, we all use information shortcuts or heuristics every day when faced with the need to make choices based on incomplete information. Should we be worried about the suspicious-looking guy lingering outside our apartment? And what toothpaste should we buy, given virtually unlimited choices in the supermarket? Eventually, we find answers to all of these questions without collecting all available information. We trust certain brands, we rely on previous experience, and we make gut decisions.

Why is that? The answer is simple. We are all cognitive misers or satisficers to varying degrees (Fiske & Taylor, 1991). We use as little information as we think we can get away with or only as much as we think we need to make a decent decision. That is just human nature. And we’re all miserly for different reasons and for different issues. Why don’t most scientists follow Miley Cyrus’s personal life? Probably because they don’t care, and because they see no payoff from learning more about B-list celebrities for either their personal or professional lives. Many citizens, of course, feel the same way about science. Why would they spend time learning about emerging technologies, as long as they feel that they can trust regulatory agencies and universities to produce and manage scientific discoveries responsibly?

But this is exactly the problem that science communicators have grappled with for a long time. The real concern is not that audiences know little about specific technologies, but that they know little about science itself. Only one in four (25%) members of the general public understands the concept of a scientific study, and only about two in five can correctly describe a scientific experiment (42%) or the scientific process more broadly (41%) (National Science Board, 2008). And most empirical studies suggest that this won’t change anytime soon. As a result, my colleague Dominique Brossard here at Wisconsin has argued for a long time that a key variable in well-functioning scientific societies is what she calls “deference toward scientific authority” (Brossard & Nisbet, 2007; Brossard, Scheufele, Kim, & Lewenstein, 2009; Lee & Scheufele, 2006), i.e., the ability to reconcile personal value systems and beliefs with a willingness to defer to scientific expertise for factual information about emerging technologies. And this has nothing to do with blindly trusting scientists. In fact, our work at Wisconsin has shown that values are a critical component of how people make decisions about science, and justifiably so (Brossard et al., 2009; Ho, Brossard, & Scheufele, 2008). Concerns about destroying unborn life as part of embryonic stem cell research, for instance, can’t be addressed with more science. They can only be resolved in a comprehensive societal debate that deals with values and scientific facts at the same time.

Scientists:
This brings us to the second group – scientists – and their role in guiding scientific progress. In short, the input scientists can provide to societal debates surrounding emerging technologies is critical. In fact, I have argued many times before that scientists have not participated in societal debates as much as they should have (Nisbet & Scheufele, 2007, forthcoming; Scheufele, 2006a, 2007; Scheufele et al., 2009), and that science and society are worse off as a result.

And what we need is not just feedback from the most vocal or most opinionated scientists in a given field, but rather a systematic understanding of what its leading experts think are prudent approaches to scientific development. The obstacle to gaining that understanding is the U.S. media system. U.S. journalists tend to cover scientific issues by showing “both sides.” This misguided understanding of objectivity often produces science journalism that pits a vast majority of scientists against a small number of vocal dissenters. The recent (and ongoing) debate about global warming is a good example of that pattern.

So is there a better approach to determining scientific consensus on an issue? And the answer is “yes.” Elizabeth Corley in the School of Public Policy at Arizona State and I recently published a series of papers from a systematic survey of leading U.S. scientists in the field of nanotechnology (Corley, Scheufele, & Hu, 2009; Scheufele, et al., 2009; Scheufele et al., 2007). We asked these scientists about their views on public-scientist interactions, about their recommendations for regulations, and about their perceptions of the potential risks and benefits surrounding nanotechnology. And the scientists’ insights are invaluable for societal decision making about these new technologies, including their recommendations for regulatory frameworks at the international level and for risk assessments in specific areas (Corley et al., 2009).

But our survey also showed that scientists sometimes rely on information shortcuts and heuristics, just like everyone else. We found that scientists, when asked for policy recommendations about emerging technologies, do rely on their professional judgments about the risks and benefits connected to nanotechnology. But our data also showed that – after controlling for those professional judgments – scientists’ personal ideologies have a significant impact on their support for regulations.

These findings, of course, say less about scientists and their expertise than they do about the lack of conclusive data about risks related to nanotechnology. Policy makers need to realize that when they ask scientists to give them advice about inconclusive findings, they will get both their professional judgment and their personal views.

References:

National Science Board. (2008). Science and Engineering Indicators 2008 (Chapter 7). National Science Foundation. Retrieved January 21, 2008, from http://www.nsf.gov/statistics/seind08/.
Scheufele, D. A., Brossard, D., Dunwoody, S., Corley, E. A., Guston, D. H., & Peters, H. P. (2009). Are scientists really out of touch? The Scientist. Retrieved from http://www.the-scientist.com/news/display/55875/.


Sunday, October 11, 2009

Surveys about science: A primer

Here are a few excerpts from an entry on "Surveys" I just wrote for Susanna Priest's forthcoming Encyclopedia of Science and Technology Communication.

Population surveys are one of the most important tools for tapping how much citizens know about science and technology, how they perceive potential risks and benefits, and what their attitudes are toward emerging technologies or research on particular applications.

Sample surveys are defined as systematic studies of a geographically dispersed population that interview only a sample of its members in an attempt to generalize to the full population. Two terms in this definition are particularly important: “systematic” and “generalizable.”

[...]

The idea of systematically studying a population is the first main goal of sample surveys. Surveys therefore typically rely on a standardized questionnaire in order to gather reliable and valid information from a wide variety of respondents. Reliability, in this context, refers to the idea that the same instrument – applied to comparable samples – will produce consistent results. But reliability is not enough. It is very possible, for example, that a questionnaire consistently measures the wrong construct. Validity therefore adds a second quality criterion, and refers to the idea that questionnaires need to provide not just consistent but also unbiased and accurate measurements of people’s behaviors, attitudes, etc.
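
To make the reliability idea concrete, here is a minimal sketch of one common reliability estimate, Cronbach's alpha for a multi-item scale. The encyclopedia entry contains no code; the function, data, and scale below are hypothetical illustrations, not part of the entry.

```python
# A minimal sketch: Cronbach's alpha as an internal-consistency reliability
# estimate for a multi-item attitude scale. Data are hypothetical.
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Reliability estimate for a respondents-by-items matrix."""
    k = items.shape[1]                          # number of items in the scale
    item_vars = items.var(axis=0, ddof=1)       # variance of each single item
    total_var = items.sum(axis=1).var(ddof=1)   # variance of the summed scale
    return k / (k - 1) * (1 - item_vars.sum() / total_var)

# Hypothetical data: 5 respondents answering a 4-item battery on a 1-5 scale.
responses = np.array([
    [4, 5, 4, 4],
    [2, 2, 3, 2],
    [5, 4, 4, 5],
    [1, 2, 1, 2],
    [3, 3, 4, 3],
])
print(f"Cronbach's alpha = {cronbach_alpha(responses):.2f}")  # ~0.95 here
```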

Reliability and validity are tied to a number of factors in the survey process. But two aspects are particularly important when constructing a questionnaire: the overall structure of the questionnaire and the wording of specific questions.

When structuring a survey questionnaire, the first concern is length. If a survey takes too much time to complete, it will likely produce significant rates of non-completion. Unfortunately, the respondents who tend to drop out of lengthy surveys are not a random subset of the population. Rather, they tend to be – among other characteristics – younger, more mobile, and employed full-time. As a result, excessively long survey instruments often produce samples that are plagued by systematic non-response among particular groups in the population, and are therefore limited in terms of their generalizability (see below).

A second concern with respect to questionnaire construction is the order in which questions appear on the questionnaire. Well-constructed questionnaires typically ask easy-to-answer questions first and save sensitive or embarrassing questions for later. One of the most common pitfalls in survey instruments is priming effects, i.e., the notion that some questions can make certain considerations (for instance, risks or benefits of a specific technology) more salient in a respondent’s mind and therefore influence how he or she answers subsequent questions (for an overview, see Zaller & Feldman, 1992).
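
One way researchers check for such order effects is a split-ballot design, randomizing which question block respondents see first. The following Python sketch simulates that logic; the data and the assumed one-point priming penalty are entirely hypothetical and serve only to illustrate the design.

```python
# A split-ballot sketch: randomly assign each respondent to see either the
# risk battery or the benefit battery first, then compare answers to the
# shared support question. All data are simulated, purely for illustration.
import random
from statistics import mean

random.seed(42)

def simulated_support(risk_questions_first: bool) -> int:
    """Support for a technology on a 1-10 scale, with an assumed one-point
    priming penalty when risk questions precede the support question."""
    base = random.randint(4, 9)
    return base - 1 if risk_questions_first else base

assignments = [random.random() < 0.5 for _ in range(1000)]  # random split
answers = [simulated_support(first) for first in assignments]

risk_first = [a for a, first in zip(answers, assignments) if first]
benefit_first = [a for a, first in zip(answers, assignments) if not first]
print(f"risk battery first:    mean support = {mean(risk_first):.2f}")
print(f"benefit battery first: mean support = {mean(benefit_first):.2f}")
# A systematic gap between the two means flags a question-order effect.
```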

[...]

In addition to questionnaire structure, the wording of specific questions is a critical variable in building a valid instrument. In particular, well-constructed questionnaires use language and terminology designed to avoid biases. Such biases may stem from language that is likely to be more accessible to some respondents than to others (e.g., terms that are more likely to be understood by certain ethnic groups or education-based cohorts) or that favors respondents who are more interested in or know more about science and technology in the first place. Any wording that feeds into these potential biases introduces systematic measurement error, since it does not produce an equally valid measure across all groups of the population.

[...]

These concerns about systematic measurement error are particularly relevant for a researcher’s ability to generalize from a sample to the general population. This is both a statistical and a substantive problem.

From a statistical perspective, surveys are designed to allow researchers to make inferences from observed sampling statistics (e.g., 52 percent of the sample favor more research on a particular technology) to unobservable population parameters (the proportion of people favoring this research in the population). For surveys based on probability sampling (i.e., surveys that give each person in the population a known, nonzero chance of being selected into the sample), the margin of error provides an indicator of how close the statistic observed in a sample is likely to be to the population parameter, and how certain researchers can be about this inference (usually calculated at a 95% confidence level). For the example above, a margin of error of +/-3% would therefore indicate that we can be 95% certain that the true level of support for more research in the population falls somewhere between 49% and 55%.
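
For readers who want to see the arithmetic, here is a minimal sketch of the margin-of-error calculation for a sample proportion under simple random sampling. Only the 52% figure comes from the running example; the sample size is a hypothetical fill-in chosen to produce roughly a 3-point margin.

```python
# A minimal sketch of the margin-of-error arithmetic for a sample proportion
# under simple random sampling, at the conventional 95% confidence level.
import math

def margin_of_error(p: float, n: int, z: float = 1.96) -> float:
    """95% margin of error for a sample proportion (z = 1.96)."""
    return z * math.sqrt(p * (1 - p) / n)

p, n = 0.52, 1067   # 52% support; n around 1,067 yields roughly +/-3 points
moe = margin_of_error(p, n)
print(f"{p:.0%} +/- {moe:.1%}  ->  interval {p - moe:.1%} to {p + moe:.1%}")
# Prints roughly: 52% +/- 3.0%  ->  interval 49.0% to 55.0%
```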

But generalizability of survey results goes beyond just statistical considerations – especially for scientific issues, such as nanotechnology or stem cell research. Given the interplay between societal dynamics, scientific complexities, and a lack of widespread awareness, some have raised concerns about the appropriateness of using large-scale surveys to tap public reactions to science and technology. These concerns typically fall into one of two categories that are both extremely important for any type of polling: first, what do we do with respondents who are not fully aware of or knowledgeable about the issue we are interested in, and, second, can we capture an issue in all its complexity in a short survey?

The concern about unaware respondents is not unique to polling about science and technology. Political surveys routinely show that large proportions of the U.S. public are unable to accurately place presidential candidates relative to one another, even on simple issues, such as gun control (e.g., Patterson, 2002). And in fact, attitude formation about political and scientific issues – for many citizens – has little to do with awareness of or knowledge about the specifics of a particular issue (Scheufele, 2006).

In order to make sure that all respondents have the same minimal baseline understanding of the technology that is being studied, surveys typically provide a short introduction to the issue as part of the question. Ideally, this introduction is comprehensive, but does not influence answers to subsequent questions by priming respondents with particular risks or benefits of the technology.

[...]

The second concern often raised about the substantive generalizability of survey results on science and technology is how much detail a telephone survey can go into. Some have argued, in fact, that the systematic nature of standardized surveys is directly at odds with the need for an in-depth and contextualized understanding of how citizens interact with emerging technologies.

And of course these critics are right to a certain degree. Phone surveys, for instance, have clear constraints with respect to length and to the number of questions that can be asked about a single topic. Respondents participate on a voluntary basis and they spend a substantial amount of time on the phone with the interviewer. If researchers ask too many questions about a given topic or if the interview is too long, people tend to get bored or even annoyed and hang up. And this is not just a problem of having fewer respondents overall. Rather, as outlined earlier, if an interview is too long or goes into too much detail it usually creates problems with representativeness.

What we end up with, in this case, is a sample of people that is no longer representative of the overall population. And that, of course, hurts the validity of a poll because it no longer does what it is intended to do, i.e., capture the opinions of everybody in a given population, not just people who are more interested in a given issue or who happen to have more time to respond to a pollster's questions.

As a result, it is important to understand surveys for what they are, i.e., one method of data collection that allows researchers to tap behaviors, levels of knowledge, and public attitudes toward science and technology in a very systematic and generalizable fashion. This comes with trade-offs related to the complexity of the data that surveys provide. In particular, large-scale population surveys are concerned with social patterns across large groups of respondents, and pay less attention to the potential complexity of a particular respondent’s belief system, for instance, and how it has developed over the course of his or her life.

Surveys can also be limited in how much they allow for causal inferences. This is particularly problematic for cross-sectional surveys, i.e., data collections at one point in time. Cross-sectional surveys may show a statistical correlation between exposure to science news in newspapers and scientific literacy, for instance, but they typically cannot provide conclusive evidence on the direction of this link. In other words, are knowledgeable respondents more likely to read the science section in newspapers, or does exposure to science news promote learning about science? Answers to these questions are typically provided by other research designs, some survey-based and some not.

Among the survey-based approaches that allow researchers to make some inferences about causality are longitudinal survey designs, which fall into three categories. Trend studies use multiple data collections with different samples to track responses to the same question over time. While trend studies can help researchers identify aggregate-level changes, they do not provide insights into how individual respondents change over time. Panel studies address this problem by collecting data at multiple points in time from the exact same set of respondents. Cohort studies, finally, are concerned with the effects that socialization or other influences have during certain periods of people’s lives. Is there a difference, for example, between respondents who went to college during the first moon landing and those who went to college in the 1990s with respect to their levels of interest in science and technology and their science media use over the course of their lives? In order to answer such questions, cohort analyses examine different subgroups (or cohorts), often defined by age, and compare their development as they grow older.
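
As a concrete (and entirely hypothetical) illustration of that cohort logic, the following sketch pools several survey waves, defines cohorts by birth year, and tracks each cohort's average science interest over time. The column names and values are invented for the example.

```python
# A cohort-analysis sketch: pool repeated survey waves, define cohorts by
# birth year, and track each cohort's average science interest over time.
# Column names and data are hypothetical, purely for illustration.
import pandas as pd

df = pd.DataFrame({
    "survey_year":  [1995, 1995, 2005, 2005, 2005, 2015, 2015],
    "birth_year":   [1948, 1972, 1948, 1972, 1988, 1972, 1988],
    "sci_interest": [7, 5, 7, 6, 4, 6, 5],   # hypothetical 1-10 scale
})

# Cohorts defined by birth year; respondents born 1941-1960 were roughly
# college-aged around the first moon landing.
bins = [1940, 1960, 1980, 2000]
labels = ["born 1941-1960", "born 1961-1980", "born 1981-2000"]
df["cohort"] = pd.cut(df["birth_year"], bins=bins, labels=labels)

# Rows are cohorts, columns are survey years; reading across a row shows how
# one cohort develops as it ages, which is the core of a cohort analysis.
print(df.pivot_table(index="cohort", columns="survey_year",
                     values="sci_interest", aggfunc="mean", observed=True))
```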

References:

Dillman, D. A. (2007). Mail and Internet surveys: The tailored design method (2nd ed.). New York, NY: Wiley.


Patterson, T. E. (2002). The vanishing voter: Public involvement in an age of uncertainty. New York, NY: Alfred A. Knopf.

Scheufele, D. A. (2006). Messages and heuristics: How audiences form attitudes about emerging technologies. In J. Turney (Ed.), Engaging science: Thoughts, deeds, analysis and action (pp. 20-25). London: The Wellcome Trust.

Zaller, J., & Feldman, S. (1992). A simple theory of survey response: Answering questions versus revealing preferences. American Journal of Political Science, 36(3), 579-616.

The encyclopedia, including the full chapter on surveys, is scheduled to appear with Sage in July 2010.

Tuesday, October 06, 2009

Partisan gaps in attitudes toward biofuels in Wisconsin

Almost two thirds of Wisconsinites support the use (62.5%) and production (60.4%) of biofuels. They are less sure, however, about the best ways to promote this new technology. Only a minority of Wisconsin citizens think that federal (47.9%), state (42.7%), or other government subsidies (48.7%) should be used to promote biofuels. And while almost three in five (59.7%) Wisconsin citizens think that the free market should regulate biofuels, a majority (52.7%) also believes that the oil industry will not invest in the new technology without government regulations.


These results are part of a new study of attitudes toward biofuels among Wisconsin citizens, conducted by a research group led by Bret Shaw and me in Life Sciences Communication at UW-Madison.


The tensions between markets and regulations are to some degree explained by clear ideological rifts within the Wisconsin population. While a majority of registered Democrats support the use of government subsidies for biofuels research (60.6%), fewer than 40% of registered Republicans do (38.9%). Similarly, three out of four (75.6%) Republicans believe that the free market should regulate biofuels -- a view that is shared by only 43.7% of Democrats. A majority of both Democrats (60.0%) and Republicans (51.3%), however, agrees that without government regulations, the oil industry will never invest in the development of biofuels.

Click here for the official UW-Madison press release with more details on the study.

Friday, October 02, 2009

Wisconsin among 20 most cited institutions worldwide

Wisconsin is once again in good company. Along with Harvard, MIT, Yale, Cambridge, Oxford, and other institutions, UW-Madison was just ranked among the top 20 most-cited institutions worldwide for the last decade. Enough said.

Click here for a PDF copy of the full report.