I'm a professor at U Michigan and lead a course on climate change problem solving. These articles often come from and contribute to the course.
By: Dr. Ricky Rood , 9:18 PM GMT on June 21, 2011
A Science-Organized Community: Organizing U.S. Climate Modeling (3)
In the previous entry I set out the need for a scientific organization; that is, an organization that is designed and run to honor the tenets of the scientific method. This stands in contrast to, say, a laboratory or a center that is populated by scientists carrying out a multitude of projects, each following the scientific method. One motivation for the scientific organization is the steady stream of reports from the past two decades calling for better integration of U.S. climate activities to provide predictions that meet societal needs. At the foundation of my argument is the claim that the way we teach, fund, and reward scientific investigation has, traditionally, been fragmenting. Without addressing this underlying fragmentation, the barriers to achieving the needed integration remain high. (see, Something New in the Past Decade?, The Scientific Organization, High-end Climate Science).
What does it take for an organization to adhere to the scientific method? Ultimately, I will arrive at the conclusion that it takes diligent management and governance, but for this entry I will continue to focus on the elements of the scientific method, and specifically the development of strategies to evaluate and validate collected, rather than individual, results.
In May I attended a seminar by David Stainforth. Stainforth is one of the principals in the community project climateprediction.net. From their website, “Climateprediction.net is a distributed computing project to produce predictions of the Earth's climate up to 2100 and to test the accuracy of climate models.” In this project, people download a climate model and run it on their personal computers; the results are then communicated back to a data center, where they are analyzed in concert with results from many other people.
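The round trip described above – hand out a task, run the model on a home computer, report the result back for pooled analysis – can be sketched roughly as follows. This is a toy illustration with invented names and a stand-in "model"; the real project runs on established distributed-computing infrastructure (the BOINC platform), not code like this.

```python
# Toy sketch of a volunteer-computing workflow: a server hands each
# volunteer a unique parameter set, each volunteer runs a (fake) model,
# and a data center pools the returned results. All names are invented.

def run_toy_model(params):
    """Stand-in for a climate model run on a volunteer's computer.
    Returns a fake 'global mean temperature' derived from the parameters."""
    base = 14.0  # rough present-day global mean surface temperature, deg C
    return base + params["co2_scaling"] * params["sensitivity"]

def serve_tasks(parameter_sets):
    """Server side: hand out one unique parameter set per task."""
    for task_id, params in enumerate(parameter_sets):
        yield task_id, params

def aggregate(results):
    """Data-center side: pool returned results to see the spread."""
    values = [r["value"] for r in results]
    return min(values), max(values)

# Enumerate a small grid of parameter sets (illustrative values only).
parameter_sets = [{"co2_scaling": c, "sensitivity": s}
                  for c in (0.5, 1.0, 1.5) for s in (2.0, 3.0, 4.5)]

# Simulate many volunteers completing tasks and reporting back.
results = []
for task_id, params in serve_tasks(parameter_sets):
    results.append({"task_id": task_id, "value": run_toy_model(params)})

low, high = aggregate(results)
print(f"{len(results)} runs, simulated temperature range: {low}-{high} C")
```

The point of the sketch is the division of labor: the individual volunteer only runs the model, while the design of the parameter sets and the pooled analysis are responsibilities of the organization.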
This is one example of community science, or citizen science. Other citizen science programs are Project Budburst and the Globe Program. There are a number of reasons for projects like this. One is to extend the reach of observations. In Project Budburst, people across the U.S. observe the onset of spring as indicated by different plants – when do leaves and blossoms emerge? A scientific motivation for doing this is to increase the number of observations, to try to assure that the Earth's variability is adequately observed – to develop statistical significance. In these citizen science programs, people are taught how to observe – a protocol is developed.
Education – that is another goal of these citizen science activities: education about the scientific method. In order to follow the scientific process, we need to know the characteristics of the observations. If, as in Project Budburst, we are looking for the onset of leafing, then we need to make sure that the tree is not sitting next to a warm building or in the building’s atrium. Perhaps there is a measurement requirement, for example, that the buds on a particular type of tree have expanded to a certain size or burst in some discernible way. Quantitative measurement and adherence to measurement practices are at the foundation of a controlled experiment. A controlled experiment is one in which we try to investigate only one thing at a time; this is a difficult task in climate science. If we are not careful about our observations and the design of our experiments, then it is difficult, perhaps impossible, to evaluate our hypotheses and arrive at conclusions. And the ability to test hypotheses is fundamental to the scientific method. Design, observations, hypothesis, evaluation, validation – in a scientific organization these things need to be done by the organization, not by each individual.
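An observation protocol like the one just described can be made concrete as a simple acceptance check applied to every submitted observation. The field names and thresholds below are invented for illustration – they are not Project Budburst's actual rules – but they show how a protocol turns informal guidance into a testable filter.

```python
# Hypothetical protocol check for a Budburst-style leafing observation.
# Field names and thresholds are illustrative assumptions, not the
# real project's protocol.

def passes_protocol(obs):
    """Accept an observation only if it follows the measurement protocol."""
    checks = [
        obs["distance_from_building_m"] >= 10,  # avoid artificially warm sites
        not obs["indoors"],                     # no atrium or greenhouse trees
        obs["bud_length_mm"] >= 5 or obs["bud_burst"],  # quantitative criterion
    ]
    return all(checks)

# A tree in an open yard passes; a tree 2 m from a heated wall does not,
# even though its buds meet the size criterion.
good = {"distance_from_building_m": 25, "indoors": False,
        "bud_length_mm": 7, "bud_burst": False}
bad = {"distance_from_building_m": 2, "indoors": False,
       "bud_length_mm": 7, "bud_burst": True}

print(passes_protocol(good), passes_protocol(bad))  # True False
```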
Let’s return to climateprediction.net. A major goal is to obtain a large number of simulations from climate models to examine the range of variability that we might expect in 2100. The strategy is to place relatively simple models in the hands of a whole lot of people. With this strategy it is possible to do many more experiments than, say, one scientist or even a small team of scientists can do. Many hundreds of thousands of simulations have been completed.
One of the many challenges faced in the model-based experiments is how to manage the model simulations to provide controlled experiments. If you think about a climate model as a whole, there are a number of things that can be changed. We can change something “inside” of the model; for example, we can change how rough we estimate the Earth’s surface to be – maybe grassland versus forest. We can change something “outside” of the model – the energy balance, perhaps some estimate of how the Sun varies or how carbon dioxide will change. And, still “outside” the model, we can change the details of what the climate looks like when the model simulation is started – do we start it with data from January 2003 or from July 2007? When you download a model from climateprediction.net, it has a unique set of these parameters. If you do a second experiment, it too will have a unique set of parameters. Managing these model configurations and documenting this information allows hundreds of thousands of simulations to be run as a systematic exploration of model variability. The experiment strategy is explained here.
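The three kinds of changes above – "inside" parameters, "outside" forcings, and initial states – can be sketched as a systematically enumerated, documented set of configurations. The parameter names and values here are invented for illustration; the real project's perturbed parameters are far more numerous and physically detailed.

```python
# Sketch of enumerating a parameter-perturbation ensemble so that every
# run is a documented, reproducible configuration. Names and values are
# illustrative assumptions, not climateprediction.net's actual settings.
from itertools import product

# "Inside" the model: a physics parameter such as surface roughness.
roughness = ["grassland", "forest"]
# "Outside" the model: a forcing such as a carbon dioxide trajectory.
co2_scenario = ["low", "mid", "high"]
# Also "outside": the initial state the simulation starts from.
initial_state = ["jan2003", "jul2007"]

# The Cartesian product gives every controlled combination; each run
# changes settings in a recorded way, so results can be compared.
configurations = [
    {"run_id": i, "roughness": r, "co2": c, "init": s}
    for i, (r, c, s) in enumerate(product(roughness, co2_scenario, initial_state))
]

print(len(configurations))  # 2 * 3 * 2 = 12 unique model configurations
```

With thousands of perturbed parameters instead of three, the same bookkeeping idea is what lets hundreds of thousands of volunteer runs add up to a controlled exploration of model variability rather than a pile of incomparable results.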
What impressed me about climateprediction.net is the ability to design and execute a volunteer organization that allows rigorous investigation by a group of thousands of people on thousands of different computers distributed all over the globe. Protocols have been set up to verify that the results are what they should be; there is confidence in the accuracy of the information collected. Here is an example in which scientists were able to define an organization where the scientific method permeates the organization. Is this proof that a formalized scientific organization is possible? What are the attributes that contribute to the success of a project like climateprediction.net? Are they relevant to a U.S. climate laboratory?
Bringing this back to the scale of U.S. climate activities – in 2008 there was a Policy Forum in Science Magazine by Mark Schaefer, Jim Baker and a number of distinguished co-authors. All of these co-authors had worked at high levels in the government, and they all struggled with the desire and need to integrate U.S. climate activities. Based on their experience, they proposed an Earth System Science Agency formed from a combined USGS and NOAA. In their article they pointed out: “The synergies among our research and monitoring programs, both space- and ground-based, are not being exploited effectively because they are not planned and implemented in an integrated fashion. Our problems include inadequate organizational structure, ineffective interagency collaboration, declines in funding, and blurred authority for program planning and implementation.” Planning and implementation in an integrated fashion – consistent, I will add, with the scientific method – is what is needed for a successful scientific investigation by an individual; it is needed to make climateprediction.net substantive; and it is needed for any climate organization that is expected, as a whole, to provide integrated climate information.
Figure 1: Location of participants in climateprediction.net. From the BBC, a sponsor of the experiment.
The views of the author are his/her own and do not necessarily represent the position of The Weather Company or its parent, IBM.