UPDATES: A number of feckless political commentators have simply missed this response I prepared, so I’m posting it at the top for a day or two. I’ll have a follow-up on what I’ve learned since then in the next day or two. Also, NCDC weighs in at the LA Times, calling BEST’s publicity effort, conducted without publishing science papers, “seriously compromised.”
Also – in case you have not seen it, this new analysis from an independent private climate data company shows how the siting of weather stations affects the data they produce. – Anthony
As many know, there’s a hearing today in the House of Representatives with the Subcommittee on Energy and Environment, Committee on Science, Space, and Technology and there are a number of people attending, including Dr. John Christy of UAH and Dr. Richard Muller of the newly minted Berkeley Earth Surface Temperature (BEST) project.
There seems to be a bit of a rush here, as BEST hasn’t completed all of their promised data techniques that would be able to remove the different kinds of data biases we’ve noted. That was the promise, and that is why I signed on (to share my data and collaborate with them). Yet somehow much of that has been thrown out the window, and they are presenting some results today without the full set of techniques applied. Based on my current understanding, they don’t even have some of them fully working and debugged yet. Knowing that, today’s hearing presenting preliminary results seems rather topsy-turvy. But post-normal science political theater is like that.
I have submitted this letter to be included in the record today. It is written for the Members of the committee, to give them a general overview of the issue, so it may seem generalized and previously covered in some areas. It also addresses technical concerns I have, shared by Dr. Pielke Sr., on the issue. I’ll point out that on the front page of the BEST project, they tout openness and replicability, but none of that is available in this instance, even to Dr. Pielke and me. They’ve had a couple of weeks with the surfacestations data, and now, without fully completing the main theme of data cleaning, are releasing early conclusions based on that data without providing the ability to replicate. I’ve seen some graphical output, but that’s it. What I really want to see is a paper and methods. Our upcoming paper was shared with BEST in confidence.
BEST says they will post Dr. Muller’s testimony with a notice on their FAQ’s page which also includes a link to video testimony. So you’ll be able to compare. I’ll put up relevant links later. – Anthony
UPDATE: Dr. Richard Muller’s testimony is now available here. What he proposes about Climate-ARPA is intriguing. I also thank Dr. Muller for his gracious description of the work done by myself, my team, and Steve McIntyre.
A PDF version of the letter below is here: Response_to_Muller_testimony
Chairman Ralph Hall
Committee on Science, Space, and Technology
2321 Rayburn House Office Building
Washington, DC 20515
Letter of response from Anthony Watts to Dr. Richard Muller testimony 3/31/2011
It has come to my attention that data and information from my team’s upcoming paper, shared in confidence with Dr. Richard Muller, is being used to suggest some early conclusions about the state of the quality of the surface temperature measurement system of the United States and the temperature data derived from it.
Normally such scientific debate is conducted in peer reviewed literature, rather than rushed to the floor of the House before papers and projects are complete, but since my team and I are not here to represent our work in person, we ask that this letter be submitted into the Congressional record.
I began studying climate stations in March 2007, stemming from a curiosity about the paint used on the Stevenson Screens (thermometer shelters) used since 1892, and still in use today, in the Cooperative Observer climate monitoring network. Originally the specification was for lime-based whitewash – the paint of the era in which the network was created. In 1979 the specification changed to modern latex paint. The question arose as to whether this made a difference. An experiment I performed showed that it did. Before conducting any further tests, I decided to visit nearby climate monitoring stations to verify that they had been repainted. I discovered they had, but also discovered a larger and more troublesome problem: many NOAA climate stations were sited next to heat sources and heat sinks, and had been surrounded by urbanization during the decades of their operation.
The surfacestations.org project started in June 2007 as a result of a collaboration begun with Dr. Roger Pielke Sr. at the University of Colorado, who had done a small-scale study (Davey and Pielke 2005) and found identical issues.
Since then, with the help of volunteers, the surfacestations.org project has surveyed over 1000 U.S. Historical Climatology Network (USHCN) stations, which are chosen by NOAA’s National Climatic Data Center (NCDC) to be the best of the NOAA volunteer-operated Cooperative Observer network (COOP). The surfacestations.org project was unfunded, using the help of volunteers nationwide, plus an extensive amount of my own volunteer time and travel. I have personally surveyed over 100 USHCN stations nationwide. Until this project started, even NOAA/NCDC had not undertaken a comprehensive survey to evaluate the quality of the measurement environment; they only looked at station records.
The work and results of the surfacestations.org project is a gift to the citizens of the United States.
There are two methods of evaluating climate station siting quality. The first is the older 100-foot rule implemented by NOAA http://www.nws.noaa.gov/om/coop/standard.htm which says:
The [temperature] sensor should be at least 100 feet from any paved or concrete surface.
A second siting quality method is for NOAA’s Climate Reference Network, (CRN) a hi-tech, high quality electronic network designed to eliminate the multitude of data bias problems that Dr. Muller speaks of. In the 2002 document commissioning the project, NOAA’s NCDC implemented a strict code for placement of stations, to be free of any siting or urban biases.
The analysis of metadata produced by the surfacestations.org project considered both techniques, and in my first publication on the issue, with 70% of the USHCN surveyed (Watts 2009), I found that only 1 in 10 NOAA climate stations met the siting quality criteria for either the NOAA 100-foot rule or the newer NCDC CRN rating system. Now, two years later, with over 1000 stations (82.5%) surveyed, the 1 in 10 number holds true using NOAA’s own published criteria for rating station siting quality.
During the nationwide survey, we found that many NOAA climate monitoring stations were sited in what can only be described as suboptimal locations. For example, one of the worst was identified in the data by Steve McIntyre as having the highest decadal temperature trend in the United States before we actually surveyed it. We found it at the University of Arizona Atmospheric Sciences Department and National Weather Service Forecast Office, where it was relegated to the center of their parking lot.
Photograph by surfacestations.org volunteer Warren Meyer
This USHCN station, COOP# 028815 was established in May 1867, and has had a continuous record since then. One can safely conclude that it did not start out in a parking lot. One can also safely conclude from human experience as well as peer reviewed literature (Yilmaz, 2009) that temperatures over asphalt are warmer than those measured in a field away from such modern influence.
The surfacestations.org survey found hundreds of other examples of poor siting choices like this. We also found equipment problems related to maintenance and design, as well as the fact that the majority of cooperative observers contacted had no knowledge of their stations being part of the USHCN, and were never instructed to perform an extra measure of due diligence to ensure the quality of their record keeping, or to keep their siting conditions homogeneous over time.
It is evident that such siting problems do in fact cause changes in absolute temperatures, and may also contribute to new record temperatures. The critically important question is: how do these siting problems affect the trend in temperature?
Other concerns, such as the effect of concurrent trends in local absolute humidity due to irrigation, which creates a warm bias in the nighttime temperature trends, the effect of height above the ground on the temperature measurements, etc. have been ignored in past temperature assessments, as reported in, for example:
Pielke Sr., R.A., C. Davey, D. Niyogi, S. Fall, J. Steinweg-Woods, K. Hubbard, X. Lin, M. Cai, Y.-K. Lim, H. Li, J. Nielsen-Gammon, K. Gallo, R. Hale, R. Mahmood, S. Foster, R.T. McNider, and P. Blanken, 2007: Unresolved issues with the assessment of multi-decadal global land surface temperature trends. J. Geophys. Res., 112, D24S08, doi:10.1029/2006JD008229
Klotzbach, P.J., R.A. Pielke Sr., R.A. Pielke Jr., J.R. Christy, and R.T. McNider, 2009: An alternative explanation for differential temperature trends at the surface and in the lower troposphere. J. Geophys. Res., 114, D21102, doi:10.1029/2009JD011841.
Steeneveld, G.J., A.A.M. Holtslag, R.T. McNider, and R.A Pielke Sr, 2011: Screen level temperature increase due to higher atmospheric carbon dioxide in calm and windy nights revisited. J. Geophys. Res., 116, D02122, doi:10.1029/2010JD014612.
These issues are not yet dealt with in Dr. Richard Muller’s analysis, and he agrees.
The abstract of the 2007 JGR paper reads:
This paper documents various unresolved issues in using surface temperature trends as a metric for assessing global and regional climate change. A series of examples ranging from errors caused by temperature measurements at a monitoring station to the undocumented biases in the regionally and globally averaged time series are provided. The issues are poorly understood or documented and relate to micrometeorological impacts due to warm bias in nighttime minimum temperatures, poor siting of the instrumentation, effect of winds as well as surface atmospheric water vapor content on temperature trends, the quantification of uncertainties in the homogenization of surface temperature data, and the influence of land use/land cover (LULC) change on surface temperature trends.
Because of the issues presented in this paper related to the analysis of multidecadal surface temperature we recommend that greater, more complete documentation and quantification of these issues be required for all observation stations that are intended to be used in such assessments. This is necessary for confidence in the actual observations of surface temperature variability and long-term trends.
While NOAA and Dr. Muller have produced analyses using our preliminary data that suggest siting has no appreciable effect, our upcoming paper reaches a different conclusion.
Our paper, Fall et al 2011 titled “Analysis of the impacts of station exposure on the U.S. Historical Climatology Network temperatures and temperature trends” has this abstract:
The recently concluded Surface Stations Project surveyed 82.5% of the U.S. Historical Climatology Network (USHCN) stations and provided a classification based on exposure conditions of each surveyed station, using a rating system employed by the National Oceanic and Atmospheric Administration (NOAA) to develop the U.S. Climate Reference Network (USCRN). The unique opportunity offered by this completed survey permits an examination of the relationship between USHCN station siting characteristics and temperature trends at national and regional scales and on differences between USHCN temperatures and North American Regional Reanalysis (NARR) temperatures. This initial study examines temperature differences among different levels of siting quality without controlling for other factors such as instrument type.
Temperature trend estimates vary according to site classification, with poor siting leading to an overestimate of minimum temperature trends and an underestimate of maximum temperature trends, resulting in particular in a substantial difference in estimates of the diurnal temperature range trends. The opposite-signed differences of maximum and minimum temperature trends are similar in magnitude, so that the overall mean temperature trends are nearly identical across site classifications. Homogeneity adjustments tend to reduce trend differences, but statistically significant differences remain for all but average temperature trends. Comparison of observed temperatures with NARR shows that the most poorly-sited stations are warmer compared to NARR than are other stations, and a major portion of this bias is associated with the siting classification rather than the geographical distribution of stations. According to the best-sited stations, the diurnal temperature range in the lower 48 states has no century-scale trend.
The finding that the mean temperature trend shows no statistically significant difference dependent on siting quality, while the maximum and minimum temperature trends do, indicates that the lack of a difference in the mean temperatures is coincidental for the specific case of the USA sites, and may not be true globally. At the very least, this raises a red flag on the use of the poorly sited locations for climate assessments, as these locations are not spatially representative.
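The arithmetic behind this point can be sketched in a few lines. The code below is illustrative only, not the actual analysis from Fall et al. or BEST: the station values are made up, and the grouping into CRN classes 1–2 (well sited) versus 4–5 (poorly sited) simply follows the rating scheme described above. It shows how opposite-signed Tmin and Tmax trend biases can cancel in the mean trend while leaving a large difference in the diurnal temperature range (DTR) trend.

```python
# Hypothetical sketch: comparing temperature trend estimates grouped by
# CRN siting class. All station values below are invented for illustration.
from statistics import mean

# (crn_class, tmin_trend, tmax_trend) in degC per decade
stations = [
    (1, 0.20, 0.25), (2, 0.22, 0.24),   # well sited (CRN 1-2)
    (4, 0.32, 0.15), (5, 0.35, 0.12),   # poorly sited (CRN 4-5)
]

def trends_for(classes):
    """Mean Tmin trend, mean Tmax trend, and DTR (Tmax - Tmin) trend
    for the stations whose CRN class is in `classes`."""
    subset = [s for s in stations if s[0] in classes]
    tmin = mean(s[1] for s in subset)
    tmax = mean(s[2] for s in subset)
    return tmin, tmax, tmax - tmin

good = trends_for({1, 2})
poor = trends_for({4, 5})
print("good sites Tmin/Tmax/DTR trends:", good)
print("poor sites Tmin/Tmax/DTR trends:", poor)

# Mean trend (average of Tmin and Tmax trends) is nearly identical across
# classes, even though the DTR trends have opposite signs.
print("good-site mean trend:", (good[0] + good[1]) / 2)
print("poor-site mean trend:", (poor[0] + poor[1]) / 2)
```

With these invented numbers the poorly sited group overestimates the Tmin trend and underestimates the Tmax trend; the mean trends come out within about 0.01 °C/decade of each other, while the DTR trends differ in sign, which is the pattern the abstract describes.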
Whether you believe the century of data we have from the NOAA COOP network is adequate, as Dr. Muller suggests, or believe that the poor siting placements and data biases documented in the nationwide climate monitoring network are irrelevant to long-term trends, there are some very compelling and demonstrative actions by NOAA that speak directly to the issue.
1. NOAA’s NCDC created a new hi-tech surface monitoring network in 2002, the Climate Reference Network, with a strict emphasis on ensuring high quality siting. If siting does not matter to the data, and the data is adequate, why have this new network at all?
2. Recently, while resurveying stations that I previously surveyed in Oklahoma, I discovered that NOAA has been quietly removing the temperature sensors from some of the USHCN stations we cited as the worst (CRN4, 5) offenders of siting quality. For example, here are before and after photographs of the USHCN temperature station in Ardmore, OK, within a few feet of the traffic intersection at City Hall:
NCDC confirms in their metadata database that this USHCN station has been closed, the temperature sensor removed, and the rain gauge moved to another location – the fire station west of town. It is odd that, after the station had been in operation since 1946, NOAA would suddenly cease to provide equipment to record temperature from it just months after it was surveyed by the surfacestations.org project and its problems highlighted.
3. Expanding the search, my team discovered many more instances nationwide where USHCN stations with poor siting identified by the surfacestations.org survey have either had their temperature sensors removed, been closed, or been moved. This includes the Tucson USHCN station in the parking lot, as evidenced by NOAA/NCDC’s own online metadata database, shown below:
It seems inconsistent with NOAA’s claim that siting has no effect on the data a station produces that they would need to close a station that had been in operation since 1867, just a few months after our team surveyed it in late 2007 and made its issues known.
It is our contention that many fully unaccounted-for biases remain in the surface temperature record, that the resultant uncertainty is large, and that systematic biases persist. This uncertainty and these systematic biases need to be addressed not only nationally, but worldwide. Dr. Richard Muller has not yet examined these issues.
Thank you for the opportunity to present this to the Members.