Berkeley Earth, a California-based non-profit research organization, has been preparing independent analyses of global mean temperature changes since 2013. The following is our report on global mean temperature during 2019.

We conclude that 2019 was the second warmest year on Earth since 1850. The global mean temperature in 2019 was colder than 2016, but warmer than every other year that has been directly measured. Consequently, 2016 remains the warmest year in the period of historical observations. Year-to-year rankings are likely to reflect short-term natural variability, but the overall pattern remains consistent with a long-term trend towards global warming.

Annual Temperature Anomaly

2019_Time_Series

 

                    Relative to 1981-2010 Average            Relative to 1951-1980 Average
Year    Rank        Anomaly (°C)      Anomaly (°F)            Anomaly (°C)      Anomaly (°F)
2019 2 0.54 ± 0.05 0.96 ± 0.08 0.90 ± 0.05 1.62 ± 0.08
2018 5 0.40 ± 0.05 0.72 ± 0.08 0.77 ± 0.05 1.38 ± 0.08
2017 3 0.47 ± 0.05 0.84 ± 0.08 0.84 ± 0.05 1.51 ± 0.08
2016 1 0.58 ± 0.04 1.05 ± 0.08 0.95 ± 0.04 1.71 ± 0.08
2015 4 0.45 ± 0.05 0.80 ± 0.08 0.81 ± 0.05 1.46 ± 0.08
2014 7 0.31 ± 0.05 0.55 ± 0.08 0.67 ± 0.05 1.21 ± 0.08
2013 11 0.24 ± 0.05 0.44 ± 0.08 0.61 ± 0.05 1.10 ± 0.08
2012 15 0.22 ± 0.04 0.39 ± 0.08 0.59 ± 0.04 1.05 ± 0.08
2011 17 0.20 ± 0.05 0.37 ± 0.08 0.57 ± 0.05 1.03 ± 0.08
2010 6 0.32 ± 0.05 0.58 ± 0.08 0.69 ± 0.05 1.24 ± 0.08
Uncertainties indicate 95% confidence range.

 

The global mean temperature in 2019 was estimated to be 1.28 °C (2.31 °F) above the average temperature of the late 19th century, from 1850-1900, a period often used as a pre-industrial baseline for global temperature targets.

The temperature uncertainties can be visualized using the schematic below, where each year’s temperature estimate is represented by a distribution reflecting its uncertainty. In the analysis that Berkeley Earth conducts, the uncertainty on the mean temperature is approximately 0.05 °C (0.08 °F) for recent years. The global mean temperature in 2019 fell between those observed in 2016 and 2017, with modest uncertainty in the true ranking due to the overlapping uncertainties.

2019_Probability_Distribution

The last five years stand out as a period of significant warmth well above all previous years since 1850. This reflects the long-term trend towards global warming. Though 2019 is slightly cooler than 2016, its temperature remains consistent with the long-term warming trend.

In addition to long-term warming, individual years are also affected by interannual variations in weather. Both 2015 and 2016 were warmed by an extreme El Niño event that peaked in Nov/Dec of 2015 and was reported by NOAA as essentially tied for the strongest El Niño ever observed. That exceptional El Niño boosted global mean temperatures in 2015 and 2016. By contrast, 2019 began with a weak El Niño event and finished with neutral conditions. This largely neutral weather pattern would not be expected to have had a large impact on temperature in 2019. Internal weather variability, such as El Niño and La Niña, generates year-to-year variations in temperature that occur in addition to the long-term warming trend.

Temperature Distribution in 2019

The following map shows the degree to which local temperatures in 2019 have increased relative to the average temperature in 1951-1980.

2019_Anomaly_Map

As can be expected from global warming caused by greenhouse gases, the temperature increase over the globe is broadly distributed, affecting nearly all land and ocean areas. In 2019, 88% of the Earth’s surface was significantly warmer than the average temperature during 1951-1980, 10% was of a similar temperature, and only 1.5% was significantly colder.

We estimate that 9.9% of the Earth’s surface set a new local record for the warmest annual average. In 2019, no places on Earth experienced a record cold annual average.

The following map qualitatively categorizes local temperatures in 2019 based on how different they were from historical averages after accounting for the typical climate variability at each location. In a stable climate only 2.5% of the Earth would be expected to have temperatures “Very High” or higher in any given year. In 2019, 52% of the Earth had annual averages that would rate as “Very High” compared to the historical climate, including large portions of the tropics. Locations with new records for annual average temperature are also indicated.

YTD_indicator_map

Land areas generally show more than twice as much warming as the ocean. When compared to 1951-1980 averages, the land average in 2019 has increased by 1.32 ± 0.04 °C (2.38 ± 0.08 °F) and the ocean surface temperature, excluding sea ice regions, has increased by 0.59 ± 0.06 °C (1.06 ± 0.11 °F). As with the global average, 2019 was the 2nd warmest year on land. For the ocean surface, we find that 2019 nominally ranks as the 3rd warmest year. However, the differences between the 1st, 2nd, and 3rd warmest years in the ocean are small compared to the measurement uncertainty, meaning they are all essentially indistinguishable. We note that other groups have announced that 2019 set a new record for total ocean heat content, including both surface and subsurface waters. The following figure shows land and ocean temperature changes relative to the average from 1850 to 1900. The tendency for land averages to increase more quickly than ocean averages is clearly visible.

2019_Land_Ocean_Compare

As in other recent years, 2019 also demonstrated very strong warming over the Arctic that significantly exceeds the Earth’s mean rate of warming. This is consistent with the process known as Arctic amplification. By melting sea ice, warming in the Arctic regions causes more sunlight to be absorbed by the ocean, which allows for yet more warming. 2019 was the 2nd warmest year in the Arctic.

Both the tendency for land to warm faster than ocean and the higher rate of warming over the Arctic are expected based on our understanding of how increases in greenhouse gas concentrations will impact the Earth’s climate. As has been reported by the Global Carbon Project and other observers, 2018 saw new records for both the level of carbon dioxide in the atmosphere and the annual amount of carbon dioxide emitted by human activities.

National Average Temperature

Though the focus of our work is on global and regional climate analysis, it is also possible to use our data to estimate national temperature trends.

In our estimation, 2019 was the hottest year since instrumental records began in the following 36 countries: Angola, Australia, Belarus, Belize, Botswana, Bulgaria, Cambodia, Comoros, Djibouti, Gabon, Guatemala, Hungary, Jamaica, Kenya, Laos, Latvia, Lithuania, Madagascar, Mauritius, Moldova, Myanmar, Namibia, Poland, Republic of the Congo, Romania, Serbia, Slovakia, Somalia, South Africa, Taiwan, Thailand, Tuvalu, Ukraine, Vietnam, Yemen, and Zimbabwe. In addition, it was also the warmest year thus far observed in Antarctica.

The following chart provides a summary of the warming that countries experienced in 2019 relative to the 1951-1980 average.

2019_Nations

These estimates for the changes in national annual average temperatures are derived from our global temperature fields. Due to uncertainties in the analysis and the limits of our spatial resolution, some national average estimates may differ slightly from the values reported by national weather agencies.

Monthly Temperature Pattern

Every month in 2019 was at least 1.1 °C (2.1 °F) warmer than the 1850 to 1900 average. Three months (June, July, and September) set a new monthly record for the globe, and no month ranked lower than 4th.

2019_Seasonal

2019_Months_Plot

Long-term Trend

Though it is interesting to understand the characteristics of individual years, global warming is ultimately about the long-term evolution of Earth’s climate. The following chart shows a ten-year moving average of the Earth’s surface temperature, plotted relative to the average temperature from 1850-1900.

2019_Projection

Since 1980, the overall trend is +0.19 °C/decade (+0.34 °F/decade) and has changed little during this period. By extrapolating this trend, we can make a rough estimate of how the near-future climate may develop if the forces driving global warming continue at their present rate.

As shown in the chart, several recent years have had temperatures more than 1 °C (1.8 °F) above the average temperature from 1850-1900, often used as an estimate of the pre-industrial climate. The Paris Agreement on Climate Change aims to keep global temperature rise to well below 2 °C (3.6 °F) and encourages parties to strive for warming of no more than 1.5 °C (2.7 °F). At the current rate of progression, the increase in Earth’s long-term average temperature will reach 1.5 °C (2.7 °F) above the 1850-1900 average by around 2035 and 2 °C (3.6 °F) will be reached around 2065. The increasing abundance of greenhouse gases in the atmosphere due to human activities is the direct cause of this recent global warming. If the Paris Agreement’s goal of no more than 2 °C (3.6 °F) warming is to be reached, significant progress towards reducing greenhouse gas emissions needs to be made soon.
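
As a rough illustration of how such crossing years follow from the trend, the extrapolation can be written out in a few lines of Python. The trend value comes from the text above; the current ten-year-average warming level is an approximate assumption rather than an official Berkeley Earth figure, so the exact crossing years shift with that choice.

# Rough extrapolation of the long-term warming trend discussed above.
# TREND_C_PER_DECADE is the post-1980 trend quoted in the text; the current
# ten-year-average level is an assumed, approximate value for illustration.

TREND_C_PER_DECADE = 0.19
ASSUMED_CURRENT_LEVEL_C = 1.2   # ten-year average vs. 1850-1900 (assumption)
CURRENT_YEAR = 2019

def year_reached(target_c, level_c=ASSUMED_CURRENT_LEVEL_C,
                 year=CURRENT_YEAR, trend=TREND_C_PER_DECADE):
    """Year the long-term average would cross target_c if the trend continues."""
    return year + 10.0 * (target_c - level_c) / trend

for target in (1.5, 2.0):
    print(f"{target} C reached around {year_reached(target):.0f}")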

Prediction for 2020

Based on historical variability and current conditions, it is possible to roughly estimate what global mean temperature should be expected in 2020. Our current estimate is that 2020 is likely to be similar to 2019 but with a potential to be somewhat warmer or cooler. It appears highly likely (~95% chance) that 2020 will be one of the five warmest years. In addition, we estimate a roughly 20% chance that 2020 could set a new record for warmest year.

2020_Prediction

Comparisons with other Groups

When preparing our year-end reports, Berkeley Earth traditionally compares our global mean temperature analysis to the results of five other groups that also report global mean surface temperature. The following chart compares Berkeley Earth’s analysis of global mean temperature to that of NASA’s GISTEMP, NOAA’s GlobalTemp, the UK’s HadCRUT, Cowtan & Way, and ECMWF’s reanalysis.

2019_Comparison

All of the major surface temperature groups except HadCRUT have also ranked 2019 as the 2nd warmest year; HadCRUT has 2019 as the 3rd warmest year.

The slight disagreement in the ranking reflects both the uncertainty in these estimates and the differences in how various research programs look at the Earth. For example, the NOAA and HadCRUT efforts omit most of the polar regions when estimating mean temperature changes. As a result, some groups are unable to capture the strong Arctic warming observed by Berkeley Earth, ECMWF, and others.

Methodology

In reconstructing the changes in global mean temperature since 1850, Berkeley Earth has examined 20 million monthly-average temperature observations from 48,000 weather stations. Of these, 20,000 stations and 210,000 monthly averages are available for 2019.

The weather station data is combined with sea surface temperature data from the UK Met Office’s Hadley Centre (HadSST). This ocean data is based on 392 million measurements collected by ships and buoys, including 18 million observations obtained in 2019. We reprocess and interpolate the HadSST data to provide a more complete picture of the oceans. After combining the ocean data with our land data, we arrive at a global picture of climate change since 1850.
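
As a simplified illustration of the final combination step, a global mean can be formed as an area-weighted average of land and ocean anomalies. This is only a sketch with assumed fixed area fractions; the actual analysis averages gridded fields, weights by true surface area, and treats temperatures over sea ice as land-surface air temperatures, so its global mean differs from this crude weighting.

# Illustrative area-weighted combination of land and ocean anomalies.
# The fixed 29% / 71% land/ocean split is an assumption for this sketch;
# the real analysis works with gridded fields and handles sea ice separately.

LAND_FRACTION = 0.29
OCEAN_FRACTION = 0.71

def global_mean_anomaly(land_anomaly_c, ocean_anomaly_c):
    """Crude area-weighted global mean anomaly in degrees C."""
    return LAND_FRACTION * land_anomaly_c + OCEAN_FRACTION * ocean_anomaly_c

# Using the 2019 land and ocean anomalies quoted earlier (vs. 1951-1980);
# the result differs from the reported 0.90 C global mean because of the
# simplifications noted above.
print(round(global_mean_anomaly(1.32, 0.59), 2))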

Uncertainties arise primarily from the incomplete spatial coverage of historical weather observations, from noise in measurement devices, and from biases introduced due to systematic changes in measurement technologies and methods. The total uncertainty is much less than the long-term changes in climate during the last 150 years.

This report is based on such weather observations as had been recorded into global archives as of early January 2020. It is common for additional observations to be added to archives after some delay. Consequently, temperature analysis calculations can be subject to revisions as new data become available. Such revisions are typically quite small and are considered unlikely to alter the qualitative conclusions presented in this report.

Copyright

This report was prepared by Berkeley Earth. The contents of this report, including all images and the referenced videos, may be reused under the terms of the Creative Commons BY-4.0 copyright license for any purpose and in any forum, consistent with the terms of that license.

Members of the news media may also use the materials in this report for any news reporting purpose provided that Berkeley Earth is properly acknowledged, without concern for whether or not the CC BY-4.0 license is followed.

Data

Updated data files appear on our data page and are refreshed monthly.

In particular, monthly and annual time series are available.

The following is a summary of global temperature conditions in Berkeley Earth’s analysis of November 2019.

Globally, November 2019 was the second warmest November since records began in 1850.

The global mean temperature was 0.88 ± 0.05 °C above the 1951 to 1980 average.  This is equivalent to being 1.26 ± 0.07 °C above the 1850 to 1900 average, which is frequently used as a benchmark for the preindustrial period.

Month_only_time_series_combined

November 2019 was slightly cooler than November 2015 (by ~0.05 °C), but warmer than all other Novembers since global temperature estimates began in 1850. In part, November 2015 was warmer due to a major El Niño event that peaked in late 2015 and early 2016. This November, conditions in the tropical Pacific are considered to be neutral with neither El Niño warming nor La Niña cooling present. This state is expected to continue for at least the next few months.

Temperature anomalies in November 2019 showed an appreciable decline from October 2019.  However, this had the effect of returning global conditions to a state similar to those observed in May through September, following an unusually warm October.

Monthly_time_series_combined_1980

Spatially, November 2019 continues the pattern of widespread warmth. Unusually warm conditions were present in much of the Arctic, parts of Africa, South America, Europe, and Antarctica. Unusually cold conditions were present across central Asia and parts of North America. In addition, very warm ocean conditions were present across parts of the Indian Ocean, Southern Atlantic, and Northern Pacific. We estimate that 7% of the Earth’s surface experienced its locally warmest November average, compared to only 0.05% that experienced a locally coldest November.

Month_anomaly_map

Over land regions, November 2019 was the 7th warmest November, coming in at 1.16 ± 0.09 °C above the 1951 to 1980 average. It was the 2nd warmest November in the oceans, recorded at 0.62 ± 0.07 °C above the 1951 to 1980 average. The warmest November on land occurred in 2010, while the warmest in the oceans occurred in 2015.

Month_only_time_series_land

Month_only_time_series_ocean

After 11 months, the Earth in 2019 has been marked by above average temperatures nearly everywhere, with the notable exception of relatively cool conditions in parts of North America.

YTD_anomaly_map

So far, only three of the eleven months in 2019 have been the warmest on record for their respective months, though every month in 2019 has ranked within the top 5 warmest.

SeasonalAnomalyNov

It appears nearly certain (>99% likelihood) that 2019 will conclude as the second-warmest year since measurements began in 1850, behind only the exceptional warmth of 2016.

Annual_time_series_combined

The following is a summary of global temperature conditions in Berkeley Earth’s analysis of October 2019.

Globally, October 2019 was the second warmest October since records began in 1850.

The global mean temperature was 0.99 ± 0.06 °C above the 1951 to 1980 average.  This is equivalent to being 1.38 ± 0.07 °C above the 1850 to 1900 average that is frequently used as a benchmark for the pre-industrial period.

Month_only_time_series_combined

Though October 2019 was nominally slightly cooler than October 2015 (by ~0.01 °C), this difference is within the margin of error, so 2019 and 2015 are effectively tied.

Temperature anomalies in October 2019 showed a marked uptick from September 2019.  It is too early to know whether this is merely a one-month fluctuation or the start of a short-term trend.

Monthly_time_series_combined_1980

In 2015, the jump to a warm October preceded an exceptionally warm December-through-April period that ultimately allowed 2016 to become the warmest year observed. However, in the 2015/2016 period an exceptionally strong El Niño was present.  Such conditions are not present currently and are not expected to arise during the next few months.

Spatially, October 2019 was marked by extreme warmth across the Arctic and significant warmth across much of Asia, Europe, the Middle East, and parts of Australia.  Unusually warm conditions were also present in the Indian Ocean and the Northern Pacific.  Extraordinary cold was present in parts of North America, including in some places that saw record-breaking monthly low averages for October.

Month_anomaly_map

Over land regions, October 2019 was also the 2nd warmest October, coming in at 1.46 ± 0.09 °C above the 1951 to 1980 average.  It was similarly the second warmest October in the oceans, recorded at 0.59 ± 0.07 °C above the 1951 to 1980 average.  In both cases, the warmest October occurred in 2015.

Month_only_time_series_land

Month_only_time_series_ocean

After 10 months, the Earth in 2019 has been marked by above average temperatures nearly everywhere, with the notable exception of relatively cool conditions in parts of North America.

YTD_anomaly_map

So far, only three of the ten months in 2019 have been the warmest on record for their respective months, though every month in 2019 has ranked within the top 5 warmest.

October2019_Seasonal

It appears nearly certain (99% likelihood) that 2019 will conclude as the second-warmest year since measurements began in 1850, behind only the exceptional warmth of 2016.

Annual_time_series_combined

 

 

Berkeley Earth, a California-based non-profit research organization, has been preparing independent analyses of global mean temperature changes since 2013. The following is our report on global mean temperature during 2018.

We conclude that 2018 was likely the fourth warmest year on Earth since 1850. Global mean temperature in 2018 was colder than 2015, 2016, and 2017, but warmer than every previously observed year prior to 2015. Consequently, 2016 remains the warmest year in the period of historical observations. The slight decline in 2018 is likely to reflect short-term natural variability, but the overall pattern remains consistent with a long-term trend towards global warming.

Annual Temperature Anomaly

Global Mean Temperature Anomaly 1850-2018

 

                    Relative to 1981-2010 Average            Relative to 1951-1980 Average
Year    Rank        Anomaly (°C)      Anomaly (°F)            Anomaly (°C)      Anomaly (°F)
2018 4 0.41 ± 0.04 0.73 ± 0.08 0.77 ± 0.04 1.39 ± 0.08
2017 2 0.46 ± 0.05 0.84 ± 0.08 0.83 ± 0.05 1.50 ± 0.08
2016 1 0.58 ± 0.05 1.05 ± 0.08 0.95 ± 0.05 1.71 ± 0.08
2015 3 0.44 ± 0.04 0.79 ± 0.08 0.81 ± 0.04 1.46 ± 0.08
2014 6 0.30 ± 0.05 0.54 ± 0.08 0.67 ± 0.05 1.21 ± 0.08
2013 10 0.24 ± 0.04 0.44 ± 0.08 0.61 ± 0.04 1.10 ± 0.08
2012 14 0.22 ± 0.04 0.39 ± 0.08 0.58 ± 0.04 1.05 ± 0.08
2011 16 0.20 ± 0.04 0.36 ± 0.08 0.57 ± 0.04 1.03 ± 0.08
2010 5 0.32 ± 0.05 0.57 ± 0.08 0.69 ± 0.05 1.24 ± 0.08
Uncertainties indicate 95% confidence range.

 

In our estimation, temperatures in 2018 were around 1.16 °C (2.09 °F) above the average temperature of the late 19th century, from 1850-1900, a period often used as a pre-industrial baseline for global temperature targets.

The temperature uncertainties can be visualized using the schematic below where each year’s temperature estimate is represented by a distribution reflecting its uncertainty. In the analysis that Berkeley Earth conducts, the uncertainty on the mean temperature for recent years is approximately 0.05 °C (0.08 °F). Since 2018 was colder than 2015 by only 0.04 °C (0.06 °F), the ranking of 3rd versus 4th warmest year can reasonably be regarded as ambiguous.

Temperature estimate and uncertainty

Though 2018 only ranks fourth overall, 2015 through 2018 still stand out as a period of significant warmth well above all previous years since 1850. This reflects the long-term trend towards global warming. Though 2018 is slightly cooler than the immediately preceding years, its temperature remains consistent with the long-term warming trend.

In addition to long-term warming, individual years are also affected by interannual variations in weather. Both 2015 and 2016 were warmed by an extreme El Niño event that peaked in Nov/Dec of 2015 and was reported by NOAA as essentially tied for the strongest El Niño ever observed. That exceptional El Niño boosted global mean temperatures in 2015 and 2016. By contrast, 2018 began with a weak-to-moderate La Niña event. Such conditions would be expected to have somewhat reduced the global mean temperature in 2018. Internal variability, such as El Niño and La Niña, generates year-to-year variations in temperature that occur in addition to the long-term warming trend.

Temperature Distribution in 2018

The following map shows the degree to which local temperatures in 2018 have increased relative to the average temperature in 1951-1980.

Temperature anomaly map in 2018

As can be expected from global warming caused by greenhouse gases, the temperature increase over the globe is broadly distributed, affecting nearly all land and ocean areas. In 2018, 85% of the Earth’s surface was significantly warmer than the average temperature during 1951-1980, 13% was of a similar temperature, and only 2.4% was significantly colder.

An animation showing the evolution of temperatures since 1850 across the globe has also been prepared:

Download video: http://berkeleyearth.lbl.gov/downloads/2018_Warming_Map.mp4

We estimate that 4.3% of the Earth’s surface set a new local record for the warmest annual average. Most significantly in 2018, this included large portions of Europe and the Middle East.

The heatwave which affected Europe in 2018 included by far the warmest May to October average that has been observed since record-keeping began. This long period of unusual summer warmth had a significant impact on the region and was accompanied in many areas by a significant drought.

European Summer Temperatures 1850-2018

In 2018, no places on Earth experienced a record cold annual average.

The following map qualitatively categorizes local temperatures in 2018 based on how different they were from historical averages after accounting for the typical climate variability at each location. In a stable climate only 2.5% of the Earth would be expected to have temperatures “Very High” or higher in any given year. In 2018, 44% of the Earth fell in these categories. Locations with new records for annual average temperature are also indicated.

Temperature anomaly indicator map

Land areas generally show more than twice as much warming as the ocean. When compared to 1951-1980 averages, the land average in 2018 has increased by 1.13 ± 0.05 °C (2.03 ± 0.09 °F) and the ocean surface temperature, excluding sea ice regions, has increased by 0.48 ± 0.06 °C (0.86 ± 0.11 °F). As with the global average, 2018 was the 4th warmest year on land. For the ocean surface, we find that 2018 was the 5th warmest year. We note that other groups have announced that 2018 set a new record for total ocean heat content, including both surface and subsurface waters. The following figure shows land and ocean temperature changes relative to the average from 1850 to 1900. The tendency for land averages to increase more quickly than ocean averages is clearly visible.

Land & Ocean warming comparison

As in other recent years, 2018 also demonstrated very strong warming over the Arctic that significantly exceeds the Earth’s mean rate of warming. This is consistent with the process known as Arctic amplification. By melting sea ice, warming in the Arctic regions causes more sunlight to be absorbed by the ocean, which allows for yet more warming. 2018 was the sixth warmest year in the Arctic.

Both the tendency for land to warm faster than ocean and the higher rate of warming over the Arctic are expected based on our understanding of how increases in greenhouse gas concentrations will impact the Earth’s climate. As has been reported by the Global Carbon Project and other observers, 2018 saw new records for both the level of carbon dioxide in the atmosphere and the annual amount of carbon dioxide emitted by human activities.

Lastly, we note that the equatorial Eastern Pacific shows a weak cooling pattern in the annual average map. This is reflective of the La Niña conditions occurring in the early part of 2018.

National Average Temperature

Though the focus of our work is on global and regional climate analysis, it is also possible to use our data to estimate national temperature trends.

In our estimation, 2018 was the hottest year since instrumental records began in the following 29 countries: Albania, Armenia, Austria, Bahrain, Belgium, Bulgaria, Bosnia and Herzegovina, Croatia, Cyprus, Czechia, France, Germany, Greece, Hungary, Italy, Kosovo, Liechtenstein, Luxembourg, FYR Macedonia, Monaco, Montenegro, Oman, Poland, Qatar, Serbia, San Marino, Slovakia, Switzerland and the United Arab Emirates. In addition, it was also the warmest year thus far observed in Antarctica.

The following chart summarizes the warming that countries experienced in 2018 relative to the 1951-1980 average. As mentioned previously, Europe and the Middle East experienced greater warmth in 2018 than most other regions.

Temperature anomaly by country and region 2018

These estimates for the changes in national annual average temperatures are derived from our global temperature fields. Due to uncertainties in the analysis and the limits of our spatial resolution, some national average estimates may differ slightly from the values reported by national weather agencies.

An animated version has also been prepared to better communicate the changes over time:

Download video: http://berkeleyearth.lbl.gov/downloads/SwitchboardTemperatureMovie_2018.mp4

Monthly Temperature Pattern

Every month in 2018 was at least 0.67 °C warmer than the 1951 to 1980 average, but no month in 2018 set a new monthly record for the globe. In the maps below, the persistent heat anomaly over Europe is visible through the latter portion of the year. A prolonged period of winter warmth in Antarctica is also visible. In the oceans, the weak La Niña pattern is visible in the Eastern Equatorial Pacific during the early months of 2018. Though a transition to weak El Niño conditions had been predicted for the end of 2018, the temperature patterns in the Pacific at the end of the year remain weak and unconsolidated.

Monthly Temperature Anomaly Maps

Long-term Trend

Though it is interesting to understand the characteristics of individual years, global warming is ultimately about the long-term evolution of Earth’s climate. The following chart shows a ten-year moving average of the Earth’s surface temperature, plotted relative to the average temperature from 1850-1900.

Global mean temperature projected to 2060

Since 1980, the overall trend is +0.19 °C/decade (+0.34 °F/decade) and has changed little during this period. By extrapolating this trend, we can make a rough estimate of how the near-future climate may develop if the forces driving global warming continue at their present rate.

As shown in the chart, several recent years have had temperatures more than 1 °C (1.8 °F) above the average temperature from 1850-1900, often used as an estimate of the pre-industrial climate. The Paris Agreement on Climate Change aims to keep global temperature rise to well below 2 °C (3.6 °F) and encourages parties to strive for warming of no more than 1.5 °C (2.7 °F). At the current rate of progression, the increase in Earth’s long-term average temperature will reach 1.5 °C (2.7 °F) above the 1850-1900 average by 2035 and 2 °C (3.6 °F) will be reached around 2060. The increasing abundance of greenhouse gases in the atmosphere due to human activities is the direct cause of this recent global warming. If the Paris Agreement’s goal of no more than 1.5 °C (2.7 °F) warming is to be reached, significant progress towards reducing greenhouse gas emissions must be made soon.

Prediction for 2019

Based on historical variability and current conditions, it is possible to roughly estimate what global mean temperature should be expected in 2019. Our current estimate is that 2019 is likely to be warmer than 2018, but unlikely to be warmer than the current record year, 2016. At present it appears that there is roughly a 50% likelihood that 2019 will become the 2nd warmest year since 1850.

Global Mean Temperature Anomaly 1850-2018 with 2019 prediction

This forecast is somewhat lower than the comparable forecasts issued by the UK Met Office and Gavin Schmidt of NASA. Those forecasts were both made somewhat earlier, before the end of December, and included a greater amount of El Niño related warming than now seems likely.

Comparisons with other Groups

This year is unusual due to the ongoing shutdown of the United States government. The shutdown has prevented NOAA and NASA from doing climate change analysis or presenting their results for 2018. The shutdown has also affected NOAA’s role as a key international depository and distribution point for climate data. Some archives, especially those closely related to weather forecasting, have remained available. However, other resources are off-line or incomplete due to the shutdown.

The government shutdown has affected multiple streams of input data that Berkeley Earth uses in our analysis. However, because we often acquire the same or similar data from multiple sources, we ultimately concluded that we had obtained enough data to complete our analysis of December 2018 temperatures.

The Hadley Centre of the UK Met Office has been a key partner in this analysis. They provide the ocean data that we interpolate to generate our sea surface temperature fields. Despite initial indications that this might not be possible due to disruptions in the data archives managed by NOAA, the Hadley Centre was able to complete their sea surface temperature analysis (HadSST). Though their sea surface temperature analysis is available, the UK Met Office’s final report for 2018 has also been delayed by the unavailability of other climate data archives run by NOAA.

When preparing our year-end reports, Berkeley Earth traditionally compares our global mean temperature analysis to the results of five other groups that also report global mean surface temperature. Initially, we had planned to join most of the other climate analysis groups with a coordinated release on January 17th; however, that was impossible under current conditions. At present, the global reanalysis project run by the ECMWF is the only other project that has been able to issue their report. ECMWF also concluded that 2018 was the fourth warmest year, though their estimate for 2018 was in a near tie with their 3rd warmest year.

Despite the recent disruption, we do have the results of each of the other groups for January through November of 2018. Even without official results for December 2018, the partial year results are such that it is very likely that all six groups will ultimately concur that 2018 was the fourth warmest year.

The following chart compares Berkeley Earth’s analysis of global mean temperature to that of the other groups, though only Berkeley Earth and ECMWF data have been updated through 2018.

Multi-group temperature comparison 1850-2018

Methodology

In reconstructing the changes in global mean temperature since 1850, Berkeley Earth has examined 19 million monthly-average temperature observations from 46,000 weather stations. Of these, 20,000 stations and 205,000 monthly averages are available for 2018.

The weather station data is combined with sea surface temperature data from the UK Met Office’s Hadley Centre (HadSST). This ocean data is based on 374 million measurements collected by ships and buoys, including 19 million observations obtained in 2018. We reprocess and interpolate the HadSST data to provide a more complete picture of the oceans. After combining the ocean data with our land data, we arrive at a global picture of climate change since 1850.

Uncertainties arise primarily from the incomplete spatial coverage of historical weather observations, from noise in measurement devices, and from biases introduced due to systematic changes in measurement technologies and methods. The total uncertainty is much less than the long-term changes in climate during the last 150 years.

This report is based on such weather observations as had been recorded into global archives as of early January 2019. It is common for additional observations to be added to archives after some delay, an issue that is more likely this year due to the US government shutdown. Consequently, temperature analysis calculations can be subject to revisions as new data become available. Such revisions are typically quite small and are considered unlikely to alter the qualitative conclusions presented in this report.

Copyright

This report was prepared by Berkeley Earth. The contents of this report, including all images and the referenced videos, may be reused under the terms of the Creative Commons BY-4.0 copyright license for any purpose and in any forum, consistent with the terms of that license.

Members of the news media may also use the materials in this report for any news reporting purpose provided that Berkeley Earth is properly acknowledged, without concern for whether or not the CC BY-4.0 license is followed.

Data

Updated data files appear on our data page and are refreshed monthly.

In particular, monthly and annual time series are available.

Berkeley Earth, a California-based non-profit research organization, has been preparing independent analyses of global mean temperature changes since 2013. The following is our report on global mean temperature during 2017.

We conclude that 2017 was likely the second warmest year on Earth since 1850. Global mean temperature in 2017 was 0.03 °C (0.05 °F) warmer than 2015, but 0.11 °C (0.20 °F) colder than 2016. As a result, 2016 remains the warmest year in the historical observations.

Horrific Air Pollution in Europe Reaches 7 cigarettes per day equivalent, a pack a day in India and China

It’s winter, and that’s the worst air pollution period for Europe and China. The levels over much of the continent are in the unhealthy range. In the figure we show a map of fine particulate matter pollution in Europe, “PM2.5”, the most lethal of the common air pollutants. The map was taken from our website: http://berkeleyearth.org/air-quality-real-time-map/ where it is updated hourly. Grey areas (such as in Italy and Russia) are regions in which hourly updates are not publicly available.

Europe air pollution

The scale of “cigarettes per day” is used to make the levels easier to understand. The values were calculated by comparing the known health risk of cigarettes to the known health risks of PM2.5 as estimated by the World Health Organization. Throughout much of Europe the pollution levels give a health effect equivalent to that of every man, woman, and child smoking 5 cigarettes per day; in the worst regions of Europe, the level exceeds 7 cigarettes per day equivalent.  For more information on PM2.5 and cigarette equivalence, see our memo: http://berkeleyearth.org/air-pollution-and-cigarette-equivalence/

The second plot shows yesterday’s air pollution around the world.  The worst pollution is in India and China, where levels reach over a pack of cigarettes per day (PM2.5 above 400 micrograms per cubic meter). It was not a good day for much of the world, except for the US, Japan, and some small scattered regions. The pollution tends to be exacerbated in winter, when more fuel is burned for heat (even renewables such as wood and biomass contribute to air pollution) and when atmospheric conditions are likely to trap the pollution.

World Air Pollution 2017-01-24 at 4.21.39 PM

For more detailed information on Berkeley Earth’s work on air pollution, see: http://berkeleyearth.org/air-pollution-overview/.


2016 was the warmest year since humans began keeping records, by a wide margin. Global average temperatures were extremely hot in the first few months of the year, pushed up by a large El Niño event. Global surface temperatures dropped in the second half of 2016, yet still show a continuation of global warming. The global warming “pause”, which Berkeley Earth had always stressed was not statistically significant, now appears clearly to have been a temporary fluctuation.

Monthly_time_series_combined_2000


Robert Rohde, Lead Scientist with Berkeley Earth, said “The record temperature in 2016 appears to come from a strong El Nino imposed on top of a long-term global warming trend that continues unabated.”

In addition, 2016 witnessed extraordinary warming in the Arctic. The way that temperatures are interpolated over the Arctic is now having a significant impact on global temperature measurements. Zeke Hausfather, Scientist at Berkeley Earth said, “The difference between 2015 and 2016 global temperatures is much larger in the Berkeley record than in records from NOAA or the UK’s Hadley Centre, since they do not include the Arctic Ocean and we do. The arctic has seen record warmth in the past few months, and excluding it leads to a notable underestimate of recent warming globally.”

Elizabeth Muller, Executive Director of Berkeley Earth, said, “We have compelling scientific evidence that global warming is real and human caused, but much of what is reported as ‘climate change’ is exaggerated. Headlines that claim storms, droughts, floods, and temperature variability are increasing, are not based on normal scientific standards. We are likely to know better in the upcoming decades, but for now, the results that are most solidly established are that the temperature is increasing and that the increase is caused by human greenhouse emissions. It is certainly true that the impacts of global warming are still too subtle for most people to notice in their everyday lives.”

Richard Muller, Scientific Director of Berkeley Earth, said: “We project that continued global warming will lead us to an average temperature not yet experienced by civilization. It would be wise to slow or halt this rise. The most effective and economic approach would be to encourage nuclear power, substitution of natural gas for future coal plants, and continued improvement of energy efficiency.”

Additional figures on Berkeley Earth’s 2016 temperature results are available at www.BerkeleyEarth.org (click on banner at the top of the page).


by Zeke Hausfather

Air pollution in China is a critical global health problem, responsible for somewhere between 700,000 and 2.2 million premature deaths annually. The largest single contributor to air pollution-related mortality is particulate matter below 2.5 microns in size, or PM2.5. Exposure to PM2.5 has been found to increase the risk of heart attacks, strokes, lung disease, and lower respiratory disease based on a number of longitudinal studies around the world comparing populations with differing levels of exposure. China is the world’s largest consumer of coal, and coal is responsible for the vast majority of electricity generated in China. I’ll argue that replacing coal-based generation with low-polluting alternatives like nuclear, natural gas, or renewables could save between 200 and 1,000 lives per gigawatt-year.

China has, on average, among the worst air pollution of any country in the world. Many areas of the country have PM2.5 levels that rank on average (using the U.S. EPA rating scheme) as “Unhealthy”, with spikes to “Very Unhealthy” or “Hazardous” relatively common. Occasionally, cities in China experience air pollution levels literally off the charts, with PM2.5 concentrations reaching 1,000 or more micrograms per cubic meter (µg/m3); the standard health scale ends at 500 µg/m3. The image below, from Berkeley Earth’s real-time air pollution monitoring of China, shows PM2.5 levels on a typical fall day.

zeke_fig1_deathsgwyear

Figure 1. PM2.5 levels over China and nearby countries on October 11th at 13:00 UTC. Real-time updates are available at: http://berkeleyearth.org/air-quality-real-time-map/

A large portion of PM2.5 in China comes from coal use. When coal is burned, it both directly releases particulate matter as a result of incomplete combustion and releases sulfur dioxide, nitrogen oxides, and black carbon that serve as important precursors to atmospheric PM2.5 formation. In many cases this secondary formation is more important than direct PM2.5 emissions from coal. Approximately 55 percent of coal consumed in China is used to generate electricity provided to the grid. 40 percent is consumed in industrial processes, while around 3 percent is used in the residential sector (with the remaining 2 percent consumed by commercial and other). China generates approximately 3,700 terawatt-hours or 422 gigawatt-years (GW-years) of electricity from coal annually.

Determining the contribution of coal-based electricity generation to PM2.5-related mortality requires estimating what percent of total PM2.5 in China comes from coal-fired power plants. A recent study by the Health Effects Institute and Tsinghua University performed extensive atmospheric modeling of PM2.5 sources to attribute overall concentrations. It found that about 40 percent of total PM2.5 could be directly attributed to coal: roughly 21 percent of the total came from industrial coal use (steel production, for example), 12 percent came from power plant coal, and 6 percent came from domestic coal burning (for space heating). It is important to note, however, that the study assumed widespread use of scrubbers for electric power production; if the use of scrubbers is low, then a larger fraction of the observed PM2.5 would be attributed to coal use. The breakdown of PM2.5 by source is shown in the pie chart below, both for the high power-plant scrubber use scenario put forward by the Health Effects Institute and a low scrubber use scenario that assumes power-sector emissions are more similar to those of industry.

zeke_fig2_deathsgwyear

Figure 2. Breakdown of estimate PM2.5 contributions by source in the Burden of Disease Attributable to Coal-Burning and Other Air Pollution Sources in China study from the Health Effects Institute and Tsinghua University in the High Scrubber Use scenario. The Low Scrubber Use scenario shows estimated contribution percentages if power plant emissions were more similar to industrial sector emissions.

The Health Effects Institute also provided an estimate of total mortality from PM2.5 of 916,000 deaths per year (with a remarkably narrow uncertainty range of 820,000 to 993,000), a mortality that is smaller than the prior Berkeley Earth estimate of 1.6 million, although within the published uncertainty range (700,000 to 2.2 million). Assuming that 12 percent of PM2.5 (and thus 12 percent of PM2.5 mortality) is attributable to coal-based electricity generation, we can estimate deaths per gigawatt-year (GW-year) as 260 (uncertainty range 233 to 282) using the Health Effects Institute numbers and 454 deaths per GW-year (uncertainty range 199 to 625) using the Berkeley Earth numbers.
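
A minimal sketch of the arithmetic behind these central figures follows; the input numbers are the ones quoted above, and the helper function is ours rather than part of either study.

# Back-of-the-envelope reproduction of the central deaths-per-GW-year values.
COAL_TWH_PER_YEAR = 3700.0
HOURS_PER_YEAR = 8766.0                                        # average year length in hours
COAL_GW_YEARS = COAL_TWH_PER_YEAR * 1000.0 / HOURS_PER_YEAR    # ~422 GW-years

def deaths_per_gw_year(total_pm25_deaths, power_plant_share,
                       gw_years=COAL_GW_YEARS):
    """Deaths attributable to coal power per GW-year of coal generation."""
    return total_pm25_deaths * power_plant_share / gw_years

# High-scrubber-use case: 12% of PM2.5 attributed to power-plant coal.
print(round(deaths_per_gw_year(916_000, 0.12)))    # ~260 (Health Effects Institute central estimate)
print(round(deaths_per_gw_year(1_600_000, 0.12)))  # ~455 (Berkeley Earth central estimate; quoted as 454 above)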

It is somewhat interesting to note that while both the power generation and industrial sectors consume comparable amounts of coal, the PM2.5 contribution from the industrial sector is twice that of the electricity sector in the Health Effects Institute model. This is largely due to assumptions regarding the utilization of emissions control technologies like sulfur, nitrogen, and PM2.5 scrubbers. Nearly all power plants are assumed to use scrubbers, while use in the industrial sector is spottier. However, there is strong reason to believe that the actual utilization of scrubbers by power plants is much lower than officially reported. Operation of scrubbers consumes a non-negligible amount of energy, and there have been numerous reports of plant operators shutting down scrubbers to increase profits when they can get away with it. While China has started to crack down on this behavior, it is likely that actual scrubber use is still well below the 90+ percent assumed.

We can get an estimate of the mortality contribution of coal-based electricity generation when scrubbers are not fully utilized by examining a case where coal power plants have the same PM2.5 contribution as industry. While this is likely not true across the entire power sector, it may well be the case (or even a conservative estimate) for individual plants. This still results in effective power-sector emissions per ton of coal that are around 30 percent lower than for the industrial sector, as more coal is consumed in the power sector. If the power generation sector has emissions similar to the industrial sector, deaths per GW-year would be 456 (412 to 497) for the Health Effects Institute mortality model and 745 (326 to 1025) for the Berkeley Earth model. These results are illustrated in Figure 3.

zeke_fig3_deathsgwyear

Figure 3. Estimated mortality per gigawatt-year of coal-based electricity generation attributable to PM2.5. Both estimates from the Health Effects Institute and Berkeley Earth are shown. The high scrubber use scenario follows the Health Effects Institute assumptions, while the low scrubber use scenario assumes emissions allocation similar to the industrial sector. Both published error bars (solid bars) and estimated error bars using the mortality impact uncertainty in the Berkeley Earth approach (dashed bars) are shown for the Health Effects Institute numbers.

Other studies conducted on mortality impacts of coal-based generation outside of China have found comparable results. A 2007 article in The Lancet estimated a mortality rate per GW-year of 202 for the U.K. and an average value of 231 globally. These estimates are noticeably lower than most of the estimates we consider for China, though they overlap with the low end of our values. This discrepancy might be due to two factors: first, there is reason to believe that emissions control technologies are in more widespread use in places like the U.K. and other parts of the world than in China. Second, the mortality estimates will be impacted by population density, with coal generation located in coastal China (where plants are primarily concentrated) having a much larger exposure impact than in most regions of the world.

Ultimately, at 422 GW-years of electricity produced annually from coal, we can bound Chinese coal-generation-related deaths at between 84,000 (200 deaths per GW-year) and 434,000 (1,025 per GW-year). The consumption of coal for electricity generation thus has large negative public health implications for China. Moving away from coal and toward alternatives like nuclear, natural gas, and renewables such as solar and wind would not only reduce China’s greenhouse gas emissions but also save lives.

by: Zeke Hausfather, Berkeley Earth zeke@berkeleyearth.org

The relationship between greenhouse gas (GHG) emissions and future warming is complex, depending on the atmospheric lifetime of each gas, its radiative forcing, and the thermal inertia of the Earth, particularly our oceans. Many non-CO2 GHGs have shorter atmospheric lifetimes, and the global warming potential values commonly used to analyze emission impacts fail to effectively capture the important relationship between the timing of an emission and its resulting impact on global surface temperature.

In order for researchers to easily translate emissions of CO2, CH4, and N2O into future warming consistent at a global level with the results obtained from the latest generation of climate models, we have developed a simple Python-based climate model we call SimMod (available on GitHub). If provided with annual emissions of each GHG, it will convert these into atmospheric concentrations, radiative forcing, and transient climate response (warming) per year through 2100 (or any other specified period).

The model comes with four built-in emission scenarios, the IPCC’s RCP scenarios, which can be used as a starting point for analysis. We also used the published atmospheric concentrations, radiative forcing, and transient climate responses to evaluate the model’s performance. The emission scenarios are shown in Figure 1, below.

Figure1

Figure 1: Emissions of CO2, CH4, and N2O for the four RCPs.

 

These emissions are converted into concentrations either using pulse-response functions for each gas (simple exponential decay for CH4 and N2O; a response function fit to the BERN carbon cycle model in the case of CO2) or using the BEAM carbon cycle model for CO2, whichever the user specifies. The resulting atmospheric concentrations for each RCP scenario are shown in Figure 2. In general, the modeled concentrations match RCP scenarios well, with some exceptions for high CO2 emission scenarios (e.g. RCP 8.5) where carbon cycle feedbacks reduce ocean uptake in a manner not reflected in the simple pulse response model.
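
As an illustration of the simple-decay case, the sketch below convolves an annual emission series with an exponential pulse response, roughly as described above for CH4 and N2O. The 12.4-year lifetime and the mass-to-concentration factor are assumptions chosen for the example and are not taken from SimMod's configuration.

# Sketch of the exponential pulse-response step for a short-lived gas.
# Each year's emission pulse decays with a fixed lifetime, and the
# concentration anomaly is the sum of all surviving pulses.
import numpy as np

def decay_concentration(annual_emissions, lifetime_years, mass_per_unit_conc):
    """Concentration anomaly from annual emissions via exponential decay."""
    n = len(annual_emissions)
    conc = np.zeros(n)
    for t, emission in enumerate(annual_emissions):
        ages = np.arange(n - t)
        conc[t:] += (emission / mass_per_unit_conc) * np.exp(-ages / lifetime_years)
    return conc

# Example: a constant 300 Tg/yr CH4 source, an assumed ~12.4-year lifetime,
# and an assumed ~2.78 Tg CH4 per ppb of atmospheric concentration.
ch4_ppb = decay_concentration(np.full(50, 300.0), 12.4, 2.78)
print(round(ch4_ppb[-1]))   # concentration anomaly (ppb) after 50 years of constant emissions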

Figure2

Figure 2: Atmospheric concentrations CO2, CH4, and N2O for the four RCPs and SimMod, with values normalized for the year 2000. Dashed lines are SimMod results; solid lines are RCP-provided values.

Atmospheric concentrations of each gas are converted into radiative forcing using the IPCC’s simple radiative forcing functions. When provided with the same atmospheric concentrations as the RCP scenarios, the resulting radiative forcing closely matches RCP scenario forcing, as shown in Figure 3.
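
For the CO2 term, the widely used logarithmic simplification can be written as a one-line function. The 278 ppm pre-industrial concentration is a conventional round value; the CH4 and N2O expressions, which involve square roots and an overlap term, are omitted here for brevity.

# The simplified logarithmic expression for direct CO2 radiative forcing.
import math

def co2_forcing_w_m2(conc_ppm, preindustrial_ppm=278.0):
    """Direct CO2 forcing in W/m^2 relative to the pre-industrial concentration."""
    return 5.35 * math.log(conc_ppm / preindustrial_ppm)

print(round(co2_forcing_w_m2(2 * 278.0), 2))   # a doubling of CO2 -> ~3.71 W/m^2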

Figure3

Figure 3: Total direct radiative forcing values (CO2 + CH4 + N2O) for the four RCPs and SimMod using the RCP-provided concentrations. Dashed lines are SimMod results; solid lines are RCP-provided values.

Finally, radiative forcing is converted into transient climate response using a continuous diffusion slab ocean model adapted from Caldeira and Myhrvold (2012) and a specified climate sensitivity. The global average temperature is estimated by a weighted average of the ocean model response and the equilibrium temperature response over land. Figure 4 shows the resulting transient temperature response given the RCP scenario forcings compared to the IPCC’s latest climate model runs (CMIP5). The black line is the multi-model mean, while the grey area is the 95% confidence intervals of climate models. The solid red line is the SimMod transient climate response, while the dashed red line represents the equilibrium response (e.g. if there were no oceans to buffer the climate response time).
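
The sketch below is not SimMod's continuous-diffusion ocean; it is a deliberately simpler one-box energy-balance model, included only to show why ocean heat uptake makes the transient response lag the equilibrium response. The heat capacity and climate sensitivity values are illustrative assumptions.

# A one-box energy-balance stand-in for the ocean heat-uptake step:
#   C * dT/dt = F - lambda * T
# SimMod uses a continuous-diffusion slab ocean instead; this is only a sketch.
import numpy as np

def transient_response(forcing_w_m2, sensitivity_k=3.0,
                       heat_capacity_j_m2_k=8.0e8, seconds_per_year=3.15e7):
    """Annual temperature response (K) to a yearly forcing series (W/m^2)."""
    lam = 3.71 / sensitivity_k            # feedback parameter, W/m^2 per K
    temps = np.zeros(len(forcing_w_m2))
    for i in range(1, len(forcing_w_m2)):
        imbalance = forcing_w_m2[i - 1] - lam * temps[i - 1]
        temps[i] = temps[i - 1] + imbalance * seconds_per_year / heat_capacity_j_m2_k
    return temps

# Abrupt, sustained forcing of ~3.71 W/m^2 (roughly a CO2 doubling): the
# transient response at year 50 is still below the ~3 K equilibrium value.
step_forcing = np.full(200, 3.71)
response = transient_response(step_forcing)
print(round(response[50], 2), round(response[-1], 2))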

Figure4

Figure 4: SimMod transient (solid red) and equilibrium (dashed red) temperature response compared to CMIP5 model results for each RCP.

We hope this model provides a useful tool for researchers looking to move away from simplistic global warming potentials to examine the time-evolution of the temperature response to different emission or mitigation scenarios.

On May 1, 2016, a fire began southwest of Fort McMurray, Canada. It swept through the community, destroying over 2,000 structures.

You can read about the latest developments here.

Since 2014, Berkeley Earth has been expanding its real-time database of air pollution. Our original focus was on China, but we have continued to add other regions of the world, including Canada.

Below is a snapshot from May 7, 2016.

May 7 air quality map whole world

A close up view:

CloseupMurray

Today’s update, available at http://berkeleyearth.org/air-quality-real-time-map/, shows the change.

 

Capture

by Richard A. Muller and Elizabeth A. Muller

For many people, comparing air pollution to cigarette smoking is more vivid and meaningful than citing the number of yearly deaths. When we published our scientific paper on air pollution in China in August 2015 [1], we were surprised by the attention we received for a quick comparison between air pollution on a particularly bad day in Beijing and smoking 1.5 cigarettes every hour. We were also surprised to find that a prominent researcher, Arden Pope, had previously calculated that average pollution in Beijing is similar to smoking 0.3 cigarettes per day – and that this comparison is used to reassure people that the pollution really isn’t that bad.

China cigarette map 13 Dec 2015 sm

In this memo, we will derive the rough value of conversion, so people can think of air pollution in terms of cigarettes equivalent. The sole goal of this calculation is to help give people an appreciation for the health effects of air pollution. We will also discuss the apparent discrepancy with Arden Pope (now resolved), which stems from our comparing the health impacts of cigarettes, rather than the amount of PM2.5 (the most deadly pollutant) delivered.

In summary, we find that air pollution can be approximated as cigarettes equivalent as follows:

Air Pollution Location      Equivalent in cigarettes per day
US, average                 0.4
EU, average                 1.6
China, average              2.4
Beijing, average            4.0
Handan, average             5.5
Beijing, bad day            25.0
Harbin, very bad day        45.0
Shenyang, worst recorded    63.0

Calculation
We start with some numbers estimated by the US Centers for Disease Control and Prevention: 480,000 people die in the US every year due to smoking [2]. The number of cigarettes sold in the US has been dropping, from 470 billion per year in 1998 to 280 billion per year in 2013. For the purpose of our rough estimate, we will take an average figure of 350 billion; it is easy to adjust the numbers using different values.

Now we combine these numbers. The ratio of deaths per year to cigarettes per year is 0.00000137, expressed in scientific notation as 1.37 × 10^-6. Put another way, there are 1.37 deaths every year for every million cigarettes smoked. We note that this figure agrees with the value of 1.4 published by Bernard Cohen in 1991 [3].

Now let’s consider air pollution. The most harmful pollution consists of small particulate matter, 2.5 microns in size or less, called PM2.5. These particles are small enough to work their way deep into the lungs and into the bloodstream, where they trigger heart attacks, strokes, lung cancer, and asthma. In the Berkeley Earth review of deaths in China, we showed that 1.6 million people die every year from an average exposure of 52 μg/m3 of PM2.5. To kill 1.6 million people would require, assuming 1.37 × 10^-6 deaths per cigarette, nearly 1.2 trillion cigarettes. Since the population of China is 1.35 billion, that comes to 864 cigarettes every year per person, or about 2.4 cigarettes per day.

Thus the average person in China, who typically breathes 52 μg/m3 of air pollution, is receiving a health impact equivalent to smoking 2.4 cigarettes per day. Put another way, 1 cigarette is equivalent to an air pollution of 22 μg/m3 for one day.
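
For readers who want to reproduce the conversion, the arithmetic above can be written out in a few lines of Python. All inputs are the figures quoted in this memo; the helper function is ours.

# The cigarette-equivalence arithmetic from this memo, step by step.
US_SMOKING_DEATHS_PER_YEAR = 480_000
US_CIGARETTES_PER_YEAR = 350e9
DEATHS_PER_CIGARETTE = US_SMOKING_DEATHS_PER_YEAR / US_CIGARETTES_PER_YEAR  # ~1.37e-6

CHINA_PM25_DEATHS_PER_YEAR = 1.6e6
CHINA_POPULATION = 1.35e9
CHINA_MEAN_PM25_UG_M3 = 52.0

# Equivalent cigarettes per person per day for China's average exposure.
cigs_per_year = CHINA_PM25_DEATHS_PER_YEAR / DEATHS_PER_CIGARETTE / CHINA_POPULATION
cigs_per_day = cigs_per_year / 365.0

# PM2.5 level with the same health impact as one cigarette per day.
ug_m3_per_cigarette = CHINA_MEAN_PM25_UG_M3 / cigs_per_day

def cigarettes_per_day(pm25_ug_m3):
    """Rough cigarettes-per-day equivalent of a given PM2.5 level."""
    return pm25_ug_m3 / ug_m3_per_cigarette

print(round(cigs_per_day, 1))              # ~2.4 for the Chinese average
print(round(ug_m3_per_cigarette))          # ~22 ug/m3 per cigarette per day
print(round(cigarettes_per_day(85.0), 1))  # Beijing's ~85 ug/m3 -> ~3.9 per day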

The average PM2.5 in Beijing over the year is about 85 μg/m3, equivalent to about 4 cigarettes per day. The average value in the industrial city of Handan, about 200 km south of Beijing, is about 120 μg/m3, equivalent to 5.5 cigarettes per day. We were in Beijing when the level rose to 550 μg/m3, equivalent to 25 cigarettes per day. In Harbin, the air pollution has reached the limit of the scale, 999 μg/m3; that would be equivalent to 45 cigarettes per day. As we write this, the air pollution in New Delhi, India, is 547 μg/m3, equivalent to about 25 cigarettes each day. A recent peak reported in the Washington Post for the city of Shenyang [4] set a new record of 1,400 μg/m3, equivalent to over three packs of cigarettes per day for every man, woman, and child living there.

Here is the rule of thumb: one cigarette per day is the rough equivalent of a PM2.5 level of 22 μg/m3. Double that level, and it is equivalent to 2 cigarettes per day. Of course, unlike cigarette smoking, the pollution reaches every age group.

The EPA estimates [5] that the average air pollution in the United States in 2013 was 9.0 μg/m3. That is equivalent to 0.41 cigarettes per day for every person in the US. From our crude calculation, and taking into account the US population, that average exposure would be expected to lead to 66,000 deaths per year in the US. That is in reasonable agreement with the value of 52,000 per year published by Caiazzo et al. [6]

Europe’s Environment Commissioner, Janez Potocnik, noted that pollution caused 400,000 premature deaths in 2010 in Europe.7 That’s equivalent, for the EU population of 508 million, to everyone smoking 1.6 cigarettes per day.

This sounds bad, but it may be even worse. The US EPA estimates that for every smoking death, there are 30 other people who suffer significant smoking-related health impairment.

Comparison of methods with Arden Pope
Arden Pope had published a number equating the average pollution in Beijing to about 0.3 cigarettes per day, nearly a factor of 10 lower than our value. His value was based on a comparison of the weight of the inhaled component of PM2.5 from smoking cigarettes, not on observed health effects. We discussed his number at some length with him, and he subsequently gave us permission to quote his current position as follows: “Although the potential differential toxicity of fine particulate matter air pollution from various sources is not fully understood, fine PM from the burning of coal, diesel, and other fossil fuels as well as high temperature industrial processes may be more toxic than particles from the burning of tobacco.”

For the current memo, rather than just compare the amount of material absorbed in the body, we considered the equivalence of health effects from air pollution and smoking, and that is what our table represents.

Conclusion
Air pollution kills more people worldwide each year than AIDS, malaria, diabetes, or tuberculosis. For the United States and Europe, air pollution is equivalent in detrimental health effects to smoking 0.4 to 1.6 cigarettes per day. In China the numbers are far worse; on bad days the health effects of air pollution are comparable to the harm done by every man, woman, and child smoking three packs of cigarettes (60 cigarettes) per day. Air pollution is arguably the greatest environmental catastrophe in the world today.

  1. Rohde and Muller, 2015, Air Pollution in China: Mapping of Concentrations and Sources, PLOS ONE (available here: http://berkeleyearth.org/air-pollution-overview/)
  2. www.cdc.gov/tobacco/data_statistics/fact_sheets/fast_facts/
  3. Bernard L. Cohen, 1991, Catalog of Risks Extended and Updated, Health Physics vol. 61, pp. 317-335.
  4. http://news.nationalpost.com/news/doomsday-smog-shenyang-records-worst-air-pollution-reading-since-china-started-monitoring-smog
  5. http://www.epa.gov/roe/
  6. http://dx.doi.org/10.1016/j.atmosenv.2013.05.081
  7. http://elpais.com/m/elpais/2013/10/18/inenglish/1382105674_318796.html

By Zeke Hausfather

In recent weeks we’ve seen a political controversy over NOAA’s adjustments to temperature records, with accusations from some in Congress that records are being changed to eliminate a recent slowdown in warming and to lend support to Obama administration climate policies. This makes it sound like the NOAA record is something of an outlier, while other surface temperature records show more of a slowdown in warming. This is not true; all of the major surface temperature records largely agree on temperatures in recent years. This includes independent groups like Berkeley Earth that receive no government funding. A record warm 2014 and 2015 (to date) have largely eliminated any slowdown in temperatures, whether the data are adjusted or not.

Figure 1, below, shows a 12-month running average of temperatures for each of the five series. All are quite similar in trajectory, with only small differences between each series. In all five the last 12 months have been the warmest on record, and 2015 will almost certainly be the warmest year on record in all five.

zekeFigure1

It’s worth noting that while each group has a different methodology, much of the underlying data is the same. NOAA, the UK’s Hadley Centre, and independent researchers Cowtan and Way (C&W) use effectively the same land data (the Global Historical Climatology Network, or GHCN for short). NASA also uses GHCN data, but adds additional stations in the U.S. and in the Arctic and Antarctic. Berkeley uses a much larger set of stations (about 36,000, compared to around 7,000 for the other records), though NOAA will be switching to a similarly large database of stations soon. Berkeley, Hadley, and C&W all use a sea surface temperature series called HadSST3 produced by the Hadley Centre. NOAA and NASA use a sea surface temperature series called ERSST (version 4) produced by NOAA.

Automated adjustments to land data to remove detected problems like station moves or instrument changes are used by NOAA, NASA, and Berkeley Earth; Hadley and C&W do relatively little adjustment to land data outside of quality control. NOAA and Hadley only calculate temperatures in areas with nearby stations, while NASA, Berkeley, and C&W fill in areas without stations using statistical techniques based on the nearest available stations.

No matter which group’s record you use, you end up with a pretty similar global temperature record. Figure 2, below, shows the trend in temperatures for three periods: 1950-present, 1970-present, and the nominal “slowdown” period of 1998-present. It shows that while the 1998-present period is warming a tad slower than the 1970-present period on average, the uncertainties are large and the warming rate over the post-1998 period is pretty much the same as over the longer 1950-present period.

 

zekeFigure2

 

The relatively lower 1998-present trends are also a result of cherry-picking the 1998 El Niño as a start date (since temperatures were anomalously high at that point, the trend thereafter will be lower than one starting before or after the El Niño event). For example, calculated trends from 1996-present or 2000-present are more similar to the 1970-present trends.

 

zekeFigure3

 

The actual adjustments that NOAA does to the record have a relatively small impact on temperatures in recent years, though small changes can have outsized impacts when calculating short-term trends. The larger impacts of NOAA adjustments by far are in the early part of the record, where they raise temperatures compared to the unadjusted series. Contrary to what most folks assume, the net effect of adjustments is to reduce, not increase, the amount of warming that we’ve experienced over the past century.

The fact that independent groups like Berkeley Earth find results nearly identical to NOAA’s should help put to rest any lingering concerns that some nefarious scheme has been hatched among scientists to cook the proverbial books. Rather, temperature data is complex and inhomogeneous, coming from many different sources and instruments over the past 250 years. Some adjustments are needed when switching from buckets to ship engine intake valves to buoys, as each will read temperatures a bit differently. The overall effect of these adjustments is small on a global level, however, and they have relatively little bearing on our understanding of modern warming.

Earlier this month the Shenyang EPA reported the worst levels of air pollution since record keeping began: estimates varied between 1,000 and 1,400 micrograms per cubic meter of PM2.5, the fine particulate matter that has deadly health consequences. The US embassy in Shenyang reported “off the chart” readings.

In our recent study in PLOS ONE on air pollution in China, we estimated that Chinese residents are typically exposed to roughly 50 micrograms per cubic meter of PM2.5, and that this exposure results in roughly 1.6 million deaths per year, or 17% of all deaths. The US EPA recommends that long-term exposure to this particulate matter be limited to no more than 12 micrograms per cubic meter.

Concentrations of 1,000 to 1,400, roughly 100 times what the EPA advises, are hard to fathom. A picture is worth 1,400 words:

This picture taken on November 8, 2015 shows a residential block covered in smog in Shenyang, China’s Liaoning province. A swathe of China was blanketed with dangerous acrid smog after levels of the most dangerous particulates reached almost 50 times World Health Organization maximums. (STR/AFP/Getty Images)

In terms of health effects we could put it this way: breathing air at 1,400 micrograms per cubic meter is the equivalent of every person smoking roughly 3 packs of cigarettes a day, or 60 cigarettes.

As Live Science reports:

That much pollution is “a big deal,” said Dr. Norman Edelman, a senior consultant for scientific affairs with the American Lung Association.

Fine particulate matter is dangerous for human health because the particles are so tiny that they can bypass the body’s normal defense systems, such as the mucus membranes that line the mouth and nose. The particles can penetrate deep into the lungs, and sometimes can even pass through the tissue of the lungs and enter the bloodstream, Edelman said.

Particulate pollution is hard to escape because its sources are so prevalent in modern cities and towns. But breathing in these superfine particles damages the respiratory tract, experts say, and it can worsen people’s pre-existing conditions and increase the risk of new  infections.

If you look at an area that is subjected to spikes in pollution, you’ll see an increase in hospital admissions for lung and heart disease.

The peak values recorded in Shenyang don’t tell the entire story. Air quality throughout the prefecture reached hazardous levels. Below is a map of PM2.5 in China recorded at the time of the peak pollution.

Shenyang

As the map indicates, the values exceeded the highest EPA classification of “Hazardous” and were not confined to the city. In fact, these are the highest levels Berkeley Earth has observed anywhere in China during the 19 months that we have been archiving real-time observations.

In Northeastern China, it is not uncommon for air pollution to spike in October and November. At this time of year, many municipal heating systems reactivate their coal-burning boilers for the winter. That activation process is accompanied by a surge in fine particulate emissions much larger than under normal operating conditions. That activation, coupled with weather conditions that trapped particulates at low altitudes, is the likely immediate cause of this year’s historic haze in Shenyang. A similar, but less severe, peak was seen at about the same time last year:

Shenyang2
Particulate pollution levels averaged over Shenyang Prefecture. Peaks in late October / early November are likely the result of reactivating municipal heating systems. Also visible is a peak coinciding with the use of fireworks during Chinese New Year in February. The prefecture-average peak of nearly 800 is less than the highest levels observed in the city itself.

Berkeley Earth is continuing its research into air pollution, collecting real-time data from China and other parts of the Far East while expanding our scope to include European states. Current PM2.5 conditions for China and Japan can be observed on our real-time map. As regions are added, they will be included in the real-time maps and added to the data archives located here.

Over the past few days we have been publishing the first major update of our data since early 2015. The data can be found here on our data page. Given the time of year and the wide-ranging discussion about whether or not 2014 was a record year, it seemed a good time to assess the probability of 2015 being a record year. In addition, there is an interesting point to be made about how the selection of data and the selection of methods can lead to different estimates of the global temperature index. That particular issue was addressed in the draft version of the IPCC’s 5th Assessment Report:

“Uncertainty in data set production can result from the choice of parameters within a particular analytical framework, parametric uncertainty, or from the choice of overall analytical framework, structural uncertainty. Structural uncertainty is best estimated by having multiple independent groups assess the same data using distinct approaches. More analyses assessed now than at the time of AR4 include a published estimate of parametric or structural uncertainty. It is important to note that the literature includes a very broad range of approaches. Great care has been taken in comparing the published uncertainty ranges as they almost always do not constitute a like-for-like comparison. In general, studies that account for multiple potential error sources in a rigorous manner yield larger uncertainty ranges.” (Draft)

We will return to that issue, but first our results to date, along with an estimate of how the year will end. We start with the land component.

2015 Partial Land
The annual average anomaly is shown in blue. The grey shaded region shows the 95% uncertainty region. The red dot indicates global average for the first 9 months plus uncertainty. The green bars indicate our projection for the year end result

While our record goes back to 1750, here we only show 1850 to present. While we calculate an absolute temperature field, in order to compare with other dataset producers we present anomalies relative to the 1961-1990 period. The estimate for the year end (the green bars) is calculated by looking at the difference between the Jan-Sept average and the Jan-Dec average for every year in the record. This difference and its uncertainty are combined with the Jan-Sept 2015 estimate to produce a prediction for the entire year. As the green bar indicates, there is a modest probability that 2015 land temperatures will set a record.
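
A minimal sketch of that projection step, assuming the monthly anomalies are available as simple arrays (a hypothetical data layout); the real analysis also carries the measurement uncertainty of each year through the calculation.

```python
import numpy as np

# Sketch of the year-end projection described above, using hypothetical
# arrays of monthly anomalies. The real analysis also propagates full
# measurement uncertainties rather than just the spread shown here.

def project_annual(monthly_by_year: dict[int, np.ndarray],
                   partial_year: np.ndarray) -> tuple[float, float]:
    """Project an annual anomaly from the first nine months of data.

    monthly_by_year: complete years, each an array of 12 monthly anomalies.
    partial_year: array of the first 9 monthly anomalies of the current year.
    Returns (projected annual mean, ~95% half-width of the projection).
    """
    # For every complete year, how much does the annual mean differ from
    # the Jan-Sept mean?
    diffs = np.array([year.mean() - year[:9].mean()
                      for year in monthly_by_year.values()])
    projection = partial_year.mean() + diffs.mean()
    # Spread of the historical differences as a rough projection uncertainty
    half_width = 1.96 * diffs.std(ddof=1)
    return projection, half_width

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    # Fake history: a warming trend plus monthly noise, for illustration only
    history = {y: 0.01 * (y - 1950) + rng.normal(0, 0.1, 12)
               for y in range(1950, 2015)}
    jan_sept_2015 = 0.01 * (2015 - 1950) + rng.normal(0, 0.1, 9)
    est, err = project_annual(history, jan_sept_2015)
    print(f"Projected 2015 anomaly: {est:.2f} +/- {err:.2f} C")
```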

In addition to a land product, we also create an ocean temperature product. The source data here is HadSST; however, we apply our own interpolation.

2015 Partial Ocean
The annual average anomaly is shown in blue. The grey shaded region shows the 95% uncertainty region. The red dot indicates global average for the first 9 months plus uncertainty. The green bars indicate our projection for the year end result.

The estimate for ocean temperature indicates a large probability of a record breaking year.

When we combine the Surface Air Temperature (SAT) above the land with an ocean temperature product and form a global temperature index we see the following:

2015 Partial Land and Ocean
The annual average anomaly is shown in blue. The grey shaded region shows the 95% uncertainty region. The red dot indicates global average for the first 9 months plus uncertainty. The green bars indicate our projection for the year end result.

Our approach projects an 85% likelihood that 2015 will be a record year. When we speak of records, we can only speak of a record given our approach and our data. We also need to be aware that focusing on record years can sometimes obscure the bigger point of the data. It is clear that the Earth has warmed since the beginning of record keeping. That is clear in the land record, it is clear in the ocean temperature record, and when we combine those records into the global index we see the same story. Record year or not, the entirety of the data shows a planet warming at or near the surface.

Structural Uncertainty

As the quote from the draft version of AR5 indicated, there is also an issue of structural uncertainty. Wikipedia gives a serviceable definition:

 “Structural uncertainty, aka model inadequacy, model bias, or model discrepancy, which comes from the lack of knowledge of the underlying true physics. It depends on how accurately a mathematical model describes the true system for a real-life situation, considering the fact that models are almost always only approximations to reality. One example is when modeling the process of a falling object using the free-fall model; the model itself is inaccurate since there always exists air friction. In this case, even if there is no unknown parameter in the model, a discrepancy is still expected between the model and true physics.”

It may seem odd to talk about the global temperature index as a “model”, but at its core every global average is a model, a data model. What is being modelled in all the approaches is this: the temperature where we don’t have measurements. Another word for this is interpolation. We have measurements at a finite number of locations, and we use that information to predict the temperature at locations where we have no thermometers. For example, CRU uses a gridded approach in which stations within a grid cell are averaged. Physically, this is saying that latitude and longitude determine the temperature. Grid cells with no stations are simply left empty. Berkeley uses a different approach that looks at the station locations and interpolates amongst them using the expected correlations between stations. This allows us to populate more of the map, and avoids biases due to the arbitrary placement of grid cell boundaries.
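
To make the contrast concrete, here is a toy Python sketch, using made-up station data, of the two "data models": a CRU-style grid-cell average that simply leaves empty cells out of the mean, and a simple inverse-distance interpolation that predicts temperatures at unsampled locations. Berkeley's actual method is kriging with fitted correlation functions; the inverse-distance weighting here is only a stand-in for the idea of interpolating between stations.

```python
import numpy as np

# Toy comparison of two "data models" for an area average, using made-up
# station data. Real analyses work with anomalies on a sphere; this flat
# toy only illustrates the structural choice between the two approaches.

rng = np.random.default_rng(1)
stations = rng.uniform(0, 10, size=(30, 2))            # x, y positions
temps = 20 + 0.5 * stations[:, 0] + rng.normal(0, 0.3, 30)

# CRU-style: average stations within each grid cell, skip empty cells
def grid_average(positions, values, cell=2.5, extent=10.0):
    n = int(extent / cell)
    means = []
    for i in range(n):
        for j in range(n):
            mask = ((positions[:, 0] // cell == i) &
                    (positions[:, 1] // cell == j))
            if mask.any():
                means.append(values[mask].mean())
    return np.mean(means)            # empty cells simply don't contribute

# Interpolation-style: predict every point of a fine grid from the stations
def idw_average(positions, values, extent=10.0, step=0.5, power=2.0):
    xs = np.arange(step / 2, extent, step)
    grid = np.array([(x, y) for x in xs for y in xs])
    d = np.linalg.norm(grid[:, None, :] - positions[None, :, :], axis=2)
    w = 1.0 / np.maximum(d, 1e-6) ** power
    field = (w * values).sum(axis=1) / w.sum(axis=1)
    return field.mean()              # every location contributes

print("grid-cell average:    ", round(grid_average(stations, temps), 2))
print("interpolated average: ", round(idw_average(stations, temps), 2))
```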

If we like, we can assess the structural uncertainty by comparing methods using the same input data. We did that here, comparing the Berkeley method with the GISS and CRU methods. That test shows the clear benefits of the BE approach: given the same data, the BE predictions of temperatures at unmeasured locations have a lower error than those of the GISS approach or the CRU approach.

However, comparisons between various methods are usually not so clean. In most cases, not only is the method different, but the data is different as well. Below is a comparison with the other indices, focused in particular on the last 15 years.

2015 Partial Land and Ocean Compare2

The differences we see between the various approaches come down to two factors: differences in datasets and differences in methods. While the four records mostly fall within the uncertainty bands, NCDC does appear to have an excursion outside this region; and if we look toward year’s end, it appears that their record shows more warmth than the others.

In order to understand this, we took a closer look at NCDC. We will start with some material we created while doing our original studies. Over the course of the climate debate, some skeptics and journalists have irresponsibly insinuated that the GHCN records have been “manipulated”. This matters because CRU, GISS, and NCDC all use GHCN records. We can test the hypothesis of “manipulation” by comparing what our method shows when using GHCN data with what it shows when using only non-GHCN data. If the hypothesis were true, we would expect to find differences between the two records. We find no such evidence, and we reject that hypothesis.

GHCN_NonGHCN_Compare

This implies that the reason for the difference between NCDC and BE probably doesn’t lie entirely in the choice of data, since we get the same answer whether we use GHCN data or not. It suggests that the NCDC method, or some interaction of data and method, is the reason for the difference between BE and NCDC.

Since NCDC uses a gridded approach, there is the possibility that the selection of grid size is driving the difference. We see a similar effect with CRU, which uses a 5-degree grid. That choice results in grid cells with no estimate; in short, they don’t estimate the temperature of the entire globe. To examine this, we start by looking at the Berkeley global field:

map

 

There are two things to note here: first, the large positive anomaly in the Arctic, and second, the cool anomalies at the South Pole. In some years CRU has a lower global anomaly because they do not estimate temperatures where the world is warming the fastest: the Arctic. This year, however, we see the opposite effect with NCDC: their record is warmer because of missing grid cells at the South Pole. The choice of grid cell size influences the answer; in some years it results in a warmer record, and in other years a cooler one.

NOAA 2015 Prelim Map

Since NCDC uses GHCN data and since they use a gridded approach with grid cells that are too small, they end up with no estimate for the cooling South Pole. The end result is a temperature index that runs hotter than other approaches. For example, both GISS and Berkeley Earth use data from SCAR for Antarctica. When you combine SCAR with GHCN as both BE and GISS do, there are 86 stations with at least one month of data in 2015.

GISS can be viewed here. With 1200 km gridding, the global average anomaly for September is 0.68 °C. If we switch to 250 km gridding, the anomaly increases to 0.73 °C. This year, less interpolation results in a warmer estimate. Depending on the year, and depending on the warming or cooling at either pole, the very selection of a grid size can change the estimated anomaly. Critics of interpolation may have to rethink their objections.

2015 looks like it is shaping up to be an interesting year, both from the perspective of “records” and from the perspective of understanding how different data and different methods can result in slightly different answers. It is most interesting because it may lead people to understand that interpolation, or infilling, can lead to both warmer and cooler records depending on the structure of warming across the globe.

 

 

As the AP reports, Lelieveld has published a paper in Nature on the mortality associated with outdoor air pollution that confirms what Berkeley Earth found in its study of air pollution in China. By their estimate, ~1.357 million deaths in China are caused by air pollution. By our estimate there are ~1.6 million deaths per year in China due to air pollution, roughly 18% more than Lelieveld’s computer modelling estimates. When we consider the uncertainties involved in both estimates, the conclusion is essentially the same: air pollution kills, and not only in China but around the world. By their estimate, worldwide deaths top 3 million per year.

 

As Richard Muller writes:

“The pieces are all fitting together; the picture is emerging from the jigsaw puzzle.  The case is getting stronger that air pollution around the world is a severe, perhaps the most severe environmental disaster in the world today.  Someday it might be overtaken by global warming, but air pollution is today’s killer. Worldwide, PM2.5 kills more people per year than AIDS, malaria, diabetes or tuberculosis, and its effects are most damaging in the developing world. But even in the US, air pollution is responsible every year for more deaths than those caused by automobile accidents.”

 

The Lelieveld work provides an important complement to our study. Both studies used the same WHO approach to estimating mortality from PM2.5 concentrations; however, the two papers took different approaches to estimating the concentrations themselves. Berkeley Earth worked from ground station data: air quality measurements taken at ground level at over 900 locations in China. We recorded the actual concentration levels in situ. The Lelieveld study took a different approach.

 

Working from the Emissions Database for Global Atmospheric Research (EDGAR), which estimates source emissions, and combining that with a weather and chemical transport model, Lelieveld et al. were able to estimate concentrations globally. This approach has uncertainties, since it relies on estimates of emissions and their sources, not to mention the uncertainties involved in estimating transport. Nevertheless, their mortality estimates match ours. Since the results match in areas where we have actual observations of PM2.5 concentrations, we have some measure of confidence in their global numbers.

 

One benefit of the modelling approach is that it allows you to make estimates in areas where you have no air quality measurement systems. As Berkeley Earth continues to expand its collection of observations from Japan, India, Europe, and other parts of the world, we will be in a position to report on the accuracy of the modelling in those regions.

 

In our study of air pollution in China, we stumbled upon two extraordinary events. The first, which we initially thought was a fluke, turned out to be a single day when air pollution across China was more extensive than we had seen in a year’s worth of data. It was a mystery until lead scientist Robert Rohde asked a key question: could this have been the Chinese New Year?

The second extraordinary event lasted over a week, and it was a period of remarkably clean air over Beijing — achieved purposefully but undoubtedly at no small expense by the Chinese government.  They wanted blue skies for their parade celebrating the liberation of China from Japan in World War II, and they got those blue skies.  If nothing else, their success showed that, in principle, the Chinese can indeed control their own air quality.
These are extraordinary events. The backdrop is the work recently published by Berkeley Earth describing air pollution in China. To produce it, we used the same mathematical methods that we had used in our study of global warming. The study covered four months of data. We wrote:

 

During our analysis period, 92% of the population of China experienced >120 hours of unhealthy air (US EPA standard), and 38% experienced average concentrations that were unhealthy. China’s population-weighted average exposure to PM2.5 was 52 μg/m3. The observed air pollution is calculated to contribute to 1.6 million deaths/year in China [0.7–2.2 million deaths/year at 95% confidence], roughly 17% of all deaths in China.

Starting in 2012, China began the development of an air quality monitoring system that now includes over 900 stations in nearly 200 cities. These stations report hourly data on PM2.5, PM10, SO2, NO2, O3, and CO. Unlike other studies that relied on modelling or satellite observations, our study compiled data from ground stations. The density of the stations and the frequency of reporting allowed us to build area estimates for each of these pollutants.

That data has been compiled into datasets available in our data repository, but for a quick idea of what the data actually looks like over time, this YouTube video helps:

In addition, we provide a real-time Google map view of pollution over China.

Capture

 

One of the challenges was establishing a quality control procedure for the data. Since on-site surveys of over 900 stations and instruments were precluded, we had to rely on statistical methods for assessing data quality and for reducing the impact of outliers, badly calibrated instruments, and other possible problems. For example:

 The most common quality problem was associated with stuck instruments that implausibly reported the same concentration continuously for many hours.  A regional consistency check was also applied to verify that each station was reporting data similar to its neighboring stations.  Approximately 8% of the data was removed as a result of the quality control review.

In addition, sites that reported less than 30% of the time, or that had probable errors in their latitude/longitude locations (33 sites), were eliminated from the analysis.
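
As an illustration of the kind of statistical screening described above, here is a small Python sketch of a stuck-instrument check and a crude neighbor-consistency check applied to hourly PM2.5 series. The data are synthetic and the thresholds (12 hours, 4 sigma) are arbitrary examples, not the values used in the published analysis.

```python
import numpy as np

# Illustrative quality-control checks on hourly PM2.5 data.
# Thresholds are arbitrary examples, not the paper's actual settings.

def flag_stuck(series: np.ndarray, max_repeats: int = 12) -> np.ndarray:
    """Flag runs where the same value repeats for more than max_repeats hours."""
    flags = np.zeros(series.size, dtype=bool)
    run_start = 0
    for i in range(1, series.size + 1):
        if i == series.size or series[i] != series[run_start]:
            if i - run_start > max_repeats:
                flags[run_start:i] = True
            run_start = i
    return flags

def flag_regional_outliers(station: np.ndarray, neighbors: np.ndarray,
                           n_sigma: float = 4.0) -> np.ndarray:
    """Flag hours where a station departs wildly from the mean of its neighbors."""
    neighbor_mean = neighbors.mean(axis=0)
    resid = station - neighbor_mean
    return np.abs(resid - resid.mean()) > n_sigma * resid.std()

if __name__ == "__main__":
    rng = np.random.default_rng(2)
    truth = 60 + 20 * np.sin(np.linspace(0, 12, 240))      # fake regional signal
    neighbors = truth + rng.normal(0, 5, (4, 240))
    station = truth + rng.normal(0, 5, 240)
    station[100:130] = station[100]                         # simulate a stuck sensor
    bad = flag_stuck(station) | flag_regional_outliers(station, neighbors)
    print(f"{bad.mean():.1%} of hours flagged")
```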

The approach taken to generating the fields was similar to that taken in our temperature analysis, where kriging was employed. One aspect of the approach that merits comment is the smoothing we employ to improve the overall estimates:

An additional feature of the correlation function, … is the correlation at zero distance, R(0). With perfect data one would expect R(0) ≈ 1, but this diverges from unity as a result of “noise” in the data, including instrumental noise, errors in reported station location, and ultra-local pollution sources that don’t significantly impact more than one station. The fraction of variance in the typical station record attributable to such “noise” can be estimated as 1 – R(0). The analysis framework was intentionally designed to allow for R(0) < 1 as a method of compensating for noisy and erroneous data. With R(0) < 1, the interpolated field will not exactly match the observations at a station location, but instead the field will be somewhat smoother in a way that attempts to compensate for the typical level of noise in the observations. Along with the quality control steps described above, this approach plays an important role in controlling for potential problems in the data set.

With quality control procedures that reject outliers and the smoothing inherent in kriging, one obvious question is whether the approach still allows us to capture “spikes” that are real rather than outliers and, on the other hand, whether the smoothing glosses over local minima.

China_pm25_Average_TimeSeries

Shown above is the average PM2.5 over China from April 2014 through August 2015. The spike that looks like it might be an outlier is Chinese New Year, February 19, 2015.

While the Chinese government put some restrictions on fireworks, we are still able to see a marked difference in the time series. Our work in 2015 comports well with a detailed look at CNY in 2014. In this case, it would appear that the quality control process is retaining meaningful, or real, spikes in the data streams.

Clear Skies.

The other concern, related to the “smoothing” of the kriging approach, is how the method handles the other extreme: are clear-sky days preserved?

 

Prior to the parade our method gives the following picture of PM2.5 concentrations:

China_PM25_Before_Military_Parade

Contrast that with the data from the same week in 2014, as shown below:

China_PM25_Military_Parade_Week_2014

The relative difference (decrease in concentration) is shown below:

 

China_Military_Parade_Map_Compare

The preparations required to clear the sky involved shutting down over 10,000 businesses:

China plans to stage its biggest military parade in years next week, as 12,000 soldiers and assorted tanks and missiles take to the capital’s streets to celebrate the defeat of invading Japanese forces 70 years ago. But for many here, the victory parade is meaning a loss of business. To clear the capital’s notoriously dirty air before delegates from 30 countries arrive next week, China has ordered more than 10,000 factories and a number of construction sites in and around Beijing to close or reduce output. Officials have put new limits on drivers, local shop owners and even electronic commerce, citing concerns about security and traffic.

 

While there is much to comment on regarding man’s impact on air quality in both these cases, the focus of our research was collecting and preserving the observational record so that policymakers can make informed decisions. Showing that both types of extremes are preserved with the methodology increases our confidence in the quality of the data and the method.

Can natural gas help us reduce climate change by acting as a bridge fuel away from coal? New research from Berkeley suggests that it can, even if it modestly delays the date at which we switch to renewables.

The U.S. is in the midst of a natural gas boom. The combination of horizontal drilling and hydraulic fracturing has caused gas prices to tumble, resulting in a significant shake-up of the U.S. energy mix. Over the past 7 years U.S. CO2 emissions from electricity generation have fallen an impressive 17%, driven in large part by the replacement of dirty coal-fired generation by cleaner natural gas.

Natural gas has two major benefits over coal: it has lower carbon emissions per unit of chemical energy, and it can be converted into electricity at higher efficiency (less energy is lost as waste heat). Combined, these two factors mean that the CO2 emissions from new natural gas power plants can be as little as one third of the emissions of existing coal plants.

However, natural gas also has a large potential downside: if it leaks out before you burn it, the 100-year average climate effect of that leaked methane is about 12 times worse for the climate than the effect of the CO2 from burning the same amount of gas. Additionally, a large investment in gas could potentially delay the date at which we switch to a near-zero-carbon technology compared to a world where we stuck with coal for longer. On the flip side, gas makes it easier to have a large amount of intermittent renewables on the electric grid without causing disruption.

A new paper from Berkeley Earth looks in depth at how different gas leakage rates, generation efficiencies, and potential delays in zero-carbon alternatives impact the viability of gas as a bridge fuel.

As author Zeke Hausfather explains, “If we replaced current coal generation with new natural gas power plants today, and leakage rates end up being the EPA’s current best estimate, we could use that natural gas for 2.4 years for every year of coal that it replaces before breaking even on warming over the next 100 years.”

“If you compare a coal plant used for 10 years and replaced by renewables to a gas plant used for 24 years and replaced by renewables, you get the same amount of warming. This means that you could end up delaying renewables by quite a bit before the climate benefit of using gas as a bridge fuel is eliminated.”

The results are somewhat sensitive to natural gas leakage rates, which are currently highly uncertain; while the EPA estimates leakage of slightly below 2% of total production, others have found that leakage rates might be as much as 4% or above. However, the paper finds that it would take a leakage rate of 10% to make new gas worse than existing coal if both are used over the next 100 years and renewables are not delayed; for a shorter 30-year gas bridge, a leakage rate of over 13% would be required. This is because the methane has only an 8.6-year half-life in the atmosphere, and breaks down relatively quickly once gas stops being used. If renewables are delayed, the allowable leakage rate is lower.
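
The structure of this comparison can be illustrated with a crude back-of-envelope model built only from the round numbers quoted above: a new gas plant emits roughly one third the CO2 of an existing coal plant per unit of electricity, and leaked methane has roughly 12 times the 100-year climate effect of the CO2 from burning that same gas. Because this toy ignores the timing of methane decay and the other factors treated in the paper, it puts the break-even leakage in the low teens rather than at the paper's 10%; it is illustrative only, not the paper's calculation.

```python
# Crude back-of-envelope for the gas-vs-coal comparison, using only the
# round numbers quoted in the text. It overstates the break-even leakage
# (~14% here vs. the paper's ~10%) because it ignores the time profile of
# methane forcing; it is meant only to show the structure of the comparison.

COAL_EFFECT = 1.0          # climate effect of coal per unit electricity (normalized)
GAS_CO2_EFFECT = 1.0 / 3   # CO2 effect of burned gas per unit electricity
LEAK_MULTIPLIER = 12.0     # 100-yr effect of leaked gas vs. burning that gas

def gas_effect(leak_fraction: float) -> float:
    """Total 100-year climate effect of gas generation, per unit electricity."""
    leaked_per_burned = leak_fraction / (1.0 - leak_fraction)
    return GAS_CO2_EFFECT * (1.0 + LEAK_MULTIPLIER * leaked_per_burned)

if __name__ == "__main__":
    for leak in (0.02, 0.04, 0.10, 0.14):
        ratio = gas_effect(leak) / COAL_EFFECT
        print(f"leakage {leak:.0%}: gas effect is {ratio:.2f}x coal")
```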

“Natural gas is still a fossil fuel, and cannot be a long-term solution if the U.S. is to aggressively reduce greenhouse emissions,” Hausfather warns. “A gas bridge would likely have to last less than 30 years, and strong efforts would have to be made to ensure that natural gas leakage rates are kept low. Cheap natural gas can also compete with renewables, and there is an important role for renewable portfolio standards and other government programs to promote the adoption of near-zero-carbon technologies.”

“This research suggests that using natural gas as a bridge fuel away from coal is viable if we cannot immediately transition to near-zero carbon technologies. Coal is responsible for the bulk of U.S. CO2 emissions from electricity generation, and gas provides a practical way to reduce such emissions, even when we include the effects of fugitive methane.”

Paper:
Bounding the Climate Viability of Natural Gas as a Bridge Fuel to Displace Coal, by Zeke Hausfather, is published in the journal Energy Policy.

http://authors.elsevier.com/a/1RQ2~14YGgMDsF

Images:
https://www.dropbox.com/sh/y4t77mbm4eagz1b/AACu3uYi5Fvuz9GWD6fBiZvma?dl=0

Contacts:
Zeke Hausfather – zeke@berkeleyearth.org, 917-520-9601
Berkeley Earth; Energy and Resources Group, U.C. Berkeley

Mountains of ink have been spilled in recent years on whether or not global warming has paused or slowed down. We’ve discussed it extensively in the past, and numerous studies have examined whether the apparent pause might have been caused by additional ocean heat uptake, small volcanoes, a weak solar cycle, poor Arctic coverage in existing datasets, and/or a whole host of other possible explanations.

A new high-profile paper released last week in Science by Tom Karl and colleagues at NOAA argues that any slowdown in warming ended (if it ever really existed in the first place) as a result of two simple factors: dealing with biases introduced by taking temperature readings from buoys and ships, and the passage of time.

In a newly released estimate of global temperatures, they argue that the rate of warming over the past 17 years is no different than that for the prior 50 years, and that there is no apparent pause or slowdown in warming.

New corrections versus old corrections

NOAA’s National Centers for Environmental Information (now housing what had been the National Climatic Data Center) has long produced one of the two official U.S. temperature records (the other is produced by NASA). The figure above shows the old record in red and the new record in black. Apart from a period prior to 1940 and after 2006 or so, the two are largely the same, and their long-term warming rates are quite similar.

In this new update, Karl and his co-authors made two major revisions to the previous approach. First, they updated their land record to include a much larger database of stations: around 32,000, compared with the 7,000 or so previously used. They did this by incorporating the databank produced in recent years by the International Surface Temperature Initiative, and running it through NOAA’s algorithm to detect and correct for instrument changes, station moves, time of observation changes, urban heat island effects, and other factors that could potentially introduce bias into the record.

The new land record is quite similar to the recent Berkeley Earth project in the data used and resulting temperature estimates, though the change from the old NOAA land record is relatively small and has little impact on the changes to the global record.

The second and far more important change the researchers made was to update the ocean  temperature series used. NOAA previously had used a series called Extended Reconstructed Sea Surface Temperature (ERSST) version 3. The new release updates this to version 4, which introduces new corrections to the record. These account for the way ships measure temperature: whether they lower buckets into the water or measure water temperature through valves in the ship’s hull, which can make a significant difference in the resulting temperature estimates.

NOAA Argo buoy and ship

They also added a correction for temperatures measured by floating buoys vs. ships. A number of studies have found that buoys tend to measure temperatures about 0.12 degrees C (0.22 F) colder than ships report at the same time and location. As the number of automated buoy instruments has dramatically expanded in the past two decades, failing to account for the fact that buoys read colder temperatures added a spurious cooling bias to the resulting ocean record. NOAA also added new adjustments for changes in ship measurement techniques, as ships transitioned from buckets to engine intake valves. These changes are by far the largest factors responsible for the difference between temperatures over the past 17 years in the new record and those in the prior NOAA record.
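
The buoy correction itself is conceptually simple. The sketch below, with synthetic data and the 0.12 degree C offset taken from the text, shows how adding the offset to buoy readings before averaging removes the artificial cooling that otherwise creeps in as the buoy share of observations grows; the actual ERSST v4 procedure is, of course, far more involved.

```python
import numpy as np

# Toy illustration of the buoy/ship bias described above. Buoys read about
# 0.12 C colder than ships at the same place and time; as the buoy share of
# observations grows, an uncorrected average acquires a spurious cooling trend.
# Data are synthetic; the real ERSST v4 procedure is far more involved.

BUOY_OFFSET = 0.12  # degrees C, ship-minus-buoy difference quoted in the text

rng = np.random.default_rng(3)
years = np.arange(1995, 2015)
true_sst = 0.02 * (years - 1995)                 # fake underlying warming
buoy_share = np.linspace(0.1, 0.8, years.size)   # buoys take over the network

ship_obs = true_sst + rng.normal(0, 0.05, years.size)
buoy_obs = true_sst - BUOY_OFFSET + rng.normal(0, 0.05, years.size)

uncorrected = buoy_share * buoy_obs + (1 - buoy_share) * ship_obs
corrected = buoy_share * (buoy_obs + BUOY_OFFSET) + (1 - buoy_share) * ship_obs

def trend_per_decade(series: np.ndarray) -> float:
    """Least-squares trend in degrees C per decade."""
    return np.polyfit(years, series, 1)[0] * 10

print(f"true trend:        {trend_per_decade(true_sst):.3f} C/decade")
print(f"uncorrected trend: {trend_per_decade(uncorrected):.3f} C/decade")
print(f"corrected trend:   {trend_per_decade(corrected):.3f} C/decade")
```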

Global ocean temperatures

While NOAA uses the ERSST ocean temperature series, other groups like the Hadley Centre in the UK have their own record, HadSST version 3. They also make adjustments for buoys in recent years and changes in ship measurement practices, but end up with a somewhat different estimate than ERSST. The figure above shows the old ERSST 3, the new version 4, and HadSST side by side. ERSST 4 and HadSST differ noticeably between 1920 and 1970, but are rather similar before 1920 and between 1970 and the late 1990s.

Since 1998, during the period of the reported slowdown in warming, data from ERSST 3 were noticeably lower than from HadSST. The new ERSST 4 increases temperatures after 2006 to match HadSST, mostly because of the new buoy corrections, but is still lower between 1998 and 2006. This difference explains why global temperature records based on HadSST tend to show flatter temperatures over the past 17 years, while the new NOAA record shows a more rapid trend. Over the longer term, the two show quite similar trends, but those small differences can be blown somewhat out of proportion when looking at short-term trends.

Global average temperatures by year

The new NOAA record really isn’t much different from that of other groups, as shown in the figure above. All have shown substantial warming since 1970, and even the differences after 1998 are relatively small. In all cases, the warming seen isn’t inconsistent with the prior warming trend, even if the short-term trends vary a bit.

temperature trends comparison v3

Consider a direct comparison of the trends between 1950 and 1997 and those between 1998 and 2014, the “pause” period. This approach is quite similar to that taken by Karl and his colleagues in Figure 1 of their paper, but applied to all the major groups estimating global temperatures. While the new NOAA series (Karl et al) is warming noticeably faster in the recent period than the old NOAA series or the Hadley record, it is actually warming less rapidly than the Cowtan and Way series and at about the same rate as the Berkeley Earth series, each of which has shown little or no pause compared to the 1950 to 1997 period. However, when compared to the more recent (and faster warming) 1970 to 1997 period, all records show noticeably less warming in the recent period. Both prior periods were generally consistent with climate model projections (shown in red), but observations are decidedly on the low side of model trends during the recent 1998-2014 period.

All of which raises the second main reason to conclude that the “pause,” such as it was, is over: the passage of time.

Back when the IPCC discussed the issue, data were available only through 2012. The two years since then have been quite warm, with 2014 the warmest year on record in most series. With 2015 shaping up at this point to likely be a warm El Niño year, and one potentially setting a new global temperature record, even series like Hadley’s HadCRUT will soon show little pause.

With new corrections versus without new corrections

Some of the early critics of the new Karl et al paper are contending that adjustments for buoys, ship measurement types, station moves, and the like in effect amount to manipulating the record, creating more warming than would otherwise occur. However, as Karl and his colleagues point out, the net effect of their adjustments is actually to reduce the amount of warming over the past century, as shown in the figure above from their paper. There are slightly higher temperatures after 1998 as a result of their adjustments, but much smaller than the changes prior to 1940.

As one of the authors of the paper, Russell Vose, mentioned to the New York Times in its report on the new NOAA study, “If you just wanted to release to the American public our uncorrected data set, it would say that the world has warmed up about 2.071 degrees Fahrenheit since 1880. Our corrected data set says things have warmed up about 1.65 degrees Fahrenheit. Our corrections lower the rate of warming on a global scale.”

Global mean surface temperature anomalies

Despite the absence of a pause in recent data, temperatures are still on the low end of climate model predictions in recent years. However, Gavin Schmidt at NASA GISS argues that this is the case primarily because models do not properly account for recent changes in volcanic emissions, solar forcing, and man-made aerosols. When models take those factors into account, Schmidt argues that this “places the observations well within the modified envelope” regardless of which temperature record is used.

No one, of course, expects the new NOAA report to end the debate on the warming slowdown (pause, “hiatus”, or whatever terminology suits one’s preferences). Among those indelibly doubtful of the underlying climate science, this latest report assuredly will provide just more fuel for their fires. And among the “consensus” scientific community, the strengths and weaknesses, significance and limitations are certain to be grist for ongoing analyses and interpretation in professional peer-reviewed journals and at upcoming professional conferences.

Time will tell whether Schmidt’s own forecast – “So while not as dead as the proverbial parrot, the search for dramatic explanations of some anomalous lack of warming is mostly over.” – ends up being prescient.

Also posted on Yale Climate Connections.

There has been much discussion of temperature adjustment of late in both climate blogs and in the media, but not much background on what specific adjustments are being made, why they are being made, and what effects they have. Adjustments have a big effect on temperature trends in the U.S., and a modest effect on global land trends. The large contribution of adjustments to century-scale U.S. temperature trends lends itself to an unfortunate narrative that “government bureaucrats are cooking the books”.

Figure 1. Global (left) and CONUS (right) homogenized and raw data from NCDC and Berkeley Earth. Series are aligned relative to 1990-2013 means. NCDC data is from GHCN v3.2 and USHCN v2.5 respectively.

Having worked with many of the scientists in question, I can say with certainty that there is no grand conspiracy to artificially warm the earth; rather, scientists are doing their best to interpret large datasets with numerous biases such as station moves, instrument changes, time of observation changes, urban heat island biases, and other so-called inhomogeneities that have occurred over the last 150 years. Their methods may not be perfect, and are certainly not immune from critical analysis, but that critical analysis should start from a position of assuming good faith and with an understanding of what exactly has been done.

This will be the first post in a three-part series examining adjustments in temperature data, with a specific focus on U.S. land temperatures. This post will provide an overview of the adjustments done and their relative effect on temperatures. The second post will examine Time of Observation adjustments in more detail, using hourly data from the pristine U.S. Climate Reference Network (USCRN) to empirically demonstrate the potential bias introduced by different observation times. The final post will examine automated pairwise homogenization approaches in more detail, looking at how breakpoints are detected and how algorithms can be tested to ensure that they are equally effective at removing both cooling and warming biases.

Why Adjust Temperatures?

There are a number of folks who question the need for adjustments at all. Why not just use raw temperatures, they ask, since those are pure and unadulterated? The problem is that (with the exception of the newly created Climate Reference Network), there is really no such thing as a pure and unadulterated temperature record. Temperature stations in the U.S. are mainly operated by volunteer observers (the Cooperative Observer Network, or co-op stations for short). Many of these stations were set up in the late 1800s and early 1900s as part of a national network of weather stations, focused on measuring day-to-day changes in the weather rather than decadal-scale changes in the climate.

Figure 2. Documented time of observation changes and instrument changes by year in the co-op and USHCN station networks. Figure courtesy of Claude Williams (NCDC).

Nearly every single station in the network has been moved at least once over the last century, with many having 3 or more distinct moves. Most of the stations have changed from using liquid-in-glass thermometers (LiG) in Stevenson screens to electronic Minimum Maximum Temperature Systems (MMTS) or Automated Surface Observing Systems (ASOS). Observation times have shifted from afternoon to morning at most stations since 1960, as part of an effort by the National Weather Service to improve precipitation measurements.

All of these changes introduce (non-random) systemic biases into the network. For example, MMTS sensors tend to read maximum daily temperatures about 0.5 C colder than LiG thermometers at the same location. There is a very obvious cooling bias in the record associated with the conversion of most co-op stations from LiG to MMTS in the 1980s, and even folks deeply skeptical of the temperature network like Anthony Watts and his coauthors add an explicit correction for this in their paper.

Figure 3. Time of Observation over time in the USHCN network. Figure from Menne et al 2009.

Time of observation changes from afternoon to morning also can add a cooling bias of up to 0.5 C, affecting maximum and minimum temperatures similarly. The reasons why this occurs, how it is tested, and how we know that documented time of observations are correct (or not) will be discussed in detail in the subsequent post. There are also significant positive minimum temperature biases from urban heat islands that add a trend bias up to 0.2 C nationwide to raw readings.

Because the biases are large and systemic, ignoring them is not a viable option. If some corrections to the data are necessary, there is a need for systems to make these corrections in a way that does not introduce more bias than they remove.

What are the Adjustments?

Two independent groups, the National Climatic Data Center (NCDC) and Berkeley Earth (hereafter Berkeley), start with raw data and use differing methods to create a best estimate of global (and U.S.) temperatures. Other groups like the NASA Goddard Institute for Space Studies (GISS) and the Climatic Research Unit at the University of East Anglia (CRU) take data from NCDC and other sources and perform additional adjustments, like GISS’s nightlight-based urban heat island corrections.

Figure 4. Diagram of processing steps for creating USHCN adjusted temperatures. Note that TAvg temperatures are calculated based on separately adjusted TMin and TMax temperatures.

This post will focus primarily on NCDC’s adjustments, as they are the official government agency tasked with determining U.S. (and global) temperatures. The figure below shows the four major adjustments (including quality control) performed on USHCN data, and their respective effect on the resulting mean temperatures.

Figure 5. Impact of adjustments on U.S. temperatures relative to the 1900-1910 period, following the approach used in creating the old USHCN v1 adjustment plot.

NCDC starts by collecting the raw data from the co-op network stations. These records are submitted electronically for most stations, though some continue to send paper forms that must be manually keyed into the system. A subset of the 7,000 or so co-op stations are part of the U.S. Historical Climatological Network (USHCN), and are used to create the official estimate of U.S. temperatures.

Quality Control

Once the data has been collected, it is subjected to an automated quality control (QC) procedure that looks for anomalies like repeated entries of the same temperature value, minimum temperature values that exceed the reported maximum temperature of that day (or vice-versa), values that far exceed (by five sigma or more) expected values for the station, and similar checks. A full list of QC checks is available here.

Daily minimum or maximum temperatures that fail quality control are flagged, and a raw daily file is maintained that includes original values with their associated QC flags. Monthly minimum, maximum, and mean temperatures are calculated using daily temperature data that passes QC checks. A monthly mean is calculated only when nine or fewer daily values are missing or flagged. A raw USHCN monthly data file is available that includes both monthly values and associated QC flags.

The impact of QC adjustments is relatively minor. Apart from a slight cooling of temperatures prior to 1910, the trend is unchanged by QC adjustments for the remainder of the record (e.g. the red line in Figure 5).

Time of Observation (TOBs) Adjustments

Temperature data is adjusted based on its reported time of observation. Each observer is supposed to report the time at which observations were taken. While some variance of this is expected, as observers won’t reset the instrument at the same time every day, these departures should be mostly random and won’t necessarily introduce systemic bias. The major sources of bias are introduced by system-wide decisions to change observing times, as shown in Figure 3. The gradual network-wide switch from afternoon to morning observation times after 1950 has introduced a CONUS-wide cooling bias of about 0.2 to 0.25 C. The TOBs adjustments are outlined and tested in Karl et al 1986 and Vose et al 2003, and will be explored in more detail in the subsequent post. The impact of TOBs adjustments is shown in Figure 6, below.
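
To see why observation time matters, consider the sketch below, which computes mean daily maxima from the same synthetic hourly series under different reset hours. With an afternoon reset, one hot afternoon can set the maximum for two consecutive observational "days", producing a warm bias; a morning reset does the analogous thing to minima. The empirical version of this test, using real hourly USCRN data, is the subject of the next post.

```python
import numpy as np

# Synthetic illustration of time-of-observation bias. We generate hourly
# temperatures with a diurnal cycle (peaking mid-afternoon) plus day-to-day
# weather, then compute the mean "daily maximum" when the observational day
# ends at different reset hours. An afternoon reset lets one hot afternoon
# count toward two days, inflating the mean maximum. Data are synthetic;
# the real test uses hourly USCRN observations.

rng = np.random.default_rng(4)
n_days = 3650
hours = np.arange(n_days * 24)
diurnal = 5 * np.cos(2 * np.pi * (hours % 24 - 15) / 24)   # peak near 3 pm
weather = np.repeat(rng.normal(0, 4, n_days), 24)          # day-to-day swings
temps = 15 + diurnal + weather + rng.normal(0, 0.5, hours.size)

def mean_daily_max(series: np.ndarray, reset_hour: int) -> float:
    """Mean of daily maxima when the observational day ends at reset_hour."""
    shifted = np.roll(series, -reset_hour)   # put day boundaries at reset_hour
    days = shifted.reshape(-1, 24)           # 24-hour observational days
    return days.max(axis=1).mean()

for label, hour in [("midnight", 0), ("7 am (morning)", 7), ("5 pm (afternoon)", 17)]:
    print(f"reset at {label}: mean Tmax = {mean_daily_max(temps, hour):.2f} C")
```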

Figure 6. Time of observation adjustments to USHCN relative to the 1900-1910 period.

TOBs adjustments affect minimum and maximum temperatures similarly, and are responsible for slightly more than half the magnitude of total adjustments to USHCN data.

Pairwise Homogenization Algorithm (PHA) Adjustments

The Pairwise Homogenization Algorithm was designed as an automated method of detecting and correcting localized temperature biases due to station moves, instrument changes, microsite changes, and meso-scale changes like urban heat islands.

The algorithm (whose code can be downloaded here) is conceptually simple: it assumes that climate change forced by external factors tends to happen regionally rather than locally. If one station is warming rapidly over a period of a decade a few kilometers from a number of stations that are cooling over the same period, the warming station is likely responding to localized effects (instrument changes, station moves, microsite changes, etc.) rather than a real climate signal.

To detect localized biases, the PHA iteratively goes through all the stations in the network and compares each of them to its surrounding neighbors. It calculates difference series between each station and its neighbors (separately for min and max) and looks for breakpoints that show up in the record of one station but none of the surrounding stations. These breakpoints can take the form of both abrupt step changes and gradual trend inhomogeneities that move a station’s record further away from its neighbors. The figures below show histograms of all the detected breakpoints (and their magnitudes) for both minimum and maximum temperatures.
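
A toy version of the pairwise idea, with synthetic data: build the difference series between a target station and the mean of its neighbors, then scan for the single step change that stands out most. The real PHA uses formal changepoint statistics applied across many station pairs; this sketch only shows why a localized break becomes obvious once the shared regional signal is removed.

```python
import numpy as np

# Toy illustration of detecting a localized breakpoint via a difference
# series, in the spirit of the pairwise approach described above. The real
# PHA uses formal changepoint tests across many station pairs; this sketch
# uses synthetic data and a simple "best single step" scan.

rng = np.random.default_rng(5)
n_months = 480
regional = np.cumsum(rng.normal(0.002, 0.05, n_months))    # shared climate signal

neighbors = regional + rng.normal(0, 0.2, (5, n_months))   # well-behaved stations
target = regional + rng.normal(0, 0.2, n_months)
target[300:] -= 0.8                                         # e.g. a station move

# The difference series removes the shared regional signal, exposing the step
diff = target - neighbors.mean(axis=0)

def best_breakpoint(series: np.ndarray) -> tuple[int, float]:
    """Return the index and step size that best split the series into two means."""
    best_i, best_score, best_step = 0, -np.inf, 0.0
    for i in range(12, series.size - 12):                  # require 1 yr either side
        left, right = series[:i], series[i:]
        step = right.mean() - left.mean()
        # t-like score: step size relative to the uncertainty of the two means
        score = abs(step) / np.sqrt(left.var() / left.size + right.var() / right.size)
        if score > best_score:
            best_i, best_score, best_step = i, score, step
    return best_i, best_step

idx, step = best_breakpoint(diff)
print(f"breakpoint near month {idx}, estimated step {step:.2f} C")
```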

Figure 7. Histogram of all PHA changepoint adjustments for versions 3.1 and 3.2 of the PHA for minimum (left) and maximum (right) temperatures. 

While fairly symmetric in aggregate, there are distinct temporal patterns in the PHA adjustments. The largest of these are positive adjustments in maximum temperatures to account for transitions from LiG instruments to MMTS and ASOS instruments in the 1980s, 1990s, and 2000s. Other notable PHA-detected adjustments are minimum (and more modest maximum) temperature shifts associated with a widespread move of stations from inner-city rooftops to newly constructed airports or wastewater treatment plants after 1940, as well as gradual corrections of urbanizing sites like Reno, Nevada. The net effect of PHA adjustments is shown in Figure 8, below.

Figure 8. Pairwise homogenization algorithm (PHA) adjustments to USHCN relative to the 1900-1910 period.

The PHA has a large impact on max temperatures post-1980, corresponding to the period of transition to MMTS and ASOS instruments. Max adjustments are fairly modest pre-1980s, and are presumably responding mostly to the effects of station moves. Minimum temperature adjustments are more mixed, with no real century-scale trend impact. These minimum temperature adjustments do seem to remove much of the urban-correlated warming bias in minimum temperatures, even if only rural stations are used in the homogenization process to avoid any incidental aliasing in of urban warming, as discussed in Hausfather et al. 2013.

The PHA can also effectively detect and deal with breakpoints associated with Time of Observation changes. When NCDC’s PHA is run without doing the explicit TOBs adjustment described previously, the results are largely the same (see the discussion of this in Williams et al 2012). Berkeley uses a somewhat analogous relative difference approach to homogenization that also picks up and removes TOBs biases without the need for an explicit adjustment.

With any automated homogenization approach, it is critically important that the algorithm be tested against synthetic data with various types of biases introduced (step changes, trend inhomogeneities, sawtooth patterns, etc.), to ensure that the algorithm deals with biases in both directions even-handedly and does not create any new systematic biases when correcting inhomogeneities in the record. This was done initially in Williams et al 2012 and Venema et al 2012. There are ongoing efforts to create a standardized set of benchmarks that groups around the world can use to evaluate their homogenization algorithms, as discussed in our recently submitted paper. This process, and other detailed discussion of automated homogenization, will be covered in more detail in part three of this series of posts.
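
As a rough illustration of what such a benchmark involves (this is a sketch of the idea, not the Williams et al 2012 or Venema et al 2012 benchmarks themselves), one can build a synthetic "world" with a known climate signal, corrupt each station with a randomly signed step, drift, or sawtooth bias, hand only the corrupted data to the algorithm under test, and score its output against the withheld truth:

    import numpy as np

    rng = np.random.default_rng(2)
    n_months, n_stations = 1200, 50
    true_climate = 0.0008 * np.arange(n_months)   # known underlying trend

    def corrupt(series):
        """Add one randomly signed, randomly timed inhomogeneity to a clean series."""
        kind = rng.choice(["step", "drift", "sawtooth"])
        start = int(rng.integers(120, n_months - 120))
        size = rng.choice([-1.0, 1.0]) * rng.uniform(0.3, 1.5)
        bias = np.zeros(n_months)
        if kind == "step":                        # abrupt move or instrument change
            bias[start:] = size
        elif kind == "drift":                     # gradual creep, e.g. urbanization
            bias[start:] = np.linspace(0, size, n_months - start)
        else:                                     # sawtooth: drift followed by a reset
            mid = (start + n_months) // 2
            bias[start:mid] = np.linspace(0, size, mid - start)
        return series + bias, (kind, start, size)

    clean, corrupted, answers = [], [], []
    for _ in range(n_stations):
        station = true_climate + rng.normal(0, 0.3, n_months)
        dirty, info = corrupt(station)
        clean.append(station); corrupted.append(dirty); answers.append(info)

    # 'corrupted' is what the homogenization algorithm gets to see; 'clean' and
    # 'answers' are withheld and used afterwards to score how well the inserted
    # biases (of both signs) were detected and removed.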

Infilling

Finally we come to infilling, which has garnered quite a bit of attention of late due to some rather outlandish claims about its impact. Infilling occurs in the USHCN network in two different cases: when the raw data is not available for a station, and when the PHA flags the raw data as too uncertain to homogenize (e.g. in between two station moves, when there is not a long enough record to determine with certainty the impact that the initial move had). Infilled data is marked with an “E” flag in the adjusted data file (FLs.52i) provided by NCDC, and it's relatively straightforward to test the effect it has by calculating U.S. temperatures with and without the infilled data. The results are shown in Figure 9, below:

Figure 9. Infilling-related adjustments to USHCN relative to the 1900-1910 period.

Apart from a slight adjustment prior to 1915, infilling has no effect on CONUS-wide trends. These results are identical to those found in Menne et al 2009. This is expected, because the way NCDC does infilling is to add the long-term climatology of the station that is missing (or not used) to the spatially weighted average anomaly of nearby stations. This is effectively identical to any other form of spatial weighting.
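
A schematic version of that calculation (illustrative only; NCDC's actual routine works on monthly series with its own neighbor weighting) looks like this: the estimate for the missing station is simply its own long-term mean for that month plus the weighted mean departure of its neighbors from their own long-term means.

    import numpy as np

    def infill_value(target_normal, neighbor_values, neighbor_normals, weights):
        """Infilled estimate = the target's own monthly climatology plus the
        weighted average anomaly of the neighboring stations that did report."""
        anomalies = np.asarray(neighbor_values) - np.asarray(neighbor_normals)
        return target_normal + np.average(anomalies, weights=weights)

    # A station whose July normal is 24.0 C, with three neighbors running
    # 1.2, 0.8 and 1.0 C above their own July normals:
    print(infill_value(24.0, [26.2, 25.8, 23.0], [25.0, 25.0, 22.0],
                       weights=[0.5, 0.3, 0.2]))    # -> 25.04

Since only the neighbors' anomalies enter the estimate, a spatial average that includes these infilled values recovers essentially the same anomaly field as one that simply averages the reporting neighbors, which is why the infilled and non-infilled CONUS series in Figure 9 track each other so closely.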

To elaborate, temperature stations measure temperatures at specific locations. If we are trying to estimate the average temperature over a wide area like the U.S. or the globe, it is advisable to use gridding or some more complicated form of spatial interpolation to ensure that our results are representative of the underlying temperature field. For example, about a third of the available global temperature stations are in the U.S. If we calculated global temperatures without spatial weighting, we'd be treating the U.S. as 33% of the world's land area rather than ~5%, and end up with a rather biased estimate of global temperatures. The easiest way to do spatial weighting is gridding, e.g. assigning all stations to grid cells of equal area (as NASA GISS used to do) or of equal latitude/longitude extent (e.g. 5×5 degrees, as HadCRUT does). Other methods include kriging (used by Berkeley Earth) or a distance-weighted average of nearby station anomalies (used by GISS and NCDC these days).
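
For instance, a bare-bones gridding routine (a sketch in the HadCRUT spirit, not any group's production code) might bin station anomalies into 5×5 degree cells, average within each cell, and then weight the cell means by the cosine of latitude so each cell counts in proportion to its area:

    import numpy as np

    def gridded_mean(lats, lons, anomalies, cell=5.0):
        """Average anomalies within lat/lon cells, then area-weight the cell
        means by the cosine of their central latitude."""
        lat_bin = np.floor((np.asarray(lats) + 90.0) / cell).astype(int)
        lon_bin = np.floor((np.asarray(lons) + 180.0) / cell).astype(int)
        cells = {}
        for la, lo, anom in zip(lat_bin, lon_bin, anomalies):
            cells.setdefault((la, lo), []).append(anom)
        means, weights = [], []
        for (la, _), vals in cells.items():
            lat_center = -90.0 + (la + 0.5) * cell
            means.append(np.mean(vals))
            weights.append(np.cos(np.radians(lat_center)))
        return np.average(means, weights=weights)

    # Ten clustered "U.S." stations and two lone stations elsewhere: the cluster
    # collapses into a few grid cells instead of dominating the average.
    print(gridded_mean([40, 41, 42, 40, 41, 42, 40, 41, 42, 40, -30, 60],
                       [-100, -101, -102, -100, -101, -102, -95, -96, -97, -95, 150, 10],
                       [1.0] * 10 + [0.0, 0.0]))

With this weighting, a dense cluster of stations contributes in proportion to the area it actually represents rather than in proportion to its station count.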

As shown above, infilling has no real impact on temperature trends compared to not infilling. The only way to get into trouble is if the composition of the network is changing over time and you do not remove the underlying climatology/seasonal cycle through the use of anomalies or similar methods. In that case, infilling will still give a correct answer, but not infilling will result in a biased estimate, since the underlying climatology of the reporting stations is changing. This has been discussed at length elsewhere, so I won't dwell on it here.
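
A toy two-station example makes the danger concrete (the numbers are made up, chosen only to exaggerate the effect): a warm valley station and a cold mountain station share the same warming trend, but the cold station stops reporting in 1960. Averaging absolute temperatures then produces a spurious jump of about 10 °C, while averaging anomalies does not.

    import numpy as np

    years = np.arange(1900, 2020)
    trend = 0.01 * (years - 1900)                # the same "true" warming everywhere
    warm = 15.0 + trend                          # valley station, 15 C normal
    cold = -5.0 + trend                          # mountain station, -5 C normal
    cold = np.where(years < 1960, cold, np.nan)  # mountain station drops out in 1960

    # Averaging absolute temperatures: a ~10 C jump in 1960 that is pure artifact
    absolute_avg = np.nanmean(np.vstack([warm, cold]), axis=0)

    # Averaging anomalies relative to each station's own 1900-1929 mean: no jump
    base = slice(0, 30)
    anomaly_avg = np.nanmean(np.vstack([warm - warm[base].mean(),
                                        cold - np.nanmean(cold[base])]), axis=0)

    print(absolute_avg[58:62])   # ~5.6, 5.6, 15.6, 15.6: a spurious step up
    print(anomaly_avg[58:62])    # smooth continuation of the 0.01 C/yr trend

Infilling the mountain station's missing years (its own climatology plus the neighbors' anomalies) would avoid the jump in exactly the same way, which is the sense in which infilling and anomaly-based spatial averaging are interchangeable here.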

I’m actually not a big fan of NCDC’s choice to do infilling, not because it makes a difference in the results, but rather because it confuses things more than it helps (witness all the sturm und drang of late over “zombie stations”). Their choice to infill was primarily driven by a desire to let people calculate a consistent record of absolute temperatures by ensuring that the station composition remained constant over time. A better (and more accurate) approach would be to create a separate absolute temperature product by adding a long-term average climatology field to an anomaly field, similar to the approach that Berkeley Earth takes.

Changing the Past?

Diligent observers of NCDC's temperature record have noted that many of the values change by small amounts on a daily basis. This includes not only recent temperatures but those in the distant past as well, and it has created some confusion about why, exactly, the recorded temperatures in 1917 should change from day to day. The explanation is relatively straightforward. NCDC assumes that the current set of instruments recording temperature is accurate, so any time-of-observation or PHA adjustments are applied relative to current temperatures. Because breakpoints are detected through pairwise comparisons, new data coming in may slightly change the magnitude of recent adjustments by providing a more comprehensive difference series.

When breakpoints are removed, the entire record prior to the breakpoint is adjusted up or down depending on the size and direction of the breakpoint. This means that slight modifications of recent breakpoints will impact all past temperatures at the station in question through a constant offset. The alternative would be to assume that the original data is accurate, and adjust any new data relative to the old data (e.g. adjust everything after breakpoints rather than before them). From the perspective of calculating trends over time, these two approaches are identical, and it's not clear that there is necessarily a preferred option.
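
The equivalence of the two conventions is easy to check with a toy series (illustrative only): insert a known +0.8 °C step, remove it once by adjusting everything before the breakpoint and once by adjusting everything after it, and compare.

    import numpy as np

    rng = np.random.default_rng(3)
    months = np.arange(480)
    series = 0.002 * months + rng.normal(0, 0.2, 480)
    series[300:] += 0.8              # artificial inhomogeneity: +0.8 C step at month 300

    trust_present = series.copy()
    trust_present[:300] += 0.8       # NCDC convention: trust the present, adjust the past
    trust_past = series.copy()
    trust_past[300:] -= 0.8          # alternative: trust the past, adjust the new data

    print(np.allclose(trust_present - trust_past, 0.8))    # differ only by a constant offset
    print(np.polyfit(months, trust_present, 1)[0],
          np.polyfit(months, trust_past, 1)[0])             # identical trends

The two adjusted records differ only in where the zero point sits, so any trend calculated from them is the same; the choice mostly affects whether recent or historical values appear to "change" as new data arrive.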

Hopefully this (and the following two articles) will help folks gain a better understanding of the issues in the surface temperature network and the steps scientists have taken to try to address them. These approaches are likely far from perfect, and it is certainly possible that the underlying algorithms could be improved to provide more accurate results. Hopefully the ongoing International Surface Temperature Initiative, which seeks to have different groups around the world send in their adjustment approaches for evaluation using common metrics, will help improve the general practice in the field going forward.