The Cassandra Syndrome: Prediction, Uncertainty, and Fear of (Climate) Change, Part Seven

The seventh installment of a ten-part series that considers the warnings of climate scientists in the context of historical revolutionary scientific theories that met strong resistance from guardians of the status quo…

by: Arthur Hoyle

Read Part One Here! Read Part Two Here! Read Part Three Here! Read Part Four Here! Read Part Five Here! Read Part Six Here!

Part Seven — Weather, Climate, and Global Warming — The Big Picture

Around the time that Darwin’s Origin was published, the science of meteorology was born. While all science is predictive, in that it identifies “laws” or patterns in nature that can reliably be expected to continue, meteorology was predicting a phenomenon — the weather — that everyone experiences every day and plans around — when to plant, what to wear, where and when to go on vacation. Weather prediction quickly became a matter of immediate and widespread interest, an easy conversation starter.

Although the fundamental mechanism of the weather—the movement of heat from the equator to the poles—was recognized as early as 1686 by the British astronomer Edmond Halley, meteorology as a science did not gain a foothold until the middle of the nineteenth century, when improved communications accelerated data sharing. Before the advent of scientific forecasting based on data gathering and analysis, farmers’ almanacs were the source of weather prediction. The predictions extrapolated past weather patterns of rainfall and temperature fluctuations into the future. Because weather is a global phenomenon driven by a host of variables, scientific forecasting was not possible until large sets of data aggregated from many geographical locations could be shared simultaneously. The introduction of the telegraph in 1844 enabled this.

In 1849 the US established a weather network at the Smithsonian Institution. By 1859 there were five hundred observing stations in the network, sharing local data telegraphically, moving the information faster than the weather itself. The number of variables, and the dynamics of weather, made prediction an uncertain science.  Initially, twenty-four hour forecasts issued by the US Army Signal Corps were characterized as “probabilities,” a caveat still in use today (40% chance of rain). In 1890 Congress established the US Weather Bureau within the Department of Agriculture, and in 1908 the Bureau began making weekly forecasts. There was a direct relationship between the extent of the forecast and its degree of uncertainty. The Weather Bureau prioritized accuracy over range, but farmers wanted long-term forecasts around which they could plan. To accommodate them, the Weather Bureau offered monthly forecasts, but at a high level of generality — cold, wet, hot, dry. Coordination among meteorologists was further enhanced by the formation of an International Meteorological Organization in 1879, and by the introduction of universal standard time in 1883.

Climatology — the study of weather patterns over time — evolved as a subdivision of meteorology near the end of the nineteenth century. At its inception, climatology was not interested in forecasting. It studied the past, using different sets of data taken from geologic evidence, and averaging them over time to detect long-term trends of warming and cooling. Its imperative was not speed, as in weather forecasting, but assembling large quantities of accurate data about the history of weather and from that data drawing conclusions about past climate regimes.

But late in the nineteenth century a few scientists began to speculate about the effects of industrialization on the Earth’s atmosphere and weather, and turned climatologists’ eyes towards the future. Eduard Brückner, a German geographer, suspected that warming induced by industrial activity was occurring, and was contributing to drought and desertification in Europe and North America. In 1896 a Swedish chemist, Svante Arrhenius, who understood the role of carbon dioxide (CO2) as a heat-trapping gas, estimated that doubling the amount of CO2 in the atmosphere from its then-current level of about 280 parts per million would raise the global average temperature by 5° to 6° centigrade (C), an increase that would drastically alter Earth’s climate. He believed that CO2 levels in the atmosphere were increasing because of the burning of fossil fuels.
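Arrhenius worked out his estimate laboriously by hand. A rough modern shorthand for the same relationship is the logarithmic forcing approximation sketched below; the 5.35 watts-per-square-meter coefficient and the 0.8 °C per watt-per-square-meter sensitivity used here are commonly cited present-day values, assumed purely for illustration, and are not Arrhenius’s own figures.

```python
import math

# A minimal sketch of the CO2-temperature link Arrhenius estimated by hand.
# Assumptions (not Arrhenius's figures): extra heat trapped ~ 5.35 * ln(C/C0) W/m^2,
# a widely used modern approximation, and ~0.8 degrees C of warming per W/m^2.

C0 = 280.0                # pre-industrial CO2 concentration, parts per million
C = 2 * C0                # the doubling scenario Arrhenius considered
FORCING_COEFF = 5.35      # W/m^2 per natural log of the CO2 ratio (assumed)
SENSITIVITY = 0.8         # degrees C of warming per W/m^2 of forcing (assumed)

forcing = FORCING_COEFF * math.log(C / C0)   # extra energy trapped, W/m^2
warming = SENSITIVITY * forcing              # eventual temperature rise, degrees C

print(f"Forcing from doubling CO2: {forcing:.1f} W/m^2")
print(f"Estimated warming: {warming:.1f} degrees C")
# Prints roughly 3.7 W/m^2 and about 3 degrees C -- lower than Arrhenius's
# 5-6 degree estimate, but the same logarithmic shape he identified.
```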

The atmospheric temperature is regulated by the principle of radiative equilibrium, according to which the Earth maintains a balance between the energy it receives from the Sun and the energy it re-radiates into space. Heat moves from the equator, which absorbs more energy than it radiates, to the poles, which, because they are covered with snow and ice, reflect the Sun’s energy back into space and remain cold. The global circulation of the Sun’s energy is influenced by enormous physical forces — the Earth’s rotation, gravity, and the thermodynamics of rising and falling air masses, among others. The oceans, land surfaces, ecosystems, and agriculture also affect weather patterns and climate.
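The balance can be written as a single equation: the sunlight a planet absorbs must equal the heat it radiates away. The sketch below, a textbook illustration rather than anything specific to this essay, works out the temperature a bare, greenhouse-free Earth would settle at, assuming standard round-number values for the solar constant and the planet’s reflectivity.

```python
# Radiative equilibrium for a bare planet: absorbed sunlight = emitted heat.
# Standard textbook values are assumed; this is an illustration, not a climate model.

SOLAR_CONSTANT = 1361.0   # sunlight arriving at the top of the atmosphere, W/m^2
ALBEDO = 0.30             # fraction of sunlight reflected straight back to space
SIGMA = 5.67e-8           # Stefan-Boltzmann constant, W/m^2/K^4

# Absorbed energy per square meter of surface: S * (1 - albedo) / 4.
# (The factor of 4 spreads the intercepted beam over the whole spherical surface.)
absorbed = SOLAR_CONSTANT * (1 - ALBEDO) / 4

# Emitted energy is sigma * T^4; setting emitted equal to absorbed and solving for T:
T_equilibrium = (absorbed / SIGMA) ** 0.25

print(f"Equilibrium temperature with no greenhouse gases: {T_equilibrium:.0f} K "
      f"({T_equilibrium - 273.15:.0f} degrees C)")
# About 255 K, roughly -18 degrees C: far colder than the ~15 degrees C actually
# observed. The difference is the greenhouse effect described in the next paragraph.
```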

The Earth remains warm and habitable because not all of the Sun’s energy is re-radiated back into space. The atmosphere naturally contains gases — primarily CO2, methane, and water vapor—that act like a blanket, trapping heat — the greenhouse effect. But as the CO2 levels continue to rise from the burning of fossil fuels, the greenhouse effect intensifies, and the temperature of the atmosphere and the oceans rises in order to maintain the radiative equilibrium. As the atmosphere warms, glaciers, ice sheets, and sea ice begin melting, reducing the Earth’s reflective capability and exacerbating the warming trend in a positive feedback loop.
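The amplifying loop can be illustrated with the same energy-balance arithmetic: let the planet’s reflectivity fall as it warms and ice melts, and a modest initial warming feeds on itself until the system settles at a higher temperature. The sketch below does exactly that; the albedo-versus-temperature ramp in it is invented purely for illustration and is not a measured relationship.

```python
# Toy illustration of the ice-albedo feedback, reusing the energy-balance arithmetic
# above. The albedo(temp) ramp is invented for illustration; it is not measured data.

SOLAR_CONSTANT = 1361.0   # W/m^2
SIGMA = 5.67e-8           # Stefan-Boltzmann constant, W/m^2/K^4

def albedo(temp_k):
    """Reflectivity falls as the planet warms and ice melts (illustrative ramp)."""
    if temp_k <= 252.0:
        return 0.32                                   # icier, more reflective planet
    if temp_k >= 257.0:
        return 0.28                                   # less ice, less reflective
    return 0.32 - 0.04 * (temp_k - 252.0) / 5.0       # linear ramp in between

def equilibrium_temp(alb):
    """Temperature at which emitted heat balances absorbed sunlight."""
    return (SOLAR_CONSTANT * (1.0 - alb) / 4.0 / SIGMA) ** 0.25

# First guess: ignore the feedback and use the fixed icy albedo.
temp = equilibrium_temp(0.32)
for step in range(20):
    # Warmer planet -> less ice -> lower albedo -> more absorbed sunlight -> warmer still.
    new_temp = equilibrium_temp(albedo(temp))
    print(f"step {step:2d}: {new_temp:.2f} K")
    if abs(new_temp - temp) < 0.01:
        break
    temp = new_temp
# The loop converges on a warmer equilibrium than the first guess, which is the
# essence of a positive feedback amplifying an initial warming.
```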

Climatologists noticed that the Earth’s atmosphere was warming, as Arrhenius had predicted, but they lacked the tools to ascribe the warming to human activity. Perhaps the warming was a result of natural variability, a phenomenon that reflects the complex variety of inputs into weather and climate. Over the millennia of history and pre-history, periods of warming and cooling have alternated in the rhythm of a sine wave, now above the average, now below it. The degree of warming observed in the early twentieth century was slight, well within the range of natural variability. Then in 1938 a British meteorologist, Guy Callendar, demonstrated a connection between fossil fuel consumption, increased CO2 concentrations, and increased global temperatures. But his findings were met with skepticism because of the uncertain quality of his data and the crudeness of his methods for extrapolating it. In the 1950s, as computing technology became more sophisticated and climatologists began to use computers to model climate behavior, his theory gained credibility.

The advent of large-capacity mainframe computers, developed for the military during World War II, gave climatologists a tool for bringing coherence to the complex variables of weather and climate. These computers had the capacity to store, sort, and recombine vast amounts of climate data.

Climate results from the interactions of a number of linked Earth systems, including the atmosphere, the oceans, the cryosphere (ice and snow), land surfaces, and the biosphere. Modeling the climate on a computer requires data inputs from all of these sources, gathered from around the globe, and with this data climatologists are able to project climate trajectories into the future. Different scenarios can be projected based on variations in the inputs. Because the CO2 content of the atmosphere is a primary factor in climate, simulation models can project the impact of increases (or decreases) in CO2 concentrations on the climate. The most important impact is warming, because warming triggers other changes in the Earth’s climate system, such as heating the oceans and melting sea ice and ice sheets. These changes in turn affect living species, including humans.
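Real general circulation models couple all of these systems in three dimensions on supercomputers. The minimal sketch below only illustrates the logic of scenario projection: a single global-average temperature is stepped forward year by year under two assumed CO2 pathways, using the same logarithmic forcing shorthand as earlier. Every number in it is an assumption chosen for illustration, not output from an actual climate model.

```python
import math

# A minimal sketch of scenario projection: step one global-average temperature
# forward under two assumed CO2 pathways. All values are illustrative assumptions.

FORCING_COEFF = 5.35      # W/m^2 per natural log of the CO2 ratio (assumed)
SENSITIVITY = 0.8         # degrees C per W/m^2 at equilibrium (assumed)
RESPONSE_YEARS = 40.0     # assumed lag while the oceans slowly take up heat
C0 = 280.0                # pre-industrial CO2 concentration, ppm

def project(co2_growth_per_year, years=100):
    """Return the temperature anomaly (degrees C) after stepping forward year by year."""
    co2 = 400.0           # assumed starting concentration, ppm
    anomaly = 1.0         # assumed warming already realized, degrees C
    for _ in range(years):
        co2 += co2_growth_per_year
        equilibrium = SENSITIVITY * FORCING_COEFF * math.log(co2 / C0)
        anomaly += (equilibrium - anomaly) / RESPONSE_YEARS   # relax toward equilibrium
    return anomaly

print(f"High-emissions pathway (+2.5 ppm/yr): {project(2.5):.1f} degrees C above pre-industrial")
print(f"Stabilizing pathway (+0.5 ppm/yr):    {project(0.5):.1f} degrees C above pre-industrial")
```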

The link between CO2 concentrations and global average temperature can be expressed in a graph showing how temperature rises and falls as CO2 concentrations increase or decrease. Over the course of the twentieth century, the line moved steadily upward at an accelerating rate. Because fossil fuel consumption generates large quantities of CO2, climatologists have concluded that human industrial activity is warming the planet. As early as 1956, the physicist Gilbert Plass, studying the effects of CO2 concentrations in the atmosphere, warned of significant and lasting global warming by the end of the twentieth century.

As concern about the potential effects of global warming on Earth’s climate and on human civilization spread among climate scientists around the world, networks and organizations were formed to share data, modeling techniques, and findings. This coordination required reconciling different national systems through standardization of collection and processing methods, a truly global scientific effort comparable in scale to space exploration. Growing awareness of “the CO2 problem” came to a head in the summer of 1988 when the climate scientist James Hansen, Director of NASA’s Goddard Institute for Space Studies, testified before a Senate committee that there was a 99% certainty that “the greenhouse effect has been detected, and it is changing our climate now.” That fall, the United Nations Environment Programme and the World Meteorological Organization jointly established the Intergovernmental Panel on Climate Change to carry out a scientific assessment of the Earth’s climate. The panel was composed of thousands of climate scientists working in government, university, and private laboratories around the world, linked not by the telegraph but by the Internet. The IPCC conducts no research. Rather, it creates and disseminates synthesis reports that present a consensus from the work of many scientists. It issued reports in 1990, 1995, 2001, 2007, and 2013. With each report, the certainty that the climate is warming because of human activity has increased.

Climate change moved from the laboratory into the policy arena in 1992 at the Framework Convention on Climate Change held in Rio de Janeiro. One hundred and sixty-five nations, including the US, agreed to set voluntary goals for greenhouse gas emissions. In 1997, meeting in Kyoto, Japan, the convention drafted a protocol, binding only on developed countries with advanced economies, to reduce greenhouse gas emissions to 1990 levels by the year 2000, a hopelessly unreachable goal. After emissions rose by 11% worldwide, and by 17% in the US, the US withdrew from the protocol, citing the priority of its economy over climate concerns, which it described as full of uncertainties. Russia and Canada followed. In 2015, when the convention met in Paris, the US rejoined the international effort, but our current administration, skeptical of climate science, has threatened to withdraw again in 2020.

The “uncertainties” that the US has cited as a pretext for withdrawing from the Kyoto Protocol had been carefully orchestrated by a coalition of vested corporate interests with large stakes in the fossil fuel industry that supplies society’s energy needs.

To Be Continued

 

Arthur Hoyle is the author of The Unknown Henry Miller: A Seeker in Big Sur, published in March 2014 by Skyhorse/Arcade. He has also published essays in the Huffington Post and the zine Empty Mirror. His second non-fiction book, Mavericks, Mystics, and Misfits: Americans Against the Grain, will be published later this year through Sunbury Press.
