The Cassandra Syndrome


by: Arthur Hoyle


This essay considers climate scientists as descendants of the sibyl Cassandra, whose predictions, always accurate, went unheeded. It aims to place the forecasts and warnings of climate scientists in the context of previous historical revolutionary scientific theories that met strong resistance and denial from guardians of the status quo. It surveys the relationship among fixed belief, uncertainty, and humanity’s reasoning faculty, as expressed in science, across a broad span of history, looking for patterns that expose enduring human tendencies of resistance to change. The destination of the essay is contemporary climate science, where these tendencies are once again playing out, this time in a high-risk environment. Along the way the essay visits ancient Greece during the times of oracles and Aristotle, the Renaissance in Europe during the Copernican revolution, and England and the United States as they responded to Darwin’s theory of evolution.

We live in a time of extremes and uncertainties — ideological, political, economic, and social. Another extreme our planet has just begun experiencing, and about which people are uncertain, is our changing weather and climate. Though the evidence for climate change is steadily mounting, we are resisting appropriate responses, preferring instead doubt, denial, and delay. We want certainty, but when certainty arrives in the form of unshakeable evidence, climate change will have overwhelmed us. We need to act now even though many uncertainties about climate change remain — how soon? how extensive? how extreme? What is certain is that it is happening, and that we are contributing to it. We need to change and adapt.

Social scientists have been studying the relationship between uncertainty and extremism. They’ve found that uncertainty creates a need for fixed belief that propels the doubter towards extremism and rigidity as he seeks stabilization of his sense of place in the world. Uncertainty is experienced within the self as a feeling of aversion. It is a state of mind that people wish to escape. If the uncertainty is experienced as a challenge to be overcome, it can lead to positive action. But if it is experienced as a threat, people are inclined to run from it, to become defensive. If the threat comes in the form of an idea — the Sun, not the Earth, is the center of the universe, the climate is changing — flight will take the form of cognitive denial. The response will be to cling to an existing ideology that resolves the uncertainty, and to exclude information that contradicts the ideology and so disturbs certainty. The sense of certainty is strengthened if the individual allies himself with a group — a political party or religious faith — that upholds a rigid orthodoxy of belief and banishes dissenters. The more extreme the uncertainty, the more extreme the remedy. We can see in the extreme polarization of opinion about climate change the clash of opposing certainties.

Since climate change has been presented as an existential threat to human civilization, the impulse to denial has been strong. Those most vulnerable to its effects are also those most susceptible to campaigns of disinformation and denial, such as have been waged by the fossil fuel industry against climate science. But climate change can also be experienced as a challenge to human ingenuity, cooperation, and adaptability, rather than simply as a threat to “business as usual.” In the past, mankind has moved forward by accommodating radical new ideas about the physical universe and by shedding outworn ideas that have lost their usefulness as practical guides for navigating life. As a species, we are once again in one of those times, with our very survival at stake. Can we meet the challenge?


Part One: The Fall of Troy — A Cautionary Tale

Most of us are familiar with the story of the Trojan War — how Paris, a prince from the city of Troy, visited the Greek King Menelaus of Sparta and violated his host’s hospitality by running off with Menelaus’s beautiful wife Helen. And how Agamemnon, Menelaus’s brother and King of Mycenae, led a Greek army across the Aegean Sea and laid siege to Troy in order to recover Helen. How after ten years of warfare, during which many legendary warriors, including the Greek Achilles and the Trojan Hector, were slain, the Greeks decided, on the advice of the seer Calchas, to take the city by stratagem rather than by force. The wily Odysseus, King of Ithaca, proposed that the Greek army make a show of leaving, burning their tents and appearing to sail away, but leaving behind a large wooden horse as an offering to the goddess Athena to ensure a safe voyage home. The horse would conceal in its belly numbers of Greece’s finest warriors, including Odysseus himself and Menelaus. Another Greek warrior, Sinon, would remain outside the horse, pretending to have been abandoned, and would explain to the Trojans the significance of the offering.

The Greeks assented to this plan, burned their tents, and sailed out of the harbor. The Trojans, believing they had won the war, brought the horse inside the city walls, garlanded it with flowers, and set about feasting in celebration. When night fell, the Greek warriors lowered themselves from the belly of the horse and opened the gates of the city to admit the rest of the Greek army, which had hidden out of sight of the citadel. The invaders slaughtered the Trojan men in their beds, took captive their wives and children, and sacked the city.

One of the most poignant threads in this legendary tale belongs to Cassandra, the sibyl who warned the Trojans that the horse contained enemy warriors, but whose warning was ignored. Cassandra was the daughter of Priam and Hecuba, the king and queen of Troy. As children, she and her brother Helenus were left overnight in a temple of Apollo. Apollo, amongst his other duties, was also the god of prophecy. He took the form of two serpents, which entwined themselves around the two children, and flicked their tongues in the children’s ears. This overtly sexual act transmitted to the children the gift of prophecy.

Years later, as an adult, Cassandra, now a beautiful woman, again spent the night in Apollo’s temple. The god appeared to her, and made known his desire to lie with her. When she rebuffed him, Apollo punished her with the curse that her prophecies would always be accurate but never believed.

Cassandra foretold many of the pivotal events of the Trojan War. When she learned that her brother Paris intended to visit Sparta, she warned him not to go there, but her warning was ignored. During the course of the war, she made predictions of doom that unnerved her father because they jeopardized the morale of the warriors. He confined her to a locked chamber watched over by a guard. The Trojans came to regard her as deranged, her warnings the ravings of a madwoman.

When the horse was brought into the city, Cassandra frantically warned her countrymen of impending doom, but she was met with scorn and mockery. Desperate to expose the Greeks’ plot, she attempted to set the horse on fire, but was restrained. After the city had been sacked, Agamemnon took Cassandra captive and brought her back with him to Mycenae as his concubine. Once arrived, they were both killed by Clytemnestra and her lover Aegisthus, a fate Cassandra had also foreseen.

The story of the Trojan War is a legend featuring the exploits of mythical people aided by gods. It was composed sometime during the eighth century BC by several bards collectively known as Homer. It is a work of imagination, a fantasy, that expresses the aspirations, beliefs, and values of the ancient Greek people.

But there was an actual Troy, and there were city-states on the Greek mainland that in all likelihood interacted with the people who inhabited Troy across the sea in Anatolia, now known as Turkey. Archaeological evidence indicates that the Trojan civilization flourished from 3000 BC to 1200 BC during the Bronze Age. Troy was destroyed and rebuilt numerous times during the course of its history—sometimes by war, sometimes by earthquakes and fires. Its strategic location at the entrance to the Dardanelles made it a trading center between East and West and brought it the wealth that attracted invaders. The city eventually died not from war but because its harbor filled with silt, snuffing out its trade. The legendary Trojan War that “Homer” wrote about in the eighth century BC may have been a composite of many stories about the turbulent history of the city that were handed down orally before being committed to writing.

Regardless of the historical truth or fiction of the tale, the moral of the story resonates with another kind of truth—a truth about human nature. People do not like hearing predictions that contradict or threaten established certainties around which they have constructed a coherent and reassuring world-view. Cassandra was cursed because her purity led her to defy a god. Gods are figures of authority who demand obedience and subservience. As such, they embody entrenched and inflexible ideas, the unquestioned and unalterable norms and assumptions on which societies rely for order and stability. The Trojans prevented Cassandra from exposing the danger that awaited them in the belly of the horse because they regarded it as sacred, an offering to Athena, one of their immortal goddesses. To break it open would be an act of impiety, a taboo. Though the story of Cassandra and the Trojan horse is a fiction, history reminds us that resistance to unpopular ideas that question received wisdom is a human constant that has persisted across the ages.


Part Two: Oracles, Prophecies, and Signs — The Search for Certainty

Human beings, faced with mortality like other living things, wanting to ensure their safety and survival, try to build their lives around certainties that give them protection, or at least the illusion of it. Some of these “certainties” — like citadel walls and anti-ballistic missiles — are physical. Others — like religious beliefs and political credos — are ideological.

Science, with its grasp of physical laws that control matter, gives us certainties. Copernican theory tells us that the Earth is orbiting the Sun, and will continue to do so for a long time, year after year. Newton’s laws tell us that there is such a thing as gravitation, and that gravitation is a controlling principle of the universe on which we can rely. These “laws” are in fact predictions that certain patterns in nature will repeat for the foreseeable future.

But science, as we think of it in the West, is a tool developed relatively recently in human history. It began with the early Greek philosophers in the sixth century BC and was systematized by Aristotle in the fourth. For many thousands of years before there was “science” (called “natural philosophy” until the 19th century), there was magic. Humans attempted to explain and control natural phenomena through religious rituals, such as human sacrifice to placate gods who controlled weather and therefore food supplies, and through magical interventions that, they believed, gave them control over the mysteries of nature. Both science and superstition answer a deeply felt human need for certainty. Certainty gives us the ability to predict what will happen in the future and so to make plans.

There were numerous systems of pre-scientific prediction employed by the peoples of the ancient world. The most notable and widely used methods were oracles and astrology, though divination based on signs and omens — the flight of birds, comets, eclipses — interpreted by seers was also common. Perhaps most famously, the Roman seer Spurinna, after studying the entrails of a sacrificed animal (a practice known as haruspicy), warned Julius Caesar that danger would come no later than the Ides of March (the warning Shakespeare rendered as “beware the Ides of March”), the day (March 15, 44 BC) on which Caesar was stabbed to death at the Roman Senate by some sixty conspirators.

In the ancient Mediterranean world, oracles were distributed over a broad geographic area encompassing Greece, Asia Minor, and Egypt. The oracles were consulted by ordinary men bringing questions about their business, health, and family, as well as by rulers seeking to know in advance the outcome of military campaigns. The oracles were housed in temples dedicated to a god who spoke to consultants through various mediums who often answered in riddles or ambiguous pronouncements open to wide interpretation. The oracles were an essential means of allaying doubt and uncertainty, and no one would risk an important undertaking without seeking reassurance from a spokesperson for a deity.

Consultants brought votive gifts to gain access to the oracle — modest gifts, perhaps an animal, from simple folk — lavish offerings in gold, bronze, and silver from the mighty. This practice made the oracles a thriving business that enriched the priests who guarded the oracle and summoned it. Legend has it that the fabulist Aesop was murdered by the priests of Delphi because he ridiculed them as hucksters profiteering from the fear and gullibility of their consultants.

According to Herodotus, oracles originated in Egypt during the fifteenth century BC. Dodona was the oldest oracle in Greece, established in the thirteenth century BC in Epirus, near the modern city of Ioannina and what is now the Albanian border. The oracle at Dodona was an oak tree through which Zeus spoke by rustling the leaves. Priests attended the consultant as the leaves murmured and interpreted their meaning. Around 400 BC a temple was built beneath the canopy of the tree. The oracle supported the priests and priestesses who lived nearby in a village.

The most famous oracle in Greece, and the richest, was located in Delphi in a temple dedicated to Apollo, the god of prophecy who had tried to sleep with Cassandra. The first temple was built in the seventh century BC and came to rival the Parthenon in size as it grew.

Apollo spoke through the Pythia, a priestess named for Python, the dragon Apollo slew as it guarded the sanctuary. The Pythia was a peasant woman at least fifty years old who was a ward of the priests. She spoke only once a month, on the seventh day. Consultants waited in long lines for an audience with the Pythia, putting up at inns in the town of Delphi as they waited their turn. Wealthier consultants might bribe the priests to shorten their wait.

Before speaking, the Pythia purified herself by bathing naked in the Castalian Spring on the slopes of Mount Parnassus. She then positioned herself on a tripod concealed behind a screen and inhaled the fumes from burning plants that drugged her. The consultant, also drugged, asked his question from the other side of the screen, and received a babbling, incoherent answer that the attendant priest translated. Astonishing as it may seem, this sham was solemnly swallowed by some of the mightiest men in the ancient world.

Croesus was a wealthy king who ruled in Lydia, Asia Minor, during the sixth century BC. Before he undertook a military campaign against the Persian King Cyrus, Croesus desired to know the outcome. He decided to test the major oracles by sending ambassadors to them simultaneously to ask the question, “What is King Croesus doing today?” The oracle at Delphi replied that the king was cooking a tortoise with lamb’s meat in a bronze cauldron. This was the correct answer, and only the Delphic oracle pronounced it. Trusting the wisdom of the oracle, Croesus sent lavish votive gifts to Delphi as he sought a prediction for his war against Cyrus. The oracle gave the ambiguous answer, “If you wage war against Cyrus, a kingdom will fall.” Croesus took this fortune-cookie answer as a prediction of his success. He invaded Persia, was defeated at the Battle of Thymbra in 546 BC, and taken captive by Cyrus who, some say, burned Croesus on a pyre.

Alexander the Great also consulted oracles over the course of his military career. Before embarking on his campaign against Persia, Alexander visited Delphi in 336 BC to consult the oracle, but was rebuffed by the Pythia, who told him to come back later. Not one to cool his heels, Alexander dragged the Pythia from her sanctuary by the hair, whereupon she declared that he was invincible, a tactful reply that told Alexander what he wanted to hear.

But how did the Delphic oracle know that Croesus was cooking a meal? The priests used a secret network of informants to obtain information that could be passed to the Pythia, who consultants believed was shielded from all contact with the outside world. Since the Pythia spoke only once a month, there was ample time for the priests to learn a consultant’s question and then retrieve the answer. It is believed that informants at King Croesus’s court used carrier pigeons to send a message to Delphi describing the king’s culinary plans.

The oracle could also be bribed to provide the answer that the consultant preferred. Politicians made use of this corruption to buttress their policies and sway public opinion. When the Athenians faced an invasion of the Persians led by Xerxes, the Delphic oracle warned them of impending doom and urged them to flee. Themistocles, an Athenian general, believed that the Greek naval fleet, though greatly outnumbered by the Persian armada, could prevail if the engagement were fought in a narrow sound that would give strategic advantage to the smaller, more nimble Greek ships. He bribed the oracle to predict the success of the strategy and was rewarded with victory at the Battle of Salamis in 480 BC.

Oracles continued to speak in the Roman period, but were phased out when Christianity took hold as the official state religion. Old Testament prophets like Isaiah, Jeremiah, and Ezekiel replaced the pagan predictors. The Pythia at Delphi became silent during the reign of the Emperor Julian (AD 361-363), and the oracle was demolished in AD 398 by the emperor of Byzantium.

Why did some of the finest minds in the ancient world, including Herodotus, Sophocles, Pindar, and Aeschylus, give credence to oracles through their writing? Plutarch even served as a priest in the oracle at Delphi during its Roman period, and must have participated in its frauds. The answer may lie in our need for certainty. To live in a constant state of uncertainty about the future is to live with perpetual doubt that can give rise to anxiety, an unpleasant emotion that makes living in the present moment difficult. Prophecy is a form of reassurance. Even if a prophecy is unfavorable, it eliminates the paralysis of doubt and facilitates planning and adaptation. It is not surprising that people prefer the illusion of certainty to the realities of uncertainty. Certainty is comforting. Just ask Croesus.


Part Three — From Superstition to Science — Aristotle and Ancient Greek Philosophy

During the sixth century BC in Greece a transition began from pre-scientific attempts to understand and control nature based on magic and superstition to a scientific method based on reason and logic. The mathematician Pythagoras, who used numbers as a way of demonstrating physical relationships (the Pythagorean Theorem), brought to bear on the mysteries of nature a new kind of consciousness that sought to understand natural phenomena by studying and analyzing them, rather than by attributing them to the workings of invisible supernatural beings. This shift in consciousness was a major advance in human intellectual development.

Pythagoras was one of a number of early Greek philosophers who approached the mysteries of nature by asking essential questions and then using logic to try to answer them. Others included Thales, Anaximander, Anaximenes, and Socrates. This approach was formalized and systematized by Aristotle in the fourth century BC.

Aristotle was born in Macedonia, northern Greece, in 384 BC. His father was a physician, his mother a woman of independent wealth. When he was seventeen his parents sent Aristotle to Athens to study under Plato in the Academy that Plato had founded. In 343 BC, Philip II, King of Macedon, hired Aristotle to tutor his son Alexander. After Alexander departed Greece on his eastern military campaign, Aristotle returned to Athens and founded his own school, the Lyceum. There he delivered lectures on a wide variety of subjects, including biology, cosmology, physics, poetics, politics, and ethics. Treatises on these subjects were subsequently written out, based on Aristotle’s lecture notes, by Andronicus of Rhodes in the first century BC. They became the basis for western philosophy for over fifteen hundred years. For the history of science, probably Aristotle’s most important formulation was his cosmology, because it underpinned scientific understanding of the universe throughout the Christian Middle Ages in Europe.

Aristotle described the cosmos as an enclosed sphere containing all matter. It consisted of a celestial region and a terrestrial region, separated by a lunar region. The celestial region was eternal and unchanging. The terrestrial region was composed of four elements — earth, air, fire and water — in constant motion. The motion of these elements kept the terrestrial region in a state of continual change.

The Earth was the center of the universe, responsive to influences from other planets in the celestial region. Although Aristotle believed in a God who was the prime cause of the eternal world, this God had detached Himself from his creation and remained absorbed only in Himself. The changes in the terrestrial region could therefore not be explained by recourse to God. Nature could only be understood by studying its causes and effects, using observation leading to propositions based on reason and logic. In this division of the world into distinct celestial and terrestrial regions governed by different principles we can see an early manifestation of the split between the realms of religion and science that persists today.

Based on Aristotle’s description of the cosmos, the astronomer Claudius Ptolemy, working in Roman Alexandria around AD 140, constructed a model of the universe that was widely accepted in the scientific world until the Renaissance. In Ptolemy’s model, the Earth sat at the center of a nest of circles traced by the orbits of the other planets, circles Ptolemy refined with epicycles to fit the observed motions. The closest circle was the Moon’s orbit, followed by Mercury, Venus, the Sun, Mars, Jupiter, and Saturn. Beyond Saturn lay the sphere of fixed stars. This perfect, unchanging celestial realm was God’s domain, beyond the reach of human understanding. But humans could understand life on the ever-changing Earth by studying it.

Aristotle’s speculations about the cosmos were based on sensory observation without the aid of instruments such as telescopes and satellites, so many of his conclusions were erroneous. He had poor data from which to theorize, and did not perform experiments. But his method of drawing conclusions inductively from observation is the bedrock of the scientific method still in use today, though with far more sophisticated tools.

Aristotle’s view of the cosmos did not go unchallenged by other Greek astronomers and mathematicians. In the third century BC Aristarchus of Samos proposed that the Sun was the center of the universe, and that the Earth orbited around it while rotating on its axis. Although Aristarchus’s theory brought down on him a charge of impiety from Greek officialdom, his claim was later seconded by the astronomer Seleucus around 150 BC. With their proposition of a heliocentric universe, in contradiction of Aristotle’s geocentric model, Aristarchus and Seleucus were forerunners of the struggle between scientists and the Roman Catholic Church over the design of the universe that would erupt in the Renaissance.


Part Four — Aristotle and the Church — Natural Philosophy in the Middle Ages

As the Roman Empire gradually disintegrated in the centuries following the death of Christ, the Catholic Church became a stronghold of both temporal and spiritual power, and the custodian of the knowledge accumulated by pagan philosophers, especially Aristotle. In its early years, Christianity was one of several mystery religions, so called because their adherents practiced them in secrecy from fear of persecution. Then the Emperor Theodosius made Christianity the state religion in AD 380 and, in 392, banned pagan worship as treasonous. The oracles were silenced and the pagan gods vanished.

But the early church fathers, Saint Augustine among them, sought an accommodation between the teachings of Holy Scripture and the knowledge contained in the teachings of the pagan philosophers. This accommodation became known as the handmaiden tradition. Aristotle’s natural philosophy was viewed as illuminating and buttressing the church’s theology. It explicated how God’s divine purpose and will were made manifest in the workings of the natural world. Aristotle’s cosmology was especially important for this accommodation because its geocentric model of the universe corroborated the Bible’s account of creation in Genesis.

Aristotle’s teachings spread widely throughout Europe and the Arab world. His treatises were translated into Latin and Arabic by scholars who also wrote commentaries on them. These scholars made no attempt to move beyond Aristotle’s formulations. Rather, in their commentaries they elaborated on the ways that Aristotle’s natural philosophy reinforced scripture, or they disputed its accuracy when it contradicted scripture. One commentator, the Neo-Platonist John Philoponus (AD 490-570), rejected Aristotle’s thesis that the celestial region had always existed, and argued instead that God had created the world from nothing. The universe had a beginning and would come to an end. Theologians cherry-picked those aspects of Aristotle’s natural philosophy that conformed to the teachings of the church and rejected those aspects that were incompatible with doctrine.

During the late Middle Ages, as European cities became established and enriched by trade and commerce, the Church founded universities where theology, medicine, law, and the arts (which included natural philosophy) were taught. Oxford University took shape by 1167, followed by the University of Paris around 1170, Cambridge University in 1209, and the University of Rome in 1303. The theology faculty held sway, and the arts, including natural philosophy, were taught as a preparation for the study of theology. Theology dictated how people thought about the world and their place in it.

Aristotelianism was the basis of the arts curriculum in these centers of learning for over four centuries. Off-limits to the faculty teaching Aristotle was any discussion of issues such as the Trinity, incarnation, and transubstantiation, which were considered to be the exclusive domain of theologians. In 1277 the Church condemned as heretical 219 propositions drawn from Aristotle’s natural philosophy because they implied or suggested a limit on God’s power, considered by the Church to be absolute. Any doctrine that limited the power of God would also, by implication, limit the power of the Church to tell people how to think. In the Middle Ages, following the handmaiden tradition, theology trumped natural philosophy. Thomas Aquinas (1225-1274) argued that theology is a science derived from God and therefore superior to all other speculative and practical sciences that rely on human reason.

But in the sixteenth century a Polish astronomer and mathematician named Nicolaus Copernicus, seeking a way to predict more accurately the dates of Easter on the Christian calendar, formulated a theory that would upset the tenuous accommodation between natural philosophy and theology.


Part Five — The Copernican Revolution and the Trial of Galileo — Natural Philosophy and Theology Diverge

Copernicus was the definitive Renaissance man whose knowledge and achievements spanned the breadth of European culture in the sixteenth century. Born in Toruń, in Royal Prussia, in 1473, he was educated at the University of Kraków, where he studied Aristotelian philosophy in the Department of Arts before moving to Italy, where he studied canon law at Bologna and medicine at Padua, taking his doctorate in canon law at the University of Ferrara. He was also a classical scholar and humanist who read widely in the Greek philosophers. He spoke six languages — German, Polish, Italian, Latin, Greek, and Hebrew — and wrote in Latin. He held ecclesiastical offices in the Roman Catholic Church, serving as secretary to his uncle Lucas, the Prince-Bishop of Warmia, and to Lucas’s successor, Fabian of Lossainen. But astronomy and mathematics were his principal occupation and passion.

In 1503, at the age of thirty, Copernicus, troubled by inconsistencies that he found in the cosmological models of Aristotle and Ptolemy, began work on his heliocentric theory. In his study of the Greek philosophers, he had come across references to the heliocentric system proposed by Aristarchus. When asked by the Bishop of Fossombrone to provide more accurate predictions of the dates of Easter for the Christian calendar, Copernicus used the heliocentric theory to identify the dates of the spring equinox. In 1514 he wrote an outline of the theory, but shared it with only a few close friends. The work that fleshed out the theory, called On the Revolutions of the Heavenly Spheres, was completed in 1532 but not published until 1543, the year of his death. Copernicus held back from making the theory public, perhaps fearing the scorn and criticism of the Church. But the Church initially tolerated it because heliocentrism was presented as a theory, not a description of how the planets actually moved. The two most radical propositions of the theory, from the Church’s point of view, were its assertion that the Sun, not the Earth, is the center of the universe, around which the other planets, including the Earth, revolve, and that the Earth rotates daily on its axis, creating the illusion that the Sun is moving across the sky.

Then in 1584 the Italian Dominican friar Giordano Bruno affirmed Copernicus’s heliocentric theory in “The Fifth Dialogue” of a work titled The Ash Wednesday Supper. The Inquisition, which had been established by Pope Gregory IX in the 1230s and operated as a kind of secret police administered by the Catholic Church, arrested Bruno and tried him for heresy. He was charged with, among other heresies, holding opinions contrary to the Catholic faith, and claiming the existence of a plurality of worlds and their eternity. Bruno also asserted that there was no fixed firmament of stars, and that the heavens were filled with innumerable wandering bodies extending infinitely into space. He even went so far as to speculate that distant planets orbiting a star like the Earth’s Sun might harbor life. Part mystic, part visionary cosmologist, Bruno was in the scientific avant-garde of his time.

Bruno defended his faith and his adherence to Catholic doctrine before the Inquisition, but refused when ordered to recant his cosmological views. He was burned at the stake in 1600, a martyr to science, and his ashes were thrown into the Tiber.

But Bruno’s was not the only voice raised in support of Copernican theory. In 1596 the German astronomer Johannes Kepler published The Cosmographic Mystery, a defense of the heliocentric system. He later departed from the Copernican model in arguing that the planets traced elliptical rather than circular orbits around the Sun. Kepler was deeply religious, and he presented the Copernican model as a physical image of God and the Trinity: the Father (the Sun), the Son (the sphere of fixed stars), and the Holy Spirit (the space between them). This metaphor may have helped him avoid Bruno’s fate. He also went to great lengths to reconcile the Copernican model with Biblical passages that were being interpreted as geocentric.

The next astronomer to become an advocate for the Copernican theory was the Italian Galileo Galilei. Galileo was a nobleman, well known and well connected in both ecclesiastical and scientific circles. He was a prolific inventor who introduced a number of instruments — the microscope, the thermometer, the micrometer, the chronometer — that advanced scientific study. In 1608 the Dutch lens maker Hans Lippershey invented the telescope, and a year later Galileo built his own version of the instrument, greatly improving its optics. Its magnification ratio of 10:1 enabled Galileo to observe the moons orbiting Jupiter and the rings of Saturn. These discoveries he published in 1610 under the title Message from the Stars. He wrote in Italian so that the general public could share in this new knowledge. What he saw through the telescope made him a closet Copernican.

But in 1613 he confided his views to the Benedictine monk Benedetto Castelli, and repeated them in another letter sent two years later to the grand duchess dowager Christina. Galileo questioned the authority of Holy Scripture in explaining the cosmos with the remark, “The Bible tells us how to go to heaven, but not how the heavens go.” Copies of the letter circulated, and fell into the hands of a Dominican friar who filed a written complaint of heresy against Galileo with the Inquisition. The Vatican had come to regard the Copernican theory as a threat to papal authority. In 1616 Pope Paul V opined that the Copernican view of the universe was heretical. On the Revolution of the Heavenly Spheres was placed on the Catholic Church’s Index of prohibited books. The Inquisition launched an investigation into Galileo’s views. He was summoned to a meeting with Cardinal Bellarmine, the Inquisitor who had presided over the trial of Giordano Bruno, and ordered to abandon his support of Copernican theory—neither to hold it nor to defend it. Galileo submitted to the order, and the investigation was closed without charges. It was clear that the Church no longer had any tolerance for heliocentrism. When science threatened “business as usual,” science was censored.

Then in 1623 Cardinal Maffeo Barberini, a friend and admirer of Galileo, was installed as Pope Urban VIII. Emboldened by this personal connection, and encouraged by friends, Galileo set to work on a discussion of cosmological design. The work, completed in 1632, was framed as a dialogue among three interlocutors: Salviati, a Copernican; Sagredo, an intelligent layman open to persuasion; and Simplicio, an Aristotelian defender of the Ptolemaic system. Galileo titled the work Dialogue on the Two Greatest Systems of the World, the Ptolemaic and the Copernican.

Galileo brought the manuscript to Rome to share with the Pope. But Urban, distracted by affairs of state, did not have time to read it or to meet with Galileo. The manuscript was reviewed by two sets of censors, both of which approved it. Subsequently, foes of Galileo at the Vatican persuaded Urban that he was the model for the character named Simplicio. Sales of the Dialogue were banned, and Galileo was summoned back to Rome for interrogation by the Inquisition and placed under arrest. He was charged with violating the order not to discuss Copernican theory, and with presenting the theory as fact, not simply a speculative hypothesis.

Galileo attempted to deny that the Dialogue defended Copernican theory, but the Inquisitors read passages from the book that contradicted this. A member of the Inquisition met privately with Galileo at the apartment where he was being held and urged him to admit that he had made an error. Galileo was then subjected to “rigorous examination,” a euphemism for interrogation under the threat of torture. He faced imprisonment unless he agreed to renounce Copernican theory and forgo any further writing or teaching on the motion of the Earth and the stability of the Sun. Minutes of the interrogation that took place on June 21 indicate that Galileo fully recanted without being tortured. He was found guilty of “vehement suspicion of heresy” and placed under house arrest at his villa outside Florence, but allowed to pursue his other scientific interests. He died in 1642 and was denied burial in hallowed ground. His Dialogue remained on the Index for another two hundred years.

The story of Copernicus and Galileo is another cautionary tale of the struggle between certainty and doubt that science continually provokes. The Church refused to adapt to the revolutionary upheaval of the geocentric world-view on which its authority and control rested. Scientific truth was forced to give way to established certainties around which the social order of the time was built. Bruno refused to abjure his own vision and paid painfully with his life. Galileo took a more pragmatic path. But scientists persisted in their mission to pursue the truth. Little more than forty years after Galileo’s death, Isaac Newton’s Principia asserted the law of gravitation that explained the planets’ orbits around the Sun.


Part Six — Darwin and the Descent of Man — Evolution as a Godless Pseudo-Science

What transpired between the time of the oracles in ancient Greece and the Renaissance of Copernicus and Galileo were two major shifts in human consciousness. The first shift was marked by a change from a magical to a rational way of understanding the physical world. Men ceased marveling at the world and began studying it. Though a clear advance in mankind’s intellectual development, the change brought with it a psychological diminishment. Man was no longer a semi-divine being communicating with the gods who aided and guided him. He was more alone, left to his own devices.

The second shift, from the geocentric to the heliocentric view of the universe, brought man’s consciousness more in tune with physical reality, but further diminished his identity. The Earth was no longer the center of the universe, the principal object of God’s plan. It was merely one of a seemingly infinite number of heavenly bodies moving through space not by God’s will but in obedience to inflexible physical laws governing matter. Man was also subject to these laws, his every step on the Earth evidence of them.

A third shift would occur with the evolutionary theory of Charles Darwin, put forward in the middle of the nineteenth century, two hundred years after the pronouncements of Galileo and Newton. This shift further shrank man’s sense of himself. He was no longer God’s chosen creature, the pinnacle of creation, destined for immortality because he possesses a soul. He was merely one stage in a long process of ongoing biological transformation, distinct from other living species, but with regard to his biological processes no different from them.

Each of these shifts in consciousness was met with stiff resistance from those clinging to older ways of thought, those whose certainties were upended. For each shift carried with it not only new modes of thought but also new arrangements in the social and political order that entailed transfers of power and authority. Those in possession of this power and authority were reluctant to surrender it, as the fossil fuel industry is now.

Darwin’s upbringing and education did not point him in the direction of becoming the author of a revolutionary theory about life on Earth. He was born in Shrewsbury, a small town in Shropshire, in 1809, into a religious family of free-thinkers and abolitionists. His father was a physician, his mother a Unitarian who died when Darwin was eight years old. He was raised in the Church of England. After schooling in Shrewsbury he briefly studied medicine at Edinburgh University before enrolling at Cambridge University to prepare for a career in the ministry.

The direction of his life changed when, in 1831, he was offered the position of companion to the captain of HMS Beagle, which was about to embark on a five-year voyage of scientific exploration. He became the voyage’s naturalist—going ashore, collecting specimens, and keeping extensive notes in his diary. Darwin brought with him on the voyage a copy of Charles Lyell’s Principles of Geology, which argued that the fossil record indicates that the Earth is several hundred million years old, and could not have been created in six days sometime between 4,000 and 8,000 BC as the Bible claims. When the Beagle reached the Galapagos Islands off Ecuador, Darwin noticed physical differences among the same species living on different islands. He reasoned that if the Earth is undergoing a slow process of continual change over long periods of time, living species must also be undergoing change as they adapt to this changing environment. This realization was the beginning of Darwin’s theory of evolution.

On his return to England Darwin studied the methods of breeders who created artificial varieties of plants and animals, and concluded that the breeders were selecting for desired traits. He conjectured that a similar process might operate in nature and thus explain variation among species. Natural selection was the mechanism by which species adapted, changed, and survived—or didn’t.

Darwin was also influenced by the population theories of Thomas Malthus, who had argued in An Essay on the Principle of Population that population growth would always outpace the food supply, necessitating a struggle for survival. The best-adapted individuals would prevail in the struggle and reproduce, strengthening the species. Natural selection was thus an instrument of biological progress.

Darwin was reluctant to publish his theory because he realized the upheaval it would cause in both religious and scientific circles. The prevailing orthodoxy, upheld by both Protestants and Catholics, was that God had created each species separately, as recounted in Genesis, and that God had differentiated mankind from all other species by implanting in him an immortal soul that would continue to exist in an afterlife.

But Darwin was not the only scientist on the trail of evolutionary theory. When in 1858 the naturalist Alfred Russel Wallace published a paper outlining a theory of evolution, Darwin rushed to complete and publish his twenty years of study as On the Origin of Species (1859). Darwin did not use the word “evolution” in Origin, nor did he trace man’s path from an earlier species. In a continuation of the handmaiden tradition that had accommodated Aristotle, he presented his theory as an explanation of the natural law through which God worked. But commentators drew their own conclusions. If natural selection was the mechanism, and some species were discarded in the struggle for survival because they failed to adapt to change, then the entire process was random, and therefore not directed by an all-knowing, all-seeing, all-powerful deity.

Twelve years later Darwin sawed off the other leg of creationist theory when he published The Descent of Man, which proposed that humans and other primates had evolved from a common ancestor. Man was not a special being favored by God and set apart from other animals with the endowment of an immortal soul. He was simply a more advanced form of animal life, subject to the same laws that governed all biological processes.

Darwin’s theory alarmed many theologians and religious believers because it called into question the need for God and replaced Him with purely mechanistic, deterministic forces. Religion, in Darwin’s view—for by then he had abandoned Christianity—was an institution man had created to strengthen the bonds of tribe and nation and so assist him in the struggle for survival. Science was once again diminishing long-cherished beliefs about man’s stature. Darwin’s evolutionary theory spawned a new scientific discipline, anthropology, that made religion a subject for study, reversing the old relationship between theology and natural philosophy.

When anthropologists and geneticists found evidence that supported Darwin’s theory, creationists who clung to the Bible’s account of the beginnings of life reacted. They resisted the idea that life on Earth was unfolding without teleological motive, and insisted that the Bible, being the word of God, could not err. Darwin’s theory, being speculative, was called “pseudo-science” and “a most gigantic hoax,” disparagements that we are presently hearing in the conversation about climate science.

Creationism found its staunchest supporters amongst conservative Protestant sects—Methodists, Seventh-Day Adventists, and Presbyterians in the rural American South—the same American demographic that is resisting the claims of climate science. In the second decade of the twentieth century Evangelicals published a series of booklets called The Fundamentals that asserted the veracity of the Biblical account of creation—that God had separately created all life forms sometime between 6,000 and 10,000 years ago. Creationists cited scientists such as the geologist Louis Agassiz, who had argued that the fossil record did not show continuous progress, and they attributed the rock strata holding fossils to the flood described in Genesis.

The creationist backlash against evolutionary theory peaked in the US during the 1920s with the trial of John T. Scopes, a high school science teacher accused of teaching Darwin’s theory in violation of a Tennessee statute banning it. The parallel with the trial of Galileo hardly needs to be pointed out. The trial became a forum for national debate widely covered by the media when Clarence Darrow became Scopes’ defense attorney and William Jennings Bryan testified for the prosecution. Bryan believed that Darwinism was undermining religion and public morality, and that its “survival of the fittest” ethic sanctioned war, imperialism, and laissez-faire capitalism. He wanted the teaching of evolution to be banned in public schools across the country.

Although Darrow, in his questioning of Bryan, succeeded in exposing the ludicrousness of a literal belief in Biblical accounts of miracles, such as Jonah living for three days in the belly of a whale, Scopes was convicted and fined $100. The verdict was later overturned on a technicality, but Tennessee did not repeal the statute until 1967.

The trial did not end the debate between creationists and evolutionists. If anything, it hardened the polarity. The creationist movement received fresh impetus in the 1960s with the publication of The Genesis Flood, a bestseller that repeated the creationist argument for a young Earth. Even today, more than one third of Americans believe in the Bible’s account of the origins of life on earth. The creationist’s strategy of casting doubt on Darwin’s theory because it is speculative, and calling it “pseudo-science” and “a hoax,” has been adopted by skeptics of climate science, which presents dire and disturbing scenarios about the ecological consequences of modern man’s industrial civilization, and once again throws man’s future into doubt.


Part Seven — Weather, Climate, and Global Warming — The Big Picture

Around the time that Darwin’s Origin was published, the science of meteorology was born. While all science is predictive, in that it identifies “laws” or patterns in nature that can reliably be expected to continue, meteorology was predicting a phenomenon — the weather — that everyone experiences every day and plans around — when to plant, what to wear, where and when to go on vacation. Weather prediction quickly became a matter of immediate and widespread interest, an easy conversation starter.

Although the fundamental mechanism of the weather—the movement of heat from the equator to the poles—was recognized as early as 1686 by the British astronomer Edmond Halley, meteorology as a science did not gain a foothold until the middle of the nineteenth century, when improved communications accelerated data sharing. Before the advent of scientific forecasting based on data gathering and analysis, farmers’ almanacs were the source of weather prediction. The predictions extrapolated past weather patterns of rainfall and temperature fluctuations into the future. Because weather is a global phenomenon driven by a host of variables, scientific forecasting was not possible until large sets of data aggregated from many geographical locations could be shared simultaneously. The introduction of the telegraph in 1844 enabled this.

In 1849 the US established a weather network at the Smithsonian Institution. By 1859 there were five hundred observing stations in the network, sharing local data telegraphically, moving the information faster than the weather itself. The number of variables, and the dynamics of weather, made prediction an uncertain science. Initially, twenty-four hour forecasts issued by the US Army Signal Corps were characterized as “probabilities,” a caveat still in use today (40% chance of rain). In 1890 Congress established the US Weather Bureau within the Department of Agriculture, and in 1908 the Bureau began making weekly forecasts. There was a direct relationship between the extent of the forecast and its degree of uncertainty. The Weather Bureau prioritized accuracy over range, but farmers wanted long-term forecasts around which they could plan. To accommodate them, the Weather Bureau offered monthly forecasts, but at a high level of generality — cold, wet, hot, dry. Coordination among meteorologists was further enhanced by the formation of an International Meteorological Organization in 1879, and by the introduction of universal standard time in 1883.

Climatology — the study of weather patterns over time — evolved as a subdivision of meteorology near the end of the nineteenth century. At its inception, climatology was not interested in forecasting. It studied the past, using different sets of data taken from geologic evidence, and averaging them over time to detect long-term trends of warming and cooling. Its imperative was not speed, as in weather forecasting, but assembling large quantities of accurate data about the history of weather and from that data drawing conclusions about past climate regimes.

But late in the nineteenth century a few scientists began to speculate about the effects of industrialization on the earth’s atmosphere and weather, and turned climatologists’ eyes towards the future. Eduard Brückner, a German geographer, suspected that warming induced by industrial activity was occurring, and was contributing to drought and desertification in Europe and North America. In 1896 a Swedish chemist, Svante Arrhenius, who understood the role of carbon dioxide (CO2) as a heat trapping gas, estimated that doubling the amount of CO2 in the atmosphere from its then current levels of about 280 parts per million would raise the global average temperature by 5°- 6° centigrade (C), an increase that would drastically alter Earth’s climate. He believed that CO2 levels in the atmosphere were increasing because of the burning of fossil fuels.
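Arrhenius’s reasoning can be sketched numerically. Under the standard logarithmic approximation, each doubling of atmospheric CO2 adds the same increment of warming. The sketch below is illustrative only: the sensitivity of 3 degrees C per doubling is roughly the modern central estimate, not Arrhenius’s own figure of about 5-6 degrees C.

```python
import math

# Back-of-the-envelope Arrhenius-style estimate: warming scales with the
# logarithm of the CO2 concentration. The sensitivity used here (3 degrees C
# per doubling) is an illustrative assumption, not Arrhenius's own figure.
def warming(co2_ppm: float, baseline_ppm: float = 280.0,
            sensitivity_c: float = 3.0) -> float:
    """Warming in degrees C relative to the pre-industrial baseline."""
    return sensitivity_c * math.log2(co2_ppm / baseline_ppm)

print(warming(560.0))            # doubling the baseline -> 3.0 degrees C
print(round(warming(420.0), 2))  # ~1.75 degrees C at a recent concentration
```

The logarithm is why the first doubling, from 280 to 560 parts per million, matters as much as the next doubling, from 560 to 1120.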

The atmospheric temperature is regulated by the principle of radiative equilibrium, according to which the Earth maintains a balance between the energy it receives from the Sun and the energy it re-radiates into space. Heat moves from the equator, which absorbs more energy than it radiates, to the poles, which, because they are covered with snow and ice, reflect the Sun’s energy back into space and remain cold. The global circulation of the Sun’s energy is influenced by enormous physical forces—the Earth’s rotation, gravity, and the thermodynamics of rising and falling air masses, among others. The oceans, land surfaces, ecosystems, and agriculture also affect weather patterns and climate.

The Earth remains warm and habitable because not all of the Sun’s energy is re-radiated back into space. The atmosphere naturally contains gases — primarily CO2, methane, and water vapor—that act like a blanket, trapping heat — the greenhouse effect. But as the CO2 levels continue to rise from the burning of fossil fuels, the greenhouse effect intensifies, and the temperature of the atmosphere and the oceans rises in order to maintain the radiative equilibrium. As the atmosphere warms, glaciers, ice sheets, and sea ice begin melting, reducing the Earth’s reflective capability and exacerbating the warming trend in a positive feedback loop.
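The warmth that the greenhouse blanket provides can be put in rough numbers. The sketch below uses the Stefan-Boltzmann law with standard round values for the solar constant and planetary reflectivity (assumptions for illustration) to find the temperature a greenhouse-free Earth would settle at; the gap between that figure and the observed surface mean of about 288 K is the natural greenhouse warming.

```python
# Radiative-equilibrium sketch: without greenhouse gases, Earth's temperature
# settles where outgoing radiation (sigma * T^4) balances absorbed sunlight.
# Solar constant and albedo are standard round values used for illustration.
SIGMA = 5.670374419e-8   # Stefan-Boltzmann constant, W m^-2 K^-4
S0 = 1361.0              # solar constant at Earth's orbit, W m^-2
ALBEDO = 0.3             # fraction of incoming sunlight reflected to space

absorbed = S0 * (1.0 - ALBEDO) / 4.0   # sunlight averaged over the sphere
t_bare = (absorbed / SIGMA) ** 0.25    # solve sigma * T^4 = absorbed

print(f"equilibrium without greenhouse gases: {t_bare:.0f} K")  # ~255 K
print(f"natural greenhouse warming: {288.0 - t_bare:.0f} K")    # ~33 K
```

Some 33 degrees of habitability come from the natural blanket; the present concern is the extra trapping added as that blanket thickens.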

Climatologists noticed that the Earth’s atmosphere was warming, as Arrhenius had predicted, but they lacked the tools to ascribe the warming to human activity. Perhaps the warming was a result of natural variability, a phenomenon that reflects the complex variety of inputs into weather and climate. Over the millennia of history and pre-history, periods of warming and cooling have alternated in the rhythm of a sine wave, now above the average, now below it. The degree of warming in the twentieth century was slight, well within the range of natural variability. Then in 1938 a British meteorologist, Guy Callendar, demonstrated a connection between fossil fuel consumption, increased CO2 concentrations, and increased global temperatures. But his findings were met with skepticism based on the uncertain quality of his data, and the crudeness of his methods for extrapolating the data. In the 1950s, as computing technology became more sophisticated, and climatologists began to use computers to model climate behavior, his theory gained credibility.

The advent of large capacity mainframe computers, developed for the military during World War II, gave climatologists a tool for bringing coherence to the complex variables of weather and climate. These computers had the capacity to store, sort, and recombine vast amounts of climate data.

Climate results from the interactions of a number of linked Earth systems, including the atmosphere, the oceans, the cryosphere (ice and snow), land surfaces, and the biosphere. To model the climate on a computer requires data inputs from all these sources gathered from around the globe, but with this data climatologists are able to project climate trajectories into the future. Different scenarios can be projected based on variations in inputs. Because the CO2 content of the atmosphere is a primary factor in climate, simulation models can project impacts on the climate of increases (or decreases) in CO2 concentrations. The most important impact is warming, because warming triggers other changes in the Earth’s climate system, such as heating the oceans and melting sea ice and ice sheets. These changes in turn impact living species, including humans.
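The modeling idea can be caricatured in a few lines. Real climate models couple all of these linked systems on a global grid; the toy below collapses everything into a single global-mean temperature whose energy balance is stepped forward in time. The heat capacity and effective emissivity values are illustrative assumptions, not parameters from any published model.

```python
# A toy zero-dimensional energy-balance model: one global-mean temperature
# evolves by Euler steps until absorbed sunlight balances outgoing radiation.
# The effective emissivity (below 1) stands in for the greenhouse effect.
SIGMA = 5.670374419e-8   # Stefan-Boltzmann constant, W m^-2 K^-4
S0 = 1361.0              # solar constant, W m^-2
ALBEDO = 0.3             # planetary reflectivity
EMISSIVITY = 0.612       # effective emissivity, an illustrative value
HEAT_CAP = 2.08e8        # J m^-2 K^-1, roughly a 50 m ocean mixed layer

def step(temp_k: float, dt_s: float) -> float:
    """Advance the global temperature one Euler step."""
    absorbed = S0 * (1.0 - ALBEDO) / 4.0        # incoming, sphere-averaged
    emitted = EMISSIVITY * SIGMA * temp_k**4    # outgoing longwave radiation
    return temp_k + dt_s * (absorbed - emitted) / HEAT_CAP

temp = 255.0               # start from the no-greenhouse temperature
month = 30 * 86400.0       # one-month time step, in seconds
for _ in range(1200):      # integrate for roughly a century
    temp = step(temp, month)

print(f"equilibrium temperature: {temp:.1f} K")  # ~288 K, near the observed mean
```

Lowering the emissivity in this sketch, as accumulating CO2 effectively does, raises the equilibrium temperature, which is the warming mechanism in miniature.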

The link between CO2 concentrations and global average temperature can be expressed in a graph showing how temperature rises and falls as CO2 concentrations increase or decrease. Over the twentieth century, the graph line moved steadily upwards at an accelerating rate. Because fossil fuel consumption generates large quantities of CO2, climatologists have come to the conclusion that human industrial activity is warming the planet. As early as 1956, the physicist Gilbert Plass, studying the effects of CO2 concentrations in the atmosphere, warned of significant and lasting global warming by the end of the twentieth century.

As concern about the potential effects of global warming on Earth’s climate and on human civilization spread among climate scientists around the world, networks and organizations were formed to share data, modeling techniques, and findings. This coordination required reconciling different national systems through standardization of collection and processing methods, a truly global scientific effort comparable in scale to space exploration. Growing awareness of “the CO2 problem” came to a head in the summer of 1988 when the climate scientist James Hansen, Director of NASA’s Goddard Institute for Space Studies, testified before a Senate committee that there was a 99% certainty that “the greenhouse effect has been detected, and it is changing our climate now.” That fall, the United Nations Environment Programme and the World Meteorological Organization jointly established the Intergovernmental Panel on Climate Change to carry out a scientific assessment of the Earth’s climate. The panel was composed of thousands of climate scientists working in government, university, and private laboratories around the world, linked not by the telegraph but by the Internet. The IPCC conducts no research. Rather, it creates and disseminates synthesis reports that present a consensus from the work of many scientists. It issued reports in 1990, 1995, 2001, 2007, and 2013. With each report, the certainty that the climate is warming because of human activity has increased.

Climate change moved from the laboratory into the policy arena in 1992 at the Framework Convention on Climate Change held in Rio de Janeiro. One hundred and sixty-five nations, including the US, agreed to set voluntary goals for greenhouse gas emissions. In 1997, meeting in Kyoto, Japan, the convention drafted a protocol, binding only on developed countries with advanced economies, to reduce greenhouse gas emissions to 1990 levels by the year 2000, a hopelessly unreachable goal. After emissions rose by 11% worldwide, and by 17% in the US, the US withdrew from the protocol, citing the priority of its economy over climate concerns, which it described as full of uncertainties. Russia and Canada followed. In 2015, when the convention met in Paris, the US rejoined the international effort, but our current administration, skeptical of climate science, has threatened to withdraw again in 2020.

The “uncertainties” that the US has cited as a pretext for withdrawing from the Kyoto Protocol had been carefully orchestrated by a coalition of vested corporate interests with large stakes in the fossil fuel industry that supplies society’s energy needs.


Part Eight — Doubt, Disinformation, and Deceit — Climate Change Deniers

The strategy to discredit climate science was built from a memorandum by Republican pollster Frank Luntz and orchestrated by Fred Palmer, a lobbyist for Peabody Energy, an American coal mining company. The goal was to instill doubt in the public mind by stressing the uncertainties of climate science and giving political allies in the Republican Party the cover they needed to stall. They created organizations with neutral sounding names like “Global Climate Coalition” to disseminate their propaganda and disinformation, and hired hack scientists to dispute the findings of the IPCC. They were aided in their campaign by the timidity of the IPCC and the gullibility of the media, which presented the propaganda as the other side of the argument. This effort represents a stand by the old order (fossil fuel interests that control our energy supply) against science’s call for change that has obvious parallels with the Church’s resistance to Copernican theory and the creationists’ resistance to Darwin. If you don’t like what science is telling you, bash the science and the scientists.

The underbelly of climate science, as it attempts to predict the future, is uncertainty. The number of variables that must be accommodated in modeling climate trajectories is vast, and their behavior is difficult to foresee. Economic growth, technological advances in energy sourcing, geologic events such as volcanic eruptions, changes in solar activity and ocean current patterns, all affect the course that global warming, and the climate’s response, will take. What climate modelers can do, and have done, is project a range of probable outcomes, qualified by words such as “likely” and “extremely likely”—words that convey uncertainty.

The other weakness of the predictive models that lends itself to skepticism is that they are not, and cannot be, based on empirical data—the ground of “sound science” — because the future hasn’t happened yet. Climate models can project into the future from past trends that are based on concrete evidence, but the reality of variability in nature remains a constant. That is why the projections issued by the IPCC in its periodic reports point to a range of possible outcomes, and tend to err, if they must err, on the conservative side.

The vested interests with an economic motive to discredit climate science aimed their propaganda campaign at these uncertainties, seeking to sow doubt in the public mind that could then be used to justify inaction.

The campaign of denial began during the Reagan administration, shortly following James Hansen’s testimony and the formation of the IPCC. The George C. Marshall Institute attacked climate science and climate scientists, and proposed that increased solar energy output was the cause of global warming. The campaign went into high gear after the IPCC, in its third report (2001), attributed most of the warming of the atmosphere over the course of the twentieth century to human activity. The major culprit was the fossil fuel industry. The burning of coal and petroleum to provide energy was spewing megatons of CO2 into the atmosphere every year, and as the economy grew, so did the quantity of emissions. Progress based on energy derived from fossil fuels was threatening the planet. This idea is anathema not only to the fossil fuel industry, but to capitalism, our reigning secular religion.

The industry responded by orchestrating a communications effort channeled through organizations that it funded. The Global Climate Coalition, the Information Council for the Environment, think tanks such as the George C. Marshall Institute, the Competitive Enterprise Institute, and the Cato Institute, joined lobbying entities like the American Petroleum Institute in commissioning their own sets of reports in a concerted effort to reposition global warming as theory rather than fact. Politically conservative scientists, such as S. Fred Singer, Frederick Seitz, and Sallie Baliunas, who were greenhouse skeptics, were enlisted to question climate science and to propose alternate theories to explain global warming.

Their cause was greatly aided in 2001 when Bjorn Lomborg, a Danish statistician and economist, published The Skeptical Environmentalist, which argued that the threat of global warming was greatly exaggerated, and the measures proposed to address it were too drastic. The prestige of the press that issued the book (Cambridge University Press) prompted several respected mainstream publications—The New York Times, The Wall Street Journal, Foreign Affairs—to review the book favorably, spreading the skepticism throughout their readerships and the world of journalism. The message was, “Relax, everybody, we don’t have to worry about climate change,” a message preferable to the warnings of the IPCC. Business as usual could continue.

Although the Union of Concerned Scientists convened a forum in 2001 that pointed out flaws in Lomborg’s analysis, the groundswell of skepticism was picked up by Republican politicians already sympathetic, and indebted, to the fossil fuel industry. The George W. Bush administration cited the uncertainty of climate science, and the threat to the US economy, as the pretext for withdrawing from the Kyoto Protocol. James Inhofe, former Chair of the Senate’s Environment and Public Works Committee, echoing the doubts of creationists about evolution, called the threat of global warming “a hoax.” The mantra for Republicans postponing action on climate change became, “we need sound science.” By that they meant observed empirical evidence—certainty. But as Paul Edwards points out in his comprehensive overview of climate science, perfect certainty is an unrealizable ideal, left over from the Enlightenment. He reminds us that all knowledge is provisional. “Knowledge once meant certainty, but science long ago gave up that standard. Probabilities are all that we have, and the probability that the skeptics’ claims are true is vanishingly small.” He asserts, “You will never get a single definitive picture, either of how much the climate has already changed or of how much it will change in the future. What you will get, instead, is a range. What the range tells you is that ‘no change at all’ is simply not in the cards, and that something closer to the high end of the range—a climate catastrophe—looks all the more likely as time goes on.” Cassandra is speaking.


Part Nine — Our Climate Future — Cassandra’s Warning

What do the climate change deniers want us to discredit and ignore? What is the consensus of thousands of climate scientists from many nations around the globe about the future of the planet whose habitat we share with many other living things?

The concept of global warming-induced climate change is relatively easy to understand. It’s basic physics. The Sun’s energy arrives at the Earth as radiation. Some of it is reflected back into space, and the rest is absorbed and re-radiated as infrared heat. Greenhouse gases such as CO2 absorb part of that outgoing heat, warming the atmosphere. Carbon stored underground in fossil deposits has, for a century and a half, been burned to give us energy, releasing millions of years of accumulated carbon into the atmosphere. This build-up of CO2 gradually upsets the planet’s natural radiative equilibrium, and the atmosphere heats until outgoing radiation once again balances incoming energy. The release is accelerating as our population, and its demand for energy, grows. We are subject to these planetary forces, but not exactly in the way that astrologers of old described.
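The equilibrium described above can be illustrated with a toy zero-dimensional energy-balance model. This is only a sketch with illustrative values (the solar constant and albedo are standard textbook figures; the greenhouse “escape fraction” of 0.61 is tuned here to reproduce today’s average surface temperature), not a climate model:

```python
SOLAR = 1361.0    # solar constant, W/m^2
ALBEDO = 0.30     # fraction of sunlight reflected back to space
SIGMA = 5.67e-8   # Stefan-Boltzmann constant, W/m^2/K^4

def surface_temp(escape_fraction):
    """Equilibrium surface temperature (K) when only `escape_fraction`
    of the surface's infrared radiation reaches space: the planet warms
    until escaping infrared balances absorbed sunlight."""
    absorbed = SOLAR * (1 - ALBEDO) / 4            # averaged over the sphere
    return (absorbed / (escape_fraction * SIGMA)) ** 0.25

print(round(surface_temp(1.0)))    # no greenhouse: ~255 K (about -18 C)
print(round(surface_temp(0.61)))   # with greenhouse: ~288 K (about 15 C)
```

Lowering the escape fraction, which is what adding CO2 does, forces the equilibrium temperature upward; the roughly 33 K gap between the two results is the natural greenhouse effect that makes the planet habitable.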

As the atmosphere warms from the release of excess carbon, the warming air affects the elements that make up climate—ocean temperature and currents, air currents, ice and snow, cloud cover, rainfall, global temperature—altering their patterns. The altering climate affects both ecosystems and their life forms, as well as human civilizations. This much is certain. What is not certain is the extent and pace of these changes, which depend on variables that can be estimated but not predicted with certainty. We know the climate is changing, and will continue to change as CO2 concentrations increase. What we do not know is how fast the climate will change because of the build-up of heat, when the change will become irreversible, and what the full consequences of the altered climate will be for life, including human life.

As the IPCC aggregated data from climate scientists around the world and issued reports, it became ever more certain that greenhouse gas emissions—primarily CO2 and methane—were heating the planet, and at an increasing rate. The panel’s qualifiers became more assertive, changing from “likely” in 2001 to “very likely” in 2007, to “extremely likely” in 2013. The growing certainty was prompted by empirical observations that were bearing out the models’ projections. Sea level rise, caused by the warming of the oceans, had been measured, and the acidification of the sea from the absorption of carbon was increasing. Sea ice was shrinking, and ice sheets were melting. The global average temperature, a measure of the amount of heat in the atmosphere, was rising at rates outside the range of natural variability. In fact, these key indicators were rising more rapidly than even the most extreme IPCC projections, faster than at any time during the past 10,000 years. The IPCC projections were sounding the death knell of the fossil fuel industry, which may explain its investment in doubt.

The IPCC reports contained projections of how increases in CO2 concentrations would raise the global average temperature, and also outlined broad scenarios of anticipated impacts from a warming climate. To aid policy planners, they set targets for containing global warming below the threshold of 2° C above pre-industrial levels. They estimated that a 2° C (3.6° F) increase would accompany a doubling of CO2 concentrations from their pre-industrial level of 280 ppm. The 2° C figure was a consensus threshold beyond which, in the IPCC’s judgment, climate change impacts would become seriously disruptive to human society. By 2008, the global average temperature had already risen 1° C from its pre-industrial level of 13.8° C (56.8° F). While an increase of 1° C may seem insignificant, it represents an enormous amount of added heat. Impacts from this amount of warming are already being felt across the globe: floods, severe storms, wildfires, sea level rise, melting sea ice and ice sheets.
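The pairing of a CO2 doubling with about 2° C of warming implies a logarithmic relation: each doubling adds roughly the same temperature increment. A minimal sketch using the essay’s own figures (2° C per doubling from a 280 ppm baseline; actual sensitivity estimates span a range):

```python
from math import log2

SENSITIVITY = 2.0      # assumed warming (deg C) per doubling of CO2, per the text
PREINDUSTRIAL = 280.0  # ppm baseline

def warming(ppm):
    """Equilibrium warming above pre-industrial implied by a CO2
    concentration of `ppm`, under the logarithmic approximation."""
    return SENSITIVITY * log2(ppm / PREINDUSTRIAL)

print(round(warming(405), 1))  # ~1.1 C at the 2017 concentration
print(round(warming(560), 1))  # 2.0 C at a full doubling
```

The logarithm is why the first 125 ppm above pre-industrial buys more warming per ppm than the next 125 ppm, and why stabilization targets are framed in terms of doublings rather than absolute increments.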

To hold global warming to 2° C by 2050, CO2 concentrations must stabilize at 450 ppm. This requires that CO2 emissions be halved by 2050 and brought close to zero by 2100. For there to be any chance of hitting this target, emissions must peak by 2020 and then begin a steady decline. But we are moving in the wrong direction. CO2 concentrations had risen to 405 ppm by the end of 2017, growing by about 3 ppm per year in 2015 and 2016 as emissions from global industrial activity increased. Developing countries with large populations, like China and India, continue to build coal-burning plants as sources of energy as they attempt to raise their standard of living to levels comparable with those already reached in the West. And western nations, fearful of slowing their economies, are reluctant to curb emissions, despite the warnings from climate scientists.
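The arithmetic behind this trajectory is simple to check. A back-of-envelope projection, assuming concentrations keep climbing linearly at the 2015–2016 pace of about 3 ppm per year (real growth varies with emissions and with how much the oceans and biosphere absorb):

```python
def year_reaching(target_ppm, start_year=2017, start_ppm=405.0, rate=3.0):
    """Year in which CO2 reaches `target_ppm` if it rises linearly
    from `start_ppm` at `rate` ppm per year (a rough assumption)."""
    return start_year + (target_ppm - start_ppm) / rate

print(year_reaching(450.0))  # 2032.0: the 450 ppm mark at a steady 3 ppm/yr
```

That the answer falls in the early 2030s, within most readers’ lifetimes, is the point: without a peak and decline in emissions, the stabilization target is overshot almost immediately.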

These projections represent a consensus reached among the thousands of climate scientists feeding the results of their simulation models to the IPCC. Some individual climate scientists are far more pessimistic in their outlook. James Hansen believes that CO2 concentrations should not exceed 350 ppm if melting of the Antarctic and Greenland ice sheets is to be prevented, an event that would significantly raise sea levels and reduce the Earth’s capacity to reflect the Sun’s energy back into space. Evidence that these ice sheets are now in jeopardy has already been reported.

A more dire forecast comes from the independent scientist James Lovelock, the proponent of Gaia theory, which holds that the Earth functions like a single physiological system in which all parts are interconnected. Gaia is adaptive, like other living things, and has evolved to her present state over billions of years. Human life is a mere blip in her history, but has now become a threat to her health through its industrial activity and population growth. Lovelock believes the IPCC reports underestimate the climate threat because they rely on consensus, which is a political, not a scientific, process. Echoing Malthus, Lovelock sees the root problem as overpopulation. The planet cannot sustain its current human population of 7.7 billion people, projected to reach 9.7 billion by 2050, the IPCC’s cut-off date for avoiding “dangerous anthropogenic interference” in the Earth’s climate system.

Lovelock believes we have already passed the point of no return. We can slow climate change, but we cannot stop it, nor prevent its most serious impacts. He foresees a time in the not distant future when the hot Earth will be able to support only about 100 million people, living in high and low latitudes where agriculture will still be possible. He advocates adaptation measures, which include creating safe havens—lifeboats—where the human species can survive and live in a more harmonious relationship with nature. Left to herself, Gaia will carry out the culling of the human species. Lovelock’s prognostication is pitiless, and reflects his deep anger at mankind’s abuse of its planetary home. He warns that if the extensive fossil fuel deposits buried in the Arctic are recovered and burned, as is now being considered, the result could be the death of Gaia.

The changes from the warming climate foreseen by the IPCC are less drastic, and rest on the assumption that mitigation measures, such as transition to renewable sources of energy that do not release CO2, will be carried out. But even a 2° C rise will noticeably change the climate. The IPCC outlines broad changes that will differ by geographic regions, and will occur more rapidly over land than the oceans: increased precipitation in some regions, decreases in others; increases in the number of extremely warm days and heat waves; increased frequency and intensity of floods and droughts; increased intensity of storms; sea level rise; damage to ecosystems; extinction of species. Impacts on humans will include economic losses due to extreme weather events, and dislocation of populations from regions made uninhabitable or unsustainable by changes to their ecosystems. All of these phenomena are already occurring. As the planet continues to warm they will intensify.

Climate change has been projected by the IPCC as a gradual process, but there are enough uncertainties in the factors contributing to climate change to make the possibility of abrupt and rapid climate change, occurring within years or even months, plausible. One trigger could be the thermohaline circulation, the ocean current patterns that affect climate by moving heat from the tropics to the poles. These currents become vulnerable to shutdown as the oceans warm and fresh water from melting ice flows into the sea. Another unsettling variable is the capacity of the oceans to absorb heat. The oceans have so far soaked up most of the excess heat; if that capacity diminishes as they warm and the ice sheets melt, atmospheric heating will accelerate. No one knows the timetable for these potential tipping points.

There are several precedents for abrupt climate change bringing about the demise of civilizations. In most cases the climate event was prolonged drought that dried up food supplies and caused widespread famine and starvation. This occurred in the civilizations of Akkad and the Old Kingdom of Egypt in 2200 BC, and more recently with the Mayan civilization in AD 800. In AD 536 a mysterious catastrophic event—perhaps a series of violent volcanic eruptions—cooled the planet and caused drought and widespread disease. Should our resistance to changing our energy sources, and with them our highly consumptive way of life, bring about sudden and extreme climate change, we can expect to experience high rates of unemployment, food shortages, financial collapse, mass migrations, the spread of disease, and the outbreak of wars as nations fight for control of dwindling resources.  But there are steps we can take to reduce the chances of these devastating impacts from climate change.


Part Ten — What We Can Do — Mitigation and Adaptation

There are two orders of response to climate change, one being mitigation, the other adaptation. Both can be pursued simultaneously. Mitigation seeks to limit the extent of climate change, adaptation prepares for its impacts. Mitigation measures can be scaled to targets for stabilization of CO2 concentrations and global temperature, which are directly related. Lower emission rates mean a slower rate of temperature increase. Adaptation entails changes to our physical infrastructure and living patterns that accommodate the consequences of climate change, such as building sea walls to counter sea level rise. It is urgent that we undertake mitigation measures immediately in order to prevent human-induced global warming from reaching the level of “dangerous anthropogenic interference” with the climate. The longer we delay mitigation, the shorter becomes the time window for adaptation, and the fewer our options.

To stabilize CO2 concentrations and global temperature at tolerable levels, we must make immediate cuts in greenhouse gas emissions. At current emission rates, CO2 concentrations will reach 450 ppm by 2030. Reversing this upward trend will require large-scale changes in our energy systems and land use practices. Fossil fuel use in all our economic sectors must rapidly be phased out, and replaced by other sources of energy, including nuclear energy and renewables (solar, wind, wave). During the transition, emissions can also be reduced by improved energy efficiency and conservation.

Tropical deforestation, which releases huge amounts of carbon and destroys a source of carbon storage, must be halted. To hold the global temperature increase to 2° C or less by 2050, emissions must fall by 70% from their 2010 levels, and be at or near zero by 2100, according to model projections. These are daunting objectives.

Mitigation efforts must be undertaken on a cooperative, global level. More advanced countries, such as the US and Canada, must transfer their technologies to developing countries, and aid them in slowing population growth, a major driver of climate change. Eighty percent of the world’s people live in developing countries that are most vulnerable to the effects of climate change. Consumption levels, a key factor in energy use, must also be reduced, especially in the advanced countries, whose 20% of the world’s population consumes 80% of its resources.

Mitigating climate change carries economic costs. Estimates vary widely according to the extent of the mitigation scenarios that are implemented, and are expressed as losses in global consumption. The scenario that brings CO2 concentrations to 450 ppm by 2050 would reduce global consumption by 1.7% in 2030, 3.4% in 2050, and 4.8% in 2100. The dollar estimate of the cost of decarbonizing the energy system is US$44 trillion. This would be more than offset by savings of US$115 trillion in fuel costs.

The risks of inaction on climate change are considerable. The level of risk increases with global temperature. The risks include:

  • Risk of death, injury, ill-health, or disrupted livelihoods in low-lying coastal zones and small island developing states and other small islands, due to storm surges, coastal flooding, and sea level rise.
  • Risk of severe ill-health and disrupted livelihoods for large urban populations due to inland flooding in some regions.
  • Systemic risks due to extreme weather events leading to breakdown of infrastructure networks and critical services such as electricity, water supply, and health and emergency services.
  • Risk of mortality and morbidity during periods of extreme heat, particularly for vulnerable urban populations and those working outdoors in urban or rural areas.
  • Risk of food insecurity and the breakdown of food systems linked to warming, drought, flooding, and precipitation variability and extremes, particularly for poorer populations in urban and rural settings.
  • Risk of loss of rural livelihoods and income due to insufficient access to drinking and irrigation water and reduced agricultural productivity, particularly for farmers and pastoralists with minimal capital in semi-arid regions.
  • Risk of loss of marine and coastal ecosystems, biodiversity, and the ecosystem goods, functions, and services they provide for coastal livelihoods, especially for fishing communities in the tropics and the Arctic.
  • Risk of loss of terrestrial and inland water ecosystems, biodiversity, and the ecosystem goods, functions, and services they provide for livelihoods.

The least developed countries are most vulnerable to these risks because they have limited ability to cope. (IPCC, Climate Change 2014: Summary for Policymakers, p. 13.) The magnitude and breadth of these risks have become a matter of widespread public knowledge through media reports of impacts that have occurred. These reports have reduced the number of people in the US who doubt that climate change is happening. But this increase in public awareness and concern has not led to political action to accelerate mitigation steps. In fact, under our present Republican administration, the opposite is happening. Consequently, the emissions rate is increasing. Political upheavals in Europe, South America, Africa, and the Middle East are generating both migrations of people and nationalistic resistance to the migrants, signs that climate change is disrupting society on a global scale.

Even though many of the risks cited above have already been incurred, there is still time for us to take adaptive as well as mitigating steps. The steps taken will vary according to local needs, but will fall under the following broad headings (IPCC, Climate Change 2014: Summary for Policymakers, p. 28):

  • Human development (education, health, housing)
  • Poverty alleviation
  • Livelihood security (especially for indigenous people)
  • Disaster risk management (remember Hurricane Katrina?)
  • Ecosystem management
  • Land use planning (agriculture and forestry practices, urban development)
  • Structure and physical modifications (sea walls, water storage, road and infrastructure improvements, technology innovations)
  • Institutional changes (laws, policies, financial systems)

The urgency of the climate change threat projected by models has already been felt through impacts that have actually occurred. Some of the most commonly experienced of these impacts are (IPCC, Climate Change 2014: Summary for Policymakers, pp. 30-32):

  • Decreasing Arctic sea ice cover
  • Retreat of glaciers
  • Decreases in forests
  • Range shifts of plants and animals
  • Death of coral reefs
  • Increased coastal erosion
  • Reduced fisheries production
  • Degradation of indigenous livelihoods
  • Increased wildfires
  • Increased flooding & drought intensification
  • Thawing of permafrost
  • Stagnation and decline in wheat yields

Stabilizing the climate and adapting to its change will require a transformation of people’s beliefs, values, world-views, and aspirations. It calls for a consciousness shift comparable to what humans underwent in the transition from magical to rational thinking during the ancient world, or from the geocentric to the heliocentric model of the universe. We can no longer think of ourselves as the lords of creation, and of the planet as a resource to be endlessly exploited to satisfy our need for “progress.” We must become Gaia’s partner, not her adversary.

Climate change presents us with an enormous challenge. But it also gives us an enormous opportunity to correct many of the imbalances in the global community, to improve international relations, and to advance the technological underpinnings of society.

Let’s get on with it.

Appendix: What You Can Do

Climate change is a global challenge that affects every living person. Just by breathing we are emitting CO2 into the atmosphere. This may seem insignificant, but there are 7.7 billion human beings on Earth. Add in livestock, pets, and wild animals, and the CO2 count becomes significant—about half of total emissions, according to James Lovelock. The point is not to stop breathing, but to be aware that we are in this together. There are several actions we can all take to help meet the challenge of global warming.

Stay informed. Climate scientists are continually publishing the results of studies in scientific journals, and the important findings are often summarized in mainstream media. The New York Times, The Washington Post, The Wall Street Journal, and the Los Angeles Times frequently cover climate science. If you do not subscribe to these newspapers, you can still read the articles online. Climate science findings are updated on major government and other institutional websites: in addition to the IPCC, there are NASA, NOAA, the Global Carbon Project, and the U.S. Global Change Research Program, which publishes the National Climate Assessment. The Heat is Online is a private website maintained by the environmental journalist Ross Gelbspan.

Inform Others. Use social media to share links with Friends on Facebook and Followers on Twitter.

Write to your local, state, and federal representatives urging them to make climate change a priority policy issue.

Vote for candidates who accept the findings of climate science. Dump the deniers.

Be energy efficient. Lower thermostat in winter, raise in summer. Turn off lights. Observe speed limits when driving. Require gardeners to use electric, not gas-powered, leafblowers. Install solar panels on your home. Don’t eat beef.


Further Reading. There are a number of recently issued books on climate change and its challenges:

Preparing for Climate Change by Stephen H. Schneider and Michael Mastrandrea

Economic Risks of Climate Change: An American Prospectus by Trevor Houser

Storming the Wall: Climate Change, Migration, and Homeland Security by Todd Miller

Greening the Global Economy by Robert Pollin

The Uninhabitable Earth: Life After Warming by David Wallace-Wells


Arthur Hoyle is the author of The Unknown Henry Miller: A Seeker in Big Sur, published in March 2014 by Skyhorse/Arcade. He has also published essays in the Huffington Post and the zine Empty Mirror. His second non-fiction book, Mavericks, Mystics, and Misfits: Americans Against the Grain, will be published later this year through Sunbury Press.


