Global Warming And Hurricanes: Only Heat, Or Is There Light?
Was there ever an Atlantic storm season to match 2005? The popular media would have us believe the 2005 season was singularly destructive, uniquely devastating, and that it's only going to get worse next year. Then again, maybe we've seen it all before. Just ask some old-timers around town. “The 1920's were particularly bad for South Florida,” explains Dr. Chris Landsea. “Miami really got clobbered every few years.” Dr. Landsea ought to know; not only is he a NOAA meteorologist with the National Hurricane Center, he also grew up in Miami. And though he's a rather young man himself, he has a fine sense of Florida history. “Why do you think the University of Miami called their sports teams the ‘Hurricanes' sixty years ago?” Dr. Landsea asked recently during his presentation at Barry University in Miami Shores (15 February 2006).
By now the sobering tally is well known (see figure 1): last year's total of 27 named Atlantic storms smashed the previous record of 21 in 1933. What's more, 2005 saw 15 hurricanes (twice the average number) with seven hurricanes reaching major status (at least Category 3 on the Saffir-Simpson scale), plus an unbelievable total of three that reached the most powerful Category-5 status. Storms of such power used to be thought of as once-in-a-lifetime events where they struck. Perhaps no longer.
Was 2005 really that bad?
Historically, the Florida Peninsula has seen its fair share of hurricanes (see figure 2), but lately we seem to have attracted storms the way a lamp draws night moths. The major hurricanes Charley, Frances, Ivan and Jeanne crossed the state in 2004, wreaking havoc and sending the record-keepers searching for precedent. We thought (or hoped) it couldn't get worse, but it didn't take long for that notion to change. In July 2005 Hurricane Dennis quickly reached major status in the Gulf of Mexico, grazing Florida's western panhandle at near Category-4 strength.
The 2005 South Florida storm season began in earnest in late August, as the deceptively mild Tropical Storm Katrina formed just east of the Bahamas. What promised to be little more than a rainy nuisance for Broward and Palm Beach counties suddenly intensified and veered off course, careening into North Miami as a Category-1 hurricane. Locally severe flooding occurred in Miami-Dade and Broward counties, with schools and businesses closed for days. A main highway overpass construction project collapsed, choking local traffic. Less than two weeks later, Hurricane Rita roared through the Florida Keys, brushing Miami-Dade and seriously rattling nerves.
But the absolute topper was the late-October freak Hurricane Wilma. Though weaker than the Category-5 monster she was while demolishing Cozumel and Cancun, Wilma still had plenty of wallop left as she slammed into Naples at high Category-3 strength. Eight hours later the storm exited Florida near Ft. Lauderdale, pounding Grand Bahama Island on her way out, and leaving nearly four million battered and shaken Florida residents in her wake. Major property damage was widespread and electrical power outages lasted over two weeks in much of Miami-Dade and Broward counties. By late November, life had returned to nearly normal, but many nervous eyes continued to watch the Weather Channel every night.
If you live in South Florida, well, yes…it was a really bad year.
Are we in for more of the same next year?
The brutal storms last year left everyone asking yet again: “Can it possibly get any worse?” Good question. The NOAA climatologists have their own theories. There is decadal variation in storm seasons; that is, Atlantic storm seasons apparently move through 20-40 year cycles in the numbers and intensities of storms. The 1940's through the 1960's brought relatively more storms per season, while the 1970's through the early 1990's saw significantly milder seasons with noticeably fewer storms. The 1995 and 1996 seasons seem to have marked the return of the more vigorous pattern. Dr. William Gray, a leading climatologist at Colorado State University, believes the 2004-2005 patterns will likely remain with us for some time, at least for the next 10 years or so. This doesn't mean we'll have as many storms every year, but our average season will likely be more active than what we saw during the 1980's. Dr. Gray's opinions certainly matter to Dr. Landsea; twenty years ago he was one of Dr. Gray's young doctoral students at Colorado State.
Dr. Landsea has culled data from over 100 years of storm records. Some come in the form of modern digital output from sophisticated electronic equipment aboard NOAA hurricane-hunting airplanes or from satellites in geosynchronous orbit, while others come in the form of old-fashioned ships' logs or century-old newspaper articles. He thinks that the current storm patterns are not particularly out of line with those of 60 years ago (see figure 3); we might be in for a rough stretch resembling the 1940's.
Some say recent storms are economically damaging to an extent never seen before, but don't be fooled by the big costs of today's hurricanes. “Once you normalize the data to adjust for inflation, population growth, and especially per capita wealth, today's storms aren't any more destructive or costly than those great South Florida storms of the 1920's,” Dr. Landsea points out. “Today, we all have a lot more personal wealth and real estate that gets wiped out in one big hit, but it was relatively just as hard on folks way back then.”
Not all agree with that assessment. Some experts believe it just might be getting worse. The 21st-Century market economies, insurance industry, and local, state and federal government agencies are much different in design, scope and responsive behavior compared with those of the 1920's. Recovery from bad storms is now a different process than it used to be.
Can we be sure about that recovery? Hurricane modeling is an evolving art, and as such, we should be prepared for surprises. Before we place our bets on the 2006 hurricane season, let's look at the process of storm prediction itself.
Storm modeling for fun and profit
The goal of any hurricane model is to provide information on the two most important features of a storm: track (where it's going) and intensity (how hard it's going to blow). Today, NOAA employs about two dozen sophisticated software packages, each requiring massive computing power and memory, and each also bearing an intriguing mnemonic name. Perhaps the most famous example is CLIPER (short for “CLImatology” and “PERsistence”), a software package used by NOAA since the 1960's.
With any given model we hope for accurate predictions in both track and intensity, but in practice, the packages tend to give us “either/or” types of performance. That is, some packages (e.g., AVN, LBAR, and GFDL) are usually better suited for track, while others (e.g., SHIPS and SHIFOR) are usually better suited for intensity.
There are two general approaches to programming the models. First, there are the so-called “statistical” models that rely on historical data gleaned from the past. These use enormous datasets from previous seasons to generate regression equations that predict future events for any particular storm.
Then there are the so-called “dynamical” models. These do not use any historical data; instead, they use complicated mathematical equations and the laws of physics to predict the future atmospheric behavior of the storm-in-progress based only on its current state variables (so-called “real time” input).
In theory, the “statistical” and “dynamical” approaches are independent of one another, but in practice, NHC forecasters rely on various combinations. According to the fine website “HurricaneAlley.Net” (see references), the NHC98 package is one such model that combines statistical and dynamical solutions. The GUNS model averages the outputs of the GFDL, UKMET and NOGAPS models.
Amateurs give it a try at Barry University
Without taking a closer look at the modeling process itself, we tend to greatly underestimate the complexity of analytical weather forecasting, and we assume everything is under control once the computers start humming. But it's hard to describe the detailed complications inherent in these sophisticated packages. It takes years for a project team to develop a reliable forecasting model for NHC.
Since 2004, I've used a greatly simplified approach to teach high school students the basics of weather modeling in the Barry-NOAA Summer Environmental Workshop. Students use digital light meters and hand-held thermometers to measure solar flux and surface temperatures on a variety of campus surfaces such as asphalt vs. grass (see figure 4). These measurements are then used to generate simple straight lines on graph paper.
Most adults recall with some dread their old middle school nemesis “y = mx + b,” the formula for a straight line. We let x = the amount of light striking the surface per unit area per unit time, and let y = surface temperature in degrees Celsius. The students place their recorded values into spreadsheets and click away for regression options. The spreadsheet calculates m (the slope of the line), and b (the y-intercept value). Once we have m and b, we can predict y from x.
The regression equation generated in this way presents the simplest relationship between solar energy and ground temperature. It allows us to enter any given value for solar energy, and the equation immediately cranks out the predicted temperature value. It sounds easy, but the predicted temperature response comes with a calculated range of error. Cloud cover, relative humidity, surface cover, time of day…all these represent uncontrolled variables in the first analysis, and the students quickly learn that a better analysis requires far more data description. The model builds in complexity as it increases in its predictive ability.
Fortunately, the NHC hurricane models have been through many years of testing and validation. A model might begin as little more than an interesting student exercise in some meteorology professor's lab, but over time the package must mature into a reliable forecasting tool. Let's see how the modeling process works in real time.
Storms have their own good and bad luck
The forecasting begins with the birth of a new tropical depression somewhere out in the Atlantic Ocean. Each depression forms opportunistically, as it were, out of one of the regular atmospheric disturbances blowing offshore along the western coast of Africa. It takes a lot of luck for a storm to hold together as it follows the trade winds across the Atlantic because the odds are usually stacked against hurricane formation. “There are a lot of subtle effects that influence hurricane formation,” says Dr. Landsea. “But the big three are vertical wind shear high in the atmosphere, sea surface temperature, and deep-water temperature.”
A developing storm must fight vertical shear, the effect produced when the trade winds down at the ocean surface push the storm westward while high altitude winds six miles straight up are often racing back towards the east. The greater the vertical difference in wind directions and speeds, the more likely it is that the top of the storm shears off, and the entire system quickly dissipates. By corollary, the lesser the vertical shear, the more likely the storm will grow.
But a low-shear environment alone is not enough. Generally, it takes warm surface water to fuel the storm, but it takes deep warm water to energize it up to major status. Most hurricanes struggle to strengthen significantly because cool water often lies too close to the warm surface water. Stormy turbulence mixes the deeper cool water into the surface, and the thermal engine sputters. Hurricanes Katrina and Rita both lucked into deep warm water as they passed over a persistent loop of seawater in the Gulf of Mexico. They extracted that deep thermal energy and exploded to monstrous sizes. Had the steering winds shifted only slightly, those storms might have passed over surface waters lacking the deep heat, and New Orleans might have sustained far less damage.
Once formed, a new storm is quickly targeted by the storm watchers. The models are fed their initial data: sea-surface temperature (SST), air pressure, wind speed, direction, latitude, longitude, etc. The analyses and solutions are then updated every six to twelve hours. The output comes in the form of predicted track and intensity values for 24 hours, 48 hours, 72 hours, and 120 hours into the future. With each successive run, the models are evaluated on their performance and predictions (i.e., what the storm is doing now compared to what the model predicted six to twelve hours ago). Then the models are re-calibrated, stoked with new data and turned loose for the next run.
Because each hurricane model uses its own unique blend of equations and input data, no two models predict exactly the same track and intensity for a storm. Indeed, a model such as GFDL might perform brilliantly during the run of one hurricane, yet a week later, with the next storm it might fail miserably. We sometimes even see a model behaving schizophrenically within the history of a single storm, with predictions bouncing all over the lot from run to run.
Predicting the storm's future isn't the end of the story. At the conclusion of each run, the models do not simply delete the old data and clear the decks for the next round; all initial variable states and numerical solutions in each successive run must be carefully stored in order to build long-term archives. The memory banks of the NHC Cray super-computers are necessarily gigantic. The archival record of a single storm can form the basis of a graduate student's entire doctoral thesis. Even in the off-season, storm data management is full-time work for NHC programmers and data specialists. Once the storm season's on, though, the job takes on special urgency.
It can be a little stressful around NHC when bad hurricanes are prowling nearby. Having a cool head is a major job requirement. Fortunately, NHC's Max Mayfield is famous for staying cool under pressure. A NOAA veteran since 1988, he became Director of NHC's Tropical Prediction Center in 2000. Not only is he a world-class meteorologist, he's also an award-winning educator and administrator. He trains emergency planners and community leaders in hurricane preparedness, coordinates a fleet of research aircraft, supervises a large staff of software specialists, faces the glare of world media, and still manages to keep his family home safe during the storms.
The official forecast… place your bets
Every six to twelve hours NHC has critical decisions to make based on the huge amount of output. Forecasters must use years of experience and their own intuition to produce the so-called “consensus” track, one that lies somewhere near the middle of the many model predictions. The consensus track comes with a cone of error, an ever-widening range of uncertainty looking out in time ahead of the storm. For example, we are relatively certain of where the storm will be in the next 24 hours, but much less certain of where it will be five days from now.
And if we find uncertainty in predictions for a single storm, imagine the difficulties in looking ahead to the next storm, or to the next storm season. The old adage that “one year might be a fluke, but two years makes a pattern” isn't very useful. Dr. Landsea distinguishes between the relatively short-term events that might affect storms within one or two seasons, and the long-term events that affect decades of storm behavior.
Those long-term factors include the so-called “conveyor belt” of surface and deep ocean water that circulates slowly yet majestically around the globe in a grand cycle spanning many centuries. We find surface water sinking in one place, and rising later elsewhere, usually thousands of miles apart. It is the disruption to this conveyor belt that might be the cause of the 20-40 year decadal cycles we see in the storm data.
Can we trust those old records? Hurricane data available from the 1930's are nowhere near the level of sophistication and reliability found in today's datasets, and the analytical methodologies are not comparable in any way. On the other hand, it's hard to find any logical errors in Dr. Landsea's statistical methodology and rigor. The number crunching continues; every year NHC introduces new modifications to their packages.
Global warming and hurricanes
Some experts suggest that the accumulating greenhouse gases in our atmosphere, mainly carbon dioxide (CO2) and methane, have set the earth on a potentially irreversible global warming trend. CO2 is present in our atmosphere only in the relatively tiny concentration of 0.036%. In other words, just 360 of every million molecules in the atmosphere (360 parts per million, or ppm) are CO2; methane is even rarer. Yet these relatively rare gases trap heat in ways disproportionate to their low abundances. CO2 emissions are by-products of burning coal, oil, and fuel wood. Modelers now predict we might double our current CO2 concentration to over 700 ppm by the end of the 21st-Century. Evidence of the warming effect is already appearing in accelerating rates of polar ice melt in Greenland and Antarctica. Could the increase in trapped heat create warmer ocean surfaces and more destructive hurricanes?
Some modelers suggest this warming trend is behind the recent surge in Atlantic hurricane activity, but that statistical linkage is clearly absent in Dr. Landsea's data. Greenhouse gases have been increasing steadily for over a century, but the storm season fluctuations have not correlated well with the global warming data. Other large-scale climate systems such as the El Niño-Southern Oscillation (ENSO) and the North Atlantic Oscillation (NAO) affect the numbers and intensities of storms from year to year, but not in entirely predictable ways.
We hope the models will become better over time, but hope is based on the successful recruitment and training of the next generation of hurricane modelers. I like to think that some of our recent Barry-NOAA students have caught the “weather bug,” so to speak, and may soon begin professional careers with NOAA. Twenty years from now, “old” Dr. Landsea himself might be turning his models over to a new generation of forecasters, but there will always be that statistical uncertainty to deal with. There is no such thing as a sure bet in long-range hurricane forecasting.
Still, it doesn't bode well for South Florida in 2006. Let's hope for the best but prepare for the worst.
For further reading and browsing
Jeremy Montague is Professor of Biology in the School of Natural and Health Sciences (SNHS) at Barry University in Miami Shores, FL. He earned his Ph.D. in biology at Syracuse University, and has been teaching and conducting research in the environmental sciences for over thirty years. Dr. Montague came to Barry University in 1983; today he teaches lectures and labs in first-year general biology and botany, as well as upper-level coursework in ecology, marine biology, evolution, and biostatistics. Since 2003 he has served as Co-Director of the Barry-NOAA program (NOAA Award NA03OAR4810129, Sister John Karen Frei, Ph.D., Grant Director).