Don Paul: The long and short of long-range forecasting

Weather forecasts generally become less reliable the further out in time you go.

Many nonscientists understandably assume the identical principles apply in predicting the extent of global warming and climate change. However, that is not quite the case. Climate prediction operates on a vastly longer timeline. Its models are designed specifically to filter out the day-to-day noise of transient, fast-moving weather systems over a period of years to decades.

It focuses on longer term trends, with prevailing probabilities for climatological patterns most likely to occur over large regions for long time periods. In other words, it and its predictive models are markedly different from weather forecasting and its models.

Still, there is a crossover time frame between weather and climate.

One organization that deals with this crossover area is the National Weather Service Climate Prediction Center (CPC). I’ve never been crazy about its title, since it generally doesn’t go beyond one year in time range. Most of what it offers beyond three months is generally regarded with appropriate skepticism by folks like me, as hard as its staff scientists work at it. Calling what CPC does “climate prediction” creates confusion. Climate models go out for decades and don’t focus on individual dry, wet, stormy, hot or cold periods in any one region for a season.

Let’s look at forecast skill as tabulated by rigorous verification studies conducted within the CPC. The X axis on this graph, the horizontal axis, shows the length of time covered in predictions. The Y axis, the vertical axis, shows the measured skill levels in forecasts over time.

This figure illustrates the S2S or weather-climate prediction gap. It shows estimated forecast skill based on lead time or how far ahead the forecast is issued, as well as the types of atmospheric phenomena being predicted for each time range. Going from weather to seasonal forecasts, prediction skill decreases. Much less is known about forecast skill and predictability sources in the S2S range, two weeks up to a season. (NOAA CPO graphic adapted from original by Elisabeth Gawthrop and Tony Barnston, IRI)

There is an area on this graph in which there appears to be a vacuum in measured forecasting skill, where you see the big question mark. This area is called S2S: the range between day-to-day weather forecasts, which run out to about two weeks, and the time range in which some climate forecasting begins. It’s the period between two weeks and a season. CPC, for example, offers one-month and three-month outlooks. Forecast performance in these “in-between” time ranges is quite difficult to pin down because the interactions between numerous variables can seem almost countless. Each interaction may be important for a few weeks at a time. Gauging these variables is very different from predicting the strength and movement of faster-moving weather systems over a few days.

El Niño and La Niña each may last for several months. But these lengthier phenomena interact with other oscillations that may cover just a two-week period within those months. These shorter-term oscillations often have a huge impact on how El Niño affects global weather. In a sense, this transitional zone between weather and climate forecasting is the fuzziest of all to measure for skill. Hence, the big question mark.

Farther to the right on the X axis is the demonstrably and consistently lower skill, more easily measured, in the three- to 12-month experimental outlooks. Here, CPC is dealing with mainly slower-moving, slower-changing interactions of both the land and the sea with the atmosphere. Scientists are looking for the general direction of systems, rather than any kind of detailed road map.

If the forecast skill in the three- to 12-month period is demonstrably low, then why bother trying to do such outlooks? The skill level will likely slowly get better, and better forecasts in this time range could have a significant beneficial impact on the economy. If we could predict a major drought over the central plains a year in advance, farmers in that region could plant less water-intensive grains that growing season. If we could predict a frigid winter in the upper Mississippi Valley, emergency managers could be better prepared for spring ice jam flooding. These are just two examples of many possible benefits. These experimental long-range forecasts out to a year are not a terribly expensive enterprise and are worth working at for future improvements in skill level.

Let’s reel it back in and see what CPC’s thoughts are for June. Keep in mind, summer patterns tend to be weaker and less well-defined than winter patterns.


See that little white notch in the Great Lakes, where Buffalo is located? See the “EC”? That stands for Equal Chances. It means there is too much uncertainty to predict a higher probability for above- or below-average temperatures. It’s an honest scientific shoulder shrug.
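For readers who like things concrete, here is a minimal sketch (my own illustration, not CPC code) of the idea behind those outlook maps: each region gets probabilities for three categories, above, near, and below normal, with climatology giving each a one-in-three baseline, and “EC” meaning no category is favored over that baseline.

```python
def outlook_label(p_above, p_near, p_below, tol=0.001):
    """Return the favored outlook category, or 'EC' (Equal Chances)
    when all three categories sit at the climatological baseline."""
    probs = {"above": p_above, "near": p_near, "below": p_below}
    assert abs(sum(probs.values()) - 1.0) < 1e-6, "probabilities must sum to 1"
    baseline = 1.0 / 3.0
    # If no category departs from the 1-in-3 baseline, it's Equal Chances --
    # the honest shoulder shrug described above.
    if all(abs(p - baseline) < tol for p in probs.values()):
        return "EC"
    return max(probs, key=probs.get)

print(outlook_label(1/3, 1/3, 1/3))      # EC
print(outlook_label(0.40, 0.33, 0.27))   # above
```

The function names and tolerance here are hypothetical; the point is only that an EC label reflects probabilities stuck at climatology, not a forecast of "normal" weather.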

As for precipitation:


CPC does not engage in hype, and will be the first to bring up the uncertainties in its forecasts and outlooks. Its efforts, even with such mixed results, are worth the undertaking to potentially mitigate the impact from seasonal weather extremes. As for my own views, I find the longer I’m in this profession the more new variables for consideration are showing up in seasonal forecasts. If I were a betting man, seasonal forecasting is not where I’d place my money. Maybe…elections; yeah, that’s the ticket!
