Big problems are brewing for the National Weather Service

Let me be clear: The following article is not a criticism of our local National Weather Service forecast office, of other local offices, or of the quality of the scientific staffing in most of the National Weather Service. The staffing is generally first-rate and world-class in academic expertise. And I believe no other government weather service on the globe can match the predictive skill of the National Weather Service in forecasting the development of smaller/mesoscale severe weather events (tornadoes, severe thunderstorms, and even lake-effect snow).

The problem lies in the National Weather Service's ability to design what are called Numerical Weather Prediction models, and to use ensembles of those models, more effectively.

Numerical Weather Prediction is the foundation underlying all computer modeling of the atmosphere and the basis for virtually all weather forecasts, near-term and long-range. It is the means by which billions of dollars can be saved across the economy, transportation can be made safer and more efficient, and many lives can be saved.

For some time now, the National Weather Service, an agency of the National Oceanic and Atmospheric Administration, has lagged badly behind the computing and forecasting abilities of the European Centre for Medium-Range Weather Forecasts and the British Met Office. This is not a matter of national prestige; it is a matter of meeting the mission of the National Weather Service: the protection of life and property to the best of its ability.

It is also not a matter of the productivity of National Weather Service staff or their meteorological dedication. (On a personal note, years ago I trained on the interpretation of the then-new National Weather Service Doppler Radar data at our local airport office. In my many visits, not once did I see a single meteorologist show up late for a shift or leave early.)

This is a matter of giving our National Weather Service the necessary tools at major centers such as the National Centers for Environmental Prediction in College Park, Md. We lack the supercomputing power found in both the European and British centers, each located in England.

Dr. Cliff Mass, professor of atmospheric sciences at the University of Washington, has written extensively about National Weather Service shortcomings in his blog. He is sometimes given to emotional emphasis, but in general he is a highly respected scholar in the field. He has served on NOAA and other research committees, and he reminds us that the National Academy of Sciences has been equally critical of these serious problems.

As Mass has written, “Numerical weather prediction may be the most complex technology developed by our species, requiring billions of dollars of sophisticated satellites, the largest supercomputers, complex calculation encompassing millions of lines of code, and much more.”

In other words, the forecasting of weather is far more complex than TV weathercasters can even begin to explain.

Skill at forecasting has been improving incrementally for decades, but extreme events are now forecast better by the European and British models. The issue first came to public light in advance of hurricane-turned-Superstorm Sandy. The European model was far out in front of the U.S. GFS model in predicting the deadly and destructive hard left turn Sandy was to take, giving government leaders more lead time to take precautions to protect the public.

This was not a one-time case of superior performance. The European/ECMWF model beats our GFS the majority of the time, and often by a huge margin. Not only is the supercomputing capacity far greater, but the management of that capacity has been better than NOAA's. Even with the coming improvements to the U.S. model on new supercomputers, much of our capacity remains unused, and much appears to be wasted on an inferior long-range model for subseasonal and seasonal forecasts, the Climate Forecast System. This model is run 16 times per day and has proven vastly inferior to the longer-range ECMWF ensemble and other statistical methods. It appears to be a huge waste of our computer resources with very little to show for it.

You’ve seen me use the word “ensemble.” Supercomputers now run many versions of each global model, each with slightly different “start-up” conditions, since we cannot know the precise state of the atmosphere everywhere when a model run begins. The United States runs 21 members of our GFS model at a fairly coarse (read: fuzzy) resolution, while the European Centre runs 51 members of the ECMWF model at twice the resolution of the GFS!

Ensembles almost always outperform single runs of individual models, but they require enormous computer capacity. Even with new supercomputers coming online, we can hardly keep up when we’re wasting so much capacity on the nonproductive CFS model.
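For readers who like to see the idea in miniature: the sketch below is emphatically not a weather model. It iterates a simple chaotic equation (the logistic map, a standard stand-in for chaotic systems like the atmosphere) from 21 slightly perturbed starting values, mirroring the 21-member GFS ensemble. The member count, perturbation size, and step count here are illustrative choices, not NOAA's. The point is that the spread among members gives forecasters an estimate of how uncertain the forecast is, something a single model run cannot provide.

```python
# Toy illustration of ensemble forecasting -- NOT a weather model.
# A chaotic map is run from 21 slightly different starting values,
# a stand-in for our uncertainty about the atmosphere's initial state.
import random

def logistic_step(x, r=3.9):
    """One step of the chaotic logistic map: x -> r * x * (1 - x)."""
    return r * x * (1.0 - x)

def forecast(x0, steps):
    """Run the map forward `steps` times from initial condition x0."""
    x = x0
    for _ in range(steps):
        x = logistic_step(x)
    return x

random.seed(42)
steps = 20
truth_start = 0.51234              # the "true" (unknowable) initial state
observed_start = truth_start + 0.001   # our slightly imperfect observation

# A single deterministic run commits to one imperfect starting point.
single_run = forecast(observed_start, steps)

# An ensemble perturbs that observation 21 ways (like the 21-member GFS).
members = [forecast(observed_start + random.gauss(0, 0.001), steps)
           for _ in range(21)]
ensemble_mean = sum(members) / len(members)
ensemble_spread = max(members) - min(members)   # a measure of uncertainty

print(f"single run:     {single_run:.3f}")
print(f"ensemble mean:  {ensemble_mean:.3f}")
print(f"ensemble spread (uncertainty estimate): {ensemble_spread:.3f}")
```

In real numerical weather prediction, the "model" is millions of lines of physics code and every member is a full global forecast, which is why ensembles demand so much supercomputing capacity.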

Moreover, Dr. Mass says another brewing problem is NOAA’s failure, as he sees it, to bring the physics in the U.S. model up to snuff. He says we are leaving those physics mostly as they are, and that they are largely inferior not only to those in the European and British models, but to those in the first private-sector global model, produced by Panasonic.

Even the Panasonic model leaves the GFS behind on the majority of days, and that model uses the GFS as its foundation; better use of improved physics makes that possible. The next modeling leap forward must be the development of smaller, regional-scale ensembles, not just global-scale ensembles, and resolution must continue to improve. Where the terrain is complex, low-resolution, fuzzy-scale models can be almost useless for forecasting small-scale precipitation and wind events.

Back to the large scale: One recent example of the glaring inferiority of the GFS was its track forecast for Hurricane Matthew. At five days out, the average GFS track error was 250 kilometers; the ECMWF track error at five days was 100 km. Even if resources spent on intensity forecasting pay off, an intensity forecast is of little use when the track forecast at that lead time is inferior.

The trite “I haven’t even begun to scratch the surface” applies here. For those of you who have more technical curiosity on this set of problems, I offer you Dr. Mass’ excellent blog, written for laypeople in relatively understandable language.
