Weather forecasts, especially during active patterns, can often vary widely from one source to another. If forecasts are all based on the same data input, why would that be the case?
This commonly asked question came to mind because I had just read a Seattle newspaper interview with the National Weather Service Warning Coordination Meteorologist (WCM) for that region.
That WCM, when questioned on this topic, said, “The stations all get their forecasts and weather information from the National Weather Service; this includes satellite and radar images.” He went on: “But during more tranquil weather, the local weather anchor may tweak it a little bit, the high or low a few degrees, the primary reason being to differentiate their forecast from competing stations. It really does not make that much difference.”
There are some elements of truth in what he says, but he is off the mark on some other elements.
Yes, we are privy to NWS forecasts, just as you are; they are public products. But he clearly implies we simply parrot NWS forecasts one way or another.
In the interest of transparency: if the weather pattern is very simple, there should be little variation between forecasts in the government and private sector in most cases. As a communicator, I might not use the precise language chosen by the NWS Buffalo Forecast Office in such cases. But there’s nothing in it for me to force significant differentiation between their forecast and mine just to be different.
In my own case, I try to be an interesting communicator and use more colorful, memorable language than NWS regional and national management might permit. As for the competition, I honestly don’t have the time to watch them, and I doubt they have the time to watch me. We are on the air opposite one another, and most of us don’t need to know or depend on what a competitor is saying. I can’t speak for all broadcast meteorologists and weathercasters, but I know that’s the case with me.
As for the data, it’s true that a large portion of what we work with is generated by NOAA and its NWS. The satellite imagery you see on air comes from NOAA, and most of the raw radar data comes from the NWS network of Doppler radars across the country. Television stations pay hefty fees to private data vendors for access to this data.
However, meteorologists also rely on computer model and ensemble data from Environment Canada, the British Met Office and the European Centre for Medium-Range Weather Forecasts, also based in Britain. We occasionally examine global models from the Japan Meteorological Agency and even the Bureau of Meteorology in Australia. That’s not only true for those of us in the private sector; it’s also true for National Weather Service meteorologists.
There is no nationalism in choosing which model to weigh more heavily in a forecast; that would be scientifically irrational. When the NWS feels the European model is out ahead of an American model on a storm, its forecasters don’t hesitate to rely more heavily on the European model, any more than I would. That was the case in the approach of Hurricane Sandy, when the European model far outperformed the American model for several consecutive days.
In addition to government-generated models, most broadcast meteorologists have “in-house” high-resolution models that run off raw NOAA data but produce forecasts based on different sets of equations and algorithms developed by private sector modelers. There are also newer private sector models on a grander scale, produced by IBM and Panasonic on supercomputers, that are showing superior forecast verification scores to the NWS GFS model.
So, the Seattle NWS meteorologist has somewhat overstated his case on where the data comes from. (Despite being friends with many NWS meteorologists, I don’t know all the ins and outs of what transpires in their workplace, and they don’t know all that goes on in a television weather center.)
There is a more important principle the Seattle NWS meteorologist skipped in his response. When students study meteorology at the university level, they will be instructed somewhere along the academic way to avoid looking at the NWS forecast until their own analysis is complete. That was true when I went to college and, in speaking with young meteorology majors, it’s still true today.
For the competent meteorologist, that analysis typically takes quite some time, and longer in complex situations. The reasoning behind this is to avoid putting on intellectual blinders at the start of the workday. If I start my work by looking at what might be an otherwise excellent NWS forecast (or an occasionally bad one), it will bend my thinking from the get-go. Yes, there should be some continuity from one forecaster’s shift to the next, both in the NWS and in the private sector. But if a previously issued forecast is headed down the tubes, fresh thinking and analysis are called for.
I will say this: if, after all my analysis, I put together a forecast and then see the NWS forecast differs dramatically from my own, I will try to find the time to see whether I’ve missed something along the way. Or it may sometimes be a matter of the NWS forecaster on duty having missed something. Most meteorologists I’ve known take pride in their work and like doing their own work when and where possible. The meteorologists who join the NWS have been taught the same principles. They seek continuity when merited, and make changes as needed.
Even if we did get all our data from NOAA and the NWS, there would still be variations between forecasts. Despite big strides over the years in the quality of weather forecasting, it is not an exact science and never will be. I go back to my crusty LEGO analogy: the atmosphere is not composed of LEGO blocks that snap neatly into place. The real-time surface and upper air data, the interaction between oceans and the atmosphere, the uneven heating of land with varying amounts of moisture in the air and the soil, and the blending of forecasts put out by so many models add up to a sea of possibilities and probabilities.
We get an inordinate amount of help from computers and high-resolution data, yet there is still an element of interpretive art in producing a weather forecast. We are not yet near the point where forecasts can be purely computerized with good verification. Many “crap apps” (pardon the expression!) prove me right on this point, generating simply awful and unreliable junk forecasts, though there are some good ones with human input. All that data has to be interpreted by trained humans and blended with experience and pattern recognition.
Just as doctors can differ on diagnoses, meteorologists can differ with one another as well. I suppose forecasts are more uniform most of the summer in Needles, Calif. For most of the globe, though, it’s always going to be a case of variations on a theme. That’s what makes our jobs more challenging and — secretly — more fun.