A Posting For Meteorologists - The Future of Forecasting
2022 tornado, image by Ben Holcomb. The NWS did not provide an advance warning of this Kansas tornado; the tornado was already in progress when the warning was issued.
[If you aren't a meteorologist or meteorology student, feel free to skip this post.]
On December 1, we published a piece outlining our recommendations for the future of the National Weather Service. There are serious issues with the way meteorologists are educated and trained in 2024 that desperately need to be addressed in 2025.
Today, I want to focus on one of the recommendations pertaining to meteorologists' training:
There is an expectation that meteorologists should be able to intervene and change a forecast when the computer forecasts are obviously wrong. But, as aviation safety has learned, there is no way for people to overrule defective automation output when they have little or no experience doing so.
An analogy: Why did Asiana Airlines Flight 214 crash in 2013, in clear weather, on approach to San Francisco International Airport? Because the pilot had never attempted to hand-fly (i.e., without automation) a 777 during landing! Three died, 180 were injured, and a jumbo jet costing $440 million was destroyed. That is a close analogy to expecting a meteorologist to make an accurate intervention when the models appear to be badly off. How can we expect them to make forecasts independent of the models when they have no experience doing so?
I bring this up because of an insightful post on Twitter/X that applies to weather forecasting and storm warnings as well as to education in the broader sense:
Just had a fascinating lunch with a 22-year-old Stanford grad. Smart kid. Perfect resume. Something felt off though.
He kept pausing mid-sentence, searching for words. Not complex words - basic ones. Like his brain was buffering.
Finally asked if he was okay. His response floored me.
"Sometimes I forget words now. I'm so used to having ChatGPT complete my thoughts that when it's not there, my brain feels... slower."
He'd been using AI for everything. Writing, thinking, communication. It had become his external brain. And now his internal one was getting weaker.
Made me think about calculators. Remember how teachers said we needed to learn math because "you won't always have a calculator"? They were wrong about that.
But maybe they were right about something deeper.
We're running the first large-scale experiment on human cognition. What happens when an entire generation outsources their thinking?
Don’t get me wrong, I’m beyond excited about what AI and AI agents will do for people in the same way that I was excited in 2009 when the App Store was launched.
But, thinking out loud, you've got to think this guy I met with isn't the only one that's going to be completely dependent on AI.
This is a close analogy to the issues with the way meteorologists are trained and the way they practice their art/science. When I was being trained in college, OU had two ex-Air Force meteorologists who taught the forecasting portions of the curricula. While we were taught to use the nascent computer models, we were also taught ways of forecasting completely independent of the models. Those methods still work great.
I'm told that in today's university meteorology curriculum, forecasting is often taught by a professor with little or no "real world" forecasting experience, and the message is largely, "You can't make a better forecast than the models, so don't bother to try."
If you look at a comment from an NWS meteorologist at the bottom of the December 1 piece, it says,
"As someone who works in the NWS field office, the problem goes even deeper. We have been pushed and in come cases forced to utilize the National Blend of Models (NBM) with little or no forecaster intervention."This explains why expecting meteorologists to overrule the models in unusual, and especially high-impact, weather situations is completely unrealistic. That is, I believe, one of the reasons why the National Weather Service missed the damaging tornado that struck Scotts Valley, California, Saturday. The details of the warning miss are here.
How do I know? AccuWeather (and, before it, WeatherData, Inc.) trains its meteorologists in a different manner than the National Weather Service. With the benefit of that training, AccuWeather was able to provide an accurate tornado warning 16 minutes in advance to its clients in Scotts Valley.
The fact is, we know much more about optimum techniques for weather forecasting and storm warnings than we did in the 1970s. The suggestions I, and others, have made are a start until we can create a National Disaster Review Board.
Note: with finals and travel, and because of the importance of this piece to university students studying meteorology, I'm going to make this the featured blog piece into Saturday.
© 2024 Mike Smith Enterprises, LLC
"We're running the first large-scale experiment on human cognition. What happens when an entire generation outsources their thinking?"
I disagree with your assessment that this post should be for meteorologists.
Au contraire, mon frère . . . everybody should read that chilling post by Isenberg.
Sacred poop . . . I am not outsourcing my cognitive faculties . . . I don't trust it as far as I could toss your house.
I recommend unto you a book called More Than Meets the Eye by Richard A. Swenson, MD. In it, he reports that the human brain is the most complex physical object discovered by man. Subtitle: "Fascinating Glimpses of God's Power and Design." Highly recommended.
Heck, Mike, AI would not give us nearly as good a book as Warnings . . . I would stake my reputation on that.
A high percentage of the time, I agree with you. Not today. Pushback: Everybody ought to read this post.