Simulations and models are essentially math equations or series of equations. Some allow us to accurately predict outcomes; others are presented as accurate but are not. The more assumptions you make, the more inaccuracy you introduce. As Donald Rumsfeld famously and accurately noted, there are three kinds of forces:
1. The known – predictable impact in a model
2. The known unknown – usable assumptions, but they introduce model error
3. The unknown unknown – false confidence in the model
We can build accurate models when we:
· know all the significant forces
· know the impact of each force
· have a stable environment – forces aren't changing
· have accurate and adequate data for each force
· when using assumptions, have an adequate volume of data for each force
· when using assumptions, have adequate indicators of validity
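As a minimal sketch of the first condition (knowing all the significant forces), consider a free-fall model. The physics here is standard; the object's mass, drop height, and drag coefficient are illustrative numbers, not from the original. A model that omits a significant force (air drag) predicts confidently and wrongly:

```python
import math

def fall_time_vacuum(height_m, g=9.81):
    """Closed-form fall time when gravity is the only modeled force."""
    return math.sqrt(2 * height_m / g)

def fall_time_with_drag(height_m, mass_kg, drag_coeff, g=9.81, dt=0.001):
    """Euler integration including quadratic air drag as a second force."""
    v, fallen, t = 0.0, 0.0, 0.0
    while fallen < height_m:
        a = g - (drag_coeff / mass_kg) * v * v  # net downward acceleration
        v += a * dt
        fallen += v * dt
        t += dt
    return t

# A light object dropped 100 m: the drag-free model is badly wrong.
t_vacuum = fall_time_vacuum(100)                                  # ~4.5 s
t_drag = fall_time_with_drag(100, mass_kg=0.1, drag_coeff=0.05)   # far longer
```

Both models are internally consistent; only the one that accounts for every significant force is accurate.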
In lean six sigma, "design of experiments" forecasts the impact of process changes. In insurance and banking, underwriting and pricing models forecast the risk of an individual or a group of individuals. NASA uses models to plot the orbits of spacecraft, space debris, and celestial objects.
Some forces are constant, like the speed of light (known). Models that use constants can be very accurate.
Some forces are variable, like air temperature (known unknowns). When using variable forces, you have to make assumptions. These models can be useful as long as the assumptions are accurate.
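The air-temperature example above can be sketched concretely. The speed-of-sound approximation (c ≈ 331.3 + 0.606·T m/s in dry air) is a standard formula; the echo-ranging scenario and the assumed 20 °C are illustrative choices, not from the original:

```python
def speed_of_sound(temp_c):
    """Approximate speed of sound in dry air (m/s) at a given temperature."""
    return 331.3 + 0.606 * temp_c

def echo_distance(round_trip_s, temp_c):
    """Distance to a reflector, from an echo's round-trip time."""
    return speed_of_sound(temp_c) * round_trip_s / 2

# The model assumes 20 degrees C; the actual temperature is a variable force.
assumed = echo_distance(3.0, 20)
for actual in (0, 20, 40):
    real = echo_distance(3.0, actual)
    print(f"{actual:>2} C: {real:.1f} m (model error {real - assumed:+.1f} m)")
```

The model's output is only as good as its temperature assumption: the same 3-second echo maps to meaningfully different distances across a plausible temperature range.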
Accuracy is relative because of our limited ability to measure. Time, cost, and value considerations lead us to accept approximations as good enough. Of course, many people, including some scientists, mistakenly treat measurements as if they were perfectly accurate.
To deal with measurement inaccuracy we use ranges (tolerable error). When done formally, the ranges come from distribution curves built from historical data, and they work well if your data comes from stable systems (firsthand and/or controlled). However, most of these models ignore the "outliers" – the data that doesn't fit the norm. For example, pro golfers are outliers within the whole human population, and Tiger Woods and Justin Rose are outliers within the golf population.
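Here is a minimal sketch of a tolerable-error range built from historical data, using the golf example. The driving distances are hypothetical numbers invented for illustration:

```python
import statistics

# Hypothetical historical data: amateur driving distances, in yards.
drives = [180, 200, 210, 195, 220, 205, 190, 215]

mean = statistics.mean(drives)
sd = statistics.stdev(drives)
low, high = mean - 3 * sd, mean + 3 * sd   # tolerable-error range

pro_drive = 300  # a Tiger Woods-class outlier
inside = low <= pro_drive <= high
print(f"range: {low:.0f}-{high:.0f} yd; pro drive inside range? {inside}")
```

A range built from the stable amateur system simply has no room for the pro: the model built on that distribution quietly excludes the outlier rather than explaining it.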
Historical data derived from samples – gettin' jiggy wit it. When you don't have firsthand data, or data from a controlled system, you can populate a database with samples – for example, using tree rings to estimate historical rainfall. Just know that:
· the trees may represent a sample, not a population – red flag
· the trees may not represent a truly random sample – red flag
· tree-ring analysis of rainfall is itself an estimate – red flag
· it assumes all water came from randomly distributed rain – red flag
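The red flags above stack. As a back-of-the-envelope sketch (the ±10% per assumption is a made-up figure, purely to show the arithmetic), each assumption's uncertainty compounds into the final estimate:

```python
import math

# Hypothetical: each red-flag assumption carries its own relative uncertainty.
uncertainties = [0.10, 0.10, 0.10, 0.10]  # +/-10% each, illustrative only

# Worst case: every error stacks in the same direction.
worst = math.prod(1 + u for u in uncertainties) - 1

# If the errors are independent, they combine in quadrature instead.
independent = math.sqrt(sum(u * u for u in uncertainties))

print(f"worst case ~{worst:.0%}, independent ~{independent:.0%}")
```

Even under the friendly independence assumption, four modest ±10% assumptions yield roughly ±20% overall; correlated errors can approach ~46%. Stacked assumptions are not free.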
Sometimes additional assumptions have to be made to account for things we have trouble understanding. For example, dealing with the tendency of large, complex systems (like Earth's environment) to resist change requires concepts like Le Chatelier's principle. In other words, we often need more assumptions to explain the unknown unknowns.
Wow. All of that to get to this point about Climate Impact Models. Our environment is too complex to model with accuracy, and our climate is just one sub-system within the environment.
· We have too much variance in our known unknowns
· We have too many unknown unknowns
· We cannot provide qualitatively adequate data
· We cannot provide quantitatively adequate data
Should we keep trying to improve our models? Yes, of course.
Are climate change models trustworthy? No, not until we learn much more.
When can we trust them? When they can predict with practical accuracy! When will that be? Don't know. Not in the foreseeable future.