***
Philadelphia Eagles offensive linemen average 336 pounds and stand 6 feet 6 inches.
***
Roland Busch, CEO of Germany-based Siemens, said in an interview there’s overregulation in Europe that stifles competition. “We don't have one market. When you're a startup and you want to scale, I mean where do you go? You go to the United States. You can scale within one market."
***
Overcoming Chaos in Climate Science
A characteristic of the limited climate discussion is how caution runs only one way. Concern is permitted on the climate side but not on the proposed 'solutions.' One-sided open-mindedness is an oxymoron, unwise and anti-reflective. It is the hallmark of anti-science, superstition and its derivatives. This is from an interesting article that examines the accuracy of faith-based climate science, the models.
In chaotic systems, the results only appear to be random, but they’re not random at all — they’re entirely deterministic. If you knew exactly the initial conditions of such a system and could precisely describe the physical processes, you could predict tornados. It’s only due to our lack of knowledge these events look random. You can’t average away chaos.
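The point about determinism and sensitivity to initial conditions can be illustrated with the logistic map, a textbook chaotic system (my own toy example, not a climate model): two runs whose starting values differ by one part in a trillion end up bearing no resemblance to each other, even though every step is fully deterministic.

```python
# Logistic map at r = 4, a simple deterministic chaotic system.
def logistic(x, r=4.0):
    return r * x * (1.0 - x)

# Two initial conditions differing by one trillionth.
x_a, x_b = 0.3, 0.3 + 1e-12

max_gap = 0.0
for _ in range(60):
    x_a, x_b = logistic(x_a), logistic(x_b)
    max_gap = max(max_gap, abs(x_a - x_b))

# The tiny initial gap roughly doubles each step; within a few dozen
# iterations the two trajectories have fully decorrelated.
print(max_gap)
```

No randomness is involved anywhere, yet no measurement precise to "only" twelve decimal places could tell you where the system ends up.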
Climate modelers address this problem by taking our limited data and using the uncertainties in the many model parameters to tune the models in order to force a fit to the data [3]. But the fit isn’t unique, as there are many ways to tune the models. They believe this history-matching will neutralize the sensitivity to the initial data, but as soon as the simulation moves from the tuning phase to prediction, numerical dispersion takes over again.
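The non-uniqueness of history-matching is easy to see in a deliberately oversimplified sketch (my own construction, with made-up numbers, not an actual climate model): two different parameter sets reproduce the same historical observation exactly, yet diverge once the forcing changes.

```python
# Toy two-parameter model: temperature = base + sensitivity * forcing.
def model(base, sensitivity, forcing):
    return base + sensitivity * forcing

historical_forcing = 1.0
observed = 15.0  # the single "historical" observation both tunings must match

tuning_a = (14.0, 1.0)  # low sensitivity, high base
tuning_b = (10.0, 5.0)  # high sensitivity, low base

# Both tunings fit the history perfectly...
assert model(*tuning_a, historical_forcing) == observed
assert model(*tuning_b, historical_forcing) == observed

# ...but once the forcing doubles, the predictions diverge.
future_forcing = 2.0
pred_a = model(*tuning_a, future_forcing)
pred_b = model(*tuning_b, future_forcing)
print(pred_a, pred_b)
```

A perfect fit to the past is no guarantee the tuned parameters are the physically right ones, which is exactly why the fit "isn't unique."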
Given this hypersensitivity to initial conditions, just how accurate is the temperature data that’s used in these models? There are certainly problems with the temperature data. Weather station instrumentation changes over the decades. Stations are relocated. There are maintenance and record issues, and very importantly, environmental changes.
The World Meteorological Organisation recognizes this and has set up a system for quality-ranking the siting of weather stations, rating them from 1 through 5. Naturally, meteorological bureaus are rather coy about how good their weather stations are according to WMO rankings. For example, I couldn’t find any data for the Australian Bureau weather station ratings, but I did find this chart for the UK Bureau [4], which seems to be a little more open than our own.
I don’t know how the UK weather station portfolio rates against the rest of the world, but I suspect it would be in the top tier. Station rankings 1 to 3 all have expected environmental errors less than 1 degree. But not even the top sites can measure to a trillionth of a degree. Rankings 4 and 5, 80% of the UK dataset, have errors of 2 and 5 degrees respectively. Basically, they’re junk.
They address this problem by averaging the data. For example, in each 100×100 km grid cell there might be one, ten, or no weather stations. They average the good with the bad to come up with a representative temperature for each cell through what’s called a homogenization process, a fancy name for averaging. It’s easier to get Coca-Cola to reveal its secret recipe than to get the bureaus to reveal how this is done, and if a different homogenization algorithm is used, you will get a different temperature, with differences far greater than a trillionth of a degree.
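How much the choice of averaging scheme matters can be shown with a small sketch (hypothetical station readings and a quality-weighting rule of my own invention, not any bureau's actual homogenization algorithm): a plain mean and a quality-weighted mean of the same grid cell disagree by most of a degree.

```python
# One grid cell's stations: (temperature reading in degrees C, WMO-style
# siting class, where 1 = best and 5 = worst).
stations = [
    (14.2, 1),
    (14.5, 3),
    (16.8, 5),  # poorly sited station reading warm
    (17.1, 5),
]

# Scheme 1: average everything equally, good sites and junk alike.
simple_mean = sum(t for t, _ in stations) / len(stations)

# Scheme 2 (an arbitrary illustrative choice): weight each station by
# 1/class, so better-sited stations count more.
weights = [1.0 / c for _, c in stations]
weighted_mean = sum(t * w for (t, _), w in zip(stations, weights)) / sum(weights)

print(round(simple_mean, 2), round(weighted_mean, 2))
```

Two defensible-sounding algorithms, one set of raw readings, two different "representative" temperatures for the cell, and the gap between them dwarfs the trends being claimed.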
It’s clear that the certainty many climate advocates place on these models and their data is grossly overstated, and more skepticism is required by decision-makers. I believe there should be audits of these models and their data, by statisticians from outside the climate industry. This is unlikely to occur, given what happened when McIntyre and McKitrick tore apart Professor Michael Mann’s infamous hockey stick graph, which was once used as an Intergovernmental Panel on Climate Change logo but has now been quietly memory-holed.
(This is from an article by Greg Chapman, a former computer modeler. I made a few additions and corrections.)