Let's say you have a surface station that takes the high and low temp every day, and someone builds a building across the street, so the ground around the station gets a little more shade and the average high temp drops 0.1 °C for that station. That's an inhomogeneity, so you run it through a pairwise homogenization algorithm and bingo bango, it automatically removes the inhomogeneity even if you don't know about it. And if that seems questionable, don't worry, the code is open source and you can check the tens of thousands of lines of spaghetti Fortran yourself.
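If you've never seen what that kind of adjustment looks like mechanically, here's a toy sketch in Python. To be clear, this is not NOAA's actual PHA code (which compares each station against many neighbors and uses far more elaborate changepoint tests); it's just the core idea, a step test on a target-minus-neighbor difference series, run on made-up data:

```python
import numpy as np

def detect_breakpoint(diff):
    """Scan candidate split points of a target-minus-neighbor difference
    series; return the index with the largest Welch t statistic.
    (A toy stand-in for the changepoint tests real homogenization codes use.)"""
    best_idx, best_t = None, 0.0
    n = len(diff)
    for i in range(12, n - 12):              # keep ~a year on each side
        a, b = diff[:i], diff[i:]
        se = np.sqrt(a.var(ddof=1) / len(a) + b.var(ddof=1) / len(b))
        t = abs(a.mean() - b.mean()) / se
        if t > best_t:
            best_idx, best_t = i, t
    return best_idx, best_t

def homogenize(target, neighbor, t_crit=5.0):
    """If the difference series shows a significant step, shift the
    post-break segment of the target so the two sides line up again."""
    diff = target - neighbor
    idx, t = detect_breakpoint(diff)
    if idx is None or t < t_crit:
        return target                        # nothing significant found
    step = diff[idx:].mean() - diff[:idx].mean()
    adjusted = target.copy()
    adjusted[idx:] -= step                   # remove the inferred inhomogeneity
    return adjusted

# Made-up demo: 20 years of monthly highs sharing a common trend, plus a
# -0.1 C step at month 120 when the hypothetical building goes up.
rng = np.random.default_rng(0)
months = np.arange(240)
signal = 0.001 * months                      # shared regional climate
neighbor = signal + rng.normal(0, 0.05, 240)
target = signal + rng.normal(0, 0.05, 240)
target[120:] -= 0.1                          # the shade-induced inhomogeneity

fixed = homogenize(target, neighbor)
print(f"target-neighbor gap before: {(target - neighbor)[120:].mean():+.3f}")
print(f"target-neighbor gap after:  {(fixed - neighbor)[120:].mean():+.3f}")
```

The real codes do this pairwise across a whole network and vote on where the breaks are, which is exactly why they run to tens of thousands of lines.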
It reminds me of the coding of Neil Ferguson, the British epidemiologist who has predicted 100x more deaths than actually occurred for every pandemic since the 2000s.
He promised to open-source his code after the COVID scrutiny, and it turned out to be thousands of lines of spaghetti code that had obviously been massaged and manipulated for decades.
Well, if he always predicted 100x more, that's a good thing: simply divide his prediction by 100 and you have the real number.
Bad modeling is all over the place. Hell, 90% of coof papers are meta-analyses of other coof papers. There are too many scientists (or at least people trained as scientists) and not enough actual science, even during a supposed health emergency, so it's all data-science circle jerking. There are literally hundreds of mask papers, and you know how many have actually done anything remotely resembling science? Maybe four. Maybe UBI would be a net positive, because then these people would just be smoking weed all day instead of flooding the world with garbage papers.
Oh, and I just remembered: the latest UK surveillance data for the vaccine showed that for almost every age range, more vaccinated people per 100,000 caught COVID, but they kept pointing out that it was "unadjusted," which I'm fairly sure means, "yeah, this data looks real bad, but we haven't figured out a way to cook it yet."
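For what it's worth, "unadjusted" there means crude rates: total cases per 100,000 with no age standardization, and the vaccinated population skews much older. Whether or not you trust the adjusted version, here's a toy Python sketch (every number invented) of how a crude rate can look worse for one group even when that group's rate is lower in every single age band:

```python
# Made-up numbers, purely to show the mechanics of "unadjusted":
# the vaccinated skew older, and older people catch it more, so the
# crude rate per 100k can flip against the vaccinated even though
# their rate is lower within every age band.
groups = {
    #  age band: (vax pop, vax cases, unvax pop, unvax cases)
    "18-49": (1_000_000,  2_000, 4_000_000, 16_000),
    "50+":   (4_000_000, 24_000, 1_000_000,  8_000),
}

def per_100k(cases, pop):
    return 100_000 * cases / pop

vax_pop    = sum(g[0] for g in groups.values())
vax_cases  = sum(g[1] for g in groups.values())
unvax_pop  = sum(g[2] for g in groups.values())
unvax_cases = sum(g[3] for g in groups.values())

print("crude (unadjusted) rates per 100k:")
print(f"  vaccinated:   {per_100k(vax_cases, vax_pop):,.0f}")    # 520
print(f"  unvaccinated: {per_100k(unvax_cases, unvax_pop):,.0f}")  # 480

print("within each age band:")
for band, (vp, vc, up, uc) in groups.items():
    print(f"  {band}: vax {per_100k(vc, vp):,.0f}  vs  unvax {per_100k(uc, up):,.0f}")
```

That reversal is just Simpson's paradox; it doesn't tell you which presentation is honest, only that the crude number alone can't settle it.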