All posts by EdwardM

This is not going to work right: CDC would like to test “virtually everyone”

The false positives will overwhelm the true positives. How will they tell which positives are real?

Testing has so far been used in the United States mostly to diagnose people who are sick or have been exposed to someone with a confirmed Covid-19 case. Screening would test virtually everyone in a given community, looking for potentially infectious people.

Source: CDC is developing new coronavirus testing guidance for screening at schools, businesses

Here’s the problem.

  • We test everyone.
  • The actual prevalence of the disease in the community is 1 in 500 people (as an example).
  • Our screening test is 99% accurate (in reality, the full test process may have a much higher error rate).
  • We test 500 people and find the 1 person who actually has Covid-19, plus roughly 1% of the other 499 people, call it 5, who are tagged as false positives.
  • We have now found six people testing positive, but only one of the six actually has the disease; the other five are false positives.
  • Public health authorities tell us that six people tested positive and “new coronavirus cases” go up by six.
  • The community, however, has a population of  50,000 people.
  • Testing “virtually everyone” in that community finds the roughly 100 actual new cases (1/500 x 50,000) and about 500 false positives (1% of the roughly 49,900 uninfected people).
  • Public health tells us there are 600 new cases (100 actual + 500 false positives).
  • 500 people are placed in two-week quarantines unnecessarily.
  • Because of this problem, testing everyone means we can never “flatten the epicurve”: as long as the prevalence of the disease is low, a large share of the positives we report will be false. Even a highly accurate test, one with high specificity, still has this problem, because no test, including the lab work and sample handling, is 100% accurate.

And oh, the actual test they are proposing to use has a false positive rate of 3% – three times worse than the 1% I used above.
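Here is a minimal sketch of that arithmetic, using the example numbers above and, for simplicity, assuming the test catches every true case:

```python
def screening_counts(population, prevalence, false_positive_rate, sensitivity=1.0):
    """Expected outcomes when an entire community is screened once."""
    infected = population * prevalence
    uninfected = population - infected
    true_positives = infected * sensitivity
    false_positives = uninfected * false_positive_rate
    reported_positives = true_positives + false_positives
    ppv = true_positives / reported_positives  # share of reported positives that are real
    return round(true_positives), round(false_positives), round(reported_positives), round(ppv, 2)

# The example above: 50,000 people, 1-in-500 prevalence, 1% false positive rate.
print(screening_counts(50_000, 1 / 500, 0.01))  # (100, 499, 599, 0.17)

# With the 3% false positive rate mentioned for the proposed test:
print(screening_counts(50_000, 1 / 500, 0.03))  # (100, 1497, 1597, 0.06)
```

With a 3% false positive rate, only about 6% of the reported “new cases” would be real.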


Air Quality Index

The Air Quality Index (AQI) is a numeric indicator of local air quality. Since last Thursday night, our AQI has been in the 500 to 526 range; the index itself only goes up to 500. Anything above 300 is considered hazardous to health for everyone.
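The index itself is a piecewise-linear mapping from a pollutant concentration onto the 0-500 scale. Here is a rough sketch for 24-hour PM2.5, using what I understand to be the EPA breakpoints in effect in 2020; treat the breakpoint table as an assumption to double-check rather than gospel:

```python
# Breakpoints as I understand the 2020-era EPA table for 24-hour PM2.5 (ug/m3).
PM25_BREAKPOINTS = [
    # (conc_low, conc_high, aqi_low, aqi_high)
    (0.0,   12.0,   0,   50),
    (12.1,  35.4,   51,  100),
    (35.5,  55.4,   101, 150),
    (55.5,  150.4,  151, 200),
    (150.5, 250.4,  201, 300),
    (250.5, 350.4,  301, 400),
    (350.5, 500.4,  401, 500),
]

def pm25_to_aqi(conc):
    """Linear interpolation within the matching breakpoint range."""
    for c_lo, c_hi, i_lo, i_hi in PM25_BREAKPOINTS:
        if c_lo <= conc <= c_hi:
            return round((i_hi - i_lo) / (c_hi - c_lo) * (conc - c_lo) + i_lo)
    return 500  # concentrations past the table are simply "beyond the index"

print(pm25_to_aqi(35.0))   # ~99, "Moderate"
print(pm25_to_aqi(400.0))  # ~434, "Hazardous"
```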

We live in the area of the fires in Oregon.

We’ve been sealed up inside our home since Thursday. Fortunately, there is little smoke smell inside as our house is sealed very tight, but outside it was extremely difficult to breathe. The smoke led to headaches, fatigue, a sore throat and slight shortness of breath: classic symptoms of smoke inhalation, I am told.

As of right now, the AQI has fallen to 169, a noticeable improvement, though still well above the normal range.

The fire storm was caused by an early Arctic air mass (high pressure) descending south from Canada. The winds blew from the colder high pressure toward the warmer low pressure to the south and on the west side of the Cascades. This produced very strong winds for 1-2 days, crossing mountain ridges at 55 to 75 mph and pouring down west-side mountain canyons.

Routine summer thunderstorms had started fires on August 16th, and those fires had been fought since then. The Lionshead fire had grown to about 22,000 acres, but when the winds hit it exploded by over 100,000 acres in about 24 hours. The Beachie Creek Fire had also started on August 16th and was at only 500 acres when the winds hit; it too was soon over 100,000 acres (now 190,000 acres) in size.

The cold air mass poured into the valleys and plains – creating an inversion layer of colder air that would not mix with the warmer air above. This trapped the smoke near the ground.

Yesterday we saw the effects of the weakening inversion, and we will likely see them again today. Yesterday the AQI fell to around 250 in the afternoon, but as the air mass cooled in the evening it went back to 468, where it remained overnight. I suspect today’s lower reading will do the same this evening, and the pattern will repeat until late in the week, when a fall weather system enters the area, delivering rain and breaking the inversion layer.

Google joins AMA, AOPA, and EAA asking the FAA to make fundamental changes to its Remote ID proposal

This is significant: AMA, AOPA, EAA and Google’s sister company, Wing, urge FAA to Make Essential Changes to Remote ID Rule | AMA IN ACTION Advocating for Members

Last December, the FAA released a Notice of Proposed Rulemaking for Remote ID of small remote-control aircraft that would de facto ban homemade radio-controlled model aircraft. That ban is not subtle; it is intentional in the FAA’s rules. The FAA fully intends to end the nearly 100-year-old model aircraft hobby: under the new rules, only certified, manufactured model aircraft that continuously transmit their location in real time (probably at the cost of a monthly subscription fee) would be permitted.

Additionally, the FAA would confine flights of existing model aircraft to about 2,400 “reservations”, a number that would gradually shrink over time to the point that all homemade radio-controlled aircraft would be banned in the United States.

The purpose of the FAA’s proposal is to seize the public airspace over the United States and turn it over to private industrial drone operations, and to raise costs for members of the public, by mandating real-time, continuous location data transmissions into cloud databases (likely for a fee), so as to largely eliminate model aircraft. This is why it is significant that Google has joined the effort to retain the traditional model aircraft hobby.

The FAA largely banned ultralight aircraft through a convoluted and confusing set of multiple rule making proceedings. The effect was to ban the use of two-seat ultralights for training purposes. The ultralight flying community has largely vanished as a result. The FAA has gone down the same path to regulate all airspace not only in your backyard, but under their proposal, to regulate the airspace even inside your home. The effect is to ultimately eliminate model aircraft in the U.S. in order to turn over the airspace in your backyard to private corporations. This is not wild speculation – this is exactly how their NPRM was written.

Computer security failures in the news

On Monday, Public Health Wales disclosed that it accidentally leaked the personal data of 18,105 Welsh residents who tested positive for COVID-19, and that data was visible for 20 hours on a public server on Aug. 30 and viewed up to 56 times, the agency said.

The data belonged to every resident of Wales who tested positive for COVID-19 between Feb. 27 and Aug. 30. It included people’s initials, date of birth, gender and general location, but not specific information on who they are. Still, for a subset of 1,926 people who live in supported housing or nursing homes, the data included the names of those locations.

Source: Data on 18,105 coronavirus patients leaks after staffer clicks wrong button – CNET

And, a bug in Biden’s campaign app enabled anyone to access voter history and other data on millions of voters.

Far too many organizations collect far too much information, and then retain it online for far too long. The result is “All your secrets belong to us”.

When we pulled official credit reports on myself and my wife, we were surprised by the number of errors in the records. For example, credit reporting agencies had us living at addresses where we had never lived. In one case, they intertwined data from a woman with a name similar to my wife’s. From what we found in our own credit file, I was able to identify the woman, her actual home address and her employer!

Over time, the quality of data retained, for far too long, in online databases degrades, and there is seldom any way to know what erroneous data has been stored about you, nor is there a way to seek a correction.

It’s worse than we thought: “Second Analysis of Ferguson’s Model”

In the past I had some comments on Neil Ferguson’s disease model and have repeatedly noted its poor quality. This model was used, last spring, as the basis for setting government policies to respond to Covid-19. Like many disease models, its output was garbage, unfit for any purpose.

The following item noted that the revision history, since last spring, is available and shows that ICL has not been truthful about the changes made to the original model code.

Source: Second Analysis of Ferguson’s Model – Lockdown Sceptics

THIS! Many academic models, including disease models and climate models, average the outputs from multiple runs, somehow imagining that this produces a reliable projection. Uh, no, it does not work that way.

An average of wrong is wrong.  There appears to be a seriously concerning issue with how British universities are teaching programming to scientists. Some of them seem to think hardware-triggered variations don’t matter if you average the outputs (they apparently call this an “ensemble model”).

Averaging samples to eliminate random noise works only if the noise is actually random. The mishmash of iteratively accumulated floating point uncertainty, uninitialised reads, broken shuffles, broken random number generators and other issues in this model may yield unexpected output changes but they are not truly random deviations, so they can’t just be averaged out.
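A toy numeric illustration of that point, using generic numpy rather than anything from the actual model: averaging removes zero-mean random noise, but it cannot remove a shared systematic error.

```python
import numpy as np

rng = np.random.default_rng(0)
true_value = 100.0

# Runs that differ only by genuinely random, zero-mean noise: averaging helps.
noisy_runs = true_value + rng.normal(0.0, 5.0, size=1000)

# Runs that share a systematic defect (e.g. an uninitialised read that biases
# every run the same way) plus some noise: averaging cannot remove the bias.
biased_runs = true_value + 20.0 + rng.normal(0.0, 5.0, size=1000)

print(noisy_runs.mean())   # close to 100
print(biased_runs.mean())  # close to 120, no matter how many runs are averaged
```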

Software quality assurance is often missing in academic projects that are used for public policy:

For standards to improve academics must lose the mentality that the rules don’t apply to them. In a formal petition to ICL to retract papers based on the model you can see comments “explaining” that scientists don’t need to unit test their code, that criticising them will just cause them to avoid peer review in future, and other entirely unacceptable positions. Eventually a modeller from the private sector gives them a reality check. In particular academics shouldn’t have to be convinced to open their code to scrutiny; it should be a mandatory part of grant funding.

The deeper question here is whether Imperial College administrators have any institutional awareness of how out of control this department has become, and whether they care. If not, why not? Does the title “Professor at Imperial” mean anything at all, or is the respect it currently garners just groupthink?

When a software model, such as a disease model, is used to set public policies that affect people’s lives, literally life or death, it should adhere to the standards for life-safety-critical software systems. There are such standards for medical equipment, nuclear power plant monitoring systems, and avionics, because those systems may put people’s lives at risk. A disease model has similar effects, and hacked-together models that adhere to no standards have no business being used to set life-safety-critical policies!

Another software engineer and I had an interaction with Gavin Schmidt of NASA regarding software quality assurance of their climate model or paleoclimate histories[1]. He noted they only had funding for 1/4 of a full-time-equivalent person to work on SQA; in other words, they had no SQA. Instead, their position was that the model’s output should be compared to other models’ output. That would be as if Microsoft, instead of testing, judged its software quality by comparing the output of MS Word to the output of another word processor: a sort of quality-via-proxy. Needless to say, this is not how SQA works.
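The unit testing the quoted commenters call for is not exotic. Here is a deliberately trivial, made-up illustration (a toy SIR step I wrote for this post, not Ferguson’s or NASA’s code) of the kind of invariant checks being asked for:

```python
def sir_step(s, i, r, beta, gamma, n):
    """Advance a discrete-time SIR model by one day."""
    new_infections = beta * s * i / n
    new_recoveries = gamma * i
    return (s - new_infections,
            i + new_infections - new_recoveries,
            r + new_recoveries)

# pytest-style checks of properties that must always hold.
def test_population_is_conserved():
    s, i, r = sir_step(990.0, 10.0, 0.0, beta=0.3, gamma=0.1, n=1000)
    assert abs((s + i + r) - 1000.0) < 1e-9

def test_no_spread_without_infectious_people():
    s, i, r = sir_step(1000.0, 0.0, 0.0, beta=0.3, gamma=0.1, n=1000)
    assert i == 0.0 and s == 1000.0
```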

Similarly, the climate model community always averages multiple runs from multiple models to create projections. They do this even when some of the model projections are clearly off the rails. Averaging many wrongs does not make a right.

[1] Note that NASA does open source their software, which enables more eyes to see the code, and I do not mean to pick on NASA or Schmidt here. They are doing what they can within their funding limitations. The point, however, is that SQA is frequently given short shrift in academic-like settings.

Is the pandemic nearly over with?

Read this paper by apparently well-qualified authors. It presents an argument that we are near the end of the pandemic, that this end has been reached naturally, and that most interventions have had little effect: How Likely is a Second wave?

(Note – published on a “skeptic” site by qualified authors.)

I have been tracking my state’s data since late March, drawing about 40 charts plus about a dozen separately calculated numbers. I have no expertise in health topics, but I can analyze data. I have been watching similar trends play out and have had similar thoughts and questions to those presented in the non-peer-reviewed paper above.
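For what it’s worth, the charts are nothing exotic. A sketch of the kind of thing I mean, assuming a hypothetical CSV export with date and new-case columns (matplotlib must be installed for plotting):

```python
import pandas as pd

# Hypothetical file layout: one row per day with "date" and "new_cases" columns,
# roughly what most state dashboards export.
df = pd.read_csv("state_daily_cases.csv", parse_dates=["date"])
df = df.sort_values("date").set_index("date")

# A 7-day rolling average smooths out day-of-week reporting artifacts.
df["cases_7day_avg"] = df["new_cases"].rolling(7).mean()

df[["new_cases", "cases_7day_avg"]].plot(title="Daily cases vs. 7-day average")
```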

Update: Here is an item from August that comes at the issue in a different way but has similar findings.  What should we think?

Wild fires: Is everything a single variable problem?

With wild fires out of control in west coast states, the new meme is to blame all of them on climate change. If only we controlled the climate, there would be no more fires.

A couple of weeks ago, Gov. Newsom of California acknowledged that there are multiple factors, but as fires worsened in California and Oregon, he switched to a full-throttle message blaming the “climate emergency”, specifically the “climate damn emergency”.

As if the fires are a single variable problem and there are no other factors.

Let’s assume the fires are due to climate change. What steps would you take to control the climate that would reduce fires in 12 months? Five years? Ten years? Twenty years? What steps would you have taken 4 years ago to have prevented the fires in 2020?

Realistically, there is no climate control knob you can adjust now to reduce the fire threat in the next several decades.

What you can control is how you choose to manage forests, cut fire breaks, use controlled burns, establish fire resistant building codes, limit future construction, and possibly remove ignition sources like overhead power lines through forest corridors. These are some of the additional variables that define fire threats. Yes, you can also work on climate change but those efforts will not have any impacts for decades.


Interesting comparison finds ICL disease model was worthless

Disease models have been a fiasco from day one:

We can refer to a natural experiment in Sweden for some clarity. Sweden’s government did not lock down the country’s economy, though it recommended that citizens practice social distancing and it banned gatherings of more than 50 people. Swedish epidemiologists took the Imperial College of London (ICL) model – the same model that predicted 2.2 million Covid-19 deaths for the United States – and applied it to Sweden. The model predicted that by July 1 Sweden would have suffered 96,000 deaths if it had done nothing, and 81,600 deaths with the policies that it did employ. In fact, by July 1, Sweden had suffered only 5,500 deaths. The ICL model overestimated Sweden’s Covid-19 deaths by a factor of nearly fifteen.

Source: The Covid-19 Catastrophe – AIER

ICL’s model was developed by Neil Ferguson, the architect of the UK’s lockdown. He resigned after he was found to have had a tryst with his married lover while socializing was prohibited under his own lockdown orders; he himself had tested positive for Covid-19 shortly before.

Models have been a mess, and experts have been deeply critical of the ICL model.

Most people likely won’t get a coronavirus vaccine until the middle of 2021

I have been expecting a general rollout in Q2 of 2021 and am still sticking with that. We will hear much positive news on vaccines from now well into October, and the first recipients outside the trial phases will receive vaccinations at some point during the last two months of the year (but with very limited distribution).

Most Americans likely won’t get immunized with a coronavirus vaccine until the middle of next year, U.S. officials and public health experts say, even as the federal government asks states to prepare to distribute a vaccine as soon as November.

Source: Most people likely won’t get a coronavirus vaccine until the middle of 2021

I think we will see “herd effects” occurring from this point onward, and especially by late this year. But I am an idiot with no health expertise, so my comments are for Entertainment Purposes Only.

Of interest, the annual wintertime influenza season in the southern hemisphere has been so mild as to be almost non-existent; that is great news. No one knows why, but some suggest it may be because of social isolation, hand washing, and their favorite superhero, face masks. Regarding the latter, the University of Washington published new disease model projections (theirs have been mostly worthless) that surprisingly imply face masks do not work (though they did not seem to notice they had said that!).

The Institute for Health Metrics and Evaluation at the University of Washington’s School of Medicine is predicting more than 410,000 deaths by January if mask usage stays at current rates. If governments continue relaxing social distancing requirements, that number could increase.

Source:  Coronavirus live updates: Model predicts 410K US deaths by January; Labor Day weekend brings risk; South Dakota stages state fair

That is more than a doubling of deaths in the U.S. over the next 3 1/2 months, versus the past 5 months. And this occurs while face masks are now mandatory throughout most of the country and in all of the highly populated areas. National surveys indicate face mask wearing is common and widespread in populated areas; Newsweek reports that 95% of Americans are wearing face masks as required.

But the UW IHME is saying we will see more than a doubling of deaths in the next 3 1/2 months, versus the past 5 months. They are saying, without realizing it, that face masks are not working at all. We have high compliance, yet the death toll, per their estimate, will more than double in just over half the time it took to accumulate the deaths so far.

This is a shocking finding. There are about 110 days between now and January, and the CDC reports 186,153 deaths as of today. To reach 410,000 would require an average of over 2,000 new deaths per day between now and January. As of today, the U.S. is averaging about 900 deaths per day.
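The back-of-the-envelope arithmetic, spelled out (the 110-day figure is my rough count from today to early January):

```python
deaths_to_date = 186_153     # CDC total as of this post
projected_total = 410_000    # UW IHME projection for January
days_remaining = 110         # roughly today through early January

additional_deaths = projected_total - deaths_to_date        # 223,847
required_per_day = additional_deaths / days_remaining       # ~2,035 per day
print(round(required_per_day))  # versus roughly 900 deaths per day today
```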

Official CDC chart as of 9/4/2020. To meet the UW IHME projection, the dropping death rate must not only reverse, but needs to roughly double almost immediately, to levels not seen since last spring. Does this make any sense?

To rise from 900 to 2,000 deaths per day would mean public health mitigation steps are not working. It would mean the U.S. reverting to the peak death period that occurred in the spring.

That is a stunning conclusion from the UW’s IHME and apparently they did not notice what they just said.

A LOT of experts have said the IHME’s random number generation program is worthless and this new projection seems to reinforce what other experts are saying. Disease modeling is 21st century astrology and just as reliable.

Update: When will we resume having public events? I am planning to attend a comic con event in early March 2021, but I am convinced it will be canceled. My guess is that vaccines will start rolling out in Q1 2021 but might not be available to the general population until Q2 2021. Then it will take months to administer the vaccine to millions of people.

Public health authoritarians will not reduce restrictions until some as yet unspecified metrics are achieved. For example, perhaps a positive Covid-19 test rate of 0.25% or something. Who knows what they will require?

This might happen in Q2, or maybe, as some of them have been saying in the media, we will face restrictions for the next 1 to 3 years. I doubt the public will agree to that. As the authors of the 2006 paper on public health mitigation note, pandemics end when herd effects take over, vaccines become available, the virus mutates to a less infectious or virulent form, or the public just gives up and gets on with life.