Category Archives: Software Quality

Criminals encrypted 1 million devices in $70 million ransomware attack

When these attacks become this large, at some point they will be seen as a declaration of war, no different from bombing a country’s infrastructure:

The hacker gang behind an international crime spree that played out over the Fourth of July weekend say they’ve locked more than a million individual devices and are demanding $70 million in bitcoin to set them all free in one swoop.

The gang, the Russia-connected REvil, is best known for previously hacking JBS, one of the world’s largest meat suppliers, and briefly halting its operations across much of North America. But this attack’s potential scope is unprecedented, according to some cybersecurity experts.

Source: Hackers behind holiday crime spree demand $70 million, say they locked 1 million devices

More details on Google’s forced install of Covid-tracking app on phones in Massachusetts

Massachusetts launched a COVID tracking app, and uh, it was automatically installed?!

Source: Even creepier COVID tracking: Google silently pushed app to users’ phones | Ars Technica

Not only was it automatically installed, it does not appear among the apps listed on your device. You can only see it if you go to the Google Play store, look up the app, and it shows as already installed on your device. You cannot uninstall the tracking app.

Update – yeah, it was stupid, creepy and tone deaf:

Elsewhere in the United States, uptake levels of contact-tracing apps have been “incredibly low,” said Sarah Kreps, director of the Cornell Tech Policy Lab, which studies the politics of emerging technologies. She called the launch of MassNotify at this stage in the pandemic “somewhat baffling.”

“It seems to show a lack of understanding about public behavior with respect to these apps, which is that people are more likely to use them if they think that this pandemic is still going on,” Kreps said.

Will information systems be able to collect and track the vaccination records for hundreds of millions?

Doubt it, based on past experience:

But Hannan says the US immunization data collection systems at the state level are prepared to manage the expected crush of information from an unprecedented mass vaccination campaign like the one the country is about to start. Most tools were in place even before the pandemic hit.

Source: Multidose COVID-19 vaccines will test state tracking systems – The Verge

My state is famous, albeit for the wrong reasons, for its Cover Oregon ACA health exchange. After spending $450 million on the agency, it never enrolled a single person and was shut down as a failed information system.

There is a long history of failed information systems. Will they be able to successfully track millions of vaccinations? We should all be worried about this as it could create another vaccine distribution bottleneck. The organization responsible for the Cover Oregon mess is also now responsible for the vaccination tracking. What could go wrong?


Software: The greatest software error in history?

In The Price of Panic, a detailed, albeit dated, overview of the world’s Covid-19 response, the authors note that the use of Neil Ferguson’s ICL Covid simulator software – and its rather outrageous errors – could be classified as the most expensive software error in history.

Ferguson’s model predicted massive numbers of deaths from Covid-19 including 2.2 million in the U.S. through spring of 2020. In his 20 years as a disease modeler, each of his predictions has been off by orders of magnitude.

Last March, his model output was the basis for adopting lockdown measures in the UK, the US and other countries. Yet the model was completely wrong.


Public Health England lost about 16,000 Covid-19 positive tests due to not understanding Excel spreadsheet limitations

The BBC has confirmed the missing Covid-19 test data was caused by the ill-thought-out use of Microsoft’s Excel software. Furthermore, PHE was to blame, rather than a third-party contractor.

Source: Covid: Test error ‘should never have happened’ – Hancock – BBC News

Could have happened to anyone. What’s 16,000 lost positive test results among friends, anyway?
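The underlying failure mode is trivial to reproduce. Here is a toy sketch – not PHE’s actual pipeline, and `import_results` is a hypothetical function – showing how a legacy worksheet cap silently discards everything past the limit:

```python
# Toy illustration of the PHE failure mode. The 65,536-row limit is a
# real hard cap of the legacy BIFF .xls format; any pipeline that pours
# results into one such sheet and ignores the overflow loses data silently.
XLS_MAX_ROWS = 65_536  # maximum rows per worksheet in legacy .xls

def import_results(records):
    """Keep only what fits in one legacy worksheet; return (kept, lost)."""
    kept = records[:XLS_MAX_ROWS]
    lost = len(records) - len(kept)
    return kept, lost

# Hypothetical batch: 16,000 more results than the sheet can hold.
incoming = [f"positive-test-{i}" for i in range(XLS_MAX_ROWS + 16_000)]
kept, lost = import_results(incoming)
print(lost)  # 16000 results silently missing, with no error raised
```

The fix PHE eventually applied was equally mundane: use a file format (or a database) without the row cap, and fail loudly when input exceeds capacity.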

Public health says it has no data on what works and what does not work

Oregon has released its periodic modeling update report.

Shockingly, they say they have no data on what measures work or do not work, or whether or not anyone is adhering to them. They have no information on whether some measures work better or worse than others. They have no data on any measures at all.


It’s worse than we thought: “Second Analysis of Ferguson’s Model”

In the past I had some comments on Neil Ferguson’s disease model and have repeatedly noted its poor quality. This model was used, last spring, as the basis for setting government policies to respond to Covid-19. Like many disease models, its output was garbage, unfit for any purpose.

The following item notes that the revision history since last spring is available and shows that ICL has not been truthful about the changes made to the original model code.

Source: Second Analysis of Ferguson’s Model – Lockdown Sceptics

THIS! Many academic models, including disease models and climate models, average the outputs from multiple runs, somehow imagining that this produces a reliable projection – no, it does not work that way.

An average of wrong is wrong. There appears to be a seriously concerning issue with how British universities are teaching programming to scientists. Some of them seem to think hardware-triggered variations don’t matter if you average the outputs (they apparently call this an “ensemble model”).

Averaging samples to eliminate random noise works only if the noise is actually random. The mishmash of iteratively accumulated floating point uncertainty, uninitialised reads, broken shuffles, broken random number generators and other issues in this model may yield unexpected output changes but they are not truly random deviations, so they can’t just be averaged out.
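The distinction is easy to demonstrate. A sketch (hypothetical numbers, assuming NumPy): averaging many runs does drive zero-mean random noise toward the true value, but a systematic error – the kind produced by an initialisation bug or accumulated floating-point skew that pushes every run the same way – survives the average intact:

```python
import numpy as np

rng = np.random.default_rng(0)
true_value = 100.0

# Case 1: zero-mean random noise. Averaging 10,000 runs converges
# on the true value, as the ensemble-averaging argument assumes.
random_runs = true_value + rng.normal(0.0, 5.0, size=10_000)

# Case 2: the same noise plus a systematic +30 bias (a stand-in for
# a bug that skews every run identically). The average converges
# just as tightly -- on the wrong answer.
biased_runs = true_value + 30.0 + rng.normal(0.0, 5.0, size=10_000)

print(round(random_runs.mean(), 1))  # close to 100: noise averaged out
print(round(biased_runs.mean(), 1))  # close to 130: the bias remains
```

Averaging narrows the spread either way; it tells you nothing about whether the centre of that spread is correct.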

Software quality assurance is often missing in academic projects that are used for public policy:

For standards to improve academics must lose the mentality that the rules don’t apply to them. In a formal petition to ICL to retract papers based on the model you can see comments “explaining” that scientists don’t need to unit test their code, that criticising them will just cause them to avoid peer review in future, and other entirely unacceptable positions. Eventually a modeller from the private sector gives them a reality check. In particular academics shouldn’t have to be convinced to open their code to scrutiny; it should be a mandatory part of grant funding.

The deeper question here is whether Imperial College administrators have any institutional awareness of how out of control this department has become, and whether they care. If not, why not? Does the title “Professor at Imperial” mean anything at all, or is the respect it currently garners just groupthink?
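For what it’s worth, the kind of unit test being dismissed in those comments is cheap to write. A minimal sketch – `toy_model` is a hypothetical stand-in, not ICL’s code – checking the determinism property critics said the Ferguson model lacked:

```python
import random

def toy_model(seed: int, steps: int = 1000) -> float:
    """Hypothetical stand-in for a stochastic epidemic simulation."""
    rng = random.Random(seed)  # all randomness flows from one seed
    infected = 1.0
    for _ in range(steps):
        infected *= 1.0 + rng.uniform(-0.01, 0.03)
    return infected

def test_same_seed_same_result():
    # Determinism: identical seeds must give bit-identical output.
    assert toy_model(42) == toy_model(42)

def test_different_seeds_differ():
    # Sanity check: the model actually depends on its seed.
    assert toy_model(1) != toy_model(2)

test_same_seed_same_result()
test_different_seeds_differ()
```

A handful of tests like these cost minutes to write and would have caught a simulation whose output changes from run to run with the same inputs.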

When a software model – such as a disease model – is used to set public policies that impact people’s lives – literally life or death – these models should adhere to standards for life-safety critical software systems. There are standards for, say, medical equipment, or nuclear power plant monitoring systems, or avionics – because they may put people’s lives at risk. A disease model has similar effects – and hacked models that adhere to no standards have no business being used to establish life safety critical policies!

I and another software engineer had an interaction with Gavin Schmidt of NASA regarding software quality assurance of their climate model or paleoclimate histories[1]. He noted they only had funding for 1/4 of a full-time-equivalent person to work on SQA – in other words, they had no SQA. Instead, their position was that the model’s output should be compared to that of other models. This would be like Microsoft, instead of testing, judging its software quality by comparing the output of MS Word to the output of another word processor – a sort of quality-via-proxy approach. Needless to say, this is not how SQA works.

Similarly, the climate model community always averages multiple runs from multiple models to create projections. They do this even when some of the model projections are clearly off the rails. Averaging many wrongs does not make a right.

[1] Note that NASA does open source their software, which enables more eyes to see the code, and I do not mean to pick on NASA or Schmidt here. They are doing what they can within their funding limitations. The point, however, is that SQA is frequently given short shrift in academic-like settings.

What if you could be convicted with secret evidence you cannot see nor contest?

All defendants have a right to review the evidence against them. When a software application produces a conclusion used as evidence, the software source code must be reviewable by the defense.

The government argues it can use secret software against a defendant – software that may very well be defective (think of Neil Ferguson’s secret disease modeling code at Imperial College London, which ignores all modern software engineering practices).

Can secret software be used to generate key evidence against a criminal defendant?

Source: EFF and ACLU Tell Federal Court that Forensic Software Source Code Must Be Disclosed | Electronic Frontier Foundation