Colleges are racing to sign deals with “online proctor” companies that watch students through their webcams while they take exams. But education advocates say the surveillance software forces students to choose between privacy and their grades.
Reports to the FAA of “drone sightings” – used by Congress and the FAA to push draconian remote identification rules and mandated national surveillance networks built on drones, with the goal of pricing drone flying out of the public’s reach – were based on bad data and media hysterics, much of it false reporting.
- Remember the Aeromexico flight in late 2018 that had a collapsed nose cone? The media blamed that on a drone. Six months later the official investigation found it was due to a maintenance defect on the nose cone.
- Remember the Gatwick Airport fiasco? The only confirmed drone sightings were of the fleet of surveillance drones operated by the Sussex Police over the airport.
- Remember the temporary Newark Airport closure due to a “drone sighting”? That drone report was from 20 miles away from the airport and may not have even been a drone at all.
Take a look at this – drone sightings have magically disappeared: Drone Sightings: The Actual Non-Hyped Numbers Analyzed (Graphs, Trends, etc.)
When the FAA isn’t stealing YouTube content, it seems to have been busy making up fake drone reports to justify a remote ID proposal that would mandate all drones be connected to the Internet cloud, in real time, as part of a massive national surveillance program – collecting imagery and telemetry and potentially sending it to China. Brilliant. Not like any drones would do something like that.
The FAA’s primary goal is to make hobby flying of radio control model aircraft so expensive and cumbersome as to eliminate it entirely. The reason is to clear the low altitude airspace for AmazonGoogleUPS delivery drones. The FAA asserts that it and it alone owns the airspace in your front and backyards from the ground up. Literally, the airspace below your head when you stand outside is controlled by the FAA and they intend to use it for corporate delivery and surveillance networks. (See my comments to see how that works.)
Rite Aid claims it has turned the technology off due to “industry conversation” about such technology. The tech is kinda useless when everyone is required to wear an airway restriction device over their face anyway :)
In the hearts of New York and metro Los Angeles, Rite Aid deployed the technology in largely lower-income, non-white neighborhoods, according to a Reuters analysis. And for more than a year, the retailer used state-of-the-art facial recognition technology from a company with links to China and its authoritarian government.
With 30% agreeing to install such apps today, and both parties to a contact needing the app installed, just 0.30 × 0.30, or 9%, of potential contacts could be detected.
The apps have a host of real problems:
- insufficient users to be useful. At 50% adoption, we can detect only 25% of potential contacts.
- unreliable signal-strength-based (RSSI) distance estimation, which fails in radio multipath situations
- unable to detect when a barrier separates contacts. You sit outside at a Starbucks while someone sits inside at a table. The inside person later tests positive for Covid-19. You receive a notification, but the app gives you no indication of where or when the contact occurred – so you go into quarantine for 14 days, delivering no benefit to anyone. This error can occur in buildings (through walls) or even between cars stopped at traffic signals or in heavy traffic.
- unable to detect “across time” contacts. A person sits on a bus, coughs, gets up and exits; a new passenger sits in the coughed-on seat. A person sits at a Starbucks table, coughs, gets up and leaves; the next person sits at the contaminated table. These parties are never in Bluetooth contact, so the apps will miss these contacts.
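The adoption arithmetic behind the first problem (30% → 9%, 50% → 25%) follows from both parties to a contact needing the app installed. A minimal sketch (the function name is mine, and it assumes adoption is independent across the two people):

```python
def contact_detection_rate(adoption: float) -> float:
    """Fraction of contacts a voluntary app can detect: both people
    in a contact must have it installed, so the rate is adoption
    squared (assuming independent, uniform adoption)."""
    return adoption ** 2

# The figures used in the text above:
for pct in (0.30, 0.50):
    print(f"{pct:.0%} adoption -> {contact_detection_rate(pct):.0%} of contacts detectable")
```

The quadratic falloff is why partial adoption is so damaging: halving adoption quarters the detectable contacts.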
Bluetooth-based apps are not going to be effective. Singapore pulled the plug on its app due to insufficient users. The UK has been testing a Bluetooth app that was to have rolled out nationwide in mid-May; it is still in testing, public information about it has gone silent, and it has not been rolled out. Norway’s app used both Bluetooth and GPS data with a central cloud database; it has just been ruled in violation of privacy laws and has been pulled. Public health enthusiasts thought it was okay to violate privacy laws, because laws do not matter to public health enthusiasts.
- I do not plan to install a tracing app on my phone.
- I do plan to be vaccinated as soon as vaccines are available.
- I was sick with Covid-19-like symptoms during almost all of March. Antigen tests were not available to normal people, only to those already hospitalized with pneumonia and to the elite (like the Governor and her husband). My doctor suggested getting an antibody test (end of May), but I declined: the accuracy is not sufficient (when real-world incidence is very low, false positives will exceed true positives), and knowing whether I was sick is not, at this time, actionable information.
One of the first national coronavirus contacts tracing apps to be launched in Europe is being suspended in Norway after the country’s data protection authority raised concerns that the software, called ‘Smittestopp’, poses a disproportionate threat to user privacy — including by continuously uploading people’s location.
It had been downloaded by 16% of the population over the age of 16. That means it could detect 0.16 × 0.16, or about 2.5%, of potential contacts. Their app apparently relied on centrally stored location data plus the ineffective Bluetooth RSSI method of detecting potential contacts.
It appears that public health enthusiasts used the “laws don’t matter in a pandemic” excuse to justify violating EU privacy laws.
Two years ago, Europe introduced the world’s toughest data privacy legislation, putting on notice the tech giants of the world who’d grown fat off your personal data. The General Data Protection Regulation, widely known as the GDPR, is a far-reaching law designed to uphold the right to privacy for Europe’s citizens. It promises to issue bigger fines for data protection violations than have ever been seen before: 20 million euros, or up to 4% of a company’s annual worldwide revenue from the preceding financial year, whichever’s greater.
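The fine ceiling quoted above is a simple maximum of two quantities. A hedged sketch (the function name is mine, not from any official source):

```python
def gdpr_fine_cap_eur(annual_worldwide_revenue_eur: float) -> float:
    """Upper tier of GDPR administrative fines (Art. 83(5)):
    EUR 20 million, or 4% of the preceding financial year's
    worldwide annual revenue, whichever is greater."""
    return max(20_000_000.0, 0.04 * annual_worldwide_revenue_eur)

print(gdpr_fine_cap_eur(100_000_000))     # smaller firm: flat EUR 20M floor applies
print(gdpr_fine_cap_eur(50_000_000_000))  # large firm: 4% of revenue dominates
```

The flat floor means even small companies face a ceiling of EUR 20 million; for the tech giants, the 4%-of-revenue term is what bites.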
When people mention “Covid tracking apps”, it would be useful to first define what is meant by “Covid tracking app”. There are many approaches in use and many more proposed, and the methods are remarkably different. When you hear that “Country X used a tracking app and has fewer cases”, this does not mean they used a tracking app like the one you have in mind.
Most apps use location data provided by the cellular network itself, or GPS/Wi-Fi position fixes stored on the phone and shared directly with public health authorities. Some use the data for contact tracing, coupled with free Covid-19 testing; others use location data to enforce strict geo-fenced quarantine procedures that, if violated, may result in arrest and imprisonment. Few existing apps use close-contact tracing based on Bluetooth.
Contact tracing apps, by themselves, appear to provide little value. As we will see, to be useful they need supporting infrastructure outside the app – such as Korea offering Covid-19 testing to those in close contact. And the app must be installed by nearly all smartphone users (and even then it will miss the roughly 15% of phones that are not smartphones). Most countries are not using phone-based apps to track location – they are using the phone network itself to report locations for 100% of phones in use, which is very different from voluntary installation of a tracking app.
Consequently, when you hear someone refer to “contact tracing app”, you need to ask them to define what they mean by “contact tracing app”.
What follows is a review of various “contact tracing” apps used in different countries.
Experts believe that at least half the population needs to use contact-tracing apps for them to work. The challenge will be convincing the public to opt in after years of trust issues with big tech.
Picsix’s tool creates a fake cell tower that can fool a target’s phone into transmitting data to it. The device cannot read encrypted data, but instead tries a different tactic to get private information: making encrypted apps glitchy or even totally unusable. It’s a subtle but strong way to push a frustrated target away from a private app and toward a non-encrypted service that can easily be intercepted and eavesdropped on. The encryption itself is never broken—it is simply rendered useless.
Facebook and the like need to craft a professional code of ethics for the technology industry.
Where this is headed, naturally, is the concept of licensed professional engineers (P.E.) in software engineering. A professional engineering licensing exam for software engineering was developed many years ago. I believe Texas was the only state to offer the exam; however, due to low participation, it discontinued the software engineering P.E. exam as of April 2019.