Agree with this – but it is even worse than what they describe:
But for all the consternation over the potential for humans to be replaced by machines in formats like poetry and sitcom scripts, a far greater threat looms: artificial intelligence replacing humans in the democratic processes — not through voting, but through lobbying.
ChatGPT could automatically compose comments submitted in regulatory processes. It could write letters to the editor for publication in local newspapers. It could comment on news articles, blog entries and social media posts millions of times every day. It could mimic the work that the Russian Internet Research Agency did in its attempt to influence our 2016 elections, but without the agency’s reported multimillion-dollar budget and hundreds of employees.
Source: Opinion | How ChatGPT Hijacks Democracy – The New York Times
You may have noticed on social media that the "appeal to authority" argument is common: something is true, the argument goes, because some "authority" says so.
Except, as Bertrand Russell observed, a claim is true (or false) regardless of who says it. An argument stands on facts and logic alone, not on the identity of the speaker. Regardless, the appeal to authority works very well for propaganda messaging.
During Covid, we saw governments and others suppress speech on social media that disagreed with their views – often on subjects with no clear right or wrong answer.
Imagine a world where people turn to The AI for answers. Imagine a question such as:
“Does the Covid vaccine prevent infection?”
A year ago, you could be censored on social media for asserting that the Covid vaccines do not prevent infection. Today we know that assertion was, in fact, correct: the vaccines do not prevent infection or transmission – they reduce the risk of severe disease.
In that example, the government acted as a censor by proxy, working hand in hand with Twitter, Facebook and YouTube.
Imagine a world where people argue their points by citing "The AI says so, so it must be true!"
The AI becomes the authority, and its ease of access will quickly turn "The AI says so" into the default appeal-to-authority argument used to stifle unpopular perspectives.
Imagine a world where the people who control The AI choose what is true and what is false – that is where we are heading: a world controlled by those who control the dialogue of The AI.
If you thought "misinformation" on social media was a problem, you should be terrified of where The AI is headed – probably within a year.