Category Archives: Social media

Google to remove YouTube apps from Roku

After a months-long fight between Roku and YouTube’s parent company Google, Google announced Thursday that it would no longer allow Roku customers to download the YouTube or YouTube TV apps to their devices starting Dec. 9. (Roku customers who already have YouTube or YouTube TV installed will still be able to use those apps normally.)

Source: Google to remove YouTube apps from Roku

Google/YouTube appear to be lying about what happened; there is a paper trail.

I am so old, I remember when Google’s motto was “Don’t be evil”. The good old days…

“Researchers show Facebook’s ad tools can target a single user”

A new research paper written by a team of academics and computer scientists from Spain and Austria has demonstrated that it’s possible to use Facebook’s targeting tools to deliver an ad exclusively to a single individual if you know enough about the interests Facebook’s platform assigns them.

Source: Researchers show Facebook’s ad tools can target a single user | TechCrunch

I have past posts on how the Facebook ad system has been used to discriminate based on race, sex and age.

Advertisers on Facebook have previously been caught targeting ads to specific ethnic and age groups. Job ads are frequently constructed in a way to avoid being displayed to anyone over, say, age 35, or to avoid being seen by persons in certain ethnic groups.

For example, I have degrees in computer science (B.S.), software engineering (M.S.) and business administration (M.B.A.), and previously worked in Silicon Valley and also for Microsoft. According to various surveys, these are among the most sought-after degrees in job listings. Yet I have never – not once! – seen an ad on Facebook for a job related to my background. The reason, of course, is that I am above the cutoff age for hiring in the tech sector.

Several companies, including UPS, Verizon and State Farm Insurance, were found to have used FB’s ad features to intentionally avoid showing job ads to workers over the age of 35. Similarly, ads for nurses were targeted exclusively at women (about 90% of RNs are female, a gender imbalance worse than the tech sector’s), while ads for police officers and truck drivers were targeted exclusively at men.

Similar tricks have been used to target ads at – or away from – certain ethnicities, and have been used to discriminate in job and housing ads.
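
To make the targeting mechanics concrete, here is a minimal sketch of how stacking interest filters shrinks an audience, in the spirit of the single-user targeting described in the research above. The users and interests are entirely hypothetical; this only illustrates the set-intersection idea, not Facebook’s actual ad API.

```python
# Minimal sketch of stacking interest filters to shrink an ad audience.
# The users and their interests are entirely hypothetical; this illustrates
# the set-intersection idea only, not Facebook's actual ad API.

users = {
    "alice": {"hiking", "jazz", "retro computing", "birdwatching"},
    "bob":   {"hiking", "jazz", "football"},
    "carol": {"jazz", "retro computing"},
    "dave":  {"hiking", "birdwatching"},
}

def audience(required_interests):
    """Return the users whose assigned interests include every filter."""
    return {name for name, interests in users.items()
            if required_interests <= interests}

print(audience({"hiking"}))                             # 3 users match
print(audience({"hiking", "jazz"}))                     # 2 users match
print(audience({"hiking", "jazz", "retro computing"}))  # only 'alice' matches
```

Stack enough distinctive interests and the matching set can shrink to exactly one person, which is what the researchers demonstrated.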

Should “The algorithm” be regulated by the government?

Some think the government should regulate “The algorithm”:

Whistleblower Frances Haugen says the software that decides what we see in our social feeds is hurting us all. But reforming it won’t be easy.

Source: Lawmakers’ latest idea to fix Facebook: Regulate the algorithm

Another option, perhaps better than regulation: give users the tools to tweak the algorithm as it applies to their own viewing habits.

Currently, FB shows us what FB thinks we want to see, which is not necessarily even close to correct. Many of us have noticed that posts from some of our “friends” have literally disappeared – since we had not interacted in some time, FB decided we were not interested and no longer shows us their updates.

Why not make the algorithm “end user configurable”? Give us options to choose what we permit to appear in our news feed.
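
For the sake of concreteness, a user-configurable feed might look something like the sketch below. The preference names, weights and post fields are all hypothetical; the point is only that the knobs could sit with the user rather than with the platform.

```python
# Rough sketch of an "end user configurable" news feed ranker.
# The preference names, weights, and post fields are hypothetical; the point
# is that the knobs sit with the user rather than with the platform.

DEFAULT_PREFS = {
    "show_all_friends": True,    # never hide a friend just for low interaction
    "weight_recency": 1.0,
    "weight_engagement": 0.2,    # likes/comments/shares count for less
    "weight_close_friends": 2.0,
}

def rank_feed(posts, prefs=DEFAULT_PREFS):
    """Order posts by a score the user controls, not the platform."""
    def score(post):
        s = prefs["weight_recency"] * post["recency"]
        s += prefs["weight_engagement"] * post["engagement"]
        if post["from_close_friend"]:
            s += prefs["weight_close_friends"]
        return s

    visible = posts if prefs["show_all_friends"] else [
        p for p in posts if p["recent_interaction"]
    ]
    return sorted(visible, key=score, reverse=True)
```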

Oops: All your Twitch belongs to us

An anonymous hacker claims to have leaked the entirety of Twitch, including its source code and user payout information.

The user posted a 125GB torrent link to 4chan on Wednesday, stating that the leak was intended to “foster more disruption and competition in the online video streaming space” because “their community is a disgusting toxic cesspool”.

Source: The entirety of Twitch has reportedly been leaked | VGC

Facebook’s algorithms promote posts featuring “outrage” and “sensationalism”, leading to more outrage and divisiveness

Facebook curates your “news feed”. FB promotes posts that have likes, comments or shares, or which are from people you have previously interacted with. The posts from friends with whom you have not interacted much gradually vanish.

Thus, FB’s curated news feed hides much from you while promoting items that, as a side effect of the algorithm, are likely to be sensational or outrage-inducing, which in turn drives more interaction.
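
A toy example of why that side effect falls out of the math: score posts purely by engagement counts (the weights below are made up, not Facebook’s actual scoring) and the most reaction-provoking post wins by a wide margin.

```python
# Toy illustration of engagement-weighted curation. The weights are made up;
# the point is that any score dominated by likes/comments/shares will, as a
# side effect, float the most reaction-provoking posts to the top.

posts = [
    {"author": "old friend", "text": "vacation photos",  "likes": 4,  "comments": 1,   "shares": 0},
    {"author": "news page",  "text": "outrage headline", "likes": 90, "comments": 250, "shares": 400},
]

def engagement_score(post, w_like=1, w_comment=3, w_share=5):
    return w_like * post["likes"] + w_comment * post["comments"] + w_share * post["shares"]

for post in sorted(posts, key=engagement_score, reverse=True):
    print(engagement_score(post), post["author"], "-", post["text"])

# The outrage headline wins by orders of magnitude, so the quiet post from the
# old friend keeps sliding down the feed until it effectively disappears.
```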

Company researchers discovered that publishers and political parties were reorienting their posts toward outrage and sensationalism. That tactic produced high levels of comments and reactions that translated into success on Facebook.

They concluded that the new algorithm’s heavy weighting of reshared material in its News Feed made the angry voices louder. “Misinformation, toxicity, and violent content are inordinately prevalent among reshares,” researchers noted in internal memos.

Source: Facebook Tried to Make Its Platform a Healthier Place. It Got Angrier Instead. – WSJ

Many groups – such as political groups and those operating as propaganda arms to persuade others to adopt their position – figured this out and intentionally posted more provocative content in order to get their posts seen more widely.

The effect of Facebook and other social media “curation” is to promote a culture of outrage that feeds upon itself, creating still more outrage and divisiveness. This in turn leads to more and more angry people, and eventually spills over into real-life confrontations and protests. Facebook has publicly denied this is their fault, while their internal documents show they’ve known about it for a long time.


“Credentialism” is an old argumentative form

Credentialism is a variant of the Appeal to Authority argument. This is considered one of the worst forms of argument – but it is used because it is effective.

Regarding Covid-19, this form is used to shut down those who raise other perspectives and ask valid but perhaps difficult questions.

Rather than address the issues or answer the questions, the response is to say “I have a credential and you don’t, therefore you are wrong“. That is a logically invalid argument, but the Appeal to Authority form is effective at persuading many.

Bertrand Russell said a fact is true regardless of who says it. He had been awarded a Nobel prize (see what I did there?).

I would not imply this is a tactic used only by the Left; however, the Left does seem to favor rule by technocratic elite, so there is that.

Scapegoats: “Biden blasts social media after Facebook” misinformation

The public is disappointed with the government’s response to Covid-19; thus, the government must find a target to blame.

White House disappointed with social networks’ handling of lies, misinformation.

Source: Biden blasts social media after Facebook stonewalls admin over vaccine misinformation | Ars Technica

To be clear, social media is a frictionless platform for the spread of propaganda messages. The business model is based on collecting dossiers on everyone for the purpose of “advertising”, which is a subset of propaganda messaging. The purpose of propaganda is to persuade a target to adopt someone else’s agenda. For 7 years I have run a separate blog on the topic of propaganda (and also privacy issues) at SocialPanic.org.

These accusations are mostly blame-shifting – from inept public health messaging (the government) to someone else.

YouTube’s AI-based video recommendations skew, obviously, to what results in more views

The goal of YouTube, obviously, is to increase the percentage of time you spend watching YouTube, along with its ads every few minutes:

New research published today by Mozilla backs that notion up, suggesting YouTube’s AI continues to puff up piles of ‘bottom-feeding’/low grade/divisive/disinforming content — stuff that tries to grab eyeballs by triggering people’s sense of outrage, sewing division/polarization or spreading baseless/harmful disinformation — which in turn implies that YouTube’s problem with recommending terrible stuff is indeed systemic; a side-effect of the platform’s rapacious appetite to harvest views to serve ads.

Source: YouTube’s recommender AI still a horrorshow, finds major crowdsourced study | TechCrunch

Machine learning-based recommendation systems are constantly seeking patterns and associations – person X has watched several videos of type Y, therefore, we should recommend more videos that are similar to Y.

But YouTube defines “similar” in a broad way, which may result in your viewing barely related videos that encourage outrage/conspiracy theories or what they term “disinformation”. Much of that depends, of course, on how you define “disinformation” – the writer of the article, for example, thinks it is a mistake to recommend a video on “gun rights” to a user who watched a video on “software rights”, and implies (in most of the examples given) that this biases recommendations toward right-leaning topics.

News reports also highlighted the recommender inappropriately steering viewers toward “sexualized” content, but that was a small part of the recommendations. This too might happen based on the long-standing marketing maxim that “sex sells”.

What seems more likely is that the algorithms identify patterns – even weak associations – and use them to make recommendations. In a way, user behavior drives the pattern matching that ultimately leads to the recommendations. The goal of the algorithm (think of it like an Excel Solver goal) is to maximize viewing minutes.
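
As a minimal sketch of that kind of pattern matching: score candidates by overlap with what the viewer already watched, weight by expected watch time, and recommend whatever maximizes the expected minutes. All tags, titles and numbers below are hypothetical, not YouTube’s actual system.

```python
# Minimal sketch of "pattern matching" recommendations: score candidates by
# overlap with what the viewer already watched, weighted by expected watch
# time. All tags, titles and numbers are hypothetical, not YouTube's system.

watched_tags = {"software", "rights", "open source"}

candidates = [
    {"title": "Free software licensing explained", "tags": {"software", "rights"}, "avg_minutes": 9},
    {"title": "Gun rights debate",                 "tags": {"rights", "politics"}, "avg_minutes": 22},
    {"title": "Gardening basics",                  "tags": {"gardening"},          "avg_minutes": 7},
]

def score(video):
    overlap = len(watched_tags & video["tags"])  # how "similar" it looks
    return overlap * video["avg_minutes"]        # weighted by expected minutes

print(max(candidates, key=score)["title"])
# A single shared tag ("rights") plus a long expected watch time is enough to
# push the barely related video to the top: weak associations win when the
# objective is viewing minutes, not topical relevance.
```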

Ultimately, what the Mozilla research actually finds is that the recommendations are not very good – they gave many people recommendations that they regretted watching.

Yet the researchers and the TechCrunch writer spin this into an evil conspiracy of algorithms forcing you to watch “right wing” disinformation. The reality seems far less nefarious: it is just pattern matching on what people watch. Their suggestion of turning off these content matches is itself nefarious – they want YouTube to forcefully control what you see for political purposes, not simply to increase viewing minutes.

Which is more evil? Controlling what you see for political purposes or controlling what you see to maximize viewing minutes?

Instagram no longer intended for photo sharing

Facebook’s head of Instagram said the service plans to start showing users full-screen, recommended videos in their feeds.

“We’re no longer a photo-sharing app or a square photo-sharing app,” Head of Instagram Adam Mosseri said.

Mosseri specifically highlighted TikTok as well as YouTube as serious competitors and reasons for these changes.

Source: Facebook tests changes to Instagram, would make it more like TikTok

  • They plan to force videos to display full screen.
  • They intend to force the display of videos from people you do not follow.
  • They intend to make it a video-focused social media service.
  • Instagram is looking at adding paid, subscription-only features.
  • IG will add more branded content and more monetization features, including “merch” sales and possibly voluntary, Patreon-like payments to creators.

Also, will it be possible to upload videos taken with anything other than a smartphone?