Category Archives: Social media

Scapegoats: “Biden blasts social media after Facebook” misinformation

The public, disappointed with the government’s response to Covid-19, must find a target to blame.

White House disappointed with social networks’ handling of lies, misinformation.

Source: Biden blasts social media after Facebook stonewalls admin over vaccine misinformation | Ars Technica

To be clear, social media is a frictionless platform for the spread of propaganda messaging. The business model is based on collecting dossiers on everyone for the purpose of “advertising,” which is a subset of propaganda messaging. The purpose of propaganda is to persuade a target to adopt someone else’s agenda. For seven years I have run a separate blog on the topic of propaganda (and also privacy issues) at SocialPanic.org.

These accusations are mostly blame-shifting – from the government’s inept public health messaging to someone else.

YouTube’s AI-based video recommendations skew, obviously, to what results in more views

The goal of YouTube, obviously, is to increase the percentage of time you spend watching YouTube – and its ads every few minutes:

New research published today by Mozilla backs that notion up, suggesting YouTube’s AI continues to puff up piles of ‘bottom-feeding’/low grade/divisive/disinforming content — stuff that tries to grab eyeballs by triggering people’s sense of outrage, sewing division/polarization or spreading baseless/harmful disinformation — which in turn implies that YouTube’s problem with recommending terrible stuff is indeed systemic; a side-effect of the platform’s rapacious appetite to harvest views to serve ads.

Source: YouTube’s recommender AI still a horrorshow, finds major crowdsourced study | TechCrunch

Machine learning-based recommendation systems constantly seek patterns and associations – person X has watched several videos of type Y; therefore, we should recommend more videos similar to Y.

But YouTube defines “similar” broadly, which may result in you viewing barely related videos that encourage outrage, conspiracy theories, or what they term “disinformation.” Much of that depends, of course, on how you define “disinformation” – the writer of the article, for example, thinks that when a user watches a video on “software rights” it is a mistake to then recommend a video on “gun rights,” and implies (in most examples given) that this has biased recommendations toward right-leaning topics.

News reports also highlighted recommendations inappropriately steering viewers to “sexualized” content, but that was a small part of the recommendations. This too might happen based on the long-standing marketing maxim that “sex sells.”

What seems more likely is that the algorithms identify patterns – even weak associations – and use them to make recommendations. In a way, user behavior drives the pattern matching that ultimately produces the recommendations. The goal of the algorithm (think of it like an Excel Solver objective) is to maximize viewing minutes.
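The “people who watched X also watched Y” pattern matching described above can be sketched in a few lines. This is a hypothetical, minimal co-watch counter for illustration – all the data and names are invented, and YouTube’s real system is a vastly larger neural-network pipeline, not this:

```python
# Minimal sketch of "people who watched X also watched Y" recommendations.
# Data and video ids are invented for illustration only.
from collections import defaultdict

# watch_history[user] = set of video ids that user has watched
watch_history = {
    "alice": {"sailing-101", "knot-tying", "boat-repair"},
    "bob":   {"sailing-101", "boat-repair", "storm-footage"},
    "carol": {"knot-tying", "storm-footage"},
}

def co_watch_counts(history):
    """Count how often each pair of videos is watched by the same user."""
    counts = defaultdict(int)
    for videos in history.values():
        vs = sorted(videos)
        for i in range(len(vs)):
            for j in range(i + 1, len(vs)):
                counts[(vs[i], vs[j])] += 1
    return counts

def recommend(user, history, top_n=3):
    """Recommend unseen videos most often co-watched with the user's videos."""
    counts = co_watch_counts(history)
    seen = history[user]
    scores = defaultdict(int)
    for (a, b), c in counts.items():
        if a in seen and b not in seen:
            scores[b] += c
        elif b in seen and a not in seen:
            scores[a] += c
    # Highest co-watch score first: this is the "maximize viewing" objective
    # in its crudest form - surface whatever similar viewers kept watching.
    return sorted(scores, key=scores.get, reverse=True)[:top_n]

print(recommend("carol", watch_history))
```

Note how nothing in the sketch knows what the videos are about; if outrage-driven videos happen to be heavily co-watched, they get recommended for exactly the same mechanical reason as anything else.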

Ultimately, what the Mozilla research actually finds is that the recommendations are not that good – they gave many people recommendations that they regretted watching.

Yet the researchers and the TechCrunch writer spin this into an evil conspiracy of algorithms forcing you to watch “right wing” disinformation. The reality seems far less nefarious: it’s just pattern matching on what people watch. Their suggestion of turning off these content matches is itself nefarious – they want YouTube to forcefully control what you see for political purposes, not simply to increase viewing minutes.

Which is more evil? Controlling what you see for political purposes or controlling what you see to maximize viewing minutes?

Instagram no longer intended for photo sharing

Facebook’s head of Instagram said the service plans to start showing users full-screen, recommended videos in their feeds.

“We’re no longer a photo-sharing app or a square photo-sharing app,” Head of Instagram Adam Mosseri said.

Mosseri specifically highlighted TikTok as well as YouTube as serious competitors and reasons for these changes.

Source: Facebook tests changes to Instagram, would make it more like TikTok

  • They plan to force videos to display full screen.
  • They intend to force the display of videos from people you do not follow.
  • They intend to make it a video-focused social media service.
  • Instagram is looking at adding paid subscription-only features.
  • IG will add more branded content and more monetization features including “merch” sales and possibly voluntary Patreon-like payment features to creators.

Also, will it be possible to upload videos taken with anything other than a smartphone?

Ignoring the Olympics

It is weird that some think the purpose of the Olympics should be political messaging.

American Olympians criticized the International Olympic Committee for its reiteration of a ban on protests at the Games.

Source: Tokyo Olympics: American athletes criticize IOC over protest ban

And after doing political messaging, some lose sponsorships and complain of poverty. Athletes seem to misunderstand why someone offered to pay them money – probably not to promote political agendas and potentially upset future customers.

Athletics and sports are entertainment – and nothing more. The participants lose sight of this and deceive themselves into thinking they are something more than entertainment.

Twitter’s Photo Crop Algorithm Favors White Women

Twitter’s automatic cropping feature was supposed to automatically crop photos around the people in view.

Turns out that it overwhelmingly favors white women over everyone else.

When researchers fed a picture of a Black man and a white woman into the system, the algorithm chose to display the white woman 64 percent of the time and the Black man only 36 percent of the time, the largest gap for any demographic groups included in the analysis. For images of a white woman and a white man, the algorithm displayed the woman 62 percent of the time. For images of a white woman and a Black woman, the algorithm displayed the white woman 57 percent of the time.

Source: Twitter’s Photo Crop Algorithm Favors White Faces and Women | WIRED

AI-based systems do this a lot with photos. Meanwhile, billionaire-owned social media applies the same AI-based techniques to text to find “misinformation.” Undoubtedly, those methods are also biased, but we pretend otherwise.

AI-based systems are primarily based on pattern matching and machine learning. The pattern-matching network is trained by feeding it large amounts of data (photos or text, for example), and the system identifies patterns in that data. However, the “clues” that drive a pattern match may not be correct. For example, one photo-processing system identified animals, sort of, in pictures. But the pattern it learned was that pictures of wolves had snowy backgrounds – so most animals photographed against snow were classified as wolves. In other words, the salient characteristics were incorrect, but that is how basic machine learning works.
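The wolf-versus-snow failure can be shown with a toy classifier. This is a hypothetical sketch, not the actual system from that study: the feature values are invented, and a simple nearest-centroid rule stands in for a real neural network. The point is only that when a spurious feature (snow in the background) separates the training examples, the classifier learns the background rather than the animal:

```python
# Toy illustration of a classifier latching onto a spurious feature
# (snow in the background) instead of the animal itself.
# Features and values are invented for this example.

def train_centroids(examples):
    """Average the feature vectors per label (nearest-centroid training)."""
    sums, counts = {}, {}
    for features, label in examples:
        acc = sums.setdefault(label, [0.0] * len(features))
        for i, f in enumerate(features):
            acc[i] += f
        counts[label] = counts.get(label, 0) + 1
    return {lbl: [v / counts[lbl] for v in acc] for lbl, acc in sums.items()}

def classify(centroids, features):
    """Assign the label whose centroid is closest (squared distance)."""
    def dist(lbl):
        return sum((a - b) ** 2 for a, b in zip(centroids[lbl], features))
    return min(centroids, key=dist)

# Feature vector: [snow_in_background, ear_pointiness]
# In the training data, every wolf photo happens to have snow in it.
training = [
    ([1.0, 0.9], "wolf"), ([1.0, 0.8], "wolf"), ([0.9, 0.9], "wolf"),
    ([0.0, 0.7], "dog"),  ([0.1, 0.8], "dog"),  ([0.0, 0.9], "dog"),
]
centroids = train_centroids(training)

# A dog photographed against snow is misclassified as a wolf,
# because "snow" dominates the learned distinction, not the animal.
print(classify(centroids, [1.0, 0.8]))  # "wolf"
```

The ear-pointiness feature barely differs between the classes here, so the snow feature carries nearly all the distance calculation – the same structural problem as the wolf/snow classifier described above.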

The culture of perpetual outrage strikes Krispy Kreme

Far too many people have far too much time and far too easy a life, so they feel compelled to join the culture of perpetual outrage over absurd issues:

The donut chain’s well-meaning (and literally sweet) incentive sparked some backlash on social media, with everyone from doctors to comedians pointing out that obesity — which is rampant in the U.S.— is also a prime risk category for the coronavirus.

Source: Krispy Kreme CEO defends COVID vaccine promotion: ‘If folks don’t want to visit a donut shop, they don’t have to’

Those caught up in the culture of perpetual outrage actively seek out targets for outrage – which is easy to do with today’s media and social media sources. They are not fulfilled unless outraged about something!

Sigh.

(This post belongs on my Social Panic blog but I was lazy this morning – or outraged! Hah hah).

How much does Youtube pay content creators?

There is no simple, one-size-fits-all answer to that question.

This online article, though, provides an excellent explanation of the mix of revenue sources and possible revenue. If you are interested in the payout schemes, check out that link.

Last night I watched a video that was basically “Whatever happened to these sailing channels?”, referring to the large number of (mostly) attractive young people who think they can sail around the world and pay for it by posting videos of their travels on YouTube.

Only a few of these channels achieve success, and many eventually disappear for reasons ranging from insufficient income, partners splitting up, getting married and settling down on land, or thinking about families, to coming to grips with the reality of needing, at some point, to save real money for the future rather than just begging for donations on YouTube. Plus, some have decided they would rather return to a private life.