The goal of YouTube, obviously, is to increase the percentage of time you spend watching YouTube, along with the ads it inserts every few minutes:

New research published today by Mozilla backs that notion up, suggesting YouTube’s AI continues to puff up piles of ‘bottom-feeding’/low grade/divisive/disinforming content — stuff that tries to grab eyeballs by triggering people’s sense of outrage, sowing division/polarization or spreading baseless/harmful disinformation — which in turn implies that YouTube’s problem with recommending terrible stuff is indeed systemic; a side-effect of the platform’s rapacious appetite to harvest views to serve ads.

Source: YouTube’s recommender AI still a horrorshow, finds major crowdsourced study | TechCrunch

Machine learning-based recommendation systems constantly seek patterns and associations – person X has watched several videos of type Y, so we should recommend more videos similar to Y.
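To make that concrete, here is a minimal Python sketch of an item-similarity recommender built on a toy co-watch matrix. The video titles, vectors, and the cosine-similarity scoring are illustrative stand-ins, not anything YouTube has disclosed about its actual system.

```python
from math import sqrt

# Toy co-watch vectors: each video is described by which (hypothetical)
# viewers watched it. Real systems use learned embeddings instead.
video_vectors = {
    "software rights talk": [1, 1, 0, 1, 0],
    "gun rights debate":    [1, 1, 0, 0, 1],
    "cat compilation":      [0, 0, 1, 0, 1],
}

def cosine(a, b):
    """Cosine similarity: 1.0 means identical viewing patterns, 0.0 means none shared."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = sqrt(sum(x * x for x in a)) * sqrt(sum(y * y for y in b))
    return dot / norm if norm else 0.0

def recommend(watched, k=2):
    """Return the k videos most similar to anything the user already watched."""
    scores = {
        title: max(cosine(vec, video_vectors[w]) for w in watched)
        for title, vec in video_vectors.items()
        if title not in watched
    }
    return sorted(scores, key=scores.get, reverse=True)[:k]

# Watching one "software rights" video is enough overlap, in this toy data,
# to surface the "gun rights" video as the top suggestion.
print(recommend(["software rights talk"]))
```

The point of the sketch is that “similar” is whatever statistical overlap the viewing data happens to produce, not an editorial judgment about the topics themselves.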

But YouTube defines “similar” broadly, which may result in your being shown barely related videos that encourage outrage, conspiracy theories, or what the researchers term “disinformation”. Much of that depends, of course, on how you define “disinformation” – the writer of the article, for example, thinks that when a user watches a video on “software rights” it is a mistake to then recommend a video on “gun rights”, and implies (in most of the examples given) that this biases recommendations towards right-leaning topics.

News reports also highlighted the algorithm inappropriately steering viewers to “sexualized” content, but that was a small part of the recommendations. This too might happen based on the long-standing marketing maxim that “sex sells”.

What seems more likely is that the algorithms identify patterns – even weak associations – and use them to make recommendations. In a way, user behavior drives the pattern matching that ultimately leads to the recommendations. The goal of the algorithm (think of it like an Excel Solver objective) is to maximize viewing minutes.
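To illustrate that “Solver objective” framing, here is a minimal Python sketch that fills recommendation slots purely by predicted watch time. The candidate titles and predicted minutes are made up, and a real system would predict those values with learned models rather than a hard-coded table.

```python
# Hypothetical predicted watch-minutes for a set of candidate videos.
# The objective below cares only about total expected viewing time;
# nothing in it knows or cares what topic or politics a video carries.
predicted_minutes = {
    "calm documentary":     4.2,
    "outrage clip":         9.7,
    "conspiracy deep dive": 8.1,
    "cooking tutorial":     3.5,
}

def rank_by_objective(candidates, slots=3):
    """Fill the recommendation slots that maximize expected minutes watched."""
    return sorted(candidates, key=candidates.get, reverse=True)[:slots]

lineup = rank_by_objective(predicted_minutes)
print(lineup, "->", sum(predicted_minutes[v] for v in lineup), "expected minutes")
# If provocative videos happen to hold attention longer, they float to the
# top as a side-effect of the objective, not because anyone chose them.
```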

Ultimately, what the Mozilla research actually finds is that the recommendations are not that good – many people were given recommendations they regretted watching.

Yet the researchers and the TechCrunch writer spin this into an evil conspiracy of algorithms forcing you to watch “right wing” disinformation. The reality seems far less nefarious: it’s just pattern matching on what people watch. Their suggestion of turning off these content matches is itself nefarious – they want YouTube to forcibly control what you see for political purposes, not simply to increase viewing minutes.

Which is more evil? Controlling what you see for political purposes or controlling what you see to maximize viewing minutes?

Coldstreams