
Recommendation System: How AI is Controlling What You See?


Have you ever opened YouTube to watch a 5-minute video, only to find yourself, 30 minutes later, lost in the maze of your feed? Well, you are not alone. What started years ago as a debate on the ethics of recommendation systems has spiraled into a serious problem.

But what is a recommendation system? How does it influence us to binge more? And does the water run deeper in the vicious pond of the attention economy than what one can see on the surface?

Recommendation System: Shaping Minds

Recommendation systems let us take sips from the fire hose of information pointed our way every day of the week, by surfacing the few items in a vast catalog that seem particularly valuable or relevant.
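In the simplest terms, a recommender scores every candidate item against what it knows about the user and keeps only the top few. Here is a minimal, hypothetical sketch of that idea; the titles, feature space, and scores are all made up for illustration.

```python
# Minimal sketch of narrowing a vast catalog to a few relevant items:
# score every candidate against the user's taste profile, keep the top-k.

def recommend(user_profile, catalog, k=3):
    """Rank catalog items by dot-product relevance to the user profile."""
    def score(features):
        return sum(u * f for u, f in zip(user_profile, features))
    ranked = sorted(catalog.items(), key=lambda kv: score(kv[1]), reverse=True)
    return [title for title, _ in ranked[:k]]

# Hypothetical feature space: [cooking, politics, fitness]
catalog = {
    "Knife skills 101":    [0.9, 0.0, 0.1],
    "Election explainer":  [0.0, 0.8, 0.0],
    "Home workout":        [0.1, 0.0, 0.9],
    "One-pot dinners":     [0.8, 0.1, 0.0],
}
user = [1.0, 0.0, 0.2]  # a user who watches mostly cooking content

print(recommend(user, catalog, k=2))
```

Real systems replace the hand-written feature vectors with learned embeddings and the dot product with a trained model, but the shape of the problem, score then rank then truncate, is the same.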

However, while recommendation systems are immensely valuable, they also have a number of serious ethical failure modes. Some of these arise when companies build recommendation systems solely on the basis of user feedback, without taking into account the broader implications these systems have for civilization, society, and young minds.

Also Read: Impact of Technology on Human Rights

These implications are significant, and they are growing rapidly. Twitter and Google use their recommender algorithms routinely to shape public opinion on moral issues of the day, sometimes by design, sometimes by accident.

All four primary stakeholders of modern recommendation systems have a different set of interests and reasons to care about recommendation quality:

  • End-users: The consumers directly taking in the recommendations from Netflix, YouTube, Twitter, Instagram, etc.
  • Systems: The AI or back-end operators responsible for immediate user satisfaction
  • Companies: The operators of these recommendation systems, who monetize attention and user retention on the platform while also determining users' long-term satisfaction
  • Society: Affected by how the quality of recommended content shapes human content-consumption preferences in the long term

The Reason Behind Your Blind Scrolling

Social media giants thrive through the recommendation algorithms built into their products, but at what price? Using addiction to generate more revenue, and milking teens and impressionable minds by nudging them to binge more, has become a stepping stone to a successful platform.

TikTok has long been accused of denting mental health, narrowing attention spans, and being extremely addictive. Now Meta, with Instagram Reels, and Google, with YouTube Shorts, have joined the race of reaping money at the cost of ruining lives.

Also Read: Tiktok Blackout Challenge: The Resurfacing Danger Taking Lives

Though only beginning to surface, the issue will grow in severity as the negative impacts compound over the coming years.

The Endless Well

Recommenders make implicit choices about the time horizon that matters to their users. Twitter, for example, aims to dominate user attention for as long as possible by seducing you into actions that make you happy over very short timescales.

But decisions that feel right on a short timescale don't always feel right on a longer one. For example, even though you performed only the actions you "wanted" to take in the moment, you may feel your time was wasted after you log off. Some recommenders do work on different time horizons (Amazon, for instance, cares whether you trust its reviews, so it asks you to leave feedback only after you have bought and tried a product), but users have little control over the horizon on which their interactions with software are evaluated.
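The time-horizon point above can be made concrete with a toy calculation: the same session can look "good" to a system optimizing immediate engagement and "bad" under a longer horizon that weighs how the user feels afterward. All numbers below are hypothetical illustrations, not measurements from any real platform.

```python
# Toy illustration: value the same session under two different horizons.

def session_value(rewards, discount=1.0):
    """Discounted sum of per-step rewards; discount < 1 shortens the horizon."""
    return sum(r * discount**t for t, r in enumerate(rewards))

# Each list entry is one step of a four-video session.
clicks       = [1.0, 1.0, 1.0, 1.0]    # every video got watched: great engagement
satisfaction = [0.5, 0.2, -0.3, -0.8]  # how the user felt afterward, turning sour

short_horizon = session_value(clicks, discount=0.5)        # engagement-style objective
long_horizon  = session_value(satisfaction, discount=1.0)  # the user's own accounting

print(short_horizon)  # positive: the feed "succeeded"
print(long_horizon)   # negative: the user feels the time was wasted
```

The two objectives disagree about the very same session, which is exactly the gap between what the recommender optimizes and what the user would endorse on reflection.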

Also Read: Techno-Racism: Technology Automating Racial Discrimination

Our desire to improve and grow through our interactions with AI systems like recommenders is naturally in tension with our desire to preserve agency and control our own trajectory of growth and improvement.

Twitter, for example, does more than present its users with suggestions: it shapes their views on current events and actively influences their preferences. As a result, it is transforming its users into different individuals. But when someone's identity can shift through their interaction with a recommendation system, how much control should they have over the direction or magnitude of that change? That is why ethics in recommender systems remains an open question, and a big one.

The Dark Side of the YouTube Algorithm

Videos have become the native language of the digital world. From tutorials on assembling an IKEA cabinet to free crash courses on digital marketing, YouTube hosts it all. It has become the dominant digital space, from teaching users new things to entertaining them for hours with its ocean of content.

However, therein lies YouTube's dark side. It uses a best-in-class AI-powered recommendation system to keep our eyes glued to the screen. Ever since implementing the recommendation system in 2015, YouTube's engagement rate has skyrocketed.

Also Read: Is Social Media Inflicting Extremist Movements and Riots Globally?

But the problem starts when this algorithm pushes whatever it deems engaging, even if it is mentally damaging or could otherwise harm the user. You might finish a tutorial on how to chop onions the right way, yet the next video may well be about how vaccinations are poisonous, how climate change is a hoax, or how starving yourself for 20 hours a day could give you your ideal body.
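The failure mode here is structural: a feed ranked purely by predicted engagement will surface harmful content whenever harmful content is engaging. A hedged sketch, with entirely made-up titles, scores, and a hypothetical `harmful` label standing in for a real content policy:

```python
# Ranking purely by predicted engagement vs. the same ranking behind a
# simple policy filter. All data below is fabricated for illustration.

videos = [
    {"title": "How to chop onions",       "predicted_watch_min": 4.0,  "harmful": False},
    {"title": "Vaccines are poison",      "predicted_watch_min": 11.0, "harmful": True},
    {"title": "Climate change is a hoax", "predicted_watch_min": 9.0,  "harmful": True},
    {"title": "Knife sharpening basics",  "predicted_watch_min": 6.0,  "harmful": False},
]

def next_up(candidates, filter_harmful):
    """Pick the next video by predicted watch time, optionally filtering."""
    pool = [v for v in candidates if not (filter_harmful and v["harmful"])]
    return max(pool, key=lambda v: v["predicted_watch_min"])["title"]

print(next_up(videos, filter_harmful=False))  # engagement-only ranking
print(next_up(videos, filter_harmful=True))   # same ranking behind a policy filter
```

In this toy feed, the engagement-only ranking picks the misinformation video precisely because it holds attention the longest; the filter changes the outcome without touching the ranking logic at all.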

Also Read: TikTok: The Unhealthy Glorification of Eating Disorder Amongst Teens

Google does try to take most of these harmful videos down, but not before they have influenced millions, and that is just the tip of the iceberg of the profit game intermingled with AI. It is a new era with newer challenges, challenges as real as climate change is real, no matter what the video claims.

Recommendation System: The Ethical Dilemma

Recommendation systems are a cornerstone of the internet, especially the social media economy. They make it easy to funnel out the content you might want to consume from today's unimaginably large volume of data online.

Perhaps we should start by asking some big questions about the kind of world we want to live in and then work backward from there to determine how our answers would affect the way we evaluate recommendation engines.

