The Algorithm's Invisible Hand

📅 2024-09-26, 5 minutes

After watching the trailer for a new series featuring Bill Gates, I had to revisit his predictions from 1995 in a new light. This essay reviews how social media platforms and their algorithms can reinforce our existing beliefs, create filter bubbles, and influence our decision-making. These insights take us on a journey from mass-media advertising to hyper-personalised content, and along the way we'll explore how group dynamics and influencers shape our opinions and behaviours. By taking a closer look at how algorithms work, the essay shows how these systems tend to reinforce certain ideas while making it harder for diverse viewpoints to be heard.

Introduction

Back in 1995, when the Internet was just starting to take off, Bill Gates had a vision of "customised information" (Gates et al., 1995) [1]. His vision of personalised news services, whether curated by humans or software agents, has not only come to fruition but has evolved into a complex ecosystem that shapes our media consumption and, by extension, our worldviews.

These days, the algorithms used by platforms like Facebook and YouTube tend to show content that fits with what users already believe. This can reinforce cognitive biases, such as the illusory correlation – the tendency to perceive connections where none exist (Grier & Brumbaugh, 1999) [2]. This kind of algorithmic curation can intensify what scholars call "filter bubbles" or "information cocoons", where exposure to opposing viewpoints is minimised (Helberger, 2019) [3].
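
To make that mechanism concrete, here is a minimal sketch in Python, assuming a toy cosine-similarity score between a user's interest profile and each item's topics. The scoring and profile-update rules are illustrative assumptions, not any platform's actual algorithm, but they show how content closest to existing interests ranks first and how each click narrows the profile further.

    import math

    def cosine(a, b):
        # Cosine similarity between two interest vectors.
        dot = sum(x * y for x, y in zip(a, b))
        na = math.sqrt(sum(x * x for x in a))
        nb = math.sqrt(sum(y * y for y in b))
        return dot / (na * nb) if na and nb else 0.0

    def rank_feed(profile, items):
        # Items most similar to existing interests rank first -- the
        # seed of a filter bubble.
        return sorted(items, key=lambda it: cosine(profile, it["topics"]),
                      reverse=True)

    def update_profile(profile, clicked, lr=0.1):
        # Each click nudges the profile toward the clicked item's topics,
        # narrowing future recommendations.
        return [p + lr * (t - p) for p, t in zip(profile, clicked["topics"])]

    profile = [0.9, 0.1, 0.0]  # mostly interested in topic 0
    items = [{"id": "reinforcing", "topics": [1.0, 0.0, 0.0]},
             {"id": "opposing",    "topics": [0.0, 0.0, 1.0]}]
    print([it["id"] for it in rank_feed(profile, items)])
    # -> ['reinforcing', 'opposing']
    profile = update_profile(profile, items[0])
    # A click on 'reinforcing' pulls the profile further toward topic 0:
    # [0.91, 0.09, 0.0]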

This customisation has far-reaching consequences for society as a whole. People often look to online communities for reassurance, and social media influencers and group dynamics play a large part in providing it. Combined with algorithmic curation, these forces can reinforce behaviours and beliefs, making collective viewpoints even stronger (Maks & Craft, 2017) [4].

Decision-Making and Choice Architecture

The study of decision-making has evolved significantly since the early economic models that focused on rational agents maximising utility. Kahneman and Tversky's prospect theory, introduced in 1979, shifted attention to how perceived losses and gains influence our choices: losses loom larger than equivalent gains. In the context of algorithmic manipulation, the fear of losing individuality or autonomy appears to play a crucial role, making users more susceptible to conforming to ideas presented by influencers or peer groups, especially under the pressure of potential social exclusion.
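
As a small worked example, here is the prospect theory value function itself, using the median parameter estimates from Kahneman and Tversky's 1992 follow-up study (alpha = beta = 0.88, lambda = 2.25): a loss is weighted about 2.25 times more heavily than an equivalent gain.

    # Prospect theory value function; parameters are the 1992 median
    # estimates: alpha = beta = 0.88, loss-aversion lambda = 2.25.
    def value(x, alpha=0.88, beta=0.88, lam=2.25):
        if x >= 0:
            return x ** alpha            # gains: diminishing sensitivity
        return -lam * ((-x) ** beta)     # losses: steeper curve

    print(value(100))   # ~57.5   (subjective value of gaining 100)
    print(value(-100))  # ~-129.5 (losing 100 hurts over twice as much)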

Television was the catalyst for the shift from mass media to hyper-personalisation. It provided advertisers with a platform to influence behaviour on a large scale. TV commercials used psychological techniques to appeal to emotions and personal identities, paving the way for more targeted approaches in the internet age. The digital revolution has turbo-charged personalised advertising by leveraging user data, enabling highly specific ads based on browsing habits and preferences (Li et al., 2011) [5].

The current social media landscape is defined by sophisticated algorithms. Machine learning allows these systems to predict and influence not only individual behaviour but also broader social trends. Social media giants deploy algorithms to curate news feeds, prioritising content likely to elicit engagement – often emotionally charged or sensational material. Research suggests that these algorithms can exacerbate political polarisation and contribute to echo chambers, where users are primarily exposed to information that reinforces their pre-existing beliefs (Diakopoulos, 2016) [6].
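
A hypothetical sketch of engagement-first ranking makes the incentive visible: if the feed is sorted by a predicted-engagement score, emotionally charged material rises to the top by construction. The weights and example posts below are invented for illustration; real systems learn such weights from data, but the incentive structure is the same.

    # Toy engagement-optimised ranking: a weighted score over predicted
    # reactions. All weights and numbers here are made up.
    def engagement_score(post):
        return (2.0 * post["pred_shares"]      # shares spread content fastest
                + 1.5 * post["pred_comments"]  # arguments count as engagement too
                + 1.0 * post["pred_likes"])

    posts = [
        {"id": "measured analysis", "pred_shares": 5,  "pred_comments": 10,  "pred_likes": 120},
        {"id": "outrage bait",      "pred_shares": 80, "pred_comments": 200, "pred_likes": 90},
    ]
    feed = sorted(posts, key=engagement_score, reverse=True)
    print([p["id"] for p in feed])  # -> ['outrage bait', 'measured analysis']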

The Mechanics of Manipulation

The manipulation of individual and group choices by algorithms is a growing concern. Social media platforms use these techniques to normalise specific ideas or behaviours, reinforcing them through group validation or influencer marketing.

Influencers are key in shaping public opinion on everything from fashion choices to political views. Algorithms amplify their reach by promoting content that resonates with target audiences, creating a feedback loop of validation and reinforcement. This collective choice mechanism can normalise certain behaviours while marginalising or suppressing dissenting voices through the same algorithmic processes.
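
A tiny simulation of that feedback loop, under the simplifying assumption that the platform allocates each round's exposure in proportion to the previous round's engagement: a modest advantage in engagement rate compounds until one account dominates the feed. The account names and rates are invented for illustration.

    # Rich-get-richer sketch: exposure is reallocated each round in
    # proportion to past engagement -- a deliberately simplified model.
    def simulate(rates, rounds=10, audience=1000.0):
        engagement = {name: 1.0 for name in rates}  # everyone starts equal
        for _ in range(rounds):
            total = sum(engagement.values())
            exposure = {n: audience * e / total for n, e in engagement.items()}
            # Next round's engagement = exposure * per-account engagement rate.
            engagement = {n: exposure[n] * rates[n] for n in rates}
        return {n: round(100 * e / sum(engagement.values()))
                for n, e in engagement.items()}  # share of engagement, in %

    # A 20% better engagement rate ends up dominating within ten rounds:
    print(simulate({"influencer": 0.12, "dissenter": 0.10}))
    # -> approximately {'influencer': 86, 'dissenter': 14}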

Conclusion

Algorithmic curation has transformed our information landscape, moving far beyond simply providing personalised news. By reinforcing cognitive biases, promoting influencers, and shaping collective choices, algorithms have fundamentally altered how individuals interact with information and make decisions.

It is imperative that we critically examine and regulate algorithmic influence as our society becomes increasingly dependent on these platforms for news and social validation. The challenge lies in balancing the benefits of personalisation with the preservation of diverse viewpoints and individual autonomy in decision-making.

References

[1] Gates, B., Myhrvold, N., & Rinearson, P. (1995). The Road Ahead. Viking Penguin. ISBN 978-0-670-77289-6.
[2] Grier, S. A., & Brumbaugh, A. M. (1999). Noticing cultural differences: Ad meanings created by target and non-target markets. Journal of Advertising, 28(1), 79–93.
[3] Helberger, N. (2019). On the democratic role of news recommenders. Digital Journalism, 7(8), 993–1012.
[4] Maks, A., & Craft, S. (2017). Measuring news media literacy. Journalism and Mass Communication Educator, 72(2), 228–241.
[5] Li, C., Kalyanaraman, S., & Du, Y. R. (2011). Moderating effects of collectivism on customized communication. Asian Journal of Communication, 21(6), 575–594.
[6] Diakopoulos, N. (2016). Accountability in algorithmic decision making. Communications of the ACM, 59(2), 56–62.

Image source: @laviperchik hosted on Unsplash.com. Retrieved: 2024-09-25.