Discussants: Szakadát István & Tófalvy Tamás
Abstract
News algorithms are key engines of today’s public information ecosystem, where the prime road to securing material and symbolic profits is commonly, and increasingly, believed to lie in seamlessly “connecting” media users with maximally relevant and resonant information and advertisements.
Algorithmed news services – “newsfeeds”, “Trends”, editorial algorithms – that make personalized news recommendations based on social media buzz have become objects of growing resentment. Critical voices warn that such services enclose users in a world of “filter bubbles” and “clickbait” sensationalism, where trivia, sloganeering, malign conspiracy theories, and fake news thrive.
Users, however, may not be so keen on living in a world of tribal and sensational news. Algorithmed services do all they can to document that people are genuinely attracted to tribal and sensational content; this, however, is arguably an illusion that algorithmed news infrastructures maintain in order to justify their own power. There are good reasons to argue that if the majority of users are found to follow algorithmic recommendations and to click on ideologically consonant content, on simplistic fake news, or on bombastic stories, this is not simply because they find this content genuinely relevant to themselves, but also, and perhaps primarily, because they tend to click mechanically and unselectively on content that is falsely presented to them as genuinely “popular” and “relevant” among their peers.
News algorithms do not simply gratify “real” popular tastes; their main effect lies elsewhere: in luring users into the belief that the stories they recommend are the ones that their peers genuinely find the most relevant for themselves.
The “business model” of news algorithms rests on exploiting users’ willingness to follow their peers’ judgment about what counts as relevant news – not, as they advertise, on serving users’ genuine personal preferences. This fact has been overlooked by the algorithms’ critics and sympathizers alike, as both groups have tended to assume that algorithmed news services do indeed fulfill the promise of perfect news personalization. That promise of new media was famously expressed in Nicholas Negroponte’s metaphor from the 1990s: the “Daily Me”. In the new media world, accordingly, the Daily News, formerly edited and curated by journalists and media gatekeepers, would be transformed into a fully personalized news package served to every user every day. Today’s newsfeeds on Facebook and Twitter have typically been justified and criticized in these terms, and most commentary has revolved around how to judge the consequences of a service that gratifies each user’s personal preferences.
However, the “Daily Me” is a seriously misleading metaphor, given that what users are truly interested in could better be called the “Daily Others” or the “Daily Peers”. When using algorithmed news services, users try to follow their peers’ news preferences rather than their own.
It is not enough, then, to criticize news algorithms for establishing a “tyranny of popular taste”. We also need to address the “tyranny” of the algorithmed intermediary apparatus, which exploits people’s hunger to follow the news choices of like-minded peers and lures them into unselectively clicking on “recommended” content that they believe their peers genuinely find most relevant. This approach explains filter bubbles and clickbait sensationalism from a new viewpoint, one that steps beyond the standard critique of algorithms as servants of a “tyranny of popular taste”. The proposed research opens new paths toward more effective critical and regulatory intervention in the algorithmed public sphere.