DATE
Thursday 23 May 2024
SLOT
16.00
VENUE
Orangerie
ORGANISED BY
Panoptykon Foundation (PL) & AI, Media & Democracy Lab (NL)
FACILITATOR

Description

Over the last decade, social media platforms have fallen short of their promise to connect and empower people. Their business model creates a strong incentive to prioritise user engagement over the safety and quality of our online experience. This overarching commercial objective informs the design of recommender systems – a crucial layer of social media platforms that determines how we find information and interact with content. Content-ranking algorithms tend to amplify various types of borderline content, including hate speech, disinformation and click-bait. With shadow-banning and de-ranking as equally powerful and non-transparent tools, large social media platforms shape the digital public sphere in ways that benefit their commercial goals but do not serve social interests or democratic values. Individual users are told that their feed has been “personalised”, yet they have very few tools to influence what content is recommended to them. The panel will critically examine the EU regulatory response to the challenges posed by large platforms’ recommender systems (especially the Digital Services Act and the Commission’s enforcement powers under this regulation). Panelists will also discuss incentives for and barriers to designing social media recommenders that would serve real users’ needs (including self-development, self-determination, and access to high-quality and diverse content) and a healthier online public sphere:

  • What points of legal intervention are available to hold large online platforms accountable for harms caused by their recommender systems? Are the European Commission’s new powers under the Digital Services Act sufficient? Or should some of these harms be addressed in the upcoming revision of European consumer protection regulation?
  • To what extent is individual empowerment an answer to the systemic risks posed by social media recommender systems? What top-down measures (e.g. mandating “safer” or “healthier” default settings) may also be necessary?
  • “User engagement” – the objective that determines the design of popular recommender systems – may not serve our digital wellbeing, but it comes with clear metrics of success, which is why shareholders prefer it. Can we translate value-based objectives (such as the quality and safety of the online experience) into metrics that designers of commercial recommender systems can use?
