Over the last decade, social media platforms have fallen short of their promise to connect and empower people. Their business model creates a strong incentive to prioritise user engagement over the safety and quality of our online experience. This overarching commercial objective shapes the design of recommender systems – a crucial layer of social media platforms that determines how we find information and interact with content. Content-ranking algorithms tend to amplify various types of borderline content, including hate speech, disinformation and clickbait. With shadow-banning and de-ranking as equally powerful and opaque tools, large social media platforms shape the digital public sphere in ways that serve their commercial goals rather than social interests or democratic values. Individual users are told that their feed has been “personalized”, yet they have very few tools to influence what content is recommended to them.

The panel will critically examine the EU regulatory response to the challenges posed by large platforms’ recommender systems (especially the Digital Services Act and the Commission’s enforcement powers under this regulation). Panelists will also discuss the incentives and barriers to designing social media recommenders that would serve users’ real needs and a healthier online public sphere (including self-development, self-determination, and access to high-quality and diverse content).