Social media have fallen short of their promise to connect and empower people. By building their business model on advertising, very large online platforms (VLOPs) gained an incentive to prioritise user engagement over safety, and there is mounting evidence of the harms caused by recommender systems optimised for engagement. With the Digital Services Act in place, we expect VLOPs to mitigate the systemic risks their core services pose to fundamental rights and democracy. Civil society experts argue that VLOPs should depart from signals and metrics that correlate with user engagement and instead prioritise signals that correlate with the relevance and credibility of the recommended content. What other ingredients do we need in the recipe for safer, rights-respecting recommender systems? Can we train content-ranking algorithms to predict “quality” or “credibility” instead of engagement? What can go wrong? Let’s discuss.
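To make the central question concrete, here is a minimal, purely illustrative sketch in Python: the same pointwise ranking model trained once on a synthetic engagement label and once on a synthetic credibility label, then used to rank the same candidate pool. Every feature, weight, and label here is invented for illustration; real platforms use far richer signals and proper learning-to-rank objectives.

```python
# Minimal sketch (not any platform's actual system): one model class,
# two optimisation targets. All features and labels are synthetic.
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

rng = np.random.default_rng(0)
n = 5000

# Hypothetical per-item features a recommender might compute.
features = np.column_stack([
    rng.random(n),  # predicted click-through rate
    rng.random(n),  # share/comment propensity
    rng.random(n),  # source reputation score
    rng.random(n),  # fact-check / credibility signal
])

# Engagement target: dominated by clickiness and shareability.
engagement = (0.6 * features[:, 0] + 0.4 * features[:, 1]
              + 0.05 * rng.normal(size=n))

# Credibility target: dominated by reputation and fact-check signals.
credibility = (0.5 * features[:, 2] + 0.5 * features[:, 3]
               + 0.05 * rng.normal(size=n))

# Identical architecture, different objective -- the core of the question.
engagement_ranker = GradientBoostingRegressor().fit(features, engagement)
credibility_ranker = GradientBoostingRegressor().fit(features, credibility)

# Rank the same candidate pool under each objective and compare orderings.
candidates = rng.random((10, 4))
print("engagement order: ", np.argsort(-engagement_ranker.predict(candidates)))
print("credibility order:", np.argsort(-credibility_ranker.predict(candidates)))
```

The sketch makes the easy part visible: swapping the target is trivial once a credibility label exists. The hard part, and one answer to “what can go wrong?”, is defining and measuring that label at scale without it being gamed or encoding someone’s editorial bias.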