Widespread AI systems, such as machine learning-based profiling and computer vision algorithms, lack established fairness methodologies. With the advent of the AI Act, regulators rely on self-assessment mechanisms to evaluate AI systems’ compliance with fundamental rights. But entrusting decentralized entities, e.g., data science teams, with identifying and resolving value tensions raises concerns. In practice, one soon runs into difficulties when trying to validate an algorithm, such as selecting appropriate metrics to measure fairness in data and algorithms. How can normative issues regarding open legal norms, such as those relating to proxy discrimination and explainability, be resolved? This panel explores how decentralized AI audits can be performed in a more transparent and inclusive manner with the help of the concept of “algoprudence” (jurisprudence for algorithms). Additionally, the panel discusses how institutional entities can actively guide AI developers toward compliance with, for example, existing non-discrimination regulations.