DATE
Friday 24 May 2024
SLOT
11.50
VENUE
Orangerie
ORGANISED BY
Algorithm Audit (NL)
MODERATOR
FACILITATOR

Description

Widespread AI systems, such as machine learning-based profiling and computer vision algorithms, lack established fairness methodologies. With the advent of the AI Act, regulators rely on self-assessment mechanisms to evaluate AI systems’ compliance with fundamental rights. But entrusting decentralized entities, e.g., data science teams, with identifying and resolving value tensions raises concerns. In practice, one soon runs into difficulties when trying to validate an algorithm, such as selecting appropriate metrics to measure fairness in data and algorithms. How can normative issues concerning open legal norms, such as those on proxy discrimination and explainability, be resolved? This panel explores how decentralized AI audits can be performed in a more transparent and inclusive manner with the help of the concept of “algoprudence” (jurisprudence for algorithms). Additionally, the panel discusses how institutional entities can actively guide AI developers to comply with, for example, existing non-discrimination regulations.

  • From the perspective of both individual and institutional legal protection, what are the implications of decentralizing decisions regarding fundamental rights, and what issues might it resolve or introduce?
  • How can normative disputes be settled when performing Fundamental Rights Impact Assessments (FRIAs) in AI development?
  • What is the role of regulatory bodies in providing guidance for resolving normative challenges regarding AI fairness?
  • What is “algoprudence” and how can it contribute to fairer AI decisions?
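
The metric-selection difficulty mentioned in the description can be made concrete with a small, purely illustrative Python sketch (not part of the panel materials; all names and data below are hypothetical): two widely used group-fairness metrics, computed on the same predictions, can reach different conclusions, which is exactly the kind of normative choice an auditor must justify.

    # Illustrative sketch only: shows why choosing a fairness metric is a normative decision.
    # Two common group-fairness metrics can disagree on the same hypothetical predictions.
    import numpy as np

    def demographic_parity_diff(y_pred, group):
        """Difference in positive-prediction rates between groups 0 and 1."""
        return abs(y_pred[group == 0].mean() - y_pred[group == 1].mean())

    def equal_opportunity_diff(y_true, y_pred, group):
        """Difference in true-positive rates between groups 0 and 1."""
        tpr = lambda g: y_pred[(group == g) & (y_true == 1)].mean()
        return abs(tpr(0) - tpr(1))

    # Toy data: equal positive-prediction rates per group, but unequal true-positive rates.
    y_true = np.array([1, 1, 0, 0, 1, 1, 0, 0])
    y_pred = np.array([1, 1, 0, 0, 0, 1, 1, 0])
    group  = np.array([0, 0, 0, 0, 1, 1, 1, 1])

    print("demographic parity difference:", demographic_parity_diff(y_pred, group))       # 0.0
    print("equal opportunity difference:", equal_opportunity_diff(y_true, y_pred, group))  # 0.5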

Moderator
