DATE
Wednesday 22 May 2024
SLOT
11.50
VENUE
Class Room
ORGANISED BY
ARTICLE 19
MODERATOR
FACILITATOR

Description

Any AI governance framework should have human rights and freedom of expression at its core. The misapplication, or anything less than careful application, of AI poses real risks. In the near future it may even be considered a human rights violation not to use AI, much as using “dumb” rather than “smart” bombs around civilian populations is already treated today. ARTICLE 19 has identified three main reasons commonly given for banning specific technologies or use cases, that is, for drawing red lines: inaccuracies in performance; inherent, unnecessary or disproportionate risks to human rights that cannot be mitigated; and the exacerbation of power imbalances between the institutions deploying technologies such as facial recognition and the individuals subjected to them. Furthermore, due to technical limitations or inadequate policies, many AI systems cannot offer transparency into their decision-making, often precisely when lives and livelihoods are at stake. All of this points to a pressing need for a shared understanding of minimum standards of transparency and accountability that satisfy the tests of ‘legality’ and due process.

  • How might freedom of expression concerns be addressed by a global governance framework for AI? (With reference to multilateral initiatives such as the UN’s ‘Pact for the Future’ and the ‘Global Digital Compact’, amongst others.)
  • How can we ensure that there is consistency in the application of ‘red lines’ globally?
  • What are the implications of the EU AI Act in setting red lines?
  • What tactics/strategies need to be employed to build red lines within AI governance frameworks?
