Any AI governance framework should have human rights and freedom of expression at its core. The misapplication, or anything less than careful application, of AI poses serious risks. It is even conceivable that, in the near future, failing to use AI will itself be considered a human rights violation, much as using “dumb” rather than “smart” bombs around civilian populations already is. ARTICLE 19 has identified three main reasons often given for banning specific technologies or use cases, or drawing red lines around them: inaccuracies in performance; inherent, unnecessary or disproportionate risks to human rights that cannot be mitigated; and the exacerbation of power imbalances between the institutions deploying technologies such as facial recognition and the individuals subjected to them. Furthermore, owing to technical limitations or inadequate policies, many AI systems cannot offer transparency into their decision-making, often even when lives and livelihoods are at stake. All this points to a pressing need for a shared understanding of minimum standards of transparency and accountability that satisfy the tests of ‘legality’ and due process.