News

The European Union published a set of Ethics Guidelines for Trustworthy Artificial Intelligence on 8 April 2019. These guidelines were prepared by the EU High-Level Expert Group on AI, an independent expert group set up by the European Commission in June 2018, and build on the results of a public consultation to which FERMA provided feedback.

FERMA welcomes these ethics guidelines, the first of their kind in the world, which aim to strengthen public trust in AI. They are not legally binding, but they could shape future EU legislation. The EU wants to be a leader in ethical AI, as it has been with the GDPR for personal data, and aims to build an international consensus on AI ethics guidelines.

The guidelines consist of seven ethical requirements to be followed by companies and governments when developing applications of AI:

  1. Human agency and oversight: respect for fundamental rights, human agency and human oversight
  2. Technical robustness and safety: resilience to attack and security, fallback plan and general safety, accuracy, reliability and reproducibility
  3. Privacy and data governance: respect for privacy, quality and integrity of data, and access to data
  4. Transparency: traceability, explainability and communication
  5. Diversity, non-discrimination and fairness: the avoidance of unfair bias, accessibility and universal design, and stakeholder participation
  6. Societal and environmental wellbeing: sustainability and environmental friendliness, social impact, society and democracy
  7. Accountability: auditability, minimisation and reporting of negative impact, trade-offs and redress.

The report includes a list of practical questions (the "trustworthy AI assessment list") to help users put the requirements into practice. Stakeholders will test this list to gather feedback for its improvement.

Interested stakeholders can register for the piloting process and start testing the assessment list. Feedback will be gathered through an online survey, which will be launched in June 2019. Based on this feedback, the High-Level Expert Group on AI will propose a revised version of the assessment list to the Commission in early 2020.