The Ethics Guidelines

The first assembly of the AI Alliance, a forum set up by the EU to discuss all aspects of Artificial Intelligence (AI) with stakeholders, took place in Brussels on 26 June 2019.

The European Commissioner for Digital Economy Mariya Gabriel opened the conference by announcing an ambitious target to increase EU competitiveness in AI and to catch up with China and the US in terms of investment. She proposed an investment objective of €20 billion annually for the period 2021-2027, using the new EU funding programmes Horizon Europe and Digital Europe.

AI is a topic of long-term interest for risk managers. FERMA has set up a working group both to assess the potential value of AI for improving enterprise risk management and to understand how risk managers can be key actors in highlighting the opportunities and challenges of AI technologies to senior management. The working group will present its conclusions at the FERMA European Risk Management Forum in Berlin in November and may publish them as a report.

For the Commission, AI will remain a high priority during the new mandate 2019-2024. The High Level Group on AI has now launched a pilot exercise to test the Ethics Guidelines for AI it published in April 2019. All stakeholders are invited to try the proposed assessment list, which operationalises the key requirements for ethical use of AI set out in the guidelines. Feedback will be used to update the assessment list in early 2020; the principles and requirements themselves will not be reviewed in 2020.

The pilot test will be central to determining the approach to regulation. The Commission’s current intention is to be cautious with regulation, applying proportionality, flexibility and a sectoral approach. A new conference will take place after the pilot phase, around June 2020.

Policy and investment recommendations

The High Level Group on AI also published its policy and investment recommendations. The document contains 33 recommendations addressed to the European Commission and the EU member states. The following are particularly important for businesses and the risk management community:

  • A regulatory framework that is risk-based, proportionate and effective, starting with a mapping of EU laws to see if they are fit for purpose as regards AI development
  • Sandbox regulation to test new concepts and reduce regulatory barriers
  • Public procurement to work with start-ups and SMEs as the backbone of AI investments
  • Deeper analysis per sector, for example healthcare, finance and manufacturing
  • Documentation, publication and discussion of negative applications of AI
  • Creation of a lucrative and open investment environment, based on a single market for trustworthy AI and an EU data economy

The group proposes a 10-year vision with a rolling action plan, under which the AI landscape would be continuously monitored and lessons from new AI developments applied quickly over the long term.