Advocacy

Strengthening Europe’s position as a global hub of excellence in AI

On 21 April, the European Commission proposed legislation setting new rules and actions governing Artificial Intelligence (AI) in Europe. It also released a new coordinated plan with Member States on AI.

The aims and objectives of this AI package are succinctly captured by the soundbites of the Executive Vice-President for a Europe Fit for the Digital Age, Margrethe Vestager: ‘on artificial intelligence, trust is a must, not a nice to have’; and of the Commissioner for the Internal Market, Thierry Breton: ‘these proposals aim to strengthen Europe’s position as a global hub of excellence in AI’.

FERMA is pleased to see the Commission pursue a risk-based approach to AI.

That high-risk applications of AI come with requirements, such as enhanced risk management measures (Article 9), is a logical step. Risk managers need to understand the new risks and opportunities stemming from AI, and the potential impact of this proposed legislation, in order to help their companies mitigate those risks. To do this effectively, they will need good data. We therefore appreciate the links the Commission makes to its Data Strategy in the proposal for a regulation on AI; it remains to be seen how it all fits together in practice.

Furthermore, to give enterprises more clarity on how they are to assess the risks linked to uses of AI, FERMA sees merit in tasking the European standardisation organisations with, for example, developing standardised risk assessments.

Whether an application is high-risk or low-risk, a risk assessment should take place as part of an ex ante procedure for new products, with re-assessment where applicable. To do otherwise would open the way for uncontrolled non-compliance within the AI legislative framework as the product evolves and develops. In many instances self-assessment may be sufficient: risk management methodologies such as Enterprise Risk Management already allow organisations to measure the risks associated with AI and to put in place appropriate processes to mitigate them.

FERMA considers that a voluntary labelling scheme for low-risk AI applications would be very useful. Standardisation organisations have already begun some of this work, which the Commission should promote further, especially as it is a market-driven solution that adheres to strong principles of consensus and transparency. The right safeguards will need to be put in place to prevent the labelling system from being abused. We call upon the Commission to look further into this: although labelling for low-risk AI appears in the ‘Impact assessment’ section of the proposal text, it does not appear elsewhere.

Finally, in considering the potential implications of this proposal, FERMA has concerns about the uncertainty surrounding liability. Granted, on page 5 of its proposal the Commission addresses liability insofar as it mentions ongoing or planned initiatives in this area. However, risk managers above all will need clear legal certainty on the liability issues that may arise from the use of AI applications. Key questions for risk managers are: what new liability or cyber challenges arise from this proposal, and which existing liability or cyber challenges are altered, and how? We remain keen to see whether the Commission pursues a specific liability regime for AI or chooses instead to revise the Product Liability Directive.