
EU Policy Note addresses practical considerations of the new regulation for risk managers, examines the insurance implications, and highlights the need to assess ‘Silent AI’.

Brussels, 15 October 2024 – FERMA has issued an EU Policy Note on the EU’s Artificial Intelligence Act (EU AI Act), which provides guidance on the practical implications of the risk-based approach underpinning the legislation and considers the potential insurance impact.


The EU AI Act, published in July 2024, will apply across all 27 EU Member States, with companies expected to comply starting in February 2025. It aims to ensure a high level of protection of health, safety and fundamental rights against the potentially harmful effects of AI systems. The risk-based approach at its core classifies AI systems from low or minimal risk up to unacceptable risk, with most regulatory requirements applying to high-risk systems.

Under the legislation, high-risk systems must be registered in an EU database and must comply with specific obligations relating to training data and data governance, transparency, and risk management systems.

“The AI Act is arguably one of the most significant regulations introduced by the EU in recent years given the potential impact of AI across every aspect of our lives,” says Philippe Cotelle, FERMA Board Member and Chair of its Digital Committee. “It not only places a clear onus on risk managers to raise their game on AI, but it also addresses another piece of the puzzle, which is how this all impacts upon topics such as liability and innovation.”

The Policy Note highlights three essential pillars of an approach aimed at making the most of the new requirements, which risk managers can use as a basis for consideration in their organisations:

1. Development of an AI strategy and its transposition into a suitable governance framework, which can be demonstrated by a policy document and the implementation of end-to-end processes.

2. Implementation of the appropriate technology and investment in the continuous training of employees and partners, as well as the provision of documentation and guidance for customers.

3. Design of governance and technology in a way that anticipates audit requirements; pursuing formal certification is recommended, although not explicitly required by law.

In this context, FERMA advises risk managers to follow an internationally recognised ethical standard, to clearly define the scope of the policy along with roles and responsibilities, and to consider the environment in which their organisation’s AI system operates.

The Policy Note calls on companies to invest in safe technology implementation, as well as training. FERMA encourages risk managers to consider creating an internal set of benchmarks to measure AI system performance, and to ensure users are trained to mitigate the risk of misuse, unethical outcomes, potential biases, inaccuracy, and data and security breaches. All uses of the system, it adds, must align with the AI policy.

“FERMA research has shown that most risk managers are focused on addressing AI-related risks,” said Typhaine Beaupérin, CEO, FERMA, “with key responsibilities including monitoring of regulatory developments and developing internal policies to govern the use of AI in business-related activities. Having clear and targeted guidance on how the evolving legislative environment directly impacts businesses is critical to supporting practitioners in addressing this rapidly evolving risk.”

From an insurance perspective, FERMA also considers how the impact of AI on insurers may flow through to corporate risk and insurance managers.

The Policy Note further advises risk managers to consider analysing potential ‘Silent AI’ – the unknown or unquantified AI exposures that may already affect their existing insurance policies. Looking further ahead, it proposes that risk managers evaluate their need for a new type of product tailored to the way their enterprise uses AI, in line with their risk appetite and estimated exposures.

During the first half of 2024, FERMA ran a series of AI-focused webinars featuring speakers from Armilla, Lloyd’s of London, SAP, Moody’s and Yields.io, as well as an interview with Kai Zenner, Head of Office and Digital Policy Adviser to Member of the European Parliament Axel Voss. The FERMA Forum in Madrid on 20-22 October will also include a Learning Session on the implications of the EU AI Act for risk managers, as well as workshops on different aspects of the risk implications of AI.