
The Federation of European Risk Management Associations (FERMA) welcomes the European Commission's appointment of experts to the High-Level Group on Artificial Intelligence (AI HLG) and calls for urgent attention to two priorities for European business.

The High-Level Expert Group will support the implementation of the European strategy on AI, including the development of ethical guidelines by the end of this year. There are currently no clear ethical rules for the use of data generated by AI tools, and the guidelines will take into account principles of data protection and transparency.

FERMA is therefore calling on the High-Level Group to address immediately the following two priorities for corporate organisations:

  1. Draw a clear line between the opportunities offered by AI technologies and the threats those same technologies pose to the insurability of organisations through over-reliance on AI in decision-making processes.
  2. Define ethical rules for the corporate use of AI, not just for employees but also for suppliers and all actors in the value chain. AI tools will allow increased and constant monitoring of a very large number of parameters. The risk management profession believes that this greater use of data could create concerns among stakeholders and risks to reputation.

The President of FERMA, Jo Willaert, says, “FERMA stands ready to bring its unique expertise in enterprise risk management methodology and tools, such as risk identification and mapping, risk control and risk financing, to the discussion so we can manage the threats and opportunities posed by the rise of AI to our organisations and society within acceptable risk tolerances.”

He adds, “FERMA argues that the new possibilities offered by AI must remain compatible with the public interest and with the interests of the economy and of commercial organisations. AI is already a reality in many organisations and it is going to disrupt our understanding of the future.

“Public authorities have a key role to play in ensuring that human judgement remains available as a last resort. This dialogue between regulators and AI users must start now, and the newly established AI HLG and the open-access European AI Alliance are the right settings for it.”