The Institute of Risk Management (IRM)

2nd Floor, Sackville House, 143-149 Fenchurch Street, London EC3M 6BN
+44 (0)20 7709 9808
  • About IRM

    The Institute of Risk Management (IRM) is the world’s leading enterprise-wide risk education Institute. We are independent, well-respected advocates of the risk profession, owned by practising risk professionals. IRM passionately believes in the importance of risk management and that investment in education and continuing professional development leads to more effective risk management.  

    We provide qualifications, short courses and events at a range of levels from introductory to expert. IRM supports risk professionals by providing the skills and tools needed to put theory into practice in order to deal with the demands of a constantly changing, sophisticated and challenging business environment. We operate internationally, with members and students in over 90 countries, drawn from a variety of risk-related disciplines and a wide range of industries. 

    As a not-for-profit organisation, IRM reinvests any surplus from its activities in the development of international qualifications, membership, short courses and events. 

Artificial intelligence and risk management


Artificial intelligence is going to be “a redefining event”, and there needs to be a debate about its consequences for risk managers, regulation and society at large, according to a gathering of over 150 people at a recent IRM/Imperial College Business School event.

Artificial intelligence is a “cognitive technology” that extends what used to be considered human processes – such as thinking, learning and predicting – and embeds them in networked machines. And since the technology is often freely available today from major developers such as Google, IBM and Microsoft, start-up companies are just as able to disrupt industries as large, cash-rich organisations.

“Data is the source of competitive advantage when it is harnessed to the power of cognitive technologies,” said one presenter – the meeting was held under the Chatham House Rule to enable free debate. But most businesses still do not have their data structured in a way that enables the effective use of machine-learning algorithms to develop processes that can help predict future behaviours. Applying natural language processing algorithms to the data will be essential.

Once the data is collated in a “big data lake”, organisations can look at prior histories to examine customer behaviour, and gain surveillance insights through monitoring employee and supplier activities by looking at voice data, electronic communications and chat rooms.
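The kind of communications monitoring described above can be illustrated with a deliberately simple sketch. The watch-list terms, the messages and the function name below are all invented for illustration; a real surveillance system would use trained language models rather than keyword matching.

```python
# Illustrative only: a toy keyword screen over electronic communications.
# WATCH_TERMS and the sample messages are hypothetical, not from any real system.
WATCH_TERMS = {"guarantee", "off the record", "delete this", "backdate"}

def flag_message(text: str) -> list[str]:
    """Return the watch-list terms found in a message (case-insensitive)."""
    lowered = text.lower()
    return sorted(term for term in WATCH_TERMS if term in lowered)

messages = [
    "Can we backdate the contract to last quarter?",
    "Meeting moved to 3pm, room 4.",
    "Keep this off the record for now.",
]

for msg in messages:
    hits = flag_message(msg)
    if hits:
        print(f"FLAGGED: {msg!r} -> {hits}")
```

In practice such a screen would be only a first filter: flagged items would feed into the machine-learning and natural language processing pipelines discussed above, which can weigh context rather than exact phrases.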

In risk management, for example, artificial intelligence can be used to map policies, procedures and controls against regulations and regulatory changes, improving organisations’ compliance.

But there were warnings too: “If organisations see risk managers as the policeman, they will not get the best data,” said one speaker. “The system has to be set up to bring the right risk data to the right risk managers when and where they need it. And, if people know that risk management are watching, they will come clean quicker.”

And questions:

  • What risks will we have to manage when artificial intelligence is democratised?
  • What are companies really using artificial intelligence for?
  • Will risk managers bring their own human biases to artificial intelligence?
  • How does such a significant technology change the environment itself?
  • Should artificial intelligence risks be regulated?

The panel members conceded that they did not know the true, future capability of artificial intelligence technologies – which is why they had opened many of their algorithms to business.

The panel and participants emphasised the need to learn how humans and machines could work together effectively, transparently and ethically in future.