The Institute of Risk Management (IRM) is the world’s leading enterprise-wide risk education institute. We are independent, well-respected advocates of the risk profession, owned by practising risk professionals. IRM passionately believes in the importance of risk management and that investment in education and continuing professional development leads to more effective risk management.
We provide qualifications, short courses and events at a range of levels from introductory to expert. IRM supports risk professionals by providing the skills and tools needed to put theory into practice in order to deal with the demands of a constantly changing, sophisticated and challenging business environment. We operate internationally, with members and students in over 90 countries, drawn from a variety of risk-related disciplines and a wide range of industries.
As a not-for-profit organisation, IRM reinvests any surplus from its activities in the development of international qualifications, membership, short courses and events.
Artificial intelligence and the transformation of risk management
Artificial intelligence promises to transform the risk profession for both good and ill, as delegates at an IRM Innovation special interest group (SIG) workshop in London heard recently. The session, hosted by LexisNexis, explored how risk managers are using artificial intelligence, the challenges and opportunities it presents, and how AI is being deployed in climate change risk.
Mark Turner, chair of the innovation SIG, opened with a word of caution about how a potential lack of diversity in the risk management profession could undermine the effectiveness of machine learning. “Humans are naturally biased,” he said. “It’s what makes culture and it is partly what makes us human.”
Creating categories and identifying groups can be problematic. For example, if the majority of risk managers think that terrorists fit current media stereotypes, that unacknowledged bias may disadvantage certain social groups unfairly. “Because of the demographic of our profession we may not recognise our biases. But if we accept that we may have a blind spot, we can seek the views of people of different ages, gender, ethnicity, education and experience to help get the balance right,” he said.
Mark Dunn, portfolio manager at LexisNexis Business Insight Solutions, took a deeper dive into what AI is and the issues it raises for businesses and risk managers wishing to use it. He defined AI as “systems able to perform tasks normally requiring human intelligence,” such as visual perception, speech recognition and decision-making. AI that could only complete a narrowly defined task, such as ordering groceries, fell into the camp of weak AI, whereas the science-fiction-flavoured strong variety could potentially surpass current human capabilities, he said.
Examples of existing technologies that fall under the category of AI include machine learning, big data and several varieties of analytics. It was the interplay between these different technologies that gave AI such a wide range of application, he said.
He outlined several areas where AI was already impacting society and the opportunities and risks those initiatives posed. For instance, the ability to collect and disaggregate data more easily through the use of new technologies could improve the way that programmes and services are targeted to those who need them most. But those very same technologies could make existing inequalities worse if they amplified the kinds of biases that Turner identified. For example, in the financial services sector, those who did not have access to the technology or the skills to harness online services – or protect themselves against fraud – could be at risk of further financial and social exclusion.
Similarly, new technologies could help improve and strengthen human rights – such as access to food, health and education. But because the field of AI is advancing rapidly, ensuring accessibility across all sectors of the community could be an issue, he said. In addition, AI threatened to decimate traditional workforces if the impact of robotic process automation is not mitigated.
One major red flag that Dunn raised was privacy. AI technologies have created an unprecedented demand for personal information, with potentially grave consequences for individuals when such data is misused. “The flow of data internationally, and from private to state actors, can make the regulation of privacy more challenging, particularly in providing effective remedies,” he said. AI would create new forms of reputational damage, he predicted.
Angus Rhodes of Ventiv Technology demonstrated how AI powered by IBM’s Watson program could help risk managers interrogate a large data set. He showed how risk managers could use Watson, via natural language commands, to compare the quality of data sets held within the system with external data (such as social media). The program was able to analyse and compare a staggering range and type of data and display the information in a wide variety of visual styles. Multiple visual styles could even be layered on top of each other to build easy-to-read presentations.
Dr Srini Sundaram, founder and chief executive officer of Agvesto, explained how his company was using AI to reveal climate change risks and opportunities in the agricultural sector. The business represented a next-generation platform for agricultural insurance, bond and credit markets, he said: “Our view is that technology platforms can transform the way financial products are delivered to the agricultural sector.”
The company’s AI-powered system collates a disparate data set including everything from radar and optical observations of the earth and its weather systems, to historical events, farmers’ own data and market information. He said that farmers sat in the middle of a complex pattern of money flows, changing flood and temperature patterns, and demographic and political change. “One of the pressing questions is, ‘how do you protect the food flows?’” he asked. While the technology was still in its infancy and needed to demonstrate value to both users and regulators, AI was already helping to co-ordinate such complex data, he said.
In a break-out session after the talks, the organisers asked the risk managers in attendance: “How will risk management use AI, and how will AI support risk management?” Risk managers said they wanted AI to help simplify complex data collation and analysis, help predict events and how quickly they might unfold, and work in line with their organisations’ risk appetites. They wanted AI that was intuitive to use. On the other hand, risk managers did not want AI making decisions on their behalf. They were also concerned about the potential for bias creeping into AI systems. “The more logic you put into machines, the more potential for systemic risk you get,” said one. Some said that as AI produced clearer and more accurate predictive visualisations, the risk manager would need to become more involved in ethics management.