Intersys’ Managing Director Matthew Geyman shares his thoughts on the Rise of the Chief AI Officer.
As someone helping businesses integrate emerging technologies and manage cyber security, I have a close interest in AI. If I were to sum up where we are today, I’d say using AI in business is like walking a tightrope between a colossal opportunity and an equally large information security risk.
Can we do nothing? I think not. The opportunity cost of not leveraging the advantages of AI could hold back growth. But letting AI run loose with your internal data is potentially disastrous.
Deploying AI correctly is paramount. This is why I see a case for some organisations designating a chief AI officer (CAIO), although most should treat it as a subset of the chief technology officer (CTO) function, a point I touched on in a recent Insurance Times interview.
The role has emerged only relatively recently, but my bet is that more and more organisations will appoint a CAIO to ensure guardrails are in place for proper AI deployment and compliance. The alternative – to not show due diligence with AI – hardly bears thinking about.
Which AI are we talking about?
When talking about AI and products currently available on the market, we’re specifically referring to:
Predictive AI, which has been around for a while and learns from the past to make accurate forecasts.
Generative AI, which emerged to the wider public with the launch of ChatGPT in November 2022. This creates new content based on complex data patterns and includes large language models (LLMs).
These iterations of AI remain specialised and tool-focused, evolving alongside advancements in data and computational power.
Artificial General Intelligence (AGI), which can truly understand and reason at a human level or beyond, is still theoretical and some way away.
What are the opportunities?
You’ll know, at least in part, the answer to that question if you’ve used now-common services such as ChatGPT, Claude AI or Microsoft 365 Copilot. However, I’d like to go a little deeper by looking at a sector we serve – the insurance markets – to illustrate the huge potential for industry-specific applications.
It’s also worth noting that a LinkedIn report about the future of work and AI highlighted that financial services are near the top of the pile for demand for AI skills across seven countries, including the UK.
Here are two areas in which AI is likely to make a significant difference to the insurance sector in the months and years ahead:
Due diligence and risk assessment
AI can analyse historical claims data, financial records and other sources to assess client risk profiles and identify red flags and risk trends. This improved risk selection can sharpen underwriting decisions and lower expense ratios. Meanwhile, machine-learning algorithms can detect patterns that indicate fraud by analysing vast data sources, including claims histories, customer behaviours and external data sets.
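To make the fraud-detection point concrete, here is a deliberately simplified sketch in Python using scikit-learn's IsolationForest, an unsupervised anomaly detector. The column names and figures are hypothetical, and a real fraud model would need far richer features, proper validation and human review of every flag.

```python
# Illustrative sketch only: flag unusual claims for human investigation.
# All column names and values are hypothetical.
import pandas as pd
from sklearn.ensemble import IsolationForest

# Hypothetical claims extract: one row per claim.
claims = pd.DataFrame({
    "claim_amount":       [1_200, 950, 48_000, 1_100, 52_500],
    "days_since_policy":  [400, 310, 12, 280, 9],
    "prior_claims_count": [0, 1, 0, 2, 0],
})

# Flag roughly the most anomalous 40% of claims (here, 2 of 5).
model = IsolationForest(contamination=0.4, random_state=0)
claims["flagged"] = model.fit_predict(claims) == -1  # -1 = anomaly

print(claims[claims["flagged"]])
```

In this toy data set, the two large claims lodged days after policy inception are the ones surfaced, which is exactly the kind of pattern a human investigator would then review.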
Loss modelling and risk management
It’s generally accepted in insurance and reinsurance that existing natural catastrophe (NatCat) and climate change models are broadly inadequate. To quote the Financial Times, ‘The sector has been rocked four years in a row as natural catastrophe losses topped $100bn.’
Catastrophe modelling using AI can incorporate real-time weather data, geographic risk factors and historical loss data to improve the accuracy of loss predictions. Meanwhile, predictive AI can forecast the probability and potential scale of claims based on various scenarios, from natural disasters to economic downturns, enabling reinsurance companies to price their coverage accurately and manage exposure.
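Purely as an illustration of the shape of such a model, the sketch below fits a gradient-boosted regressor to synthetic weather and exposure features. Every feature name, figure and relationship here is invented for demonstration; real catastrophe models are vastly more sophisticated.

```python
# Illustrative sketch only: predicting expected loss from weather and
# exposure features. All inputs and the loss relationship are synthetic.
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 500

# Synthetic stand-ins for real inputs.
X = np.column_stack([
    rng.uniform(40, 150, n),  # peak wind speed (mph)
    rng.uniform(0, 50, n),    # distance from coast (km)
    rng.uniform(1, 100, n),   # insured value (GBP m)
])
# Toy loss relationship purely for demonstration.
y = 0.002 * X[:, 0] ** 2 * X[:, 2] / (1 + X[:, 1])

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = GradientBoostingRegressor(random_state=0).fit(X_train, y_train)
print(f"Held-out R^2: {model.score(X_test, y_test):.2f}")
```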
While these two areas of application catch the eye, there are many more ways the industry can use AI, for instance to improve pricing and underwriting models, automate claims systems and interact with customers. For many, AI seems likely to feature in Blueprint Two strategies, the London Market insurance industry’s drive to accelerate digital change.
But as always, there’s a catch…
The limitations and dangers of AI
At Intersys, we are increasingly being asked to integrate AI for clients – with an emphasis on getting things up and running quickly. Customers want to see the benefits and they want to see them now. However, as with any disruptive innovation, there’s a need for proper planning and a high degree of control. This is because of both the limitations and the dangers of AI.
For instance, until very recently, ChatGPT – which could write you a tolerable history of natural catastrophe reinsurance in seconds – could not count the number of Rs in the word ‘strawberry’. Well, it could, but it insisted there were two. This inaccuracy is common and the strawberry example is only the surface of a problem that could run very deep.
This makes it very hard to rely on any AI output without thorough checking. It’s also worrying that AI can gaslight the user, insisting it is correct when it isn’t.
Yes, AI is improving its performance in numerical calculations (on the gaslighting, the jury is still out), but we must remain cautious.
Another problem is information bias. In closed markets holding sensitive information, the quality and availability of data for analysis may be limited. In other words, AI could confidently return an analysis based on a skewed data set. Trusting a flawed analysis could have devastating consequences across many industries – for insurance risk profilers, the implications are obvious.
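A toy calculation shows how easily this happens. In the sketch below, an ‘analysis’ built on a sample that only captures low-severity risks produces a stable, confident and badly wrong estimate of average claim cost. All of the numbers are invented for illustration.

```python
# Toy demonstration of information bias: an average computed from a
# skewed sample looks precise but misses part of the population.
import numpy as np

rng = np.random.default_rng(1)
# Full market: 80% low-severity risks, 20% high-severity risks.
low = rng.normal(10_000, 2_000, 8_000)
high = rng.normal(60_000, 8_000, 2_000)
market = np.concatenate([low, high])

# A closed data set that only captures the low-severity segment.
skewed_sample = low[:1_000]

print(f"True mean claim:  {market.mean():>10,.0f}")   # ~20,000
print(f"Skewed estimate:  {skewed_sample.mean():>10,.0f}")  # ~10,000
# The skewed estimate is stable and 'confident' but roughly half the truth.
```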
Of course, it’s good that sensitive company data or intellectual property is protected, isn’t it? But what happens when it isn’t? This brings us to the dangers of AI in relation to sensitive data.
Early this year, we ran an exercise using ChatGPT to see what non-damage business interruption (NDBI) claims were public. Predictably, we found only information about property damage business interruption (PDBI), because NDBI is a more specialised component of BI insurance, designed to protect revenue from events such as regulatory shutdowns or supply chain disruption, especially when intangible assets like reputation are involved.
However, asking Copilot or ChatGPT now about large NDBI claims returns far more information. If some of it wasn’t supposed to be public, AI didn’t know or care. But perhaps competitors took a keen interest.
Whether the leaking of this information was malicious or accidental, there is huge potential for AI – if handled without due care – to return or disseminate information you’d much rather remained private.
A more prosaic example might be a simple search by an employee that unintentionally reveals salary information across the entire company, simply because access privileges were not scrupulously set. In that scenario, the AI rifles through confidential information with potentially embarrassing and disruptive results.
Proper implementation of the principle of least privilege would have ensured users (and their AI assistant) had access only to those company files and folders deemed necessary.
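As a simplified illustration of what that means in practice, the sketch below filters documents by group membership before anything reaches an AI assistant’s retrieval step. The names and structures are hypothetical, and in a real deployment these controls would also be enforced at the identity and storage layers, not just in application code.

```python
# Minimal sketch of least-privilege filtering in front of an AI
# assistant's document retrieval. All names here are hypothetical.
from dataclasses import dataclass

@dataclass
class Document:
    path: str
    allowed_groups: frozenset  # groups permitted to read this file

def retrieve_for_user(query: str, user_groups: set, corpus: list) -> list:
    """Return only documents the querying user is allowed to read."""
    visible = [d for d in corpus if d.allowed_groups & user_groups]
    # Only `visible` (never the full corpus) is searched for the assistant.
    return [d for d in visible if query.lower() in d.path.lower()]

corpus = [
    Document("hr/salaries_2024.xlsx", frozenset({"hr"})),
    Document("policies/expenses.pdf", frozenset({"all-staff"})),
]

# An employee outside HR cannot surface salary data via the assistant.
print(retrieve_for_user("salaries", {"all-staff"}, corpus))  # -> []
print(retrieve_for_user("expenses", {"all-staff"}, corpus))  # -> [expenses doc]
```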
What might a chief AI officer offer?
I hope the above illustrates that safeguarding AI use in commercial organisations isn’t something to dip into lightly. A CAIO should explore and maximise the opportunities AI brings, while setting the guardrails and culture needed to protect both the organisation and its reputation.
In my opinion, if you’re implementing AI at pace, appointing a chief AI officer – or ensuring that your CTO is formally tasked with AI governance and has the resources to respond – could and quite probably should be considered an urgent matter. Many businesses are already taking this on board. As reported in Insurance Times, LinkedIn reports that ‘since December 2022, 13% more firms have created “head of AI” positions’.
So, what will they do? Here’s a topline job spec I’ve put together to make my point about the potential scope of the role. However, I would add that this is a fast-moving and evolving role. We’re in conversations with clients about delivering this function as an outsourced service, and at this stage it’s important to remain open-minded about the scope.
Overview: The Chief Artificial Intelligence Officer (CAIO) role is a leadership position focused on governing AI initiatives, leveraging AI for competitive advantage and building shareholder confidence while protecting the company’s reputation.
Responsibilities include strategy development, identifying opportunities and risks, and establishing ethical standards. The CAIO oversees the integration of AI into products, regulatory compliance, data security and change management.
Key skills: Product development, data science, analytics, machine learning and ethics.
Where does this role sit in an organisational hierarchy?
In most companies, a chief artificial intelligence officer should be a cross-departmental function under the chief technology officer (CTO), with responsibilities that overlap with roles such as chief information security officer (CISO) and chief innovation officer (CINO).
Typically, it will be a board-level role, but that will depend on the focus of the organisation. Mature businesses, with existing security controls and procedures in place, won’t have much of an issue finding a home for the role alongside existing roles and structures. Less mature ones with a need for an AI officer may need to play catch-up in terms of organisational structure.
One thing I’d add is that the role may soon become regulated, akin to that of a data protection officer (DPO), particularly in relation to AI-driven pricing, profiling and automated decision-making. This will help to ensure it meets the UK GDPR / Data Protection Act and other existing regulations. It will also prepare the way for compliance with incoming regulation from U.S. federal agencies and the EU’s AI Act, which addresses ethical AI use and automated decision-making impacts.
As AI adoption grows, the CAIO will inevitably play a critical role in aligning AI practices with evolving regulatory standards and ensuring responsible, transparent AI deployment.
Intersys offers businesses a virtual chief AI officer (VCAIO) function, either as part of our virtual chief technology officer (VCTO) service, or as an independent role. To find out more, get in touch.
See Matthew Geyman’s Insurance Times interview about chief artificial intelligence officers.