The pension industry shouldn’t wait for regulations to be in place before adopting artificial intelligence (AI), Co-Labs Global strategic adviser, Annabel Gillard, has suggested.
During a panel discussion on AI at the Hymans Robertson Pensions & Retirement Conference 2025, Gillard said: “Don’t wait for regulation. Ethics by design is by far the most efficient, robust, scalable way of approaching this.”
She called regulation “a changing beast”, warning that by the time firms align with current regulations, they might be on “the wrong side of it”.
Instead, Gillard stated that pension firms should establish a strong ethical framework.
"If instead, you establish a good, ethical framework, you’re not only setting a higher standard than the legal minimum but also staying ahead of the regulatory picture," she explained.
Webuild-AI co-founder, Mark Simpson, acknowledged that AI adoption in the pension industry is “still in its early stages”.
He said the industry will see a great deal of technological change, but argued that the bigger challenge will lie on the business side.
“The way we engage with members will be harder than the technology itself,” he said.
Adding to this, Hymans Robertson partner and head of technology and innovation, Dan McMahon, stressed the need to equip pension scheme members with the “right tools” to use AI in a “safe and responsible” way.
“Members will be concerned. They will want to know why you have done the things you have done and how you have done them. We need to be able to explain that to members,” he continued.
“You need to have the guardrails. What you could be doing is introducing these systems, and as a result, introducing systemic risk.”
He gave the example of systems delivering advice that could be wrong on a systemic level.
To ensure effective AI adoption, Gillard outlined key steps for the pension industry, with the first being to establish fixed AI principles and align them with operational practices.
Gillard suggested pension schemes should then move on to a risk materiality assessment and understand how AI is already being used.
The next step, she explained, was to develop an evaluation process, stating: “You need to consider where you're willing to accept risks and where you're going to mitigate them, especially when navigating trade-offs.”
“Finally, you need to review it,” she said. “Not only because AI self-trains as it learns, but also because we are new to this and will learn a lot. We need to have a continuous feedback loop to ensure integrated governance.”