Could you provide some broad reflections on AI regulation in the UK and further afield as well as an overview of the work of the DRCF?
This is an exciting time for AI development. It is also a crucial period for AI governance, as the shape of the regulatory landscape is likely to have a major impact on how AI affects everyone. If you take education, for example, AI could either lift everyone up and enable learning in ways that are commensurate with the individual’s needs – providing access to a broader field of knowledge as well as individualised opportunities to develop skills – or it could do the opposite, funnelling users to a narrower set of knowledge areas and impeding the development of critical-thinking skills.
So as AI governance is established in the coming years, it’s important that it ensures AI has an enabling and enhancing impact. Ideally, AI will provide everyone with new tools and services as well as being a motor for growth and the economy.
The DRCF has a three-year vision that focuses on improving our member regulators’ effectiveness as well as twin objectives of promoting innovation and growth and protecting consumers.
On the innovation side, this year we launched the DRCF AI and Digital Hub, a pioneering multi-regulator informal advice service. Innovators can come to us with queries, and we will be able to provide regulatory information spanning our member regulators’ remits. The overall ambition is to make it easier for innovators to navigate the regulatory landscape without having to work out exactly which regulator is responsible for what, and so to bring their products or services to market more quickly.
It is still early days for understanding the impact of the AI and Digital Hub. We will continue to raise awareness of this pilot service amongst innovators and make sure they know that engaging with the DRCF through the Hub will not expose them to more regulatory supervision than otherwise. Our goal is to build confidence and awareness of the Hub so that, if there is demand for it, it can continue in the long term.
The UK has just elected a new Government and it is possible that we will see a change in approach to AI regulation from the previous Government's ‘light-touch’ approach. Would firmer AI regulation be welcomed by regulators? Would it provide greater clarity?
It is for the Government to decide what kind of regulation is needed. But if there is legislation, regulators will stand ready to implement it. It’s important to understand that AI isn’t a standalone sector. AI is inescapable for all regulators: they are already thinking about how it affects their sectors, and how existing regulation applies.
The DRCF member regulators are each actively working on AI, as well as participating in the work we’re doing together through the DRCF. For example, the Information Commissioner’s Office (ICO) has launched the AI and data protection risk toolkit while Ofcom’s online safety team is considering the role of AI in social media. At the same time, the Financial Conduct Authority (FCA) is working on AI in financial services and the Competition and Markets Authority (CMA) has published studies on competition and consumer consequences of generative AI. These are examples of activities that regulators are doing separately; we come together where it makes sense to tackle issues collectively.
In this context, on what issues should industry engage with government versus individual regulators or the DRCF?
If it is a policy question, that is clearly for Government. If the issue concerns oversight and supervision, that would be for a single regulator. The DRCF’s role focuses on collaboration on issues that cross our member regulators’ remits. To give an example, the question of what ‘transparency’ means in AI is important to all our member regulators, so we are working on that together. Similarly, we have come together to discuss ‘fairness’ because each regulator has an interest in considering this. There are also tools that are relevant to us all, such as AI assurance and audit: all our member regulators share an interest in ensuring there is a healthy and robust market that industry can rely on.
Turning to some questions on international coordination, what can other jurisdictions, whether it be in the EU, US, or elsewhere, learn from the UK’s approach to the regulation of AI?
AI is a technology that does not stop at borders. We appreciate that the different approaches to AI regulation emerging around the world can be challenging for industry, creating a complex environment to navigate.
Our member regulators demonstrated foresight in establishing the DRCF back in 2020, at a time when no one else was thinking about regulatory coordination. Now, regulators overseas are looking towards the UK as an example of best practice in regulatory coordination, and there is a lot of interest in the model we have established at the DRCF. We are beginning to see other forms of regulatory collaboration occur around the world, which is exciting. We are keen to establish links with other bodies similar to the DRCF.
Last year we established a network of those bodies, the International Network for Digital Regulation Cooperation (INDRC). There are five members so far. We come together to share best practice and knowledge on regulatory collaboration. It is still early days for the network, but essentially we are hoping to enable coherence among regulators and to help industry navigate a system that is so complex internationally.
We anticipate more developments on regulatory collaboration over the coming months and years. We see that some jurisdictions, including some of the biggest, are thinking about how to achieve collaboration, in a variety of ways. For example, the EU has set up high level groups for the implementation of their digital legislation, bringing in cross-sectoral views. Some countries, like France, are taking legislative approaches, in contrast to the UK’s DRCF which was set up by regulators themselves.
Looking forward, could you provide a sense of the DRCF’s priorities and how they may evolve in the coming months and years?
This is an important moment for innovation and growth as well as regulation. The DRCF AI and Digital Hub is a key priority for us, with the cross-regulator support it can provide for innovators.
The DRCF has a three-year vision, so I anticipate us developing within the parameters of that vision. At the same time, we are seeking to be agile. Technology is moving quickly, and the regulatory conversation is moving apace both at the national and international levels. We aim to be forward-looking in this context.
More broadly, beyond our AI work, we have a focus on emerging technologies. We have a team across our member regulators devoted to researching the landscape of emerging technologies and the regulatory considerations they may raise. Last year, that team produced public papers on Web3 and quantum technologies. This year they are working on digital ID and synthetic media. These are forward-looking and innovative ways for regulators to consider the implications of incoming waves of technology.
And we are always looking at how best to upskill our regulators. As technology moves fast, regulators need the skills and capacity to keep up. We are also working on how regulators can best use technical tools to support their governance work - and on how we can use tech to our advantage, doing so efficiently by working across regulators. Watch this space!
What do you anticipate the future policy challenges of the DRCF to be?
It will always be the case that technology moves faster than governance. This does not mean we need to speed up governance - quick law is not necessarily good law. But it does mean that law needs to be forward-looking. We have seen a move towards outcomes-based and risk-based regulation, which is essential as it’s flexible enough to accommodate tech development.
AI is challenging. It is a general-purpose technology with lots of potential ramifications, some of which are not yet known. And there isn’t yet complete clarity of objectives for its governance. In order to set the framework to govern AI effectively, we are seeing, and should continue to see, a multi-stakeholder conversation that includes not only Government, Parliament and regulators, but also industry and the third sector. Without multi-stakeholder discussion, it will be more difficult to set a strong framework.
As we have discussed, AI has tremendous potential if the framework for it both enables innovation and protects consumers. Again, this is a key aim of the DRCF AI and Digital Hub, so we encourage applications from innovators far and wide.