Tech Policy Conversation #8: Kate Jones, DRCF
Kate Jones discusses a joined-up approach to regulation in the face of AI.
Navigating the UK’s digital regulation landscape, particularly with accelerating advancements in AI, can feel like a gargantuan task for leaders. To shed some light, we’ve spoken to Kate Jones, CEO of the Digital Regulation Cooperation Forum (DRCF), about their work to get member regulators working together on some of the thorniest issues.
In this issue:
The Conversation: Kate Jones, CEO at the DRCF
The Milltown POV: The return of the AI Fringe
Like this newsletter? Subscribe or forward it on! Or if there’s a topic or question you’d like us to cover, hit reply.
THE CONVERSATION – KATE JONES ON THE WORK OF THE DRCF AND AI REGULATION
Kate Jones is the Chief Executive Officer of the Digital Regulation Cooperation Forum (DRCF). Kate’s team works to increase and deepen coordination between regulators of online services and technology, for the benefit of both businesses and their customers. Before joining the DRCF, Kate was an expert consultant and researcher on the governance of emerging technologies, with particular regard to human rights law, public international law and diplomacy.
Below are the key themes from our conversation with Kate. Click here to read the full interview.
It’s an exciting time for AI development, but also a crucial time for AI governance - How we shape the AI governance landscape now will determine how AI affects us all, and ensuring that AI benefits everyone is at the forefront of regulators’ minds. The DRCF is leading the way for a core group of UK regulators, acting as the “connective tissue” that enables inter-regulator collaboration. If you have AI questions that span multiple regulators - for example, questions about ‘fairness’ or ‘transparency’ - the DRCF should be your first port of call.
Navigating AI regulation is tricky, but the DRCF is here to help - The UK’s digital regulation landscape can appear daunting. Because AI isn’t a sector in itself, multiple regulators are already considering how it can be addressed through existing regulation in their individual sectors. But given AI’s widespread impact, individual regulators cannot fully address the opportunities and risks presented by AI technologies in isolation. The DRCF wants to make it easier for innovators to navigate the regulatory landscape without having to work out exactly which regulator is responsible for what. For example, the recently launched AI and Digital Hub is a multi-regulator advice service through which innovators can bring questions to the DRCF and receive free, informal guidance in return.
The UK is leading in regulatory coordination - As former PM Rishi Sunak said, “AI doesn’t respect borders”. The establishment of the DRCF to support cooperation and coordination between regulators in 2020 was fairly revolutionary and now others around the world are looking to the UK as a leader in regulatory coordination. We’re now starting to see other countries setting up similar models. Last year, the DRCF launched the International Network for Digital Regulation Cooperation (INDRC), bringing together five members to share best practice and knowledge in the regulatory collaboration space. But keep your eyes peeled, Kate Jones promises there are more exciting developments to come…
Quick law is not necessarily good law - AI is particularly challenging to regulate: it’s a general-purpose technology with wide-ranging impacts, and it is often unclear how this evolving technology should be regulated. And while AI governance needs to be a priority, this doesn’t mean we need to speed up law-making. To govern AI effectively, we need a multi-stakeholder conversation involving government, parliament, regulators, industry and the third sector. Without this collaborative approach, our regulations won’t be as good as they could be. This is good news for organisations wanting to engage!
THE MILLTOWN POV - THE RETURN OF THE AI FRINGE
Six months on from the first AI Safety Summit hosted by the UK Government at Bletchley Park, world leaders and tech industry bosses gathered at the AI Seoul Summit in May to take stock of progress made on the commitments they signed up to in the Bletchley Declaration.
As the summit ended, Milltown was pleased to organise the return of the AI Fringe to the British Library Knowledge Centre, alongside partners, to reflect on the discussions in Seoul and look ahead to where the conversation on AI needs to go next.
A full house of participants from across the AI ecosystem came together to hear insights from speakers involved in the Seoul Summit and expand those conversations to include wider perspectives from industry, civil society and academia on the AI safety and governance agenda.
In the first panel, we heard from people who were on the ground and involved in the work of the Seoul Summit, covering the shape of AI regulation and policymaking, the role of the summits, and broader questions around open-source technology and AI’s relationship to climate change.
The second panel centred on how the AI ecosystem can practically work to harness benefits and tackle challenges, with the importance of interdisciplinary work, public participation and regulatory frameworks all raised.
We also welcomed representatives from the French government, including Henri Verdier, the French Ambassador for Digital Affairs, who wrapped up the event by introducing the priorities of the AI Action Summit taking place in France next February.
The full recordings of the sessions are available on YouTube and are well worth a watch. You can also sign up to the AI Fringe newsletter to stay up to date with future events.