The Milltown Tech Policy Conversation #4: Dorothy Chou, Head of Policy and Public Engagement at DeepMind, on Responsible AI in Practice.
Welcome to the fourth edition of the Milltown Tech Policy Conversation.
With the help of some of today’s most influential and interesting tech policy thinkers, each month we dive into a different theme to try and answer some of the big questions facing tech policy and responsible innovation professionals.
This issue focuses on responsible AI - effective AI policy, responsible AI deployment and the importance of representation, equity and inclusivity within the industry. Companies and governments alike are trying to position themselves as leaders, and principles, policies and summits abound. But as the need for focus grows, how should companies (and governments) prioritise issues, and how should they act on their priorities?
Like this newsletter? Forward it on! Or if there’s a topic or question you’d like us to cover, hit reply.
In this issue:
The Conversation: Dorothy Chou, head of policy and public engagement at Google DeepMind; angel investor with VC firm Atomico.
The Milltown POV: preliminary findings from our Responsible AI research
A Reminder: we’ll be hosting a social for senior policy professionals in early December. If you’d like to join us, please drop us a line!
The Conversation: Dorothy Chou on responsible AI
It’s ok for governments to take time to get regulation right. The difference in pace between regulation and technology might not be a bug but a feature of democratic deliberation. People often get upset that governments can't keep up, or they despair that laws will always lag behind. But I think it's okay for governments to allow time for people to study emerging technologies and understand what the ins and outs are. There are ways to push companies to race to the top and be normatively responsible (see cybersecurity and responsible / coordinated disclosure as an example) and then put laws in place once the dynamics are clear. That’s how we can ensure laws are adaptable enough to keep up with the technology - future-proof rather than quickly obsolete.
Companies and investors can help to set norms. In addition to formal regulations, I also think about policy dynamics across market forces and sector norms:
VCs are really at the top of the funnel. They fund and incentivise all kinds of different behaviour, and the flywheel they create shapes how founders seeking funding behave.
Any company releasing technology is going to have a norm-setting function. Especially for the bigger companies that are leading in the space, it's really important to consider when you deploy your products, and what tests and requirements you are putting in place to ensure they’re safe enough to launch. I also think corporate boards have a really big role to play. They're there for governance, both from a market perspective and from an ethics perspective.
Policy is more than government relations. It also requires community engagement - with civil society and industry peers. Facilitating, understanding and listening are important normative practices, and the more diverse voices included, the better for everyone. Normative policy development requires collaboration across companies, not just those with the most resources. By engaging early-stage AI firms on these issues, we can build consensus and shape policy through a democratic process rather than just top-down mandates.
The UK’s AI Safety Summit should consider how to implement norms - and civil society will play an important role. It's challenging to make significant progress at any multinational government negotiations, [so] beyond statements, I think it’s important to discuss implementing norms. The White House AI Commitments are great, but who will do the actual work of putting them into practice? Companies clearly have a role, but civil society is also key - they're the ones who red-team these models and help us uphold these standards.
Inclusion should be built into AI products and governance from the start. Beyond international agreements, I would love to see leaders agree on how they are going to facilitate civil society and bring them into the room. I know you can't have everybody at the table, but having some diverse voices will make a huge difference. When you look at the people making decisions about AI right now, they all look the same and that is something I would love people to pay attention to and consciously think about changing.
Read the full interview here.
The Milltown POV
Responsible AI isn’t just the latest buzzword. It’s a concept that resonates across industry, government, academia and civil society to reflect the myriad challenges of realising AI’s benefits across economies and societies in a way that retains public trust and social licence to operate. It’s also been the focus of conversations we’ve had with policy, comms and product teams from a range of companies, drawing on our latest research: a series of focus groups across the UK, US and Germany looking at public attitudes to AI. We conducted the research in partnership with Clifford Chance, and you can read the report here.
From the research, we learned:
People are aware that AI could benefit society but also create risks. Participants recognised that AI can have a positive impact on areas such as medicine, science and productivity, but they were concerned that it may have a negative impact on jobs and equality.
Public perceptions of AI are heavily influenced by perceptions of high-profile technology companies more generally. Participants' views of AI were influenced by associations they made with topical issues affecting a number of leading technology companies, such as safeguards for content moderation and online safety.
Public attitudes to AI do not split into opposing camps. On issues such as the use of data, people's views often divide into opposing camps, such as privacy versus national security. However, participants' views on AI were not so clearly divided, so companies have an opportunity to educate stakeholders about AI and to shape public debate.
Participants expect regulators and companies to collaborate to ensure that AI technologies are developed and used responsibly. Participants felt that companies have an important role to play in realising AI’s benefits, and in doing so safely. But they expressed little trust that companies by themselves will ensure AI is developed and used responsibly. Participants wanted companies and governments to work together to develop effective guardrails, regulation and standards swiftly.
On 1 and 2 November 2023, the UK Government is hosting the AI Safety Summit at Bletchley Park, which will understandably focus on the most potentially dangerous capabilities of “frontier” models. As the Government has acknowledged, AI raises many broader issues of concern to people and society, which is why Milltown Partners is organising the AI Fringe with a range of partners including Faculty, Google DeepMind, the British Academy, the Ada Lovelace Institute and more. The AI Fringe is separate from but complementary to the AI Safety Summit.