The Milltown Tech Policy Conversation #2: Responsible Innovation (and how to work with product teams) with Zvika Krieger
Welcome to the second edition of the Milltown Tech Policy Conversation.
With the help of some of today’s most influential and interesting tech policy thinkers, each month we dive into a different theme to try and answer some of the big questions facing tech policy and responsible innovation professionals.
This issue focuses on ‘responsible innovation’ - a phrase that is by no means new (it covers privacy, safety, content moderation and more), but one that the current AI arms race has brought into sharp focus. Governments and international institutions are now scrambling to provide guidelines for responsible product development and use. How, in this context, can companies build products that will meet regulatory standards in years to come, while maintaining the trust of customers, employees, policymakers and more?
Like this newsletter? Subscribe or forward it on! Or if there’s a topic or question you’d like us to cover, hit reply.
In this issue:
The Conversation: Zvika Krieger, responsible product design expert, former Director of Responsible Innovation at Meta.
The Milltown POV: preliminary findings from our Responsible AI research.
A Reminder: we’ll be hosting roundtables with senior policy professionals to launch our Responsible AI research, in partnership with Clifford Chance. If you’d like to join us, please drop us a line.
The Conversation: Zvika Krieger on responsible innovation
Zvika Krieger was the first Director of Responsible Innovation at Meta/Facebook, where he worked with product teams to surface and address potential product harms and ensure products were built responsibly. He currently works with investors and industry-leading companies to develop responsible innovation strategies that anticipate and mitigate potential harms in their products.
In this interview, Zvika gave us his views on the development of responsible innovation strategies within companies, and how policy professionals can best work with their product teams to drive this forward.
Highlights:
Policy teams can support innovation by reframing risk mitigation as a product challenge. Responsible innovation often means navigating trade-offs between different potential harms, and responsible innovation experts are well positioned to recast those trade-offs as problems that can be solved. Encryption is a good example: to reconcile the tension between privacy and safety, go to the engineering or design team and ask, "Okay, how might we find completely new ways to keep people safe? Different to what we’ve done in the past?" Engineers and designers love those kinds of challenges, and this puts policy teams at the heart of driving innovation.
Safety and privacy are well-trodden issues, but companies are also struggling with newer, emerging areas. These include mental health and wellbeing - particularly youth safety, since many companies have been designing products for adults - and equity and inclusion, where companies are asking "How do we make sure our products don’t discriminate against people or cause unique harms to certain populations?"
‘Responsible’ is culturally nuanced. Companies need to build a deep understanding of the local context to operate globally. This is particularly true in content moderation - even just the concept of freedom of speech is very Western and there's a very American spin on it. While freedom of speech has defined the first generation of social media, we’re seeing more companies looking at safety, civility, or even joyful content as competitive differentiators.
Responsible practices get more complicated when there’s a vacuum of regulation. The fact that Meta’s Oversight Board had to be created is an illustration of a pretty significant failure by governments to play their rightful role in society by governing these important public spaces. Companies welcome regulation in many ways. They would rather sacrifice some of their autonomy if it means that they don't get blamed for the outcomes.
Interventions need to come early and with clear metrics. Responsible innovation efforts have to be integrated into the product development process early: the more fully baked a product is, the higher the bar will be to stop the production line. Product teams also need clear metrics to judge harms by - that way, ‘responsible innovation’ feels less squishy and subjective.
READ FULL INTERVIEW HERE
The Milltown POV
As trailed in our first newsletter, we have recently conducted research into public attitudes towards responsible AI, in collaboration with Clifford Chance.
Through focus groups with policy-informed audiences in the UK, US and Germany, we explored how companies developing or using AI products can meet the evolving expectations of potential customers and policymakers, while minimising the risk of public, policy or regulatory backlash.
The full report will launch after summer and include analysis of what the findings mean in practice for companies. In the meantime, here are a few preliminary findings as a teaser:
Participants expected both regulators and companies to step up to ensure AI technologies are developed and deployed responsibly. They expressed a desire for appropriate regulation and established standards for AI, and for government and industry to work together to develop them.
Participants wanted companies to approach AI explainability, transparency and consent in ways that help those affected by AI understand and make informed choices about how AI impacts them. This was especially true for AI that might have significant impacts on people, like automated loan decision-making.
Awareness of AI bias and inaccuracy is high among the general public, and participants demonstrated a better understanding of these issues than of more abstract concerns such as existential risk. Participants also perceived AI through the lens of big tech, associating the risks and opportunities of AI with those of the technology industry more generally.
Participants’ perspectives suggested that for AI governance mechanisms to be trusted they should exhibit independence, have multiple checks and balances (such as several internal company stakeholders with accountability), and embed responsibility across a company’s culture.
If you’d like to receive the full report at the end of summer, or to chat to us about the research and our findings, please reach out.