Zvika Krieger | full interview
If you read our responsible innovation newsletter, read on for the full interview with Zvika Krieger.
In this interview, we spoke to Zvika Krieger - the first Director of Responsible Innovation at Meta/Facebook - about the development of responsible innovation strategies within companies, and how policy and responsible innovation professionals can best work with their product teams to drive this forward.
Our conversation covered how to reframe tech harms and risks as product opportunities, the role of local and global context, emerging regulation, and how policy and responsible innovation teams can work with product teams in practice.
MILLTOWN PARTNERS: Responsible Innovation is a concept that seems to be everywhere at the moment. It’s a notoriously fraught term, but in our view an important one. What’s your definition of “responsible innovation”?
ZVIKA KRIEGER: Responsible innovation is about having a systematic, rigorous and thoughtful process for anticipating the ways in which products might harm individuals, users, society, and the world, and then mitigating those potential harms.
It is also thinking about the intersection of different potential harms. How do certain harms interact with other harms? What potential harms might fall between the seams of a traditional org chart? And what trade-offs need to be made between two potential harms, rather than assuming there is always a right and wrong answer?
We've moved beyond the era of moving fast and breaking things. The evolved expectations of society mean companies need to be doing this anticipatory work before products are released.
MILLTOWN PARTNERS: You mention trade-offs that companies must make. By definition these won’t have easy, objective, right or wrong answers. How can companies navigate them?
ZVIKA KRIEGER: The best approach is to find ways to reframe trade-offs as problems that can be solved. The ideal solution is the win-win. Responsible innovation experts can provide value by breaking down false dichotomies and debunking false trade-offs.
Encryption is a good example. If you were optimising for protecting people's privacy, then encryption is the gold standard. But this might come at the cost of safety - you can't track what's happening in encrypted messages. Yet this seeming trade-off can also be presented as an opportunity to an engineering or design team by saying, "Okay, we can't prevent harms in the way that we've done it in the past. How might we do it in the future? How might we find completely new ways to keep people safe?"
My experience working in companies that are largely engineering-driven or design-driven is that engineers and designers love those kinds of challenges.
MILLTOWN PARTNERS: We work with lots of different companies that are thinking about how to develop new digital and online products responsibly. Where are you seeing best practices and where are the gaps?
ZVIKA KRIEGER: Safety and privacy are both areas where companies have created good processes and infrastructure, and are becoming more sophisticated. There is still a lot of room for improvement, but there are some clear best practices. That progress has largely been driven by regulation and government intervention.
But there are new and emerging areas where many companies are perhaps struggling, like mental health and wellbeing. Challenges have emerged as companies that had been focusing on adults expand their products to children – and only now start to think about the unique ways their products might bring harm to younger people.
Another emerging area is equity and inclusion, and thinking about, "How do we make sure that products don't discriminate against people or cause unique harms to certain populations?" And environmental considerations are becoming increasingly important - whether as part of the ESG movement or as climate change becomes more of a front-and-centre issue on the global agenda.
MILLTOWN PARTNERS: For the companies grappling with these issues, past examples are good places to see what works and what doesn’t. Where have you seen responsible approaches to technology development work, and where have you seen them fail?
ZVIKA KRIEGER: Content moderation can demonstrate both. Companies are devoting a lot of attention to content moderation because it's one of the clearest areas where social media companies might be doing harm. They have hired thousands of content moderators to tackle it.
The core issue in content moderation is balancing freedom of speech and autonomy against potential harms. Different companies will draw the line in different places. What I've learned working for different social media companies is that the mission of the company, and particularly the messaging from leadership, will often drive where that line is drawn.
For example, there are some social media companies that prioritise freedom of speech and autonomy, and it's very much core to their identity. But then there are other companies that optimise for safety, particularly companies that are oriented around children (e.g. gaming companies or gaming platforms). They often say "We're not worried about children's freedom of speech. We're worried about their safety."
For companies that started as platforms oriented towards children and now have a growing audience they want to retain, this can become a competitive differentiator. They’re creating a space that is safe and civil, even if that comes at the cost of freedom of speech and expression. They’re betting that is a tradeoff that many consumers are willing to make.
There are also companies where joy and positivity are the core ethos of the platform and that trickles down to content moderation. For them, it is not about feeling an ethical responsibility to protect people from harm, it’s that they think their competitive differentiator is creating a joyful experience, and don't actually care about freedom of speech. They’re willing to take a much heavier hand in moderating content, but for different reasons.
MILLTOWN PARTNERS: How does that play out globally? What different approaches to content moderation have you seen in different countries?
ZVIKA KRIEGER: Localisation is also a really big challenge: making sure that rules are culturally and contextually appropriate. As social media apps emerge from places outside the US, we're seeing different approaches to content moderation, because different countries and cultures have different attitudes towards freedom of speech and autonomy. Even the concept of freedom of speech is very Western, and arguably there's a very specific American spin on it.
Something that might be considered nudity in one culture might not be considered nudity in another culture. So companies need to build deep understanding of local contexts in order to operate globally.
You need to have that culturally specific expertise and knowledge - this is something we're seeing companies increasingly invest in as well.
MILLTOWN PARTNERS: It’s almost like a marketplace of ideas in terms of how companies approach societal issues like content moderation. These are nuanced areas and companies are ultimately going to have to find their own stance on some of these issues, and then consumers will decide accordingly whether they want to be on that platform. But will this get tricky when regulation tries to say in black and white “this is okay, this is not okay”?
ZVIKA KRIEGER: Companies welcome regulation in many ways - lots of the major tech companies are actually calling for it.
Most tech companies don't have strong opinions about what should and shouldn't be moderated. They need to do it for the safety of their users, but I think they would rather point to governments and say, "this is actually something that is legally regulated." They would rather sacrifice some of their autonomy in moderation decisions if it means they don't get blamed for the outcomes of those decisions.
The creation of the Meta Oversight Board is a good example of a company sacrificing its autonomy in moderation decisions so that it doesn't have to take the blame. The fact that the Oversight Board had to be created illustrates a pretty significant failure by governments to play their rightful role in governing these important public spaces. It had to be created because of the vacuum left by governments' unwillingness or inability to put good governance structures in place.
MILLTOWN PARTNERS: We’re seeing lots of moves in the UK, EU, US and elsewhere to bring in regulation around AI, online safety and more. Is global regulation around responsible innovation possible?
ZVIKA KRIEGER: Having a patchwork of regulation is going to be really challenging for companies. You see companies creating whole teams devoted to tracking all the regulations that are coming down the pipeline - but I don't see any hope for globally unified regulations anytime soon.
Some companies might be able to create different products for different jurisdictions. But given how onerous that is, and how cross-border our society is these days, it's becoming increasingly difficult. The best case study that we have around this is GDPR. GDPR has essentially become the baseline global standard. I think we'll see that with other emerging regulation too: the strictest policies will become the global default.
It's interesting: when a jurisdiction that represents significant market share for a company demands a very strict policy, the company pushes back, saying it isn't technically possible or would undermine the whole product. But when the government stands firm, the company often turns out to be able to implement it after all. That provides fuel for other jurisdictions. We already saw that with Italy banning ChatGPT until OpenAI made some changes.
MILLTOWN PARTNERS: Let's turn to how responsible innovation works in practice for companies. In our experience, product teams often work on something they're excited about, then policy or comms teams come in and say, "here are all the reasons why your product is going to bring reputational or policy risk." What should companies do to make the process easier?
ZVIKA KRIEGER: Responsible innovation efforts have to be integrated into the product development process. It can't be a separate siloed endeavour. If responsible practices are separate, then they’ll just be seen as an annoyance by product teams - something that slows down and frustrates the product development process. And they will largely become a box-checking exercise.
The most successful responsible innovation efforts are therefore ones that are integrated, that meet product folks in their workflows, and that dovetail with existing processes rather than creating whole new bureaucracies.
To be impactful, responsible innovation teams need to:
Engage early: In most companies, a lot of the work that policy, trust and safety, legal, and comms teams do tends to come at the end of the product development process, when a product is already largely baked. That's because those teams are often under-resourced: if they have to choose between reviewing a product that's launching in six weeks and one that's launching in a year or two, obviously they prioritise the thing that's heading out the door. But the more fully baked a product is, the more reluctant product teams will be to make changes, and the higher the bar will be to stop the production line. The earlier you can intervene, the more likely you are to be able to make fundamental changes to the product.
Intervene with concrete suggestions: Teams that tend to have the most constructive relationships with product teams are the ones that say, "Hey, there's a potential for harm here, and here's how you might address it." The most effective responsible innovation teams are the ones with expertise in product development, who can suggest how a harm might be addressed rather than just dumping a bunch of potential harms without any solutions.
Provide a sense of prioritisation: If everything is urgent, then nothing is. Teams need to come up with clear rubrics, such as looking at the scope or scale of a potential harm, to determine what needs to be addressed before launch, what could be addressed after launch, and what isn't essential.
Be metrics-driven: When you're trying to convince a product team to do something, concrete product metrics are vital. Product teams are used to working with concrete metrics: How many users use this product? How often do they use it? How much revenue comes from it? Potential harms, by comparison, can feel squishy and subjective. Most tech companies are very quantitative and numbers-driven, so you need to fight fire with fire. I think policy teams are getting better at quantifying that risk rather than just saying a product will cause reputational damage.
MILLTOWN PARTNERS: On metrics, is that less about trying to box a qualitative problem into a quantitative framing, and more about speaking the language of engineering and product teams - that is, framing problems and solutions in a way they can plug into their existing workflows?
ZVIKA KRIEGER: Product teams get very frustrated when they're forced to adjust their product to hit some sort of subjective bar. Product teams, and particularly engineers and product managers, are wired to work towards goals.
So rather than saying "you need to change your product in this way", you should say "you need to hit this metric." That's the way to get product teams excited, rather than having them constantly look over their shoulder and check with you: "Well, how about this? Does this satisfy you?" My experience is that responsible innovation initiatives that focus on outcomes tend to have better collaboration and better relationships with product teams.
But it's important to note that not all potential harms can be quantified. You can't measure them in the way you can measure daily active users, and responsible innovation efforts need to be wary of deceptive metrics: superficial numbers wrapped around something that doesn't actually measure outcomes but gives a veneer of quantitative rigour. Sometimes responsible innovation teams need to be able to go back to a product team and say they can't put a number on it, but it's still important, and it's still real. Being able to distinguish between those cases is important as well.