Tech Policy Conversation #9: Navigating volatility in increasingly complex markets
Diving into research which can help policy leads at tech companies respond to concerns around deepfakes.
Disinformation, particularly around elections, has quickly become one of the thorniest questions facing tech companies today. Policymakers are especially preoccupied with the impact that deepfakes could have on democratic processes - unsurprisingly, given their profession. In this month’s edition we delve into research which can help policy leads at tech companies respond to these concerns.
In this issue:
The Milltown View: Deepfake Reality
Key Developments: August 2024
The Milltown View: Deepfake Reality
Election-related disinformation is now so pervasive that the world barely blinked when former President Trump falsely accused Vice President Harris of using AI-generated images to inflate crowd numbers at a rally in Detroit.
Many of us have quickly come to normalise the idea that misleading AI-generated content is now a feature of our newsfeeds, particularly when it comes to politics and elections.
However, just because we now expect false and misleading content in our online lives doesn’t mean we’re happy to accept it: quite the opposite, in fact. This brings the role of tech companies, and the actions they take, into sharp relief.
Recent focus group research conducted by Milltown Partners with consumers and policy elites from the UK, EU and US found that the overwhelming majority of people across these groups believe companies throughout the tech ecosystem should take their share of responsibility for tackling deepfakes. What’s more, they have a very low tolerance for public buck-passing to other companies, even if technical responsibility can be proven to lie elsewhere in the chain.
The top line message from consumers and policy elites to tech companies is emphatic: protecting our democratic institutions is a collective endeavour, so show us how you’re doing your bit.
So what did our research reveal about the right strategy for companies seeking to reassure policymakers and voters that they’re combating this threat? The answers very much depended on the type of business.
Closed source developers have the most straightforward path: publicise all the guardrails you’re putting in place to prevent the creation of malign content - with examples - and don’t be afraid to get technical. Consumers and policymakers prize evidence in abundance, even if they don’t understand all of it.

Content platforms and AI deployers, meanwhile, need to zero in on labelling and watermarking to convince sceptical audiences of their commitment to information integrity.

Open source developers probably have the hardest mountain to climb: both consumers and policy elites were hard-pressed to offer up positive open source use cases, and were sceptical about whether open source developers apply their own rules when it comes to safety. These developers might therefore focus first on raising awareness of the benefits of open source using concrete examples, before then addressing how they are integrating guardrails into their products.
2024 may have brought the first big wave of elections in which AI featured as a mainstream issue, but the threat of AI-generated misinformation will continue to evolve as the technology and its users become more sophisticated. For businesses involved in tech and AI to be (and stay) on the right side of policymakers and voters on an issue as fundamental as the health of our democracies, they need to be crafting their engagement and messaging strategies now, backed by the insights and intelligence that will lead to the right choices.
Milltown Partners has developed seven research reports in its “Navigating Volatility” series. They cover some of the most salient and fast-moving issues facing businesses today, providing insights and advice drawn from focus groups of policy elites and consumers across key regions. You can find links to all seven papers below, and get in touch with Flo Forster for advice on how Milltown Partners’ research team can help your business navigate an increasingly volatile world.
1. How should companies address election deepfakes?
2. How can tech firms overcome scepticism among policymakers and demonstrate their value to society?
3. How should companies communicate AI investments that could lead to layoffs?
4. How should investors make the case for investing in defence tech?
5. How should companies communicate when geopolitical conflicts disrupt supply chains?
6. How should companies communicate about issues of AI bias?
7. How should US sports investors communicate around acquisitions of European football clubs?
Key Developments: August 2024
UK: Keir Starmer acknowledges the importance of disinformation laws in the wake of right-wing violence
After the spread of falsehoods contributed to this month’s far-right riots, the Prime Minister reaffirmed that social media is “not a law-free zone”. Demands from civil society to review online misinformation laws by re-opening the Online Safety Act have been growing, although criminal action has taken priority, at least in the short term.
It is less than a year since the Online Safety Act finally became law after a long and arduous journey through Parliament. Some advocacy groups and parliamentarians were vocal at the time about the Act’s limited provisions for tackling online mis- and disinformation. The shock of this month’s riots, fuelled by the rapid spread of falsehoods surrounding the horrifying attack in Southport, has brought the Act back under scrutiny. This is unlikely to be the last challenge to the effectiveness of its design and implementation.
UK: Government plans fresh investment in AI and supercomputing capacity
The UK Government has reiterated its commitment to artificial intelligence and supercomputing after its decision earlier this month to withdraw funding for projects including an £800 million supercomputer at the University of Edinburgh.
The Secretary of State for Science, Innovation and Technology, Peter Kyle, told the FT that the upcoming AI Action Plan would set out a ‘bold approach’ for AI, including how to deliver ‘future compute needs’. The plan is due to be released in September.
Peter Kyle looks like a man in a hurry to have his expanded department drive tech policy, both to improve public services and to kickstart the UK economy. With a handful of tech-focussed bills announced for this session of Parliament and an AI Opportunities Action Plan led by Matt Clifford reporting as soon as next month, the key question will be how the Government converts this ambition into delivery when the Treasury is saying there’s no money in the coffers.
EU: Next steps in the AI Act
The European Commission’s AI Office has launched a consultation on its forthcoming General-Purpose AI Code of Practice, which aims “to ensure proper application of the AI Act's rules”.
The AI Act has come under fairly fierce criticism from industry and even some member states, who are concerned that it will generate huge compliance costs and effectively stop companies developing advanced AI models in the EU. The new Code of Practice presents an opportunity for the EC to reduce the compliance burden on developers, but the Commission is under pressure from civil society groups who fear that the Code will let developers write their own rules.
EU: EC seeks to improve DSA enforcement
The European Commission has published a consultation on new child protection guidelines under the Digital Services Act (DSA).
Unlike many tech policy areas, child safety is an issue that voters care about. But parents’ support for various child protection schemes isn’t guaranteed - one study found, for example, that while parents like age assurance in principle, they think it presents a “threat to the sense of control parents have over their children’s online media use”. There is no panacea here - let’s hope the Commission can avoid overzealously regulating children’s internet access.