Deepfakes and Synthetic Media in the Financial System: Assessing Threat Scenarios

July 8, 2020 | Carnegie Endowment for International Peace

SUMMARY

Rapid advances in artificial intelligence (AI) are enabling novel forms of deception. AI algorithms can produce realistic “deepfake” videos, as well as authentic-looking fake photos and writing. Collectively called synthetic media, these tools have triggered widespread concern about their potential for spreading political disinformation. Yet the same technology can also facilitate financial harm. Recent months have seen the first publicly documented cases of deepfakes used for fraud and extortion.

Today the financial threat from synthetic media is low, so the key policy question is how much this threat will grow over time. Leading industry experts diverge widely in their assessments. Some believe firms and regulators should act now to head off serious risks. Others believe the threat will likely remain minor and the financial system should focus on more pressing technology challenges. A lack of data has stymied the discussion.

Jon Bateman

Jon Bateman is a fellow in the Cyber Policy Initiative of the Technology and International Affairs Program at the Carnegie Endowment for International Peace.

In the absence of hard data, a close analysis of potential scenarios can help gauge the problem more accurately. In this paper, ten scenarios illustrate how criminals and other bad actors could abuse synthetic media technology to inflict financial harm on a broad swath of targets. Grounded in the current state of the technology and the realities of financial crime, the scenarios explore whether and how synthetic media could alter the threat landscape.

The analysis yields multiple lessons for policymakers in the financial sector and beyond:

  • Deepfakes and synthetic media do not pose a serious threat to the stability of the global financial system or national markets in mature, healthy economies. But they could cause varying degrees of harm to individually targeted people, businesses, and government regulators; emerging markets; and developed countries experiencing financial crises.
  • Technically savvy bad actors who favor tailored schemes are more likely to incorporate synthetic media, but many others will continue relying on older, simpler techniques. Synthetic media are highly realistic, scalable, and customizable. Yet they are also less proven and sometimes more complicated to produce than “cheapfakes”—traditional forms of deceptive media that do not use AI. A bad actor’s choice between deepfakes and cheapfakes will depend on the actor’s strategy and capabilities.
  • Financial threats from synthetic media appear more diverse than political threats but may in some ways be easier to combat. Some financial harm scenarios resemble classic political disinformation scenarios that seek to sway mass opinion. Other financial scenarios involve the direct targeting of private entities through point-to-point communication. On the other hand, more legal tools exist to fight financial crime, and societies are more likely to unite behind common standards of truth in the financial sphere than in the political arena.
  • These ten scenarios fall into two categories, each presenting different kinds of challenges and opportunities for policymakers. Six scenarios involve “broadcast” synthetic media, designed for mass consumption and disseminated widely via public channels. Four scenarios involve “narrowcast” synthetic media, tailored for small, specific audiences and delivered directly via private channels. The financial sector should help lead a much-needed public conversation about narrowcast threats.
  • Organizations facing public relations crises are especially vulnerable to synthetic media. Broadcast synthetic media will tend to be most powerful when they amplify pre-existing negative narratives or events. As part of planning for and managing crises of all kinds, organizations should consider the possibility that synthetic media attacks will emerge to amplify the crisis. Steps taken in advance could help mitigate the damage.
  • Three malicious techniques appear in multiple scenarios and should be prioritized in any response. Deepfake voice phishing (vishing) uses cloned voices to impersonate trusted individuals over the phone, exploiting victims’ professional or personal relationships. Fabricated private remarks are deepfake clips that falsely depict public figures making damaging comments behind the scenes, challenging victims to refute them. Synthetic social botnets are fake social media accounts made from AI-generated photographs and text, improving upon the stealth and effectiveness of today’s social bots (for a toy illustration, see the sketch after this list).
  • Effective policy responses will require a range of actions and actors. As in the political arena, no single stakeholder or solution can fully address synthetic media in the financial system. Successful efforts will involve changes in technology, organizational practices, and society at large. The financial sector should consider its role in the broader policymaking process around synthetic media.
  • Financial institutions and regulators should divide their policy efforts into three complementary tracks: internal action, such as organizational controls and training; industry-wide action, such as information sharing; and multistakeholder action with key outside entities, including tech platforms, AI researchers, journalists, civil society, and government bodies. Many notional responses could draw on existing measures for countering financial harm and disinformation.
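
To make the synthetic social botnet technique more concrete, the toy sketch below shows how an analyst might combine weak signals to flag candidate mass-produced accounts. It is a minimal illustration under stated assumptions, not any platform’s actual detection method: the Account fields, the flag_candidates helper, and all thresholds are hypothetical, and real detection systems rely on far richer signals such as media forensics, network analysis, and behavioral telemetry.

```python
# Illustrative sketch only: a toy heuristic screen for candidate synthetic
# social media accounts. All field names and thresholds are hypothetical.

from dataclasses import dataclass
from datetime import datetime
from difflib import SequenceMatcher
from typing import List


@dataclass
class Account:
    handle: str
    bio: str
    created: datetime
    photo_hash: str  # e.g., a perceptual hash of the profile image


def bio_similarity(a: str, b: str) -> float:
    """Rough textual similarity between two bios (0.0 to 1.0)."""
    return SequenceMatcher(None, a.lower(), b.lower()).ratio()


def flag_candidates(accounts: List[Account],
                    window_hours: float = 24.0,
                    bio_threshold: float = 0.8) -> List[str]:
    """Flag pairs of accounts that share a profile-photo hash, were created
    within a tight time window, or have near-duplicate bios -- weak signals
    that, in combination, often accompany mass-produced bot accounts."""
    flagged = set()
    for i, a in enumerate(accounts):
        for b in accounts[i + 1:]:
            same_photo = a.photo_hash == b.photo_hash
            close_in_time = (
                abs((a.created - b.created).total_seconds())
                < window_hours * 3600
            )
            similar_bio = bio_similarity(a.bio, b.bio) >= bio_threshold
            # Require at least two coinciding signals before flagging.
            if sum((same_photo, close_in_time, similar_bio)) >= 2:
                flagged.update({a.handle, b.handle})
    return sorted(flagged)


if __name__ == "__main__":
    now = datetime(2020, 7, 8, 12, 0)
    demo = [
        Account("@acct_001", "Investor. Dad. Coffee lover.", now, "a1b2"),
        Account("@acct_002", "Investor. Dad. Coffee fan.", now, "c3d4"),
        Account("@human_03", "Photographer in Lisbon since 2011.",
                datetime(2013, 5, 2, 9, 30), "e5f6"),
    ]
    print(flag_candidates(demo))  # -> ['@acct_001', '@acct_002']
```

The design point of the sketch is that no single signal is decisive; it is the coincidence of several cheap-to-fake attributes that marks accounts created in bulk, which is also why more sophisticated synthetic botnets, with AI-generated unique faces and varied text, defeat exactly these simple heuristics.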

INTRODUCTION

The advent of deepfakes and other synthetic, AI-generated media has triggered widespread concern about their use in spreading disinformation (see box 1). Most attention so far has focused on how deepfakes could threaten political discourse. Carnegie, for example, has extensively researched how to protect elections against malicious deepfakes.1 In contrast, there has been relatively little analysis of how deepfakes might impact the financial system.

Disinformation is hardly new to the financial world. Crimes of deceit, such as fraud, forgery, and market manipulation, are endemic challenges in every economy. Moreover, bad actors often incorporate new technologies into their schemes. It is therefore worth considering how novel deception tools like deepfakes could enable financial crimes or other forms of financial harm.

This paper merges two of Carnegie’s research areas. The FinCyber project works to better protect the financial system against cyber threats and to strengthen its resilience. The Deepfakes project has sought to develop safeguards against malicious deepfakes and other AI-generated disinformation. Through both projects, Carnegie has engaged extensively with leading stakeholders from industry, government, and academia.

In February 2020, Carnegie convened a private roundtable to discuss deepfakes in the financial sector. More than thirty international experts from the financial sector, tech industry, and regulatory community participated. This paper is informed by their collective insights, though it does not attempt to reflect any consensus.

Experts disagree sharply about the magnitude of financial threats posed by deepfakes. There have been only a handful of documented cases to date, making future trends difficult to judge. Some in the financial industry rank deepfakes as a top-tier technology challenge, predicting that they will corrode trust across the financial system and require significant policy changes. Others believe that deepfakes have been overhyped and that existing systems of trust and authentication can readily adapt to this new technology.