The rise of social media has brought a rash of recent charges of forgery and fraud in the dissemination of political attacks. Such attacks are nothing new: since the dawn of American elections, candidates have seen their words, views, and deeds fraudulently portrayed by political enemies. In 1880, a New York newspaper published a fraudulent letter, allegedly written by presidential candidate James A. Garfield, stating Garfield’s support for unrestricted Chinese immigration. Nearly a century later, during the 1972 presidential election, an aide to Richard Nixon reportedly wrote a false letter to the editor of the Manchester Union Leader claiming that Senator Edmund Muskie of Maine, a candidate for the Democratic Party nomination, had laughed at the use of an ethnic slur against Americans of French-Canadian descent.

However, technology is making it easier and cheaper to produce these sorts of spurious attacks, and harder to debunk them quickly. Researchers and software companies are refining and commercializing tools, colloquially referred to as Photoshop-for-Voice, that will allow users to create realistic audio and video files of people saying things they have never actually said. Other emerging technologies will allow users to modify a person’s facial expressions in real time. Ordinary hardware and inexpensive software will allow anyone to create and manipulate these sounds and images, and social media platforms will allow them to be distributed anonymously, instantaneously, and widely.

How well can the law keep up with these developments? Serious First Amendment issues are involved. The cases distinguish between parodies and impersonations of candidates and public figures on the one hand, and malicious attempts to mislead viewers about the words and deeds of those figures on the other. But they do not always draw that line easily, as the Supreme Court signaled three years ago in Susan B. Anthony List v. Driehaus, when it allowed a First Amendment challenge to proceed against an Ohio statute punishing maliciously false campaign statements. (The Sixth Circuit later struck down the Ohio law.) An aggrieved candidate might seek relief against a fraudulent attack under general federal or state anti-fraud or intellectual property laws, but time pressures and First Amendment protections make the odds of success slim, even if the culprit can be found.

The federal campaign finance laws offer little more help. At the end of 2017, the Federal Election Commission again recommended to Congress that it broaden the federal statute targeting certain narrow types of campaign fraud. 52 U.S.C. § 30124(a)(1) prohibits federal candidates and their employees and agents from “fraudulently misrepresenting” that they speak, write, or act “for or on behalf of any other candidate or political party . . . on a matter which is damaging” to that candidate or party. The provision does not, however, reach independent actors. The FEC wants the statute to “encompass all persons purporting to act on behalf of candidates” and other political organizations, and it would eliminate the requirement that the fraudulent activity pertain to a matter that is “damaging.” But Congress has shown no interest in these recommendations in the past, and there is little sign of any to come.

The best remedy for an aggrieved campaign may be self-help. As fake audio and video become harder to detect, campaigns, parties, and media outlets may increasingly turn to forensic experts to verify or debunk damaging files. As with the forgery of fine art and currency, a technological arms race may develop between those who produce fake content and those employed to spot the fakes. Candidates may also seize on growing uncertainty about what is real to keep attacks from reaching critical mass. More than ever, a major campaign in 2018 will need an integrated legal, research, communications, and technological strategy to survive in a world where, increasingly, it seems nothing is real.