Like the 2016 American Presidential Election, the 2017 French Presidential Election was the target of a Russian disinformation campaign that included the selective leaking of then-candidate Emmanuel Macron’s emails. While Macron still managed to win the election, a far more sinister future of information warfare is not far off.
Imagine, in the days before French citizens return to the polls to decide a hotly contested Presidential election that could alter the fate of Europe, the political crisis of the century occurs. An audio recording leaks. In it, the leading center-left candidate accepts a bribe of $100 million with the understanding that, once elected, he will protect the business interests of a group of French elites. Because France outlaws campaign coverage in the final 44 hours prior to the election, the last image French voters have is the leading candidate adamantly stating the audio was fake.[1] But it is clearly his voice, and it goes viral on social media in a matter of minutes. The populist, far-right candidate then wins in a landslide. Months later, it is determined that the recording of the center-left candidate accepting a bribe was a forgery: a “deep fake” created and released by the Russian Federation, designed to drastically alter the French election and the future policies of the Republic.
While the above description is entirely imaginary, it should not be considered science fiction. Deep fakes – hyper-realistic fake audio or video created using machine learning and nearly impossible to detect – are becoming a reality. They have the potential to reshape information warfare and pose a serious threat to open societies, as unsavory actors could use deep fakes to cause havoc and improve their geopolitical positions.[2] Deep fakes will further erode national discourses by eliminating any objective set of facts. People will choose the narratives they agree with and discount any contrary evidence as deep fakes, allowing nation-states and domestic political actors to prey on preexisting fissures in society.
Background: Deep Fakes
Editing photos or videos to alter the reality of the captured moment predates the internet. Nation-state propaganda campaigns have been around since World War II.[3] Photoshop was created in 1988.[4] But deep fakes represent a fundamental paradigm shift in how the world will operate online. As the adage states, “A lie will go around the world while the truth is pulling its boots on,” and with the help of social media, explored in greater depth below, this is truer than ever.[5]
The term “deep fake” originated in online pornography, where it referred to the process of inserting celebrities’ faces into pornographic scenes. While this is one of the current, cruder applications of deep fakes, it is not the focus here. Here “deep fake” will refer to any false audio or video that has been created using “neural network” machine learning techniques and “generative adversarial networks,” designed to ensure that the deep fake is nearly impossible to expose as inauthentic.[6] This is a notably limited definition, but it encapsulates the most threatening deep fakes likely to be created by highly motivated and well-funded actors.
The creation of deep fakes relies on two advancements in machine learning: neural networks and generative adversarial networks (GANs).[7] Neural networks loosely mirror how the human brain works.[8] The more the human brain is exposed to examples of something, such as how to shoot a basketball or the lyrics of a new song, the more quickly and accurately the brain can reproduce it. Neural networks use the same concept: the more examples that are fed into the network, the more accurately it can create a new example from scratch.[9]
Returning to deep fakes, the more video or audio data that is fed into the neural network, the more convincing the new, false audio or video will be. If a dataset containing every public comment made by President Obama is fed through a neural network, the network will then be able to create a video (or just audio) that is nearly indistinguishable from the real thing.[10]
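To make the “more examples, better reproduction” idea concrete, here is a minimal sketch (in Python, using scikit-learn rather than any deep fake tool) of a small neural network learning to reproduce a simple pattern, a sine wave, from progressively larger sets of examples. The network size and sample counts are illustrative assumptions only:

```python
# Minimal illustration: a small neural network reproduces a pattern
# (here, a sine wave) more accurately as it sees more examples.
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.metrics import mean_squared_error

rng = np.random.default_rng(0)
X_test = np.linspace(0, 2 * np.pi, 200).reshape(-1, 1)
y_test = np.sin(X_test).ravel()

for n_examples in (10, 100, 1000):
    X_train = rng.uniform(0, 2 * np.pi, size=(n_examples, 1))
    y_train = np.sin(X_train).ravel()
    net = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=5000, random_state=0)
    net.fit(X_train, y_train)
    err = mean_squared_error(y_test, net.predict(X_test))
    print(f"{n_examples:5d} training examples -> test error {err:.4f}")
```

A deep fake generator applies the same principle to hours of video and audio of a target rather than points on a curve.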
But neural networks are only half of the equation. Without GANs, deep fakes would not be nearly as realistic as they are. Generative adversarial networks are the brainchild of Ian Goodfellow, a Google researcher, who combined two neural networks in adversarial roles to improve the end product.[11] The first neural network, known as the generator, works as described above.[12] Its job is to create the new, false video or audio by attempting to replicate the dataset it is fed. Then both the original dataset and the newly created deep fake are fed into a second neural network, known as the discriminator.[13] The discriminator’s job is straightforward: decide which videos in the dataset (which now contains the deep fake) are real.[14] If the discriminator can identify the deep fake, the generator can then “learn” how the discriminator spotted the fake and correct whatever error was made.[15] With each iteration of this game, the deep fakes become more and more difficult to discover.
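The adversarial loop itself can be sketched in a few dozen lines. The toy example below (PyTorch; the one-dimensional “dataset” and all sizes are illustrative assumptions, nothing like a real deep fake system) shows a generator learning to mimic a simple distribution while a discriminator tries to tell real samples from fakes:

```python
# Toy GAN: a generator learns to mimic samples from N(4, 1.25)
# while a discriminator tries to tell real samples from fakes.
import torch
import torch.nn as nn

torch.manual_seed(0)

generator = nn.Sequential(nn.Linear(8, 16), nn.ReLU(), nn.Linear(16, 1))
discriminator = nn.Sequential(nn.Linear(1, 16), nn.ReLU(),
                              nn.Linear(16, 1), nn.Sigmoid())

loss_fn = nn.BCELoss()
g_opt = torch.optim.Adam(generator.parameters(), lr=1e-3)
d_opt = torch.optim.Adam(discriminator.parameters(), lr=1e-3)

for step in range(5000):
    real = torch.randn(64, 1) * 1.25 + 4.0   # the "authentic dataset"
    noise = torch.randn(64, 8)
    fake = generator(noise)

    # 1. Train the discriminator to label real samples 1 and fakes 0.
    d_opt.zero_grad()
    d_loss = loss_fn(discriminator(real), torch.ones(64, 1)) + \
             loss_fn(discriminator(fake.detach()), torch.zeros(64, 1))
    d_loss.backward()
    d_opt.step()

    # 2. Train the generator to make the discriminator say "real".
    g_opt.zero_grad()
    g_loss = loss_fn(discriminator(fake), torch.ones(64, 1))
    g_loss.backward()
    g_opt.step()

samples = generator(torch.randn(1000, 8))
print("generated mean/std:", samples.mean().item(), samples.std().item())
```

Real deep fake systems play exactly this game at enormously greater scale, with video frames and audio waveforms in place of single numbers.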
This presents one of the major problems with distinguishing a deep fake from an actual video or audio recording: the moment a deep fake is discovered, a correction can be made that makes the next deep fake harder to discover. Each discovery improves the deep fakes; thus, each new method of discovery only works once.[16] Researchers are continually refining their detection methods, looking at everything from how often the figure in a video blinks, to irregular head movements, to micro-color changes in the face that occur as the heart beats. But the mere fact that these methods are known ensures they will fail in the future.[17] That means researchers face a quandary when identifying deep fakes: do they publicize their work, and the method by which they can distinguish deep fakes, or do they keep it private so that it remains operational for a longer period of time?
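As a concrete illustration of one such method, the sketch below flags clips whose blink rate is implausibly low. The per-frame eye-openness signal is assumed to come from some upstream facial-landmark model, and every threshold is a hypothetical placeholder; as noted above, once a heuristic like this is published, the next generation of GANs can be trained to defeat it:

```python
# Hypothetical blink-rate check: given a per-frame "eye openness" signal
# (e.g., an eye aspect ratio from a facial-landmark model, assumed to
# exist upstream), flag clips whose blink frequency falls far outside
# the human norm.
import numpy as np

def count_blinks(eye_openness: np.ndarray, closed_threshold: float = 0.2) -> int:
    """Count transitions from open eyes to closed eyes."""
    closed = eye_openness < closed_threshold
    # A blink begins on a frame where the eye closes after being open.
    return int(np.sum(closed[1:] & ~closed[:-1]))

def looks_synthetic(eye_openness: np.ndarray, fps: float,
                    normal_range=(8.0, 30.0)) -> bool:
    """Flag a clip whose blinks-per-minute is implausible (bounds illustrative)."""
    minutes = len(eye_openness) / fps / 60.0
    rate = count_blinks(eye_openness) / max(minutes, 1e-9)
    return not (normal_range[0] <= rate <= normal_range[1])

# Example: a 30-second clip at 30 fps in which the eyes never close.
signal = np.full(900, 0.35)
print(looks_synthetic(signal, fps=30.0))  # True -- no blinking is suspicious
```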
Information Warfare and the Coming Deep Fakes
Information warfare, or targeted misinformation campaigns designed to confuse and obfuscate for political gain, is nothing new. Disinformation and so-called fake news have been around for generations. Here disinformation is defined as intentional falsehoods spread as actual news to advance a political goal.[18] These disinformation campaigns, originally termed propaganda, developed alongside the printing press. The first information war between the United States and the Soviet Union began immediately after World War II, in 1947.[19] At the time, a correspondent for the New York Times stated bluntly, “Propaganda may not convince, but it adds to the confusion between truth and falsehoods.”[20]
The information warfare of today is designed to do exactly the same thing, except that a monopoly on information distribution no longer exists.[21] The internet, and the creation of social media, means anyone with a screen can publish whatever content they want. Once, giant media conglomerates formed and defined the information environment we lived in. Everyone received news from the same places, and major news hosts such as Walter Cronkite were the arbiters of truth and trusted sources for understanding what was going on in the world. Today, this is not the case. Anyone can report “facts”–all it takes is a Twitter account and a willingness to post.[22] The collective media environment people live in has fractured, and content can spread like wildfire.[23] An MIT study found that while bots spread false articles and true articles at the same rate, false articles reach 1,500 people nearly six times faster than true articles.[24] This creates the perfect opportunity for nation-states to wreak havoc in the domestic politics of adversaries while maintaining plausible deniability.
The fractured sense of reality the current news environment creates was never clearer than in the wake of a Presidential rally in 2017, when the question “what is happening in Sweden?” tore through Western media outlets after President Trump stated, “You look what’s happening last night in Sweden. Sweden! Who would believe it?”[25] The only problem was that no one knew what the President was talking about. Nothing had happened in Sweden.
The story of how Sweden made it into the President’s Florida rally that night is long and complex, but it centers on an obscure filmmaker named Ami Horowitz and an appearance on Fox News with Tucker Carlson. Horowitz had stitched together a video that appeared to document an “unprecedented crime wave at the hands of Muslim men.”[26] The story was a complete falsehood and began as a conspiracy theory, but President Trump validated it by talking about it. This version of fake news was not perpetrated by a nation-state, but it highlights the fracturing of fact. Nothing was happening in Sweden, and yet for 24 hours the world wondered what was. Furthermore, despite the President later admitting he was referring to a Fox News clip, there remains a subset of the American population that believes something horrible is going on in Sweden and that Muslim immigrants are responsible.[27]
While the question of what had happened in Sweden was not the result of a state-sponsored disinformation campaign, a story spread in Germany was. Lisa was a 13-year-old girl who lived in the German-Russian community in Berlin.[28] She went missing for several days, and when she was found she told her parents, and later investigators, that she had been kidnapped and gang-raped by three Muslim men.[29] This was a lie; she had actually gotten into trouble at school and stayed with a friend to avoid punishment.[30] But her first story, of kidnapping and gang rape, took on a life of its own. Initially it spread through word of mouth and on Facebook, yet it soon attracted the attention of the Russian state-controlled news station Channel One.[31] At a German far-right political rally that took place after the initial news report, a relative of Lisa’s gave a statement decrying the immigration policies of Angela Merkel. After this, Sputnik, a Russian government-run news organization, ran a special on Lisa’s story and, despite the fact that it had been debunked several times by this point, reported it as the truth.[32] The story concluded with 700 people protesting outside the German Chancellery.[33]
These two stories are not isolated incidents. Disinformation spreads with increasing regularity as people choose the news that fits their worldview, and social media reinforces this as people naturally engage with those who share similar beliefs and outlooks.[34] But now this dynamic is being weaponized by nation-states, and deep fakes will only add to the confusion.
The Russian state has made information warfare a focus of its foreign policy. Information warfare has become another avenue for Russia to exploit in its quest to undermine European institutions.[35] Russia has two main goals. First, it seeks to create and deepen divisions between the member countries of NATO and the EU, and within the institutions themselves.[36] Second, Russia “seeks to exacerbate divisions in consolidated democracies who are seen as the flagbearer for the European values and institutions.”[37] Information warfare represents an attractive strategy because it preys on the openness that characterizes democratic societies.[38]
Deep fakes represent a turning point in information warfare. They will increase the reach of fake news and weaken our connection to a shared understanding of facts. If people cannot trust what they see and hear with their own eyes and ears online, they will choose what they want to believe.[39] An agreed-upon, objective reality will cease to exist, as there will be no shared facts on which to build a dialogue. This is already happening, as the above examples illustrate, and while Russia features prominently in many of these stories, it is not the only actor in this space. For instance, in the 2018 American midterm elections, Iran began exploring the use of social media misinformation campaigns.[40] Manipulating social media is easy, especially relative to the effect it can have. Deep fakes are the next wave, with the potential to fundamentally alter news, politics, and privacy.
The Publicity Problem and the Ultimate Attribution Error
In many cases, publicizing problems helps defeat them, or at least limits their ability to influence how people think and act. These are self-invalidating ideas, theories, or concepts. Take climate change, for example. It has the potential to be a self-invalidating idea: the more people know about climate change, the more they can change their behavior to limit the effects of pollution on the climate and ensure that the more apocalyptic predictions do not occur. Deep fakes, however, are not self-invalidating.
In fact, publicizing the threat posed by deep fakes can actually allow actors to manipulate their environment.[41] This phenomenon is called the “Liar’s Dividend.”[42] The Liar’s Dividend arises when an authentic audio or video recording is released and reported on, and the public figure who has been caught saying or doing something unsavory simply claims it is a deep fake. By publicizing the existence of deep fakes, and the difficulty of detecting them, people become primed to view any video or audio that does not conform to their preexisting beliefs as fake, especially if they support or like the person claiming the recording is a fake. Knowledge of deep fakes may thus lead to the discounting of authentic videos; real news becomes deep fake news in the blink of an eye.
Psychologists define a motivated bias as one that serves some purpose other than reality appraisal.[43] In many cases, motivated biases are the result of people needing to believe something for some internal reason; the belief is part of how they think, and disproving motivated biases is nearly impossible.[44] Deep fakes have the potential to reinforce our motivated biases and, thus, limit our ability to learn.
The Ultimate Attribution Error is a flaw in our thought processes that causes people to create situational or dispositional explanations for someone’s actions based on how they already feel about that person.[45] As an example, if a college professor views a particular student as lazy or inattentive, and that student is late for class, the professor will conclude that is simply how the student is. But if a student whom the professor knows and likes is late for class, the professor is much more likely to craft a situational explanation for the lateness. In the latter case, because the professor crafted a situational explanation, the lateness does not change the view the professor holds. Deep fakes have the ability to interact with and reinforce the Ultimate Attribution Error, causing a further schism in modern political debate.
If someone is already primed to dislike and mistrust a given public figure, the release of a deep fake will act as confirming evidence of that dislike. Even if the audio or video is later proved to be a deep fake, the damage will already be done. But if someone already likes and trusts a given public figure, is presented with contradictory evidence such as audio or video, and has heard about deep fakes, they will be more likely to explain away the new evidence as fake.[46] This is the Liar’s Dividend at work, and it is why denying any contradictory audio or video as fake may be a successful strategy for public figures: there are already supporters ready to believe and accept the denial. This is especially dangerous because the Ultimate Attribution Error is a motivated bias, making it nearly impossible to overcome. Deep fakes have the potential to interact with our motivated biases and social media bubbles in ways that further accentuate the growing divisions in our society and limit future debate, as there is no agreed-upon reality.
Solutions to the Coming Deep Fake Wave
While today’s deep fakes are relatively easy to distinguish with the human eye, this will not last forever. The brief moment we occupy, in which deep fakes are easy to identify, is why technologists, social media companies, and policymakers need to begin developing tactics to limit the efficacy of deep fakes and ensure the darkest predictions do not come true. Several potential solutions currently being explored by researchers, companies, and policymakers have the potential to limit the negative consequences of deep fakes. These include deep fake detection algorithms, digital provenance solutions, and life logs. However, all of these solutions face severe challenges as deep fakes advance.
Algorithmic Solutions
Technology gave us deep fakes, so it is important to consider whether technology can take them away. There are several potential technological solutions to the issue of deep fakes, the first being the development of an algorithm that can detect deep fakes and classify them immediately.[47] Such an algorithm could then be linked with the major social media platforms (Twitter, Facebook, Instagram, and YouTube) to scan audio or video before it is uploaded.[48]
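Such an integration might look something like the following sketch, in which score_deepfake stands in for a hypothetical detection model (building that model is the hard, unsolved part) and the thresholds are purely illustrative:

```python
# Sketch of an upload-time screening hook. `score_deepfake` is assumed
# to be some classifier returning a probability in [0, 1] that a clip
# is synthetic -- a stand-in, not a real API.
from dataclasses import dataclass

@dataclass
class ScreeningResult:
    accepted: bool
    score: float
    reason: str

def screen_upload(path: str, score_deepfake, block_above: float = 0.9,
                  review_above: float = 0.6) -> ScreeningResult:
    """Score a video before publication: block it, hold it for human
    review, or accept it, depending on the model's confidence."""
    score = score_deepfake(path)
    if score >= block_above:
        return ScreeningResult(False, score, "blocked: likely synthetic")
    if score >= review_above:
        return ScreeningResult(False, score, "held for human review")
    return ScreeningResult(True, score, "accepted")

# Usage with a stand-in scorer:
print(screen_upload("clip.mp4", lambda path: 0.75))
```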
Companies are already experimenting with machine-learning-driven solutions, including using algorithms to search the web for a higher-resolution version of a video.[49] In the end, some form of machine learning might be the ideal solution, as it stops the spread of deep fakes before it begins and could potentially avert crises like the one described above. It is important to note, though, that this would do little to stop the individual cases where deep fakes can still do damage.[50]
But a detection algorithm would only go so far. Researchers are already working on such projects, including one focusing on the unnatural patterns of blinking that appear in current deep fakes.[51] This cannot solve the problem alone. As outlined above, innovations in deep fake detection that rely on mistakes made in the machine learning process can simply be corrected in the next iteration.[52] GANs will be fed videos with natural eye blinking, and their discriminators will learn to flag unnatural blinking for the generator to correct. A perfect solution that would end the spread of deep fakes does not currently exist and most likely will not for the next decade or longer.[53]
Digital Provenance Solutions
The second technological fix is often referred to as a “digital provenance” solution.[54] The idea is relatively simple: every time a photo or video is taken on any device, it is automatically tagged with a digital watermark specifying when it was captured.[55] Some companies are already experimenting in this area too, including Microsoft, whose PhotoDNA technology, developed in 2009, is used to fight child pornography.[56] Microsoft’s technology allows content to be tagged with a digital fingerprint so that it can be more quickly identified and removed later.[57]
However, digital provenance as suggested here expands the principle of Microsoft’s digital fingerprint, because the provenance tag would be imprinted by the device capturing or creating the image. In this scenario, the tag would be embedded in the image from its creation rather than added afterward.
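A minimal sketch of that capture-time idea appears below, assuming each device holds a signing key in secure hardware; key distribution and trust, which the sketch ignores entirely, are the genuinely hard parts:

```python
# Minimal sketch of capture-time provenance: the device signs the media
# bytes plus a capture timestamp, and anyone holding the device's public
# key can later verify that the file is untouched.
import hashlib
import time
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey
from cryptography.exceptions import InvalidSignature

device_key = Ed25519PrivateKey.generate()  # would live in secure hardware

def sign_capture(media_bytes: bytes) -> dict:
    """Produce a provenance tag at the moment of capture."""
    timestamp = str(int(time.time())).encode()
    digest = hashlib.sha256(media_bytes + timestamp).digest()
    return {"timestamp": timestamp, "signature": device_key.sign(digest)}

def verify_capture(media_bytes: bytes, tag: dict, public_key) -> bool:
    """Check the tag against the file; any edit breaks the signature."""
    digest = hashlib.sha256(media_bytes + tag["timestamp"]).digest()
    try:
        public_key.verify(tag["signature"], digest)
        return True
    except InvalidSignature:
        return False

video = b"...raw video bytes..."
tag = sign_capture(video)
print(verify_capture(video, tag, device_key.public_key()))                # True
print(verify_capture(video + b"tampered", tag, device_key.public_key()))  # False
```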
This solution, while appealing, can only succeed in conjunction with two other actions. First, all devices that capture audio or video would have to be equipped to create a digital watermark, including smartphones, laptops, handheld video cameras, and an array of other devices.[58] Second, all of the major social media platforms would have to require these digital watermarks in order to post on their sites.[59] Both are unlikely without legally requiring both device manufacturers and social media companies to acquiesce.
Furthermore, there are several drawbacks to requiring this technology. Most notably, it would likely make it easier for authoritarian regimes to trace content they disagree with back to its source, which, in turn, could undermine groups seeking greater freedom in restricted societies.
Life Logging Solutions
A final technological solution could be offered by the private sector: life logging. Again, the idea is relatively simple, but it would only be viable for the limited group of public figures most at risk.[60] A private company would offer to track and log a person’s entire life, including all of their interactions, locations, conversations, and other materials.[61] This would probably be accomplished with some sort of wearable technology, such as a smart watch.[62] The company could then partner with social media platforms to quickly verify whether any given video or audio event actually occurred, with the data to prove its conclusion.[63]
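In outline, verification against such a log might look like the following sketch; the log format, field names, and one-hour window are illustrative assumptions, not any real provider’s API:

```python
# Sketch of a life-log lookup: a provider keeps a timestamped record of
# a client's whereabouts and checks claimed events against it.
from datetime import datetime, timedelta

life_log = [
    {"time": datetime(2019, 3, 1, 14, 0), "location": "office", "activity": "staff meeting"},
    {"time": datetime(2019, 3, 1, 19, 30), "location": "home", "activity": "dinner"},
]

def could_have_occurred(claimed_time: datetime, claimed_location: str,
                        log=life_log, window=timedelta(hours=1)) -> bool:
    """Return True only if an entry near the claimed time places the
    client at the claimed location; otherwise the clip contradicts the log."""
    nearby = [entry for entry in log if abs(entry["time"] - claimed_time) <= window]
    return any(entry["location"] == claimed_location for entry in nearby)

# A video purportedly showing the client at a hotel at 19:45 conflicts
# with the log entry placing them at home:
print(could_have_occurred(datetime(2019, 3, 1, 19, 45), "hotel"))  # False
```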
However, like algorithmic and digital provenance solutions, life logging has many problems. First, any company would have to handle a nearly insurmountable amount of data, and then ensure that this data stays secure.[64] Second, this solution has serious privacy drawbacks, both for those having their lives logged and for everyone who interacts with them.[65] Finally, there is the question of how this technology would expand: would companies want to monitor their employees during work hours like this?[66] For now, technology does not appear to be the solution to deep fakes.
Policy Solutions
Like technology-based solutions, policy and legal solutions are fraught. Under current US law, creating and sharing deep fakes could be classified as illegal under defamation and fraud statutes, among other criminal and civil infractions.[67] Furthermore, Congress could pass laws making it illegal to create and share deep fakes online for nefarious purposes.
But, much like the technological solutions, there are several problems here. To begin with, it may be very difficult to determine who exactly created a deep fake, and even when the creator is known, they may be outside the jurisdiction of US law enforcement.[68] After all, they may be nation-state actors, or acting on behalf of nation-states.
Additionally, there is the question of how to define a deep fake when writing anti-deep fake laws. Digital manipulations of audio, video, and photographs are commonplace, and deep fakes have the potential for positive applications as well.[69]
Another legal solution is to require social media companies to monitor, verify, and remove deep fakes. This solution has some precedent: Germany requires social media companies to remove racist, extremist, or threatening content within 24 hours of it being reported or face stiff fines.[70] But this approach has its own challenges; it could lead social media companies to over-censor users as they try to avoid fines while being unable to determine exactly which videos are deep fakes and which are not.[71] Within the US, freedom of speech concerns would be a further barrier to such solutions.
In the end, there are no perfect solutions to deep fakes right now, and as deep fakes become more realistic and more dangerous, few good options will exist. Open, democratic societies will have to coexist with deep fakes.[72] This will require citizens to turn to reputable news organizations to verify and elaborate on what they see on social media, social media companies to crack down on fake content, and individuals to resist instantly believing whatever they see online, especially when it confirms their preexisting beliefs.
Conclusion
Deep fakes are not simply an evolution in propaganda. They represent a revolution in how disinformation is created and propagated, and in the effects it can have. Deep fakes pose a very real threat to both individuals and entire societies, and as of right now there are no viable solutions to this burgeoning problem. The one glimmer of hope is that there is still time. Deep fakes are becoming easier to make, but they can still be detected by the human eye.
Yet, as more and more nation-state actors turn to social media manipulation and information warfare to assist in their geopolitical ambitions, deep fakes will become a more serious threat. It is impossible to predict when the deep fake problem will come to a head on the international stage, when a malicious actor will release a deep fake in the hopes of altering an election or spurring a war, but that future is not far off.
The 2016 U.S. Presidential Election was defined by a hostile foreign power’s misinformation campaign. The development of deep fakes takes the issues presented in 2016 and multiplies them. That is why individuals, companies, and governments need to start working on this problem now, before it is too late. Deep fakes pose a very real threat, and without a viable solution, they could represent a dramatic shift in the world, as the idea of an objective reality cracks more deeply than ever before and takes a truly Orwellian turn.
Endnotes
[1] “French Media Rules Prohibit Election Coverage over Weekend.” France 24, 1.
[2] Meserole, Christopher, and Alina Polyakova. “The West Is Ill-prepared for the Wave of ‘Deep Fakes’ That Artificial Intelligence Could Unleash.” Brookings.edu. May 25, 2018, 2.
[3] Rutenberg, Jim. “RT, Sputnik and Russia’s New Theory of War.” The New York Times, 9.
[4] Pagin, Shaun. “The Evolution of Photoshop: 25 Years in The Making.” Adobe Photoshop History – 25 Years in the Making, 1.
[5] Shapiro, Fred. “Quotes Uncovered: How Lies Travel.” Freakonomics, 1.
[6] Chesney, Robert, and Danielle Citron. “Deepfakes and the New Disinformation War: The Coming Age of Post-Truth Geopolitics.” Foreign Affairs 98, no. 1 (2019), 1.
[7] Ibid., 3.
[8] Karras, Tero, Timo Aila, Samuli Laine, and Jaakko Lehtinen. “Progressive Growing of GANs for Improved Quality, Stability, and Variation.” 2017, 1.
[9] Chesney, Robert, and Danielle Citron. “Deep Fakes: A Looming Challenge for Privacy, Democracy, and National Security. DRAFT” SSRN Electronic Journal, 2018, 5.
[10] This was done by researchers at the University of Washington. See: Suwajanakorn, Supasorn, Steven Seitz, and Ira Kemelmacher-Shlizerman. “Synthesizing Obama: Learning Lip Sync from Audio.” 3.
[11] Chesney and Citron. “Deep Fakes: A Looming Challenge for Privacy, Democracy, and National Security. DRAFT.” 6.
[12] Karras, Tero, Timo, Samuli, and Lehtinen. “Progressive Growing of GANs for Improved Quality, Stability, and Variation.” 3.
[13] Ibid., 4.
[14] Fletcher, John. “Deepfakes, Artificial Intelligence, and Some Kind of Dystopia: The New Faces of Online Post-Fact Performance.” Theatre Journal 70, no. 4 (2018): 455-471.
[15] Chesney, Robert, and Danielle Citron. “Deep Fakes: A Looming Challenge for Privacy, Democracy, and National Security. DRAFT” SSRN Electronic Journal, 2018, 6.
[16] Schwartz, Oscar. “You Thought Fake News Was Bad? Deep Fakes Are Where Truth Goes to Die.” The Guardian, 2.
[17] Ibid., 3.
[18] Bennett, W Lance, and Steven Livingston. “The Disinformation Order: Disruptive Communication and the Decline of Democratic Institutions.” European Journal of Communication 33, no. 2 (2018): 124.
[19] Rutenberg. “RT, Sputnik and Russia’s New Theory of War.” 9.
[20] Ibid., 9.
[21] Bradshaw, Samantha, and Philip N. Howard. “The Global Organization of Social Media Disinformation Campaigns.” Journal of International Affairs 71, no. 1.5 (2018): 25.
[22] Ibid., 8.
[23] Ronald J. Deibert. “The Road to Digital Unfreedom: Three Painful Truths About Social Media.” Journal of Democracy 30, no. 1 (2019): 29.
[24] Fillion, Rubina Madan. “Fighting the Reality of Deepfakes.” Nieman Lab, 3.
[25] Bennett, W Lance, and Steven Livingston. “The Disinformation Order: Disruptive Communication and the Decline of Democratic Institutions.” 123.
[26] Ibid., 123.
[27] Ibid., 123.
[28] Rutenberg. “RT, Sputnik and Russia’s New Theory of War.” 2.
[29] Ibid., 3.
[30] Ibid., 3.
[31] Ibid., 4.
[32] Ibid., 4.
[33] Ibid., 5.
[34] Deibert. “The Road to Digital Unfreedom: Three Painful Truths About Social Media.” 29.
[35] Mejias, Ulises A, and Nikolai E Vokuev. “Disinformation and the Media: The Case of Russia and Ukraine.” Media, Culture & Society 39, no. 7 (2017): 1038.
[36] Putin’s Asymmetric Assault on Democracy in Russia and Europe: Implications for U.S. National Security (Senator Benjamin Cardin, Vice-Chair, United States Senate Committee on Foreign Relations, Minority Report), January 10, 2018, 99.
[37] Ibid., 99.
[38] National Intelligence Council. Assessing Russian Activities and Intentions in Recent US Elections. Intelligence Community Assessment 2017-01D. Washington, D.C.: Office of the Director of National Intelligence, 2017, 12.
[39] Chesney and Citron. “Deep Fakes: A Looming Challenge for Privacy, Democracy, and National Security. DRAFT” 58.
[40] “U.S. Intelligence Chief Says Russia, Iran Sought to Influence 2018 Midterm Voters.” RadioFreeEurope/RadioLiberty, 1.
[41] Villasenor, John. “Artificial Intelligence, Deepfakes, and the Uncertain Future of Truth.” Brookings.edu. February 14, 2019, 3.
[42] Chesney and Citron. “Deep Fakes: A Looming Challenge for Privacy, Democracy, and National Security DRAFT.” 28.
[43] Hewstone, Miles. “The ‘Ultimate Attribution Error’? A Review of the Literature on Intergroup Causal Attribution.” European Journal of Social Psychology 20, no. 4 (1990): 311.
[44] Ibid., 315.
[45] Ibid., 311.
[46] Fillion, Rubina Madan. “Fighting the Reality of Deepfakes.” Nieman Lab, 3.
[47] Villasenor, John. “Artificial Intelligence, Deepfakes, and the Uncertain Future of Truth.” Brookings.edu. 2.
[48] Chesney, Robert, and Danielle Citron. “Deepfakes and the New Disinformation War: The Coming Age of Post-Truth Geopolitics.” Foreign Affairs 98, no. 1 (2019), 152.
[49] For more on this see reporting: Cole, Samantha. “Gfycat’s AI Solution for Fighting Deepfakes Isn’t Working.” Motherboard. June 19, 2018.
[50] Chesney and Citron. “Deepfakes and the New Disinformation War: The Coming Age of Post-Truth Geopolitics.” 152.
[51] Ibid., 152.
[52] Villasenor, John. “Artificial Intelligence, Deepfakes, and the Uncertain Future of Truth.” Brookings.edu. 2.
[53] Chesney and Citron. “Deep Fakes: A Looming Challenge for Privacy, Democracy, and National Security. DRAFT.” 58.
[54] Ibid., 30.
[55] For an example of a company making this technology see: Newman, Lily Hay. “A New Tool Protects Videos from Deepfakes and Tampering.” Wired. February 12, 2019.
[56] Cowper, Bruce. “Microsoft PhotoDNA Technology Helping Law Enforcement Fight Child Pornography.” Software Asset Management – Microsoft SAM. March 20, 2012.
[57] Ibid., 3.
[58] Chesney and Citron. “Deep Fakes: A Looming Challenge for Privacy, Democracy, and National Security. DRAFT.” 30.
[59] Ibid., 30.
[60] Chesney and Citron. “Deepfakes and the New Disinformation War: The Coming Age of Post-Truth Geopolitics.” 154.
[61] Ibid., 154.
[62] Ibid., 154.
[63] Chesney and Citron. “Deep Fakes: A Looming Challenge for Privacy, Democracy, and National Security. DRAFT.” 54.
[64] Chesney and Citron. “Deepfakes and the New Disinformation War: The Coming Age of Post-Truth Geopolitics.” 154.
[65] Chesney and Citron. “Deep Fakes: A Looming Challenge for Privacy, Democracy, and National Security. DRAFT.” 55.
[66] Ibid., 55.
[67] Chesney, Robert, and Danielle Citron. “Deep Fakes: A Looming Crisis for National Security, Democracy and Privacy?” Lawfare, 3.
[68] Ibid., 3.
[69] Ibid., 4.
[70] Chesney and Citron. “Deepfakes and the New Disinformation War: The Coming Age of Post-Truth Geopolitics.” 154.
[71] Ibid., 154.
[72] Chesney and Citron. “Deep Fakes: A Looming Crisis for National Security, Democracy and Privacy?” Lawfare, 5.
Bibliography
Bennett, W Lance, and Steven Livingston. “The Disinformation Order: Disruptive Communication and the Decline of Democratic Institutions.” European Journal of Communication 33, no. 2 (2018): 122-39.
Bradshaw, Samantha, and Philip N. Howard. “THE GLOBAL ORGANIZATION OF SOCIAL MEDIA DISINFORMATION CAMPAIGNS.” Journal of International Affairs 71, no. 1.5 (2018): 23-32. https://www.jstor.org/stable/26508115.
Chesney, Robert, and Danielle Citron. “Deep Fakes: A Looming Challenge for Privacy, Democracy, and National Security. DRAFT.” SSRN Electronic Journal, 2018.
Chesney, Robert, and Danielle Citron. “Deep Fakes: A Looming Crisis for National Security, Democracy and Privacy?” Lawfare. February 26, 2018. Accessed February 12, 2019. https://www.lawfareblog.com/deep-fakes-looming-crisis-national-security-democracy-and-privacy.
Chesney, Robert, and Danielle Citron. “Deepfakes and the New Disinformation War: The Coming Age of Post-Truth Geopolitics.” Foreign Affairs 98, no. 1 (2019): 147-155.
Cole, Samantha. “Gfycat’s AI Solution for Fighting Deepfakes Isn’t Working.” Motherboard. June 19, 2018. Accessed February 27, 2019. https://motherboard.vice.com/en_us/article/ywe4qw/gfycat-spotting-deepfakes-fake-ai-porn.
Cowper, Bruce. “Microsoft PhotoDNA Technology Helping Law Enforcement Fight Child Pornography.” Software Asset Management – Microsoft SAM. March 20, 2012. Accessed February 27, 2019. https://www.microsoft.com/security/blog/2012/03/20/microsoft-photodna-technology-helping-law-enforcement-fight-child-pornography/.
Deibert, Ronald J. “The Road to Digital Unfreedom: Three Painful Truths About Social Media.” Journal of Democracy 30, no. 1 (2019): 25-39. https://muse.jhu.edu/ (accessed February 11, 2019).
Fillion, Rubina Madan. “Fighting the Reality of Deepfakes.” Nieman Lab. Accessed February 27, 2019. http://www.niemanlab.org/2018/12/fighting-the-reality-of-deepfakes/.
Fletcher, John. “Deepfakes, Artificial Intelligence, and Some Kind of Dystopia: The New Faces of Online Post-Fact Performance.” Theatre Journal 70, no. 4 (2018): 455-471. https://muse.jhu.edu/ (accessed February 27, 2019).
“French Media Rules Prohibit Election Coverage over Weekend.” France 24. May 07, 2017. Accessed February 12, 2019. https://www.france24.com/en/20170506-france-media-rules-prohibit-election-coverage-over-weekend-presidential-poll.
Greenberg, Andy. “Hackers Hit Macron with Huge Email Leak Ahead of French Election.” Wired. June 03, 2017. Accessed February 12, 2019. https://www.wired.com/2017/05/macron-email-hack-french-election/.
Hewstone, Miles. “The ‘Ultimate Attribution Error’? A Review of the Literature on Intergroup Causal Attribution.” European Journal of Social Psychology 20, no. 4 (1990): 311-35.
Karras, Tero, Timo Aila, Samuli Laine, and Jaakko Lehtinen. “Progressive Growing of GANs for Improved Quality, Stability, and Variation.” 2017.
Mejias, Ulises A, and Nikolai E Vokuev. “Disinformation and the Media: The Case of Russia and Ukraine.” Media, Culture & Society 39, no. 7 (2017): 1027-042.
Meserole, Christopher, and Alina Polyakova. “The West Is Ill-prepared for the Wave of ‘Deep Fakes’ That Artificial Intelligence Could Unleash.” Brookings.edu. May 25, 2018. Accessed February 12, 2019. https://www.brookings.edu/blog/order-from-chaos/2018/05/25/the-west-is-ill-prepared-for-the-wave-of-deep-fakes-that-artificial-intelligence-could-unleash/.
National Intelligence Council. Assessing Russian Activities and Intentions in Recent US Elections. Intelligence Community Assessment 2017-01D. Washington, D.C.: Office of the Director of National Intelligence, 2017.
Newman, Lily Hay. “A New Tool Protects Videos From Deepfakes and Tampering.” Wired. February 12, 2019. Accessed February 12, 2019. https://www.wired.com/story/amber-authenticate-video-validation-blockchain-tampering-deepfakes/.
Pagin, Shaun. “The Evolution of Photoshop: 25 Years in the Making.” Adobe Photoshop History – 25 Years in the Making. Accessed February 12, 2019. https://www.fastprint.co.uk/blog/the-evolution-of-photoshop-25-years-in-the-making.html.
Putin’s Asymmetric Assault on Democracy in Russia and Europe: Implications for U.S. National Security (Senator Benjamin Cardin, Vice-Chair, United States Senate Committee on Foreign Relations, Minority Report). January 10, 2018.
Rutenberg, Jim. “RT, Sputnik and Russia’s New Theory of War.” The New York Times. September 13, 2017. Accessed February 12, 2019. https://www.nytimes.com/2017/09/13/magazine/rt-sputnik-and-russias-new-theory-of-war.html.
Schwartz, Oscar. “You Thought Fake News Was Bad? Deep Fakes Are Where Truth Goes to Die.” The Guardian. November 12, 2018. Accessed February 12, 2019. https://www.theguardian.com/technology/2018/nov/12/deep-fakes-fake-news-truth.
Shapiro, Fred. “Quotes Uncovered: How Lies Travel.” Freakonomics. April 07, 2011. Accessed February 12, 2019. http://freakonomics.com/2011/04/07/quotes-uncovered-how-lies-travel/.
Suwajanakorn, Supasorn, Steven Seitz, and Ira Kemelmacher-Shlizerman. “Synthesizing Obama: Learning Lip Sync from Audio.” ACM Transactions on Graphics (TOG) 36, no. 4 (2017): 1-13.
“U.S. Intelligence Chief Says Russia, Iran Sought to Influence 2018 Midterm Voters.” RadioFreeEurope/RadioLiberty. December 22, 2018. Accessed February 12, 2019. https://www.rferl.org/a/us-election-interference-russia-iran-china-coats-dni/29670006.html.
Villasenor, John. “Artificial Intelligence, Deepfakes, and the Uncertain Future of Truth.” Brookings.edu. February 14, 2019. Accessed February 27, 2019. https://www.brookings.edu/blog/techtank/2019/02/14/artificial-intelligence-deepfakes-and-the-uncertain-future-of-truth/.