Jenna Abrams was a popular figure on Twitter during the 2016 presidential campaign, amassing over 70,000 followers.1 Abrams, tweeting as @Jenn_Abrams, started out in 2014 with a constant stream of tweets that reflected the populist right-wing politics of a demographic that would end up being a core part of Donald Trump’s base. Her more popular tweets were increasingly retweeted by establishment figures like Donald Trump Jr. and Kellyanne Conway (then Trump’s campaign manager).
The problem is, Jenna Abrams never really existed. According to congressional investigators, the account was in fact the creation of a Russian government-backed entity called the Internet Research Agency. Based in St. Petersburg, Russia, this agency employed hundreds of people with the goal of spreading information and misinformation that would undermine the US and promote the interests of Russia.
These workers created thousands of what are known as “gray outlets” or “trolls”: social media accounts and web pages that looked at first glance like the pages of ordinary Americans and other Westerners, but were in fact run by English-speaking Russians, or by Westerners in the employ of the Russians.
This is a new twist on an old tactic. The fake accounts are a continuation of a form of political warfare carried out by the Russians, sometimes referred to as “Active Measures.” The Active Measures program aims to influence world events specifically through various forms of media manipulation. It dates back, in various incarnations, to the 1920s.
In January 1998, CNN interviewed retired KGB Major General Oleg Kalugin, who described the role of “subversion” in Soviet intelligence:
The heart and soul of the Soviet intelligence was not intelligence collection, but subversion: Active measures to weaken the West, to drive wedges in the Western community alliances of all sorts, particularly NATO, to sow discord among allies, to weaken the United States in the eyes of the people of Europe, Asia, Africa, Latin America, and thus to prepare ground in case the war really occurs. To make America more vulnerable to the anger and distrust of other peoples.2
On March 30, 2017, the Senate Intelligence Committee, investigating Russian interference in the 2016 election, heard testimony from various experts in Russian Active Measures. One of those who testified was former FBI special agent Clint Watts, senior fellow at the Foreign Policy Research Institute program on national security. Watts described the scope of the Active Measures program:
While Russia certainly hopes to promote western candidates sympathetic to their worldview and foreign policy objectives, winning a single election is not their end goal. Russian Active Measures hope to topple democracies through the pursuit of five complementary objectives.
1) Undermine citizen confidence in democratic governance.
2) Foment or exacerbate divisive political fissures.
3) Erode trust between citizens and elected officials and their institutions.
4) Popularize Russian policy agendas within foreign populations.
5) Breed general distrust or confusion over information sources by blurring the lines between fact and fiction.
From these objectives the Kremlin can crumble democracies from the inside out, achieving two key milestones:
1) The dissolution of the European Union.
2) The breakup of NATO.
The ambition of the project is breathtaking: the breakup of NATO and a return to the Cold War status quo of “Mother Russia” safely surrounded by a buffer of allies and proxies, much as it was in the days of the Soviet Union.
The high-level Russian strategy here is to diminish the strength of the US and NATO by making the US look bad in the eyes of its allies, and by creating dissent and distrust of authority within the US itself. One way of doing this is to spread conspiracy theories. If you can convince more people that 9/11 was an inside job, or that Sandy Hook was a hoax, or that Chemtrails are real, then this increases the number of people who are hyper-distrustful of the government. The prevalence of conspiracy theories also makes the US look less and less credible in the eyes of its allies, undermining its standing on the world stage.
You might argue that this is itself a far-fetched conspiracy theory. Russia trying to dismantle NATO by promoting Chemtrails in the West does sound rather ridiculous. But look at the evidence. RT, the Russian propaganda outlet, has published many articles and done many interviews on the topic of 9/11 Truth and other conspiracy theories, providing outlets with wide reach to Truthers like Richard Gage and Jesse Ventura:
10 March 2010, RT: “Americans continue to fight for 9/11 Truth.”3
Richard Gage is the founder of ‘Architects and Engineers for 9/11 Truth,’ which consists of more than 1,100 professionals who say it was not planes that caused three buildings to collapse at the World Trade Center.
“The buildings were demolished by explosives. More than one thousand architects and engineers are demanding Congress launch a new subpoena powered investigation considering our evidence.”
10 March 2010, Jesse Ventura on RT: “For some, the search for what happened on 9/11 isn’t over.”4
I did work four years as part of the Navy’s underwater demolition teams, where we were trained to blow things to hell and high water. And my staff talked at some length with a prominent physicist, Steven E. Jones, who says that a “gravity driven collapse,” without demolition charges, defies the laws of physics.
RT frequently interviews Gordon Duff, who has supported claims that Sandy Hook was a staged “false flag,”5 regularly referring to him as a military expert. In the opinion section of RT there are articles on Chemtrails,6 and even an article from a believer in the Flat Earth theory.7 For a pseudo-mainstream news source (RT claims a weekly audience of 8 million people in the US),8 there’s a lot of bunk being spread around.
As I write Escaping the Rabbit Hole, there is still considerable dispute as to the scale of the Russian influence on the 2016 presidential elections. The intelligence community and most of Congress seem to have little doubt that Russian interference happened, but social media, fed by the executive branch, is afire with denials that anything is going on, calling talk of Russian involvement a “nothing-burger” and “debunked.” Such denials are what you would expect if there was actually a Russian troll army steering discourse on the topic.
But regardless of the present details, it is beyond doubt that extensive Russian programs of propaganda and subversion have existed in the past, and will continue to exist in the future. While we might view this as a largely Russian-specific phenomenon, it is going to become part of the cyber arsenal of any country, large or small, if it is not already. We should expect to see it from China, Pakistan, North Korea, and other nations and large actors. We should also expect that the US will respond in kind. Given the preeminent role that social media now has in how people acquire information and form opinions, it is inevitable that a large part of foreign subversion efforts, their active measures, will be directed towards Facebook and Twitter. We know about trolls like Jenna Abrams, but the future of disinformation, and of the spread of conspiracy theories, lies in what we are going to see more and more of: artificial intelligence and bots.
Bunk Bots
A “bot” is an internet robot. Not a physical robot in the sense of waving metal arms and beep-beep communications, but rather something that exists only in the cloud, on a computer somewhere. It’s an artificially intelligent program designed to do something simple that a person might do on the internet. Originally these were mundane tasks like indexing web pages, or responding to simple customer service queries, or trading stocks.
Bots can also be programmed to perform simple repetitive tasks that would be too boring or time-intensive for humans to do. Many of these tasks are near or beyond the border of legality.9 Bots can stuff online ballot boxes by voting multiple times in online polls. Bots can be programmed to “watch” a particular YouTube video repeatedly to boost views,10 or download an iPhone game in order to push it up the charts. These uses of bots have been widely documented for over two decades, and the fight against them is an ongoing arms race.
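The mechanics are simple enough to sketch. In this toy Python script the poll endpoint is stubbed out (a real bot would make actual network requests, often rotating IP addresses to evade rate limits), but it shows how a loop turns one trivial human action into thousands:

```python
import time

def cast_vote(poll_option):
    """Stub for the network request a real poll-stuffing bot would make."""
    return True  # pretend the vote was accepted

def run_poll_bot(option, votes_wanted, delay=0.0):
    """Repeat a trivially simple human action -- voting -- at machine speed."""
    accepted = 0
    for _ in range(votes_wanted):
        if cast_vote(option):
            accepted += 1
        time.sleep(delay)  # a real bot paces itself to look human
    return accepted

# One machine can "vote" a thousand times while a human votes once.
print(run_poll_bot("Option A", 1000))  # → 1000
```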
Bots are increasingly programmed to post on social media. In 2014, a petition appeared on Whitehouse.gov, advocating the return of Alaska to Russia. This initially seemed like a joke, but not only did Russian bots appear to be voting on the petition, they were also posting links to it on social media, encouraging other people to vote.11 Similarly, on election night in 2016, the hashtag #Calexit began trending after it became clear that Donald Trump was going to be president.12 Some of this was actual Californians expressing their displeasure at the result, but a significant portion of the #Calexit tweets were traced back to automated accounts suspected of being linked to the Russian Internet Research Agency troll and bot network.
While a single operative (like @Jenn_Abrams) can be very effective in promoting a message, it takes a long time to build up an account like that organically. Using a bot army can get the job done a lot faster by artificially making it seem like a story has “gone viral.” Thousands of bot accounts on Facebook were used to amplify stories in this manner in the lead-up to the 2016 presidential election. Facebook now estimates that 126 million Americans may have seen content on Facebook that was uploaded by Russian-based trolls and magnified by the shares of bot armies.13
The dumb bot armies of today are effective at what they do, but they are quickly being made redundant by the next wave: bots with significant amounts of artificial intelligence.
Artificial Intelligence
Zeynep Tufekci, in her TED talk on the dystopian future of social media algorithms, says:
We are not programming anymore. We are growing artificial intelligence that we don’t truly understand.14
Back when I was learning game programming (thirty years ago), it was all about algorithms. An algorithm is a series of logical steps, decisions, and loops that a computer program follows to produce what you see on screen. When you programmed a behavior for some character in the game, the entire behavior was encapsulated in the code. You would create an algorithm just for that particular behavior. You as a programmer understood that algorithm, and you understood the results. It was done that way largely because of the limits in the computer’s speed and memory.
As machines got more and more powerful over the years we shifted to different approaches. Artificial intelligence began to be less driven by hard-coded algorithms and more driven by data. You’d write a more general purpose algorithm to define a range of behaviors, and then apply various numbers (the data) to that. Eventually even the generation of that data became automated. Game designers would “show” the AI how to act in a certain way and it would figure out the data that best matched that behavior. The AI could observe human players and try to act like them. It could also play games against itself to improve its skills.
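The shift can be caricatured in a few lines of Python. All the names and numbers here are invented: in the old style, a guard character’s chasing behavior is baked into the code; in the data-driven style, the same general algorithm reads its speed from data, and that data can be derived from recorded human play:

```python
# Old style: the behavior is entirely encapsulated in hand-written code.
def chase_hardcoded(guard_x, player_x):
    if player_x > guard_x:
        return guard_x + 2          # speed of 2, chosen by the programmer
    return guard_x - 2

# Data-driven style: one general algorithm; the behavior lives in the numbers.
def chase(guard_x, player_x, params):
    speed = params["speed"]         # came from data, not from the programmer
    return guard_x + speed if player_x > guard_x else guard_x - speed

# "Showing" the AI how to act: derive the data from recorded human moves.
def fit_params(observed_moves):
    avg_speed = sum(abs(m) for m in observed_moves) / len(observed_moves)
    return {"speed": avg_speed}

params = fit_params([2, 4, -3, 3])  # hypothetical recorded human play
print(chase(0, 10, params))         # → 3.0
```

The same `fit_params` step could just as well run on data the AI generated by playing against itself, which is the sense in which the data's generation itself becomes automated.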
Now, after a game is released, the data that controls the AI can continue to be tweaked, modifying itself (via the game developer’s central server) to do whatever the players seem to enjoy the most. The AI evolves to become a better, more addictive game that the players will continue to pay for.
In social media platforms like Facebook, Twitter, and YouTube, there are data-driven algorithms that decide what to show you next (the behind-the-scenes code that determines what “autoplay” will show next). These algorithms are ultimately designed to make money. They do this firstly by keeping you on the site, showing you content that the algorithm thinks people like you would watch (given your demographic, internet history/cookies, etc.). Secondly, they are designed to make you buy stuff, which they do by showing you things that the algorithm has determined will make you spend money.
Nobody really understands exactly how these algorithms work. Sure, they understand (more or less) the code. But even the code is often written by multiple people, sometimes hundreds of people. Google employs twenty-five thousand developers, who make significant changes to the code forty-five thousand times a day.15 Programmers don’t often write entire things from scratch anymore. They write pure algorithms from time to time, but most programming now consists of either working on small parts of a large program or gluing together existing libraries of code that other people wrote.
But the real mystery of these emergent algorithms comes from the data. The decisions that Facebook and YouTube make when deciding what to show you next are not simply based on the data of your browsing history, your credit rating, your location, and your age (although they will use all that if they have it). The decisions now are based on big data, the aggregated data of all the users.
The data that drives the algorithms isn’t just a few numbers now. It’s monstrous tables of millions of numbers, thousands upon thousands of rows and columns. In a somewhat ironic twist, each of the individual tables that make up the larger dataset is technically defined as a “matrix.” Matrices are created and refined by computers endlessly churning through Big Data’s records on everyone and everything they’ve done. No human can read those matrices; even with computers helping to interpret them, they are simply too large and complex to fully comprehend. But the computers can use them, applying the appropriate matrix to show us the appropriate video that will eventually lead us to make an appropriate purchase. We are not living in The Matrix, but there’s still a matrix controlling us.
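A drastically simplified sketch of the idea, with made-up numbers: reduce a user’s history to a short vector of interest weights, store each candidate video as a matching row of a (here, tiny) matrix, and “what to autoplay next” falls out of the multiplication. The real matrices have millions of rows and the weights are learned, not hand-typed:

```python
# Hypothetical interest weights distilled from one user's watch history,
# e.g. (politics, sports, "alternative" content).
user = [0.9, 0.1, 0.4]

# A tiny slice of the matrix: one feature row per candidate video.
videos = {
    "rally_clip":     [0.8, 0.0, 0.3],
    "football_recap": [0.0, 0.9, 0.0],
    "conspiracy_doc": [0.6, 0.0, 0.9],
}

def score(features):
    # Dot product of the user vector with one row of the matrix.
    return sum(u * f for u, f in zip(user, features))

# "Autoplay next" is just the highest-scoring row.
next_up = max(videos, key=lambda name: score(videos[name]))
print(next_up)  # → conspiracy_doc
```

Note that nobody wrote a rule saying “recommend conspiracy content”; the numbers alone make the conspiracy video outscore the straight news clip for this user.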
What does this have to do with the rabbit hole of conspiracy theories? It has everything to do with it. These algorithms are quickly becoming the primary route down the rabbit hole. To a large extent this has already happened, but it’s going to get far, far worse. Tufekci described what happened when she tried watching different types of content on YouTube. She started out by watching videos of Donald Trump rallies.
I wanted to write something about one of Donald Trump’s rallies, so I watched it a few times on YouTube. YouTube started recommending to me, and autoplaying to me, white supremacist videos, in increasing order of extremism. If I watched one, it served up one even more extreme.
If you watch Hillary Clinton or Bernie Sanders content, YouTube recommends and autoplays left-wing conspiracy videos, and it goes downhill from there.
Downhill, into the rabbit hole. The data-driven algorithm has evolved to recognize that the way to get people to watch more videos is to direct them downhill, down the path of least resistance. Without human intervention the algorithm has evolved to perfect a method of gently stepping up the intensity of the conspiracy videos that it shows you so that you don’t get turned off, and so you continue to watch. They get more intense because the algorithm has found (not in any human sense, but found nonetheless) that the deeper it can guide people down the rabbit hole, the more revenue it can make.
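The stepping-up strategy the algorithm has found can be caricatured in a few lines. In this toy model (the step size and the zero-to-one intensity scale are invented), the only signal is whether the viewer finished the last video, and the emergent behavior is a steady ratchet toward more extreme content:

```python
def next_intensity(current, watched_to_end, step=0.1):
    """The 'found' strategy: if the viewer finished the last video, serve
    something slightly more extreme; if they bailed out, ease off."""
    if watched_to_end:
        return min(current + step, 1.0)
    return max(current - step, 0.0)

# A hypothetical viewing session: the viewer keeps watching, so the
# recommended intensity gently climbs from 0.2 toward the extreme end.
level = 0.2
for finished in [True, True, True, True, False, True]:
    level = next_intensity(level, finished)
print(round(level, 1))  # → 0.6
```

Each small step is mild enough not to turn the viewer off, which is exactly why the cumulative drift downhill goes unnoticed.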