We’ve seen how conspiracy theorists like to watch conspiracy theory videos, and how this is now the primary route into the rabbit hole. They watch long videos, and they watch them multiple times. They seek out similar videos. The more convinced they are of the correctness of their belief, the more they enjoy watching videos that reflect, reinforce, and confirm that belief. It’s a positive feedback loop at its finest.
The giants of social media have unwittingly developed algorithms, a matrix finely tuned to trap people in that loop, suck them down the rabbit hole, and keep them there. Even if your friend was never into conspiracy theories, he might have some personality factors that make him a bit more likely to believe certain videos, factors the algorithm can latch onto. But even if he’s just a regular guy, YouTube’s blind algorithm figures him out, finds his soft spots, tweaks a matrix just for him, and starts to tease him down. The cycle repeats, escalating in intensity both for that one individual and for the larger system at play.
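To make that loop concrete, here is a minimal sketch of an engagement-driven recommendation loop of the general kind described above. It is purely illustrative: the scoring rule, the topic labels, and the numbers are invented, not YouTube’s actual system.

    # Purely illustrative sketch of an engagement-driven recommendation loop.
    # The scoring rule, topics, and numbers are invented, not YouTube's system.

    def score(video, profile):
        # Predicted engagement: the video's average watch time across all users,
        # boosted by this user's affinity for its topics.
        affinity = sum(profile.get(t, 0.0) * w for t, w in video["topics"].items())
        return video["avg_watch"] * (1.0 + affinity)

    def watch_and_learn(profile, video, watch_fraction, rate=0.2):
        # Nudge the profile toward the topics of whatever actually got watched.
        for topic, weight in video["topics"].items():
            profile[topic] = profile.get(topic, 0.0) + rate * watch_fraction * weight

    videos = [
        {"title": "Fix a leaky faucet", "avg_watch": 0.3, "topics": {"diy": 1.0}},
        {"title": "What THEY won't tell you", "avg_watch": 0.9,
         "topics": {"conspiracy": 1.0}},
    ]
    profile = {"diy": 0.5}  # a regular guy with no conspiracy history
    for step in range(4):
        pick = max(videos, key=lambda v: score(v, profile))
        watch_and_learn(profile, pick, watch_fraction=pick["avg_watch"])
        print(step, pick["title"], round(score(pick, profile), 2))
    # Each watch of the high-engagement video raises its score further:
    # the positive feedback loop in miniature.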
Increasingly this type of behavior is not even something we can observe. The algorithm targets a demographic of one—that person, perhaps your friend, perhaps you. The results are tailored for that one person, and only that one person sees the breadcrumbs that the artificial intelligence is scattering at the entrance to their own privately curated rabbit hole.
Intelligent Chatbots
More insidious manipulation is being implemented in the form of chatbots and fake people. A chatbot is, as it sounds, a type of bot designed to chat with people. Chatbots have been around for decades. Initially the focus was academic, trying to get them to sound like humans. Later they were used in things like customer support and telemarketing, where instead of a tired person in a call center working off a script you get an even less helpful (but cheap and tireless) bot working off a script. This type of bot increasingly uses the Facebook Messenger platform. In 2017, over 100,000 Facebook Messenger bots were created, four times the previous year’s total.
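For a sense of how primitive the scripted variety can be, here is a toy example. The script and keyword matching are invented for illustration; commercial bots on platforms like Messenger are more sophisticated, but the working-off-a-script principle is the same.

    # A toy script-driven chatbot of the customer-support kind described above.
    # The script and keyword matching are invented for illustration.

    SCRIPT = {
        "refund": "I'm sorry to hear that. I can start a refund request for you.",
        "hours": "We're open 9am to 5pm, Monday through Friday.",
        "human": "Let me connect you with a human agent.",
    }
    FALLBACK = "I'm not sure I understood. Could you rephrase that?"

    def reply(message):
        # Return the first scripted response whose keyword appears in the message.
        text = message.lower()
        for keyword, response in SCRIPT.items():
            if keyword in text:
                return response
        return FALLBACK

    print(reply("What are your hours this week?"))  # scripted answer
    print(reply("My package never arrived"))        # falls through to the fallback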
The next inevitable stage is chatbots that assume a fake persona, setting up social media accounts and chatting with people online with the intent of manipulating them in some way. This type of manipulation already exists with humans (“trolls”) creating content and then using bots to do the heavy lifting of repeated shares and likes, but once the entire process becomes automated it will allow an operation that’s several orders of magnitude larger and more effective.
Chatbots will increasingly act like real people. Not only will they be able to chat over text messages and posts, they will be able to engage in voice chat. Within a decade or so, passably convincing video chat with fake personas is quite possible. Bots will even be able to “take” photos and videos by synthesizing images, creating 3D models of their fake selves, and inserting themselves into existing photos and video. Eventually they will synthesize entire fake worlds.
Imagine then what is to come: a charismatic and persuasive “person” on the internet is going to befriend you, gain your trust, and then start to manipulate your beliefs for nefarious purposes. It would be like having your own personal Alex Jones or Dane Wigington talking directly to you, face-to-face, on a personal level, about 9/11 or Chemtrails. It might even pretend to be Alex Jones, or Neil deGrasse Tyson, or Jesus, whatever its algorithm figures out will likely work for you. It’s not just you individually; millions of other people will be simultaneously targeted by fake people who never sleep and who spend every second trying to figure out how to push their targets closer to being the people the bots want them to be.
This will happen, to some degree. But it’s not going to be an automatic bot-apocalypse. Bots work because they are able to open accounts on major social media platforms like Twitter and Facebook. The simplest defense against AI taking over the soul of the country is to restrict social media accounts to actual people. There’s already a push towards doing this, for example from businessman and potential presidential candidate Mark Cuban, who tweeted (from his verified account):
It’s time for @twitter to confirm a real name and real person behind every account, and for @facebook to get far more stringent on the same. I don’t care what the user name is. But there needs to be a single human behind every individual account.16
There are problems with this—in particular in countries with oppressive governments where online anonymity might be a matter of life or death. There will also be an increase in identity theft, as the bots seek to become “real” people by assuming their identities. But ultimately, one way or the other, social media companies will have to deny the bot armies their life-blood of free and unverified accounts. Hopefully history will look back on the early 2020s only as a brief period of confusion between real people and virtual people. Eventually the separation will be enforced, by necessity.
The Fight against Misinformation
What other weapons do we need in the coming battle against AI-driven misinformation and conspiracy theories? The trend seems inexorable, but with the growth of misinformation has come the attention of more serious figures in the media/technology landscape.
In 2016 Mark Zuckerberg, CEO of Facebook, scoffed at the idea that trolls posting fake news on Facebook would have any impact on the presidential election, calling it “crazy”:
Personally I think the idea that fake news on Facebook, of which it’s a very small amount of the content, influenced the election in any way is a pretty crazy idea. Voters make decisions based on their lived experience. … There is a certain profound lack of empathy in asserting that the only reason someone could have voted the way they did is because they saw some fake news.17
But in 2017, after internal investigations into the scope of Russian promotion of fake news stories, he walked that back.
Calling that crazy was dismissive and I regret it. This is too important an issue to be dismissive.18
Facebook has long been aware of the problems caused by AI spam bots and fake content aggregators—not so much as an existential threat to Western society, but as a much more mundane threat to their already shaky income stream. Facebook’s revenue is based on advertising. Bots pose a problem in two ways. Firstly, bots are not people and they don’t spend money, so if a bot is on Facebook it’s ignoring the ads but still costing Facebook money. If a significant portion (say 10 percent) of the traffic on Facebook were actually AI bots, that would be a huge expense for Facebook, potentially over $100 million a year in server costs.
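The back-of-the-envelope arithmetic behind that estimate looks something like this; the total infrastructure figure is an assumption chosen for illustration, not a reported Facebook number.

    # Back-of-the-envelope version of the estimate above. The total
    # infrastructure cost is an assumed, illustrative figure, not a
    # reported Facebook number.
    assumed_annual_infrastructure_cost = 1_500_000_000  # USD, servers and data centers
    bot_share_of_traffic = 0.10                         # the "say 10 percent" above

    wasted_on_bots = assumed_annual_infrastructure_cost * bot_share_of_traffic
    print(f"Annual cost of serving bots: ${wasted_on_bots:,.0f}")  # $150,000,000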
Secondly, people do not like bots. People don’t want their feeds clogged up with crap, so if Facebook becomes a swamp full of bot-spun dross then people are less likely to use it as a source of information, and so will spend less time there. This is especially true for users with higher education and higher net worth who are more desirable as a target audience demographic. For Facebook to maintain a reasonable quality audience, and hence a viable revenue stream, they need to maintain a reasonable level of quality in the information that shows up on that audience’s news feed.
With the slow realization in 2017 of the scope of Russian involvement via Facebook, there’s also a new imperative—the very real possibility of government regulation of the dissemination of foreign propaganda over social media. Facebook knows this is a possibility but does not know what form it will take. It’s in their best interest to get their own house in order as much as possible. There is internal disagreement about aspects of this, according to the New York Times:
One central tension at Facebook has been that of the legal and policy teams versus the security team. The security team generally pushed for more disclosure about how nation states had misused the site, but the legal and policy teams have prioritized business imperatives.19
Facebook is a corporation made up of people, human beings who recognize that there is a potential for harm to society if we continue to slide down the rabbit hole of misinformation, fake news, and propaganda. It’s not the world they want. While there’s certainly a profit motive in their actions, part of it is also wanting the world to be a better place for future generations.
(I recognize how insane that paragraph will sound to the hardened conspiracy theorist who thinks of Facebook as an extension of the Illuminati New World Order, but I would hope that if they have read this far in the book they would at least be part way out of the rabbit hole. I accept the mockery of those who randomly opened the book on this page.)
What did Facebook do? In early 2017 they contracted with a number of outside agencies to fact check articles posted on Facebook, and to flag fake news. Those agencies included Snopes, FactCheck, PolitiFact, ABC News, and the Associated Press. Workers at those agencies have access to a dashboard that shows trending stories on Facebook, and they can click a box indicating it’s disputed (or not) and provide a link to their own site where there’s a debunking or explanatory article.20
The problem here is one of scale. While it sounds good in principle that there’s this distinguished team of fact-checkers, there are millions of links being shared on Facebook. Facebook only has a certain number of humans who can check those links, so only the most popular links get checked. By the time the checkers get to a story, it has already been seen by thousands (or possibly millions) of people. If it’s a breaking story, it can take several days to perform a real fact-check and actually put the stamp of “disputed” next to the story.
The process needs to be streamlined, and AI needs to be incorporated to identify trending fake news (which includes conspiracy theories) much earlier in the cycle. This is the subject of much current research.
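One way such triage might work is sketched below: surface the stories that are both spreading fastest and most likely to be fake, and send only those to the limited pool of human fact-checkers. The share-velocity heuristic and the upstream “fake probability” score are invented placeholders, not Facebook’s actual system.

    # Sketch of automated triage for human fact-checkers. The velocity
    # heuristic and the upstream "fake probability" score are invented
    # placeholders, not Facebook's actual system.
    from dataclasses import dataclass

    @dataclass
    class Story:
        url: str
        shares_last_hour: int
        shares_prev_hour: int
        fake_probability: float  # output of some upstream classifier

    def priority(story):
        # Rank stories that are both spreading fast and likely to be fake.
        velocity = story.shares_last_hour / max(story.shares_prev_hour, 1)
        return velocity * story.fake_probability

    def triage(stories, reviewers_available):
        # Send only the top few stories to the human fact-checking dashboard.
        return sorted(stories, key=priority, reverse=True)[:reviewers_available]

    queue = triage([
        Story("news.example/city-budget", 800, 700, 0.05),
        Story("hoax.example/miracle-cure", 5000, 500, 0.90),
    ], reviewers_available=1)
    print([s.url for s in queue])  # ['hoax.example/miracle-cure']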
_______
Facebook chose outside agencies to do fact-checking to avoid the accusation of bias. People of all stripes distrust Facebook. Conservatives think it has a liberal bias, liberals and libertarians think it’s spying on them for corporations, conspiracy theorists think it’s part of a plot to identify and track them before they are herded into FEMA camps. Facebook policing its own content is not going to play well—especially with the conspiracy theorists who by their very nature are going to be sharing a lot of misinformation that will then get flagged by Facebook.
Facebook attempted to use neutral third parties. For certain audiences this was doomed from the start. Fact checkers like Snopes, FactCheck, and PolitiFact are already considered suspect. Facebook teaming up with Snopes is unfortunately going to be a laughable concept for someone who thinks both organizations have been “debunked” as Illuminati tools.
But for the wider population, Facebook’s first efforts, clumsy though they were, were commendable. One very good step they took was requiring that all fact-checking organizations they used were either major news organizations with a history of neutral reporting (ABC News and the Associated Press) or certified members of the International Fact-Checking Network (IFCN).
IFCN is run by the Poynter Institute, a venerable school of journalism that also owns the St. Petersburg Times Company in Florida. Poynter set up the IFCN in September 2015 to promote best practices in fact-checking. To be certified by Poynter means that an organization has been vetted and found to conform to a quite rigorous checklist that ensures they are nonpartisan, fair, transparent, open, and honest. This is reflected in the IFCN code of principles:
1. Nonpartisanship and fairness
2. Transparency of sources
3. Transparency of funding and organization
4. Transparency of methodology
5. Open and honest corrections
Unfortunately, Facebook quickly found this was more complicated than they initially thought. In December 2017, after internal experiments, they discovered that flagging stories as “disputed” could actually lead to them being shared more than before, in part due to a kind of backfire effect.21 Now, instead of being warned that Snopes and PolitiFact “dispute” the story, the user is told that there is “additional reporting.” This approach avoids a backfire effect, makes it more likely the user will read the “additional reporting,” and allows Facebook to link to more nuanced analyses that are not simply “debunking” a claim, but rather providing a more accurate overview of what might be a highly complex and uncertain situation or topic. No doubt their approach will continue to evolve rapidly.
Another large corporation with an interest in filtering out fake news is Google. Google’s main source of revenue is advertising based on Google.com search results, so Google has a very strong profit motive to make sure that the results it returns are as high quality as possible. Google is taking initial steps towards automated fact checking with a system that allows publishers to flag rebuttals to articles.22 Google then uses an automated algorithm to attempt to match good quality rebuttals with the claims. Initial reviews of this system suggest it is somewhat random, which can lead to it being perceived as biased.23
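The flagging itself is typically done with structured markup that publishers add to their fact-check pages; one widely used format is schema.org’s ClaimReview. The example below is a rough sketch with placeholder values (the exact fields search engines require have changed over time), shown here as a Python dictionary rendered to JSON-LD.

    # Rough sketch of ClaimReview-style markup a publisher might attach to a
    # fact-check article. All values are placeholders, and the exact fields
    # search engines require have evolved over time.
    import json

    claim_review = {
        "@context": "https://schema.org",
        "@type": "ClaimReview",
        "claimReviewed": "Aircraft contrails are secret chemical spraying",
        "author": {"@type": "Organization", "name": "Example Fact Checker"},
        "reviewRating": {"@type": "Rating", "ratingValue": 1, "bestRating": 5,
                         "alternateName": "False"},
        "url": "https://factchecker.example/contrails",
        "datePublished": "2018-01-15",
    }
    print(json.dumps(claim_review, indent=2))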
Microsoft’s Bing search engine is smaller than Google’s, yet it still commands a significant percentage of the market. Microsoft themselves say they have 33 percent of the US search market (including Yahoo, which uses Bing) and 9 percent of the global market.24 In 2015, Microsoft researcher Danah Boyd founded the Data & Society Research Institute, in part to help address the issues being raised by increased data automation and artificial intelligence.25 It’s mostly research initiatives so far.
A similar organization is The Trust Project, initially funded by philanthropist Craig Newmark, who is famous for Craigslist, an internet sales platform with bot problems of its own. The Trust Project is developing a series of “trust indicators” to allow Google, and others, to rank the quality of information.26 The indicators are things like the expertise of the author, the citations and references used, the reporting methods, and the outlet’s feedback and corrections policy.
A lot of these initiatives are essentially experimental. We don’t know yet what the best way of addressing the issue is, but it’s heartening that a diverse range of large organizations appear to be taking the problem very seriously and devoting considerable resources to dealing with it.
Smaller companies are also springing up to join the fight, recognizing that there is money to be made from aiding the giants in their battle. These focus on the use of artificial intelligence to identify and flag fake news and misinformation. One such company is AdVerif.ai, which is developing what it calls “FakeRank,” a way of automatically doing what the Trust Project does manually—quantifying and measuring the reliability of an outlet or an individual story.27
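Neither the Trust Project nor AdVerif.ai publishes its formula, so the following is purely an illustration of how indicator-based scoring could work in principle; the indicators and weights are invented.

    # Purely illustrative indicator-based reliability score. The indicators
    # and weights are invented; neither the Trust Project nor AdVerif.ai
    # publishes its actual formula.
    TRUST_WEIGHTS = {
        "author_identified": 0.2,    # named author with relevant expertise
        "cites_sources": 0.3,        # links to primary sources or data
        "methods_disclosed": 0.2,    # explains how the reporting was done
        "corrections_policy": 0.1,   # outlet publishes and follows one
        "outlet_track_record": 0.2,  # history of accurate reporting
    }

    def reliability_score(indicators):
        # Weighted sum of 0-to-1 indicator values, returned on a 0-to-1 scale.
        return sum(weight * indicators.get(name, 0.0)
                   for name, weight in TRUST_WEIGHTS.items())

    story = {"author_identified": 1.0, "cites_sources": 0.5,
             "corrections_policy": 1.0, "outlet_track_record": 0.8}
    print(round(reliability_score(story), 2))  # 0.61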
Another company is MachineBox, an AI technology developer that trained its natural language processing module to recognize fake news with a 95 percent success rate.28 Cofounder Aaron Edell developed this in a semi-manual way by first curating sets of real and fake news and then letting the AI try to figure out which was which. While time-consuming for an individual, it’s an approach that shows great promise, and it should scale very well for AI-enabled bots.
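The curate-then-train idea is easy to sketch with an off-the-shelf library. The snippet below uses scikit-learn rather than MachineBox’s own tools, and the handful of training examples are placeholders standing in for the thousands of curated articles a real effort would need.

    # Sketch of the curate-then-train approach, using scikit-learn rather
    # than MachineBox's own tools. The training examples are tiny
    # placeholders; a real effort would curate thousands of labeled articles.
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.linear_model import LogisticRegression
    from sklearn.pipeline import make_pipeline

    texts = [
        "City council approves budget after public hearing",            # real
        "Quarterly earnings beat analyst estimates, company reports",   # real
        "SHOCKING: doctors hide this one weird cancer cure",            # fake
        "Insider says secret government program controls the weather",  # fake
    ]
    labels = ["real", "real", "fake", "fake"]

    # TF-IDF turns each article into word-frequency features; the classifier
    # then learns which word patterns separate the two curated sets.
    model = make_pipeline(TfidfVectorizer(), LogisticRegression())
    model.fit(texts, labels)

    print(model.predict(["Doctors are hiding the secret cure, insider says"]))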
In March 2018, YouTube CEO Susan Wojcicki announced that the company would be experimenting with automatically adding what they call “information cues” to popular conspiracy theory videos.29 These will be small pop-up excerpts of directly relevant articles from Wikipedia. While this is unlikely to have much instant effect on people who think Wikipedia is part of the conspiracy, it will directly expose people to information that they would not otherwise have sought out by themselves. It should act at least as friction to prevent people falling down the rabbit hole due to lack of alternate information. It might even help people out; their perspective improves the more they know—even if they initially reject the “official” story.
A Hopeful Future
At the time of writing, in 2018, there is much to be discouraged about in the world of disinformation. It will probably get worse before it gets better. But I am encouraged by the efforts of the major players in the information sphere, especially in social media, to push back against the tide.
I am also hopeful that the large-scale push against more general disinformation will have ancillary benefits in the fight against the more extreme types of conspiracy theories that we’ve discussed in this book.
All disinformation is a conspiracy theory because disinformation always comes with the implication that you have been lied to, and usually by people with some form of power over you. Sometimes it’s politicians, sometimes it’s corporations, sometimes it’s their supposed pet scientists, but there’s always a supposed conspiracy.
Consider the popular political disinformation of the last few years. There were supposed conspiracies to cover up Obama’s birthplace, or Hillary Clinton’s health, or pizza-related pedophile rings, or how many people the Clintons had assassinated, or how much the Russians were involved in the election. The nature of these conspiracy theories flip-flops depending on what you believe about them, but either way there’s a false conspiracy theory involved on one side or the other. There are also real conspiracies.
The fight against misinformation is at its root a fight against the spread of false conspiracy theories. Large-scale efforts to prevent the spread of the more banal misinformation and disinformation are also going to eventually slow the spread of false theories like Chemtrails or 9/11 controlled demolition. The push for better political fact checking will, directly and indirectly, lead to a reduction in pseudoscience, medical quackery, and conspiracism.
Automation and AI will be key. Right now, debunking can be a very labor-intensive and repetitive effort. We can maximize our efforts by creating accessible debunks that can easily be found by search engines, but if people don’t look for the debunk, then they are not going to see it. When people post misinformation in public forums, we need tools and automated systems to make the debunking and fact checking instantly available.
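A sketch of what such a tool might do: when a link is posted, check it against a database of existing debunks and attach the match automatically. The database, URLs, and normalization rule here are invented for illustration.

    # Sketch of an automated helper that attaches an existing debunk to a
    # newly posted link. The database, URLs, and normalization rule are
    # invented for illustration.
    from urllib.parse import urlparse

    KNOWN_DEBUNKS = {
        "hoax.example/chemtrail-proof": "https://debunks.example/contrails-explained",
        "hoax.example/wtc7-demolition": "https://debunks.example/wtc7-collapse",
    }

    def normalize(url):
        # Strip the scheme and query string so minor URL variations still match.
        parsed = urlparse(url)
        return parsed.netloc + parsed.path

    def find_debunk(posted_url):
        return KNOWN_DEBUNKS.get(normalize(posted_url))

    print(find_debunk("https://hoax.example/chemtrail-proof?share=1"))
    # -> https://debunks.example/contrails-explained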
There is much to be done. We are still very much in the middle of a war against weaponized false information. There is both much uncertainty and much promise in the future role of artificial intelligence in that war, and we need to watch that very carefully.