Chapter 31, Artificial Intelligence and Polarization, continued....

p125
Chaslot had heard of people tumbling down YouTube rabbit holes (the unneureal). But the conviction in the voice of this otherwise normal-seeming man bothered him. Were others falling victim? He set up a simple program, which he called Algo Transparency, to find out. The program entered a term, like the name of a politician, in YouTube’s search bar. Then it opened the top results. Then each recommendation for what to watch next. He ran huge batches of anonymized searches, one after another, over late 2015 and much of 2016, looking for trends.
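Chaslot's method can be pictured as a small crawler walking the graph of recommendations. The sketch below is only an illustration of that idea, not his actual code; search_videos and watch_next are hypothetical placeholder functions standing in for however Algo Transparency actually queried YouTube's public pages.

```python
# Illustrative sketch of a recommendation crawl in the spirit of Algo
# Transparency. search_videos() and watch_next() are hypothetical
# placeholders for however the real program queried YouTube.
from collections import Counter

def crawl_recommendations(term, search_videos, watch_next,
                          top_n=5, per_video=3, depth=2):
    """Follow watch-next chains from a search term and tally how often
    each video is surfaced, by search or by recommendation."""
    frontier = list(search_videos(term))[:top_n]   # top search results
    counts = Counter(frontier)
    seen = set(frontier)
    for _ in range(depth):                         # follow recommendations outward
        next_frontier = []
        for video_id in frontier:
            for rec in list(watch_next(video_id))[:per_video]:
                counts[rec] += 1                   # one "vote" from the algorithm
                if rec not in seen:
                    seen.add(rec)
                    next_frontier.append(rec)
        frontier = next_frontier
    return counts.most_common()                    # most-pushed videos first

# Usage with canned data standing in for live scraping:
if __name__ == "__main__":
    fake_search = lambda term: ["v1", "v2", "v3"]
    fake_recs = {"v1": ["v4", "v5"], "v2": ["v4"], "v3": ["v5"],
                 "v4": ["v6"], "v5": ["v6"]}
    tallies = crawl_recommendations("pope francis", fake_search,
                                    lambda v: fake_recs.get(v, []),
                                    top_n=3)
    print(tallies)   # e.g., [('v4', 2), ('v5', 2), ('v6', 2), ...]
```

Run over many anonymized sessions, a tally of this kind is what let Chaslot see which videos the recommendation engine pushed hardest.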

What he found alarmed him. When he searched YouTube for Pope Francis, for instance, 10 percent of the videos it displayed were conspiracies.

p126
On global warming, it was 15 percent. But the real shock came when Chaslot followed algorithmic recommendations for what to watch next, which YouTube has said accounts for most of its watch time. A staggering 85 percent of recommended videos on Pope Francis were conspiracies, making claims about Francis’s “true” identity or purporting to expose Satanic plots at the Vatican. On global warming, the figure was 70 percent, usually calling it a hoax. On topics with few established conspiracies, the system seemed to conjure them up. When Chaslot searched “Who is Michelle Obama,” for instance, just under half of the top results and almost two-thirds of watch-next recommendations claimed the First Lady was secretly a man. Surely, he thought, whatever his disagreement with his former colleagues, they would want to know about this. But when he raised concerns privately with people he knew at YouTube, the response was always the same: “If people click on this harmful content, who are we to judge?”

Some inside Google, though, were reaching conclusions similar to Chaslot’s. In 2013, an engineer named Tristan Harris had circulated a memo urging the company to consider the societal impact of push alerts or buzzing notifications that tugged at users’ attention. As an alumnus of Stanford’s Persuasive Tech Lab, he knew their power to manipulate. Could all this cognitive training come at a cost? He was granted the title “design ethicist” but little power and, in 2015, quit, hoping to pressure the industry to change. At a presentation that year to Facebook, Harris cited evidence that social media caused feelings of loneliness and alienation, portraying it as an opportunity to reverse the effect. “They didn’t do anything about it,” he recounted to The New Yorker. “My points were in their blind spot.” He circulated around the Valley, warning that its A.I.s, a robot army bent on defeating each user’s control over their own attention, were waging an invisible war against billions of consumers.

Another Google employee, James Williams, who later wrote essays calling Gamergate a warning sign that social media would elevate Trump, had his reckoning while monitoring a dashboard that tracked users’ real-time interactions with ads. “I realized: this is literally a million people that we’ve sort of nudged or persuaded to do this thing that they weren’t going to otherwise do,” he has said. He joined Harris’s efforts inside Google until, like Harris, he quit. But rather than cajole the Valley, he tried to raise alarms with the public. “There’s no good analogue for this monopoly of the mind the forces of industrialized persuasion now hold,” he wrote. The world faced “a next-generation threat to human freedom” that had “materialized right in front of our noses.”

p128
But the influence of algorithms only deepened, including at the last holdout, Twitter. For years, the service had shown each user a simple, chronological feed of their friends’ tweets. Until, in 2016, it introduced an algorithm that sorted posts—for engagement, of course, and to predictable effect. “The average curated tweet was more emotive, on every scale, than its chronological equivalent,” The Economist found in an analysis of the change. The result was exactly what it had been on Facebook and YouTube: “The recommendation engine appears to reward inflammatory language and outlandish claims.”

To users, for whom the algorithm was invisible, these felt like powerful social cues. It was as if your community had suddenly decided that it valued provocation and outrage above all else, rewarding it with waves of attention that were, in reality, algorithmically generated. And because the algorithm down-sorted posts it judged as unengaging, the inverse was true, too. It felt as if your peers suddenly scorned nuance and emotional moderation, conveying that rejection implicitly by ignoring you. Users seemed to absorb those cues, growing meaner and angrier, intent on humiliating out-group members, punishing social transgressors, and validating one another’s worldviews. See Chapters 7, 15, 22-25, 28; the unneureal.

p134
In the coming months, digital watchdogs, journalists, congressional committees, and the outgoing president would all accuse social media platforms of accelerating misinformation and partisan rage that paved the way for Trump’s victory. The companies, after a period of contrition for narrower sins like hosting Russian propagandists and fake news, largely deflected. But in the hours after the election, the first to suspect Silicon Valley’s culpability were many of its own rank and file. At YouTube, when CEO Susan Wojcicki convened her shell-shocked staff, much of their discussion centered on concerns that YouTube’s most-watched election-related videos were from far-right misinformation shops like Breitbart and conspiracy theorist Alex Jones. Similar misgivings were expressed by Facebook employees. “The results of the 2016 Election show that Facebook has failed in its mission,” one Facebooker posted on the company’s internal message board. Another: “Sadly, News Feed optimizes for engagement. As we’ve learned in this election, bullshit is highly engaging.” Another: “Facebook (the company) Is Broken.”

p151
In a revealing experiment, Republicans were shown a false headline about the refugees (“Over 500 ‘Migrant Caravaners’ Arrested with Suicide Vests”). Asked whether it seemed accurate, most identified it as false; only 16 percent called it accurate. The question’s framing had implicitly nudged the subjects to think about accuracy. This engaged the rational parts of their mind, which quickly identified the headline as false. Subsequently asked whether they might share the headline on Facebook, most said no: thinking with their rational brains, they preferred accuracy.

But when researchers repeated the experiment with a different set of Republicans, this time skipping the question about accuracy to simply ask if the subject would share the headline on Facebook, 51 percent said they would. Focusing on Facebook activated the social part of their minds, which saw, in the same headline, the promise of identity validation— something the social brain values far beyond accuracy. Having decided to share it, the subjects told themselves it was true. “Most people do not want to spread misinformation,” the study’s authors wrote, differentiating willful lying from socially motivated belief. “But the social media context focuses their attention on factors other than truth and accuracy.” See Chapter 5

p154
Meanwhile, just as Chaslot joined DiResta and others in the public struggle to understand Silicon Valley’s undue influence, William Brady and Molly Crockett, the psychologist and neuroscientist, achieved a momentous breakthrough in that effort. They had spent months synthesizing reams of newly available data, behavioral research, and their own investigations. It was like fitting together the pieces of a puzzle that, once assembled, revealed what may still be the most complete framework for understanding social media’s effect on society.

The platforms, they concluded, were reshaping not just online behavior but underlying social impulses, and not just individually but collectively, potentially altering the nature of “civic engagement and activism, political polarization, propaganda and disinformation.” They called it the MAD model, for the three forces rewiring people’s minds. See Chapters 6-7, 15, 22-25, 28. Motivation: the instincts and habits hijacked by the mechanics of social media platforms. Attention: users’ focus manipulated to distort their perceptions of social cues and mores. Design: platforms that had been constructed in ways that train and incentivize certain behaviors.

p155
The digital-attention economy amplifies the social impact of this dynamic exponentially. Remember that the number of seconds in your day never changes. The amount of social media content competing for those seconds, however, doubles every year or so, depending on how you measure it. Imagine, for instance, that your network produces 200 posts per day, of which you have time to read 100. Because of the platforms’ tilt, you will see the most moral-emotional half of your feed. Next year, when 200 doubles to 400, you see the most moral-emotional quarter. The year after that, the most moral-emotional eighth. Over time, your impression of your own community becomes radically more moralizing, aggrandizing, and outraged—and so do you. At the same time, less innately engaging forms of content—truth, appeals to the greater good, appeals to tolerance—become more and more outmatched.
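The arithmetic of that crowding-out effect can be restated as a tiny calculation. The sketch below simply encodes the example’s own numbers, under the stated assumptions that the content pool doubles yearly and the feed always fills your fixed reading budget with the most emotionally charged posts first.

```python
# A toy model of the arithmetic above: the candidate pool doubles each
# year while your reading budget stays fixed, and the feed is assumed to
# always fill that budget with the most emotionally charged posts first.
READ_BUDGET = 100   # posts you have time to read per day (from the example)
pool = 200          # candidate posts per day in year 0

for year in range(4):
    visible_share = READ_BUDGET / pool
    print(f"Year {year}: {pool:4d} candidate posts -> you see only the top "
          f"{visible_share:.1%} most moral-emotional content")
    pool *= 2
# Year 0: 50.0%, Year 1: 25.0%, Year 2: 12.5%, Year 3: 6.2%
```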

p157
“Online platforms,” Brady and Crockett wrote, “are now one of the primary sources of morally relevant stimuli people experience in their daily life.” Billions of people’s moral compasses potentially tilted toward tribalism and distrust. Whole societies nudged toward conflict, polarization, and unreality—toward something like Trumpism. See Chapters 11-14, 24, 28. Brady did not think that social media was “inherently evil,” he told me. But as the platforms evolved, the effects only seemed to worsen. “It’s just gotten so toxic,” he said.

p164
For years after Rwanda’s genocide, American officials tormented themselves over hypotheticals. Could American warplanes have destroyed the radio towers in time to stop it? How would they locate the towers amid Rwanda’s jungles and mountain passes? How would they secure international authority? In Myanmar, there were never any such doubts. A single engineer could have shuttered the entire network as they finished their morning coffee. One million terrified Rohingya made safer from death and displacement with a few keystrokes. The warning signs were freely visible. Madden and others had given them the necessary information to act. They simply chose not to, even as entire villages were purged in fire and blood. By March 2018, the head of the United Nations’ fact-finding mission said his team had concluded that social networks, especially Facebook, had played a “determining role” in the genocide. The platforms, he said, “substantively contributed” to the hate destroying an entire population.

Three days later, a reporter named Max Read posed a question, on Twitter, to Adam Mosseri, the executive overseeing Facebook’s news feed. He asked, referring to Facebook as a whole, “honest question—what’s the possible harm in turning it off in myanmar?” Mosseri responded, “There are real issues, but Facebook does a good deal of good—connecting people with friends and family, helping small businesses, surfacing informative content. If we turn it off we lose all that.”

The belief that Facebook’s benefits to Myanmar, at that moment, exceeded its harms is difficult to understand. Facebook had no Myanmar office from which to appreciate its impact. Few of its employees had ever visited. It had rejected the chillingly consistent outside assessments of its platform’s behavior. Mosseri’s conclusion was, in the most generous interpretation, ideological, rooted in faith. It was also convenient, permitting the company to throw up its hands and declare it ethically impossible to switch off the hate machine. Never mind that leaving the platform up was its own form of intervention, chosen anew every day.

There was another important barrier to acting. It would have meant acknowledging that the platform may have shared some blame. It had taken cigarette companies half a century, and the threat of potentially fatal litigation, to admit that their products caused cancer. How easily would Silicon Valley concede that its products could cause upheaval up to and including genocide? See Chapter 19

p165
Eventually, the sunny view of the Arab Spring came to be revised. “This revolution started on Facebook,” Wael Ghonim, an Egyptian programmer who’d left his desk at Google to join his country’s popular uprising, had said in 2011. “I want to meet Mark Zuckerberg someday and thank him personally.” Years later, however, as Egypt collapsed into dictatorship, Ghonim warned, “The same tool that united us to topple dictators eventually tore us apart.” The revolution had given way to social and religious distrust, which social networks widened by “amplifying the spread of misinformation, rumors, echo chambers, and hate speech,” Ghonim said, rendering society “purely toxic.”

p181
The defining element across all these rumors was something more specific and dangerous than generalized outrage: a phenomenon called status threat. When members of a dominant social group feel at risk of losing their position, it can spark a ferocious reaction. They grow nostalgic for a past, real or imagined, when they felt secure in their dominance (“Make America Great Again”). They become hyper-attuned to any change that might seem tied to their position: shifting demographics, evolving social norms, widening minority rights. And they grow obsessed with playing up minorities as dangerous, conjuring stories and rumors to confirm the belief. It’s a kind of collective defense mechanism to preserve dominance. It is mostly unconscious, almost animalistic, and therefore easily manipulated, whether by opportunistic leaders or profit-seeking algorithms.

The problem isn’t just that social media learned to promote outrage, fear, and tribal conflict, all sentiments that align with status threat. Online, as we post updates visible to hundreds or thousands of people, charged with the group-based emotions that the platforms encourage, “our group identities are more salient” than our individual ones, as William Brady and Molly Crockett wrote in their paper on social media’s effects. We don’t just become more tribal, we lose our sense of self. It’s an environment, they wrote, “ripe for the psychological state of deindividuation.”

The shorthand definition of deindividuation is “mob mentality,” though the state itself is far more commonplace than mob violence. You can deindividuate by sitting in the stands at a sports game or singing along in church, surrendering part of your will to that of the group. The danger comes when these two forces mix: deindividuation, with its power to override individual judgment, and status threat, which can trigger collective aggression on a terrible scale, as seen in the January 6, 2021, riot.

p188
And those defining traits and tics of superposters, mapped out in a series of psychological studies, are broadly negative. One is dogmatism: “relatively unchangeable, unjustified certainty.” Dogmatists tend to be narrow-minded, pushy, and loud. Another: grandiose narcissism, defined by feelings of innate superiority and entitlement. Narcissists are consumed by cravings for admiration and belonging, which makes social media’s instant feedback and large audiences all but irresistible. That need is deepened by superposters’ unusually low self-esteem, which is exacerbated by the platforms themselves. One study concluded simply, “Online political hostility is committed by individuals who are predisposed to be hostile in all contexts.” Neurological experiments confirmed this: superposters are drawn toward and feel rewarded by negative social potency, a clinical term for deriving pleasure from deliberately inflicting emotional distress on others. Further, by using social media more, and by being rewarded for this with greater reach, superposters pull the platforms toward these defining tendencies of dogmatism, narcissism, aggrandizement, and cruelty.

p215
This was more than just expanding the reach of the far right. It was uniting a wider community around them. And at a scale—millions of people—the Charlottesville organizers could only have dreamed of. Here, finally, was an answer for why there had been so many stories of people falling into far-right rabbit holes. Someone who came to YouTube with interest in right-wing-friendly topics, like guns or political correctness, would be routed into a YouTube-constructed world of white nationalism, violent misogyny, and crazed conspiracism, then pulled further toward its extremes.

p244
The hearing was nominally to address Russia’s digital exploitation. But congressional investigators, like so many others, were coming to believe that the Russian incursion, while pernicious, had revealed a deeper, ongoing danger. This was “not about arbitrating truth, nor is it a question of free speech,” DiResta said. It was about algorithmic amplification, online incentives that led unwitting users to spread propaganda, and the ease with which bad actors could “leverage the entire information ecosystem to manufacture the appearance of popular consensus.” As DiResta had been doing for years now, she directed her audience’s attention from Moscow toward Silicon Valley. “Responsibility for the integrity of public discourse is largely in the hands of private social platforms,” she said. For the public good, she added, speaking on behalf of her team, “we believe that private tech platforms must be held accountable.”

p245
In an attempt to address the public’s concerns, Zuckerberg published an essay a few weeks after DiResta’s hearing. “One of the biggest issues social networks face,” he wrote, “is that, when left unchecked, people will engage disproportionately with more sensationalist and provocative content.” He included a chart that showed engagement curving upward as Facebook content grew more extreme, right up until it reached the edge of what Facebook permitted. “Our research suggests that no matter where we draw the lines for what is allowed, as a piece of content gets close to that line, people will engage with it more on average,” he wrote. “At scale,” Zuckerberg added, this effect “can undermine the quality of public discourse and lead to polarization.”

p247
Around the same time as Zuckerberg’s essay, a team of Stanford and New York University economists conducted an experiment that tested, as directly and rigorously as anyone has, how using Facebook changes your politics. They recruited about 1,700 users, then split them into two groups. People in one were required to deactivate their accounts for four weeks. People in the other were not. The economists, using sophisticated survey methods, monitored each participant’s day-to-day mood, news consumption, the accuracy of their news knowledge, and especially their views on politics.

The changes were dramatic. People who deactivated Facebook became happier, more satisfied with their life, and less anxious. The emotional change was equivalent to 25 to 40 percent of the effect of going to therapy—a remarkable shift for a four-week break. Four in five said afterward that deactivating had been good for them. Facebook quitters also spent 15 percent less time consuming the news. They became, as a result, less knowledgeable about current events—the only negative effect. But much of the knowledge they had lost seemed to be from polarizing content: information packaged in a way to indulge tribal antagonisms. Overall, the economists wrote, deactivation “significantly reduced polarization of views on policy issues and a measure of exposure to polarizing news.” Their level of polarization dropped by almost half the amount by which the average American’s polarization had risen between 1996 and 2018—the very period during which the democracy-endangering polarization crisis had occurred. Again, almost half.

p262
Still, it was hard to separate out the benevolence of their work from the degree to which it was intended, as some policy documents plainly stated, to protect Facebook from public blowback or regulation. I came to think of Facebook’s policy team as akin to Philip Morris scientists tasked with developing a safer, better filter. In one sense, cutting down the carcinogens ingested by billions of smokers worldwide saved or prolonged lives on a scale few of us could ever match. In another sense, those scientists were working for the cigarette company, advancing the cause of selling cigarettes that harmed people at an enormous scale. See Chapter 19

I was not surprised, then, that everyone I spoke to at Facebook, no matter how intelligent or introspective, expressed total certainty that the product was not innately harmful. That’s unneureal. That there was no evidence that algorithms or other features pulled users toward extremism or hate. That, in effect, the science was still out on whether cigarettes were really addictive and really caused cancer. But much as Philip Morris turned out to have been littered with studies proving the health risks its executives insisted did not exist, Facebook’s own researchers had been amassing evidence, in reams of internal reports and experiments, for a conclusion that they would issue explicitly in August 2019: “the mechanics of our platform are not neutral.”

An internal report on hate and misinformation had found, its authors wrote, “compelling evidence that our core product mechanics, such as virality, recommendations, and optimizing for engagement, are a significant part of why these types of speech flourish on the platform.” The report, later leaked to media and the SEC, warned that the company was “actively (if not necessarily consciously) promoting these types of activities.”

But, in my time at Facebook, again and again, any question about the consequences of routing an ever-growing share of the human experience through algorithms and gamelike interfaces designed primarily to “maximize engagement” brought only an uncomprehending stare. Executives who only moments earlier had delved into sensitive matters of terrorism or foreign regulation would blink and change the subject as if they had not understood the words. The unneureal.

p264
Debates in the Valley over how to use their power—defer to governments more or less, emphasize neutrality or social welfare, consistency or flexibility—rarely considered the possibility that they should not have such power at all. That consolidating information and social relations under the control of profit-maximizing companies was fundamentally at odds with the public good.

p265
“The CEOs, inside they’re hurting. They can’t sleep at night,” Ben Tauber, a former product manager at Google who’d turned a seaside hippie commune called Esalen into a tech executive retreat, told The New York Times. It was a strange set of contortions. But it did for executives what wartime CEO performances had done for corporate morale and moderators had done for hate speech: paper over the unresolved, and perhaps unresolvable, gap between the platforms’ stated purpose of freedom and revolution and their actual effects on the world.

This was the real governance problem, I came to believe. If it was taboo to consider that social media itself, like cigarettes, might be causing the harms that seemed to consistently follow its adoption, then employees tasked with managing those harms were impossibly constrained. It explained so much of the strange incoherence of the rulebooks. Without a complete understanding of the platforms’ impact, most policies are tit-for-tat responses to crises or problems as they emerge: a viral rumor, a rash of abuse, a riot. Senior employees make a tweak, wait to see what happens, then tweak again, as if repairing an airplane mid-flight.

In 2018, an American moderator filed a lawsuit, later joined by several other moderators, against Facebook for failing to provide legal-minimum safety protections while requiring them to view material the company knew to be traumatizing. In 2020, Facebook settled the case as a class action, agreeing to pay $52 million to 11,250 current and former moderators in the United States. Moderators outside of the U.S. got nothing. The underlying business model remains unchanged.

p321
The insurrection’s other leader, after all, maybe its real leader, was already on the ground, embedded in the pockets of every smartphone-carrying participant. January 6 was the culmination of Trumpism (the unneureal), yes, but also of a movement built on and by social media. It was an act that had been planned, days in advance, with no planners. Coordinated among thousands of people with no coordinators. And now it would be executed through digitally guided collective will. As people arrived at the Capitol, they found ralliers who had come earlier already haranguing the few police on guard. A wooden gallows, bearing an empty noose, had been erected on the grounds. Their perception that the election was stolen was unneureal. See Chapter 24.

p322
“We’re in, we’re in! Derrick Evans is in the Capitol!” Evans, a West Virginia state lawmaker, shouted into his smartphone, streaming live on Facebook, where he had been posting about the rally for days. In virtually every photo of the Capitol siege, you will see rioters holding up smartphones. They are tweeting, Instagramming, livestreaming to Facebook and YouTube. This was, like the Christchurch shooting a year before or the incel murders a year before that, a performance, all conducted for and on social media. It was such a product of the social web that many of its participants saw no distinction between the lives they lived online and the real-world insurrection they were committing as an extension of the unneureal identities shaped by those platforms. See Chapter 24.

p326
The day after the riot, Facebook announced it would block Trump from using its services at least until the inauguration two weeks later. The next day, as Trump continued tweeting in support of the insurrectionists, Twitter pulled the plug, too. YouTube, the last major holdout, followed four days later. Most experts and much of the public agreed that banning Trump was both necessary and overdue. Still, there was undeniable discomfort with that decision resting in the hands of a few Silicon Valley executives. And not just because they were unelected corporate actors. Those same executives’ decisions had helped bring the social media crisis to this point in the first place. After years of the industry appeasing Trump and Republicans, the ban was widely seen as self-interested. It had been implemented, after all, three days after Democrats won control of the Senate, in addition to the House and White House.

p327
The letters that Democratic lawmakers sent to the companies in the weeks that followed placed much of the responsibility for the insurrection on them. “The fundamental problem,” they wrote to the CEOs of Google and YouTube, “is that YouTube, like other social media platforms, sorts, presents, and recommends information to users by feeding them content most likely to reinforce their existing political biases, especially those rooted in anger, anxiety, and fear.” The letters to Facebook and Twitter were similar. All demanded sweeping policy changes, ending with the same admonition: that the companies “begin a fundamental reexamination of maximizing user engagement as the basis for algorithmic sorting and recommendation.” The language pointedly signaled that Democrats had embraced the view long advanced by researchers, social scientists, and dissident Valleyites: that the dangers from social media are not a matter of simply moderating better or tweaking policies. They are rooted in the fundamental nature of the platforms. And they are severe enough to threaten American democracy itself.

p336-337
Collectively, the documents (gathered by Facebook employee Frances Haugen) told the story of a company fully aware that its harms sometimes exceeded even critics’ worst assessments. At times, the reports warned explicitly of dangers that later became deadly, like a spike in hate speech or in vaccine misinformation, with enough notice for the company to have acted and, had it chosen to, possibly saved lives. In undeniable reports and unvarnished language, they showed Facebook’s own data and experts confirming the allegations that the company had so blithely dismissed in public. Facebook’s executives, including Zuckerberg, had been plainly told that their company posed tremendous dangers, and those executives had intervened over and over to keep their platforms spinning at full speed anyway. The files, which Facebook downplayed as unrepresentative, largely confirmed long-held suspicions. But some went even further. An internal presentation on hooking more children on Facebook’s products included the line “Is there a way to leverage playdates to drive word of hand/growth among kids?”

As public outrage grew, 60 Minutes announced that it would air an interview with the leaker of the documents. Until that point, Haugen’s identity had still been secret. Her interview cut through a by-then years-old debate over this technology for the clarity with which she made her charges: the platforms amplified harm; Facebook knew it; the company had the power to stop it but chose not to; and the company continually lied to regulators and to the public. “Facebook has realized that if they change the algorithm to be safer,” Haugen said, “people will spend less time on the site, they’ll click on less ads, they’ll make less money.”

Two days later, she testified to a Senate subcommittee. She presented herself as striving to reform the industry to salvage its potential. “We can have social media we enjoy, that connects us, without tearing apart our democracy, putting our children in danger, and sowing ethnic violence around the world,” she told the senators.

Throughout, Haugen consistently called back to Facebook’s failures in poorer countries. That record, she argued, highlighted the company’s callousness toward its customers’ well-being, as well as the destabilizing power of platform dynamics that, after all, played out everywhere. “What we see in Myanmar, what we see in Ethiopia,” she said at a panel, “are only the opening chapters of a novel that has an ending that is far scarier than anything we want to read.”

p338
When asked what would most effectively reform both the platforms and the companies overseeing them, Haugen had a simple answer: turn off the algorithm. “I think we don’t want computers deciding what we focus on,” she said. She also suggested that if Congress curtailed liability protections, making the companies legally responsible for the consequences of anything their systems promoted, “they would get rid of engagement-based ranking.” Platforms would roll back to the 2000s, when they simply displayed your friends’ posts from newest to oldest. No A.I. to swarm you with attention-maximizing content or route you down rabbit holes to the unneureal.
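To make that contrast concrete, here is a minimal sketch, in Python, of the two ranking rules being discussed: the chronological feed of the 2000s versus engagement-based ranking. The Post class and the predicted_engagement score are illustrative stand-ins, not any platform’s actual data model; real systems rely on machine-learned predictions far more elaborate than a single stored number.

```python
# A minimal sketch of the two ranking rules contrasted above. Post and
# predicted_engagement are illustrative stand-ins, not any platform's
# actual data model or scoring function.
from dataclasses import dataclass

@dataclass
class Post:
    author: str
    text: str
    timestamp: int               # e.g., seconds since some reference time
    predicted_engagement: float  # a model's guess at clicks/comments/shares

def chronological_feed(posts):
    """The 2000s-era rule: your friends' posts, newest first."""
    return sorted(posts, key=lambda p: p.timestamp, reverse=True)

def engagement_ranked_feed(posts):
    """Engagement-based ranking: whatever is predicted to keep you
    reacting rises to the top, regardless of when it was posted."""
    return sorted(posts, key=lambda p: p.predicted_engagement, reverse=True)

posts = [
    Post("ana", "Family photo from the weekend", 1000, 0.2),
    Post("ben", "OUTRAGE: you won't believe what THEY did", 400, 0.9),
    Post("cal", "Local bake sale on Saturday", 900, 0.1),
]
print([p.author for p in chronological_feed(posts)])      # ['ana', 'cal', 'ben']
print([p.author for p in engagement_ranked_feed(posts)])  # ['ben', 'ana', 'cal']
```

The same three posts produce two very different feeds; the only change is the sort key.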


Note now that artificial intelligence appears to be a major contributor to the polarization present in our society. Platforms like Facebook, Twitter, and YouTube use algorithmic programs that learn what keeps users engaged and online, with total disregard for the impact on society. The algorithm is a digital machine; it doesn’t “know” what it’s doing. But the humans who created it did know, and they deployed it anyway, to make money. Thesis 5: For many of us, the neurons of the brain have now been rewired, as described in detail with examples above, to the unneureal. To answer Ben Franklin’s challenge of keeping the republic he said the founders had given us, we need a majority of the public that is informed, synthisophic, and neureal. Truth matters.