It was a strangely incomplete picture of how Facebook works. Many at the company seemed almost unaware that the platform’s algorithms and design deliberately shape users’ experiences and incentives, and therefore the users themselves. These elements are the core of the product, the reason that hundreds of programmers buzzed around us as we talked. It was like walking into a cigarette factory and having executives tell you they couldn’t understand why people kept complaining about the health impacts of the little cardboard boxes that they sold.
Within Facebook’s muralled walls, though, belief in the product as a force for good seemed unshakable. The core Silicon Valley ideal that getting people to spend more and more time online will enrich their minds and better the world held especially firm among the engineers who ultimately make and shape the products. “As we have greater reach, as we have more people engaging, that raises the stakes,” a senior engineer on Facebook’s all-important news feed said. “But I also think that there’s greater opportunity for people to be exposed to new ideas.” Any risks created by the platform’s mission to maximize user engagement would be engineered out, she assured me.
I later learned that, a short time before my visit, Facebook researchers appointed to study their technology’s effects, in response to growing suspicion that the site might be worsening America’s political divisions, had warned internally that the platform was doing exactly what the company’s executives had, in our conversations, shrugged off. “Our algorithms exploit the human brain’s attraction to divisiveness,” the researchers warned in a 2018 presentation later leaked to the Wall Street Journal. In fact, the presentation continued, Facebook’s systems were designed in a way that delivered users “more and more divisive content in an effort to gain user attention & increase time on the platform.” Executives shelved the research and largely rejected its recommendations, which called for tweaking the promotional systems that choose what users see in ways that might have reduced their time online. The question I had brought to Facebook’s corridors—what are the consequences of routing an ever-growing share of all politics, information, and human social relations through online platforms expressly designed to manipulate attention?—was plainly taboo here.
The months after my visit coincided with what was then the greatest public backlash in Silicon Valley’s history. The social media giants faced congressional hearings, foreign regulation, multibillion-dollar fines, and threats of forcible breakup. Public figures routinely referred to the companies as one of the gravest threats of our time. In response, the companies’ leaders pledged to confront the harms flowing from their services. They unveiled election-integrity war rooms and updated content-review policies. But their business model—keeping people glued to their platforms as many hours a day as possible—and the underlying technology deployed to achieve this goal remained largely unchanged. And while the problems they’d promised to solve only worsened, they made more money than ever.
In summer 2020, an independent audit of Facebook, commissioned by the company under pressure from civil rights groups, concluded that the platform was everything its executives had insisted to me it was not. Its policies permitted rampant misinformation that could undermine elections. Its algorithms and recommendation systems were “driving people toward self-reinforcing echo chambers of extremism,” training them to hate. Perhaps most damning, the report concluded that the company did not understand how its own products affected its billions of users.
The early conventional wisdom, that social media promotes sensationalism and outrage, while accurate, turned out to drastically understate things. An ever-growing pool of evidence, gathered by dozens of academics, reporters, whistleblowers, and concerned citizens, suggests that its impact is far more profound. This technology exerts such a powerful pull on our psychology and our identity, and is so pervasive in our lives, that it changes how we think, behave, and relate to one another. It even retrains the neural pathways of our brains. The effect, multiplied across billions of users, has been to change society itself.
RENÉE DIRESTA HAD her infant on her knee when she realized that social networks were bringing out something dangerous in people, something already reaching invisibly into her and her son’s lives. It was 2014, and DiResta had only recently arrived in Silicon Valley, there to scout startups for an investment firm. She was still an analyst at heart, from her years both on Wall Street and, before that, at an intelligence agency she hints was the CIA. To keep her mind agile, she filled her downtime with elaborate research projects, the way others might do a crossword in bed.
Though her investment work in Silicon Valley focused on hardware, she’d picked up enough about social media to understand what she’d found in her Facebook searches. The reason the system pushed the conspiratorial outliers so hard, she came to realize, was engagement. Social media platforms surfaced whatever content their automated systems had concluded would maximize users’ activity online, thereby allowing the company to sell more ads. A mother who accepts that vaccines are safe has little reason to spend much time discussing the subject online. Like-minded parenting groups she joins, while large, might be relatively quiet. But a mother who suspects a vast medical conspiracy imperiling her children, DiResta saw, might spend hours researching the subject. She is also likely to seek out allies, sharing information and coordinating action to fight back. To the A.I. governing a social media platform, the conclusion is obvious: moms interested in health issues will come to spend vastly more time online if they join anti-vaccine groups. Therefore, promoting them, through whatever method wins those users’ notice, will boost engagement. If she was right, DiResta knew, then Facebook wasn’t just indulging anti-vaccine extremists. It was creating them.
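The logic DiResta uncovered can be sketched in a few lines of Python. This is a deliberately crude illustration, not Facebook’s actual code; every name, number, and weight below is invented. The structural point is what matters: when a ranker optimizes only for predicted activity, content that provokes comments and shares outranks content that merely informs, with no term anywhere for accuracy or user well-being.

```python
# Toy engagement-maximizing ranker. Purely illustrative: the weights
# and post data are invented, not drawn from any real platform.

def predicted_engagement(post):
    """Estimate how much activity a post will generate.

    Comments and shares keep users on the platform longer than
    passive likes, so this toy model weights them more heavily."""
    return (1.0 * post["likes"]
            + 3.0 * post["comments"]
            + 5.0 * post["shares"])

def rank_feed(posts):
    """Order a feed by predicted engagement, highest first.

    Note what is absent: nothing here scores truthfulness or harm.
    A post that provokes argument beats one that settles it."""
    return sorted(posts, key=predicted_engagement, reverse=True)

feed = rank_feed([
    {"id": "vaccine_fact_sheet", "likes": 90, "comments": 2, "shares": 1},
    {"id": "conspiracy_thread",  "likes": 40, "comments": 55, "shares": 30},
])
print([p["id"] for p in feed])  # the conspiracy thread ranks first
```

The quiet, reassured mother produces the fact sheet’s numbers; the alarmed, mobilized mother produces the conspiracy thread’s. A system blind to everything but engagement will promote the latter every time.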
Parker prided himself as a hacker, as did much of the Silicon Valley generation that arose in the 1990s, when the term still bespoke a kind of counterculture cool. Most actually built corporate software. But Parker had cofounded Napster, a file-sharing program whose users distributed so much pirated music that, by the time lawsuits shut it down two years after launching, it had irrevocably damaged the music business. Parker argued he’d forced the industry to evolve by exploiting its lethargy in moving online. Many of its artists and executives, however, saw him as a parasite.
Facebook’s strategy, as he described it, was not so different from Napster’s. But rather than exploiting weaknesses in the music industry, it would do so for the human mind. “The thought process that went into building these applications,” Parker told the media conference, “was all about, ‘How do we consume as much of your time and conscious attention as possible?’” To do that, he said, “We need to sort of give you a little dopamine hit every once in a while, because someone liked or commented on a photo or a post or whatever. And that’s going to get you to contribute more content, and that’s going to get you more likes and comments.” He termed this the “social-validation feedback loop,” calling it “exactly the kind of thing that a hacker like myself would come up with, because you’re exploiting a vulnerability in human psychology.” He and Zuckerberg “understood this” from the beginning, he said, and “we did it anyway.”
Throughout the Valley, this exploitation, far from some dark secret, was openly discussed as an exciting tool for business growth. The term of art is “persuasion”: training consumers to alter their behavior in ways that serve the bottom line. Stanford University had operated a Persuasive Tech Lab since 1997. In 2007, a single semester’s worth of student projects generated $1 million in advertising revenue.
“How do companies, producing little more than bits of code displayed on a screen, seemingly control users’ minds?” Nir Eyal, a prominent Valley product consultant, asked in his 2014 book, Hooked: How to Build Habit-Forming Products. “Our actions have been engineered,” he explained. Services like Twitter and YouTube “habitually alter our everyday behavior, just as their designers intended.”
One of Eyal’s favorite models is the slot machine. It is designed to answer your every action with visual, auditory, and tactile feedback. A ping when you insert a coin. A ka-chunk when you pull the lever. A flash of colored light when you release it. This is known as Pavlovian conditioning, named after the Russian physiologist Ivan Pavlov, who rang a bell each time he fed his dog, until, eventually, the bell alone sent his dog’s stomach churning and saliva glands pulsing, as if it could no longer differentiate the chiming of a bell from the physical sensation of eating. Slot machines work the same way, training your mind to conflate the thrill of winning with its mechanical clangs and buzzes. The act of pulling the lever, once meaningless, becomes pleasurable in itself.
The reason is a neurological chemical called dopamine, the same one Parker had referenced at the media conference. Your brain releases small amounts of it when you fulfill some basic need, whether biological (hunger, sex) or social (affection, validation). Dopamine creates a positive association with whatever behaviors prompted its release, training you to repeat them. But when that dopamine reward system gets hijacked, it can compel you to repeat self-destructive behaviors. To place one more bet, binge on alcohol—or spend hours on apps even when they make you unhappy.
Dopamine is social media’s accomplice inside your brain. It’s why your smartphone looks and feels like a slot machine, pulsing with colorful notification badges, whoosh sounds, and gentle vibrations. Those stimuli are neurologically meaningless on their own. But your phone pairs them with activities, like texting a friend or looking at photos, that are naturally rewarding.
Social apps hijack a compulsion—a need to connect—that can be even more powerful than hunger or greed. Eyal describes a hypothetical woman, Barbra, who logs on to Facebook to see a photo uploaded by a family member. As she clicks through more photos or comments in response, her brain conflates feeling connected to people she loves with the bleeps and flashes of Facebook’s interface. “Over time,” Eyal writes, “Barbra begins to associate Facebook with her need for social connection.” She learns to serve that need with a behavior—using Facebook—that in fact will rarely fulfill it.
Soon after Facebook’s news-feed breakthrough, the major social media platforms converged on what Eyal called one of the casino’s most powerful secrets: intermittent variable reinforcement. The concept, while sounding esoteric, is devilishly simple. The psychologist B. F. Skinner found that if he assigned a human subject a repeatable task—solving a simple puzzle, say—and rewarded her every time she completed it, she would usually comply, but would stop right after he stopped rewarding her. But if he doled out the reward only sometimes, and randomized its size, then she would complete the task far more consistently, even doggedly. And she would keep completing the task long after the rewards had stopped altogether—as if chasing even the possibility of a reward compulsively.
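Skinner’s finding can be illustrated with a toy simulation. This is a hedged sketch: the reward schedules, the quitting rule, and every number below are invented for demonstration, not drawn from his experiments. The model assumes a subject who gives up once a dry streak far exceeds any drought that previously ended in a payout. A subject rewarded on every trial quits almost immediately after the rewards stop; a subject rewarded at irregular intervals has learned to tolerate long droughts, and keeps working well past the last payout.

```python
# Invented model of reward-schedule extinction, for illustration only.
# Both schedules pay out only during the first 100 trials.

REWARD_TRIALS_FIXED = set(range(1, 101))            # payout on every trial
REWARD_TRIALS_VARIABLE = {2, 5, 6, 14, 16, 25, 27,  # irregular payouts,
                          36, 45, 50, 59, 68,       # with gaps of 1 to 9
                          77, 86, 95}               # trials between them

def trials_until_quitting(reward_trials):
    """Count how many times the subject repeats the task before quitting.

    Quitting rule (invented): give up once the current dry streak
    exceeds three times the longest gap that ever ended in a reward."""
    longest_rewarded_gap = 1
    dry = 0
    trial = 0
    while True:
        trial += 1
        if trial in reward_trials:
            longest_rewarded_gap = max(longest_rewarded_gap, dry + 1)
            dry = 0
        else:
            dry += 1
            if dry > 3 * longest_rewarded_gap:
                return trial

print(trials_until_quitting(REWARD_TRIALS_FIXED))     # 104: quits fast
print(trials_until_quitting(REWARD_TRIALS_VARIABLE))  # 123: keeps pulling
```

Both subjects receive their last reward by trial 100, but the intermittently rewarded one persists far longer, chasing a payout that never comes. That asymmetry is the casino’s secret, and the news feed’s.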
Unlike slot machines, which are rarely at hand in our day-to-day lives, social media apps are some of the most easily accessible products on earth. It’s a casino that fits in your pocket, which is how we slowly train ourselves to answer any dip in our happiness with a pull at the most ubiquitous slot machine in history. The average American checks their smartphone 150 times per day, often to open social media. We don’t do this because compulsively checking social media apps makes us happy. In 2018, a team of economists offered users different amounts of money to deactivate their account for four weeks, looking for the threshold at which at least half of them would say yes. The number turned out to be high: $180. But the people who deactivated experienced more happiness, less anxiety, and greater life satisfaction. After the experiment was over, they used the app less than they had before.
Why had these subjects been so resistant to give up a product that made them unhappy? Their behavior, the economists wrote, was “consistent with standard habit formation models”—i.e., with addiction—leading to “sub-optimal consumption choices.” A clinical way of saying the subjects had been trained to act against their own interests.
Human beings are some of the most complex social animals on earth. We evolved to live in leaderless collectives far larger than those of our fellow primates: up to about 150 members. As individuals, our ability to thrive depended on how well we navigated those 149 relationships—not to mention all of our peers’ relationships with one another. If the group valued us, we could count on support, resources, and probably a mate. If it didn’t, we might get none of those. It was a matter of survival, physically and genetically. Over millions of years, those pressures selected for people who are sensitive to and skilled at maximizing their standing. It’s what the anthropologist Brian Hare called “survival of the friendliest.” The result was the development of a sociometer: a tendency to unconsciously monitor how other people in our community seem to perceive us. We process that information in the form of self-esteem and such related emotions as pride, shame, or insecurity. These emotions compel us to do more of what makes our community value us and less of what doesn’t. And, crucially, they are meant to make that motivation feel like it is coming from within. If we realized, on a conscious level, that we were responding to social pressure, our performance might come off as grudging or cynical, making it less persuasive.
But by 2020, even Twitter’s co-founder and then-CEO, Jack Dorsey, conceded he had come to doubt the thinking that had led to the Like button, and especially “that button having a number associated with it.” Though he would not commit to rolling back the feature, he acknowledged that it had created “an incentive that can be dangerous.”
In fact, the incentive is so powerful that it even shows up on brain scans. When we receive a Like, neural activity flares in a part of the brain called the nucleus accumbens: the region that activates dopamine. Subjects with a smaller nucleus accumbens—a trait associated with addictive tendencies—use Facebook for longer stretches. And when heavy Facebook users get a Like, that gray matter displays more activity than in lighter users, as in gambling addicts who’ve been conditioned to exult in every pull of the lever.
Pearlman, the Facebooker who’d helped launch the Like button, discovered this after quitting Silicon Valley, in 2011, to draw comics. She promoted her work, of course, on Facebook. At first, her comics did well. They portrayed uplifting themes related to gratitude and compassion, which Facebook’s systems boosted in the early 2010s. Then, around 2015, Facebook retooled its systems to disfavor curiosity-grabbing “clickbait,” a change that had the secondary effect of removing the artificial boost the platform had once given her warmly emotive content.
“When Facebook changed their algorithm, my likes dropped off and it felt like I wasn’t getting enough oxygen,” Pearlman later told Vice News. “So even if I could blame it on the algorithm, something inside me was like, ‘They don’t like me, I’m not good enough.’” Her own former employer had turned her brain’s nucleus accumbens against her, creating an internal drive for likes so powerful that it overrode her better judgment. Then, like Skinner toying with a research subject, it simply turned the rewards off. “Suddenly I was buying ads, just to get that attention back,” she admitted.
For most of us, the process is subtler. Instead of buying Facebook ads, we modify our day-to-day posts and comments to keep the dopamine coming, usually without realizing we have done it. This is the real “social-validation feedback loop,” as Sean Parker called it: unconsciously chasing the approval of an automated system designed to turn our needs against us.
To understand identity’s power, start by asking yourself: What words best describe my own? Your nationality, race, or religion may come to mind. Maybe your city, profession, or gender. Our sense of self derives largely from our membership in groups. But this compulsion—its origins, its effects on our minds and actions—“remains a deep mystery to the social psychologist,” Henri Tajfel wrote in 1979, when he set out to resolve it.
Tajfel had learned the power of group identity firsthand. In 1939, Germany occupied his home country, Poland, while he was studying in Paris. Jewish and fearful for his family, he posed as French so as to join the French army. He kept up the ruse when he was captured by German soldiers. After the war, realizing his family had been wiped out, he became legally French, then British. These identities were mere social constructs—how else could he change them out like suits pulled from a closet? Yet they had the power to compel murderousness or mercy in others around him, driving an entire continent to self-destruction.
The questions this raised haunted and fascinated Tajfel. He and several peers launched the study of this phenomenon, which they termed social identity theory. They traced its origins back to a formative challenge of early human existence. Many primates live in cliques. Humans, in contrast, arose in large collectives, where family kinship was not enough to bind mostly unrelated group members. The dilemma was that the group could not survive without each member contributing to the whole, and no one individual, in turn, could survive without support from the group.
Social identity, Tajfel demonstrated, is how we bond ourselves to the group and they to us. It’s why we feel compelled to hang a flag in front of our house, don an alma mater T-shirt, slap a bumper sticker on our car. It tells the group that we value our affiliation as an extension of ourselves and can therefore be trusted to serve its common good.
During lunch breaks on the set of the 1968 movie Planet of the Apes, for instance, extras spontaneously separated into tables according to whether they played chimpanzees or gorillas. For years afterward, Charlton Heston, the film’s star, recounted the “instinctive segregation” as “quite spooky.” When the sequel filmed, a different set of extras repeated the behavior exactly.