Synthisophy
Skinwalkers - Chapter 18
The following are direct quotes from the book Tribe: On Homecoming and Belonging, by Sebastian Junger, May 2016, except for statements added in italics.
The ultimate act of disaffiliation isn’t littering or fraud, of course, but violence against your own people. When the Navajo Nation—the Diné, in their language—were rounded up and confined to a reservation in the 1860s, a terrifying phenomenon became more prominent in their culture. The warrior skills that had protected the Diné for thousands of years were no longer relevant in this dismal new era, and people worried that those same skills would now be turned inward, against society. That strengthened their belief in what were known as skinwalkers, or yee naaldlooshii.
Skinwalkers were almost always male and wore the pelt of a sacred animal so that they could subvert that animal’s powers to kill people in the community. They could travel impossibly fast across the desert and their eyes glowed like coals and they could supposedly paralyze you with a single look. They were thought to attack remote homesteads at night and kill people and sometimes eat their bodies. People were still scared of skinwalkers when I lived on the Navajo Reservation in 1983, and frankly, by the time I left, I was too.
Virtually every culture in the world has its version of the skinwalker myth. In Europe, for example, they are called werewolves (literally “man-wolf” in Old English). The myth addresses a fundamental fear in human society: that you can defend against external enemies but still remain vulnerable to one lone madman in your midst. Anglo-American culture doesn’t recognize the skinwalker threat but has its own version. Starting in the early 1980s, the frequency of rampage shootings in the United States began to rise more and more rapidly until it doubled around 2006. Rampages are usually defined as attacks where people are randomly targeted and four or more are killed in one place, usually shot to death by a lone gunman. As such, those crimes conform almost exactly to the kind of threat that the Navajo seemed most to fear on the reservation: murder and mayhem committed by an individual who has rejected all social bonds and attacks people at their most vulnerable and unprepared. For modern society, that would mean not in their log hogans but in movie theaters, schools, shopping malls, places of worship, or simply walking down the street.
Here is a list of skinwalkers and their shooting rampages in the USA over the last 30 years. Note that from 1988 to 1997 there were 6; from 1998 to 2007 there were 9; from 2008 to 2017 there were 24. Why does it appear that over the last 10 years our society has generated a sharp increase in skinwalkers, individuals committing murder and mayhem who have rejected all social bonds and attack people at their most vulnerable and unprepared? Perhaps it is because, as Sebastian Junger stated, this “shows how completely detribalized this country has become.” Our neurological genetic predisposition, the warrior ethos, all for one and one for all, is no longer relevant in modern life. As individuals in society, it appears we are now very far from our evolutionary roots.
In 2013, a report from the Congressional Research Service, known as Congress's think tank, described mass shootings as those in which shooters "select victims somewhat indiscriminately" and kill four or more people.
From: http://timelines.latimes.com/deadliest-shooting-rampages/
Mass shootings over the last 30 years, through October 1, 2017, plus recent news from October 2 to December 31, 2017.
November 14, 2017: Rampaging through a small Northern California town, a gunman took aim on Tuesday at people at an elementary school and several other locations, killing at least four and wounding at least 10 before he was fatally shot by police, the local sheriff’s office said.
November 5, 2017: Devin Patrick Kelley carried out the deadliest mass shooting in Texas history on Sunday, killing 25 people and an unborn child at First Baptist Church in Sutherland Springs, near San Antonio.
October 1, 2017: 58 killed, more than 500 injured: Las Vegas
More than 50 people were killed and at least 500 others injured when a gunman opened fire at a country music festival near the Mandalay Bay Resort and Casino on the Las Vegas Strip, authorities said. Police said the suspect, 64-year-old Stephen Paddock, a resident of Mesquite, Nev., was found dead after a SWAT team burst into the hotel room from which he was firing at the crowd.
May 28, 2017: 8 killed: Lincoln County, Miss.
A Mississippi man went on a shooting spree overnight, killing a sheriff's deputy and seven other people in three separate locations in rural Lincoln County before the suspect was taken into custody by police, authorities said on Sunday.
Jan. 6, 2017: 5 killed, 6 injured: Fort Lauderdale, Fla.
After taking a flight to Fort Lauderdale-Hollywood International Airport in Florida, a man retrieves a gun from his luggage in baggage claim, loads it and opens fire, killing five people near a baggage carousel and wounding six others. Dozens more are injured in the ensuing panic. Esteban Santiago, a 26-year-old Iraq war veteran from Anchorage, Alaska, has pleaded not guilty to 22 federal charges.
Sept. 23, 2016: 5 killed: Burlington, Wash.
A gunman enters the cosmetics area of a Macy’s store near Seattle and fatally shoots an employee and four shoppers at close range. Authorities say Arcan Cetin, a 20-year-old fast-food worker, used a semi-automatic Ruger .22 rifle that he stole from his stepfather’s closet.
June 12, 2016: 49 killed, 58 injured in Orlando nightclub shooting
The United States suffered one of the worst mass shootings in its modern history when 49 people were killed and 58 injured in Orlando, Fla., after a gunman stormed into a packed gay nightclub. The gunman was killed by a SWAT team after taking hostages at Pulse, a popular gay club. He was preliminarily identified as 29-year-old Omar Mateen.
Dec. 2, 2015: 14 killed, 22 injured: San Bernardino, Calif.
Two assailants killed 14 people and wounded 22 others in a shooting at the Inland Regional Center in San Bernardino. The two attackers, who were married, were killed in a gun battle with police. They were U.S.-born Syed Rizwan Farook and Pakistani national Tashfeen Malik, and had an arsenal of ammunition and pipe bombs in their Redlands home.
Nov. 29, 2015: 3 killed, 9 injured: Colorado Springs, Colo.
A gunman entered a Planned Parenthood clinic in Colorado Springs, Colo., and started firing.
Police named Robert Lewis Dear as the suspect in the attacks.
Oct. 1, 2015: 9 killed, 9 injured: Roseburg, Ore.
Christopher Sean Harper-Mercer shot and killed eight fellow students and a teacher at Umpqua Community College. Authorities described Harper-Mercer, who recently had moved to Oregon from Southern California, as a “hate-filled” individual with anti-religion and white supremacist leanings who had long struggled with mental health issues.
July 16, 2015: 5 killed, 3 injured: Chattanooga, Tenn.
A gunman opened fire on two military centers more than seven miles apart, killing four Marines and a Navy sailor. A man identified by federal authorities as Mohammod Youssuf Abdulazeez, 24, sprayed dozens of bullets at a military recruiting center, then drove to a Navy-Marine training facility and opened fire again before he was killed.
June 17, 2015: 9 killed: Charleston, S.C.
Dylann Storm Roof is charged with nine counts of murder and three counts of attempted murder in an attack that killed nine people at a historic black church in Charleston, S.C. Authorities say Roof, a suspected white supremacist, started firing on a group gathered at Emanuel African Methodist Episcopal Church after first praying with them. He fled authorities before being arrested in North Carolina.
May 23, 2014: 6 killed, 7 injured: Isla Vista, Calif.
Elliot Rodger, 22, meticulously planned his deadly attack on the Isla Vista community for more than a year, spending thousands of dollars in order to arm and train himself to kill as many people as possible, according to a report released by the Santa Barbara County Sheriff’s Office. Rodger killed six people before shooting himself.
April 2, 2014: 3 killed, 16 injured: Ft. Hood, Texas
A gunman at Fort Hood, the scene of a deadly 2009 rampage, kills three people and injures 16 others, according to military officials. The gunman is dead at the scene.
Sept. 16, 2013: 12 killed, 3 injured: Washington, D.C.
Aaron Alexis, a Navy contractor and former Navy enlisted man, shoots and kills 12 people and engages police in a running firefight through the sprawling Washington Navy Yard. He is shot and killed by authorities.
June 7, 2013: 5 killed: Santa Monica
John Zawahri, an unemployed 23-year-old, kills five people in an attack that starts at his father’s home and ends at Santa Monica College, where he is fatally shot by police in the school’s library.
Dec. 14, 2012: 27 killed, one injured: Newtown, Conn.
A gunman forces his way into Sandy Hook Elementary School in Newtown, Conn. and shoots and kills 20 first graders and six adults. The shooter, Adam Lanza, 20, kills himself at the scene. Lanza also killed his mother at the home they shared, prior to his shooting rampage.
Aug. 5, 2012: 6 killed, 3 injured: Oak Creek, Wis.
Wade Michael Page fatally shoots six people at a Sikh temple before he is shot by a police officer. Page, an Army veteran who was a “psychological operations specialist,” committed suicide after he was wounded. Page was a member of a white supremacist band called End Apathy and his views led federal officials to treat the shooting as an act of domestic terrorism.
July 20, 2012: 12 killed, 58 injured: Aurora, Colo.
James Holmes, 24, is taken into custody in the parking lot outside the Century 16 movie theater after a post-midnight attack in Aurora, Colo. Holmes allegedly entered the theater through an exit door about half an hour into the local premiere of “The Dark Knight Rises.”
April 2, 2012: 7 killed, 3 injured: Oakland
One L. Goh, 43, a former student at Oikos University, a small Christian college, allegedly opens fire in the middle of a classroom, leaving seven people dead and three wounded.
Jan. 8, 2011: 6 killed, 11 injured: Tucson, Ariz.
Jared Lee Loughner, 22, allegedly shoots Arizona Rep. Gabrielle Giffords in the head during a meet-and-greet with constituents at a Tucson supermarket. Six people are killed and 11 others wounded.
Nov. 5, 2009: 13 killed, 32 injured: Ft. Hood, Texas
Maj. Nidal Malik Hasan, an Army psychiatrist, allegedly shoots and kills 13 people and injures 32 others in a rampage at Ft. Hood, where he is based. Authorities allege that Hasan was exchanging emails with Muslim extremists including American-born radical Anwar Awlaki.
April 3, 2009: 13 killed, 4 injured: Binghamton, N.Y.
Jiverly Voong, 41, shoots and kills 13 people and seriously wounds four others before apparently committing suicide at the American Civic Assn., an immigration services center, in Binghamton, N.Y.
Feb. 14, 2008: 5 killed, 16 injured: Dekalb, Ill.
Steven Kazmierczak, dressed all in black, steps on stage in a lecture hall at Northern Illinois University and opens fire on a geology class. Five students are killed and 16 wounded before Kazmierczak kills himself on the lecture hall stage.
Dec. 5, 2007: 8 killed, 4 injured: Omaha
Robert Hawkins, 19, sprays an Omaha shopping mall with gunfire as holiday shoppers scatter in terror. He kills eight people and wounds four others before taking his own life. Authorities report he left several suicide notes.
April 16, 2007: 32 killed, 17 injured: Blacksburg, Va.
Seung-hui Cho, a 23-year-old Virginia Tech senior, opens fire on campus, killing 32 people in a dorm and an academic building in attacks more than two hours apart. Cho takes his life after the second incident.
Feb. 12, 2007: 5 killed, 4 injured: Salt Lake City
Sulejman Talovic, 18, wearing a trenchcoat and carrying a shotgun, sprays a popular Salt Lake City shopping mall. Witnesses say he displays no emotion while killing five people and wounding four others.
Oct. 2, 2006: 5 killed, 5 injured: Nickel Mines, Pa.
Charles Carl Roberts IV, a milk truck driver armed with a small arsenal, bursts into a one-room schoolhouse and kills five Amish girls. He kills himself as police storm the building.
July 8, 2003: 5 killed, 9 injured: Meridian, Miss.
Doug Williams, 48, a production assemblyman for 19 years at Lockheed Martin Aeronautics Co., goes on a rampage at the defense plant, fatally shooting five and wounding nine before taking his own life with a shotgun.
Dec. 26, 2000: 7 killed: Wakefield, Mass.
Michael McDermott, a 42-year-old software tester shoots and kills seven co-workers at the Internet consulting firm where he is employed. McDermott, who is arrested at the offices of Edgewater Technology Inc., apparently was enraged because his salary was about to be garnished to satisfy tax claims by the Internal Revenue Service. He uses three weapons in his attack.
Sept. 15, 1999: 7 killed, 7 injured: Fort Worth
Larry Gene Ashbrook opens fire inside the crowded chapel of the Wedgwood Baptist Church. Worshipers, thinking at first that it must be a prank, keep singing. But when they realize what is happening, they dive to the floor and scrunch under pews, terrified and silent as the gunfire continues. Seven people are killed before Ashbrook takes his own life.
April 20, 1999: 13 killed, 24 injured: Columbine, Colo.
Eric Harris and Dylan Klebold, students at Columbine High, open fire at the school, killing a dozen students and a teacher and causing injury to two dozen others before taking their own lives.
March 24, 1998: 5 killed, 10 injured: Jonesboro, Ark.
Middle school students Mitchell Johnson and Andrew Golden pull a fire alarm at their school in a small rural Arkansas community and then open fire on students and teachers using an arsenal they had stashed in the nearby woods. Four students and a teacher who tried to shield the children are killed and 10 others are injured. Because of their ages, Mitchell, 13, and Andrew, 11, are sentenced to confinement in a juvenile facility until they turn 21.
Dec. 7, 1993: 6 killed, 19 injured: Garden City, N.Y.
Colin Ferguson shoots and kills six passengers and wounds 19 others on a Long Island Rail Road commuter train before being stopped by other riders. Ferguson is later sentenced to life in prison.
July 1, 1993: 8 killed, 6 injured: San Francisco
Gian Luigi Ferri, 55, kills eight people in an office building in San Francisco’s financial district. His rampage begins in the 34th-floor offices of Pettit & Martin, an international law firm, and ends in a stairwell between the 29th and 30th floors where he encounters police and shoots himself.
May 1, 1992: 4 killed, 10 injured: Olivehurst, Calif.
Eric Houston, a 20-year-old unemployed computer assembler, invades Lindhurst High School and opens fire, killing his former teacher Robert Brens and three students and wounding 10 others.
Oct. 16, 1991: 22 killed, 20 injured: Killeen, Texas
George Jo Hennard, 35, crashes his pickup truck into a Luby’s cafeteria crowded with lunchtime patrons and begins firing indiscriminately with a semiautomatic pistol, killing 22 people. Hennard is later found dead of a gunshot wound in a restaurant restroom.
June 18, 1990: 10 killed, 4 injured: Jacksonville, Fla.
James E. Pough, a 42-year-old day laborer apparently distraught over the repossession of his car, walks into the offices of General Motors Acceptance Corp. and opens fire, killing seven employees and one customer before fatally shooting himself.
Jan. 17, 1989: 5 killed, 29 injured: Stockton, Calif.
Patrick Edward Purdy turns a powerful assault rifle on a crowded school playground, killing five children and wounding 29 more. Purdy, who also killed himself, had been a student at the school from kindergarten through third grade. Police officials described Purdy as a troubled drifter in his mid-20s with a history of relatively minor brushes with the law. The midday attack lasted only minutes.
July 18, 1984: 21 killed, 19 injured: San Ysidro, Calif.
James Oliver Huberty, a 41-year-old out-of-work security guard, kills 21 employees and customers at a McDonald’s restaurant. Huberty is fatally shot by a police sniper perched on the roof of a nearby post office.
Synthisophy
Chapter 31, Artificial Intelligence and Polarization, continued....
p125
Chaslot had heard of people tumbling down YouTube rabbit holes (the unneureal). But the conviction in the voice of this otherwise normal-seeming man bothered him. Were others falling victim? He set up a simple program, which he called Algo Transparency, to find out. The program entered a term, like the name of a politician, in YouTube’s search bar. Then it opened the top results. Then each recommendation for what to watch next. He ran huge batches of anonymized searches, one after another, over late 2015 and much of 2016, looking for trends.
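How such a program might work can be sketched briefly. The following is a minimal, hypothetical illustration of the kind of crawl Chaslot describes, not the actual Algo Transparency code: the search and watch-next functions are placeholder stubs, and the conspiracy list stands in for his hand-labeled classifications.

```python
# Hypothetical sketch of an Algo Transparency-style crawl: seed a search term,
# take the top results, then repeatedly follow "watch next" recommendations and
# tally how often they land on videos flagged as conspiratorial.
# The two fetch functions are placeholders, not YouTube's real API.

def search_youtube(term, n=20):
    """Placeholder: return the top-n video IDs for a search term."""
    return [f"{term}-result-{i}" for i in range(n)]

def get_watch_next(video_id, n=5):
    """Placeholder: return the top-n 'watch next' recommendations for a video."""
    return [f"{video_id}-rec-{i}" for i in range(n)]

def crawl(term, hops=3, per_hop=5):
    """Follow recommendation chains from a seed term; return every video seen."""
    seen = []
    frontier = search_youtube(term)
    for _ in range(hops):
        seen.extend(frontier)
        next_frontier = []
        for vid in frontier[:per_hop]:
            next_frontier.extend(get_watch_next(vid))
        frontier = next_frontier
    return seen

def conspiracy_share(videos, flagged):
    """Fraction of crawled videos that appear on a hand-labeled conspiracy list."""
    return sum(v in flagged for v in videos) / max(len(videos), 1)

if __name__ == "__main__":
    videos = crawl("pope francis")
    share = conspiracy_share(videos, flagged=set())  # empty label set in this stub
    print(f"{len(videos)} videos crawled; {share:.0%} flagged as conspiracies")
```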
What he found alarmed him. When he searched YouTube for Pope Francis, for instance, 10 percent of the videos it displayed were conspiracies.
p126
On global warming, it was 15 percent. But the real shock came when Chaslot followed algorithmic recommendations for what to watch next, which YouTube has said accounts for most of its watch time. A staggering 85 percent of recommended videos on Pope Francis were conspiracies, asserting Francis’s “true” identity or purporting to expose Satanic plots at the Vatican. On global warming, the figure was 70 percent, usually calling it a hoax. On topics with few established conspiracies, the system seemed to conjure them up. When Chaslot searched Who is Michelle Obama, for instance, just under half of the top results and almost two thirds of watch-next recommendations claimed the First Lady was secretly a man. Surely, he thought, whatever his disagreement with his former colleagues, they would want to know about this. But when he raised concerns privately with people he knew at YouTube, the response was always the same: “If people click on this harmful content, who are we to judge?”
Some inside Google, though, were reaching similar conclusions as Chaslot. In 2013, an engineer named Tristan Harris had circulated a memo urging the company to consider the societal impact of push alerts or buzzing notifications that tugged at users’ attention. As an alumnus of Stanford’s Persuasive Tech Lab, he knew their power to manipulate. Could all this cognitive training come at a cost? He was granted the title “design ethicist” but little power and, in 2015, quit, hoping to pressure the industry to change. At a presentation that year to Facebook, Harris cited evidence that social media caused feelings of loneliness and alienation, portraying it as an opportunity to reverse the effect. “They didn’t do anything about it,” he recounted to The New Yorker. “My points were in their blind spot.” He circulated around the Valley, warning that its A.I.s, a robot army bent on defeating each user’s control over their own attention, were waging an invisible war against billions of consumers.
Another Google employee, James Williams, who later wrote essays calling Gamergate a warning sign that social media would elevate Trump, had his reckoning while monitoring a dashboard that tracked users’ real-time interactions with ads. “I realized: this is literally a million people that we’ve sort of nudged or persuaded to do this thing that they weren’t going to otherwise do,” he has said. He joined Harris’s efforts inside Google until, like Harris, he quit. But rather than cajole the Valley, he tried to raise alarms with the public. “There’s no good analogue for this monopoly of the mind the forces of industrialized persuasion now hold,” he wrote. The world faced “a next-generation threat to human freedom” that had “materialized right in front of our noses.”
p128
But the influence of algorithms only deepened, including at the last holdout, Twitter. For years, the service had shown each user a simple, chronological feed of their friends’ tweets. Until, in 2016, it introduced an algorithm that sorted posts— for engagement, of course, and to predictable effect. “The average curated tweet was more emotive, on every scale, than its chronological equivalent,” The Economist found in an analysis of the change. The result was exactly what it had been on Facebook and YouTube: “The recommendation engine appears to reward inflammatory language and outlandish claims.”
To users, for whom the algorithm was invisible, these felt like powerful social cues. It was as if your community had suddenly decided that it valued provocation and outrage above all else, rewarding it with waves of attention that were, in reality, algorithmically generated. And because the algorithm down-sorted posts it judged as unengaging, the inverse was true, too. It felt as if your peers suddenly scorned nuance and emotional moderation with the implicit rejection of ignoring you. Users seemed to absorb those cues, growing meaner and angrier, intent on humiliating out-group members, punishing social transgressors, and validating one another’s worldviews. See Chapters 7, 15, 22-25, 28, the unneureal
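The mechanism is simple enough to see in schematic form. Below is a toy illustration of the difference between a chronological feed and an engagement-ranked one; the posts and scoring weights are invented, and real ranking systems are far more complex, but the structural point is the same: whatever the score rewards floats to the top of every user's screen.

```python
# Toy contrast between a chronological feed and an engagement-ranked feed.
# The weights are invented for illustration only.

from dataclasses import dataclass

@dataclass
class Post:
    author: str
    text: str
    timestamp: int   # larger = more recent
    likes: int
    replies: int
    reshares: int

def chronological(posts):
    """Old-style feed: newest first, no judgment about content."""
    return sorted(posts, key=lambda p: p.timestamp, reverse=True)

def engagement_ranked(posts):
    """Engagement-sorted feed: posts that provoke reactions get promoted."""
    def score(p):
        # Replies and reshares weighted heavily because they predict further
        # interaction -- which, in practice, tends to favor provocation.
        return p.likes + 3 * p.replies + 5 * p.reshares
    return sorted(posts, key=score, reverse=True)

feed = [
    Post("alice", "Nice sunset tonight.", 1000, likes=12, replies=1, reshares=0),
    Post("bob", "A nuanced take on the new policy.", 1010, likes=8, replies=2, reshares=1),
    Post("carol", "OUTRAGEOUS! You won't believe what THEY just did!", 990,
         likes=40, replies=55, reshares=30),
]

print([p.author for p in chronological(feed)])      # ['bob', 'alice', 'carol']
print([p.author for p in engagement_ranked(feed)])  # ['carol', 'bob', 'alice']
```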
p134
In the coming months, digital watchdogs, journalists, congressional committees, and the outgoing president would all accuse social media platforms of accelerating misinformation and partisan rage that paved the way for Trump’s victory. The companies, after a period of contrition for narrower sins like hosting Russian propagandists and fake news, largely deflected. But in the hours after the election, the first to suspect Silicon Valley’s culpability were many of its own rank and file. At YouTube, when CEO Susan Wojcicki convened her shell-shocked staff, much of their discussion centered on concerns that YouTube’s most-watched election-related videos were from far-right misinformation shops like Breitbart and conspiracy theorist Alex Jones. Similar misgivings were expressed by Facebook employees. “The results of the 2016 Election show that Facebook has failed in its mission,” one Facebooker posted on the company’s internal message board. Another: “Sadly, News Feed optimizes for engagement. As we’ve learned in this election, bullshit is highly engaging.” Another: “Facebook (the company) Is Broken.”
p151
In a revealing experiment, Republicans were shown a false headline about the refugees (“Over 500 ‘Migrant Caravaners’ Arrested with Suicide Vests”). Asked whether it seemed accurate, most identified it as false; only 16 percent called it accurate. The question’s framing had implicitly nudged the subjects to think about accuracy. This engaged the rational parts of their mind, which quickly identified the headline as false. Subsequently asked whether they might share the headline on Facebook, most said no: thinking with their rational brains, they preferred accuracy.
But when researchers repeated the experiment with a different set of Republicans, this time skipping the question about accuracy to simply ask if the subject would share the headline on Facebook, 51 percent said they would. Focusing on Facebook activated the social part of their minds, which saw, in the same headline, the promise of identity validation— something the social brain values far beyond accuracy. Having decided to share it, the subjects told themselves it was true. “Most people do not want to spread misinformation,” the study’s authors wrote, differentiating willful lying from socially motivated belief. “But the social media context focuses their attention on factors other than truth and accuracy.” See Chapter 5
p154
Meanwhile, just as Chaslot joined DiResta and others in the public struggle to understand Silicon Valley’s undue influence, William Brady and Molly Crockett, the psychologist and neuroscientist, achieved a momentous breakthrough in that effort. They had spent months synthesizing reams of newly available data, behavioral research, and their own investigations. It was like fitting together the pieces of a puzzle that, once assembled, revealed what may still be the most complete framework for understanding social media’s effect on society.
The platforms, they concluded, were reshaping not just online behavior but underlying social impulses, and not just individually but collectively, potentially altering the nature of “civic engagement and activism, political polarization, propaganda and disinformation.” They called it the MAD model, for the three forces rewiring people’s minds. See Chapters 6, 7, 15, 22-25, 28. Motivation: the instincts and habits hijacked by the mechanics of social media platforms. Attention: users’ focus manipulated to distort their perceptions of social cues and mores. Design: platforms that had been constructed in ways that train and incentivize certain behaviors.
p155
The digital-attention economy amplifies the social impact of this dynamic exponentially. Remember that the number of seconds in your day never changes. The amount of social media content competing for those seconds, however, doubles every year or so, depending on how you measure it. Imagine, for instance, that your network produces 200 posts per day, of which you have time to read 100. Because of the platforms’ tilt, you will see the most moral-emotional half of your feed. Next year, when 200 doubles to 400, you see the most moral-emotional quarter. The year after that, the most moral-emotional eighth. Over time, your impression of your own community becomes radically more moralizing, aggrandizing, and outraged—and so do you. At the same time, less innately engaging forms of content—truth, appeals to the greater good, appeals to tolerance—become more and more outmatched.
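The arithmetic of that thought experiment can be made explicit. This small sketch uses the paragraph's own assumptions, which are illustrative rather than measured: content volume doubling yearly, a fixed reading budget, and the most emotionally charged posts shown first.

```python
# Worked version of the thought experiment above: content competing for a fixed
# attention budget doubles each year, and the platform fills that budget with
# the most moral-emotional posts first.

DAILY_READING_CAPACITY = 100   # posts you actually have time to read

posts_produced = 200           # posts your network produces per day in year 0
for year in range(5):
    visible_fraction = DAILY_READING_CAPACITY / posts_produced
    print(f"Year {year}: {posts_produced} posts/day -> you see the most "
          f"moral-emotional {visible_fraction:.1%} of your feed")
    posts_produced *= 2

# Year 0: 50.0% (the half), Year 1: 25.0% (the quarter), Year 2: 12.5% (the
# eighth), Year 3: 6.2%, Year 4: 3.1% -- the visible slice keeps narrowing
# toward the most charged content.
```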
p157
“Online platforms,” Brady and Crockett wrote, “are now one of the primary sources of morally relevant stimuli people experience in their daily life.” Billions of people’s moral compasses potentially tilted toward tribalism and distrust. Whole societies nudged toward conflict, polarization, and unreality—toward something like Trumpism. See Chapters 11-14, 24, 28. Brady did not think that social media was “inherently evil,” he told me. But as the platforms evolved, the effects only seemed to worsen. “It’s just gotten so toxic,” he said.
p164
For years after Rwanda’s genocide, American officials tormented themselves over hypotheticals. Could American warplanes have destroyed the radio towers in time to stop it? How would they locate the towers amid Rwanda’s jungles and mountain passes? How would they secure international authority? In Myanmar, there were never any such doubts. A single engineer could have shuttered the entire network as they finished their morning coffee. One million terrified Rohingya made safer from death and displacement with a few keystrokes. The warning signs were freely visible. Madden and others had given them the necessary information to act. They simply chose not to, even as entire villages were purged in fire and blood. By March 2018, the head of the United Nations’ fact-finding mission said his team had concluded that social networks, especially Facebook, had played a “determining role” in the genocide. The platforms, he said, “substantively contributed” to the hate destroying an entire population.
Three days later, a reporter named Max Read posed a question, on Twitter, to Adam Mosseri, the executive overseeing Facebook’s news feed. He asked, referring to Facebook as a whole, “honest question—what’s the possible harm in turning it off in myanmar?” Mosseri responded, “There are real issues, but Facebook does a good deal of good —connecting people with friends and family, helping small businesses, surfacing informative content. If we turn it off we lose all that.”
The belief that Facebook’s benefits to Myanmar, at that moment, exceeded its harms is difficult to understand. Facebook had no Myanmar office from which to appreciate its impact. Few of its employees had ever been. It had rejected the chillingly consistent outside assessments of its platform’s behavior. Mosseri’s conclusion was, in the most generous interpretation, ideological, rooted in faith. It was also convenient, permitting the company to throw up its hands and declare it ethically impossible to switch off the hate machine. Never mind that leaving the platform up was its own form of intervention, chosen anew every day.
There was another important barrier to acting. It would have meant acknowledging that the platform may have shared some blame. It had taken cigarette companies half a century, and the threat of potentially fatal litigation, to admit that their products caused cancer. How easily would Silicon Valley concede that its products could cause upheaval up to and including genocide? See Chapter 19
p165
Eventually, the sunny view of the Arab Spring came to be revised. “This revolution started on Facebook,” Wael Ghonim, an Egyptian programmer who’d left his desk at Google to join his country’s popular uprising, had said in 2011. “I want to meet Mark Zuckerberg someday and thank him personally.” Years later, however, as Egypt collapsed into dictatorship, Ghonim warned, “The same tool that united us to topple dictators eventually tore us apart.” The revolution had given way to social and religious distrust, which social networks widened by “amplifying the spread of misinformation, rumors, echo chambers, and hate speech,” Ghonim said, rendering society “purely toxic.”
p181
The defining element across all these rumors was something more specific and dangerous than generalized outrage: a phenomenon called status threat. When members of a dominant social group feel at risk of losing their position, it can spark a ferocious reaction. They grow nostalgic for a past, real or imagined, when they felt secure in their dominance (“Make America Great Again”). They become hyper-attuned for any change that might seem tied to their position: shifting demographics, evolving social norms, widening minority rights. And they grow obsessed with playing up minorities as dangerous, manifesting stories and rumors to confirm the belief. It’s a kind of collective defense mechanism to preserve dominance. It is mostly unconscious, almost animalistic, and therefore easily manipulated, whether by opportunistic leaders or profit-seeking algorithms.
The problem isn’t just that social media learned to promote outrage, fear, and tribal conflict, all sentiments that align with status threat. Online, as we post updates visible to hundreds or thousands of people, charged with the group-based emotions that the platforms encourage, “our group identities are more salient” than our individual ones, as William Brady and Molly Crockett wrote in their paper on social media’s effects. We don’t just become more tribal, we lose our sense of self. It’s an environment, they wrote, “ripe for the psychological state of deindividuation.”
The shorthand definition of deindividuation is “mob mentality,” though it is more common than joining a mob. You can deindividuate by sitting in the stands at a sports game or singing along in church, surrendering part of your will to that of the group. The danger comes when these two forces mix: deindividuation, with its power to override individual judgment, and status threat, which can trigger collective aggression on a terrible scale, as seen in the January 6th, 2021 riot.
p188
And those defining traits and tics of superposters, mapped out in a series of psychological studies, are broadly negative. One is dogmatism: “relatively unchangeable, unjustified certainty.” Dogmatics tend to be narrow-minded, pushy, and loud. Another: grandiose narcissism, defined by feelings of innate superiority and entitlement. Narcissists are consumed by cravings for admiration and belonging, which makes social media’s instant feedback and large audiences all but irresistible. That need is deepened by superposters’ unusually low self-esteem, which is exacerbated by the platforms themselves. One study concluded simply, “Online political hostility is committed by individuals who are predisposed to be hostile in all contexts.” Neurological experiments confirmed this: superposters are drawn toward and feel rewarded by negative social potency, a clinical term for deriving pleasure from deliberately inflicting emotional distress on others. Further, by using social media more, and by being rewarded for this with greater reach, superposters pull the platforms toward these defining tendencies of dogmatism, narcissism, aggrandizement, and cruelty.
p215
This was more than just expanding the reach of the far right. It was uniting a wider community around them. And at a scale—millions of people—the Charlottesville organizers could only have dreamed of. Here, finally, was an answer for why there had been so many stories of people falling into far-right rabbit holes. Someone who came to YouTube with interest in right-wing-friendly topics, like guns or political correctness, would be routed into a YouTube-constructed world of white nationalism, violent misogyny, and crazed conspiracism, then pulled further toward its extremes.
p244
The hearing was nominally to address Russia’s digital exploitation. But congressional investigators, like so many others, were coming to believe that the Russian incursion, while pernicious, had revealed a deeper, ongoing danger. This was “not about arbitrating truth, nor is it a question of free speech,” DiResta said. It was about algorithmic amplification, online incentives that led unwitting users to spread propaganda, and the ease with which bad actors could “leverage the entire information ecosystem to manufacture the appearance of popular consensus.” As DiResta had been doing for years now, she directed her audience’s attention from Moscow toward Silicon Valley. “Responsibility for the integrity of public discourse is largely in the hands of private social platforms,” she said. For the public good, she added, speaking on behalf of her team, “we believe that private tech platforms must be held accountable.”
p245
In an attempt to address the public’s concerns, Zuckerberg published an essay, a few weeks after DiResta’s hearing. “One of the biggest issues social networks face,” he wrote, “is that, when left unchecked, people will engage disproportionately with more sensationalist and provocative content.” He included a chart that showed engagement curving upward as Facebook content grew more extreme, right up until it reached the edge of what Facebook permitted. “Our research suggests that no matter where we draw the lines for what is allowed, as a piece of content gets close to that line, people will engage with it more on average,” he wrote. “At scale,” Zuckerberg added, this effect “can undermine the quality of public discourse and lead to polarization.”
p247
Around the same time as Zuckerberg’s essay, a team of Stanford and New York University economists conducted an experiment that tested, as directly and rigorously as anyone has, how using Facebook changes your politics. They recruited about 1,700 users, then split them into two groups. People in one were required to deactivate their accounts for four weeks. People in the other were not. The economists, using sophisticated survey methods, monitored each participant’s day-to-day mood, news consumption, accuracy of their news knowledge, and especially their views on politics.
The changes were dramatic. People who deleted Facebook became happier, more satisfied with their life, and less anxious. The emotional change was equivalent to 25 to 40 percent of the effect of going to therapy—a stunning drop for a four-week break. Four in five said afterward that deactivating had been good for them. Facebook quitters also spent 15 percent less time consuming the news. They became, as a result, less knowledgeable about current events—the only negative effect. But much of the knowledge they had lost seemed to be from polarizing content: information packaged in a way to indulge tribal antagonisms. Overall, the economists wrote, deactivation “significantly reduced polarization of views on policy issues and a measure of exposure to polarizing news.” Their level of polarization dropped by almost half the amount by which the average American’s polarization had risen between 1996 and 2018—the very period during which the democracy-endangering polarization crisis had occurred. Again, almost half.
p262
Still, it was hard to separate out the benevolence of their work from the degree to which it was intended, as some policy documents plainly stated, to protect Facebook from public blowback or regulation. I came to think of Facebook’s policy team as akin to Philip Morris scientists tasked with developing a safer, better filter. In one sense, cutting down the carcinogens ingested by billions of smokers worldwide saved or prolonged lives on a scale few of us could ever match. In another sense, those scientists were working for the cigarette company, advancing the cause of selling cigarettes that harmed people at an enormous scale. See Chapter 19
I was not surprised, then, that everyone I spoke to at Facebook, no matter how intelligent or introspective, expressed total certainty that the product was not innately harmful. That’s unneureal. That there was no evidence that algorithms or other features pulled users toward extremism or hate. That the science was still out on whether cigarettes were really addictive and really caused cancer. But much as Philip Morris turned out to have been littered with studies proving the health risks its executives insisted did not exist, Facebook’s own researchers had been mounting evidence, in reams of internal reports and experiments, for a conclusion that they would issue explicitly in August 2019: “the mechanics of our platform are not neutral.”
An internal report on hate and misinformation had found, its authors wrote, “compelling evidence that our core product mechanics, such as virality, recommendations, and optimizing for engagement, are a significant part of why these types of speech flourish on the platform.” The report, later leaked to media and the SEC, warned that the company was “actively (if not necessarily consciously) promoting these types of activities.”
But, in my time at Facebook, again and again, any question about the consequences of routing an ever-growing share of the human experience through algorithms and gamelike interfaces designed primarily to “maximize engagement” brought only an uncomprehending stare. Executives who only moments earlier had delved into sensitive matters of terrorism or foreign regulation would blink and change the subject as if they had not understood the words. The unneureal.
p264
Debates in the Valley over how to use their power—defer to governments more or less, emphasize neutrality or social welfare, consistency or flexibility—rarely considered the possibility that they should not have such power at all. That consolidating information and social relations under the control of profit-maximizing companies was fundamentally at odds with the public good.
p265
“The CEOs, inside they’re hurting. They can’t sleep at night,” Ben Tauber, a former product manager at Google who’d turned a seaside hippie commune called Esalen into a tech executive retreat, told the New York Times. It was a strange set of contortions. But it did for executives what wartime CEO performances had done for corporate morale and moderators had done for hate speech: paper over the unresolved, and perhaps unresolvable, gap between the platforms’ stated purpose of freedom and revolution and their actual effects on the world.
This was the real governance problem, I came to believe. If it was taboo to consider that social media itself, like cigarettes, might be causing the harms that seemed to consistently follow its adoption, then employees tasked with managing those harms were impossibly constrained. It explained so much of the strange incoherence of the rulebooks. Without a complete understanding of the platforms’ impact, most policies are tit-for-tat responses to crises or problems as they emerge: a viral rumor, a rash of abuse, a riot. Senior employees make a tweak, wait to see what happens, then tweak again, as if repairing an airplane mid-flight.
In 2018, an American moderator filed a lawsuit, later joined by several other moderators, against Facebook for failing to provide legal-minimum safety protections while requiring them to view material the company knew to be traumatizing. In 2020, Facebook settled the case as a class action, agreeing to pay $52 million to 11,250 current and former moderators in the United States. Moderators outside of the U.S. got nothing. The underlying business model remains unchanged.
p321
The insurrection’s other leader, after all, maybe its real leader, was already on the ground, embedded in the pockets of every smartphone-carrying participant. January 6 was the culmination of Trumpism (the unneureal), yes, but also of a movement built on and by social media. It was an act that had been planned, days in advance, with no planners. Coordinated among thousands of people with no coordinators. And now it would be executed through digitally guided collective will. As people arrived at the Capitol, they found ralliers who had come earlier already haranguing the few police on guard. A wooden gallows, bearing an empty noose, had been erected on the grounds. Their perception that the election was stolen was unneureal. See Chapter 24.
p322
“We’re in, we’re in! Derrick Evans is in the Capitol!” Evans, a West Virginia state lawmaker, shouted into his smartphone, streaming live on Facebook, where he had been posting about the rally for days. In virtually every photo of the Capitol siege, you will see rioters holding up smartphones. They are tweeting, Instagramming, livestreaming to Facebook and YouTube. This was, like the Christchurch shooting a year before or incel murders a year before that, a performance, all conducted for and on social media. It was such a product of the social web that many of its participants saw no distinction between the lives they lived online and the real-world insurrection they were committing as an extension of the unneureal identities shaped by those platforms. See Chapter 24.
p326
The day after the riot, Facebook announced it would block Trump from using its services at least until the inauguration two weeks later. The next day, as Trump continued tweeting in support of the insurrectionists, Twitter pulled the plug, too. YouTube, the last major holdout, followed four days later. Most experts and much of the public agreed that banning Trump was both necessary and overdue. Still, there was undeniable discomfort with that decision falling in the hands of a few Silicon Valley executives. And not just because they were unelected corporate actors. Those same executives’ decisions had helped bring the social media crisis to this point in the first place. After years of the industry appeasing Trump and Republicans, the ban was widely seen as self-interested. It had been implemented, after all, three days after Democrats won control of the Senate, in addition to the House and White House.
p327
The letters placed much of the responsibility for the insurrection on the companies. “The fundamental problem,” they wrote to the CEOs of Google and YouTube, “is that YouTube, like other social media platforms, sorts, presents, and recommends information to users by feeding them content most likely to reinforce their existing political biases, especially those rooted in anger, anxiety, and fear.” The letters to Facebook and Twitter were similar. All demanded sweeping policy changes, ending with the same admonition: that the companies “begin a fundamental reexamination of maximizing user engagement as the basis for algorithmic sorting and recommendation.” The language pointedly signaled that Democrats had embraced the view long advanced by researchers, social scientists, and dissident Valleyites: that the dangers from social media are not a matter of simply moderating better or tweaking policies. They are rooted in the fundamental nature of the platforms. And they are severe enough to threaten American democracy itself.
p336
Collectively, the documents (gathered by Facebook employee Frances Haugen) told the story of a company fully aware that its harms sometimes exceeded even critics’ worst assessments. At times, the reports warned explicitly of dangers that later became deadly, like a spike in hate speech or in vaccine misinformation, with plenty of notice for the company to have acted and, had it not refused to do so, possibly saved lives. In undeniable reports and unvarnished language, they
p337
showed Facebook’s own data and experts confirming the allegations that the company had so blithely dismissed in public. Facebook’s executives, including Zuckerberg, had been plainly told that their company posed tremendous dangers, and those executives had intervened over and over to keep their platforms spinning at full speed anyway. The files, which Facebook downplayed as unrepresentative, largely confirmed long-held suspicions. But some went even further. An internal presentation on hooking more children on Facebook’s products included the line “Is there a way to leverage playdates to drive word of hand/growth among kids?”
As public outrage grew, 60 Minutes announced that it would air an interview with the leaker of the documents. Until that point, Haugen’s identity had still been secret. Her interview cut through a by-then years-old debate over this technology for the clarity with which she made her charges: the platforms amplified harm; Facebook knew it; the company had the power to stop it but chose not to; and the company continually lied to regulators and to the public. “Facebook has realized that if they change the algorithm to be safer,” Haugen said, “people will spend less time on the site, they’ll click on less ads, they’ll make less money.”
Two days later, she testified to a Senate subcommittee. She presented herself as striving to reform the industry to salvage its potential. “We can have social media we enjoy, that connects us, without tearing apart our democracy, putting our children in danger, and sowing ethnic violence around the world,” she told the senators.
Throughout, Haugen consistently called back to Facebook’s failures in poorer countries. That record, she argued, highlighted the company’s callousness toward its customers’ well-being, as well as the destabilizing power of platform dynamics that, after all, played out everywhere. “What we see in Myanmar, what we see in Ethiopia,” she said at a panel, “are only the opening chapters of a novel that has an ending that is far scarier than anything we want to read.”
p338
When asked what would most effectively reform both the platforms and the companies overseeing them, Haugen had a simple answer: turn off the algorithm. “I think we don’t want computers deciding what we focus on,” she said. She also suggested that if Congress curtailed liability protections, making the companies legally responsible for the consequences of anything their systems promoted, “they would get rid of engagement-based ranking.” Platforms would roll back to the 2000s, when they simply displayed your friends’ posts by newest to oldest. No A.I. to swarm you with attention-maximizing content or route you down rabbit holes to the unneureal.
Note now that artificial intelligence appears to be a major contributor to the polarization present in our society. Platforms like Facebook, Twitter, YouTube and others use algorithmic programs that learn what keeps users engaged and online, with total disregard for the impact on society – the algorithm is a digital machine, it doesn’t “know” what it’s doing. But the humans who created it did, and they did it anyway, to make money. Thesis 5: The neurons in the human brain have now, for many of us, been rewired to the unneureal, as described in detail with examples above. Ben Franklin said the Constitution gave us a Republic, if we can keep it; to keep it, we need a majority of the public that is informed, synthisophic and neureal. Truth matters.