
The Chaos Machine: The Inside Story of How Social Media Rewired Our Minds and Our World - Max Fisher


Matheus Puppe


  • The author obtained over 1400 pages of internal Facebook documents from an anonymous source called Jacob who worked as a content reviewer for an outsourcing firm contracted by Facebook.

  • Jacob became concerned that Facebook’s secret rulebooks and policies for content moderation were inadequate and failing to curb the spread of hate, extremism, and conspiracies online. He tried raising alarms internally but got no response.

  • Over months in 2017-2018, Jacob observed posts on Facebook and Instagram growing increasingly hateful, conspiratorial and extreme. He believed the platforms were unintentionally amplifying extreme and harmful content.

  • Frustrated by the lack of response, Jacob took the risk of secretly leaking over 1400 pages of Facebook’s internal documents, rules and policies to the author in hopes of raising awareness and pushing the company to address the issues.

  • The documents provide a window into how Facebook thinks about the consequences of social media and issues like misinformation, hate speech, political interference and public harms that have emerged with its rise.

  • The summary focuses on Jacob’s concerns over Facebook’s content policies and moderation efforts as well as his motivations for leaking the internal documents to the author.

The author was given internal documents from Facebook by a whistleblower who took steps to remain anonymous. He then met with several high-level Facebook executives at their headquarters to discuss issues around safety and harmful content on the platform.

The executives gave thoughtful answers about the challenges they face but seemed unaware of how Facebook’s own algorithms and design influence user behavior. They downplayed the idea that the platform itself could cause problems.

The author later learned that Facebook researchers had warned internally about how the platform spreads divisive content to keep people engaged. Executives dismissed these findings. After his visit, more scandals emerged around Facebook’s role in spreading misinformation and extremism. Independent audits confirmed the platform was driving people towards echo chambers.

While the companies faced backlash, their business model of maximizing engagement remained the same. A few early critics tried to sound the alarm about social media’s impacts but were dismissed. Over time more evidence accumulated showing the technology has far-reaching psychological and social consequences that were initially underestimated.

  • Renee DiResta noticed the anti-vaccine movement gaining traction online through social media platforms like Facebook and YouTube. She saw groups spreading misinformation and stoking outrage.

  • When she tried to organize pro-vaccine groups on Facebook, she found the platform’s algorithms and recommendation systems actually promoted anti-vaccine content more. Searching for vaccines on Facebook returned mostly anti-vaccine results.

  • DiResta realized the platforms were optimizing for engagement, not accuracy. Conspiracy theories and outrage promoted more online activity than calm discussions. This could amplify extreme, misleading views over mainstream ones.

  • She was concerned that, left unchecked, this dynamic could spill over into politics and society more broadly. It revealed a structural problem with how social platforms were designed and incentivized. Others noticed similar issues across different online communities.

  • DiResta’s observations and concerns about social media’s impacts led her to study how such platforms could be exploited by bad actors like propaganda groups and hostile foreign powers seeking to manipulate public debate online.

  • Jenny Brennan began fighting anti-vaccine misinformation online in California around 2015, when the problem was still small but growing rapidly due to algorithms on Facebook and YouTube.

  • She soon realized the issue was deeply entrenched and difficult to overcome due to the basic business models and cultural values of the tech industry, which prioritize engagement and growth over harmful effects.

  • The “American Galápagos” refers to Silicon Valley developing in isolation from the rest of the country after World War II, due to factors like the military-industrial complex relocating there. This led to unique cultures and practices like venture capital funding.

  • Early pioneers like William Shockley established the region as a center for semiconductors and talent. But his arrogant leadership style also set a precedent for problematic corporate cultures. Venture capital funding kept talent and money located there.

  • By the 2000s, social media giants had gained enormous indirect influence over people’s lives through their platforms. But the origins of these problematic effects can be traced back to the isolated development of Silicon Valley.

  • Facebook was struggling to grow after expanding outside of colleges. Mark Zuckerberg turned down a $1 billion acquisition offer from Yahoo in 2006.

  • Zuckerberg introduced the “News Feed” feature, which showed users a continuous stream of updates from their friends on Facebook. This was controversial as it felt like an invasion of privacy to some.

  • Groups formed protesting the News Feed, but the feature also drove massive engagement increases on Facebook. Time spent on the site and user growth exploded.

  • Early Facebook executives like Sean Parker acknowledged they deliberately designed features to exploit vulnerabilities in human psychology and tap into dopamine rewards to drive “social validation feedback loops.”

  • The goal was to consume as much of users’ time and attention as possible through these dopamine hits from notifications, likes, comments etc. This kept users contributing more content and engaging more on the platform.

  • Some compared Facebook’s strategies to slot machines and other habit-forming technologies that use conditioning and variable rewards to create compulsive, addictive behaviors in users. This helped Facebook rapidly grow while drowning out competitors.

  • Dopamine is released in the brain when we engage in rewarding behaviors like social validation through likes, follows, comments on social media. This makes the behavior addictive by associating the platform with social rewards.

  • Platforms use variable intermittent reinforcement, like slot machines, where validation is unpredictable, making it hard to stop checking for more. This drives compulsive, addictive usage patterns.

  • While social media aims to satisfy our evolved need for social connection, the platforms actually manipulate this through features that cater to our “sociometer” - the unconscious gauge of how accepted we are by others.

  • The like button in particular hooks into this deeply by providing unprecedented public validation and social proof at scale. It releases dopamine and activates the brain’s reward center. Even the like count drives compulsive behavior by quantifying social approval or standing.

  • Overall, social media platforms design addictive experiences by exploiting fundamental human psychology around social validation, bonding, and our innate drive to maximize social standing within our communities. This leads to compulsive, habitual usage that does not ultimately satisfy and can undermine well-being.

  • This section discusses how social media platforms exploit and activate users’ social identities for engagement and profit. Expressing and sharpening one’s identity is actively encouraged because it drives user behavior.

  • Social identity theory research shows humans have a strong innate drive to bond with their own groups and distinguish themselves from others. Even arbitrary labels can prompt favoritism toward one’s own group. This creates tendencies toward distrust, fear and hostility toward outsiders.

  • Social media surfaces these social identity instincts in extreme ways by making every interaction social. Content that pits groups against each other, like portraying one’s group favorably against a reviled outgroup, is highly shareable. Early viral media companies like Upworthy and Buzzfeed figured this out.

  • Headlines leveraging identity conflicts, like liberals vs conservatives, drove high engagement. But this encouraged more hyperpartisan, misleading and outright fake content as others sought to profit from identity-based outrage and conflict. The consequences extended beyond platforms as polarization increased in the real world.

  • Warnings were coming from places experiencing unrest, where social media was exacerbating existing identity tensions almost overnight. But these warnings received little attention from platforms focused on growth and engagement above all else.

  • In 2014, a programmer named Eron Gjoni publicly posted a lengthy blog post detailing his breakup with video game developer Zoë Quinn, including private messages and emails. He falsely claimed she traded sex for a positive game review.

  • Quinn was an advocate for broadening gaming beyond its young male demographic. Some gamers resented this push for inclusion and saw it as threatening their identity.

  • Gjoni’s post resonated with these users and his allegations spread widely online. This sparked an intense backlash known as Gamergate, targeting Quinn and other women in gaming. It marked a turning point where social media became a force for widespread harassment and online outrage.

  • Companies like Facebook had rushed to connect developing countries like Myanmar without proper safety measures. Hate speech and misinformation spread rapidly, leading to real-world violence. The Gamergate episode showed similar dynamics emerging in Western online communities.

  • Subsections of 4chan and Reddit embraced Zoe Quinn’s ex-boyfriend’s claims against her as validation of their distrust in mainstream media. This set the narrative for millions of users on these platforms.

  • Gjoni’s post was seen as encouraging collective harassment. A judge later barred him from writing more about Quinn, as the harassment had the effect he seemingly intended. 4chan users discussed harassing Quinn to make her life “irreparably horrible” or force her to kill herself.

  • Quinn was inundated with hundreds of threatening and harassing messages online, including threats against her family. Personal details like her SSN were also posted.

  • Anger over the supposed scandal, dubbed “Gamergate,” took over large parts of 4chan, Reddit, and YouTube. It targeted Quinn and others who spoke out in her defense. Brianna Wu and other women in the gaming industry faced severe harassment campaigns.

  • Gamergate had wide-ranging consequences, altering lives directly targeted and sending the extremes of online communities into broader public life. It launched a new form of antagonistic, socially-mediated politics that influenced later movements like the alt-right and Trumpism.

  • In the 1990s, the computer industry was growing beyond niche hobbyists and becoming more mainstream. Companies like Apple helped popularize the personal computer and sell the image of the engineer as a countercultural revolutionary dismantling power structures.

  • Early digital networks and forums like The WELL helped shape an explicitly libertarian and anarchist ideology among Silicon Valley engineers. They saw themselves as fulfilling the revolutionary promises of 1960s counterculture. This ideology emphasized total freedom of speech and a lack of regulation on these new online spaces.

  • Founders like Stewart Brand and David Clark promoted a vision of “cyberspace” as its own anarchic civilization governed by its users rather than governments. This ideology was enshrined in documents like John Perry Barlow’s “A Declaration of the Independence of Cyberspace.”

  • Companies like Facebook saw themselves as continuing this revolutionary ambition by using their platforms and technologies to “fundamentally rewire the world.” However, this radical libertarian ideology and lack of oversight would ultimately prove problematic as these services gained widespread popularity and influence.

  • In the early days of social media and online communities in the 2000s, many were drawn to anonymous forums like 4chan due to the lack of censorship and rules. This allowed for creative memes and pranks but also darker behavior.

  • A teenager named Adam, who struggled with anxiety and depression, found belonging on 4chan when he was first drawn in by users coordinating to get justice for an abused cat. He spent many hours on the site and enjoyed the anonymous yet social experience.

  • However, 4chan and similar sites also fostered anti-social behavior and extremism over time. As a Jewish person, Adam was uneasy about the prejudice he encountered on parts of the site. While he valued the community aspects, he was aware of how the culture could influence people in harmful real-world ways.

  • Anonymous forums set a precedent for less regulated online spaces that enabled both creativity and anonymity but also seeded the growth of harassment, conspiracy theories, and the normalization of intolerance for some users over time.

  • 4chan started as a site for sharing funny videos and images but also tended to be transgressive and push social limits. Pranks became a defining early web activity.

  • Trolling emerged, meaning posting comments to annoy or disrupt others. It often became targeted abuse meant to delight in others’ distress. The goal was to provoke extreme reactions for “lulz” or laughs.

  • Notable trolling incidents included hijacking a Taylor Swift contest and tricking Oprah into making an odd on-air statement. Things escalated into outright sadism like mocking a teenage boy’s suicide.

  • The harassment reached new extremes when users targeted an 11-year-old girl who had falsely claimed a relationship with an adult man. Her emotional videos in response were celebrated and mocked on 4chan. Years later, she said the man had actually raped her as a child.

  • Some saw trolling as a way to question authority and teach skeptical thinking. But it also spread bigotry and allowed extremism to grow, as seen in the campaign against tech blogger Kathy Sierra that drove her from public life. Figures like Andrew Auernheimer pushed increasingly disturbing content.

  • Auernheimer and his hacking group were celebrated by some in tech circles for exposing an iPad vulnerability, though he later joined a prominent neo-Nazi forum.

  • 4chan founder Christopher Poole imposed some light restrictions, like confining extreme content to designated sections, but this had the unintended effect of attracting more users to those sections and deepening a culture of defiant outsiderdom.

  • As platforms like Facebook, YouTube and Twitter rose in popularity, chan culture and memes spread there through engaged users. The algorithms and features of these platforms absorbed extreme tendencies and amplified them widely.

  • The video game industry initially targeted young men after a crash in the 1980s led Japanese companies to market games as toys in a polarized boy/girl aisle. This cultivated gaming as an identity for some rooted in resistance to evolving gender norms.

  • When social networks appeared, gamers were an early adopter audience that platforms mimicked. But diversity in games later threatened some gamers’ sense of identity, leading to the Gamergate backlash and radicalizing some users through endless outrage-driving content.

  • Ellen Pao was hired to help professionalize Reddit which was dominated by young male geeks and 4chan-inspired culture. As an outsider she faced an unwelcoming environment.

  • Reddit allowed users to submit content that others could vote up or down, with the most popular rising to the top. This format became hugely popular but also enforced a majority-driven culture.

  • The upvoting system tapped into human desires for validation and dopamine-driven reward, making the site habit-forming. But it also pushed content and discussions to extremes as dissenting voices were drowned out.

  • Platforms like Reddit and social media were opening a “portal” connecting early internet communities to the mainstream public, potentially exporting their radical cultures. Events like Gamergate seemed inevitable as these audiences interacted with algorithms optimized for engagement.
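
A minimal sketch of the ranking dynamic described above, purely illustrative (the posts, scores, and function names are invented, not Reddit’s actual code): when a feed is sorted by net votes alone, whatever the majority of voters rewards crowds out everything else.

```python
from dataclasses import dataclass

@dataclass
class Post:
    title: str
    upvotes: int
    downvotes: int

    @property
    def score(self) -> int:
        # Net score is the only signal a purely popularity-driven feed uses.
        return self.upvotes - self.downvotes

def rank_feed(posts: list[Post], page_size: int = 3) -> list[Post]:
    """Sort by net score; only the top of the page gets meaningful visibility."""
    return sorted(posts, key=lambda p: p.score, reverse=True)[:page_size]

posts = [
    Post("Outrage-bait headline", upvotes=950, downvotes=120),
    Post("In-joke meme", upvotes=600, downvotes=30),
    Post("Calm factual correction", upvotes=80, downvotes=10),
    Post("Dissenting minority viewpoint", upvotes=40, downvotes=55),
]

for post in rank_feed(posts):
    print(post.score, post.title)
# Dissenting or low-engagement posts never surface, so the visible "consensus"
# reflects whatever tastes dominate the voting population.
```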

  • Ellen Pao became CEO of Reddit in late 2014 after the previous CEO resigned amid controversies over allowing nude leaked celebrity photos and toxic harassment on the site.

  • Pao saw an opportunity to reform Reddit and make it more inclusive, as her background made her sensitive to issues of gender discrimination in tech. She wanted to curb extreme hate, harassment, and behaviors that made the site toxic.

  • She started by banning nude photos without consent, to curb “revenge porn.” While some users pushed back, most accepted the change.

  • Emboldened, Pao then announced banning users and communities responsible for extreme hate or harassment. This represented a seismic shift away from Reddit’s hands-off approach and toward moderating unacceptable behavior.

  • Pao aimed to clean out toxic subcultures and impose protections for women and minorities, bringing real inclusivity to what had been a platform dominated by young white men. However, her reforms provoked intense backlash from users committed to Reddit’s culture of free speech.

  • Milo Yiannopoulos was a relatively obscure technology writer who gained prominence covering Gamergate for Breitbart. His inflammatory articles stoked grievances aligned with the site’s far-right views.

  • Through Gamergate, Yiannopoulos tapped into a large new online audience and became a leading voice of the emerging “alt-right” movement. He amplified extreme perspectives through provocative journalism.

  • Steve Bannon saw potential to activate this online “army” and bring them into mainstream conservative politics. Yiannopoulos became the public face of the alt-right through attention-grabbing social media posts.

  • Yiannopoulos collaborated with figures like Andrew Auernheimer to author a defining guide to the alt-right for Breitbart, portraying it as a youthful internet subculture opposing social norms.

  • Through Yiannopoulos and online discourse, extremist terminology and framings from sites like 4chan spread into mainstream conservative and political discussions on social media. This influenced the tone and issues of the emerging 2016 election.

  • In September 2015, American dentist Walter Palmer broke his silence about being targeted in a global online movement of outrage after killing Cecil the lion in Zimbabwe on a guided hunt.

  • The story of Cecil’s death had gained viral attention on Reddit when a user posted it with an emotionally provocative title. Thousands upvoted it and comments escalated the emotional stakes, portraying the hunter as a coward, murderer and psychopath.

  • Reddit’s incentives and formats sorted comments by popularity, amplifying expressions of outrage. Reporters then chased the story’s newfound online virality, revealing the hunter as Walter Palmer and unleashing a torrent of rage against him and his associates on a global scale through social media.

  • This was one of the first instances of a new kind of mass, life-altering online outrage and mob behavior that would soon become more common, demonstrating how web platforms could incentivize and amplify emotionally extreme content for viral spreading of stories.

  • Billy Brady realized as a college freshman that he enjoyed getting outraged on Facebook by posting inflammatory content and getting into arguments. However, this behavior didn’t align with his values of persuasion and activism.

  • His studies in moral philosophy and psychology provided insight into why people are drawn to harmful, negative emotions like outrage. Expressing outrage online gets attention and social feedback in the form of likes and shares, tapping into a deep human desire to conform to social norms.

  • Moral outrage is an evolutionary adaptation that developed in small tribes. It functions to enforce group norms - when someone violates a code of behavior, outrage compels others to join in shaming and possibly punishing the transgressor. Even infants display this behavior.

  • Brady came to understand moral outrage through the philosophical theory of sentimentalism - our sense of morality is intertwined with and driven by emotional responses, rather than purely rational thinking. Neurological research supports this view that social emotions underlie our experience of morality.

  • In summary, Brady realized through his own online experiences and studies that people are highly attuned to social feedback on platforms, and expressing moral outrage fulfills deep human desires for attention, conformity and reputation within groups.

  • Social media outrage mobs form quickly in response to perceived moral transgressions, often before the facts are fully known. They seek to publicly shame and punish individuals through widespread criticism and calls to action.

  • However, outrage mobs can spiral out of control and disproportionately harm targets. Several examples are given of incidents where social media users far exceeded the scope of the original issues by doxxing, threatening, and pressuring employers to fire people.

  • Once outrage spreads online, it takes on a life of its own and mobs become motivated by amusement or the shaming itself rather than proportional justice. Targets are often humiliated far beyond what their actual actions deserved.

  • The low costs and anonymity of online outrage allow natural restraints on public shaming to be removed. Mobs no longer need to carefully consider proportional responses and accountability. This has changed how social norms are enforced.

  • Several writers and thinkers observe that online outrage has grown crueler and more likely to destroy lives even when the initial accusations prove false. Serious issues around proportionality, due process and accuracy have emerged.

  • Lyudmila Trut conducted a decades-long experiment breeding foxes in Siberia to study animal domestication. She selectively bred the friendliest foxes and noticed physiological changes across generations, like shorter tails and floppier ears.

  • Trut discovered domestication is caused by changes to neural crest cells, which affect behaviors like fear and aggression. Friendlier foxes had fewer neural crest cells.

  • This helped explain the mysterious shrinking of human brains and anatomy 250,000 years ago - it reflected self-domestication through language and cooperation.

  • Anthropologist Richard Wrangham theorizes language allowed early humans to conspire against aggressive “alpha males” through gossip. Cooperative males reproduced more, selecting for docility.

  • However, humans also selected for “moral outrage” as an enforcement mechanism for their new consensus-based societies without strong leaders. Outrage could be turned against rule-breakers to impose order through mob-like “proactive aggression.”

  • So while breeding out aggression toward each other, humans paradoxically bred in collective aggression used to enforce shared moral codes and conformity through mob-like punishment of transgressors. This “tyranny of the cousins” dynamic still influences some hunter-gatherer societies today.

  • The passage describes an incident in Central Park in May 2020 where a Black birdwatcher named Christian Cooper asked a white woman named Amy Cooper to leash her dog as required by park rules.

  • When she refused, he began recording a video of the interaction on his phone. She threatened to call the police and tell them an “African American man was threatening her life.”

  • Christian’s sister posted the video on Twitter, where it rapidly gained millions of views. Social media users loudly condemned Amy’s actions as racism and an attempt to weaponize police against Christian.

  • Amy was swiftly fired from her job, her personal information was widely shared online, and she faced intense backlash and social isolation. She also surrendered her adopted dog amid the outcry.

  • While social media brought deserved justice and attention to the racist incident, some noted it may have carried the punishment too far and expressed unease with the online mob behavior and level of personal information sharing/doxxing against Amy. Christian himself was also somewhat ambivalent about the outcome.

  • Guillaume Chaslot, an AI specialist at Google, tried to shed light on how algorithms govern social media platforms, as their inner workings were opaque.

  • In the early 2010s, Chaslot joined Google’s YouTube team to help improve the platform. Their goal was to maximize “watch time” - keeping users engaged and spending as much time as possible watching videos.

  • Up till then, Google’s approach emphasized usefulness and brevity. But Cristos Goodrow, running the YouTube project, argued they should promote long-form, entertaining videos even if they weren’t the most direct answers, to maximize watch time.

  • This “watch time” focus would have major consequences by incentivizing all platforms to optimize for keeping users engaged through any means, even if it spread misinformation or outraged users. It prioritized profits over social responsibility.

  • Chaslot tried to understand YouTube’s algorithms but faced resistance, as companies had incentives to stay ignorant of how their recommendation systems really worked and any negative impacts. This opened the door for manipulation without oversight.

  • Guillaume Chaslot worked on YouTube’s search and recommendation algorithm to maximize watch time. This led users down “rabbit holes” of increasingly extreme content related to their interests.

  • The algorithm learned that outrage and tribal videos around topics like anti-feminism kept male viewers engaged. It started heavily recommending these types of videos.

  • Chaslot became concerned about how this was impacting political and social issues. Using Google’s “20% time” policy, he worked on a side project to develop an algorithm that balanced watch time with public well-being.

  • YouTube’s leadership held a conference where they announced a bold new goal: to increase daily watch time tenfold. This set YouTube on a path of intensive optimization of its algorithms to hook users and keep them watching for as long as possible. Chaslot worried this could have unintended consequences, as the algorithms guided users and debates online.
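
A toy illustration of the objective shift described above, assuming a recommender that scores candidates purely by predicted watch time; the titles and numbers are invented, and this is not YouTube’s actual system.

```python
# Toy recommender: rank candidate videos purely by expected watch time,
# i.e. probability of a click times how long the viewer is likely to stay.
# Accuracy, usefulness, and downstream harm never enter the objective.
candidates = [
    {"title": "Two-minute factual explainer", "p_click": 0.30, "avg_minutes": 2.0},
    {"title": "45-minute conspiracy deep dive", "p_click": 0.22, "avg_minutes": 38.0},
    {"title": "Calm mainstream news clip", "p_click": 0.25, "avg_minutes": 4.0},
]

def expected_watch_time(video: dict) -> float:
    return video["p_click"] * video["avg_minutes"]

ranked = sorted(candidates, key=expected_watch_time, reverse=True)
for video in ranked:
    print(f'{expected_watch_time(video):5.2f}  {video["title"]}')
# The long, engrossing conspiracy video wins even though fewer people click it,
# because the metric being maximized is time on site, not quality of information.
```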

  • The concepts of “filter bubbles” and “echo chambers” were circulating in Silicon Valley as ways to describe the concerns about how personalized recommendations and algorithms were impacting dialogue and the spread of ideas online.

  • Eli Pariser spoke at a tech conference in 2011 warning that algorithms could threaten democracy by creating “filter bubbles” that only show users content matching their biases.

  • He noticed his Facebook feed shifted to mostly liberal posts after he interacted more with liberal content, likely due to the algorithm.

  • Research showed algorithms rearranging search results could influence up to 20% of undecided voters by giving psychological weight to higher-ranked results.

  • At YouTube, Guillaume Chaslot tried developing an alternative algorithm focused on users’ interests rather than exploiting their impulses, but his managers shut it down, dismissing it as a mere “20% project,” and he was eventually fired in 2013.

  • Chaslot believed YouTube only cared about the metric of watch time and promoting addictive or hateful content that increased it, rather than moderation or kindness. He questioned the tech industry’s metrics-obsessed culture that prioritized growth and efficiency over all else.

  • This culture originated from Intel CEO Andy Grove’s philosophy imparted throughout Silicon Valley of focusing only on quantifiable metrics like speed or time to market to drive 10x growth, without regard to side effects.

  • Renée DiResta noticed the startup investment model changing at Y Combinator’s Demo Day, with founders pitching vague ideas focused on getting users through free services and selling ads later, shown through squiggle charts depicting exponential growth.

  • This was enabled by cloud computing, which eliminated the need for startups to invest in expensive infrastructure upfront. It lowered the bar for starting internet companies.

  • Investors demanded this model as it allowed them to invest in many companies cheaply, knowing most would fail but a big success would cover those losses. Social media especially delivered on attracting large user bases.

  • The payoffs could be huge, like Instagram, which returned roughly 100x for its investors. But this model incentivized startups to focus on quick growth and acquisitions/IPOs rather than long-term profitability.

  • It also inflated valuations, putting pressure on startups to monetize users aggressively through ads to justify their valuation. This kicked off an “arms race” for limited human attention by fighting to capture more of people’s time on sites.

  • Researchers and platform insiders had raised warnings about algorithmic amplification of controversial or extreme content as early as 2015. Guillaume Chaslot, a former YouTube engineer, witnessed this firsthand on a bus ride in Paris.

  • Chaslot observed a man spending a long time watching YouTube videos as the recommendation system pulled him into ever deeper conspiratorial topics. As an engineer, Chaslot initially thought this was good for engagement, but then realized the human consequences could be harmful.

  • When Chaslot questioned the man about one conspiratorial video, the man insisted it revealed secret truths. This showed how the recommendation system could pull users deeper into misleading or false information without oversight for accuracy.

  • Meetings in 2015 between researchers tracking online trends and US government officials revealed growing concerns about how platform algorithms amplified jihadist messaging but also Russian propaganda efforts. However, the platforms were not compelled to address these issues at the time.

  • Researchers and insiders had identified the problem of algorithmic amplification and manipulation years before the broader public recognized its real-world consequences. But the platforms prioritized growth metrics over auditing recommendation systems for potential harms.

  • Guillaume Chaslot noticed how YouTube’s recommendation algorithm was repeatedly recommending conspiracy theory videos to people, even when they didn’t intentionally search for them. The repetition from the recommendations helped convince people the conspiracies might be true due to the illusory truth effect.

  • When Chaslot tracked YouTube recommendations, he found a high percentage of recommended videos about certain topics like Pope Francis or climate change were conspiracy theory videos. This showed the algorithm was directing people to misinformation.

  • Other Google and tech company employees like Tristan Harris and James Williams had also realized social media platforms could manipulate users’ attention and thoughts through things like notifications, recommendations, and algorithms optimized for engagement. They tried to warn companies and the public of this.

  • Renee DiResta warned Google at a conference that their algorithms were influencing policy by promoting misinformation that impacted things like vaccine acceptance and responses to the Zika virus. But the company did not seem too concerned about these issues at the time.

  • When Twitter introduced algorithmic sorting of tweets, it favored inflammatory, provocative content in a way that changed the conversation on the platform and seemed to make users angrier. The example of the Tay AI also showed how algorithms could be radicalized by their inputs.

  • Chaslot and others were concerned about how the vast scale and “black box” nature of social media algorithms were promoting misinformation and manipulating users, but platforms did little in response at that point.
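
The kind of external audit Chaslot ran can be sketched roughly as follows: start from a seed video, repeatedly follow the top recommendation, and count how often the chain lands on flagged content. The recommendation graph below is an invented stand-in; his real tool scraped YouTube’s live “Up next” suggestions.

```python
# Simplified audit of a recommendation chain: follow the top suggestion N hops
# from a seed video and measure how often the chain ends in flagged content.
# The graph below is an invented stand-in for scraped "Up next" data.
RECOMMENDS = {
    "pope_francis_news": ["pope_secret_plot", "vatican_tour"],
    "vatican_tour": ["pope_francis_news"],
    "pope_secret_plot": ["deep_state_expose"],
    "deep_state_expose": ["pope_secret_plot"],
    "climate_report": ["climate_hoax_proof"],
    "climate_hoax_proof": ["deep_state_expose"],
}
FLAGGED = {"pope_secret_plot", "deep_state_expose", "climate_hoax_proof"}

def follow_chain(seed: str, hops: int = 3) -> list[str]:
    chain, current = [], seed
    for _ in range(hops):
        suggestions = RECOMMENDS.get(current, [])
        if not suggestions:
            break
        current = suggestions[0]   # always take the top recommendation
        chain.append(current)
    return chain

for seed in ["pope_francis_news", "climate_report"]:
    chain = follow_chain(seed)
    flagged_share = sum(v in FLAGGED for v in chain) / max(len(chain), 1)
    print(seed, "->", chain, f"({flagged_share:.0%} flagged)")
```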

  • In late 2016, Renée DiResta noticed large Facebook groups gaining tens/hundreds of thousands of members spreading the “Pizzagate” conspiracy theory that prominent Democrats were involved in a child sex ring.

  • The theory originated on 4chan and spread to Reddit forums and then Facebook. Benefiting from YouTube videos and fake news articles discussing it, it gained widespread traction online in the weeks before the 2016 election.

  • By election time, 14% of Trump supporters believed some version of the conspiracy. While many dismissed it as internet weirdness, it demonstrated how conspiracy theories could spread widely online.

  • After Trump’s unexpected win, many at YouTube and Facebook expressed concerns that their platforms had amplified misinformation and partisan content, helping shape the election’s outcome. However, the companies publicly rejected this view and deflected scrutiny over their role.

In short, the “Pizzagate” conspiracy theory originated and spread online in late 2016, raising internal doubts at tech companies about their role in the election result even as they publicly downplayed such concerns.

  • After the 2016 election, Mark Zuckerberg claimed fake news on Facebook did not influence the outcome and the company was just a platform, not responsible for content.

  • Renee DiResta criticized this from her hospital bed, arguing the real issue was how algorithms promoted radicalization, polarization, and distorted realities for some users.

  • Other tech critics agreed the platforms may have helped elect Trump by prioritizing “engagement” over truth. Research found YouTube recommended strongly pro-Trump biased videos.

  • Edgar Welch was radicalized by Pizzagate conspiracy videos on YouTube and fired an assault rifle inside a Washington, DC pizzeria he believed was at the center of the conspiracy. This highlighted the real-world consequences.

  • Psychological researcher William Brady found on Twitter that tweets with “moral-emotional words” spread much further, benefiting Trump’s style, and also increased polarization by reducing cross-ideological reach. This provided evidence social media can systematically boost divisions and outrage-driven politics.
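
A rough sketch of the measurement behind that finding, with an invented mini-lexicon and invented tweets standing in for the real data: count moral-emotional words per message and compare how far messages with and without them spread.

```python
# Rough version of the measurement: count moral-emotional words per tweet and
# compare average retweets. The lexicon and tweets are invented placeholders.
MORAL_EMOTIONAL = {"disgrace", "evil", "shame", "corrupt", "betray", "hate"}

tweets = [
    ("The committee released its annual budget report today", 12),
    ("This corrupt deal is a disgrace and a betrayal of voters", 340),
    ("New bike lanes open downtown next week", 8),
    ("They should be ashamed of this evil, corrupt cover-up", 510),
]

def moral_word_count(text: str) -> int:
    words = {w.strip(".,!?").lower() for w in text.split()}
    return len(words & MORAL_EMOTIONAL)

with_words = [rt for text, rt in tweets if moral_word_count(text) > 0]
without = [rt for text, rt in tweets if moral_word_count(text) == 0]
print("avg retweets, with moral-emotional words:", sum(with_words) / len(with_words))
print("avg retweets, without:                   ", sum(without) / len(without))
```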

  • In early 2017, many agreed that social media platforms like Facebook and Twitter distorted and bent reality, making certain behaviors seem more common and accepted than they really were. However, no one had figured out how to measure the precise effects on billions of users.

  • Twitter was seen as particularly problematic due to President Trump using it extensively, which meant journalists and others had to spend more time on the platform encountering trolls, controversies, and misinformation. Its structure also meant all users largely experienced the same issues, making problems seem systemic.

  • Twitter announced a focus on curbing hate and harassment as its user growth stalled. While some saw progress in removing extreme content, Twitter’s problems were ultimately less consequential than those of larger platforms because of its smaller size.

  • After the 2016 election, larger platforms like Facebook initially took steps to address foreign propaganda and misinformation concerns. However, they struggled to balance free speech with banning disinformation, and ultimately focused on connecting diverse groups, believing this could address divisions, though research showed this often made divisions worse.

  • There was a sense that Silicon Valley had not truly learned lessons and was overconfidently focused on technical solutions rather than addressing fundamental issues around content and radicalization enabled by their platforms.

  • The passage discusses the challenges of analyzing how social media platforms spread misinformation and polarized political discourse. It focuses on the work of several researchers investigating Russian influence operations on platforms like Facebook and Twitter.

  • Renee DiResta and her team of analysts tried to map the Russian campaign but were limited without access to full platform data. She connected with others raising concerns about social media’s effects, like Tristan Harris.

  • Pressure from Congress led the platforms to disclose some data on Russian influence campaigns, but DiResta pushed for the full internal archives. Months later, a massive 400GB dataset was handed over to Senate staffers, who asked DiResta to lead analysis and produce a report. She took on the task full-time, seeing it as a unique opportunity to understand modern information operations at scale.

  • The key ideas are that social media design intrinsically promotes polarization, and DiResta’s team played an important role in investigating Russian interference while pushing platforms for transparency and accountability. Their analysis of the internal dataset had potential to reveal how propaganda spread on major platforms.

  • Guillaume Chaslot conducted an experiment tracking recommendations on YouTube during the French presidential election and found that the algorithm favored candidates from the far-right and far-left extremes (Marine Le Pen and Jean-Luc Mélenchon).

  • Mélenchon in particular won millions of views on YouTube, where his most dedicated fans seemed to congregate, even though he was unpopular with voters overall.

  • This benefited fringe radicals who spent disproportionate time on the platform, as the algorithm learned to push similar content and fans to those videos, driving up watch time and followers in a reinforcing loop.

  • Chaslot’s findings suggested social media elevated anti-establishment politicians who used exaggerated moral-emotional language, potentially threatening political stability globally.

  • Though YouTube disputed his methodology, Chaslot argued his conclusions were consistent across thousands of data points, and he called on the company to release internal data to settle the question or investigate the issues. YouTube continued to stonewall.

  • Over subsequent years, more researchers published further findings supporting Chaslot’s work using more sophisticated methods, indicating it was not an isolated observation.

  • YouTube consistently denied, discredited and antagonized reports about problems with its platform spreading misinformation and harming civic discourse. It attacked the credibility of researchers exposing these issues.

  • When major stories were published, YouTube would paradoxically claim it had already fixed problems it previously dismissed. It sought to portray one researcher, Chaslot, as untrustworthy in retaliation for his work.

  • Researchers Brady and Crockett achieved a breakthrough in understanding social media’s effects. Their “MAD model” described how platforms reshape social motives and attention at both individual and collective levels, altering civic engagement, political polarization and the spread of misinformation.

  • Key parts of their findings showed how moral-emotional content manipulates users’ attention and rewards expressions of outrage, promoting internalization of more extreme, divisive and polarized views over time. This distortion can nudge whole societies toward greater conflict, polarization and unreality.

  • The platforms’ economic incentives and design shape users’ experiences in ways that may not align with users’ own interests or social well-being. Researchers were only beginning to understand the far-reaching consequences on societies, politics and humanity.

  • In 2017, Myanmar military forces launched brutal attacks against the Rohingya Muslim minority in Rakhine state, killing thousands and forcing over 700,000 to flee to Bangladesh as refugees.

  • Soldiers and local Rakhine Buddhist men attacked Rohingya villages, setting houses on fire, shooting and killing men, and raping women. Many refugees described horrifying acts of violence and murder.

  • Anti-Rohingya sentiment in Myanmar had been growing for decades, depicting them as illegal immigrants despite many having lived there for generations. The military and politicians used them as a scapegoat to stir nationalist fervor.

  • In recent years, hate speech and misinformation against Rohingya spread wildly on Facebook. Extremist Buddhist nationalists like Wirathu used it to stoke fears of Muslims and blame Rohingya for violence. This radicalized many Burmese and helped justify the military’s genocidal campaign.

  • Warnings were issued to Facebook about the dangerous speech as early as 2015, but the company did not take sufficient action, fueling the radicalization and enabling calls for violence to spread unhindered on its platform in the country.

  • Facebook launched “Free Basics” in Myanmar, allowing free access to Facebook. Within months, 38% of people got most or all their news from Facebook.

  • As ethnic violence worsened in Myanmar, a human rights worker warned Facebook twice that the platform was enabling mass violence, but Facebook did not change anything.

  • Villagers credited Facebook for giving them “true information” and said Muslims were not welcome as they were “violent and multiply crazy.” Extremist pages spreading such views remained active during the killings.

  • According to a UN investigation, social media, especially Facebook, played a “determining role” in enabling the Myanmar genocide. Had a single engineer switched the platform off, massive violence and death might have been prevented.

  • Facebook executives repeatedly dismissed warnings about the platforms enabling violence in Myanmar and other countries like India and Indonesia, prioritizing growth over potential harms.

  • The journalist traveled to Sri Lanka after contacts warned of social media-linked violence, and found a family furious over Facebook rumors blaming Muslims for a relative’s death, indicating social media had inflamed tensions.

  • The story describes how anti-Muslim rumors and videos spread on Facebook incited mob violence against Muslims in Sri Lanka in 2018.

  • A viral rumor linked Muslims to sterilization pills, which led a mob to attack a Muslim-owned restaurant in the town of Ampara. Video of the attack was then shared on Facebook, further inciting violence.

  • Extremist posts openly calling for genocide and violence against Muslims proliferated on Facebook. Researchers warned Facebook but the company did not take action to remove policy-violating content.

  • The spread of misinformation and hate on Facebook was seen as driving Sri Lanka’s slide into communal violence and chaos between Sinhalese and Muslims. The platforms’ policies of rapid expansion without proper monitoring contributed to the dangerous situation.

  • Researchers had warned Facebook for years about hate speech on the platform in Sri Lanka but the company failed to take sufficient action, despite its policies prohibiting incitement to violence. This indicated a disconnect between Facebook’s rules and its enforcement.

  • In the months before the riots, Sri Lankan government officials met with Facebook and pleaded with the company to better police the hate speech and misinformation spreading on the platform, which they said was fueling tensions and could lead to violence like that in Myanmar.

  • Facebook representatives said officials should use the reporting tool, but this was ineffective as their reports were largely ignored. Facebook also did not provide a direct contact for officials to flag dangerous content.

  • In early 2018, anti-Muslim posts and rumors spread on Facebook in the lead-up to deadly riots targeting Muslims. The extremist Amith Weerasinghe used Facebook to spread hate and coordinated violence via private WhatsApp groups.

  • Officials’ repeated reports to Facebook about this dangerous content went unanswered. The violence was only stopped when Sri Lanka blocked social media access.

  • Facebook representatives then contacted officials, but only to ask about the traffic loss, not about addressing the role their platform played in the violence. Victims like Abdul Basith had lost their lives due to content spread on Facebook.

  • Journalists could not get Facebook to remove the video that helped spark the violence, despite continued requests. A meeting between Facebook policy staff and Sri Lankan ministers took place after the blocking of social media.

  • Gema Santamaría, a scholar who has studied vigilante violence in Mexico, drew parallels between social media and the historical role of church bells in facilitating lynchings.

  • Social media reproduces mechanisms by which communities are worked up into collective violence, spreading rumors that activate a sense of threatened status among dominant groups.

  • The rumors often involve reproduction/population and target minorities to confirm beliefs that the group’s dominance is at risk from changes in demographics, norms, or minority rights. This taps into the psychological phenomenon of “status threat”.

  • When dominant groups feel their status is threatened, it sparks a ferocious reaction and obsession with portraying minorities as dangerous. Social media amplifies this by making group identities more salient than individual identities online.

  • This creates an environment prone to “deindividuation” where people lose their sense of self and act based more on tribal mob mentality in response to spread of rumors online. In several countries, this pattern has preceded outbreaks of real-world violence.

  • Researchers in Germany studied over 3,000 anti-refugee attacks between 2015 and 2017 and found a correlation between higher levels of Facebook usage in a town and more attacks on refugees. They estimated Facebook usage accounted for around 10% of all anti-refugee violence.

  • The town of Altena, Germany experienced a rise in anti-refugee sentiment spread through local Facebook groups, even as most residents were initially supportive of refugees. A young man named Dirk Denkhaus attempted to burn down a refugee housing facility after becoming radicalized online through racist memes and jokes with a friend.

  • Prosecutors said Denkhaus’s slide into extremism began as ironic jokes on Facebook that gradually became sincere beliefs over about 6 months through a process known as “irony poisoning” - where exposure to objectionable content online can help normalize and internalize extremist ideologies through desensitization.

  • Officials saw Denkhaus as representative of a broader trend where social media exacerbated anti-refugee views in Altena. The mayor was also later stabbed by an attacker reportedly enraged by the town’s pro-refugee policies and active in local anti-refugee Facebook groups.

In short, social media, particularly Facebook, appears to have contributed to rising anti-refugee sentiment and attacks in Germany by spreading and normalizing extremist views at the local level.
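
A stripped-down version of the town-level comparison the German study describes, using invented numbers: pair each town’s per-capita Facebook usage with its count of anti-refugee incidents and compute a simple correlation (the real study used far richer data and statistical controls).

```python
from math import sqrt

# Invented town-level data: (per-capita Facebook usage index, anti-refugee incidents).
towns = {
    "Town A": (0.20, 1),
    "Town B": (0.45, 2),
    "Town C": (0.60, 4),
    "Town D": (0.80, 6),
    "Town E": (0.35, 1),
}

usage = [u for u, _ in towns.values()]
attacks = [a for _, a in towns.values()]

def pearson(xs, ys):
    """Plain Pearson correlation coefficient (descriptive only)."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sqrt(sum((x - mx) ** 2 for x in xs))
    sy = sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

print(f"correlation between Facebook usage and incidents: r = {pearson(usage, attacks):.2f}")
```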

  • The passage discusses “superposters” - highly active social media users who spread and promote certain views through constant posting.

  • It profiles one superposter named Rolf Wassermann from a small German town, who constantly posted opinions and stories portraying refugees in a negative light on Facebook.

  • Research suggests superposters tend to be more extreme, narcissistic, and seek attention and validation through social media. Their high level of activity shapes what casual users see.

  • When Reddit banned some of its most toxic superusers, hate speech dropped 80%, showing their outsized influence.

  • Studies also show that people’s sense of right and wrong is influenced by what they perceive their peers’ views to be. Superposters effectively become the “social referents” that shape community norms through their prominence on social platforms.

  • This suggests superposters can influence not just what people see but how they think about issues, by setting the tone of online discussions.

  • Social media, especially Facebook, gave many Germans the misleading impression that refugees posed serious threats and that social norms were hostile to refugees. This helped fuel anti-refugee sentiments and attitudes.

  • A teacher in Traunstein, Germany noted that her students had become stridently anti-refugee in recent months, often citing propaganda and misinformation they saw on Facebook as the source. Rumors and false claims about refugees spread quickly on Facebook and convinced people they were true.

  • A local police inspector said Facebook influences people through its algorithm and the spread of misinformation online has become a threat to public safety. His department works to counter false rumors and their influence. He cited an example of a false rumor on Facebook about a refugee rape that caused real-world tensions.

  • Research showed whenever internet access went down in an area with high Facebook usage, attacks on refugees dropped significantly, suggesting Facebook drove some of the anti-refugee hostility and violence indirectly by spreading misinformation and altering social norms.

  • Neo-Nazis rioted in the German city of Chemnitz after an altercation between refugees and locals left a man dead. The riots were organized on social media.

  • Journalists contacted a local official, Sören Uhle, asking about false claims spreading online, like that more people were killed or a woman was attacked. Uhle was surprised by the misinformation.

  • Researcher Ray Serrato analyzed YouTube videos about Chemnitz and found the recommendation algorithm rapidly directed viewers to far-right and conspiratorial content, even from seemingly mainstream starting points.

  • An obscure far-right YouTuber named Oliver Flesch posted videos pushing a racist narrative about the incident that got hundreds of thousands of views due to recommendations. Other extremist channels copied his narrative.

  • Serrato found most users would encounter radical content after only 1-2 recommended videos. This helped spread misinformation and inflame tensions in Chemnitz. The YouTube algorithm seemed designed to keep users engaged with extremist content.

In short, the YouTube recommendation system quickly and systematically directed users exploring the Chemnitz events toward conspiratorial and far-right propaganda, spreading misinformation that fueled the violent neo-Nazi demonstrations.

  • Google frequently places YouTube videos near the top of its search results, boosting their views and revenue. Because Google dominates search, this shapes how people find information on the internet.

  • In Chemnitz, Germany in 2018, YouTube videos spreading falsehoods about a stabbing increased interest in the town. Some videos urged followers to show support, and crowds gathered that soon rioted against foreigners. Social media, especially YouTube, was credited with radicalizing these people.

  • A researcher named Jonas Kaiser noticed YouTube had formed online communities around issues like Gamergate and climate change skepticism. By recommending videos, it bound these disparate groups together into a shared identity.

  • Kaiser and his partner tracked YouTube recommendations and found it clustered many political channels together, from mainstream to extremist. People who commented crossed between these groups over time. YouTube had created a new, unified far-right community.

  • They were concerned this showed YouTube was shaping American politics too. Events like the 2017 Unite the Right rally in Charlottesville appeared to mirror the digitally formed community they observed in Germany. Understanding YouTube’s influence had become urgently important.
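
A minimal sketch of the kind of channel mapping described above: treat channels as nodes, add an edge when one channel’s videos frequently recommend another’s, and see which channels end up in the same cluster. The channel names and links are invented, and this is a simplification of the network analysis Kaiser’s team actually ran.

```python
from collections import defaultdict, deque

# Invented recommendation links between channels: an edge means videos on one
# channel frequently recommend videos on the other.
edges = [
    ("mainstream_news", "center_right_pundit"),
    ("center_right_pundit", "anti_feminist_vlogger"),
    ("anti_feminist_vlogger", "white_nationalist_channel"),
    ("cooking_channel", "fitness_channel"),
]

graph = defaultdict(set)
for a, b in edges:
    graph[a].add(b)
    graph[b].add(a)

def clusters(graph):
    """Group channels into connected components via breadth-first search."""
    seen, groups = set(), []
    for node in graph:
        if node in seen:
            continue
        group, queue = [], deque([node])
        seen.add(node)
        while queue:
            current = queue.popleft()
            group.append(current)
            for neighbor in graph[current] - seen:
                seen.add(neighbor)
                queue.append(neighbor)
        groups.append(sorted(group))
    return groups

for group in clusters(graph):
    print(group)
# Mainstream and extremist channels land in the same cluster because the
# recommendation links chain them together, while unrelated topics stay apart.
```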

  • In 2017, various fringe white supremacist and far-right groups organized the “Unite the Right” rally in Charlottesville, Virginia. They gathered in unprecedented numbers, bringing together groups that had little prior association.

  • The rally descended into violence, with hundreds marching with Nazi and Confederate flags and attacking counter-protestors. One rally participant deliberately rammed his car into a crowd, killing Heather Heyer.

  • Researchers found that social media, especially Facebook and YouTube, played a major role in building networks between these disparate groups and coordinating the rally. Facebook’s algorithm in particular merged different communities together.

  • Jonas Kaiser and his team began mapping the online far-right networks on YouTube using network analysis. They found YouTube’s recommendations were helping route users from more mainstream conservative voices to extreme and white nationalist channels over time.

  • User stories indicated many were introduced to far-right ideologies through YouTube recommendations starting from innocuous topics like depression or self-help. Figures like Jordan Peterson, Stefan Molyneux, and Millennial Woes served as gateways to greater extremes.

  • Kaiser’s research was beginning to show YouTube’s algorithm was significantly aiding in the growth and radicalization of the online far-right movement. This upped the urgency for addressing these issues on social media platforms.

  • A Princeton study found that YouTube’s recommendation algorithm often connects users to more extreme right-wing channels, even if they don’t directly search for them.

  • Psychologists found that social media platforms unintentionally mirror an extremist recruiting strategy: capitalizing on feelings of crisis and instability, offering in-group belonging, and supplying an ideological solution that targets an out-group. This can escalate into radicalization.

  • Incel forums on sites like 4chan and YouTube began mildly but employed these radicalization techniques, leading some members to carry out violent attacks inspired by feelings of persecution by women and feminists.

  • YouTube’s algorithm nudges users towards increasingly extremist content by recommending similar videos, in a pattern called the “rabbit hole.” This can become a source of community and identity for lonely users.

  • A study found that YouTube stitches otherwise unrelated channels together when doing so keeps users watching, potentially radicalizing users through its recommendations without their intent or knowledge.

YouTube’s algorithm pushed users toward more extreme content, especially on the political right. Research showed YouTube would recommend viewers onward from figures like Alex Jones toward white nationalist and conspiracy theory content, growing the audience and influence of extreme voices on the platform. While some tried pressuring YouTube to change this, the company resisted, citing its commitment to free speech and to not interfering with recommendations. A similar dynamic played out on other platforms like Facebook, which boosted figures like Alex Jones through their algorithms and engagement-focused designs even as these voices spread harmful misinformation. The platforms eventually banned some figures, including Jones, under pressure, but never truly accepted that they had played a role in the problem.

  • In 2017, an anonymous 4chan user calling themselves “Q Clearance Patriot” began posting cryptic messages implying they had insider knowledge of a military operation against Democratic leaders and elites involved in child trafficking conspiracies like Pizzagate.

  • These posts, known as “Q drops”, laid the foundation for the QAnon conspiracy theory movement. QAnon followers believe Trump is working to arrest thousands involved in a deep state cabal controlling America.

  • The posts offered a sense of purpose, belonging and control for followers trying to make sense of chaotic events. Communities formed online to analyze the clues in Q’s messages.

  • QAnon spread from fringe to mainstream sites as Facebook, YouTube and Twitter recommended related conspiracy content. It absorbed other fringe movements and grew into a major presence on the online right.

  • By 2018, QAnon groups on Facebook had millions of members and QAnon videos received millions of views, but platforms were slow to act. Some followers committed acts of violence in response to the conspiracy theories.

  • While the true identity of “Q” is unclear, the movement took on a life of its own, driven by online communities and platform algorithms that boosted extremist and conspiracy content.

  • The passage describes how some users would go to extreme websites like 8chan to deliberately desensitize themselves to disturbing and shocking content as a way to prove they belonged and had an “open” mindset.

  • The culture on sites like 8chan encouraged pushing boundaries to more extreme taboos, including celebrating mass violence, as a way for users to bond over upsetting mainstream society.

  • In 2019, the Christchurch mosque shootings were carried out by Brenton Tarrant, who had been radicalized on sites like 8chan. He livestreamed the attack on Facebook and urged 8chan users to watch and spread his message.

  • Tarrant represents a new type of violent extremism emerging from deeply radical online communities focused on shocking content and boundary-pushing as an identity and way to find community among social outcasts.

  • Jacob was a Facebook content moderator who reviewed posts from Facebook, Instagram and WhatsApp to determine if they violated guidelines. The guidelines had grown very long and complex.

  • Jacob and his coworkers, who had mostly worked in call centers before, were expected to understand and apply these extensive rules to make hundreds of moderation decisions daily.

  • Jacob’s office was one of many content moderation teams located around the world, with little communication or coordination between them beyond directives from Facebook headquarters.

  • As unseen arbiters, the moderators shaped politics and social relations globally by determining what content was allowed or removed on Facebook’s large platforms.

  • Jacob grew tired of feeling complicit in the dangerous misinformation and rising tensions he saw, and was frustrated that his bosses ignored his concerns. He contacted the author to expose what he saw as corporate negligence and the far-reaching consequences of Facebook’s moderation work.

  • Jacob, a former Facebook content moderator, provided documents detailing Facebook’s rules and guidelines for moderating content globally. The files showed Facebook’s efforts to comprehensively regulate political and social debates through extensive and precise content policies.

  • However, the rules were scattered across many documents and presentations, sometimes contradicting each other. They aimed to reduce complex moderation decisions to simplistic yes/no options that moderators had to make very quickly, without much thought.

  • The rules imposed restrictions in some countries like India that went beyond local laws, seemingly prioritizing avoiding criticism of Facebook over protecting free speech. Mistakes and outdated information were also common.

  • Facebook relied on profit-motivated outsourcing firms to employ moderators, contrary to Facebook’s claims. Moderators faced intense quotas and pressure, and sometimes lacked mental health support. The disconnect between Facebook and these firms undermined the effectiveness and principles of content moderation.

  • In total, the documents revealed Facebook had inserted itself deeply into global governance but did so tentatively and inconsistently due to its prioritization of growth, profit and avoiding controversy over carefully regulating discourse on its platforms.

  • Facebook relies heavily on outsourcing companies to moderate content around the world in order to scale quickly and keep costs low. However, this results in poor oversight and incentives for cost-cutting.

  • Moderators are pressured to make rapid decisions and often approve posts just to meet quotas and accuracy targets, even when they cannot understand the language or the post was flagged as hate speech. This directly contributed to the spread of hate and incitement in countries like Sri Lanka and Myanmar.

  • The rise of young tech founders like Zuckerberg and influence of libertarian angel investors helped shape overly aggressive and irresponsible cultures at social media companies. Founders were given too much power without proper oversight.

  • Startup dynamics shifted so founders no longer needed investors as much, giving them control. Investors competed to fund young people without experience rather than experienced managers. This empowered inexperienced leaders.

  • Perks and compensation like stock options fueled hubris, a sense of being “masters of the universe.” Combined with goals of disruption, this led platforms to take on governance with inadequate consideration of societal harms.

  • Renee DiResta testified before a Senate hearing about Russia’s digital exploitation and manipulation of social media. She focused on how algorithms promote extreme, polarizing content which undermines democracy.

  • She had initially been reluctant to testify, worried it would be partisan, but found the committee sincere in their fact-finding. She wanted to emphasize it was a system problem, not just Russia.

  • As her team analyzed more data, she became convinced other groups like anti-vaxxers could exploit the system in the same way as Russia through algorithmic amplification of propaganda. The line was blurring between strategic actors and organic spread.

  • In 2018, Facebook tweaked its News Feed algorithm to weight more engaging signals, such as emoji reactions, more heavily than ordinary likes. According to internal reports, this pushed invective, rumors, and extreme politics ahead of news, worrying users and politicians alike (a toy illustration of this re-weighting appears after this list of findings).

  • A study found that people who deactivated Facebook for a month became less polarized and consumed less extreme news, evidence of the platform’s negative effects. Governments began considering regulation in response to the mounting evidence of harms.
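
As a rough, hypothetical illustration of the 2018 re-weighting described above (the posts, counts, and weights below are invented and are not Facebook’s actual formula), boosting the weight of emoji reactions relative to plain likes can flip which post ranks first, favoring whatever provokes the strongest response.

```python
# Toy ranking sketch: how weighting emotional reactions more heavily than
# plain likes can change which post tops a feed. All posts, counts, and
# weights are invented for illustration; this is not Facebook's real system.

from dataclasses import dataclass

@dataclass
class Post:
    title: str
    likes: int       # plain thumbs-up
    reactions: int   # angry/love/wow emoji, counted separately from likes
    comments: int

def engagement_score(post: Post, reaction_weight: float) -> float:
    """Simple linear engagement score; only the reaction weight varies."""
    return post.likes + reaction_weight * post.reactions + post.comments

feed = [
    Post("Local news report",        likes=900, reactions=50,  comments=40),
    Post("Outraged political rumor", likes=300, reactions=400, comments=250),
]

for weight in (1.0, 5.0):  # treat reactions like likes vs. boost them heavily
    top = max(feed, key=lambda p: engagement_score(p, weight))
    print(f"reaction_weight={weight}: top post -> {top.title}")
    # weight 1.0 -> "Local news report"        (score 990 vs. 950)
    # weight 5.0 -> "Outraged political rumor" (score 2550 vs. 1190)
```

Nothing about the rumor itself changes between the two runs; only the scoring weight does, which is the dynamic the internal reports describe.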

  • In 2016, a news story accused Facebook of routinely suppressing conservative news in its trending topics widget. A former contractor said editors were told not to include certain conservative stories without confirmation from mainstream outlets.

  • While Facebook was arguably just applying the sort of human judgment and oversight of algorithms that experts recommended, conservatives saw it as evidence of anti-conservative bias. The Republican party harshly criticized Facebook’s influence over elections.

  • Facebook fired the human contractors overseeing trending topics and let algorithms fully control it. Fake news soon became one of the top stories.

  • The controversy politicized the issue of social media bias and transformed politics around the platforms. It also spooked Facebook: the company met with over 20 prominent conservatives to address their concerns. This established conservatives’ ability to pressure platforms over perceived bias.

  • The story showed how even small content decisions by platforms could be framed as bias and used by political actors. It demonstrated platforms’ growing power in elections and politics.

  • Rupert Murdoch and News Corp CEO Robert Thomson (the Murdoch media empire also includes Fox News) complained to Mark Zuckerberg that Facebook was hurting their business by siphoning readers and advertising revenue through its content and unannounced algorithm changes. They threatened to lobby governments over potential antitrust issues.

  • Fox News then grew increasingly critical of Facebook, claiming anti-conservative bias. Republicans began accusing social media of suppressing conservative voices.

  • Facing potential regulation and antitrust actions, tech CEOs like Zuckerberg adopted a “wartime CEO” approach encouraged by VC firm Andreessen Horowitz, entailing less dissent and tougher tactics against critics.

  • Zuckerberg declared Facebook was “at war” in 2018 and would brook less dissent internally. The company hired opposition research firms to disparage critics and considered altering its algorithm to favor serious news, but a GOP-linked lobbyist opposed this.

  • That lobbyist, Joel Kaplan, carried the title of vice president for global public policy at Facebook. He argued for changing Facebook’s policies to preempt potential accusations from Republicans that Facebook promoted liberals. This effectively turned Trump’s view of mainstream journalists as Democratic agents into Facebook company policy.

  • In 2018, Kaplan successfully pushed to shelve an internal Facebook report that found the platform’s algorithms promoted divisive, polarizing content. He and others were concerned directly addressing this issue would disproportionately affect conservative pages.

  • Facebook extensively courted Republicans throughout 2018-2019 to avoid regulatory scrutiny. This included hiring a former Republican senator to produce a biased report, hosting dinners with conservatives like Tucker Carlson, and partnering with right-wing fact-checkers.

  • Facebook also gave special treatment to politicians’ posts by allowing lies and leniency on hate speech rules, catering to Trump. A data scientist flagged foreign leaders misusing the platform but was overruled.

  • In 2019, Vietnam pressured Facebook to censor critics or risk being blocked, and Zuckerberg secretly agreed, effectively making Facebook a censorship tool for the government. Employees also revealed that Facebook prioritized profits over integrity.

  • That year, Facebook announced it would not screen political ads for truth, benefiting misinformation spreaders like Trump. Over 250 employees publicly protested this policy change.

  • Tatiana Lionço, a Brazilian psychologist, had her life destroyed after a far-right lawmaker edited video of her speaking to make it appear she encouraged homosexuality and sex between children. He posted this on YouTube and it spread widely.

  • She faced death threats and lost friends and colleagues as the false narrative spread on social media and YouTube. She says it still affects her life seven years later.

  • That same far-right lawmaker, Jair Bolsonaro, went on to run for and decisively win the Brazilian presidency in 2018. This was a significant event and led to concerning changes in Brazil around the environment, democratic institutions, and more.

  • Analysts cited the role of American social media platforms like YouTube in Bolsonaro’s rise. The Brazilian far-right barely existed previously but grew rapidly online, where Bolsonaro’s extremist and controversial behavior performed well. So social media, especially YouTube, were seen as enabling his rise to power.

  • Before the 2018 Brazilian election, YouTube’s recommendation algorithm had helped far-right politicians and conspiracy theorists gain significant traction in Brazil. Figures like Jair Bolsonaro benefited from this exposure.

  • Researchers found that after YouTube’s algorithm update in 2016, right-leaning Brazilian YouTube channels saw much faster audience growth and dominated political videos. Mentions of Bolsonaro and conspiracies he endorsed increased dramatically on the platform.

  • The algorithm seemed to be changing users’ underlying political views, not just reflecting existing opinions. Video views and then comments drifted further right over time, indicating YouTube was influencing users towards Bolsonaro’s positions.

  • Two examples highlighted how YouTube influenced real-world Brazilian politics. Matheus Dominguez became involved with Bolsonaro’s party after YouTube recommendations radicalized his views. Carlos Jordy, a city councilman, drew inspiration from far-right YouTube channels to urge students to secretly record teachers for supposed indoctrination.

  • Researchers concluded YouTube played a major role in boosting Brazil’s conservative movement and helping elect Bolsonaro, not just reflecting political trends but potentially shaping users’ core beliefs through algorithmic recommendations.

  • Jordy, a councilman, posted misleadingly edited videos of teachers’ classes on YouTube. One viral video made it seem like a teacher said conservative students were like Nazis.

  • In reality, the student had asked if classmates harassing her for being gay were like Bolsonaro supporters known to be homophobic. The teacher said no.

  • The distorted video spread widely and received millions of views, leading to threats against the teacher. Other teachers also faced backlash and restrictions in their classrooms.

  • Jordy admitted editing the video to “shock” and “expose” the teacher and make her feel fear, without investigating the actual situation. His actions impacted the school community.

  • In Brazil, falsely accusatory videos of teachers were routinely spreading online and affecting politics. Social media personalities who pushed such claims were getting elected.

  • Analysis showed YouTube’s algorithm consistently directed Brazilian users to far-right and conspiracy channels, radicalizing views over time. This influenced Bolsonaro’s rise and policies favoring social media personalities.

  • Groups like MBL effectively used YouTube to promote members who later won elected office, understanding the platform’s influence on “YouTube voters” in their base.

  • A video called “1964” went viral in Brazil, claiming the military dictatorship of that era was not as abusive as left-wing historians claimed and implied another dictatorship may be needed. This persuaded some young Brazilians like 18-year-old activist Dominguez that the regime was not so bad.

  • At MBL headquarters, activists acknowledged the influence of social media and YouTube. They had watched channels grow more extreme over time just to get views and engagement. Even they worried about this “dictatorship of the like” and the harms it could cause.

  • In Maceio, Brazil, Dr. Mardjane Nunes met with mothers of children with microcephaly caused by the Zika virus. However, many of the mothers had doubts about the official causes because of misinformation they saw online, including claims that Zika was caused by expired vaccines or was a government plot.

  • Doctors in Maceio said “fake news is a virtual war” and they frequently encountered patients who disregarded medical advice because of things they researched online, especially YouTube. The platforms had more influence than the doctors in some cases. This put children’s lives at risk by discouraging vaccines or appropriate medical treatment.

  • YouTube’s algorithms were directing users in Brazil who searched for videos about Zika virus and vaccines down conspiracy theory “rabbit holes.” This was amplifying medical misinformation and undermining public health efforts.

  • Videos spreading conspiracies and misinformation about Zika and vaccines on YouTube were also being shared extensively on WhatsApp in Brazil, helping the misinformation spread more widely.

  • A far-right YouTube creator named Bernardo Küster was targeting human rights lawyer Debora Diniz with conspiracy theories claiming she was part of plots to impose abortions. This led to Diniz receiving graphic death threats that referenced details from Küster’s videos.

  • The threats against Diniz grew so numerous and severe that she ultimately fled Brazil and went into exile, as the police could no longer guarantee her safety. Diniz blamed YouTube’s recommendation system for cultivating an “ecosystem of hate” that led to the threats against her.

  • Online misinformation and threats were undermining public health efforts and driving other activists and public figures in Brazil into exile as well, as they could no longer safely remain in the country due to threat levels.

  • Researchers discovered YouTube was promoting and recommending videos of partially clothed children to viewers of sexually explicit content. Some videos had view counts in the millions, showing this was not a personalization issue.

  • The recommendations formed a disturbing progression, starting with adult content and ending with videos of very young children. This suggested the algorithm could identify sexually appealing content in videos featuring children.

  • Some videos were home videos uploaded without consent, allowing children to potentially be identified and contacted by viewers. View counts were increasing rapidly.

  • Experts worried this vast catalog and targeted recommendations could make contact or grooming of children easier. It also risked cultivating new audiences of people not originally seeking such content by gradually exposing them.

  • Psychologists fear this process could potentially mimic how some develop pedophilic attractions, beginning with mild content and progressing to more extreme material over time through recommendations. Urgent action was needed to protect children.

  • Some experts argue that adult pornography consumption can lead some individuals down a path of seeking increasingly extreme and taboo content, almost like an addiction. They become desensitized and need more thrilling material.

  • On YouTube, researchers found that the recommendation algorithm seemed to identify people with deviant sexual interests and guide them gradually toward more extreme material, including sexualized videos of children.

  • Experts interviewed agreed this was possible and a concern, though research is limited due to ethical issues. One said he had seen patients develop pedophilic urges through a similar online progression.

  • YouTube disputed the findings but cited an expert who then contradicted the company, saying research does support a “gateway effect” in which following algorithmic suggestions leads users to progressively more deviant material.

  • After being notified, YouTube removed some problematic videos and changed recommendations, but claimed the timing was a coincidence. They also cut the “related channels” feature used in the research.

  • Senators called on YouTube to simply disable recommendations on all children’s videos to prevent any sexualization or guiding users toward extreme content. YouTube initially agreed but then walked back the commitment.

  • In early 2020, the WHO worked with social media platforms like Facebook, Google and YouTube to address the emerging “infodemic” of COVID misinformation spreading online. However, progress was slow.

  • As the pandemic took hold globally, conspiracy theories about the virus origins and dangers proliferated on social platforms. Videos spreading medical misinformation received millions of views.

  • Online extremism and partisan outrage also grew in parallel to COVID conspiracies. By late 2020/early 2021, these intertwined forces culminated in the January 6th Capitol riots, organized largely via social media.

  • Internal documents showed platforms like Facebook realized their algorithms boosted misinformation but refused to make changes for fear of hurting engagement metrics. They prioritized traffic over curbing the spread of dangerous conspiracies and medical falsehoods during the pandemic.

So in summary, while platforms tried to address the crisis of COVID misinformation, their own algorithms and business priorities ultimately amplified conspiracies and undermined efforts to promote scientific facts during a crucial public health emergency.

  • The viral conspiracy video “Plandemic” spread widely on social media in May 2020, alleging COVID-19 was intentionally created and vaccines were dangerous. It affirmed the beliefs of anti-vax, conspiracy, and QAnon groups and activated opposition to lockdowns.

  • The video spread from niche groups to more mainstream pages and groups as each took it on as a cause. By the time platforms removed it, the claims were entrenched in social media discussions.

  • As COVID conspiracies spread, a parallel movement grew among young white men drawn to alt-right militia groups online promising purpose and community. Steven Carrillo became involved in the “Boogaloo” group through Facebook.

  • Militia groups saw lockdowns as validating their narratives and conspiracies, which brought them new recruits. While some members treated the rhetoric as posturing, others took the calls for violence seriously. Carrillo planned attacks targeting police with local Boogaloo members.

  • Other militia and gun rights Facebook groups opposing lockdowns also grew rapidly due to apparent algorithmic promotion, merging with larger conspiracy communities online.

  • As Covid conspiracies spread on platforms like Facebook and YouTube in 2020, they became intertwined with militias, QAnon beliefs, and pro-Trump communities. This blending of identities and causes was encouraged by the platforms’ algorithms.

  • Militia and QAnon activity on Facebook grew more extreme, with hints of planned attacks. Participation in these movements surged as Covid fears left many seeking remedies and answers online.

  • Researchers found it was easy to go from general wellness/health searches on platforms to QAnon conspiracies, as algorithms directed users from one related group or page to another.

  • By mid-2020, many politicians running for office promoted QAnon beliefs. The blurring of causes also escalated militias’ sense that violence was justified.

  • Two incidents in late May involved members of the Boogaloo movement, whose Facebook pages promoted impending civil conflict. This amplified tensions amid nationwide BLM protests over George Floyd’s killing.

  • Carrillo, associated with Boogaloo groups on Facebook, then fatally shot a federal security officer and wounded another in Oakland, hoping to incite more violence. Pro-Trump pages immediately blamed BLM without evidence.

  • Carrillo engaged in more violent plans online and ambushed police after his attack, killing one officer. His case highlighted the role of social media in facilitating violent extremism during this turbulent period.

  • In early June 2020, hundreds of Facebook employees staged a one-day walkout to protest the company’s inaction on Trump’s post inciting violence against protesters. This marked increased tensions between tech company leadership and their own employees.

  • Around this time, evidence emerged that Facebook lobbyist Joel Kaplan was weakening content moderation policies in ways that benefited Trump’s misinformation efforts related to the upcoming election. Civil rights groups launched an ad boycott campaign against Facebook.

  • In late June, companies like Facebook and YouTube took some actions against white extremist and hate groups, but critics argued this was too little, too late as those ideologies and conspiracies were already widespread.

  • In early July, Zuckerberg and Sandberg met with civil rights groups leading the ad boycotts but were seen as insincere and not understanding the real problems with their platforms. An independent audit also found failures in Facebook’s content policies and promotion of polarization/extremism.

  • By September, as the election neared, tensions were high over threats to democracy. Facebook announced some new content policies around the election but critics saw them as too weak, while Twitter implemented stronger changes to curb misinformation from high-profile accounts like Trump’s.

  • After Trump’s loss in the 2020 election, false claims of voter fraud and a “stolen” election spread widely on social media, becoming known as the “Big Lie.” Facebook found that 10 percent of all views of US political content on its platform were of posts claiming the election was stolen.

  • Figures like Richard Barnett, who had fallen down conspiracy rabbit holes on Facebook, were primed to believe the claims and consider taking action. Militia and QAnon groups discussed “putting a stop” to the government and “taking them down.”

  • Trump amplified the lie by posting misleading claims on Facebook, which were some of the most engaged posts. Rumors validating the lie also went viral via YouTube, Twitter, and other posts.

  • This widespread promotion of dangerous misinformation showed how the algorithms and business models of platforms could overrun normal discourse and spread a false narrative on a massive scale with real world consequences. Extremist communities that had long incubated on the sites were energized.

  • Researchers found that YouTube was incorrectly recommending right-wing conspiracy channels like Newsmax and New Tang Dynasty TV to people after they watched mainstream news. These conspiracy channels saw huge spikes in viewership.

  • YouTube videos pushing false claims of election fraud were viewed over 138 million times in the week after the election, far more than mainstream news coverage.

  • Many social media users, like Richard Barnett, came to believe the election was stolen based on what they saw online. Trump’s tweets further validated these false claims.

  • Online forums like TheDonald and Parler were filled with calls for violence at the January 6th rally to stop Congress from certifying the election results. Some discussed plans to storm the Capitol building.

  • At the rally, Trump further encouraged supporters to march to the Capitol. Thousands breached barriers and overwhelmed the small police presence. Two pipe bombs were found.

  • Supporters livestreamed themselves entering the Capitol on social media as they called for violence and humiliation of lawmakers. Richard Barnett gained fame from a photo of himself with Pelosi’s mail in her office.

  • The insurrection showed the real-world dangers that can emerge from unchecked conspiracy theories and calls for violence spreading widely on social media platforms.

  • Some Proud Boys and Oath Keepers militia members who stormed the Capitol on January 6th were communicating on Zello and Facebook as they moved through the building. They discussed sealing legislators in tunnels and turning on gas.

  • Five people died in connection with the insurrection, from violence or medical emergencies. They included Capitol Police officer Brian Sicknick, who was pepper-sprayed during the riot, collapsed, and died the following day.

  • Many involved had immersed themselves in conspiracies and extremist online communities like QAnon and Proud Boys. One woman, Ashli Babbitt, was shot trying to breach the Speaker’s Lobby and died.

  • Images and videos from the riot spread rapidly online. Facebook employees criticized the company’s response, wanting Trump’s account shut down immediately.

  • After the riots, Facebook, Twitter and YouTube banned Trump from their platforms. Democrats argued the tech companies bore responsibility for enabling the spread of dangerous conspiracies and radicalization. Congressmembers’ letters demanded policy changes around user engagement optimization.

  • After the January 6th Capitol insurrection, there was a sense that social media platforms needed fundamental reform due to their role in spreading misinformation and radicalizing users. However, the window for change quickly closed.

  • While platforms like Facebook announced some reforms like limiting political groups, these were half-hearted and often reversed. Platforms reverted back to prioritizing engagement through algorithms and advertising over protecting democracy.

  • Leaders like Sheryl Sandberg and Adam Mosseri downplayed the platforms’ responsibility and argued calls for reform were overblown. They insisted issues largely stemmed from users, not platform design.

  • Enforcement against election misinformation dropped in 2021 as conspiracies spread unchecked. Far-right movements and QAnon influencers gained more mainstream traction, affecting government at state and local levels.

  • Regulatory investigations by Congress and DOJ continued into 2022, but platforms faced little meaningful accountability as harms mounted. Reform hopes after January 6th largely faded.

  • The FTC filed an antitrust suit against Facebook in December 2020, suggesting it may seek to break up the company. Similar suits were also filed by state attorneys general. These cases proceeded throughout 2021 and 2022.

  • In February 2021, Australia passed rules requiring Facebook and Google to pay news outlets for linking to their content. Google agreed to deals, but Facebook blocked all news content in Australia in protest. This blackout affected access to important information sources. Australia eventually backed down.

  • European governments continued regulating Big Tech through fines and proposed new privacy laws. Officials acknowledged they had limited power to force structural changes, but hoped to develop regulatory models.

  • Within the US, pressure was more uneven. Obama warned of social media amplifying humanity’s worst impulses. But the Biden administration focused on other crises like the pandemic and Ukraine war.

  • In May 2021, Facebook employee Frances Haugen secretly copied internal documents showing the company was aware of various social harms but prioritized profits. She provided these documents to journalists and regulators to expose these issues.

  • The documents leaked by whistleblower Frances Haugen provided hard evidence from Facebook’s own research that the company was aware of many harms caused by its platforms but chose not to address them to avoid impacting profits.

  • Haugen’s interview on 60 Minutes and testimony before Congress helped bring public attention to these issues and sparked outrage against Facebook. She argued Facebook prioritizes growth and engagement over user safety.

  • Haugen met with lawmakers, regulators, and officials around the world to help shape new social media regulations and reforms. She consistently highlighted how platform dynamics amplify harm, using examples from developing countries.

  • Many experts who have studied social media closely now agree that simply tweaking algorithms is not enough - the core incentive structures and design of the platforms need to change. Many argue platforms should remove algorithms and other engagement-focused features that drive amplification of harmful or extremist content.

  • Shutting off these mechanics could mean a less engaging internet but also less harm, according to evidence. However, successfully reforming the companies may be difficult given their size, incentives, and dependence on endless growth. Strong regulation may be needed to compel meaningful changes.

  • The reporting that grew into sections of this book was edited by Eric Nagourney, Juliana Barbassa, and Doug Schorzman at the New York Times, who put “blood and sweat” into the stories.

  • Amanda Taub co-authored stories from the Interpreter column she started in 2016 with the author.

  • The author also reported alongside and co-authored with several other journalists from various countries.

  • Academics, researchers, doctors, engineers, and sources also freely contributed their work, energy, and ideas, and are interviewed and referenced throughout the text. This book is in many ways their book as well.

  • The author thanks his parents and partner for their support which made writing the book possible.

So in summary, the book was informed and supported by extensive reporting and collaboration with other journalists over time, as well as contributions from academics and experts in related fields. The author acknowledges the integral role many others played in making the book possible.

Here is a summary of the key points from the article:

  • The study examined how easily ingroup/outgroup discrimination can be extinguished. It looked at how categorizing people into minimal groups (based on arbitrary/trivial criteria) can lead to discrimination in terms of reward allocation.

  • In the first experiment, participants categorized other students into two groups based on preference for painters Klee or Kandinsky. They then allocated rewards to anonymous group members. Ingroup favoritism emerged even though the groups were meaningless.

  • In a second experiment, they added a second categorization task where participants were re-categorized into new groups. This extinguished the initial discrimination effect, showing that even minimal intergroup biases can be overcome by recategorization into new groups.

  • The results provided evidence that people will readily form intergroup biases based on arbitrary categorization, but that these biases may not be deeply ingrained and can be eliminated by altering the intergroup context, such as recategorizing people into new groups. This has implications for understanding and potentially reducing biases and discrimination.

Here is a summary of the key points from the provided sources about the Violentacrez controversy on Reddit:

  • Violentacrez was a prolific longtime Reddit user and moderator who was revealed to be a software engineer named Michael Brutsch. He moderated some of the most explicitly sexual subreddits on the site.

  • In 2012, Gawker published an article outing Brutsch’s real identity and connecting it to his moderator personas like Violentacrez. This caused a controversy about Reddit allowing explicit content and whether users should be anonymous.

  • Reddit co-founder Alexis Ohanian initially defended users’ right to post anonymously but later said they understood the harm in some subreddits. Some explicit content was removed.

  • The incident sparked debates about internet anonymity, free speech, moderator responsibility, and the spread of explicit content online. It was an early example of a doxxing controversy where a controversial anonymous user was identified.

  • Reddit updated some of its policies following the incident, though debates continued around moderating content, doxxing, and whether platforms should emphasize real identities or anonymity. It was a key early controversy that explored many issues around online communities and content moderation.

Here is a summary of the key points from the papers:

  • The first paper by Jillian Jordan and Nour Kteily experimented with situations where people were asked to evaluate and potentially punish others in anonymous one-shot interactions without the possibility of reputational concerns. They found that people were still inclined to engage in moralistic punishment, even when it did not seem entirely merited based on the transgression. This suggests a “reputation heuristic” where people internalize a sense of how they should respond even without actual reputational effects.

  • The second paper by Jillian Jordan and D.G. Rand analyzed over 300 political rumors spread on Twitter during the 2012 US presidential election. They found that rumors confirming pre-existing attitudes were more likely to be spread within ideological clusters. However, corrections of rumors were more likely to spread between clusters and to less ideologically extreme users. This suggests people are wary of corrections that contradict their worldviews but open to them from outside their cluster.

In summary, the papers suggest reputation and perceived norms have a strong influence on human moral judgment and punishment behaviors, even in anonymous situations. They also show how political rumors tend to be amplified within ideological clusters but may spread between clusters when presented as corrections.

Here is a summary of the key points from the referenced text:

  • Facebook’s prioritization of engagement and viral content has skewed what users see towards more extreme, provocative, and emotive content. This leads to the spread of misinformation and propaganda.

  • A 2016 study found that YouTube’s algorithm was promoting and recommending divisive content related to the US presidential election.

  • In 2016, a man raided a Washington DC pizza shop with an assault rifle based on the false “Pizzagate” conspiracy theory that spread online.

  • A 2017 study analyzed over 500,000 tweets about political issues and found that negative and emotionally charged tweets about opposing groups spread more rapidly.

  • Subsequent studies confirmed that content sparking out-group animosity engages users more and spreads further on social media.

  • Internal Twitter research from 2021 found their algorithm amplified conservative political content more than other perspectives.

  • Despite its smaller size compared to Facebook and YouTube, Twitter played an outsized role in spreading misinformation due to its algorithmic promotion of emotion and outrage.

  • The references discuss the role of algorithms in amplifying certain types of content, and the psychological factors that cause users to engage more with negative or emotive content about opposing groups. This contributes to increased political polarization and the spread of conspiracy theories.

The chapter discusses how social media and online platforms have enabled the spread of misinformation and extreme ideologies. It describes research on YouTube recommendations driving users to more extreme content, including far-right videos in Germany. Comments on these videos formed insular communities reinforcing their views. Figures like Richard Spencer used online platforms to organize real-world extremist events like the Unite the Right rally in Charlottesville. Websites and forums like 4chan and subreddits acted as on-ramps to expose vulnerable individuals to radicalization. Figures like Gavin McInnes and Jordan Peterson presented far-right and misogynistic ideas in a way that attracted disenfranchised young men. The online ecosystem allows isolated communities to form and spread hateful ideologies more broadly.

Here is a summary of the key points from the passage:

  • It describes the immense scale of Facebook’s content moderation operation, employing thousands of contract workers around the world to review posts. These moderators are under intense pressure and often experience psychological trauma from viewing disturbing content.

  • Facebook executives have long struggled to develop clear guidelines for moderators on how to handle controversial issues like hate speech and misinformation. Policies are often applied inconsistently.

  • Research has shown how Facebook’s algorithm and recommendation systems can often promote polarizing and extremist content. This includes directing users to more radical groups and channels through “gateway” recommendations.

  • Conspiracy movements like QAnon spread widely on Facebook in private groups, receiving millions of views and new followers. The platform struggled to contain the proliferation of dangerous and misleading information.

  • Major real-world events like the Christchurch mosque shootings demonstrated how extremist content and ideologies are incubated online, with platforms like YouTube and 8chan playing a role in radicalizing the shooters.

  • In summary, the passage examines the challenges social media companies face in moderating content at scale, and how their platforms have sometimes aided the spread of toxic, extremist or deceitful information.

Here is a summary of the key points from the sources provided:

  • In 2006, Paul Graham warned startups at Y Combinator about the hardest lessons to learn, including the importance of investing in marketing and understanding users/customers.

  • Early Facebook perks like free food were intended to keep employees working long hours and maximizing growth.

  • After spending time at Facebook, Roger McNamee became concerned about the company’s impact and lack of responsibility over user data and privacy.

  • Peter Thiel no longer believes in the libertarian ideals of open borders and free flow of ideas he once held, seeing corporations as better governors than governments.

  • Critics argue powerful Silicon Valley founders aim to establish corporation-run floating cities and undermine governments.

  • In 2018 testimony, Mark Zuckerberg acknowledged Facebook’s responsibility for influence operations that use its platforms, but problems have persisted.

  • Facebook initially tried making the platform “healthier” but efforts backfired, angering users and prioritizing engagement over well-being.

  • Whistleblowers revealed Facebook misled the public over progress on problems like hate speech and misinformation to preserve growth.

  • Studies show social media can negatively impact user welfare and democracy when not properly governed.

  • Governments globally have called for greater regulation of internet platforms like Facebook in recent years.

  • Internal strife has grown within Facebook as employees criticized the company’s impacts and strategies.

Here is a summary of the key points from the remaining sources:

  • A Brazilian lawmaker edited video footage to spread misinformation. Millions of acres of the Amazon rainforest burned under Bolsonaro’s presidency.

  • Bolsonaro’s extreme views spread on YouTube and social media while he was still on the political fringe, helping him gain mainstream popularity. Conspiracy theories targeting public health officials and policies proliferated.

  • Researchers found YouTube recommendations often led users to conspiracies and extreme content over time. Bolsonaro encouraged citizens to report “indoctrination” in schools.

  • The Covid-19 pandemic fueled an “infodemic” of misinformation on social platforms. The WHO worked with platforms to curb mis/disinformation but faced an uphill battle as false claims spread rapidly. Facebook posts falsely claimed drinking bleach cured Covid.

  • Conspiracy videos and claims linked 5G networks or Bill Gates to the virus. Anti-vaccination movements gained new traction. Internal documents suggest Facebook was aware of the scale of misinformation but struggled to control it.

#book-summary