Self Help

The Science of Fear - Daniel Gardner


Matheus Puppe

· 71 min read

BOOK LINK:

CLICK HERE

Here is a summary of the Table of Contents:

The book contains the standard front and back matter sections like the title page, copyright page, dedication, epigraph, etc.

The main content is divided into 12 chapters that cover topics related to fear such as human evolution and the brain’s fear response, the role of numbers and perception in shaping fear, how the media and politics use fear, terrorism, and analyzing whether current fears are proportionate given safety trends.

The back matter includes notes, bibliography, acknowledgements, and index sections, as well as information about the author.

Some of the key chapter topics are:

  • Chapter 1 discusses humans as prehistoric refugees and the role of fear in evolution
  • Chapter 2 looks at how the human mind has two systems, one emotional and one rational, and how they interact with fear
  • Chapter 3 examines the death of the idea of homo economicus, or rational economic man
  • Chapter 4 analyzes the emotional brain and its fear response
  • Chapter 5 talks about how numbers and statistics shape fear
  • Chapter 6 covers how the herd senses danger
  • Chapter 7 addresses how the media and politicians use fear for influence
  • Chapter 8 analyzes the role of media in shaping public fears
  • Chapter 9 discusses crime rates and fear of crime
  • Chapter 10 covers the neurochemistry and biology of fear
  • Chapter 11 analyzes modern fears related to terrorism
  • Chapter 12 argues that despite fears, this may be the safest time in history

So in summary, the book examines fear from evolutionary, psychological, social, political and statistical perspectives to analyze how and why modern societies develop and experience fears.

  • Franklin Roosevelt was inaugurated as President in 1933 at the bottom of the Great Depression. Banks were failing, unemployment was over 25%, and the country was gripped by fear and uncertainty.

  • In his inaugural address, FDR said the country’s main fear should not be the current problems, but “fear itself.” Unreasonable fear would only make the situation worse by eroding faith in democracy.

  • The passage discusses how fear is a natural feeling but unreasonable fear can be destructive. It was unreasonable fear during the Depression that could have undermined the US.

  • Modern societies experience more worry and fear than past generations. Sociologists point to heightened concerns about risks from technology and the environment. However, the passage argues life expectancy, child mortality, and standards of living have greatly improved over the last century despite doomsday warnings. While risks exist, history shows humanity has faced risks before without impending catastrophe. Overall living conditions continue to get better for most people.

  • While living standards have improved greatly in Western countries, conditions have also improved significantly in the developing world over recent decades. Malnutrition has fallen and UN development index scores have risen, indicating better income, health and education levels globally.

  • However, people’s perceptions of risk often do not align with actual risk levels. Concerns about issues like terrorism, cell phones, and GMOs fluctuate unpredictably and out of proportion to the real risks. Many people underestimate or are unaware of major risks like cancer, traffic accidents, and climate change.

  • Some risky activities like auto racing are celebrated as entertainment despite frequent injuries and deaths. Meanwhile, less dangerous activities like marijuana use are severely restricted. Sports with physical dangers like football remain popular while performance enhancing drugs are banned.

  • Excessive fear of stranger abduction and other rare criminal acts has led to overregulation of children’s activities and restrictions on independent play, even though the health risks of inactivity are greater. Overall, while living standards improve, modern societies exhibit paradoxical and confusing attitudes toward risk.

  • Risk perception is influenced by psychological, social, and economic factors, not just objective analysis. People are prone to biases and irrational fears.

  • Private companies, politicians, bureaucrats, activists, and media have incentives to exaggerate certain risks for profit or influence, while downplaying other serious risks.

  • Cultural values influence which risks people take seriously and which they dismiss. Confirmation bias reinforces preexisting beliefs.

  • The human brain has two systems - System 1 is fast, instinctive feeling/intuition while System 2 is slower, rational reasoning.

  • System 1 uses heuristics and mental shortcuts to assess risk quickly, but this can lead to irrational conclusions if relied on alone. For example, easily recalled examples may make a risk feel more common than it really is.

  • System 1 evolved for a hunter-gatherer environment, but we now live in a complex modern world that it was not designed for, leading to risk perceptions miscalibrated against actual statistics and data.

So in summary, many sociopolitical and psychological factors skew how people perceive and respond to different risks, beyond cold objective analysis. This has real consequences for public policy, health, and safety.

The passage discusses how our unconscious minds are deeply shaped by human evolution over millions of years. While modern humans think of ourselves as advanced, the key factor that made us human - our large, complex brains - developed through natural selection pressures similar to other physical traits.

Around 2 million years ago, human brain size began increasing significantly. By about 500,000 years ago, it had reached close to modern size. All living people share a common ancestor from just 100,000 years ago.

Genetic mutations occasionally provided advantages that natural selection could act upon, such as increased brain size. This slow process of incremental changes resulted in humans with intuitive behaviors for survival - things like nurturing offspring.

However, people are often uncomfortable recognizing how much unconscious thought is driven by evolutionary hardwiring rather than conscious decision making. The passage argues our brains are entirely physical organs governed by natural selection, not containing some non-physical “essence.” While complex, human brains evolved through the same principles as other biological traits.

  • Humans have always been physically weaker than other animals, but our brains gave us an advantage for survival. Throughout the Paleolithic era (Old Stone Age), our brains quadrupled in size which helped us develop new skills and outsmart other species.

  • However, large brains also caused problems like difficult childbirth and increased risk of head injuries. But the cognitive advantages outweighed the drawbacks.

  • Our minds evolved during the Paleolithic to cope with conditions at that time. While the environment varied, hunting/gathering in small bands was a constant that shaped brain development.

  • Evolutionary psychology looks at how our ancient past shapes human thought and behavior today through mechanisms like instinctual fears of snakes that were shaped by ancestral threats.

  • Other examples of “ancient wiring” include assumptions that causes resemble effects (Law of Similarity) and that appearances reflect reality, both of which influence beliefs and decision making in fundamental ways.

  • In summary, though living in modern environments, our brains are fundamentally those of our hunter-gatherer ancestors, making us essentially “cavemen” inside.

  • The passage discusses the dual-process model of thinking, which suggests we have two systems - System 1 (intuitive, quick, emotional thinking) and System 2 (deliberative, slow, logical thinking). It refers to these as Gut and Head.

  • Gut makes instant judgments using heuristics (rules of thumb) which usually work but can lead to biases. Its judgments are often adopted by Head unless there is clear reason to adjust them.

  • Cognitive psychologists have studied heuristics and biases to understand unconscious thinking. Some biases like remembering unusual things are obvious, while others are more surprising.

  • Over time, knowledge and skills acquired consciously can sink into unconscious operations of Gut through repetition. Experts can develop intuitive knowledge and judgments.

  • The interaction between Gut and Head is how most thinking and decision making occurs. Gut proposes intuitive judgments which Head may tweak if needed based on further reasoning.

  • But Head is often lazy and accepts Gut’s intuitions even when they may be flawed due to biases or not accounting for situational factors like hazy air affecting judgments of distance.

So in summary, it discusses the dual systems of intuitive/unconscious vs. deliberative/conscious thinking and how heuristics and biases shape unconscious judgments.

  • Psychologists have shown that people often rely on their initial intuition (“System 1” or “Gut”) when making judgments, rather than carefully thinking through issues (“System 2” or “Head”). This results in many errors and biases.

  • Head provides little oversight of Gut’s judgments. External factors like time of day, stress, and intoxication weaken Head’s monitoring abilities even further.

  • Kahneman describes Head and Gut as “competing for control of overt responses.” We are like cars driven by an impulsive caveman while a lazy teenager only half-heartedly monitors.

  • The claim that 50,000 pedophiles are online at any time seems to rely more on Gut than evidence. It is a convenient round number without a clear source. Experts dispute it but cannot definitively disprove it either.

  • Nonetheless, due to repeated citations, even skeptical experts have trouble entirely dismissing it, illustrating how numbers can influence judgments even without proof. Unreliable statistics commonly spread and influence public discourse.

  • Psychologists Fritz Strack and Thomas Mussweiler conducted an experiment asking people questions about Gandhi that involved numbers like 9 or 140. They found the initial number anchored people’s subsequent estimates.

  • This is an example of the anchoring and adjustment heuristic, where our initial estimate is anchored to the first number encountered. Adjustments from the anchor tend to be insufficient.

  • Other studies replicated this effect using random numbers or prompts before estimation questions. The initial number biases the estimate even if explicitly told to ignore it.

  • This anchoring effect can be exploited in marketing, pricing, estimating public support, influencing judges, and spreading fear. Providing an initial high anchor sets people’s intuitive baseline higher than it otherwise would be.

  • Skepticism about an initial anchor doesn’t prevent it from influencing subsequent thought. Even when people dismiss a scary statistic, their intuition adjusts downward from that anchor and remains meaningfully biased toward it.

  • The anchoring effect is a fundamental cognitive bias with wide applications, both intentional and unintentional, in communication, decision-making and persuasion. It reveals intuitive thinking’s vulnerability to surface-level influences.

  • Psychologists Daniel Kahneman and Amos Tversky collaborated on pioneering research in the 1970s exploring how people make judgments and decisions under uncertainty.

  • Their work challenged the predominant model at the time that people are perfectly rational decision-makers (Homo economicus). They demonstrated that people have systematic cognitive biases and make decisions heuristically rather than purely rationally.

  • In a seminal 1974 paper, they outlined three heuristics or “rules of thumb” people use: anchoring, representativeness (or typicality), and availability.

  • The representativeness heuristic, also called the “Rule of Typical Things,” leads people to make judgments based on how well something fits their mental image or stereotype of what is typical.

  • Their famous “Linda problem” experiment showed that people judge descriptions as more likely based on representativeness rather than pure logic, even experts trained in logic. This highlighted how powerful and misleading this heuristic can be.

  • Kahneman and Tversky’s work challenged economic models of human decision-making and launched the new field of behavioral economics, which incorporates psychological insights into economics. It profoundly influenced many fields and how researchers understand human judgment and choice.

This passage discusses how unconscious cognitive biases can influence our perceptions and judgments in problematic ways. Some key points:

  • The “Rule of Typical Things” causes people to associate stereotypes with certain groups, like seeing black men as criminals. This can lead to unconscious anxiety and prejudice even if consciously rejected.

  • Kahneman and Tversky showed experts were biased by typical narratives when judging probabilities. Scenarios seen as more representative seemed more likely, even if less probable logically.

  • The “availability heuristic” or “Example Rule” means we judge what’s more common based on what examples can be readily recalled. But recall ability isn’t always linked to actual frequency.

  • Earthquake insurance purchases don’t match objective risk levels. Sales are highest right after a quake, when the risk is lowest, and fall as the risk rises, the opposite of what the facts would warrant. This likely stems from the Example Rule: recent quakes are most available in memory.

So in summary, the passage illustrates how unconscious cognitive biases rooted in mental shortcuts like associating stereotypes and judging frequency by recall ability can influence our risk perceptions and social judgments in ways that depart from logic and objective facts. These biases are difficult to overcome even for experts.

  • People judge a thing to be more common if examples of it come easily to mind, regardless of the actual number of examples. This is known as the “Example Rule”.

  • A study found people rated their risk of heart disease higher when they easily generated 3 risk factors compared to struggling to name 8 factors. Ease of recall, not substance, guides intuition.

  • Those without family history of heart disease followed this pattern, while those with family history correctly judged higher risk with more factors named.

  • For hunter-gatherers, the Example Rule made sense as vivid recent memories were likely to reflect current risks.

  • Traumatic memories are vividly encoded due to hormonal responses and remain potent guides for intuition.

  • Other factors like emotion, faces, concreteness also boost memory strength and recall.

  • Sharing experiences through storytelling allowed ancient peoples to learn risks vicariously and make judgments based on others’ examples through imagery.

  • However, intuition cannot distinguish imagined fiction from real experiences, and both can sway judgment equally.

  • Experiments showed that imagining an event occurring raises people’s perceived likelihood of it happening compared to not imagining it. For example, students who imagined Carter or Ford winning an election were more likely to predict that candidate would win.

  • A more sophisticated study at ASU had students imagine or not imagine contracting an imaginary disease on campus. Those who easily imagined having the disease rated its risk as highest. Risk perception was lowest for those who struggled to imagine hard-to-visualize symptoms.

  • Imagining winning the lottery increases people’s sense that it could happen to them, which is why lottery ads encourage imagining winning.

  • However, memory is unreliable and can change or even fabricate memories over time. So relying only on examples or experiences, as intuition does, has limitations. Recent, emotional events are also more likely to be remembered than others.

  • As a result, experience-based intuition is useful but not enough on its own. Broader sources of information are also needed to make well-informed judgments, as the world is changing rapidly through technological and societal transformations. As Benjamin Franklin noted, “experience keeps a dear school, but fools will learn in no other.”

Here is a summary of the key points:

  • The passage discusses how our access to information has changed dramatically in a very short period of time, from days or weeks to see news footage, to near instant distribution globally via the internet and cell phones.

  • Events like the execution video from Iraq and cell phone footage from the London bombings show how personal experiences can now be instantly shared worldwide.

  • While this increased access to information is beneficial in many ways, it also taps into psychological biases like the “example rule” where easily recalled examples make rare events seem more likely.

  • Our brains overestimate threats that get dramatic media coverage compared to larger but less visible risks like diseases. Movies and TV shows can also influence risk perceptions in unintended ways.

  • There is a correlation between the rise of mass media and obsession with risk starting in the 1970s, though more research is needed on the relationship between media and risk perceptions.

  • A conference in the Canary Islands discussed low-probability but high-impact risks like asteroid impacts, a fitting setting given the volcanic and seismic risks inherent to the islands.

  • Residents of the Canary Islands are not too concerned about the risk of a landslide or eruption from the Teide volcano, despite it being a possibility. Past major eruptions were over 100 years ago, so most residents have never witnessed one.

  • Small asteroids constantly bombard Earth, with little risk to humans. But larger asteroids could cause major damage or mass casualties depending on their size. The probability of an impact decreases with size, but the potential consequences increase dramatically.

  • Detecting and tracking asteroids is important to mitigate the risk, but is costly. Spotting smaller asteroids that could still cause regional damage would require more funding. Astronomers requested $30-40 million per year for 10 years to detect 90% of asteroids over 140 meters.

  • Governments generally consider probability, consequences, and mitigation costs together for low-probability risks. But astronomers had difficulty securing full funding for the asteroid detection program, despite the science being clear and risks communicated.

  • An expert in risk perception, Paul Slovic, was brought in to help understand why progress on the asteroid issue was modest despite efforts over 25 years. Experts and the public often perceive risks differently.

Here are the key points about laypeople’s and experts’ perceptions of risk according to the passage:

  • Experts viewed risk as the probability of consequences (number of deaths), while laypeople considered additional factors beyond just consequences.

  • Laypeople’s judgments of how deadly various risks were ranged in accuracy from somewhat wrong to very wrong, yet they were confident in their intuitive judgments.

  • Nuclear power was viewed as one of the lowest risks by experts based on fatalities, but laypeople viewed it as the highest risk despite its low death toll.

  • Laypeople’s risk perceptions were influenced more by factors like dread, catastrophic potential, unfamiliarity, lack of control, etc. rather than just fatalities alone.

  • Familiar risks tend to be perceived as less risky over time due to psychological habituation - the unconscious “switching off” of attention to familiar stimuli.

  • Habituation works well for tolerating everyday risks but can also cause risks to be underestimated if they become too familiar through repeated exposure, like with cigarettes.

  • Both experts and laypeople rely on intuition, but experts also consider statistics on consequences while laypeople factor in additional psychological and social influences.

  • People’s feelings and judgments about risks are influenced at least partly by an unconscious “gut” intuition, not entirely rational thinking. When asked to explain their feelings, people rationalize with their “head” rather than accessing the true gut intuition.

  • Experiments on split-brain patients showed that the left brain will quickly fabricate explanations even when the right brain knows the real reason for an action.

  • Early risk perception research assumed rational analysis, but factors like “dread” suggested emotions also played a role.

  • Risk and benefit ratings tended to be inversely correlated, which doesn’t make logical sense but could reflect an initial unconscious positive or negative gut reaction coloring subsequent judgments.

  • Tests confirmed this effect: it grew stronger under time pressure, and new information about an item’s benefits also shifted its perceived risk, supporting the role of an initial “good-bad” gut feeling that then guides rationalization.

  • This “affect heuristic” or “good-bad rule” helps explain puzzles like overestimating cancer lethality due to its emotional impact, even if other diseases kill more. Unconscious emotions significantly influence risk perceptions.

  • People are less worried about risks they perceive as good, like suntanning and medical X-rays. But they greatly overestimate risks they have negative feelings about, like nuclear power and waste.

  • These intuitive risk judgments are influenced by emotional factors more than facts or statistics. Things labeled “good” seem safer, while “bad” things seem riskier.

  • Studies show subtler emotions, like brief facial expressions, can unconsciously bias our judgments without us realizing it. Mere repeated exposure to something also generates unconscious positive feelings toward it.

  • Negative feelings toward nuclear power predate examples like Chernobyl, suggesting deeper psychological mechanisms are at work. Both bad examples and general negative emotions influence risk perceptions.

  • Even unrelated risks seem higher after exposure to tragic stories, showing how emotions spill over beyond their original target. Corporations use this mere exposure effect to generate unconscious goodwill.

  • In the real world, black-clad sports teams received more penalties, showing how unconscious negative labels persist even when the original trigger is absent. Emotion strongly shapes our intuitive risk judgments.

  • In the last 35 games of the season, the Penguins hockey team wore a new black uniform. The coach and players were the same as the first half of the season.

  • However, their penalty time rose 50% to an average of 12 minutes per game while wearing the new black uniforms.

  • This demonstrates the psychological “Good-Bad Rule” where simply changing from a positive/neutral attribute (original uniforms) to a negative one (black uniforms) can influence negative outcomes, even with all other factors being equal. In this case, it led to more penalties taken by the team.

  • Graphic health warnings on cigarette packs, like images of diseased organs, increase perceptions of risk beyond just text warnings alone. This is because they make the health consequences more visceral and concrete.

  • Even subtle changes in wording can significantly impact perceptions and judgments. Psychiatrists were much more likely to recommend confining a mental patient when told “20 out of every 100 patients” would be violent than when told there was a “20% chance” of violence, even though the probabilities were the same. Concrete wording evokes stronger emotions.

  • People are generally poor at intuitively grasping probabilities and risks. Emotions often overwhelm rational analysis of odds. Things seen as “bad” like shocks loom larger than neutral outcomes even at low probabilities. Costs are also often ignored when evaluating risk reduction programs.

  • Regulations can reduce risk but also impose economic costs, and greater wealth generally corresponds to better health outcomes. Ignoring these costs can lead to overreaction to risks. Public demand for action usually does not consider costs. Confronting those costs can change risk perceptions. Overall rational, cost-benefit analysis of risks and risk reduction programs is difficult for emotional humans.

  • Early in the 1950s, Japanese prostitutes began getting silicone and paraffin injections to enhance their breasts for American servicemen, who preferred larger breasts. This led to the development of manufactured silicone breast implants in the early 1960s.

  • In the 1980s, some studies began finding links between silicone implants and connective tissue diseases like arthritis and lupus. This sparked concerns and lawsuits against implant manufacturers.

  • In the early 1990s, media coverage of women’s stories of health issues from implants escalated fears significantly. A CBS episode and congressional hearing added pressure. Implant manufacturers were given 90 days in 1992 to prove safety but their evidence was deemed inadequate.

  • The FDA banned silicone implants in 1992, though said they were banned for lack of proven safety, not proven harm. But lawsuits and media stories led many women to worry implants were dangerous. This resulted in the largest class action settlement in history in 1994.

  • Silicone breast implants went from being seen as innocuous to being viewed as a serious health risk in the 1990s, due to many women sharing personal stories linking their implants to various illnesses. However, there was still no scientific evidence at the time that implants actually caused disease.

  • People respond more strongly to personal stories than statistics. Stories about identifiable individuals are more emotionally powerful than abstract numbers. This explains why personal anecdotes played a large role in shaping public opinion around breast implants, even without supporting data.

  • Storytelling is hardwired in humans for evolutionary advantages like information sharing and social bonding. But stories are not always factually accurate on their own. For the breast implant issue, epidemiological studies were still needed to determine if implant recipients actually had higher disease rates than others. The personal stories, while understandable, were not scientific evidence of a causal link on their own.

  • Anecdotes and stories do not prove causation, only properly collected and analyzed data can do that. But humans’ intuitive understanding of numbers is limited compared to our ability to understand stories.

  • It is difficult for humans to grasp large numbers, discriminate between close quantities, and perform multistep calculations intuitively. Developing numeracy requires significant effort.

  • Numbers alone tend not to evoke an emotional response in the way that identifying an individual person can. Big numbers may impress by size but do not resonate emotionally in proportion to their magnitude.

  • Statistical concepts like proportions, sample bias, and regression to the mean are even less intuitive for humans to comprehend without effort. Biased samples can mislead by presenting only one side of the issue.

  • Overall, relying solely on intuition can lead to misinterpreting data and making incorrect conclusions if statistical and numerical aspects are not properly understood and analyzed. Telling stories is easier for humans than grappling with complex data.

  • People have an intuitive but flawed sense of randomness. They tend to perceive patterns where there are none and underestimate the role of chance.

  • This leads to mistakes like thinking a coin is more likely to land tails after several heads in a row (the gambler’s fallacy), or judging evenly dispersed dots on a page to look “more random” than a genuinely random scatter, which naturally clusters.

  • Psychologists have shown that perceptions like a “hot hand” in basketball are myths and that randomness alone can explain apparent clusters of events (see the simulation sketch after this list).

  • Failure to grasp randomness can have serious consequences, like rebuilding in flood-prone areas or panicking over rare cancer clusters.

  • Our intuitions about randomness produce harmless biases but sometimes fuel irrational fears. While science and statistics have limitations, they often show apparent patterns are due to chance alone.

  • Numeracy helps correct randomization errors, but many people have poor understanding of basic probability and risk concepts. When risk perceptions are miscalibrated, it can lead to issues like the breast implant scare.
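As an aside, the clustering point above is easy to check directly. The short Python sketch below is not from the book; the function names and parameters are my own. It simply simulates fair coin flips and reports how often long streaks appear, showing that the “patterns” intuition flags as suspicious arise from chance alone.

```python
import random

def longest_streak(flips):
    """Return the length of the longest run of identical outcomes."""
    best = current = 1
    for prev, nxt in zip(flips, flips[1:]):
        current = current + 1 if nxt == prev else 1
        best = max(best, current)
    return best

def simulate(n_sequences=10_000, flips_per_sequence=100, seed=42):
    """Estimate how often a fair coin produces a long, 'suspicious' streak."""
    rng = random.Random(seed)
    longest = [
        longest_streak([rng.choice("HT") for _ in range(flips_per_sequence)])
        for _ in range(n_sequences)
    ]
    avg = sum(longest) / len(longest)
    share = sum(s >= 6 for s in longest) / len(longest)
    print(f"Average longest streak in {flips_per_sequence} flips: {avg:.1f}")
    print(f"Share of sequences with a streak of 6 or more: {share:.0%}")

if __name__ == "__main__":
    simulate()
```

In a typical run the average longest streak comes out around six or seven flips, exactly the sort of “pattern” that intuition misreads as a hot hand or a rigged coin.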

Classic psychological experiments in the 1950s showed that people have a strong tendency to conform to group opinions and consensus even when it contradicts their own perception or judgment. In one study, researchers had participants view images and answer questions, with others planted in the study deliberately providing wrong answers first. Over a third of real participants then agreed with the wrong group response rather than sticking to their own assessment. This shows how powerfully humans are influenced by social norms and groupthink.

While disconcerting, conformity had evolutionary advantages for survival, as cooperation and agreement with the group were important for tasks like hunting. In modern times, despite far greater access to information, true independence of thought remains difficult. Relying on experts is better than relying on lay judgment, but experts often disagree too. Most people depend on intermediary organizations that filter and spin scientific findings to support their own agendas, leaving laypeople doubtful about disputed issues when the underlying research is complex. Truly independent analysis requires expertise that few have the time to attain.

  • There are valid reasons to be skeptical of analyses from lobby groups like NARAL and Focus on the Family, as they may present science in a biased way to support their agenda. However, complete cynicism that denies the possibility of objective truth is wrong.

  • Trust in experts and institutions is important, as it influences public views on risks. But trust has declined and some distrust scientific consensus on issues like climate change or vaccine safety.

  • Experts don’t always convince the public - public feelings can override scientific opinion. But consensus among scientists still shapes most people’s views when they’re uncertain.

  • Social and group influences strongly shape individual risk perceptions. People may adopt views simply because others around them believe it, through “social cascades.” This conformity effect is even stronger for personally important questions versus meaningless laboratory tasks.

  • A study found conformity rates increased significantly when an eyewitness identification test was framed as important and having real consequences, versus an unimportant pilot study. The stakes influence how much people defer to the group view.

  • Conformity experiments from the 1960s were replicated, finding that people still conform to group answers, especially when tasks are difficult or ambiguous and the group is united and confident.

  • Confirmation bias causes people to seek information confirming their beliefs and ignore conflicting information. Once a belief is formed, people screen new information to confirm rather than disconfirm the belief.

  • A classic experiment showed people form beliefs and then fail to disconfirm them by seeking counterexamples, becoming overconfident their initial belief was correct.

  • Strong partisans process information contradicting their candidate differently in the brain than neutral information. Confirmation bias strengthens initially weak beliefs spread through a group.

  • Group polarization occurs - when like-minded people discuss a view, they become more extreme than their original individual positions due to reinforcement of shared beliefs and dismissal of alternatives within the group. Initial consensus widens rather than reaches compromise.

  • When like-minded people gather to discuss an issue, their existing views tend to become more extreme due to two forces: comparing themselves to others in the group makes people feel they need to have a more extreme view to be considered more “correct”, and information sharing allows biased facts to confirm and reinforce their views.

  • This group polarization effect has been demonstrated in psychological experiments where people simply state their views without supporting reasons - just being in a like-minded group is enough to polarize opinions.

  • Views become more extreme as biased information is pooled within the group and each person leaves believing the problem is bigger than they initially thought. This confirmation bias filters out dissenting views.

  • The media contributes to this effect by generating more coverage in response to growing public concern, creating a feedback loop that amplifies fears. Public risk assessments can change suddenly based on shifting social and cultural factors rather than scientific evidence.

  • Culture plays a key role in shaping individual risk perceptions beyond just psychology and social influence. Cultural influences like experiences, stories from others, and media shape what risks evoke positive or negative feelings in people. These feelings then influence intuitive risk judgments through mechanisms like the “good-bad” rule.

  • Cultural influences and biases impact perceptions of risk through unconscious rules like the “Good-Bad” rule and “Example” rule.

  • Perceived risk of things like drugs is often exaggerated due to stigma, while risks of accepted things like alcohol are underestimated.

  • People tend to form social networks with others who share their worldviews, reinforcing cultural biases.

  • Studies found the “white male effect” - white men perceive less risk than others. However, this was better explained by differences in cultural worldviews rather than demographics.

  • People’s core cultural orientations, like individualism vs. collectivism, better predicted their risk perceptions of issues like guns, climate change, etc. than factors like gender, race, education. Perceived risks aligned with feelings associated with their cultural orientation.

So in summary, cultural influences seem to have a powerful unconscious impact on shaping risk perceptions through mechanisms like stigma, availability of examples, social networks, and alignment with underlying cultural worldviews. This helps explain differences in how various groups perceive various risks.

  • The article describes the Security Essen trade show in Germany, the world’s biggest exhibition for the private security industry. It showcases thousands of products and technologies aimed at protecting individuals and businesses from threats.

  • The vast array of surveillance cameras on display illustrates how the industry capitalizes on fears about security and privacy. Weaponry is technically not allowed but some lethal items are still exhibited.

  • Armored vehicles, road barriers, and infiltration detection systems cater to fears of terrorist attacks, even though such events are still rare in Europe. A whole new market segment focused on anti-terrorism is emerging.

  • The private security industry has massively expanded in recent decades. Major companies now employ hundreds of thousands of people globally and generate billions in annual sales.

  • Security product advertisements play on fears of home invasions and other dangers to promote residential alarm systems. The industry thrives by cultivating worry and perceptions of risk.

  • In summary, the article examines how the private security industry stokes public anxieties to fuel continuous growth, as seen at the largest trade show where fear meets capitalism. Surveillance technologies and anti-terror devices are increasingly marketed based on unlikely but alarming scenarios.

  • Home security system ads tend to depict very unlikely crimes, like a stranger breaking in during the day, in order to market fear and drive sales. They portray suburban neighborhoods as unsafe despite low actual crime rates.

  • These ads are processed intuitively by viewers’ “gut” rather than rationally, increasing perceived risk and forming lasting fearful memories. Corporations leverage fear to increase sales in many areas like security, cleaning products, and pharmaceuticals.

  • Examples given include ads linking toilet water to drinking water in order to sell water filters, and a doctor’s office poster emphasizing medication and cholesterol reduction, suggesting it was made by a pharmaceutical company to promote drug use.

  • In general, many organizations exploit public fear for profit or political gain, through exaggerating risks, associating normal things with fear or disgust, and implying dangers where none truly exist. This pattern of “fear marketing” is widespread and influences public perceptions.

  • Pharmaceutical companies spend billions on marketing to doctors, health groups, and patients. This includes sponsoring educational campaigns and condition branding to raise awareness of treatable diseases.

  • Critics argue this “disease mongering” aims to expand drug markets by medicalizing normal biological variations or risk factors as diseases treatable by pills.

  • Companies singled out high cholesterol as a disease rather than a risk factor in order to promote cholesterol-lowering drugs. A Pfizer awareness campaign was criticized for frightening people about their risk of death from high cholesterol.

  • More broadly, pharmaceutical marketing has played a role in lowering diagnostic thresholds so more people qualify as diseased, according to some studies. Critics say this undermines public health by overmedicalizing healthy populations in order to sell more pills.

The passage discusses how the pharmaceutical industry promotes drugs through direct-to-consumer advertising (DTCA) in ways that emphasize emotions over facts and prioritize profits over public health. Studies have shown that drug ads focus on instilling fear about diseases and promising happiness through medication rather than educating viewers. They seldom mention lifestyle changes as alternatives or make clear who would truly benefit from treatment. The industry leverages advances in psychology research to tap into unconscious decision-making and effectively push emotional buttons. Companies understand how to associate positive feelings with their brands. Neuromarketing uses neuroscience tools like MRIs and biometrics to understand emotional engagement on a deeper level than self-reported surveys. While the industry claims its ads provide public education, the amount spent on promotion, along with the emotive and ambiguous messages, raises questions about whether profit maximization is the true driving motivation over public benefit.

  • Political campaigns often use fear-based messaging to influence voters, though some doubt how effective this strategy is. Academic research on the role of emotions in political ads has been limited.

  • Ted Brader studied campaign ad content and found that while most ads appealed to both emotions and logic, a majority (72%) had emotion as the dominant appeal. Fear, anger and pride were common emotions used.

  • Brader experimentally tested who is influenced by emotional vs informational appeals in political ads. He found that “juiced” emotional ads had more impact than bland informational ones.

  • Enthusiastic emotional appeals influenced both informed and uninformed voters, but fear-based appeals only influenced more politically knowledgeable viewers, not less informed ones.

  • Emotional audiovisual elements in ads, like music and imagery, can strongly reinforce or even overwhelm the intended verbal message. Removing informational content from ads reduces their effectiveness.

  • In summary, Brader’s research provided evidence that emotions, especially when enhanced through audiovisual elements, can play a significant role in influencing political views and behaviors. More informed voters may be more susceptible to fear-based messages.

  • Political strategists and advertisers have long understood that images and visuals can overwhelm factual spoken messages in political ads and messaging. This approach aims to evoke emotional reactions over factual arguments.

  • While spin doctors and corporations may use fear-based tactics to advance their self-interested goals, NGOs, activists and charities also sometimes employ questionable uses of fear despite their purported aim of serving the public good.

  • Examples are given of charity organizations and campaigns employing statistics, messaging, or selective facts that overstate problems like childhood hunger or cancer rates and risks. While the causes are worthy, the alarming messages and numbers often prove questionable or misleading upon closer examination.

  • Criticism of overly broad or inaccurate fear-based claims is sometimes dismissed on the grounds that it still draws attention to important issues, even if the specific messaging or statistics used are misleading. However, balanced and fact-based communication is important for public understanding and policymaking.

  • The passage discusses the tactics used by various organizations to promote their causes and raise awareness, such as non-profits, NGOs, charities, government agencies, and companies.

  • Many use fear-based, exaggerated, or misleading messages to grab people’s attention in an environment where they are bombarded with information. Simplistic scary slogans are more effective than nuanced, balanced discussions.

  • Examples given include misleading statistics about car crash risks, made-up claims about doping in hockey, and press releases emphasizing unlikely food poisoning risks in schools.

  • More sophisticated groups can utilize social marketing techniques and paid media like ads and video news releases. Celebrities and stunts also help promote causes.

  • Scientists also feel pressured to spin their messages dramatically to be heard, despite their commitment to accuracy, as a former climate scientist discusses.

  • In summary, the passage critiques how exaggeration and selective facts are commonly used to promote causes due to the need to cut through the noise and motivate public support or behavior change.

Here is a summary of the key points regarding being effective and being honest:

  • In science, all knowledge is tentative and open to challenge. Scientists deal in probabilities and degrees of confidence rather than absolute certainty. This creates challenges for communicating findings clearly to the public.

  • Scientists strive to communicate accurately while balancing effectiveness in raising awareness. Using careful language like “very likely” or “it is likely that” conveys the tentative nature of the knowledge.

  • Some organizations promote their findings through alarming press releases and simplifications that overstate certainty, even if the actual studies acknowledge more nuance and uncertainty. This catches media attention but risks misleading the public.

  • Reporters often rely directly on press releases rather than reading the underlying studies due to time constraints. So sensationalized framing and facts from press releases can shape the resulting news stories, even if the full findings were more balanced.

  • There is a tension between scientists’ goal of being accurate versus effective communicators, and between organizations’ marketing aims versus fully transparent communications. Striking the right balance is an ongoing challenge.

  • The Globe and Mail featured a young girl named Shelby Gagne as the face of its cancer coverage series, using her portrait prominently. However, most cancer cases and deaths are among the elderly, not young children like Shelby.

  • Media coverage of cancer often focuses on rare and emotional stories of young victims, skewing public perceptions of who is most at risk. Research shows news articles overrepresent breast cancer victims under 50, though most cases are in older women.

  • Images in particular influence risk perception beyond factual information. Studies found readers perceived higher risk of a fictional disease when articles included disturbing photos of ticks and sick children, compared to text-only versions.

  • The media disproportionately covers dramatic and violent causes of death that lend themselves to vivid images, like murders and accidents, rather than larger but less sensational threats like diabetes and cancer in the elderly. This skews public understanding of overall risk.

The passage discusses how the media disproportionately focuses on rare but dramatic events like disasters, accidents, crimes, etc. even though societies have large populations so improbable things happen regularly. This skewed coverage affects risk perceptions.

It notes several failures in how media reports on risk:

  • Failing to provide context on probabilities/likelihoods. Stories may mention a potential risk but not say how probable it is.

  • “Denominator blindness” - stories give numbers like deaths but rarely provide the population the numbers refer to, so readers can’t calculate actual risk.

  • Rarely comparing risks to help viewers assess relative dangers. Stories report on risks in isolation without perspective.

  • Surveys find media coverage leads the public to markedly overestimate risks of dramatic causes of death and underestimate more common ones like diseases. It also inadequately conveys the relatively low risks of things like West Nile virus.

Overall, the passage critiques how media focuses on rare disasters and tragedies in a way that distorts public understanding of actual risks and probabilities. More context on likelihoods and comparisons to other risks is needed.

  • Risks can be described in two ways - relative risk (how much bigger or smaller a risk is compared to something else) and absolute risk (the actual probability of something happening).

  • The media often provide only relative risk figures, which can be misleading without the absolute risk as context; for example, saying a risk is doubled or increased by 50% without giving the actual probability (a short worked sketch after this section’s summary makes the gap concrete).

  • Newspapers routinely exaggerate and overdramatize health risks in their reporting by emphasizing relative risks without absolute risks. This was seen in coverage of birth control patches and cannabis use increasing mental illness risks.

  • There are some self-interested reasons for media sensationalism as it sells more papers and gets higher ratings. But there are also systemic issues like declining resources forcing less accurate/thorough reporting.

  • Reporters are also human and drawn to telling dramatic stories, so they exaggerate risks without meaning to mislead. Companies also actively promote misleading risk information to the compliant media.

So in summary, while media sensationalism is partly profit-driven, it’s also due to systemic flaws and human tendencies that make accurate risk communication a challenge without conscious effort. Both relative and absolute risk figures are needed for public understanding.
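To make the relative-versus-absolute distinction concrete, here is a minimal Python sketch using purely hypothetical rates (the numbers and the function name are illustrative, not taken from the book). A headline-friendly “risk doubled” can correspond to a single extra case per 100,000 people.

```python
def describe_risk(baseline_per_100k, new_per_100k):
    """Print the same change in risk under relative and absolute framings.

    The rates are hypothetical, chosen only to show how different the two
    framings can sound.
    """
    relative_change = (new_per_100k - baseline_per_100k) / baseline_per_100k
    absolute_change = new_per_100k - baseline_per_100k
    print(f"Relative framing: risk rose by {relative_change:.0%}")
    print(
        f"Absolute framing: from {baseline_per_100k} to {new_per_100k} cases "
        f"per 100,000 people, i.e. {absolute_change} extra case(s) per 100,000"
    )

# A headline could truthfully call this change "risk doubled".
describe_risk(baseline_per_100k=1, new_per_100k=2)
```

Reporting both framings together, as the passage argues, keeps the drama of the relative figure in proportion.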

Based on the passage, the key part of the story is how the instinct for storytelling drives what gets covered in the media and what doesn’t. Some key points:

  • Stories that include conflict, novelty, human interest and are part of an existing narrative are more likely to be deemed newsworthy.

  • Statistics and issues without a compelling narrative behind them are less likely to garner attention.

  • Even significant events may be ignored if they don’t fit the prevailing narratives at the time.

  • Narratives can influence coverage for years after the originating events.

So in summary, the passage is highlighting how the media’s search for compelling stories, guided by emotions and narrative tendencies, shapes which issues receive widespread attention and coverage.

  • Journalists and media outlets have a bias toward vivid, emotional stories that trigger strong reactions, even if they are not statistically representative of actual risks. Labeling BSE as “mad cow disease” caused more worry among the public, for example.

  • They also tend to favor “bad news” over “good news” and focus on highlighting threats and dangers rather than progress. Newspapers turned good cancer rate decline statistics into alarming headlines.

  • It is difficult for journalists to craft compelling stories about statistical norms or non-events like a disease that doesn’t develop. Stories about rare incidents are more newsworthy than common ones.

  • This bias is reinforced by audiences’ inherent preference for exciting, dramatic narratives over dry facts. Entertainment media exaggerate threats further for dramatic effect, which can potentially misinform audiences about actual risks.

  • Media coverage can significantly influence risk perceptions, as seen by its impact on beef consumption during the BSE crisis. Both news and entertainment media likely contribute to the public often inaccurately assessing risk levels.

  • Researchers in Burkina Faso found that residents rated risks in a way that mirrored French surveys, despite very different real risks in the two countries. This shows the influence of the dominant French media and narratives.

  • The media both reflect and shape societal fears through a feedback loop. They cover issues that stir public anxiety, generating more fear, which leads to more coverage.

  • In the 1990s, “road rage” emerged as a panic issue in the US media despite little evidence of increased aggressive driving. Interest groups and agencies promoted the narrative for their own ends.

  • The road rage scare disappeared as quickly once the Clinton impeachment diverted attention. A later report found road rage was overblown and resources had been misdirected from more serious safety issues.

  • The media can amplify issues out of proportion by responding to narratives rather than objective evidence. This dynamic contributed to temporary moral panics like the “Summer of the Shark” in 2001.

This passage discusses how media coverage of rare crimes can distort people’s perceptions of risk. Some key points:

  • Media frequently covers rare but sensational crimes like shark attacks and child abductions, giving the impression they are more common than statistics show. This satisfies the “example rule” but misleads on probability.

  • Anderson Cooper did a special on child abductions that intensely profiled victims’ stories without providing clear statistics on risk. This satisfied emotional “good-bad rule” but not rational assessment.

  • Brief on-screen stats were hard to notice and didn’t give full context. Viewers were left with an intense emotional reaction but no understanding of true likelihood.

  • Experts criticize how media focuses on examples over data, satisfying emotions over rational risk assessment. While individual stories should be told, overwhelming coverage distorts perception of actual threats.

  • Politicians and media use fear of rare crimes like child abduction to galvanize attention, even though statistics show much lower risks than other crimes or accidents. This can undermine an informed understanding of probability.

In summary, the passage argues intense media coverage of rare crimes can distort risk perceptions by satisfying emotional reactions but not supporting rational assessment with clear statistics and context.

  • The statistics on child abductions presented in the media are often exaggerated and incomplete. In the 1980s, figures like 50,000-75,000 children abducted per year were cited with no clear source.

  • A federal study (NISMART) in 1999 found that of around 797,500 missing children cases, the vast majority were runaways or family abductions during custody disputes. Only 58,200 involved non-family abductions, which can include minor incidents.

  • True “stereotypical kidnappings” by strangers number around 115 cases per year for children under 18, and about 90 for those under 14. This puts the annual risk at roughly 1 in 600,000 to 1 in 700,000 (a quick back-of-the-envelope check follows this list).

  • Other risks like drowning or car accidents are much greater. The chance of a child being abducted and killed is about 1 in 1.4 million.

  • Media coverage exaggerates the risks by focusing on rare but sensational cases, ignoring context. This fuels public fears despite the statistics showing the true risks are almost indescribably small.
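As a rough sanity check of the figures in this list, the sketch below divides the roughly 115 “stereotypical kidnappings” per year cited above by an assumed US under-18 population of about 70 million. The population figure is my assumption for illustration, not a number from the book, but the result lands in the 1-in-600,000 to 1-in-700,000 range the passage quotes.

```python
# Back-of-the-envelope check of the figures quoted in the list above.
stereotypical_kidnappings_per_year = 115   # figure cited in the passage
children_under_18 = 70_000_000             # assumed, approximate US under-18 population

annual_risk = stereotypical_kidnappings_per_year / children_under_18
print(f"Implied annual risk: about 1 in {round(1 / annual_risk):,}")
# Prints roughly "1 in 608,696", within the range quoted above.
```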

  • The media disproportionately covers rare crimes like abductions and murders, making them seem more common than they are. This is done through dramatic portrayals in news stories, novels, movies, etc.

  • Research shows crime makes up 10-30% of newspaper content and is one of the most popular topics on local TV news, but the coverage focuses on individual crimes rather than broader context and issues.

  • This leads to a bias where rising crime gets more attention through stories of individual crimes, while falling crime rates are harder to capture and go underreported. Two examples of declining crime rates getting little media coverage are given.

  • The coverage also skews toward more violent and gruesome crimes like murder, which may account for half of crime stories. Less attention is given to more common property and non-violent crimes.

  • In reality, violent crimes make up a small fraction (12%) of total crime based on US FBI data, with property crimes being over seven times more common. But the media image focuses on the rare violent crimes.

  • Within murder coverage, uncommon cases featuring unusual victim profiles tend to get disproportionate attention over more typical murders.

  • Victims also tend to be misrepresented, with most coverage focusing on crimes against children, women and elderly, despite data showing young males are most commonly victims.

  • The news and entertainment media often misrepresent and exaggerate crime rates and statistics. They portray property crimes and mundane murders as uncommon, focusing instead on exotic and violent killings.

  • True crime media like COPS portray most crimes as involving young male shirtless suspects being violently apprehended. True crime books and shows take a very harsh view of all criminals as sociopathic beasts.

  • Surveys consistently find that people vastly overestimate violent crime rates, thinking it makes up around half of all crime when it’s much lower. People also usually think crime is always getting worse despite declines.

  • Personal experience and conversations have more influence on views of local crime rates, which are seen more positively. But judgments of national crime rely more on media coverage, resulting in a more pessimistic view.

  • Exposure to crime media coverage reinforces and amplifies perceptions of fear in a spiral effect. Those who fear crime more will seek out more media coverage, fueling further fears. This effect may not be limited just to views of crime.

  • Humans have an innate instinct to be social and tell stories about people, including crimes. Crime stories in particular grab our attention because understanding rule-breakers was important for survival.

  • Emotions like anger and justice are key parts of what makes crime stories compelling. Anger that someone hurt another must be punished fuels our interest in crime stories.

  • While headlines focus on rare, unusual crimes, most crimes are local and say little about one’s personal risk. But emotionally, the gut perceives crime stories as conveying high risk regardless of probability.

  • An experiment showed people disregarding risk ratings and allocating equal funding to a high-risk deer problem and low-risk crime problem due to the emotions evoked by crime overriding the numbers. Even with crime risks much lower, emotions led people to equal treatment, showing the power of emotions over reason in perceptions of crime and risk.

  • Emotions play a large role in how people perceive crime issues and this influences both media coverage and political rhetoric. Minor crimes stir up less emotion than more serious violent crimes or crimes against vulnerable victims like the elderly.

  • The media focuses on crimes that elicit strong emotions in viewers because the journalists themselves feel those emotions. This skews public perceptions of crime even if it accurately reflects emotional reactions.

  • Politicians capitalize on public fears around crime issues by raising alarms about threats and promising tougher policies to address those threats. This started in the 1960s and became a central campaign theme.

  • Cases like Willie Horton’s were deliberately used against political opponents to portray them as “soft on crime.” Both Republicans and Democrats have employed fear-mongering around crime.

  • Laws and policies that satisfy desires for retribution are more politically expedient than prevention programs. Tougher sentencing laws fueled mass incarceration.

  • Politicians now focus on sex offenders despite data showing most abuse is from family. They craft laws around highly publicized but rare crimes to appear protective.

  • A university survey found that the rate of online solicitation of minors declined over time, from about 1 in 5 to roughly 1 in 7. Most solicitations targeted older teens, with very few cases involving younger children.

  • Politicians often hype up the threat of crimes like online solicitation to push for tougher laws. But the evidence shows these laws may not actually increase public safety.

  • Groups like police departments, security companies, and nonprofit agencies also have incentives to promote fear of crime to grow their budgets and resources. Even when crime statistics are falling, they emphasize any small increases.

  • Tougher sentencing laws that politicians promote are often not backed by research. Mandatory minimums, three-strikes laws, and sex offender registries have questionable effectiveness and just contribute to overcrowded prisons.

  • Prison guard unions are a powerful lobbying force that benefits from perceptions of rising crime and tough-on-crime policies, even if crime is actually declining, as this grows the prison system and leads to more overtime.

So in summary, it discusses how various groups have motivations to hype up fears about crimes like online solicitation of minors, even when the facts don’t fully support the level of threat being portrayed. This drives overly harsh laws and policies that may not actually boost public safety.

  • California prison guards can earn over $100,000 per year including overtime pay of $37/hour. Their union donated $3 million to Gov. Gray Davis who then agreed to a large pay increase for guards.

  • Security consultants like Bob Stuber profit by marketing fear around issues like school shootings. Stuber runs scare programs in schools and appears often on TV as an expert, though he has a financial stake in promoting fear.

  • In reality, school shootings are incredibly rare events that affect virtually no students. Homicide rates in schools have declined significantly since the late 90s. However, high-profile shootings like Columbine garnered intense media coverage, fueling public fear despite statistics. Politicians amplified this fear rather than providing perspective.

  • Security consultants, media coverage, and politicians’ responses inflated the perceived threat beyond what statistics actually showed, despite schools being quite safe for the vast majority of students. This pattern recurred during later isolated shooting incidents.

  • Schools across the US reviewed emergency plans, locked doors, and held lockdown drills in response to fears about school shootings. However, government reports consistently show that schools are very safe and the risk of a student being murdered at school is essentially zero.

  • Despite data showing school safety, many Americans, including 1 in 5 parents, frequently or occasionally worry about their children’s physical safety at school.

  • Focusing so much on highly improbable fears can have negative consequences, like cutting school ties to the community, spending on security instead of education, and zero tolerance discipline policies that increase dropout rates.

  • Modern developed societies are actually some of the most peaceful in human history. While crime increased in the 1960s-80s, homicide rates today are still far lower than medieval Europe (over 10 times lower) and isolated tribal societies (20-80 times lower). Developed countries today experience some of the lowest violence levels in human history.

  • However, public fears of crime and violence often far outweigh the data and historical context, with negative impacts on schools, children, and society.

  • War and armed conflict have significantly declined in recent decades according to multiple studies. The rate of civil wars, genocides, and international crises have all dropped sharply since the early 1990s.

  • However, most people are unaware of these declines because the media focuses more on covering new wars or violent events, while peaceful endings of conflicts receive little attention. This creates a misperception that conflict is increasing.

  • Civilization has also progressed in that behaviors like torture, slavery, genocide and politically motivated killing are now widely condemned and less common in many parts of the world compared to history.

  • At the same time, concern about toxic chemicals in the environment and human body has increased. Studies found dozens of man-made chemicals in people’s blood and tissue worldwide. Some link this to rising cancer and health issues.

  • Rachel Carson’s influential book Silent Spring raised awareness of indiscriminate chemical use and its impacts on wildlife and human health in the 1960s. It helped launch the modern environmental movement and policy changes like the DDT ban.

So in summary, the passage discusses how armed conflict has actually declined but is underreported, while fears about chemicals in the environment and body have grown significantly since the 1960s.

  • Cancer provokes immense fear, far more than other major diseases. This stems in part from Rachel Carson’s influential book Silent Spring which warned of cancer-causing environmental pollution.

  • However, cancer rates were not necessarily rising as dramatically as Carson suggested. Increased lifespan and reduced deaths from other diseases like tuberculosis meant cancer took a larger share of total deaths even without changing rates.

  • The real driver of increased cancer, especially lung cancer, was the rise of smoking from the early 20th century on. But Carson avoided discussing smoking’s role.

  • While environmental pollutants may contribute slightly to cancer risks, the majority of cancers are not caused by low-level exposures but rather behaviors like smoking. Leading researchers say pollution accounts for a small percentage of total cancer deaths. Carson’s message linking all cancers to pollution misunderstood the complex causes and obscured other key factors like tobacco.

  • Environmental pollutants, including man-made and natural sources like radon gas, industrial emissions, and car exhaust, are estimated to cause only about 2% of all cancers.

  • It’s important to recognize that many natural substances in food, like certain plants, nuts, coffee, and carrots, contain natural carcinogens. Bruce Ames estimates 99.99% of dietary pesticides are natural and half of all chemicals tested cause cancer in high doses.

  • Lifestyle factors like smoking, drinking, diet, obesity, and exercise account for about 65% of cancers according to most estimates. Wealthier societies have higher cancer rates due to lifestyle differences.

  • Toxicologists emphasize that dosage is key - even very toxic substances are harmless in tiny amounts. Trace amounts of synthetic chemicals found in blood and water samples are too small to worry about.
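
The point that dose determines harm can be made concrete with a small unit-conversion sketch. All of the numbers below are hypothetical and chosen only to illustrate the arithmetic; they are not figures from the book:

```python
# Minimal dose arithmetic: convert a trace concentration in drinking water
# into a daily intake and compare it with a benchmark dose.
# All values are hypothetical, for illustration only.

concentration_ug_per_l = 0.5      # hypothetical contaminant level (micrograms per litre)
water_per_day_l = 2.0             # assumed daily water consumption (litres)
body_weight_kg = 70.0             # assumed adult body weight
reference_dose_ug_per_kg = 20.0   # hypothetical "no observed effect" benchmark

daily_intake_ug = concentration_ug_per_l * water_per_day_l
dose_ug_per_kg = daily_intake_ug / body_weight_kg

print(f"Daily intake: {daily_intake_ug:.2f} micrograms "
      f"({dose_ug_per_kg:.4f} micrograms per kg body weight)")
print(f"Fraction of the hypothetical benchmark dose: "
      f"{dose_ug_per_kg / reference_dose_ug_per_kg:.6f}")
```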

  • However, people intuitively feel contamination of any amount is dangerous due to evolutionary instincts to avoid toxic substances. This leads to perceptions not aligned with scientific evidence on safe dosage levels.

  • Cancer testing involves deliberately exposing rodents to very high doses, not trace environmental levels, to identify carcinogenic effects. Many substances cause cancer only at these unrealistically high doses.

  • The article discusses the limitations of using animal studies and epidemiological studies to determine human carcinogenicity of chemicals. Animal studies often use high doses that may not be relevant to typical human exposures, and different species may react differently. Epidemiology can show correlations but not prove causation.

  • Regulatory agencies classify chemicals as potential, likely or known carcinogens based on weighing multiple lines of evidence, not relying on any single study. Animal studies are still considered evidence but are given less weight than in the past.

  • Cultural factors influence perceptions of chemical risks. Natural products are assumed safe while synthetic chemicals are viewed as dangerous. Companies promote this notion to sell products. Media also disproportionately covers chemical risks over lifestyle/aging factors.

  • Environmental groups have effectively raised public concern about chemicals by linking them to human health issues like cancer, even if risks at typical exposure levels are small or uncertain. However, some experts argue this scaremongering is irresponsible and misleading to the public about actual health threats.

  • Several environmental groups raise concerns about tiny amounts of chemicals in the body and links to health issues like cancer, but often fail to mention that very small quantities may not actually be harmful.

  • Claims about pesticide contamination of waterways generally don’t acknowledge that most scientists believe the levels are too low to cause direct health impacts.

  • Assertions that cancer rates are reaching “epidemic” proportions ignore that cancer risks increase with age. As lifespans get longer, lifetime cancer risks rise even if annual risks remain constant.

  • Childhood cancer rates did increase 25% over 30 years until stabilizing in 1985, but the actual risk remains very small at around 0.0168% historically. Death rates from childhood cancer have declined.

  • Increases in reported cancer cases and incidence rates could partially reflect improved diagnostics rather than actual changes in disease prevalence or risk factors like chemical exposure. Statistics have limitations.

So in summary, while certain chemicals may pose health risks, some groups emphasize risks without full context on exposure levels and other technical factors that determine actual impact on human health outcomes. Cancer statistics are complex and influenced by non-risk demographic factors.
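
The difference between a relative increase and an absolute risk, which the childhood cancer figures above turn on, can be shown with a short calculation. The 25% rise and the roughly 0.0168% figure come from the summary above; the baseline is back-calculated and therefore approximate:

```python
# Relative vs. absolute risk, using the figures cited above (approximate).
final_risk = 0.000168          # ~0.0168% childhood cancer risk
relative_increase = 0.25       # the reported 25% rise over ~30 years

baseline_risk = final_risk / (1 + relative_increase)
absolute_change = final_risk - baseline_risk

print(f"Implied baseline risk: {baseline_risk:.6f} ({baseline_risk * 100:.4f}%)")
print(f"Absolute change: {absolute_change:.6f} "
      f"(about {absolute_change * 10000:.1f} extra cases per 10,000 children)")
```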

  • Many forms of cancer can exist harmlessly in the body without causing symptoms. Screening programs like mammograms detect both aggressive cancers and harmless ones, causing incidence rates to rise without more people actually getting cancer.

  • Experts look at both incidence and death rates to understand trends. If they rise together, cancer is likely increasing. But screening can cause them to diverge as more harmless cancers are found.

  • Factors like improved diagnosis, screening programs, and data collection impacted childhood cancer trends in the 1970-80s, making the underlying changes unclear.

  • Incidence rates rose in the 1970-80s due to screening but have leveled off. Death rates for most cancers have been falling in developed nations.
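
A minimal sketch, with invented numbers, of the screening effect described in the last few points: detecting harmless cancers pushes measured incidence up while deaths stay flat, so the two rates diverge.

```python
# Toy illustration (invented numbers): screening detects harmless cancers,
# raising measured incidence without changing the number of deaths.

population = 100_000
aggressive_cases = 50    # cancers that would be diagnosed anyway and can kill
harmless_cases = 40      # indolent cancers found only through screening
deaths = 20              # deaths among the aggressive cases

def per_100k(count, population=population):
    return count / population * 100_000

# Before screening: only aggressive cancers are diagnosed.
incidence_before, mortality_before = per_100k(aggressive_cases), per_100k(deaths)
# After screening: harmless cancers are detected as well.
incidence_after, mortality_after = per_100k(aggressive_cases + harmless_cases), per_100k(deaths)

print(f"Incidence per 100k: {incidence_before:.0f} -> {incidence_after:.0f}")
print(f"Deaths per 100k:    {mortality_before:.0f} -> {mortality_after:.0f}")
# Incidence jumps from 50 to 90 while mortality stays at 20: the rates diverge.
```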

  • While synthetic chemicals have risks, a lack of full scientific certainty should not automatically lead to banning them, as that risks negative consequences. The precautionary principle is ambiguous and can paradoxically forbid both action and inaction.

  • Both natural and synthetic chemicals pose unknown risks; natural chemicals vastly outnumber synthetic ones and account for about half of known animal carcinogens. Banning chemicals until proven safe could leave little left to eat.

  • The passage questions the simplistic narrative about DDT presented by environmental groups like the WWF. DDT had significant public health benefits in fighting diseases like typhus and malaria that are rarely acknowledged.

  • DDT was critical in eradicating malaria from much of Europe and North America in the 1950s-60s. Estimates suggest it saved millions or tens of millions of lives overall.

  • However, DDT did also have legitimate environmental impacts like harming bird populations. And mosquitoes rapidly developed resistance when it was overused.

  • The risks and benefits of DDT are complex, not clear-cut. Both banning and continued use carry risks that must be weighed. The precautionary principle provides no clear guidance in such dilemmas involving trade-offs.

  • People selectively worry about certain risks more than others based on subjective and emotional factors. This can distort risk regulation. A rational, evidence-based approach considers all risks and costs, not just the most alarming. But alarmism is often used by activists, politicians and corporations for ideological or financial gain.

  • More than half of cancers in developed nations could be prevented through lifestyle changes like exercise, diet, and not smoking. Environmental chemicals pose a small risk compared to lifestyle factors.

  • However, it is difficult to convince people to change their lifestyles through willpower alone. Telling people to exercise more and eat healthier does not command attention the way a potential environmental contamination does.

  • The 9/11 terrorist attacks in the US caused immense fear and uncertainty. With little prior knowledge of Al Qaeda, the threat seemed immense. Graphic media coverage amplified the shock.

  • Anthrax attacks further intensified fears. Polls in late 2001 showed over 80% of Americans expected more domestic terrorism. Over half feared they or their family could become victims.

  • Fear levels declined gradually in 2002 as no further attacks occurred. However, by 2006, worry about terrorism had risen again despite five attack-free years. Many still saw domestic attacks as likely.

  • The statistical risk of any one American dying in a terrorist attack remains extremely low. But fears have persisted far longer than actual risk would rationally justify.

  • The annual risk of dying in a motor vehicle accident is 1 in 6,498, compared with an estimated annual risk of 1 in 7,750 of dying in a terrorist attack worldwide, based on data from 1968-2007.

  • Terrorism causes relatively few deaths each year globally - an average of 379 deaths annually from 10,119 terrorist incidents from 1968-2007. This is a small number compared to other causes of death.

  • In Western countries the risk is even lower, with the lifetime risk of death from terrorism in most places being between 1 in 10,000 and 1 in 1,000,000. This compares to much higher risks from other causes like accidents.

  • While 9/11 caused an unusually large number of deaths, weapons of mass destruction are difficult for terrorists to obtain. There is no evidence that terrorists have succeeded in acquiring nuclear, chemical, or biological weapons despite longstanding efforts, including by groups targeting Israel. States also have strong deterrents against supplying such weapons to terrorists.

So in summary, the data shows that terrorism overall poses a very small risk to individuals in most places, especially in Western countries, and the threat of weapons of mass destruction being used remains largely theoretical based on the evidence so far.
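
Figures like these mix annual and lifetime odds, which are easy to conflate. A small sketch of the standard conversion, using an illustrative annual-risk figure rather than one taken from the book:

```python
# Converting an annual "1 in N" risk into an approximate lifetime risk.
# The annual odds below are an illustrative placeholder.

annual_odds = 1_000_000        # hypothetical annual risk of 1 in 1,000,000
years = 80                     # assumed lifespan

p_annual = 1 / annual_odds
p_lifetime = 1 - (1 - p_annual) ** years   # exact compounding
p_approx = years * p_annual                # good approximation for small risks

print(f"Lifetime risk: about 1 in {1 / p_lifetime:,.0f} "
      f"(approximation: 1 in {1 / p_approx:,.0f})")
```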

  • The cultivation of truly mass-casualty biological or chemical weapons that could kill thousands or tens of thousands requires advanced scientific training, significant resources, sophisticated equipment and facilities, rigorous testing, and effective dissemination methods. These high demands put such capabilities beyond most terrorist groups and even some nation-states.

  • Even al-Qaeda under Osama bin Laden, with its wealth, bases in Afghanistan, and relatively free operation in the 1990s, failed to acquire chemical weapons despite interest. Desire does not equal capability.

  • The Japanese doomsday cult Aum Shinrikyo had extensive resources, laboratories, equipment, and scientists, but still failed in nine attempts to deploy biological agents like anthrax and botulinum toxin. It eventually achieved limited success with chemical weapons like sarin gas.

  • Aum’s attacks in Matsumoto and on the Tokyo subway in 1994-1995 killed only a small number of people compared with what a single conventional bomb could have done. Their experience shows the technological difficulty of mounting effective chemical or biological attacks, even for a well-resourced group.

  • Al-Qaeda and other terrorist groups have far fewer advantages than Aum and have failed to recruit skilled scientists, limiting technical sophistication. Religious extremism also impairs sound judgement, as seen with Aum.

  • While nuclear attacks pose an immense threat, probability is still important when assessing risks, even catastrophic ones. Acquiring nuclear weapons remains extremely difficult for terrorist groups.

  • The passage discusses the threat of nuclear terrorism and argues that the actual risk to Americans is relatively low.

  • While a nuclear attack is unlikely, it’s not impossible. Building nuclear weapons requires extensive resources and expertise that terrorists do not have access to. Even countries with capable programs have struggled to develop nuclear weapons.

  • A successful nuclear attack in an American city could kill around 100,000 people, but the risk to any individual American would be very small at around 1 in 3,000. The economic impact would also likely be limited, as evidenced by the recovery after 9/11.
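
The 1-in-3,000 figure follows directly from dividing the assumed death toll by the U.S. population (roughly 300 million in the 2000s, an assumption of this sketch):

```python
# Quick check of the per-person odds cited above.
deaths = 100_000                 # assumed deaths from a single nuclear attack
us_population = 300_000_000      # approximate U.S. population in the 2000s

odds = us_population / deaths
print(f"Risk to any one American: about 1 in {odds:,.0f}")  # about 1 in 3,000
```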

  • Research shows that contrary to assumptions, disasters do not typically cause widespread panic. People usually respond with order, compassion and cooperation.

  • While a catastrophic attack could happen, the U.S. would remain powerful and prosperous. Events like 9/11, Katrina, and economic downturns did not destabilize the country.

  • However, public fear of terrorism remains disproportionately high due to post-9/11 rhetoric that portrayed the threat as existential rather than anomalous. The evidence does not support this level of concern.

  • After 9/11, the Bush administration framed the situation as a “war on terrorism” where civilization itself was under threat. They portrayed terrorists as having access to the world’s most destructive weapons and being able to strike anywhere at any time.

  • Top officials like the president and homeland security chief consistently emphasized that terrorism posed an existential threat on par with previous threats like WWII and the Cold War. This helped instill a sense of fear in the public.

  • However, they failed to provide any context about the actual probability of death from terrorism being quite low compared to other risks. Only John McCain encouraged looking at the odds.

  • The administration quickly linked Iraq and Saddam Hussein to 9/11 and the threat of terrorism, claiming Iraq had WMDs and could provide them to terrorists. Intelligence was adjusted to fit this policy.

  • The vivid warnings aimed to induce fear and downplay consideration of likelihood. The administration asserted that action was needed preemptively, before any threat became “imminent.” This “one percent doctrine” - treating even a one percent chance of catastrophe as a certainty for the purposes of response - helped justify the Iraq war.

  • The Bush administration dismissed the precautionary principle but cited the need to act preemptively against threats from Iraq. This highlighted a selective approach to precaution depending on the issue.

  • Bush successfully linked Saddam Hussein to 9/11 and terrorism through public statements and polls showed many came to believe this connection. Fear of terrorism increased public support for invading Iraq.

  • The Iraq war initially boosted Bush’s approval ratings and dominance as a wartime leader. Reminders of terrorism and new terror alerts also correlated with increased approval for Bush.

  • Republicans capitalized on terrorism fears to gain electoral victories in 2002-2004 by portraying themselves as stronger on national security. Democrats faced a dilemma in responding to these messages.

  • By 2006, the failure in Iraq undercut Republican arguments about competency on terrorism. However, candidates like Giuliani continued promoting terrorism fears in 2007-2008 to argue for Republican leadership.

  • Both political parties, government agencies, and politicians have incentives to highlight terrorism threats due to the political sensitivities around this issue. Threat inflation has remained a recurring tactic.

  • After 9/11, many government agencies and private entities had incentives to hype the threat of terrorism for political or financial gain. This included justifying budgets, passing legislation, and promoting security/terrorism industries.

  • Government officials like CIA director Goss and FBI director Mueller emphasized hypothetical future terrorist threats rather than the lack of attacks, to argue they were being proactive. This fed the perception that another large attack was inevitable.

  • Government agencies like DEA linked drugs to terrorism to protect budgets. Local governments took federal grants for dubious “homeland security” purposes.

  • The private security industry lobbying in DC grew drastically after 9/11 as terrorism became extremely lucrative.

  • Prosecutors and NGOs also found it advantageous to link their causes to terrorism for publicity. Overall, collectively hyping diverse threats as related to terrorism implied it was an all-encompassing crisis.

  • The “terrorism industry” of experts also grew considerably, some producing outlandish hypothetical scenarios that could shape real threat perceptions through probability illusion. Imagining rare “what if” scenarios made probable threats appear inevitable.

  • Getting enriched uranium and building a nuclear bomb is extremely difficult and unlikely to succeed without being detected. However, the human “gut” reaction is to feel that such a scenario is more plausible than logic and facts suggest.

  • Post-9/11, the media focused almost solely on terrorism stories and speculation about possible attacks, even when there were no new attacks actually occurring. They tended to uncritically accept scary scenarios from experts without questioning unrealistic elements.

  • Smallpox became a major concern despite being eradicated for decades, due to media hype and lack of skepticism. Surveys found many Americans worried they would contract it.

  • A 2003 raid in London found ricin recipes, not actual ricin, but this was still presented as evidence of chemical weapons threats by officials and reported unquestioningly by media.

  • When no major post-9/11 attacks occurred in the US, the media rarely considered the possibility that al-Qaeda simply lacked capacity, preferring theories involving thwarted plots or secret motives.

  • Over time, some media critics of the administration found it increasingly agreeable to portray the terrorist threat as large and growing to blame Republicans, rather than maintaining skepticism.

  • Michael Scheuer, a former CIA official, commented on MSNBC that al-Qaeda could detonate a nuclear device in the US and we would have no way to respond.

  • Frank Rich presented this comment as credible evidence of a serious threat, despite Scheuer providing no actual evidence. However, Rich had previously criticized the Bush administration for using questionable evidence to hype the threat from Iraq.

  • The media’s coverage of terrorism amplified public fear through a feedback loop. Reporting focused on confirming the belief that terrorism was a major threat, while ignoring information that contradicted this view.

  • Entertainment media blurred the lines between fact and fiction, using dramatic portrayals of terrorist scenarios to inject more frightening images and emotions into public discourse.

  • As a result of this sustained messaging, many Americans came to strongly believe terrorism posed a serious risk, even in the absence of attacks, due to biases in information processing and conformity with the widespread social perspective.

  • Terrorism experts note the tactical goals of terrorists include revenge, fame/renown, and provoking an exaggerated reaction from adversaries - all of which can be advanced through heightened public fear responses even to relatively small-scale attacks.

  • Germany’s Red Army Faction believed West Germany was disguising its true fascist character behind a veil of democracy and consumerism. They hoped terrorism would cause the government to drop the veil and resort to its “true” fascist ways, pushing the left to revolution.

  • Bin Laden also anticipated two possible outcomes from 9/11 - either the US would withdraw from the Muslim world, weakening the dictators he opposed, or the US would invade the Muslim world, confirming his narrative of an attack on Islam and recruiting more to jihad.

  • The Bush administration’s response of defining 9/11 as an act of war and launching a global “war on terror” played into bin Laden’s hands by elevating al-Qaeda’s status and ensuring the very reaction - invasion of Afghanistan and Iraq - that bin Laden sought. This was the greatest gift to bin Laden and allowed him to achieve renown as the enemy of the US.

  • Controlling fear should be a major part of countering terrorism, not just stopping attacks. Leaders need to avoid hyping the threat and put risks in perspective using statistics. Declaring a “war on terror” ends up conceding the terrorist’s objectives by elevating their status and provoking overreaction.

  • The response of Tony Blair and Ken Livingstone to the 2005 London bombings provided a good model, remaining calm and determined not to allow terrorists to divide society or change its way of life through fear.

  • The author argues that while terrorism is a real threat, it is often portrayed as a greater threat than it actually is. Governments should protect citizens from attacks without abandoning civil liberties.

  • Counterterrorism spending by the US government increased significantly after 9/11, totaling over $100 billion annually across developed nations. This spending has significant economic costs as well through things like slower airport security screenings.

  • However, the terrorism threat has never undergone a proper cost-benefit analysis to determine if the large sums spent are proportionate to the actual risk reduced. Other threats like lack of healthcare in the US and diseases globally likely present greater risks at lower costs.

  • Expanded healthcare and prevention programs as well as initiatives against diseases like malaria could save many more lives than counterterrorism spending saves, yet receive only a fraction of the funding.

  • The enormous amounts spent on counterterrorism are more a result of exaggerated fear rhetoric rather than rational risk assessment and prioritization based on effective solutions. A more balanced, evidence-based approach is needed.

  • Charles Morden, a young boy, starts feeling ill with a fever and sore throat. His condition quickly deteriorates and he dies within a few days.

  • His younger brother and oldest brother also fall ill and die within a day of each other. Then the girls in the family (Minnie, Ellamanda, and Dorcas) also get sick and die over the next few days.

  • The family was suffering from diphtheria, a disease that was particularly deadly for children at the time but can also affect adults. With no treatment available, the father had to store the children’s bodies in the barn until they could be buried in the spring.

  • Diphtheria and other diseases commonly killed many children in the past. Vaccines and other public health advances over the 20th century have nearly eradicated diseases like diphtheria and drastically increased life expectancy globally. Average life expectancy has more than doubled over the past hundred years.

  • While risks remain, scientists expect continued improvements in health and longevity over the 21st century as well. We should feel grateful for the unprecedented health and safety of the modern world compared to history, rather than constantly worrying about potential threats.

The passage discusses why even though we live in historically safe times, many people still feel afraid. There are three main reasons for this paradox:

  1. The human brain is wired to focus on threats as a survival mechanism, but it is ill-equipped to process modern risks. We overestimate rare but scary dangers.

  2. Media and organizations profit from stoking fears to get attention, donations, etc. Stories about single tragic deaths get reported more than declining mortality rates.

  3. These fearmongering messages activate the brain’s instinct to amplify threats through repeating information between the brain, media, and organizations. This creates a “circuitry of fear” where each component reinforces the alarms of the others.

To counteract unreasoning fears, the passage suggests thinking carefully about risks, being skeptical of exaggerated claims, and using rational thinking (“Head”) to override the initial instincts (“Gut”) when they disagree on risk levels. However, rational thinking takes more effort and we are still vulnerable to psychological biases. Overall, we must acknowledge both the neurobiological and social drivers of fear in order to make informed decisions rather than reacting to unreasonable panic.

The passage discusses how hindsight bias affects our perception of history and colors our view of the past and future. It notes that when Baruch Fischhoff surveyed people who lived through historical events like the 1814 Britain-Nepal war and Richard Nixon’s 1972 trip to China, those who knew the outcomes perceived them as more predictable and certain than they actually were at the time.

It also discusses how those who lived through particularly uncertain and frightening periods in the past, like the peak of the Cold War in the 1980s when nuclear war seemed plausible, viewed the future very differently than we do today with the benefit of hindsight. Events like the end of the Cold War and slower than expected spread of AIDS did not appear nearly as inevitable or predictable to people at the time as they seem in retrospect. This illusion of certainty makes current uncertainties feel more frightening than they rationally should based on history.

  • In the late 1960s, scientists like Paul Ehrlich warned of imminent mass starvation due to overpopulation in books like “The Population Bomb.” Ehrlich predicted hundreds of millions would starve in the 1970s-80s.

  • Ehrlich used dramatic scenarios and hypothetical future events to make his predictions seem more plausible and alarming. One scenario depicted widespread famine in the US by the 1980s.

  • “The Population Bomb” was a bestseller and made Ehrlich a celebrity. It raised widespread concern about overpopulation leading to starvation.

  • Governments did not enact the emergency population control measures Ehrlich advocated. However, fertility rates declined more gradually than predicted and food production increased greatly through new technologies.

  • Contrary to Ehrlich’s predictions, mass starvation did not occur in the predicted timeframes. However, catastrophist authors like Ehrlich remain convinced their dire predictions will inevitably come true, showing a lack of humility around prediction accuracy.

  • Books warning of imminent catastrophe sell well by triggering fears, even if the predicted events never materialize as portrayed. Authors may overstate risks for marketing purposes.

Here is a summary of the book “Comet/Asteroid Impacts and Human Society” edited by P. Bobrowsky and H. Rickman:

This book examines the impacts of comets and asteroids on human society throughout history. It looks at past impact events and how societies responded. The editors bring together perspectives from astronomy, geology, history and other fields to provide a multi-disciplinary view of the topic. Chapters discuss specific comet and asteroid impacts that have been identified, and analyze their effects on human populations and civilizations at the time. The book also considers modern society’s understanding of and preparation for possible future impact threats. It aims to inform discussions around impact risk assessment, mitigation, and communications with the public. Overall, the editors and contributors present a comprehensive analysis of how comet and asteroid collisions have shaped human history, and the ongoing relevance of these celestial phenomena for civilization today.

The passage discusses how statistics can be misrepresented or taken out of context to create an exaggerated sense of fear or alarm. It gives the example of Nicholas Stern’s 2006 estimate of the potential economic costs of climate change under different scenarios, which ranged from 5-20% of global GDP. However, environmentalists and many journalists often cited the upper range of 20% without providing the full context. More broadly, it notes that the largest and scariest number within a range is frequently singled out and presented without the important qualifications. This misrepresentation of statistics can distort public perceptions and debates in an unreasonable manner.

  • In the 1950s, British epidemiologist Richard Doll published research linking smoking to lung cancer, helping establish the scientific consensus. In 1962, the Royal College of Physicians in Britain issued a corroborating report. Shortly after, the US Surgeon General created a committee that led to the famous 1964 declaration that smoking kills.

  • This established smoking as a major public health issue and catalyzed efforts to educate the public about tobacco’s health risks, introduce warning labels on cigarette packs, restrict tobacco advertising, and ban smoking in various public places. It marked a turning point in the understanding of an activity long considered harmless becoming recognized as a serious cause of death and disease.

  • A reasonable approach to potentially catastrophic scenarios would be to look at real experiences with the threats under discussion. However, this often does not happen. Small infectious disease outbreaks in the past resulted in far fewer deaths than commonly imagined.

  • When unknown viruses like Marburg virus emerged in Europe in the past, they led to relatively few infections and deaths, not the widespread catastrophe often envisioned.

  • Government officials have a pattern of grossly exaggerating terrorism threats and plots. The Jose Padilla case and several other alleged plots shrank significantly over time and did not involve major attacks. But initial exaggerations received much more media attention.

  • There is an inverse correlation between frightening government statements about terrorism and skeptical scrutiny from the media. Dubious claims are often reported without challenge. Even speculation about al-Qaeda’s intentions receives attention without evidence.

  • Terrorism exploits psychological vulnerabilities in societies. It aims to influence policy through perceptions of threat, not necessarily body counts. Its impact depends on the society being targeted.

  • The passage proposes that while many horrible things are happening in the world, it is also possible to imagine positive changes through scientific and technological progress. Some potential changes mentioned include vaccines for malaria and AIDS, genetically engineered crops to increase food production, and new forms of alternative energy.

  • The author argues these developments could usher in an unprecedented “Golden Age,” though acknowledges some scenarios of progress are as unrealistic as the most extreme scenarios envisioned by “Catastrophists.”

  • The passage then discusses Martin Rees’s book “Our Final Hour” which warns of various global catastrophes this century, though some of the threats discussed like self-replicating nanobots are still highly hypothetical. As one reviewer noted, merely being theoretically possible based on known laws of physics is not enough to seriously consider some risks without more substantial evidence.

  • In summary, the passage contrasts more optimistic visions for scientific progress with some writings focused on potential global catastrophes, noting both perspectives contain speculative scenarios and neither is necessarily more credible than the other based on currently available evidence and technological capabilities.

Here is a summary of the key points about lings:

  • Lings are a type of linguistic element that are smaller than words but larger than individual sounds or letters. They include prefixes, suffixes, roots, and other meaningful parts of words.

  • Analyzing the lings that make up words can provide insights into their etymologies and meanings. Common lings like “pre-”, “post-”, “-tion”, etc. tend to have consistent meanings that help decode unknown words.

  • Many languages have systematic ways in which lings can be combined to form new words. This morphological word formation allows speakers to understand or derive the definitions of novel word combinations.

  • In some linguistic frameworks like morphemic analysis, lings are analyzed as the smallest meaningful units of a word. Words are broken down into their constituent morphemes (roots, prefixes, suffixes) to understand semantic relationships.

  • The study of lings is useful for language learning as well as historical linguistics. Analyzing linguistic elements within and across languages can reveal relationships and how meanings have evolved over time.

  • While words are the fundamental units of syntax, lings provide additional insight into semantics, word origins, and how the internal structures of words relate to their meanings. The link between form and meaning is seen most clearly at the level of lings.

Here are summaries of selected terms:

typhus - A group of infectious diseases caused by bacteria. Typhus fever is caused by Rickettsia prowazekii and is transmitted by body lice.

U.S. Food and Drug Administration (FDA) - A federal agency responsible for protecting and promoting public health in the U.S. The FDA regulates food, drugs, cosmetics, and more.

vaccines - Biological preparations that provide active acquired immunity to a particular infectious disease. Vaccines contain a weakened or dead form of the germ that causes a disease.

Vallone, Robert - An American social psychologist known for research on the “hot hand” fallacy in basketball and the hostile media effect.

Vandello, Joseph - A social psychologist known for research on precarious manhood and cultures of honor.

Vietnam War - A war that occurred in Vietnam, Laos, and Cambodia from 1955 to 1975 between North Vietnam and South Vietnam. The U.S. supported South Vietnam.

volcanoes - Openings in the crust of a rocky planet or moon where magma and gases erupt onto the surface. On Earth, volcanoes are formed at plate boundaries.

VX nerve agent - An extremely toxic chemical weapon and nerve agent. VX is an odorless, oily liquid that can be inhaled or absorbed through the skin.

Wald, George - An American biologist and biochemist known for his work on vitamin A and the visual pigments of the retina. Wald helped establish vitamins as essential dietary components.

Walsh, John - An American television host and victims’ rights advocate best known for hosting America’s Most Wanted, which he began after the abduction and murder of his son Adam.

Wansink, Brian - A professor and nutritional scientist known for applying behavioral economics concepts to food consumption and healthy eating.

Wason, Peter - A British psychologist known for Wason selection task experiments on logic and reasoning. The Wason test examines understanding of if-then relationships.

water purification - The process of removing undesirable chemicals, biological contaminants, suspended solids and gases from water. Methods include filtration, disinfection, and distillation.

Waxman, Henry - An American politician who served as a U.S. Representative from 1975 to 2015. He authored major environmental and public health laws, including the 1990 Clean Air Act Amendments.

weapons of mass destruction (WMDs) - Weapons capable of killing many people and causing great damage. This includes nuclear, chemical, and biological weapons. Non-state use of WMDs is a global security concern.

Weingart, John - A cognitive scientist known for research on the sociology of scientific knowledge production and communication of science to the public.

Wells, Holly - A ten-year-old girl from Soham, England, who was murdered along with her friend Jessica Chapman in 2002; the case received intense media coverage.

Westen, Drew - A clinical psychologist and neuroscientist known for applying insights from psychology to understanding politics and polarized thinking.

West Nile virus - A virus spread by infected mosquitoes that can cause severe fevers in humans. It was first detected in the United States in 1999 and has since spread across North America.

Whelan, Paul - An American chemist known for important contributions in photochemistry, biochemistry, and molecular orbital theory through computational and experimental approaches.

Whylie, Barbara - An American psychologist known for research on unconscious racial bias and prejudice, counterstereotypic imaging, and diversity in STEM fields.

Wildavsky, Aaron - A political scientist known for the cultural theory of risk; with Mary Douglas he co-authored Risk and Culture, which examined how different worldviews shape perceptions of risk.

Will, George - An American journalist and political commentator. He was a Pulitzer Prize winning columnist for The Washington Post for over 30 years.

Willer, Robb - A sociologist known for using large online experiments to understand political ideologies, morality and prejudice, and how social networks influence opinions.

Wilms, Ian - A biologist known for conservation efforts to save panda populations in China through field studies, environmental education, and collaborative partnerships.

Wilson, Robyn - A risk and decision researcher known for studies of how emotion shapes risk judgments, including the experiment described earlier in which participants allocated equal funding to a high-risk deer problem and a low-risk crime problem.

Winfrey, Oprah - An American media executive, actress, author, and host. She is perhaps the most successful African American woman in business history, having built a multi-million-dollar media company.

Winkielman, Piotr - A social psychologist known for affective (unconscious emotional) influences on cognition, perception, judgment and choice. He uses psychophysiological methods to study unconscious and embodied processes.

Wolf, Naomi - A feminist journalist, author, and activist best known for The Beauty Myth (1991), which critiqued the growing cultural emphasis on women’s beauty, youth, and slimness in the 1980s and 1990s.

Woloshin, Steven - A physician and health sciences researcher known for studying how medical information and risk statistics are presented and interpreted, and for promoting informed decision making about screening and disease awareness.

World Health Organization (WHO) - A specialized agency of the United Nations responsible for international public health. The WHO helps coordinate disease control efforts, health systems strengthening, and more.

Worldwatch Institute - An independent non-profit environmental research organization that analyzes and reports on global environmental, energy and development issues.

World Wildlife Fund - An international non-governmental organization founded in 1961 dedicated to wildlife conservation and endangered species. It works in over 100 countries and supports habitat protection and restoration.

York, Herbert - An American nuclear physicist who was the first director of the Lawrence Livermore laboratory and later a prominent arms control advocate and government science adviser.

Zajonc, Robert - A Polish-American psychologist known for seminal research on social facilitation, showing that the arousal produced by the presence of others improves performance on simple tasks and impairs it on complex ones, and for the mere-exposure effect.

Zaltman, Gerald - An American social scientist known for developing the Zaltman Metaphor Elicitation Technique which uses visualization methods to access deep consumer and organizational metaphors and symbolism.

Zillman, Dolf - A German-American media psychologist known for experimental research on aggressive content and fear arousal in media effects, as well as effects of humor, nationalism, and physical attractiveness in entertainment and advertising.

Zimmerman, Peter - An American nuclear physicist and arms control expert who has written on the feasibility of nuclear and radiological terrorism.

About the Author: See introduction.

#book-summary