
The Voltage Effect: How to Make Good Ideas Great and Great Ideas Scale - John A. List


Matheus Puppe



  • In 2016, John List was busy leading a preschool experiment in Chicago Heights to improve educational outcomes for disadvantaged youth. After running the preschool for 4 years, they were analyzing the results and hoping to create a curriculum model that could be expanded nationally and globally.

  • List was recruited for the Chief Economist role at Uber but initially dismissed the idea since he was busy with his education work and didn’t see the connection. However, he realized Uber and his project had a common goal - scale.

  • Scaling means taking an idea from a small group to a much larger audience to achieve widespread impact. It underlies social and technological progress.

  • There are pitfalls at every step of scaling an idea, from conception to expansion. Overcoming these challenges requires strategic thinking.

  • List ultimately accepted the Uber job to learn strategies for scaling impact from Silicon Valley tech companies. He hoped these lessons could help scale solutions to society’s biggest problems like education inequality.

  • John List, an economist, realized while doing education research in Chicago that he was also studying scaling - why some ideas succeed at growing while others fail.

  • He took a job at Uber in 2016 to gain insight into their rapid scaling and growth, hoping to find lessons that could be applied elsewhere.

  • At Uber, List was impressed by their reverence for data, seeing it as a scientist’s playground. However, he clashed with Travis Kalanick in his interview, as Kalanick repeatedly questioned List’s research.

  • After a combative back-and-forth, List was surprisingly offered the job as chief economist at Uber.

  • List sees his work, and this book, as being about scaling in a broad sense across business, policy, and more - how to take an idea from small to big.

  • Coming from economics and fieldwork, List takes a scientific approach to scaling, seeking to move from theory to evidence-based conclusions about real human behavior.

  • List wants to develop a systematic framework to determine when and how ideas can scale successfully and sustainably.

  • The author discusses the importance of considering scalability when developing policies, programs, or business ideas. Failure to assess scalability often leads to “voltage drops” where promising ideas fail when implemented on a larger scale.

  • Scalability failures are common: by some estimates, 50 to 90 percent of programs lose effectiveness when scaled up. This can lead to unequal outcomes, cost overruns, and unmet expectations.

  • The author argues there are 5 “Vital Signs” or traits that make ideas more scalable. A deficiency in any one can doom scalability. The traits are not a “silver bullet” but assessing them can help identify bad ideas and scale good ones.

  • The insights on scalability come from the rise of big data and the author’s background in field experiments and economics. Big data allows tracking human behavior to understand why some ideas scale and others don’t.

  • The author calls for using data to create “policy-based evidence” rather than just “evidence-based policy” - going beyond existing data to generate new data to understand causes behind success and failure.

  • The book offers concrete steps to identify voltage gains, avoid voltage drops, and maximize scalability and impact when implementing ideas. The author wants to share lessons learned over his career as an economist conducting field experiments.

  • The book is divided into two parts. The first part explains the “Five Vital Signs” - the five key things that can cause an idea to fail when scaled up. These are: false positives, overestimating market share, unscalable circumstances, unintended consequences, and unsustainable costs.

  • The second part of the book covers four techniques for successful scaling: using behavioral economics like loss aversion, exploiting marginal opportunities, quitting short-term to win long-term, and designing a sustainable culture.

  • Examples are provided throughout of ideas that failed to scale or scaled successfully. A major example is D.A.R.E., the anti-drug program that was ineffective despite widespread adoption.

  • The book applies scientific principles to real-world examples of scaling in order to provide practical advice for anyone trying to scale an idea, whether in business, government, or elsewhere.

  • The ultimate message is that scaling impact requires avoiding common pitfalls and adopting evidence-based techniques. Careful analysis using the framework provided can help increase the chances of successfully scaling.

  • D.A.R.E. (Drug Abuse Resistance Education) was widely implemented in schools despite research showing it was ineffective. This illustrates how false positives can lead to wasted resources when ineffective programs are scaled up.

  • Statistical errors and biases like confirmation bias can lead to false positives in research. Samples may not represent the full population. People tend to seek out and interpret information in a way that confirms their existing beliefs.

  • The author tested a wellness program for Chrysler workers and got positive initial results. But further testing showed the early result was a false positive: the program didn’t actually work when implemented more broadly.

  • False positives are common across many domains because of statistical errors and cognitive biases. But using rigorous methodology can help avoid scaling up ideas that don’t actually work. The scientific method, while imperfect, is still the “least worst” method we have for testing ideas.

  • Confirmation bias and bandwagon bias are common cognitive biases that can lead to false positives and the scaling of bad ideas. Confirmation bias makes us seek out evidence that confirms our existing beliefs. Bandwagon bias makes us conform to the views of influential people.

  • These biases likely evolved as useful mental shortcuts but now hamper critical thinking. In the past, assuming a shadow was a predator kept us safe. Now, confirmation bias causes mistakes in science, medicine, policy, and business.

  • Bandwagon bias means influential leaders can sway groups to adopt bad ideas without proper vetting. This happened with D.A.R.E. as endorsers like Nancy Reagan helped it scale before research showed it was ineffective.

  • The winner’s curse and sunk cost fallacy also drive bad scaling. Overbidding to win (the winner’s curse) and staying committed to already invested resources (sunk cost fallacy) keep us scaling bad ideas.

  • These predictable quirks of human psychology show we need to be more aware of our biases and institute processes to vet ideas thoroughly before scaling. Relying on “fast thinking” heuristics often backfires. Critical thinking, not conformity or cognitive laziness, serves innovation best.

The winner’s curse is a phenomenon where the winner of a competitive bidding situation ends up overpaying for something because its true value was uncertain. This happens in many scenarios like start-up investing, Hollywood bidding wars, art auctions, etc. The winner pays more than the asset is worth.

A similar thing happens with ideas that seem scalable but aren’t. People overinvest to get them off the ground, then ignore data showing the ideas aren’t good and keep investing due to sunk cost bias. Throwing more money at bad ideas just increases losses.

The solution is replication - repeating tests multiple times to confirm results. Replication reduces false positives. Independent replication by others is even better.

In business, false positives can often be detected by testing products/features with sample customers first. But in public policy, lack of funding for replications and slow assessments makes this harder. Replication could have prevented money wasted scaling failed education programs.

Replication is key to reliable science, but many important experiments have failed to replicate. This highlights the need to repeat studies before widely applying their findings.
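The arithmetic behind replication’s power is worth making concrete. A minimal simulation sketch (not from the book; the significance level and trial count below are invented for illustration) shows how requiring several independent replications shrinks the false-positive rate roughly geometrically:

```python
import random

# Illustrative sketch: if a program truly has no effect, a single test at
# significance level alpha produces a false positive with probability
# ~alpha. Requiring k independent replications to all "succeed" drops
# that probability to ~alpha**k.

ALPHA = 0.05       # chance a null effect passes one test
TRIALS = 100_000   # simulated ineffective programs

def passes_all(k: int) -> float:
    """Fraction of null effects that survive k independent tests."""
    survived = sum(
        all(random.random() < ALPHA for _ in range(k))
        for _ in range(TRIALS)
    )
    return survived / TRIALS

for k in (1, 2, 3):
    print(f"{k} test(s): ~{passes_all(k):.4%} false positives "
          f"(theory: {ALPHA ** k:.4%})")
```

With a 5 percent single-test false-positive rate, demanding three independent successes leaves only about one spurious result in eight thousand, which is why independent replication is such an effective filter before scaling.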

The replication crisis in science has revealed that many published studies fail to replicate, calling into question the validity of the original results. This is often due to innocent statistical errors or cognitive biases, but can also be the result of intentional data falsification by unethical researchers like Brian Wansink. The incentives in academia to publish exciting new findings in top journals can motivate this kind of misconduct.

The same incentives exist in business, as seen with Elizabeth Holmes and Theranos. Holmes raised hundreds of millions of dollars for breakthrough blood-testing technology that didn’t actually exist. Investors relied heavily on social proof and confirmation bias rather than hard data, and the prospect of billions in stock value gave Holmes a motive to fake results. Better incentives, like rewards for employees who surfaced problems, might have exposed the issues sooner.

In general, incentives play a huge role. People will be motivated to lie or cheat if rewards are high for success. But counterincentives like protections for whistleblowers can surface problems early. We must consider incentives at all levels to encourage truth-telling and make fakery and false positives less rewarding.

  • The author left Uber in 2018 to work for its rival Lyft. He was attracted to Lyft’s more positive work culture under CEO Logan Green.

  • Like Uber’s Travis Kalanick, Logan Green was passionate about changing urban transportation. In 2007, he had founded a long-distance ridesharing company called Zimride that later became Lyft.

  • When the author joined Lyft, it was trying to find an innovation to help it compete with Uber. In late 2018, Logan Green thought a membership program could attract loyal customers, like Costco does.

  • The author disagreed that a membership program would scale well for Lyft. He argued in multiple meetings that rideshare is already readily accessible, unlike Costco, so a subscription model wasn’t necessary.

  • Logan believed memberships were the right strategy, while the author was confident they were not. The author felt Lyft needed to focus on differentiating itself from Uber beyond just price.

  • The key differences were that Logan thought memberships were the solution, while the author believed Lyft’s vulnerabilities as a startup meant memberships would not help it effectively compete with Uber.


  • At Costco, you can’t easily switch to another retailer like Sam’s Club in the moment. But with ridesharing apps like Uber and Lyft, switching between apps is very easy.

  • Costco ensures consistent selection and savings not readily matched elsewhere. But ridesharing is a dynamic market where prices and wait times fluctuate, allowing easy switching between apps.

  • People often subscribe to multiple streaming services to get more choices. But with ridesharing, there’s little differentiation between services.

  • Lyft tested a membership program offering discounts. But the data showed most riders taking the deals were “NoGoods” - they didn’t increase rides, just got discounts on existing rides.

  • This membership program didn’t scale because the NoGoods, who just got discounts without increasing rides, far outnumbered “JoGoods,” who increased their usage. This cost Lyft money.

  • For a membership program to work, there needs to be an attractive ratio of JoGoods to NoGoods. Lyft’s experiment showed the NoGoods greatly outnumbered the JoGoods.
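The JoGood/NoGood ratio logic above can be sketched with a toy model (every figure below is hypothetical, not Lyft’s actual data): each member ride is discounted, and only JoGoods add new rides.

```python
# Toy model of the JoGood/NoGood trade-off; all figures are invented
# for illustration and are not Lyft's actual numbers.

DISCOUNT_PER_RIDE = 1.00   # subsidy the program pays on each member ride
MARGIN_PER_RIDE = 3.00     # profit per ride before the discount
BASE_RIDES = 10            # rides/month a member already took
EXTRA_RIDES = 12           # additional rides a "JoGood" adds after joining

def monthly_profit(n_jogoods: int, n_nogoods: int) -> float:
    """Net monthly impact of the membership program on the company."""
    # NoGoods ride as before, but every ride is now discounted: pure loss.
    nogood_cost = n_nogoods * BASE_RIDES * DISCOUNT_PER_RIDE
    # JoGoods also get the discount, but their extra rides add new margin.
    jogood_gain = n_jogoods * (
        EXTRA_RIDES * (MARGIN_PER_RIDE - DISCOUNT_PER_RIDE)
        - BASE_RIDES * DISCOUNT_PER_RIDE
    )
    return jogood_gain - nogood_cost

# With these numbers each JoGood is worth +$14/month and each NoGood
# costs $10/month, so the program breaks even at 1.4 NoGoods per JoGood.
print(monthly_profit(100, 140))   # break-even mix: 0.0
print(monthly_profit(100, 600))   # NoGood-heavy mix: a loss
```

In Lyft’s experiment the mix was far more NoGood-heavy than any such break-even ratio, which is why the membership program lost money.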

  • To successfully scale an idea, you need to deeply understand your target audience across different geographies, cultures, etc. What works for one group may not work for another.

  • Kmart’s Blue Light Special originally succeeded because local store managers could tailor discounted items to their customers’ needs. But corporate dictated uniform discounts nationwide, ignoring local customer differences, which led to failure.

  • Ensure your initial test audience is representative of the larger population you hope to scale to. Selection biases or non-representative samples will skew results.

  • Much early research draws from WEIRD (Western, Educated, Industrialized, Rich, Democratic) subjects. Find ways to include more diverse populations in tests to avoid WEIRD biases.

  • Survey a broad, representative sample before scaling up. Be cautious about generalizing results from small homogeneous groups to a larger heterogeneous population.

  • Know exactly who comprises your target audience. Calculate the tradeoffs between favoring intense users versus casual users when designing offerings to maximize scale.

  • The selection effect occurs when people opt in to programs or studies in a non-random way. This can distort results, because those who opt in are often more motivated or likely to benefit than the general population.

  • For example, insomniacs are more likely to volunteer for a sleep study, so results may look better than they would in the full population.

  • Similarly, focus groups may attract unusually enthusiastic fans, like the McDonald’s focus group that loved the Arch Deluxe burger, which then flopped with most customers.

  • Researchers may also deliberately select participants likely to benefit, to get better results and recognition. This happened with an iron-fortified salt study in India.

  • The best solution is randomized trials, like those done by Opower to test energy conservation programs. But even randomized trials can have site selection bias if the sample isn’t truly representative.

  • Opower’s trials were skewed toward environmentalists. The program worked well there but not in other areas with different values.

  • Failing to understand how the sample differs from the broader population is a key reason results don’t scale up. The Chicago Heights parenting program worked well for Hispanic families but not for Black or white families.

  • Selection bias has shaken foundations of social science. Many findings relied on narrow, unrepresentative samples, especially wealthy college students.


  • Social science experiments have historically relied too much on WEIRD (Western, educated, industrialized, rich, democratic) participants, leading to findings that don’t necessarily apply universally.

  • Joseph Henrich found that an indigenous Amazonian community didn’t punish unfair behavior in an economics game, unlike typical WEIRD participants, raising questions about the universality of research findings.

  • Many findings in psychology and other social sciences may not be scalable or applicable to non-WEIRD populations.

  • Researchers need to use more diverse samples and natural field experiments to uncover hidden differences and generate more universal insights.

  • Businesses should also test ideas on diverse groups and in real-world settings to understand what scales.

  • If your model won’t scale past a point, consider broadening your audience by diversifying offerings, as Gopuff did when expanding beyond college students.

  • Fast food restaurants regularly introduce new menu items to attract different customers and boost scale, like Taco Bell’s Nacho Fries.

  • The key is to uncover and examine heterogeneities early on, not hide them, to generate truly scalable ideas.

  • Jamie Oliver opened his restaurant Jamie’s Italian in the UK in 2008 to great success. He hoped to rapidly expand it into a recession-proof chain.

  • Scaling up a restaurant is typically very difficult because a chef’s unique talents and skills are not easily replicated. However, Jamie’s Italian was able to expand quickly to 70 locations because Oliver’s fame brought people in, and the simple, fresh ingredients meant other chefs could replicate the food.

  • Two key ingredients in Jamie’s initial success were managing director Simon Blagden, who had a skill for choosing the right franchise partners, and Oliver himself, who instilled the chain with his spirit and mission.

  • But as the chain rapidly expanded, Blagden left and Oliver became less involved. New franchise partners cared more about profits than Oliver’s values. As a result, quality declined precipitously.

  • The chain collapsed financially, showing how critical the ‘ingredients’ of leadership and values alignment are when scaling up, even if the core product itself is replicable.

  • Jamie Oliver built a successful restaurant chain called Jamie’s Italian, but it ultimately collapsed due to issues with leadership, quality, and adaptability.

  • Oliver himself did not actually cook at the restaurants but his values and vision shaped them initially. However, his limited time commitment later on led to a leadership void.

  • Oliver made a mistake promoting his unqualified brother-in-law to a high position. This new leader lacked restaurant experience and failed at site selection, retaining employees, and adapting to market changes.

  • The food quality declined over time, leading to terrible reviews and complaints. This is often fatal for a restaurant business.

  • By not understanding the key ingredients of his initial success, Oliver was unable to sustain them as the chain grew. The enterprise failed to maintain fidelity to its non-negotiable elements.

  • The case illustrates the need to determine if your secret sauce relies on people (“chefs”) or scalable systems/products (“ingredients”). Jamie’s Italian needed both but underestimated the importance of the right leaders.

  • To scale successfully, you must identify your negotiable and non-negotiable ingredients and ensure the non-negotiables remain in place. Oliver failed to do this.

  • Patient noncompliance with medical treatments is a major obstacle to scaling effective health solutions. Doctors struggle to get patients to properly follow prescribed treatments.

  • Lack of compliance also hinders the success of policies, programs, and businesses. People need to engage and participate for these to work at scale.

  • Understanding incentives and human behavior is key. Present bias makes people value immediate costs/benefits over future ones.

  • Financial incentives can encourage compliance, if designed correctly. The Chicago Heights curriculum showed paying parents increased involvement.

  • Removing non-negotiable program elements cripples success. A London school implementing the curriculum without parent payments saw poor results.

  • Program drift causes fidelity issues too. Implementers alter core program elements at scale. Head Start’s home visits drifted from the original model when expanded.

  • The more at-risk the recipients, the more quality can slip with program drift. Home visits suffered for the neediest Head Start families.

  • Scaling requires maintaining strict fidelity to non-negotiables. Drift or noncompliance inevitably reduces benefits.

  • Digital technologies like apps and smart devices seem scalable because the code can be infinitely replicated. However, adoption does not equal compliance - just because people use the technology does not mean they will use it as intended.

  • Opower’s smart thermostat was engineered to save energy but failed to deliver savings at scale. The reason was that real-world customers did not use the thermostat optimally - they overrode default settings and went back to old habits. The engineers failed to anticipate how imperfect real humans would interact with the technology.

  • Successful technologies are designed from the start with user behavior in mind. Apple beta tests with real users. Instagram is incredibly simple and intuitive to use.

  • Overall, many voltage drops come from innovative new technologies that seem scalable but fail to account for human imperfection. Compliance challenges need to be addressed in the design phase through imagination and beta testing with real, non-expert users.

  • Ralph Nader’s consumer safety crusade led to regulations like seatbelt requirements, but research later found these did not reduce highway deaths as intended. Drivers felt safer so they took more risks, a phenomenon called the Peltzman effect.

  • Safety measures can create unintended consequences by changing people’s risk tolerance. This underscores the need to consider potential spillover effects when scaling ideas.

  • Spillover effects are unintended impacts of one group’s actions on another that become visible at scale. General equilibrium effects in economics are an example - disrupting one part of a system leads to adjustments elsewhere.

  • Small experiments won’t necessarily reveal spillovers that emerge when scaled. For instance, forcing some students to switch majors won’t show market effects, but large numbers doing so could lower wages in that field.

  • To avoid negative spillovers, we must think in advance about the broader system an innovation operates in, not just the targeted area. Good intentions aren’t enough - we must consider secondary effects.

  • Scaling sustainably requires understanding interdependencies and planning for unintended consequences. With careful systems thinking, we can create positive ripple effects instead of negative spillovers.

  • In the college-major example, steering 50 students into a new major would not hurt their earning power: 50 graduates are too few to disturb the job market’s equilibrium.

  • But small interventions may not predict outcomes at scale. If thousands of students switched into the same field, the increased supply of graduates could lower wages there - a general equilibrium spillover that only becomes visible when the idea is scaled up.

  • The author conducted a field experiment paying solicitors different hourly rates ($10 vs $15) to raise donations. Higher paid solicitors raised more money.

  • In a follow-up experiment, solicitors paid $10 did just as well as those paid $15 when they were unaware of the pay discrepancy. But when mixed together, the $15 solicitors outperformed the $10 ones.

  • This demonstrated “resentful demoralization” - the $10 solicitors worked less hard when they knew others were paid more, even though the absolute pay rate did not affect performance alone.

  • A study at a bank found that employees worked harder after learning their managers earned more than they had thought, but less hard after learning their peers earned more. People resent pay discrepancies with peers more than with superiors.

  • Transparent salary data could thus motivate if revealing higher manager pay, but demoralize if exposing peer inequality, especially in large companies with more peers than managers.

  • More broadly, individual behavior depends partly on social comparisons. Leaders should watch for unintended effects of transparency and other policies on relative perceptions.

  • Previous research showed that preschool programs can have “spillover effects”, positively impacting even those children not directly participating through proximity and social interactions.

  • The author saw this firsthand with the Chicago Heights Early Childhood Center. The curriculum strengthened cognitive and noncognitive skills in the treatment group children. When they played with control group kids, they spread those skills through natural social interactions.

  • There was also a “spillover” amongst parents. Hearing about the program motivated control group parents to find ways to support their kids’ development too.

  • However, there was a negative spillover of less parental attention paid to siblings of kids in the treatment group.

  • Positive spillovers like these are a type of “network effect” where benefits expand as more people join a network or adopt a solution. This applies to businesses like Facebook and public health measures like vaccines.

  • The lesson is to closely watch for unintended spillover effects when scaling up an enterprise or idea. Address negative spillovers and exploit positive ones to unlock the full potential.

  • Arivale was a promising startup that offered personalized health and lifestyle recommendations based on genetic testing and biomarkers. It was founded in 2014 by pioneering scientist Leroy Hood and had raised $50 million in investment.

  • Arivale appeared to check all the boxes for scalability: its services were grounded in scientific evidence, had mass appeal, remained faithful to its core offerings, and avoided negative spillovers. Its future success seemed assured.

  • However, Arivale failed to account for economies of scale - the idea that average costs per unit decrease as production increases. If costs rise as you produce more, you have diseconomies of scale.

  • No matter how strong the idea, if costs outpace returns, the business will lose momentum and fail to scale. This was Arivale’s downfall - despite its scientific rigor and early promise, its costs became unsustainable as it tried to expand.

  • The Arivale case underscores the importance of the fifth vital sign of scalability - sustainable costs. Even ideas that pass the first four hurdles can falter if their economics don’t work at scale.

  • Arivale launched an innovative personalized health and wellness program that used genetic testing, bloodwork, and health coaching. However, the high costs of these services meant Arivale had to charge customers around $3,500 per year.

  • The high price point limited demand and customer growth. Arivale struggled to attract enough new customers at this price level to make the business viable.

  • In 2018, Arivale cut prices to $1,200 per year to try to boost demand. But the lower price was unsustainable due to Arivale’s high operating costs.

  • In 2019, with costs exceeding revenue and unable to continue operating at a loss, Arivale ceased operations. The company failed to achieve the economies of scale needed to make its model financially viable.

  • The lesson is that in scaling a business, it’s not just demand that matters but also what customers will pay versus operating costs. Arivale failed to balance these factors, running out of capital before reaching profitability.
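Arivale’s trap can be sketched with simple unit economics (the figures below are invented for illustration; they are not Arivale’s actual costs). Average cost per customer is fixed costs spread over the customer base plus the per-customer variable cost, and a high-touch variable cost puts a floor under price no matter how large the company grows:

```python
# Hypothetical unit-economics sketch; all figures invented for illustration.

FIXED_COSTS = 5_000_000  # annual overhead, shared across all customers

def avg_cost(n_customers: int, variable_cost_per_customer: float) -> float:
    """Average annual cost of serving one customer."""
    return FIXED_COSTS / n_customers + variable_cost_per_customer

# Software-like product: near-zero marginal cost, so average cost
# collapses toward $20 as the customer base grows (economies of scale).
for n in (1_000, 10_000, 100_000):
    print(f"software-like, {n:>7} customers: ${avg_cost(n, 20):,.0f}")

# High-touch service (lab work and human coaching for each customer, as
# at Arivale): growth dilutes only the fixed costs, so average cost never
# falls below $2,000 per year - and a $1,200 price can never cover it.
for n in (1_000, 10_000, 100_000):
    print(f"high-touch,   {n:>7} customers: ${avg_cost(n, 2_000):,.0f}")
```

The design choice this illustrates: scale only rescues a business whose marginal cost falls with volume; when each additional customer carries a large irreducible cost, no amount of growth fixes the economics.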


  • SpaceX, Blue Origin, and Virgin Galactic are investing billions in space travel technology, but the extreme costs make it a very risky business.

  • They are exploring parallel revenue streams like cargo transport and public-private partnerships to help offset the massive investments required.

  • Scaling space exploration will depend on reducing costs through economies of scale, like SpaceX has done with reusable rockets.

  • Upfront fixed costs are a hurdle for many innovations, requiring sufficient funding until economies of scale can kick in.

  • Software and other startups can reduce upfront costs through equity compensation and other strategies.

  • High upfront costs can enable lower marginal costs later if done right.

  • Nonprofits and social initiatives also face cost obstacles in scaling impact, requiring creative strategies to expand cost-effectively.

  • In the 1950s, polio was a major public health crisis in the U.S., paralyzing tens of thousands of children each year.

  • Jonas Salk developed an effective polio vaccine that was tested on hundreds of thousands of children. The vaccine worked for all children regardless of demographic factors.

  • The polio vaccine was inexpensive to produce and distribute, allowing it to be scaled up across the U.S. By 1979, polio was eliminated in the country.

  • In contrast, a program using expensive hovercrafts to deliver polio vaccines in Zambia did not scale due to prohibitive costs.

  • For social programs to scale, the benefits must outweigh the costs. Governments and organizations favor interventions that provide the most impact per dollar spent.

  • Even successful programs face competition for limited funding. The most “low-voltage” initiatives, with the highest costs and lowest benefits, are the most likely to lose support.

  • A major cost factor as programs scale is human capital. Skilled workers are scarce and retaining them grows more expensive. This was an issue when California and Tennessee tried to scale class size reductions.

  • To make the CHECC curriculum scalable, the planning team used “backward induction” to design it assuming only average teachers would be available, not the top 1%. This resulted in more modest pilot results but set the program up for long-term success.

  • The author conducted an experiment with a loan company to evaluate the character and integrity of loan applicants using a “dropped wallet” test. This aimed to predict who would be a responsible borrower and successful at scaling their business.

  • However, the author argues that focusing too much on an individual’s personality and character is misguided, as it falls into the correspondence bias trap of overestimating personal characteristics and underestimating situational factors.

  • Instead, what matters most for an organization’s success and ability to scale is getting the incentives right to motivate people. With good incentives, even people with flaws can behave with integrity.

  • Incentives are scalable and have an enormous impact on shaping behaviors, unlike relying on specific leadership styles and personalities.

  • The example of Uber shows how a small change in incentives from no tipping to adding a tipping feature changed driver behaviors dramatically.

  • Well-designed incentives are often more powerful than monetary compensation in driving performance. Non-monetary rewards can also effectively incentivize good behaviors.

  • Incentives need to directly reward the desired actions and outcomes. To scale successfully, incentives within an organization need alignment from top to bottom.

  • Uber originally did not allow tipping, in keeping with Travis Kalanick’s goal of providing low-cost rides. However, this led to discontent among drivers who wanted to earn tips like Lyft drivers could.

  • The #DeleteUber campaign and other PR crises highlighted the need to regain driver trust. Allowing tipping was seen as a way to do this.

  • After introducing tipping, Uber found that drivers’ overall wages did not increase because the number of drivers increased, reducing rides per driver.

  • Also, tipping did not improve driver service quality, contrary to expectations.

  • The big surprise was that only 1% of riders tipped every time, 60% never tipped, and 39% tipped sometimes.

  • This was due to the lack of public visibility and social pressure around the tipping decision, illustrating the principle of loss aversion - people avoid potential ‘losses’ like tipping more than they seek equivalent gains.

  • Loss aversion is a key tenet of behavioral economics, pioneered by Kahneman and Tversky, which recognizes irrational biases in human decision making.

  • Social and reputational pressure are powerful incentives for certain behaviors. When Uber removed the social pressure of tipping drivers publicly, tipping rates plummeted.

  • Publicly disclosing information can harness social pressure. A field experiment in the Dominican Republic found that threatening to publicly disclose the names of tax evaders increased tax revenue by over $100 million.

  • The desire to avoid loss of social standing is universal and can be leveraged. An experiment with Virgin Atlantic pilots used monthly fuel efficiency reports as a subtle way to establish a social norm around reducing emissions, without explicit threats or incentives. This nudge led pilots to become more fuel efficient.

  • Social incentives like reputational pressure and norms can be effective and scalable ways to influence behavior, without relying on punishment or rewards. When people believe a behavior is an established norm, they are incentivized to comply to avoid stigma or loss of social standing.

  • Overall, the social loss aversion effect demonstrates how we can tap into people’s natural aversion to loss of reputation or social status as a tool for positive change. Subtle nudges leveraging social norms can lead to significant behavioral shifts.

  • A study with Virgin Atlantic tested different messages to encourage pilots to conserve fuel. The most effective nudge was sending pilots a letter with fuel efficiency targets and words of encouragement, which improved performance by up to 28%.

  • The study shows how social incentives that tap into people’s desire to view themselves in a positive light and meet social norms/expectations can be effective at changing behavior. These incentives easily meet the Five Vital Signs for scaling.

  • Similar social comparison messaging has worked to reduce energy consumption in households and increase adoption of energy-efficient technology like compact fluorescent lightbulbs. The effect often persists even after the messaging stops.

  • Social incentives like these are highly scalable because human psychology is similar across groups, so the incentive works for most people. They are also cheaper than financial incentives which must be continually increased.

  • Social incentives can be used in many contexts like companies, healthcare, education, and voting, where reporting one’s behavior incentivizes people to make more prosocial choices to preserve their self-image.

  • Financial incentives can be used more creatively than just paying people more money to get more performance. The “clawback approach” involves giving people a bonus upfront and then taking it back if they don’t meet performance goals. This taps into people’s aversion to losing something they already have.

  • The clawback approach was tested at a factory in China and increased productivity by over 1%. At a bean sorting facility in Uganda it increased productivity by 20%.

  • The clawback approach leverages the psychological bias known as the endowment effect - people value something more once they feel ownership of it. It also uses framing effects - people are more motivated to avoid losing something than to gain something new.

  • The clawback method works across different cultures and situations. However, people who frequently trade assets show less susceptibility to loss aversion, so there are limits.

  • For ethical implementation, companies must set realistic goals and be prepared to pay out bonuses even if many employees earn them.

  • Beyond business, clawbacks can also be effective for nonprofits and in education. Financial incentives don’t have to be large to work.

  • The author and fellow economists conducted a field experiment with teachers in Chicago Heights to test the effectiveness of a “clawback” incentive.

  • 150 teachers were divided into two groups. The “reward” group could earn bonuses up to $8,000 based on student test score improvements. The “loss” group was given $4,000 upfront and had to return money if scores were below average.
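
The two contracts described above are designed to be (roughly) equivalent in money terms and to differ mainly in framing. The sketch below makes that equivalence concrete; the linear, capped payout rule and the 0–1 performance scale are illustrative assumptions, not the study's actual formula.

```python
def payout(score_gain):
    # Shared payout rule (an assumption for illustration): linear in a
    # 0-1 performance measure, capped at the $8,000 maximum bonus.
    return 8000 * min(max(score_gain, 0.0), 1.0)

def reward_group_transfers(score_gain):
    # "Gain" frame: nothing up front, the whole bonus at year's end.
    return 0, payout(score_gain)

def loss_group_transfers(score_gain):
    # "Loss" frame: $4,000 paid up front; the year-end settlement
    # claws money back (negative) or tops it up to the same total.
    upfront = 4000
    return upfront, payout(score_gain) - upfront

# Same total money in every scenario -- only the framing differs.
for gain in (0.0, 0.5, 1.0):
    assert sum(reward_group_transfers(gain)) == sum(loss_group_transfers(gain))
```

A fully rational teacher would treat the two contracts identically; loss aversion predicts, and the results confirmed, that the clawback frame motivates more.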

  • The loss group saw huge gains in test scores, suggesting the clawback incentive was very effective. The improved performance lasted 5 years after the experiment.

  • A similar experiment with students found financial and non-financial rewards improved test scores, but only if given immediately. Delaying rewards even one month eliminated the motivation effect.

  • List argues clawbacks and immediate rewards can boost motivation in areas like education where intrinsic motivation is low. While some criticize rewards as diminishing intrinsic motivation, in some contexts they can build it by showing the satisfaction of hard work.

  • The clawback approach could be applied to other public/nonprofit sectors like social work and policing if funds are available. Incentives must be implemented responsibly and fairly to benefit all. The goal should be generating gains, not making people feel vulnerable.

  • To maximize impact with limited resources, we need to think differently - think along the margins.

  • The author was offered a position in the White House Council of Economic Advisers, despite concerns about his political leanings. The chair, Glenn Hubbard, said they wanted his economic expertise, not his politics.

  • The job involved conducting benefit-cost analyses of new policies and regulations. This involves weighing the benefits against the costs to determine if a policy makes economic sense.

  • The author sees policymaking as a “gigantic field experiment” to test ideas and track their effects. He believes rigorous benefit-cost analysis can help government spend money more effectively to improve lives.

  • However, measuring benefits and costs of policies on a large scale is tricky. It involves difficult tradeoffs on where to allocate limited resources for maximum impact.

  • The author noticed that not all dollars spent on a policy deliver equal value. The first dollars spent often yield greater benefits than the dollars that follow.

  • He realized policymakers needed to examine the benefits of the marginal or last dollar spent, not just average benefit per dollar, to properly prioritize spending for maximum impact.

  • This insight on marginal versus average benefits could help agencies like the EPA better allocate resources to policies with the highest returns.

  • The Marginal Revolution in economics in the late 19th century introduced the concept of marginal utility, which recognizes that the value of each additional unit of a good or service declines as more units are consumed.

  • Marginal utility helps explain the value of goods and services based on the satisfaction provided by the last or marginal unit consumed.

  • The law of diminishing marginal utility states that the marginal utility tends to decrease as consumption increases.

  • Marginal analysis is useful for allocating resources efficiently by comparing the returns on the last unit of investment across different options.

  • However, people tend to rely on averages rather than think marginally, which can lead to inefficient decision making.

  • Applying marginal analysis to government spending could help identify the point where programs start yielding diminishing returns, allowing funds to be reallocated more efficiently.
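
The reallocation logic above can be sketched as a greedy loop: fund whichever option's next dollar returns the most, which automatically stops funding a program once its marginal return falls below an alternative's. The channel names and the diminishing-returns curve here are hypothetical illustrations.

```python
def marginal_return(base, units_already_spent):
    # Hypothetical diminishing-returns curve: each extra unit of
    # spend returns less than the one before it.
    return base / (1 + units_already_spent)

def allocate(budget_units, channel_bases):
    # Greedy equimarginal allocation: give each budget unit to the
    # channel whose NEXT unit currently returns the most.
    spend = {name: 0 for name in channel_bases}
    for _ in range(budget_units):
        best = max(channel_bases,
                   key=lambda n: marginal_return(channel_bases[n], spend[n]))
        spend[best] += 1
    return spend

# "search" starts out far more productive than "social", so it absorbs
# most of the budget -- but not all of it. Once its marginal return
# drops to social's level, the remaining dollars flow there.
print(allocate(10, {"search": 8.0, "social": 2.0}))  # → {'search': 8, 'social': 2}
```

Allocating by *average* return instead would have put all ten units into "search", even though its last few dollars return less than "social"'s first dollar.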

  • But bureaucratic realities make it difficult to take money away from one program and redirect it based on marginal utility, as agencies are incentivized to maximize their budgets rather than efficiency.

  • The author believes that marginal thinking - optimizing the impact of each additional dollar spent - is critical for efficiency, but this approach is often lacking in government bureaucracies.

  • When the author joined Lyft as chief economist, he realized the company was not allocating resources based on marginal returns either. Some marketing channels like Facebook ads had much lower returns on marginal spend compared to others like Google ads.

  • The author wrote a company-wide memo applying marginal thinking, nicknamed “Adam Smith Visits Lyft,” arguing that Lyft should equalize marginal returns across all spending to maximize growth. This memo became very influential at Lyft.

  • Marginal thinking was then used for cost-cutting during COVID-19 and reallocating spend as demand recovered. The positive culture at Lyft enabled adoption of these changes.

  • The author argues marginal thinking can work in any organization, not just tech companies, though some may need to experiment more than others to get the data. He provides an example from his summer job in high school of how marginal thinking revealed inefficiencies.

  • Voltage drops often occur when a company scales up because it fails to think on the margins. The Wisconsin Cheeseman is an example: hiring more workers did not lead to proportionately more output because the company budgeted based on average productivity rather than marginal productivity.

  • To avoid voltage drops, you need to look for weak spots where marginal thinking is not being applied, such as areas with many levers like marketing strategies or productivity improvements. Compare granular data across situations to see where marginal returns are diminishing.

  • Experimentation is key - try different combinations of levers and compare results across regions or groups to optimize resource allocation. Some marginal benefits are intangible and hard to quantify, but still important.

  • Avoid the sunk cost fallacy of letting past investments or mistakes influence future decisions. Rational decisions should be based only on the expected returns of the next dollar spent. Emotions often lead us to irrationally commit to sunk costs.

  • Be willing to change course and leave past mistakes behind. The bygones principle says only future returns matter now. Redirect resources to where you are getting the highest marginal returns.

  • John List was a talented high school golfer who earned a spot playing golf in college. He dreamed of becoming a professional golfer.

  • During a visit home from college, List played golf with some other talented college golfers. He realized they had surpassed his skill level and he likely didn’t have what it takes to become a pro.

  • List did thorough research comparing his scores to the other golfers over time. The data confirmed he wasn’t good enough to make it onto the pro tour.

  • Despite his dream and the cultural message to never quit, List made the difficult decision to give up on becoming a pro golfer. The evidence showed his talent wouldn’t scale to the highest levels.

  • List argues that sometimes quitting is important to avoid further losses and free up time and resources to pursue more promising opportunities. Winners do quit things that aren’t working.

  • He applies this lesson to business - when data shows certain products or initiatives aren’t scaling, successful leaders must be willing to quit them rather than hang on due to sunk costs.

In summary, the passage argues that quitting can be a sign of wisdom and strength rather than weakness. When the data shows you won’t achieve your goal, cutting losses to pursue something new is often the best path forward.

  • Achieving great things often requires quitting pursuits that are going nowhere. Knowing when to quit allows you to redirect efforts to endeavors where you can make a bigger contribution.

  • Opportunity cost - the potential gains missed by choosing one option over another - is a crucial concept. When resources like time and money are limited, not evaluating opportunity costs means squandering impact.

  • As organizations and ideas scale, opportunity costs grow. More time and money get invested, so it’s important to quit doomed efforts early before too much is sacrificed.

  • When making decisions, we often neglect opportunity costs and focus just on the options in front of us. But evaluating alternatives, even ones not explicitly presented, is key to maximizing impact.

  • Time is a precious, limited resource like money. Wasted time means lost potential, so when scaling efforts, regularly reevaluating if time is being used in the best way is essential.

  • Knowing when to quit comes down to honestly assessing if you have the ability to succeed in an endeavor. Scalable skills matched with passion indicate good odds of making an impact.

In summary, getting good at quitting unscalable pursuits early allows redirecting time, money and talent to efforts with greater potential payoff. This thoughtful quitting can unlock bigger contributions to society.

  • Pursuing goals requires sacrifice, especially the opportunity cost of other paths not taken. It is devastating when ideas fail after much time and effort is invested.

  • “Optimal quitting” means leaving things behind at the right time to move on to better opportunities. This minimizes opportunity cost.

  • Rather than fixating on a struggling idea, imagine other worthwhile pursuits. Having alternatives makes quitting less painful.

  • Quitting early is ideal, before too much opportunity cost accumulates. This doesn’t mean lacking grit - it enables starting fresh.

  • Diminishing returns indicate it may be time to quit or pivot. Also consider if you are the right person to scale an idea.

  • The concept of comparative advantage means focusing your efforts where you have an edge. Don’t persist in goals you are ill-suited for.

  • Apply this principle to careers, causes, businesses - do what you excel at. But we often dedicate ourselves to goals we are less likely to succeed at.

  • Knowing when to quit an unscalable idea or endeavor is difficult but important. Though we have the rational tools to make this assessment, emotions like regret over sunk costs and fear of the unknown often prevent us from quitting things we should.

  • To determine if an idea is scalable, ask if you have a true comparative advantage in the area and if there is substantial market demand for it. If not, it may be time to pivot or quit entirely.

  • People are often happier after making a major change like quitting a job or ending a relationship, even though the uncertainty of change can be daunting. We tend to prefer the status quo due to ambiguity aversion.

  • Since you can never know the counterfactual of what would have happened if you quit sooner, it’s important to watch what happens when others pursue opportunities you passed up. This helps reveal if your assessments were wrong.

  • Time is precious and quitting unscalable ideas frees up time and resources to find scalable opportunities where you do have comparative advantage. As Thomas Edison demonstrated, perseverance requires giving up on ideas that won’t work to find the ones that will.

  • The fishing villages of Cabuçu and Santo Estêvão in Brazil demonstrate how different workplace cultures can develop based on the nature of the work. Cabuçu fishermen work collaboratively in teams, while Santo Estêvão fishermen work independently.

  • Field experiments conducted in these villages revealed that the cooperative culture of Cabuçu led to greater trust, generosity, and concern for the collective good compared to the more individualistic culture of Santo Estêvão. This suggests that cooperative teamwork can foster prosocial values that extend beyond the workplace.

  • Workplace culture impacts behaviors and norms like trust vs distrust, cooperation vs individualism, fear vs security. Culture defines an organization, influencing how work gets done and the values underlying the work.

  • Some workplace cultures thrive as organizations scale, while others self-destruct if the early-stage culture is mismatched to the needs of a larger organization.

  • Travis Kalanick built a meritocratic but abrasive culture at Uber that ultimately backfired as the company scaled. It led to unethical decisions and harassment issues.

  • Scaling culture requires balancing meritocracy and humility, promoting psychological safety, and having systems to course-correct cultural problems before they become toxic. Diverse perspectives and open communication are key.

  • In early 2017, a series of scandals rocked Uber and revealed major problems with its workplace culture fostered by CEO Travis Kalanick. These included allegations of sexual harassment, a lawsuit over trade secrets theft, a video of Kalanick berating a driver, and the use of software to evade authorities.

  • As an advisor who only spent a few days per month at headquarters, the author did not directly experience Uber’s toxic culture. Kalanick seemed decent in their interactions.

  • However, the author realized Uber’s combative, gladiatorial meeting culture did not scale well as the company grew. Though it drove early success, it silenced employees and led to high turnover.

  • Kalanick believed in ruthless meritocracy, where the best ideas win through rigorous debate. But this distorted true meritocracy, as privilege and politics still played a role. People got “run over” verbally in meetings.

  • This culture hurt deep thinkers, introverts, and non-confrontational people. As Uber grew, more people were affected and left. It became a repellent to potential hires.

  • Kalanick later acknowledged he favored logic over empathy and didn’t ensure the right team dynamics. A small company can get away with an aggressive culture, but it does not scale up well.

  • Uber’s culture valued individual meritocracy over teamwork and cooperation. This led to toxicity and distrust as the company scaled up.

  • In contrast, the fishermen of Cabuçu valued trust, generosity, inclusivity, and cooperation. This created social cohesion that guided behavior.

  • Meritocracy encourages individual gains over collective ones. It does not promote the collaboration that becomes critical as a company grows.

  • At scale, the opportunity cost of not collaborating increases. There are more potential partnerships within a large organization.

  • Coopetition (a mix of cooperation and competition) can drive performance improvements. Rewards based on individual and team success facilitate knowledge sharing.

  • Netflix has a culture centered on trust and freedom/responsibility, not micromanaging. This shows trust and cooperation can coexist with high performance.

  • Leadership must put people first and institute cultural values like trust, generosity, and teamwork from the start, not as an afterthought. These values must scale up with the business.

In summary, cultures that value trust, inclusivity and teamwork are more scalable than rigid meritocracies. Leaders must prioritize cooperative values from the outset.

  • Equity-choice compensation policies, where employees choose how much of their paycheck to take as equity, foster a culture of trust and alignment with company success. These policies generally work well, though occasional small problems may arise. High-trust cultures attract like-minded workers and repel those who don’t fit the culture.

  • To build a cooperative yet high-performing culture, incorporate teamwork into the organizational structure. For example, have each employee belong to at least two different teams, ideally in different departments. This promotes collaboration and idea cross-fertilization.

  • Achieving diversity at scale in recruiting can be challenging. EEO statements can surprisingly backfire by triggering concerns about tokenism among minority applicants. Showing that diversity is lived, not just stated, is key. Including certain non-diversity information in job ads can also help attract diversity.

  • CSR and social responsibility, while sincere in many cases, are often used for marketing. However, they don’t boost sales much. But highlighting CSR in job ads can signal a prosocial, inclusive culture and aid recruiting top talent, rather than backfiring like EEO statements.

In summary, culture and organizational practices that foster trust, alignment, cooperation, diversity and social responsibility are key to attracting and retaining top talent at scale. Virtue signaling through boilerplate EEO statements can backfire, while signals of a prosocial culture, such as CSR commitments, can aid diversity.

  • Conducted a field experiment posting job ads with and without EEO (equal employment opportunity) language to see the effect on applicant pool diversity. Found that explicitly stating EEO decreased applications from minorities, indicating it can inadvertently signal low diversity.

  • Conducted another field experiment posting job ads with and without language about the company’s commitment to corporate social responsibility (CSR). The CSR language increased applicant pool diversity by 25% and resulted in more productive hires, showing it attracts better candidates.

  • However, in an experiment with existing employees, introducing CSR messaging led to increased misbehavior like cheating, likely due to “moral licensing.” This shows CSR must be applied carefully when directed at current employees.

  • Another experiment found that stating in job ads that salary is negotiable increases applications from women and leads them to negotiate salaries as readily as men do. This simple change can improve gender pay equity.

  • Research shows using longer shortlists for hiring increases diversity. Deepening the pool of final candidates gives more diverse applicants a chance.

  • When trust is violated, apologies can help but must be done correctly. Apologies should be delivered promptly, take responsibility, promise forbearance, and offer repair. Insincere or poorly timed apologies can backfire.

  • The author recounts an experience of being late to give a conference talk due to a bad Uber ride, where the driver got lost.

  • He called Travis Kalanick to complain, and Travis suggested he take matters into his own hands to determine if bad Uber rides were a widespread issue hurting the company’s reputation.

  • The author ran experiments analyzing rider data to see the impact of bad rides on future Uber usage. He found they decreased usage by 5-10% over the next 90 days, costing millions in lost revenue.

  • Additional experiments tested different types of apologies to riders after bad rides. The most effective apologies acknowledged responsibility and included a small ($5) coupon.

  • However, too many apologies for multiple issues can backfire and be worse than no apology. Apologies should be used strategically after one-off bad experiences.

  • The implications are that companies need to demonstrate a willingness to make material sacrifices, like financial compensation, to win back customer trust after mistakes.

  • Workplace culture and ethics influence society more broadly. The story of the Brazilian fishermen shows how cooperative values at work can spread to strengthen communities.

  • As we saw with the pandemic, effective scaling has huge societal importance. Some aspects of the pandemic response like testing scaled well over time, while other areas like equitable vaccine distribution struggled.

Here is a summary of the key points from the conclusion:

  • Scaling any initiative inevitably reveals weaknesses and problems. The COVID-19 pandemic response provides many examples of this, from issues with contact tracing and testing to problems distributing vaccines.

  • There are two key lessons: 1) Any weaknesses in an idea or system will be exposed when scaled. 2) Scalable solutions remain our most valuable resource for tackling big problems.

  • The “Anna Karenina principle” applies - successful ideas avoid all possible pitfalls, while failed ones have at least one flaw. Scaling is a “weakest link” problem.

  • You can improve the odds of successful scaling by using the right incentives, marginal thinking, opportunity cost analysis, comparative advantage, and building trust.

  • Scaling principles apply whether you’re an entrepreneur, artist, policymaker, or parent. Not everyone needs or wants to scale big.

  • Policymakers should focus on evidence-based policy grounded in scientifically rigorous data, not on hopes and sunk costs. Abandon what doesn’t work.

  • Data scientists are an untapped resource for scaling evidence-based programs and policies. Learn from failures.

  • There is only one way to truly change the world - at scale. But scale with care, using the lessons in this book.

Here is a summary of the key points about scaling success from the Introduction and Chapter 1 of The Voltage Effect:

Introduction

  • Many programs that show promising results in small trials fail to replicate or scale up successfully. This is known as the “voltage drop” problem.

  • Scaling success requires going beyond proper program evaluation to understand the behavioral science behind what motivates people to participate and organizations to implement programs.

  • The book provides a framework and real world examples of how to scale up programs in education, health, government, and business.

Chapter 1: Dupers and False Positives

  • D.A.R.E. is an example of a well-intentioned program that failed to scale up successfully. Small trials showed it reduced drug use, but nationwide studies found it was ineffective.

  • The program failed to scale because it was not properly tailored to the target audience. It focused on scaring kids rather than empowering them.

  • Scaling requires understanding small behavioral factors that motivate participation, like loss aversion and social incentives. It also requires rigorous evaluation.

  • Proper evaluation is needed to avoid “dupers” who exaggerate results and “false positives” where small trials misrepresent true impact. Evaluation must be ongoing.

  • The Voltage Effect framework provides behavioral science insights and rigorous evaluation methods to help programs scale up successfully.

Here are the full references for the in-text citations in the provided text:

Chapter 1: Dupers and False Positives

Yet numerous scientific analyses: Susan T. Ennett, Nancy S. Tobler, Christopher L. Ringwalt, and Robert L. Flewelling, “How Effective Is Drug Abuse Resistance Education? A Meta-Analysis of Project DARE Outcome Evaluations,” American Journal of Public Health 84, no. 9 (1994): 1394–1401.

Studies suggested that employee wellness programs: T. DeGroot and D. S. Kiker, “A Meta-analysis of the Non-monetary Effects of Employee Health Management Programs,” Human Resource Management 42 (2003): 53–69.

it’s the worst form of government: Richard Langworth, Churchill by Himself: The Definitive Collection of Quotations (New York: PublicAffairs, 2011), 573.

“Judgment Under Uncertainty”: Amos Tversky and Daniel Kahneman, “Judgment Under Uncertainty: Heuristics and Biases,” Science 185, no. 4157 (1974): 1124–1131, doi:10.1126/science.185.4157.1124.

Thinking, Fast and Slow: Daniel Kahneman, Thinking, Fast and Slow (New York: Farrar, Straus and Giroux, 2011).

Predictably Irrational: Dan Ariely, Predictably Irrational: The Hidden Forces That Shape Our Decisions (New York: Harper Collins, 2008).

The Undoing Project: Michael Lewis, The Undoing Project: A Friendship That Changed Our Minds (New York: W. W. Norton, 2017).

confirmation bias prevents us: E. Jonas, S. Schulz-Hardt, D. Frey, and N. Thelen, “Confirmation Bias in Sequential Information Search After Preliminary Decisions: An Expansion of Dissonance Theoretical Research on Selective Exposure to Information,” Journal of Personality and Social Psychology 80, no. 4 (2001): 557–571; P. C. Wason, “On the Failure to Eliminate Hypotheses in a Conceptual Task,” Quarterly Journal of Experimental Psychology 12, no. 3 (1960): 129–140; P. C. Wason, “Reasoning About a Rule,” Quarterly Journal of Experimental Psychology 20 (1968): 273–281; R. E. Kleck and J. Wheaton, “Dogmatism and Responses to Opinion-Consistent and Opinion-Inconsistent Information,” Journal of Personality and Social Psychology 5, no. 2 (1967): 249–252.

This is because science has taught us: Daniel Kahneman and Amos Tversky, “Subjective Probability: A Judgment of Representativeness,” Cognitive Psychology 3, no. 3 (1972): 430–454; Tversky and Kahneman, “Judgment Under Uncertainty”; Ariely, Predictably Irrational; Thomas Gilovich, Dale Griffin, and Daniel Kahneman, Heuristics and Biases: The Psychology of Intuitive Judgment (New York: Cambridge University Press, 2002).

The British psychologist Peter Wason’s: Wason, “Reasoning About a Rule.”

In 1951, the pioneering social psychologist: Solomon E. Asch, “Effects of Group Pressure upon the Modification and Distortion of Judgments,” in Groups, Leadership and Men: Research in Human Relations, edited by Mary Henle (Berkeley: University of California Press, 1961).

the top-selling basketball jerseys: Interbasket, “The Best NBA Jerseys of All-Time,” n.d., https://www.interbasket.net/jerseys/nba/best-selling/, accessed May 10, 2021.

the widespread acceptance: Lawrence Cohen and Henry Rothschild, “The Bandwagons of Medicine,” Perspectives in Biology and Medicine 22, no. 4 (1979): 531–538, doi:10.1353/pbm.1979.0037.

oftentimes pays more: Barry Lind and Charles R. Plott, “The Winner’s Curse: Experiments with Buyers and with Sellers,” American Economic Review 81, no. 1 (1991): 335–346.

she explained that he had poured milk: R. A. Fisher, The Design of Experiments (Edinburgh: Oliver and Boyd, 1942); David Salsburg, The Lady Tasting Tea: How Statistics Revolutionized Science in the Twentieth Century (New York: Holt Paperbacks, 2002).

medical errors: M. A. Makary and M. Daniel, “Medical Error—the Third Leading Cause of Death in the US,” BMJ 353 (2016): i2139, doi:10.1136/bmj.i2139.

Several years ago: Janette Kettmann Klingner, Sharon Vaughn, and Jeanne Shay Schumm, “Collaborative Strategic Reading During Social Studies in Heterogeneous Fourth-Grade Classrooms,” Elementary School Journal 99, no. 1 (1998).

only to fail miserably: John Hitchcock, Joseph Dimino, Anja Kurki, Chuck Wilkins, and Russell Gersten, “The Impact of Collaborative Strategic Reading on the Reading Comprehension of Grade 5 Students in Linguistically Diverse Schools,” U.S. Department of Education, 2011, https://files.eric.ed.gov/fulltext/ED517770.pdf.

This emerging pattern even spurred one psychologist: Open Science Collaboration, “Estimating the Reproducibility of Psychological Science,” Science 349, no. 6251 (2015), http://doi.org/10.1126/science.aac4716.

“file drawer problem”: Eliot Abrams, Jonathan Libgober, and John A. List, “Research Registries: Facts, Myths, and Possible Improvements,” NBER Working Paper, 2020, http://doi.org/10.3386/w27250.

that grocery shopping: Aner Tal and Brian Wansink, “Fattening Fasting: Hungry Grocery Shoppers Buy More Calories, Not More Food,” JAMA Internal Medicine 173, no. 12 (2013): 1146–1148, http://doi.org/10.1001/jamainternmed.2013.650.

eating from a bigger bowl: Brian Wansink and Matthew M. Cheney, “Super Bowls: Serving Bowl Size and Food Consumption,” JAMA 293, no. 14 (2005): 1727–1728, http://doi.org/10.1001/jama.293.14.1727.

the classic cookbook The Joy of Cooking: Brian Wansink and Collin R. Payne, “The Joy of Cooking Too Much: 70 Years of Calorie Increases in Classic Recipes,” Annals of Internal Medicine 150, no. 4 (2009).

As of this writing, nineteen of: Retraction Watch, http://retractiondatabase.org/, accessed May 11, 2021.

In 2018, the Journal of the American Medical Association: “JAMA Network Retracts 6 Articles,” September 19, 2018, https://media.jamanetwork.com/news-item/jama-network-retracts-6-articles-that-included-dr-brian-wansink-as-author/.

Cornell launched an investigation: Michael I. Kotlikoff, “Cornell University Statements,” September 20, 2018, https://statements.cornell.edu/2018/20180920-statement-provost-michael-kotlikoff.cfm.

Unfortunately, such behavior is more common: J. List, C. Bailey, P. Euzent, and T. Martin, “Academic Economists Behaving Badly? A Survey on Three Areas of Unethical Behavior,” Economic Inquiry 39 (2001): 162–170.

Of course, we now know: Securities and Exchange Commission vs. Elizabeth Holmes and Theranos, Inc., 5:18-cv-01602, United States District Court, Northern District of California San Jose Division, March 14, 2018, https://www.sec.gov/litigation/complaints/2018/comp-pr2018-41-theranos-holmes.pdf.

At one point: Matthew Herper, “From $4.5 Billion to Nothing: Forbes Revises Estimated Net Worth of Theranos Founder Elizabeth Holmes,” Forbes, June 1, 2016, https://www.forbes.com/sites/matthewherper/2016/06/01/from-4-5-billion-to-nothing-forbes-revises-estimated-net-worth-of-theranos-founder-elizabeth-holmes/.

then threatened with lawsuits: Taylor Dunn, Victoria Thompson, and Rebecca Jarvis, “Theranos Whistleblowers Filed Complaints out of Fear of Patients’ Health,” ABC News, March 13, 2019, https://abcnews.go.com/Business/theranos-whistleblowers-filed-complaints-fear-patients-health-started/story?id=61030212.

Chapter 2: Know Your Audience

for instance, its net earnings would top: “Costco Wholesale Corp.,” MarketWatch, https://www.marketwatch.com/investing/stock/cost/financials, accessed 2021.

paper on “two-part tariffs”: W. Arthur Lewis, “The Two-Part Tariff,” Economica 8, no. 31 (1941): 249–270, http://doi.org/10.2307/2549332.

Ashley Madison: Dean Takahashi, “Ashley Madison ‘Married Dating’ Site Grew to 70 Million Users in 2020,” Venture Beat, February 25, 2021, https://venturebeat.com/2021/02/25/ashley-madison-married-dating-site-grew-to-70-million-users-in-2020/.

In the mid-1990s, McDonald’s: Tabitha Jean Naylor, “McDonald’s Arch Deluxe and Its Fall from Grace,” Yahoo, August 13, 2014, https://finance.yahoo.com/news/mcdonalds-arch-deluxe-fall-grace-190417958.html.

However, it turned out that the fortified salt: Abhijit Banerjee, Sharon Barnhardt, and Esther Duflo, “Can Iron-Fortified Salt Control Anemia? Evidence from Two Experiments in Rural Bihar,” Journal of Development Economics 133 (2018): 127–146.

the Nurse-Family Partnership: David L. Olds, Peggy L. Hill, Ruth O’Brien, David Racine, and Pat Moritz, “Taking Preventive Intervention for Maternal Depression to Scale: A Cluster Randomized Controlled Trial,” Pediatrics 143, no. 4 (2019), https://doi.org/10.1542/peds.2018-2905.

Here are the key points and references for several additional passages:

Passage 1:

Summary: The Nurse-Family Partnership program provides nurse home visits to low-income first-time mothers. Studies found the program highly effective at small scale, but quality and outcomes declined as it scaled up nationally.

Reference: D. L. Olds et al., “Home Visiting by Paraprofessionals and by Nurses: A Randomized, Controlled Trial,” Pediatrics 110, no. 3 (2002): 486–496.

Passage 2:

Summary: Jamie Oliver’s restaurant chain grew rapidly and succeeded at first, but eventually failed due to a decline in food quality and service as the chain overexpanded.

Reference: A. Tsang, “Jamie Oliver’s U.K. Restaurants Declare Bankruptcy,” New York Times, May 21, 2019, https://www.nytimes.com/2019/05/21/business/jamie-oliver-uk-restaurants-bankruptcy-administration.html.

Passage 3:

Summary: Ralph Nader’s book Unsafe at Any Speed sparked regulations that improved car safety, but had the unintended consequence of encouraging riskier driving behavior.

Reference: Ralph Nader, Unsafe at Any Speed: The Designed-In Dangers of the American Automobile (Grossman, 1965).

Passage 4:

Summary: Providing cash transfers to poor households in Kenya raised incomes but also drove up prices, eroding some of the value of the transfers.

Reference: D. Egger, J. Haushofer, E. Miguel, P. Niehaus, and M. W. Walker, “General Equilibrium Effects of Cash Transfers: Experimental Evidence from Kenya,” NBER Working Paper no. 26600, 2019.


Here are the key points from the chapter “Quitting Is for Winners”:

  • Quitting is often seen as a sign of failure or weakness, but it can sometimes be the best option and lead to better outcomes.

  • Knowing when to quit something requires weighing the costs and benefits - if the costs outweigh the benefits, it may be time to quit. This is the “quitting threshold.”

  • People often stay in unfulfilling jobs or relationships longer than they should due to loss aversion, sunk costs, and the endowment effect. Overcoming these biases can help people quit sooner when needed.

  • Organizations and systems could be designed to make quitting easier, such as with low-stakes trials, opt-out defaults, and clearly defined exit points. This can lead to better matches between people and jobs, schools, etc.

  • There are strategies for quitting more optimally, such as setting guideposts for when you’ll reevaluate, and having alternative options lined up. A “quitter’s mindset” involves being open to quitting as a valid option.

  • Sometimes persistence pays off, but other times it makes sense to quit based on new info. Having a balanced view, rather than always persisting or always quitting easily, is ideal.
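The “quitting threshold” described above is a simple forward-looking comparison: stay only if the net value of staying beats the best alternative. A minimal sketch, with hypothetical numbers and a function name of my own invention (not from the book):

```python
def should_quit(expected_benefit, expected_cost, opportunity_cost):
    """Forward-looking quit rule: compare the net value of staying
    against the value of the best alternative. Sunk costs are
    deliberately ignored; only future payoffs matter."""
    net_value_of_staying = expected_benefit - expected_cost
    return net_value_of_staying < opportunity_cost

# Staying yields 10 - 4 = 6, but the best alternative is worth 8,
# so the rule says quit.
print(should_quit(expected_benefit=10, expected_cost=4, opportunity_cost=8))  # True
```

Note what is absent: any resources already invested. That omission is the point of the chapter, since sunk costs and loss aversion are exactly what keep people below their quitting threshold.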

  • Academic All-American athlete: I was awarded plaques for being an Academic All-American athlete in college.

  • Famous Vince Lombardi quote: The quote “Winning isn’t everything, it’s the only thing” is commonly attributed to famous football coach Vince Lombardi, though the attribution is questionable.

  • Influential psychology research: Research by Shane Frederick and others found that people tend to neglect opportunity costs when making decisions.

  • Magnifying the pleasure and minimizing the pain: Wilson et al. found that people tend to magnify the pleasure and minimize the pain of future events when making affective forecasts.

  • Policymakers neglect opportunity costs: Research by Persson and Tinghög suggests policymakers often neglect opportunity costs in public policy decisions.

  • Focusing effect experiment: An experiment by Legrenzi et al. in the 1990s found that people focus too much on what they might gain in a decision.

  • Developing opportunity cost consideration: Frederick et al. recommend actively considering opportunity costs as a practice to improve decision making.

  • Example of celebrating failure: Astro Teller of X encourages celebrating failure as a cultural practice at the innovation lab.

  • Netflix reversing decision: In 2011 Netflix reversed its decision to split its streaming and DVD businesses after customer backlash.

  • Grit enables overcoming failures: Duckworth and Quinn found that grit can enable perseverance despite setbacks and failures.

  • Adam Smith’s classic examples: In The Wealth of Nations, Smith uses woolen coats and diamonds as classic examples of commercial goods.

  • CEO stepping down to focus elsewhere: Some tech startup CEOs like Twitter’s Jack Dorsey have stepped down to focus on other areas.

  • PayPal Mafia’s approach: The so-called PayPal Mafia of founders took risks and weren’t afraid to fail with new startup ideas after selling PayPal.

  • Levitt and Dubner coin flip idea: In Freakonomics, Levitt and Dubner proposed letting a coin flip decide between two equally good options.

  • Reid Hoffman on intelligent failure: Hoffman advocates pursuing “intelligent failures” by trying new ideas that have a chance to succeed but also teach you lessons if they fail.

Here are the key points from the book The Honest Truth About Dishonesty by Dan Ariely:

  • Humans are capable of cheating and unethical behavior, even if they consider themselves honest people. Ariely conducted experiments showing people cheat just a little to benefit themselves.

  • Factors like creativity, active listening, and critical thinking can curb dishonesty by engaging moral awareness. Strict rules and oversight have limited effectiveness.

  • Dishonesty increases when people can rationalize it and maintain a positive self-image. Conflicts of interest, such as financial incentives, can promote dishonesty.

  • People cheat more when cheating helps others too, such as benefiting a team. Loyalty and shared identity promote dishonesty.

  • Small factors in situations can greatly affect (dis)honest behavior, like lighting and crowdedness. People cheat more when situations are ambiguous.

  • Organizational culture powerfully influences conduct. Cultures valuing integrity curb cheating; competitive, aggressive cultures promote dishonesty. Leaders shape culture.

  • Dishonesty harms relationships, undermines morale, decreases productivity, and incurs hidden costs. Honesty and transparency build trust and well-being.

  • Well-designed policies, incentives, and procedures can encourage honesty, as can broader education and emphasizing decision impacts. But some dishonesty persists.

Here is a summary of the key points about customer surveys:

  • Customer surveys are an important tool for businesses to gather feedback and insights from their customers.

  • Well-designed surveys can provide valuable information about customers’ satisfaction, needs, preferences, and more.

  • However, surveys also have limitations - they rely on customers’ self-reported data, may suffer from bias, and can be influenced by how questions are framed.

  • Businesses should use caution in interpreting and acting upon survey results. Additional market research and testing are often needed to supplement survey findings.

  • Overall, customer surveys are most effective when thoughtfully created, distributed, and analyzed as one component within a company’s broader market research efforts.

Here is a summary of the key points from the passages:

  • Meritocracy - Concept that people advance in society through talent and effort alone. Critiqued for perpetuating inequality.

  • Robert Metcalfe - Inventor of Ethernet technology which allows local area networks to communicate.

  • #MeToo - Movement against sexual assault and harassment, especially in the workplace.

  • Paul Midler - Business consultant who advised Trader Joe’s on its strategy.

  • Edward Miguel - Economist who studied the impact of deworming initiatives.

  • Minneapolis, Minnesota - Location of one of Trader Joe’s first stores outside California.

  • Dan Ariely - Author who studies decision making and behavioral economics. Wrote Predictably Irrational.

  • Social norms - Informal understandings that govern behavior in groups. Can be leveraged to change behavior.

  • Spillover effects - Unintended consequences of an intervention, policy, or technology. Both positive and negative examples discussed.

  • Sunk cost fallacy - Tendency to continue an endeavor based on resources already invested despite diminishing returns.

  • Opportunity cost - Value of the next best alternative foregone when making a decision. Often neglected when making choices.

  • Organizational culture - Set of shared assumptions, values, and norms that shape behavior within an institution.

  • Uber - Ridesharing company used as case study in scaling and creating spillover effects.

Here is a summary of the key points from the paragraphs on up-front costs, utility functions, and vaccinations:

Up-front costs:

  • Implementing new technologies often requires large up-front investments, which can deter adoption even if long-run benefits are substantial (pp. 118-120, 122).
  • Governments may need to subsidize initial costs to encourage change.

Utility functions:

  • Economists use utility functions to represent people’s preferences and measure welfare impacts of policies (p. 165).
  • Utility functions quantify satisfaction derived from goods/services.
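As a toy illustration of the idea (my own example, not from the book), a logarithmic utility function is a common textbook choice because it captures diminishing marginal satisfaction:

```python
import math

def log_utility(consumption):
    """Log utility: each additional unit of consumption adds
    less satisfaction than the one before it."""
    return math.log(consumption)

# Going from 1 to 2 units adds more utility than going from 10 to 11:
gain_low = log_utility(2) - log_utility(1)     # ~0.693
gain_high = log_utility(11) - log_utility(10)  # ~0.095
print(gain_low > gain_high)  # True
```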

Vaccinations:

  • Vaccines have highly positive benefit-cost ratios but still face adoption challenges (pp. 105, 121, 229-230).
  • Health behaviors like vaccinations are influenced by biases like present bias and omission bias.
  • Government mandates can increase vaccinations when individuals undervalue social benefits.
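The “benefit-cost ratio” mentioned above is simply total benefits divided by total costs; a ratio above 1 means the program pays for itself. A minimal sketch with made-up numbers for illustration only:

```python
def benefit_cost_ratio(total_benefits, total_costs):
    """Ratio of benefits to costs; values above 1 favor the program."""
    return total_benefits / total_costs

# Hypothetical program: $30 in averted health costs for every $2 spent.
print(benefit_cost_ratio(total_benefits=30, total_costs=2))  # 15.0
```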