
Technology Is Not Neutral - Stephanie Hare


Matheus Puppe

· 43 min read

BOOK LINK:

CLICK HERE

  • The introduction describes the storming of the US Capitol building on January 6, 2021 by supporters of President Donald Trump. Some rioters were armed and trying to violently disrupt the certification of the 2020 election results.

  • Lawmakers hid and barricaded themselves inside the building for hours as the mob roamed the halls. The riot resulted in deaths and extensive damage.

  • Trump had inspired the mob at a rally earlier that day, where he continued to falsely claim the election was stolen, and then did little to stop the violence as it unfolded.

  • Trump and his supporters used social media platforms like Twitter to spread misinformation about the election and incite anger.

  • On January 8, Twitter, led by CEO Jack Dorsey, permanently banned Trump from the platform, prompting other tech companies to also ban or restrict him.

  • This highlighted the power and responsibility that technology platforms have in regulating content, and raised complex questions about free speech, misinformation, and the role of private tech companies in public discourse.

  • The author argues these events show technology is not neutral - it shapes society and politics in complex ways. Understanding technology’s ethical implications is critical.

  • The attack on the US Capitol on January 6, 2021 was organized online and raised questions about the responsibility of tech companies to intervene. Many attackers had been radicalized online.

  • Major tech companies like Twitter, Facebook, and YouTube banned or restricted Trump’s accounts after the attack, igniting a debate about whether this infringed free speech and their right to enforce policies against inciting violence.

  • The attack generated a large amount of data that is being used by law enforcement and Congressional investigators. The public has also conducted “digilantism” by using social media and facial recognition to identify attackers.

  • The attack was a turning point that united Congress across party lines in seeking to curb the power of Big Tech companies through antitrust and other measures.

  • Other jurisdictions, such as the EU and China, are also introducing regulations to limit the power of tech giants. The companies will resist these efforts.

  • Technology is too important to be left only to technologists - everyone should participate in holding it accountable. Technology shapes the interface between citizens, companies, and governments.

  • To ignore technology cedes control to others. We should engage with technology as informed citizens to ensure it works for us. The values and ethics behind technology need broad discussion.

  • The author believes a book explaining the ethical implications of technology would have been useful earlier in her career and life when faced with dilemmas about what technology to use and create.

  • To learn about technology ethics, the author spoke with a diverse range of people including police, government officials, technology company leaders, academics, and others.

  • She wanted to explore the emerging role of “technology ethicist” and see what difference it is making in improving how we create and use technology.

  • Technology ethicists may have training in law, data science, philosophy or other fields and work in companies, government, academia, non-profits, etc.

  • Their roles may include things like responsible AI lead, algorithmic reporter, Chief Privacy Officer, or being on ethics boards.

  • The author spoke with many experts to learn about ethical issues relating to use of biometrics, health data, AI, and more.

  • She aimed to raise awareness about technology ethics through opinion pieces, lectures, and other public engagement.

  • The author engaged in extensive research and discussion on technology ethics through panels, receiving feedback from various audiences. A key problem emerged: technology ethics matter to everyone but it’s unclear how to practice it effectively.

  • The book aims to address this by exploring what technology ethics entails, how to practice it, and how to evaluate its effectiveness. The core question is how to maximize benefits and minimize harms in creating and using technology.

  • The first half of the book examines the debate on whether technology is neutral. It then considers where to draw the line in creating/using tools and technology.

  • The second half examines two complex global issues impossible to ignore: facial recognition and digital health tools for COVID-19. It looks at whether they work, their implications for identity/privacy/civil liberties, how they change life experiences, whether they are good/bad, and what should be done now that they exist.

  • The conclusion looks ahead at applying technology ethics to other emerging challenges, risks, and opportunities.

  • There is debate over whether technology itself is neutral or not. Some argue it is neutral and depends on how humans apply it. Others argue technology embodies the values and biases of its creators.

  • Proponents of technology being neutral include Kevin Kelly and Paul Daugherty. They argue the goodness or badness comes from how humans use technology.

  • Critics say technology is not neutral because it is designed by humans with values. Tim Berners-Lee and Kate Crawford argue ethical rules must be designed into technology.

  • Critics also point to how certain groups like women can be negatively impacted by technologies designed with male default assumptions.

  • Ursula Franklin says we can evaluate neutrality by studying who benefits from a technology. Surveillance tech benefits managers over workers.

  • The difference between found objects like bones and created objects like the atomic bomb illustrates the non-neutrality of human creations. Bones can have neutral uses, whereas bombs are built only to destroy.

In summary, critics argue technology inherently reflects its creators’ values and biases. Its impacts reveal how non-neutral it can be. Defenders maintain neutrality depends on how humans apply technology.

  • The discovery of nuclear fission by scientists in the 1930s was neutral - they were simply trying to understand the physics of the atomic nucleus.

  • However, the intention behind actually building the atomic bomb during the Manhattan Project was not neutral. This originated with Leó Szilárd’s idea to weaponize nuclear fission.

  • Many of the European scientists involved took ethical stances - refusing to work on atomic weapons projects. But others did work towards building the bomb.

  • Secrecy around the Manhattan Project shielded some scientists from the consequences of their work. This raises issues around responsibility when we don’t have full knowledge.

  • Technology is more than just tools - it can transform how we see the world. The conception of technology as neutral ‘tools’ is limited.

  • To determine responsibility, we need to consider the intentions and societal/political forces driving technological development, not just the technology itself.

  • Those working on new technologies today have a duty to consider potential consequences, even without full knowledge. Responsibility extends beyond just the creators to users and the whole sociotechnical context.

Here is a summary of the key points from the passages:

  • Technology is more than just tools or artifacts - it also encompasses systems, procedures, ideas, and ways of structuring and controlling.

  • Maps are not neutral - they represent the perspectives and purposes of their creators. Maps can visualize geographical data, connections between ideas, or other concepts.

  • Determining responsibility for technology is challenging because it can be difficult to fully comprehend the implications of technologies when they are created and used.

  • There is a “grey space” between viewing technology as just a tool versus something more profound that can change ideas of media and communication.

  • Some argue technology is neutral while others contend technology embodies the values and biases of its creators.

  • Technology can challenge relationships with our bodies, society, politics, nature and humanity. It can enable new forms of control.

  • Looking holistically at technology as systems encompassing production, management, ideas, skills, etc. provides a richer perspective.

  • Historical examples show how technology does not develop in a vacuum - social and political forces shape its creation and use.

  • Tools and technologies exist in a “grey space” where their impact goes beyond just enabling a task - they can transform how we live and think. The internet is a prime example, changing many aspects of life.

  • Some tools like forks only affect a narrow domain, while others like the wheel have far wider ripple effects across society.

  • Information and communications technologies profoundly shape our ability to understand and share information, enabling innovation and collaboration on a mass scale.

  • Standardizing time measurement transformed our conception of reality across philosophy, physics, biology, psychology, and more. It shows technology can alter our fundamental perceptions.

  • Thought experiments about removing technologies reveal their transformative power. Losing ICTs would vastly reduce our knowledge sharing. Losing standardized time would upend our understanding of reality.

  • Looking at how other species use tools provides insight into responsibility. Programmed behaviors may remove responsibility, but intelligence in tool use raises tricky questions about accountability.

  • Definitions of “intelligence” vary, so caution is needed in handing over responsibility to AI systems and other technologies. Their impacts may extend far beyond narrow applications.

  • Overall, the “grey space” concept highlights how technologies shape the human experience in complex ways, requiring careful consideration of their ethics and responsible use. Their effects ripple outwards through society.

  • Intelligence is a complex concept with many definitions. It involves the ability to perceive, experience, and respond effectively to situations.

  • Thinking ability alone does not constitute intelligence. Factors like embodiment (existence within a physical form), culture, emotions, creativity, and consciousness also contribute to intelligence.

  • There is debate around whether plants and non-human animals can be considered intelligent based on their ability to sense environments, communicate, solve problems and make decisions. They may have a form of intelligence but it is unlikely they could be held responsible for actions like humans can.

  • The mind-body problem explores the relationship between mind and body. Descartes proposed a separation between the thinking mind and unthinking body that has shaped perspectives. But the nature of consciousness remains a mystery.

  • How we define intelligence matters for ethical questions around creating intelligent machines. If machines could potentially surpass human intelligence, who would be responsible for their actions? As we build thinking machines, we need to carefully consider what constitutes real intelligence.

Here is a summary of the key points from the qualia and Sunit Das passage:

  • The passage discusses the concept of “qualia” in philosophy of mind. Qualia refers to the subjective, qualitative properties of conscious experiences - the “what it is like” aspect of mental states.

  • Examples of qualia include the redness of red or the painfulness of pain - these are features of subjective experience that cannot be fully captured by a purely objective, scientific description.

  • There is debate over whether qualia really exist as irreducible phenomenal properties, or whether they can be explained in purely physical terms.

  • Some philosophers argue that qualia pose a problem for physicalism, the view that everything is physical. If subjective experiences have qualitative properties over and above the physical facts, then physicalism may be false.

  • Other philosophers contend that qualia can be accommodated within a physicalist view, perhaps by identifying them with certain brain states. On this view, there may be nothing more to qualia than particular neural activities.

  • Overall, the existence and nature of qualia remain controversial issues in philosophy of mind and the mind-body problem. The passage summarizes some of the key positions in this debate.

  • Technology can fall into an ethical grey area where it is unclear if the benefits outweigh potential risks. Examples given include AI, CRISPR gene editing, and the Internet of Things.

  • There is debate around where to ‘draw the line’ on what technologies should and should not be pursued. Some argue for clear ethical boundaries, while others believe in pushing the limits of what is possible.

  • Brain implants allow us to enhance human abilities but raise concerns about altering what makes us human. Devices like smartphones already provide companies huge insight into our lives.

  • During WWII, scientists faced ethical dilemmas about developing technologies for war. After the war, the US recruited Nazi scientists through Operation Paperclip despite moral objections.

  • New technologies require us to carefully consider the ethical implications. As Einstein and others asked about recruiting Nazi scientists: “Do we want science at any price?” Where we draw the line reflects our values.

Here is a summary of the key points about technology ethics from the passage:

  • The book asks if we want technology at any price, implying there may be ethical costs to technological progress.

  • Scientists’ willingness to collaborate with governments on weapons research has shifted over time based on ethical views - more willing during WWII to defeat Axis powers, more reluctant during Vietnam war.

  • Most science today is government-funded, so scientists face dilemmas about cooperating when they disagree with policies.

  • Understanding the philosophical framework helps address technology ethics issues. Relevant branches:

  • Metaphysics - what is reality? Issues like information pollution undermining shared reality.

  • Epistemology - how do we know what we know? Issues like biased algorithms distorting knowledge.

  • Ethics - how should we act? Setting standards for right and wrong conduct.

  • Political philosophy - how should society be organized? Technology’s impacts on power structures.

  • Aesthetics - what is beauty and art? Relationship between tech/science and creativity.

  • Logic - what is correct reasoning? Ensuring tech behaves logically.

  • Overall, philosophy provides systematic rigor for addressing ethical questions about technology. We need shared methods to determine where to draw lines on ethics, and who decides.

Here is a summary of the key points about epistemology from the passage:

  • Epistemology explores knowledge, how we acquire it, and its limitations. This includes learning from experience and reasoning.

  • Epistemology relates to questions of authority and information integrity - who is a source of knowledge, how is this authority conferred, and is data/evidence solid?

  • Applied to technology, epistemology considers whether AI systems can really “think” and if their reasoning is explainable. Often AI is a “black box” where the workings are mysterious.

  • Epistemology also relates to issues like censorship, freedom of information, and ethics - can tech workers know the implications of their work if key info is withheld?

  • Epistemology grapples with distinguishing truth from misinformation and disinformation, which can spread rapidly on social media. Some people distrust experts and conflate opinion with expertise.

  • Solutions involve values like humility and respect, though sometimes the stakes are too high for mere disagreement. Social media platforms have begun trying to address this by fact-checking content.

Here is a summary of the key points from the article on 25 October:

  • The article discusses how epistemology (the study of knowledge) can help us evaluate claims in an era of misinformation. It focuses on the role of logic and different types of arguments (deductive and inductive).

  • Deductive arguments start with general premises and reach specific conclusions (for example, ‘all humans are mortal; Socrates is human; therefore Socrates is mortal’), while inductive arguments start with specific observations and reach tentative general conclusions (for example, ‘every swan observed so far has been white, so all swans are white’). Inductive arguments are more prone to error.

  • Logic and epistemology can help us identify unsound arguments and faulty reasoning. We need standards to evaluate the deluge of information online and combat the “infodemic.”

  • Understanding the sociopolitical roots and impacts of misinformation is key. We must ask who benefits from spreading misinformation, who is harmed, and who gains when trust declines.

  • The article advocates using logic and epistemology to build knowledge on firm foundations in our technological age, where truth and reality are under attack. Rigorous reasoning can help anchor us.

Here is a summary of the key points about political philosophy and technology:

  • Some technology companies like Meta, Alphabet, Twitter, Apple, Microsoft, Amazon, and Huawei have become political actors at the national or international level due to their wealth, number of customers, market dominance, and ability to shape political discourse.

  • Companies like Cloudflare that protect websites have had to make tough decisions about whether to stop protecting hate sites or sites used to spread dangerous misinformation. This raises questions about who should have the power to make such decisions.

  • Technology companies challenge traditional notions of power relations, privacy, civil liberties, and human rights through their data gathering and analysis capabilities.

  • Companies like Google and Facebook have considerable power over what information is accessible and emphasized in search results and news feeds. For example, Google temporarily hid some Australian news sites during a dispute with the government.

  • Lawmakers worldwide have largely failed to answer key questions about how to regulate technology companies and address the power imbalances they create. New frameworks and regulations are needed.

  • Concepts from political philosophy like power, authority, legitimacy, freedom, and the relationship between the individual and society are critical for examining technology’s societal impacts.

Here is a summary of the key points about aesthetics from the passage:

  • Aesthetics was traditionally concerned with questions of beauty, but more recently it has come to mean “relating to felt experience” more broadly.

  • Aesthetics shapes our everyday experiences - how we eat, dress, design our surroundings, engage with nature, art, music, etc. It expands and enriches our human experience.

  • Aesthetics is central to the design of technology and tools - it affects form, function, user interface, and user experience. Good aesthetics make tools intuitive and pleasing to use; bad aesthetics are frustrating.

  • Aesthetics relates to accessibility, inclusivity, and social justice in design. Exclusive or inaccessible design can exclude people from participation.

  • Aesthetics can inspire solutions to problems, like Taiwan’s mask availability map during COVID-19.

  • Data visualizations, urban design, etc. are shaped by aesthetics which invoke questions about values.

  • Aesthetics can be about curiosity, fun, and joy in technology, like Steve Jobs’ love of calligraphy shaping Apple’s design.

  • Aesthetics connects to ethics, as good design reflects good values.

In summary, aesthetics is about the broader human experience and values encoded in design, not just beauty. It is key to creating technology that is accessible, ethical, and connects to human experience.

Here is a summary of the key points about Machiavelli and utilitarianism in the passage:

  • Machiavelli in The Prince implies that unethical means can be used to achieve ethical goals. This allows for harm along the way.

  • Utilitarianism prioritizes outcomes above all else, which implies that unethical means can be justified if they lead to ethical ends.

  • Other ethical traditions, such as Aristotle’s virtue ethics and Kant’s deontology, hold that intention and character rather than outcome define ethical action, in contrast to utilitarianism.

  • Mandeville argued private vices can have public benefit, challenging the focus on virtuous actions.

  • Utilitarianism fails to address how to build scalable but customizable technologies.

  • Despite flaws, utilitarianism is a powerful lens for studying technology ethics, including mass surveillance concepts like the panopticon.

  • Technology ethics are at the center of the ‘new tech cold war’ and ‘great decoupling’ between the US and China. Their data practices, tools, and technologies reflect different values.

  • China is seeking to shape and reconfigure ethics norms previously dominated by Western liberal democracies, especially the US. It is doing this in several ways:

  • Investing in setting tech standards, e.g. in facial recognition and surveillance.

  • Exporting its technology and ethics abroad through the Belt and Road initiative.

  • Applying its tech and ethics domestically to persecute Uyghurs, e.g. forced DNA collection and mass imprisonment.

  • The US and others have accused China of genocide against Uyghurs.

  • Solving these problems requires agreeing they are problems. China sees things differently than Western democracies.

  • Technology development and ethics are dominated by certain groups, excluding or silencing others. This affects how we think about tech ethics.

  • We need a framework to assess tools/tech, ask who they succeed or fail for, and ensure diverse perspectives in deciding where to ‘draw the line’.

  • Rather than answering where to draw the line, this chapter provides a philosophical framework for thinking about how to draw the line, who draws it, and when it’s crossed.

Here is a summary of the key points from the article:

  • In 2004, Boris Johnson strongly opposed the introduction of national ID cards in the UK, stating he would eat his card if required to show it.

  • The UK has historically taken a different approach to identification compared to many other countries - there is no national ID card and people are not required to show ID for most day-to-day activities.

  • There have been two exceptions - during WWII when ID cards were temporarily introduced, and after 9/11 when Labour proposed an ID card to fight terrorism. This was later scrapped by the incoming Conservative-led coalition government in 2010.

  • Biometric technologies like facial recognition are now making physical ID cards redundant by turning bodies into data. Facial recognition is being increasingly used but remains largely unregulated.

  • The UK has traditionally emphasized civil liberties over surveillance and national security. New biometric technologies threaten to upset this balance. Regulation of technologies like facial recognition remains inadequate.

  • Facial recognition technology is a form of biometric technology that identifies people by their facial features. It is becoming increasingly used by governments and corporations.

  • The UK has a history of resisting mandatory ID cards and national identity schemes, seeing them as infringing on civil liberties. However, biometric technologies like facial recognition are now being adopted, especially by police forces.

  • The London Metropolitan Police has been an enthusiastic adopter of facial recognition, ignoring recommendations to halt its use from various oversight bodies due to concerns about inaccuracy and bias.

  • Facial recognition originated in early photography in the 19th century. It has now become a multi-billion dollar industry, though there are concerns about consent, security, and mission creep in its uses.

  • Oversight bodies like the Surveillance Camera Commissioner have warned the UK’s biometrics strategy is currently unfit for purpose, but advocates argue better design could address problems.

  • Overall, facial recognition challenges traditional notions of privacy and consent, and its adoption warrants close scrutiny to avoid slippage into a ‘ghastly, Orwellian, omniscient police state.’

  • Facial recognition technology originated with Alphonse Bertillon’s anthropometric system in the late 1800s, which paired facial photos with body measurements and descriptors. This allowed police to create identification records and mugshots.

  • Bertillon missed the innovation of using fingerprints for identification, which came from the UK and Argentina.

  • In the early 1900s, passport photos were added alongside biometric descriptions. In the 2000s, digital components like facial biometrics were added.

  • Biometric technologies have origins in police services and colonial administrators for use on criminals and colonial subjects. They are linked to debunked pseudosciences like physiognomy and phrenology.

  • Researchers have attempted to link physical appearance and facial features to criminality, emotional states, and behavior, from Bertillon to Francis Galton to modern facial recognition researchers. This is concerning from an ethical standpoint.

  • Overall, the history of facial recognition and biometrics is intertwined with efforts to identify and categorize criminals and marginalized populations based on appearance and biometrics. This problematic origin raises ethical issues for how the technology is used today.

Here is a summary of the key points in the passage:

  • Facial recognition technology has roots in the discredited pseudoscience of physiognomy, which attempted to use facial features to judge character and personality.

  • Companies like Clearview AI, FindFace, and PimEyes have built facial recognition search engines using billions of photos scraped from the internet without consent. This raises privacy concerns.

  • In the US, facial recognition training datasets have included images of child sexual exploitation, visa photos, mugshots, and photos of deceased individuals, all used without consent.

  • Post 9/11, facial recognition expanded from military use to wider applications like immigration and border control under the “war on terror” infrastructure.

  • China demonstrates the dystopian surveillance potential of facial recognition technology to track and control people.

  • Facial recognition poses varying risks depending on how it is used, from low risk to unlock phones to high risk for racial profiling or controlling workers.

  • More targeted responses are needed to regulate different uses based on their risk of harm, rather than blanket approaches like bans or moratoriums.

  • 1:1 face matching involves comparing a person’s face to a single facial image stored on a device or database. Common uses are unlocking smartphones or accessing government services.

  • To set up smartphone unlocking, the phone captures a facial biometric and creates a mathematical representation (“hash”) of the face. It then compares the live face against the stored representation to verify a match (see the sketch after this list).

  • India’s Aadhaar system is an example of using 1:1 matching for government services. It has enrolled over 1 billion citizens’ biometrics. People can access services by having their face, iris or fingerprint checked against the database.

  • Aadhaar aims to help people lacking official IDs, but has issues around consent, security, and normalizing facial recognition. Data leaks have exposed citizens’ information. It can enable tracking people by demographics like caste and religion.

  • Overall, 1:1 matching on personal devices seems low risk when done voluntarily. But government uses like India’s Aadhaar have high risks around consent, security, and social impacts. More oversight and safeguards are needed for identity systems employing facial recognition.
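
A minimal sketch of the 1:1 verification flow described in this list is shown below, in Python. The embed_face function is a toy stand-in (a real device uses a trained neural network and keeps the protected template in secure hardware rather than a plain vector), and the threshold is an illustrative value, not taken from any actual system.

```python
import numpy as np

MATCH_THRESHOLD = 0.8  # illustrative; real systems tune this to balance false accepts and false rejects

def embed_face(image: np.ndarray) -> np.ndarray:
    """Toy stand-in for a face-embedding model.

    A real device runs a trained neural network; here we just centre and
    normalise the flattened pixels so the sketch runs end to end.
    """
    vec = image.astype(float).ravel()
    vec -= vec.mean()
    return vec / (np.linalg.norm(vec) + 1e-9)

def enroll(image: np.ndarray) -> np.ndarray:
    """Capture the face once and store its mathematical representation (the template)."""
    return embed_face(image)

def verify(live_image: np.ndarray, template: np.ndarray) -> bool:
    """1:1 matching: compare the live capture against the single enrolled template."""
    similarity = float(np.dot(embed_face(live_image), template))  # cosine similarity of unit vectors
    return similarity >= MATCH_THRESHOLD

# Toy usage with random "images".
rng = np.random.default_rng(0)
my_face = rng.random((64, 64))
template = enroll(my_face)
print(verify(my_face, template))               # True: the same face matches its own template
print(verify(rng.random((64, 64)), template))  # False: an unrelated image does not
```

The design point is that only a single template, enrolled by the user, is ever consulted, which is why voluntary on-device 1:1 matching is described above as comparatively low risk.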

Here is a summary of the key points about the uses and risks of 1:1 facial verification technology:

  • To monitor workers: Used by companies like Uber to check drivers’ identities, but has higher failure rates for people with darker skin, amplifying inequality. Increases workplace surveillance and takes away workers’ ability to consent.

  • To pay for things: Convenient but not always voluntary, especially in China where it is required for many basic transactions. Raises particular concerns for children’s privacy when used in schools.

  • To enter a building: Proposed for apartment buildings but canceled after tenant protests. Risks excluding people unfairly if system fails to recognize their face. Storing facial data also creates security risks.

  • Overall, 1:1 verification is done with our knowledge but not always meaningful consent. Risks include unfair exclusion, increased surveillance of workers and minorities, and insecure storage of biometric data. Children are especially vulnerable. There is a need for stronger regulation to protect rights.

Here are the key points about 1:many face matching from the passage:

  • 1:many matching compares our face to images stored in a database or the cloud. It can be done with or without our knowledge and consent (a sketch of the matching step follows this list).

  • Social media platforms previously used 1:many matching to automatically tag people in photos without consent. This led to a class action lawsuit against Facebook which resulted in a $650 million settlement.

  • Facebook also announced it would end use of facial recognition and delete over 1 billion facial templates following privacy concerns.

  • Law enforcement agencies use 1:many matching to identify suspects by comparing faces to mugshots and driver’s licenses. This is often done without knowledge or consent, raising privacy and bias concerns.

  • 1:many matching enables mass surveillance and tracking of people in public spaces like airports and stadiums. China is a notable example of large-scale use for social control.

  • Overall, 1:many matching raises significant risks around consent, privacy, bias and mass surveillance compared to 1:1 matching. Stronger regulation is likely needed to protect human rights as use of the technology expands.
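
For contrast with the 1:1 sketch earlier, below is a minimal Python sketch of the 1:many identification step. It assumes unit-length face embeddings produced by a hypothetical embedding model like the embed_face stand-in above; the threshold is again illustrative.

```python
import numpy as np
from typing import Dict, Optional

def identify(probe: np.ndarray, gallery: Dict[str, np.ndarray],
             threshold: float = 0.8) -> Optional[str]:
    """1:many matching: search an entire gallery of enrolled templates for the closest match.

    Returns the best-matching identity, or None if nothing clears the threshold.
    """
    best_id, best_score = None, -1.0
    for identity, template in gallery.items():
        score = float(np.dot(probe, template))  # cosine similarity for unit vectors
        if score > best_score:
            best_id, best_score = identity, score
    return best_id if best_score >= threshold else None
```

The ethical difference flagged in this list is visible in the code: verification consults one template the user chose to enrol, while identification necessarily searches every person in the gallery, whether or not they consented to being included.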

  • Facial recognition technology raises privacy concerns, as it can identify people without their consent. Illinois’ Biometric Information Privacy Act has been used successfully against tech companies like Facebook and Clearview AI to protect biometric data.

  • The technology is used by law enforcement for surveillance and to identify suspects, which risks misidentification and wrongful arrests. It can chill dissent and protests even in democracies.

  • Globally, over 75 countries use AI for surveillance. China dominates and exports the technology through its Belt and Road Initiative. Deals such as Zimbabwe’s agreement to share citizens’ facial data with China’s CloudWalk raise ethical issues.

  • Facial recognition is a dual-use technology, deployed in military contexts by NATO forces and others. This further normalizes and proliferates the technology.

  • While the risks are high, especially from ubiquitous persistent surveillance, thoughtful regulation protecting civil liberties is feasible if there is political will. Key is informed public debate on balancing security with liberty.

Here is a summary of the key points about facial recognition technology in Iraq, Afghanistan, and for immigration control since 2001:

  • After 9/11, the US military collected facial scans, fingerprints, and other biometrics from millions of people in Iraq and Afghanistan to identify individuals. This data was collected without consent and raises concerns about how it could be misused.

  • In the US, facial recognition has been used at airports since 2004 to identify non-citizens entering the country. The stated goals are to identify threats and speed up processing. This is done without consent.

  • Companies like Accenture have built biometric and surveillance systems for borders and immigration worldwide, enabling monitoring and control of movement with little transparency.

  • Facial recognition can also analyze physical traits and characteristics to classify people, often without consent.

  • The UK’s online passport photo checker was found to have higher rejection rates for dark-skinned people in 2020 due to difficulties reading their facial images.

  • Overall, facial recognition technology since 9/11 has greatly expanded governments’ ability to identify, monitor, and control populations, with few safeguards for consent, oversight, or preventing misuse. Its use in immigration and border control raises significant human rights concerns.

Here is a summary of the key points:

  • Facial analysis technology is being used by governments and companies in various ways, often without consent. Examples include identifying emotions, monitoring health, and classifying people by ethnicity, race, sexual orientation, or political views.

  • This technology has shown biases, such as passport photo systems working less accurately on people with darker skin tones.

  • Classifying people based on protected characteristics like race or sexual orientation raises major ethical concerns, as these are complex social constructs. This research risks legitimizing harmful ideologies.

  • There is minimal ethical oversight and review of most AI research. Researchers have explored using facial analysis to determine sexual orientation or political beliefs, which poses privacy threats and enables persecution of marginalized groups.

  • Overall, facial analysis technology poses risks of abuse, bias, and privacy violations without proper safeguards and consent. There are serious ethical implications that need to be addressed as this technology expands.

Here is a brief summary:

Facial recognition technology raises concerns about ethics, bias, and privacy. Studies show it is less accurate for women and people of color. Some companies have stopped selling it to police after protests over racial injustice. Cities and states are banning police use, going beyond just improving accuracy.

  • Facial recognition technology suffers from issues of inaccuracy and bias, especially in the US where training datasets and algorithms have problems related to race and gender. Studies have found very high false positive rates.

  • There is inadequate regulation and oversight of how facial recognition is used, particularly by US law enforcement. Police departments have significant discretion and there are examples of questionable practices.

  • The problems are not just technical but also political, relating to privacy, civil liberties, and power imbalances between citizens, companies, and the state.

  • The UK also has issues with inaccuracy of facial recognition systems used by police forces, despite claims to the contrary. Independent studies of UK police trials have found that the overwhelming majority of ‘matches’ were misidentifications - up to 98% in some cases (a worked example after this list shows how this can happen).

  • The use of facial recognition is spreading rapidly with little oversight. There are concerns about mass surveillance, especially private networks like Amazon’s Ring doorbell cameras integrating with law enforcement.

  • Overall, facial recognition technology as currently implemented suffers from serious issues that make expanded use concerning. There is a need for much more rigorous regulation and oversight before deployment in sensitive contexts like law enforcement.
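
The headline figures above describe the share of ‘matches’ that turned out to be wrong. A back-of-the-envelope calculation, using purely illustrative numbers rather than figures from the trials, shows how scanning large crowds for a small watchlist produces mostly false matches even when the underlying system is fairly accurate:

```python
# Purely illustrative numbers: 100,000 faces scanned, 50 of them genuinely on the watchlist.
scanned = 100_000
on_watchlist = 50

true_positive_rate = 0.90   # assumed: the system flags 90% of genuine watchlist faces
false_positive_rate = 0.01  # assumed: it also flags 1% of everyone else by mistake

true_alerts = on_watchlist * true_positive_rate                # 45 correct alerts
false_alerts = (scanned - on_watchlist) * false_positive_rate  # roughly 1,000 false alerts

share_wrong = false_alerts / (true_alerts + false_alerts)
print(f"{share_wrong:.0%} of alerts are misidentifications")   # about 96%
```

With these assumed numbers, roughly 96% of alerts are misidentifications even though the system misfires on only 1% of the people it scans - a base-rate effect of the kind that can drive the headline percentages reported in such trials.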

  • A 2019 report by the University of Essex highlighted issues with facial recognition technology, including that it is inaccurate, unreliable, and needs more testing and regulation.

  • Being misidentified by facial recognition can have serious consequences, as illustrated by the case of Jean Charles de Menezes who was shot dead in 2005 after being mistakenly identified as a suspected terrorist.

  • Another example is a man who was stopped and searched by police after facial recognition incorrectly flagged him as a match. 98% of matches made by the Metropolitan Police’s facial recognition system were misidentifications.

  • There are concerns about the disproportionate impact of facial recognition on minorities. Black people in the UK are more likely to be stopped by police.

  • A man who covered his face to avoid facial recognition was fined £90, showing how the technology reduces privacy and civil liberties.

  • There is currently no legislation governing police use of facial recognition in the UK. Regulators have called for better oversight and controls.

  • Police databases contain images of innocent people, violating the presumption of innocence. People have limited rights to request removal of their images.

  • Parliament recommended a moratorium on facial recognition until legislation is in place, but this was ignored by the government. Only in 2021 did the Home Office publish a proposed code of practice, but this does not provide comprehensive regulation.

  • The UK lacks clear national guidance on police use of live facial recognition technology. This leads to inconsistent policies and civil liberties protections across different jurisdictions.

  • The London Metropolitan Police has used the technology since 2016 despite criticism about lack of safeguards and potential for discrimination.

  • South Wales Police received government funding to test the technology but a court ruled their use was unlawful due to deficiencies in guidelines, data protection, and bias mitigation.

  • Police Scotland postponed plans to use facial recognition after warnings from parliament that legislation and human rights protections were needed first.

  • Experiences of policing, privacy, and civil liberties now depend on location within the UK. There are differences between England, Wales, Scotland, and Northern Ireland in if and how facial recognition is being used.

  • Overall, the UK’s decentralized approach has created a patchwork system where civil liberties vary across jurisdictions. Clearer national guidance is needed.

Here is a summary of the key points from the information request on facial recognition technology in the UK:

  • Regulators are unwilling or unable to stop the proliferation of facial recognition technology. The merging of oversight roles and the new commissioner’s stance indicate this will continue.

  • The Court of Appeal ruling stopped South Wales Police from using facial recognition, but not the London Met Police, who have now integrated it permanently.

  • No action will come from the current government given its large majority. A moratorium bill has stalled.

  • Scotland has banned police use of facial recognition, but Wales and Northern Ireland are unlikely to follow.

  • The number of surveillance cameras in the UK is estimated at 10 million and rising rapidly. Many are gaining facial recognition capabilities.

  • Facial recognition is being used in stores, hospitals, public spaces, and more without public knowledge or consent.

  • Private surveillance like Amazon Ring doorbells is rising and police partnerships threaten to expand it.

  • Secret use of facial recognition further erodes public trust and accountability.

  • Intelligence agencies warn it could aid Chinese espionage and want restrictions.

  • Our experience is one of constant, expanding surveillance that we have little power over.

Here is a summary of the key points from the excerpts:

  • Facial recognition technology has origins in policing, racism, colonial administration, border control, and debunked pseudosciences like physiognomy and phrenology.

  • In the US, facial recognition technology has led to wrongful arrests. Police have tampered with facial images and people may be denied fair trials if not informed they were identified by the technology. Major tech companies are restricting sales to police. Several cities and states have banned police use.

  • In the UK, researchers found the technology misidentifies people. The Court of Appeal ruled South Wales Police’s use was unlawful, but the Met Police still plan to integrate it. Data regulators think the biometrics strategy needs redoing. A House of Commons committee recommends a moratorium until proper legislation exists.

  • Private sector use of facial recognition is unchecked in the UK. The likelihood of police use depends on where you live.

  • China uses the technology for state surveillance and control, including the persecution of the Uyghurs, which the UK parliament has described as genocide. Extensive CCTV networks in the UK mean citizens are constantly watched by police, companies, and each other. Regulators have not stopped this.

  • Ethics and unintended consequences must be considered. The technology risks harming certain groups more than others. Utilitarian and panopticon concepts highlight the societal impacts.

  • The UK has one of the world’s largest surveillance camera networks, with facial recognition technology increasingly being used by police and private companies.

  • There are concerns about the lack of regulation and oversight, with police forces able to deploy facial recognition without clear legal authority.

  • Biometrics and surveillance raise significant ethical issues around privacy, consent, and function creep. There are also concerns about bias and discrimination in the underlying algorithms.

  • The “panopticon” concept shows how surveillance can be a tool of control and social coercion. Digital technology has now enabled a panopticon on a mass scale.

  • The UK needs proper legislation to regulate facial recognition and other biometric technologies, learning from developments in the EU, US, and China. An immediate moratorium on high-risk uses may be needed.

  • Facial recognition makes national ID cards redundant as our faces now act as ID. The UK lacks a legislative framework to protect against abuse of this biometric data.

  • There is still time to define ethics and oversight for these technologies, but we must act now before the digital panopticon is complete. The line must be repeatedly redrawn as technology, values and facts evolve.

Here is a summary of the key points about digital health tools explored in the UK during the Covid-19 pandemic:

  • Immunity passports were considered early in the pandemic as a way to allow people who had recovered from Covid-19 to resume normal activities, but were quickly dismissed due to concerns about creating perverse incentives, privacy issues, and lack of evidence about immunity conferred by infection.

  • Exposure notification apps using Apple and Google’s framework were adopted by many countries to notify people about potential Covid-19 exposures while preserving privacy. The UK initially pursued a centralized model to give public health authorities more data, but ultimately adopted the decentralized Apple/Google model in its NHS Covid-19 app due to technical limitations (a simplified sketch of the decentralized approach follows this list).

  • QR code check-in programs were introduced in venues like restaurants and bars to support contact tracing efforts. Customers scanned a venue’s unique QR code when entering to check-in.

  • Vaccine passports, showing proof of vaccination, were considered domestically to allow access to public venues and events, but their rollout was limited and controversial due to privacy, discrimination, and equity concerns. They have been required for international travel.

  • Overall, uptake and impact of digital tools was mixed, facing issues like low adoption rates. None became a silver bullet solution, but provided complementary support to traditional public health measures to combat the pandemic. Their continued use post-pandemic remains debated.
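
As a rough illustration of the decentralized model mentioned in this list, here is a heavily simplified Python sketch. The real Apple/Google framework derives rotating Bluetooth identifiers cryptographically from per-day keys and estimates proximity from signal strength and duration; the random tokens and classes below are toy stand-ins for the matching idea only.

```python
import secrets

class Phone:
    def __init__(self) -> None:
        self.own_tokens = []        # tokens this phone has broadcast
        self.heard_tokens = set()   # tokens heard nearby, stored only on the device

    def broadcast(self) -> str:
        """Emit a fresh random token over Bluetooth and rotate it, so the phone cannot be tracked."""
        token = secrets.token_hex(16)
        self.own_tokens.append(token)
        return token

    def hear(self, token: str) -> None:
        """Record a token received from a nearby phone."""
        self.heard_tokens.add(token)

    def check_exposure(self, published_positive_tokens: set) -> bool:
        """Decentralised matching: compare locally stored tokens against the published list."""
        return bool(self.heard_tokens & published_positive_tokens)

# Toy run: Alice and Bob meet; later Alice tests positive and uploads her own tokens.
alice, bob = Phone(), Phone()
bob.hear(alice.broadcast())
published = set(alice.own_tokens)     # only the infected user's own tokens are ever uploaded
print(bob.check_exposure(published))  # True: Bob is alerted without anyone learning whom he met
```

The design choice at the heart of the UK's centralized-versus-decentralized debate is visible here: the only data that leaves a phone is the token list of a user who chooses to report a positive test, so no central authority ever sees the contact graph.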

  • Exposure notification apps are different from contact tracing apps. Exposure notification apps have a limited function - to alert users if they have been near someone who tested positive. Contact tracing aims to map out chains of transmission.

  • Exposure notification apps raise privacy and ethics concerns, as they require balancing public health goals with individual privacy. The UK has a poor record on data privacy breaches, so privacy was a key concern in developing exposure notification apps there.

  • Exposure notification apps cannot work well without fast and accurate testing. To break chains of transmission, infected people need test results back within 24 hours. The UK has struggled with testing capacity and rapid turnarounds.

  • Exposure notification apps are less effective if people do not self-isolate when notified of an exposure. However, many people cannot afford to self-isolate or do not comply. Financial and social support is needed.

  • Adoption rates for the UK’s exposure notification apps have been low compared to other countries. Public skepticism, technical issues, and interoperability problems between the apps in England and Wales, Scotland, and Northern Ireland have hindered uptake.

  • Overall, while exposure notification apps can play a small role, investment may be better directed at improving testing capacity, compliance with self-isolation, and traditional contact tracing methods. Apps are not an effective standalone solution.

  • The UK government’s Scientific Advisory Group for Emergencies (SAGE) advised that people must self-isolate after testing positive for Covid-19, but compliance was low.

  • The government tried to improve compliance through fines for offenders and financial support for isolation, but the support was inadequate and most applications were rejected.

  • Poorer people were less likely to self-isolate due to inability to afford staying home from work. High rates of Covid-19 persisted in deprived communities.

  • The UK initially built uncoordinated contact tracing apps that were not interoperable between nations until months later, causing confusion.

  • ‘Citizen science’ from public symptom trackers provided useful Covid-19 insights that authorities initially missed, such as loss of smell as a symptom.

  • ‘Critical friends’ were not sufficiently involved to advise on usability issues that may have improved the apps.

  • Overall, lack of financial support undermined isolation compliance among poorer groups, while fragmented apps and lack of user feedback hampered effectiveness.

  • The UK government implemented various lockdowns and restrictions during the pandemic, with devolved administrations setting some of their own policies. This included self-isolation rules and fines for non-compliance.

  • Issues arose with the NHS Test and Trace system, including lab errors, testing shortages, and poor compliance with self-isolation rules.

  • The NHS COVID-19 exposure notification app was launched, but suffered from technical issues and low uptake/compliance. Interoperability with other countries’ apps was also attempted.

  • Independent experts and critics provided feedback on issues with the government’s approaches, leading to some changes like abandoning a centralized app model.

  • Financial support for self-isolation was introduced but bureaucratic hurdles limited its effectiveness. More trials and initiatives emerged later to improve compliance.

  • The government faced criticism for failing to deliver on promises of “world-beating” systems. More realistic expectation-setting and admitting limitations could have helped manage public expectations. Overall, pandemic response suffered various setbacks but iteration and feedback led to improvements.

  • The UK government launched a contact tracing app in 2020 to help stop the spread of COVID-19. Initial versions had privacy issues, but version 2.0 improved on this.

  • However, a major issue remained - digital exclusion. Many people couldn’t use the app as their phones were too old. This raises ethical questions around excluding people from healthcare based on their technology access.

  • The app likely helped reduce cases and deaths to some extent based on modeling, though the real-world impact is hard to quantify due to the private nature of the app data.

  • Ultimately, the key metrics are deaths, hospitalizations, and vaccine uptake, not the app adoption rate. The app is just one part of the public health response and works best alongside other interventions like social distancing and manual contact tracing.

  • By mid-2021, with vaccination rates up, the priority shifted away from using the app to limit all infections. Large numbers of people being pinged to self-isolate caused disruptions. This illustrates how tech tools like apps need to align with changing public health strategies over time.

  • The UK government developed two main digital tools for COVID-19 exposure notification and contact tracing - an app using Apple/Google framework and QR code check-ins.

  • The NHS COVID-19 app launched in September 2020 and has been downloaded over 20 million times. It notifies users if they have been in close contact with someone who later tests positive.

  • In July 2021, a ‘pingdemic’ occurred where huge numbers were notified to self-isolate, causing worker shortages. The app’s parameters were then adjusted to reduce alerts.

  • QR code check-ins were introduced so venues could display a code for visitors to scan on arrival using the app, enabling exposure alerts tied to that venue (a minimal sketch of how such a scheme can work follows this list).

  • However, a leaked report showed the QR code check-in data was barely used by NHS Test and Trace in early 2021, leading to potential virus spread.

  • The number of venue alerts from QR codes increased and then declined after check-ins were made voluntary in July 2021.

  • The UK government denied plans for domestic vaccine passports but then proposed them, aiming to make full vaccination a condition of entry to nightclubs and other crowded venues. This was controversial and not implemented.
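
Below is a minimal sketch of how a venue QR check-in scheme of this kind can work; the data structures and the two-hour matching window are assumptions for illustration, not details of the NHS implementation.

```python
from dataclasses import dataclass
from datetime import datetime, timedelta
from typing import List, Set, Tuple

@dataclass
class CheckIn:
    venue_id: str    # decoded from the venue's QR poster
    time: datetime   # when the user scanned in

def exposure_alerts(checkins: List[CheckIn],
                    flagged: Set[Tuple[str, datetime]],
                    window: timedelta = timedelta(hours=2)) -> List[str]:
    """Return venue IDs where the user checked in close in time to a flagged visit."""
    alerts = []
    for c in checkins:
        for venue_id, flagged_time in flagged:
            if c.venue_id == venue_id and abs(c.time - flagged_time) <= window:
                alerts.append(venue_id)
    return alerts

# Toy usage: the user visited a cafe that contact tracers later flagged.
visits = [CheckIn("cafe-123", datetime(2021, 3, 1, 13, 0))]
flagged_venues = {("cafe-123", datetime(2021, 3, 1, 12, 30))}
print(exposure_alerts(visits, flagged_venues))  # ['cafe-123']
```

Keeping the check-in log on the device and publishing only flagged venue and time pairs is one way such a scheme can limit central data collection.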

Here is a summary of the key points about the UK government’s plan regarding vaccine passports:

  • Initially, government ministers ruled out domestic vaccine passports, calling them discriminatory and against British values.

  • However, the government funded 8 pilot schemes to develop vaccine passport apps, including using facial recognition and fingerprint technology.

  • In January 2021, it started trialling vaccine passports domestically in two local authorities.

  • In February, it made a public U-turn and announced plans to add vaccine passport functionality to the NHS app.

  • This prompted a heated national debate, with opposition on civil liberties, equality, privacy, and feasibility grounds.

  • In April, Michael Gove launched a review and asked the public where the line should be drawn on their use.

  • By May, the government scrapped plans for domestic vaccine passports, though it maintained the infrastructure it had built.

  • Some venues started using them voluntarily despite unresolved issues around ethics and discrimination.

  • Overall, the government made a U-turn on domestic vaccine passports without explanation, built the infrastructure anyway, and discreetly enabled their voluntary use.

  • The UK government has flip-flopped numerous times on whether to introduce vaccine passports for domestic use. Initially it said it had no plans to do so, then suggested it might, then said it wouldn’t, before eventually introducing them in limited settings.

  • Reasons cited for introducing domestic vaccine passports include incentivizing vaccine uptake, controlling virus transmission, and protecting the NHS. However, clear criteria for when they might be lifted have not been provided.

  • The current NHS Covid Pass in England shows proof of vaccination or a recent negative test. However, this could change to require double or triple vaccination.

  • While currently limited to certain high-risk settings like nightclubs, the scope could easily be expanded to more venues.

  • Decisions around domestic vaccine passports involve trade-offs, as they are not neutral tools. There are debates around effectiveness for increasing vaccinations, ethics, privacy, discrimination and civil liberties.

Here is a summary of the key points made in the passage:

  • The UK’s approach to digital technologies for fighting the pandemic has evolved dramatically over the course of the crisis. Where we draw the line on these technologies is a matter of ethical judgement.

  • In reviewing the pandemic response, we must assess each mitigation measure against impacts on cases, hospitalizations, deaths, long Covid, and strain on the NHS.

  • Before mass vaccination, lockdowns, distancing and hygiene were the main effective measures. Digital tools like QR code apps may have helped slow infections but were limited by problems with testing and isolation support.

  • After mass vaccination, the focus shifted to limiting hospitalizations. Vaccine passports then became a tool to encourage vaccination, despite earlier opposition.

  • The governments of the UK nations have differed on use of vaccine passports. Their future use remains uncertain.

  • No single technology can solve a problem as complex as a pandemic. Critical human judgement is still essential, as the Petrov incident illustrates.

  • In 1983, Stanislav Petrov judged a Soviet early-warning alert of an incoming US missile strike to be a false alarm and did not report it as an attack, a decision that may have prevented a retaliatory nuclear strike and nuclear war. He was reprimanded rather than thanked.

  • Meanwhile, a generation of children grew up and were able to live normal lives because of Petrov’s judgement. The author cites this as an example of how human judgement can be preferable to blind trust in computers.

  • There is now growing recognition of the need to teach technology ethics, with new courses and funding at universities in the US and UK. This can help empower people to consider ethics when creating and using new technologies.

  • The author reflects that her own computer science education in the 1990s did not cover technology ethics at all. There was a divide between technical and humanities education.

  • Young people today have grown up with technology and challenges like nuclear weapons, climate change, and surveillance. Teaching technology ethics can help them draw ethical lines around new technologies.

  • The book aims to provide a starting point for thinking about technology ethics issues, covering concepts like responsibility and intelligence as well as case studies. But there is more work to be done to create full solutions.

  • We must beware getting stuck at the problem definition stage rather than pushing for solutions. We need actionable insights and measurable progress.

  • Some “wicked problems” may be unsolvable, so we need humility about what can be achieved. But we must still act ethically.

  • Wicked problems like environmental degradation, terrorism, and poverty have complex, interrelated causes and no definitive solutions. Applying standard techniques often makes them worse.

  • Technology is often deployed to try to solve wicked social problems, but this is challenging because these problems can never be fully solved, only improved on an ongoing basis.

  • Technology itself can become a wicked problem when its harms are complex and solutions unclear. Bans may sometimes be needed, but often harms must be balanced against benefits.

  • There are many potential actions individuals and organizations can take to address technology ethics, from small steps like ethics training to larger ones like regulating data tracking.

  • With wicked problems, a systems approach rather than linear thinking is needed. Combinations of actions, continually refined, may prove most effective.

  • Taking a “Hippocratic oath” for technology could foster an ethical mindset, as could adhering to principles like autonomy, non-maleficence, beneficence, and justice. But principles alone don’t guarantee ethical outcomes.

  • Overall, promoting ethical reflection and culture is an important first step individuals can take, since humans ultimately decide where to draw the line on technology’s use.

  • Technology ethics should be considered beforehand, not just decided in the moment. Businesses, governments, and individuals all have a role to play in shaping values around technology.

  • Technology often involves “wicked problems” with no clear solutions. Continued debate and dialogue is needed.

  • Some argue technologists should take ethical oaths, similar to doctors’ Hippocratic oath. But there are challenges in adapting traditional oaths to emerging tech issues.

  • Various frameworks have been proposed for ethical tech, like avoiding harm, ensuring informed consent, and promoting justice. But applying these values involves judgment calls.

  • Public engagement and diverse perspectives are important in technology ethics. Outcomes often depend on how values are prioritized and balanced.

  • Ultimately, technology ethics spreads through culture and ideas. Each person has a role in considering the ethics of new technologies as they emerge.

Here is a summary of the key terms:

This glossary covers concepts related to technology, philosophy, ethics, logic, artificial intelligence, facial recognition, the COVID-19 pandemic, and more. Key terms include deep learning, digital humanities, ethics, existential risk, facial analysis, friction, human-centered design, identity, inclusivity, inductive/deductive arguments, interoperability, machine learning, metadata, minimum viable product, misinformation, and surveillance capitalism. The glossary provides definitions and context around these concepts, highlighting their relevance for discussions around technology, society, ethics, and policy. It covers foundational ideas in philosophy, logic, and ethics as well as emerging technologies like AI and facial recognition. Overall, the glossary serves as a reference guide for the key concepts and debates covered in the book.

  • National Institute of Standards and Technology (NIST) is a US federal laboratory that develops standards for new technology.

  • Neural networks are a machine learning technique, loosely modelled on the brain, in which a computer learns patterns from examples such as labelled data.

  • Neuro-rights are proposed human rights regarding emerging neurotechnologies that could alter what makes us human.

  • Niche tools/technologies are more expensive but tailored, while tools/technologies that scale can be used by more people.

  • The panopticon is a prison design where inmates can be watched at any time without knowing it.

  • Personal protective equipment (PPE) like masks and gloves protect healthcare workers.

  • Perverse incentives have unintended negative consequences.

  • Predictive policing algorithms try to predict crime locations or who will commit crimes.

  • A proof of concept is an early mock-up of an idea that can’t yet be launched.

  • Realpolitik refers to politics based on practical rather than moral considerations.

  • The right to an explanation gives people information about algorithmic decisions that affect them.

  • Robot refers to a machine that can automatically carry out complex actions.

  • Scale refers to a tool/technology’s ability to be used by many people.

  • STS (science and technology studies) explores the relationship between science/technology and society/culture.

  • Superintelligence refers to AI potentially surpassing human intelligence.

  • A superspreading event involves a disease spreading much more than usual.

  • Symbolic tool use involves using tools to represent something or change emotional states.

  • Technosolutionism sees technology as the solution to any problem.

  • Test-trace-isolate is seen as key to suppressing the virus.

  • Transmission rate refers to how many people each infected person infects.

  • The trolley problem is a thought experiment about who self-driving cars should kill.

  • The Turing test asks whether a machine’s responses in conversation can be distinguished from a human’s - that is, whether it can appear to think like a human.

  • Utilitarianism is the philosophy of the greatest good for the greatest number.

  • Vaccine passports prove someone is vaccinated against COVID-19.

  • Value-sensitive design considers values and stakeholders in technology design.

  • A weapon of math destruction is an algorithm that can cause mass harm.

  • Wicked problems are complex with no clear solution.

  • Rittel and Webber argue that some problems, which they call “wicked problems,” are fundamentally different from regular, solvable problems.

  • Wicked problems have many complex, interconnected causes and no single correct solution. They are difficult to clearly define and attempts to solve them often create unforeseen consequences.

  • Examples of wicked problems include environmental degradation, terrorism, and poverty. These cannot be solved using standard techniques or processes.

  • Conventional problem-solving processes not only fail to solve wicked problems, but can often make the situations worse.

  • In contrast, regular, solvable problems can be solved in a finite time by applying standard techniques.

  • Wicked problems like environmental issues are the opposite of regular problems that can be solved straightforwardly. They require fundamentally different approaches.

Here is a summary of the key people and groups thanked in the acknowledgements:

Jennifer, Aislin, Shehnaz, Azeem, Dawn, Pippa, Rich, Alexa, David, Ankur, Ely, Lyndsay, James, Danny, Richard, Chris, Nik, Mike and Dr Robert Boyce.

Hugo Warner, Maria João Paixão, Katie Dunn, William Atwell, Matt Lecznar, Mary Greer, Jim and Sophie Copeman, Genevieve Cuming, Sophie and John Power, Eduardo Plastino, James Barr, Graham Ball, Dr Mara Tchalakov, Benjamin Charlton, Andy Him, Jason Crabtree, Lisa Dittmar, Dr Maria Chen, Dr Magdalena Delgado, Professor Rana Mitter and Professor Margaret MacMillan.

The author’s family: Gene, Amanda, Jordan, Jackson and Jason; Amber and Tom; Jan and Rick; and especially her parents, Sharon and Eugene.

In summary, the author thanks colleagues, friends, advisors, family members and others who supported the writing of the book.

#book-summary