
Being You: A New Science of Consciousness (ISBN 9780571337736) by Anil Seth


Matheus Puppe


“If you liked the book, you can purchase it using the links in the description below. By buying through these links, you contribute to the blog without paying any extra, as we receive a small commission. This helps us bring more quality content to you!”



Here is a summary of the further praise for Anil Seth’s book Being You:

  • David Byrne praised the book for its wide-ranging synthesis pulling together disparate fields like philosophy, science, literature and personal experience. He found the speculation about unproven ideas the most exciting. Seth proposes to explain not just what and how we are, but why we are the way we are.

  • Professor Karl Friston called the treatment of consciousness compelling and praised the author’s gentle and inclusive arguments. He described it as a potent account of embodied sentience and selfhood.

  • Nigel Warburton recommended the book highly, saying Seth explores fundamental questions about consciousness and the self from the perspective of a philosophically informed neuroscientist.

  • Sean Carroll said it provides a wonderfully accessible and comprehensive account of how our minds capture the world and how that makes us who we are.

  • Andy Clark called it a remarkable and groundbreaking work that offers a surprising answer to what explains our consciousness and sense of self, rooted in the new science of the predictive brain.

  • Anil Ananthaswamy said the book takes readers closer than ever to understanding the experience of being conscious selves, calling it a must-read.

  • Chris Anderson praised Seth as uniquely placed to truly advance understanding of one of humanity’s deepest riddles.

  • Christof Koch called Seth one of the world’s leading consciousness researchers with a unique and refreshing take, always exciting, accessible and engaging.

  • Consciousness refers to subjective experience - there is “something it is like” to be a conscious creature from that creature’s perspective. This phenomenological aspect of consciousness is the most important aspect to define and understand.

  • Consciousness begins very early in development, likely even in the womb. Fetuses show signs of consciousness like responding to sensations and stimuli.

  • Non-human animals, including mammals, birds, octopuses and likely many other creatures, also experience consciousness due to their complex nervous systems and behaviors. There is something it is like to be them from their perspective.

  • Even simpler organisms like worms may have rudimentary forms of experience, though it is debated whether very simple organisms like bacteria are conscious.

  • Future advanced artificial intelligence systems may also become conscious if designed to have complex internal experiences, though when and how machine consciousness may emerge is uncertain.

  • Understanding consciousness requires exploring how subjective experience relates to and emerges from biological and physical processes in the brain and body as an embodied system, not just focusing on behaviors or functions alone. This relates to the nature of selfhood as well.

So in summary, the key issues are defining consciousness in terms of subjective experience, exploring its origins early in development and in non-human animals, and relating phenomenology to brain mechanisms through an embodied perspective.

  • Consciousness has often been confused with having language, intelligence or certain behaviors, but it does not depend on outward signs. Consciousness exists during dreaming and paralysis when there are no behaviors.

  • Prominent theories of consciousness like global workspace theory and higher-order thought theory emphasize functionality and behavior over phenomenology (subjective experience). They see consciousness arising when mental content can influence behavior flexibly or when there are metacognitive processes.

  • While interesting, the author will take a different approach starting from phenomenology rather than functionality.

  • The “hard problem” of consciousness is how subjective experience arises from physical systems like the brain. This is perplexing compared to “easy problems” like explaining cognitive functions, which may eventually yield to physical explanations.

  • Philosophically, views include physicalism (consciousness emerges from physical stuff), idealism (consciousness is fundamental), and dualism (mind and matter are separate). Most scientists take a physicalist view but its details are debated.

  • Functionalism, common in neuroscience, says consciousness depends on what a system does, not what it’s made of. But terms like “information processing” assume brains work like computers, which is questionable. The nature of information is also unclear.

  • The passage discusses various issues and challenges related to explaining consciousness within a physicalist framework. Physicalism holds that consciousness arises from physical processes in the brain and body.

  • It addresses challenges from functionalism, which claims consciousness depends only on functional relations rather than physical composition. This raises issues about whether consciousness could be simulated on a computer.

  • Panpsychism posits that consciousness is a fundamental property of everything in the universe, but it does not provide testable hypotheses.

  • Mysterianism argues the physical basis of consciousness may be fundamentally incomprehensible to humans, but the passage argues against giving up based on perceived limitations.

  • The “zombie” thought experiment challenges physicalism by imagining physically identical beings lacking consciousness, but the passage argues such conceivability arguments are weakened by incomplete knowledge.

  • The “real problem” of consciousness is outlined as explaining the link between specific conscious experiences and underlying physical mechanisms/processes in the brain, in contrast to other conceptualizations of the problem.

In summary, the passage critically discusses various positions on the relationship between consciousness and physical reality, arguing the most pragmatic approach is a physically-oriented but functionally agnostic view focused on explanatory mechanisms.

Here is a summary of the view on the relationship between matter and mind presented in the passage:

  • There are three main approaches to conceptualizing the relationship: the easy problem, hard problem, and real problem.

  • The easy problem focuses on explaining the functional, behavioral, and mechanistic properties associated with consciousness, but does not address how subjective experience arises from physical mechanisms.

  • The hard problem acknowledges that there is currently an “explanatory gap” between physical explanations of brain/body processes and the emergence of qualia or subjective experience. It casts doubt on whether experience can ever truly be explained physically.

  • The real problem accepts that experiences exist and aims to explain their phenomenological properties in terms of brain/body processes, rather than just establishing correlations. It seeks explanations that bridge the physical to the phenomenal.

  • The neural correlates of consciousness (NCC) approach seeks to identify minimal brain mechanisms correlated with specific experiences. However, correlations do not necessarily translate to causal explanations.

  • The real problem approach aims to build “explanatory bridges” from physical neuroscience to qualities of experience in order to dissolve the intuition that experience cannot be physically explained - similar to how mechanistic explanations for life were eventually accepted.

So in summary, the passage presents consciousness as arising from physical processes in the brain/body, but acknowledges current limits in explaining this emergence - which the real problem approach ultimately aims to address through developing causal explanatory models.

  • The philosophy of vitalism reached its peak in the 19th century and was supported by biologists like Johannes Müller and Louis Pasteur. It held that life requires a special “spark” or élan vital.

  • Vitalism is now thoroughly rejected in science. While many things about life remain unknown, the idea that life requires a supernatural ingredient has lost credibility.

  • Biologists were able to move past vitalism by focusing on practical progress - describing and explaining the properties of living systems using physical/chemical mechanisms. As details were filled in, the mystery of “what is life” faded.

  • This parallel provides optimism for consciousness research. As mechanistic explanations are found for properties of consciousness like level, content and self, the fundamental mystery of consciousness may fade, like the mystery of life.

  • However, the properties of consciousness are subjective, while the properties of life are objective. This is not insurmountable; it just means that subjective data is harder to collect.

  • The practical strategy is to study different properties of consciousness separately, rather than seeing it as one big mystery. This allows proposing mechanisms and pushes back against limiting ideas that consciousness can’t be explained.

  • Eventually, with progress, the “hard problem” of consciousness may succumb and we can understand consciousness as continuous with the natural world without arbitrary “ism” views of its relationship to physics.

  • Scientists in the 17th and 18th centuries were developing reliable thermometers and scales to measure temperature in order to understand the physical nature of heat. This required a precise, unchanging reference point for calibration.

  • Initially, a cool cellar in Paris was proposed as a fixed point since its temperature seemed constant. But factors like air pressure could still influence variables like boiling points.

  • Fahrenheit perfected mercury thermometers and established temperature scales based on reproducible states: the freezing and boiling points of water under standard conditions. This allowed consistent, systematic experiments to be conducted and caloric theories of heat to be revised.

  • Modern scientists are now seeking reliable ways to measure levels of consciousness in humans and compare it to non-conscious states. Precise scales and reference points are needed to better understand the physical nature and correlates of consciousness through experimentation, just as they were for heat. New technologies may help quantify consciousness in a way that was not previously possible.

So in summary, the historical example shows the importance of measurement, scales and reference points in advancing scientific understanding. The text suggests researchers are now attempting to develop similar standardized tools to systematically study consciousness.

  • Scientists are working on developing reliable instruments called “consciousness meters” that can measure conscious level and determine if something is conscious, similar to how thermometers measure temperature.

  • Measurement is important not just for yes/no answers but for enabling quantitative experiments that can transform scientific understanding, as shown by the history of thermometry.

  • Conscious level is distinct from physiological arousal/wakefulness. You can be dreaming/asleep yet conscious, or unconscious in a vegetative state despite being awake.

  • Neuronal activity alone does not determine conscious level. Interactions within the thalamocortical system, involving the cortex and thalamus, seem important.

  • Pioneering work by Massimini and Tononi used TMS to stimulate brain areas and EEG to record the responses. In unconscious states the responses are simple, while in conscious states complex patterns of activity spread across the cortex.

  • Their “complexity index” derived from this approach provides a single number quantifying conscious level, showing promise as a reliable consciousness meter.

  • When the brain is zapped with TMS during unconscious states, it produces an initial strong response that quickly dies off, like ripples in still water. During conscious states, the response is more complex, echoing widely across the cortical surface in intricate patterns over time and space. This implies that different brain regions within the thalamocortical system communicate in a more sophisticated way during consciousness.

  • Massimini developed a method called “zap and zip” - using TMS to zap the cortex and an algorithm to “zip” the electrical response into a complexity index number. Lower complexity during unconscious states like sleep validated the approach.

  • Massimini found this perturbational complexity index (PCI) distinguishes conscious states like the minimally conscious state from the vegetative state. It can quantify consciousness independently of wakefulness or behavior, with the potential to better diagnose disorders of consciousness.

  • Measuring spontaneous brain activity complexity, without TMS stimulation, also reliably drops in unconscious states like early sleep or anesthesia. This supports PCI and complexity measures tracking consciousness level.

  • PCI and other emerging methods migrating from labs have potential to revolutionize diagnosing borderline cases and residual consciousness in injured brains, avoiding misdiagnoses based solely on inconsistent or absent behavior. One case showed PCI detecting consciousness the clinical team initially missed.
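The “zip” step of zap-and-zip is essentially Lempel–Ziv compression of the brain’s electrical response. As a rough illustration (not the actual PCI pipeline), a general-purpose compressor can act as a proxy for signal diversity: a stereotyped response compresses well, while a diverse one does not. The function and toy signals below are invented for illustration:

```python
import random
import zlib

def lz_complexity_proxy(signal, threshold=0.0):
    """Binarize a 1-D signal around a threshold, then use the zlib
    compression ratio as a crude stand-in for Lempel-Ziv complexity.
    Higher ratios mean more diverse, less compressible activity."""
    bits = bytes(1 if x > threshold else 0 for x in signal)
    return len(zlib.compress(bits)) / len(bits)

random.seed(0)
# A regular, stereotyped response (like the simple echo of an unconscious brain)...
regular = [1.0 if i % 10 < 5 else -1.0 for i in range(2000)]
# ...versus a diverse, irregular response (like the complex conscious echo).
diverse = [random.gauss(0, 1) for _ in range(2000)]

assert lz_complexity_proxy(regular) < lz_complexity_proxy(diverse)
```

The real PCI is computed from TMS-evoked EEG recordings with careful source localization and normalization; the point here is only that compressibility gives a single number tracking how patterned versus diverse a signal is.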

  • Owen and his team repurposed a brain scanner to allow a patient diagnosed as being in a vegetative state to interact with her environment using her brain rather than her body. She was able to imagine playing tennis to answer yes and walking around her house to answer no, demonstrating that her original diagnosis was wrong and she was consciously aware.

  • Subsequent studies have used similar methods for diagnosis and basic communication with patients thought to be unresponsive but who may be consciously aware. It’s estimated that 10-20% of those diagnosed as vegetative may actually be consciously aware to some degree.

  • New methods like PCI promise to detect residual awareness without requiring mental imagery or language comprehension from patients, allowing detection of even very minimal levels of consciousness.

  • Many conscious but unresponsive people may be undiagnosed in medical facilities. Owen’s method and techniques like PCI could help communicate with these patients and better understand their condition.

This passage discusses research on the effects of psychedelic drugs on measures of brain activity and conscious level. The key points are:

  • Researchers studied changes in algorithmic complexity (a measure of signal diversity/randomness) in brain regions under psilocybin, LSD and ketamine, finding increases compared to placebo. This was the first time such a measure of conscious level increased rather than decreased with an altered state.

  • This suggests psychedelics lead to more random, “freewheeling” brain activity patterns that match reports of altered perceptual experiences on psychedelic trips. It shows measures of conscious level can be sensitive to changes in conscious content.

  • However, purely random brain activity may not actually produce conscious experience. Too much randomness could yield incoherent signals and no experience.

  • A later paper proposed consciousness requires both integrated and informative neural mechanisms, in a “middle ground” of complexity between perfect order and randomness.

  • Measures should track how information and integration are jointly expressed in the brain, not just information alone. This better links neural properties to consciousness as both integrated and informative.

  • The findings raise questions about what neural properties truly underlie conscious experience and how altered states like psychedelic trips impact the brain and consciousness.

Here are the key points about properties of experience based on the summary:

  • Integration - Conscious experiences involve the integration of information across distributed brain regions. Measures of consciousness need to directly capture both integration and information.

  • Information - Highly conscious experiences contain large amounts of information that cannot be easily compressed. Measurements of algorithmic complexity can approximate the information content of brain activity.

  • Dynamical complexity - Conscious states correlate with more complex dynamical patterns in brain activity that are between the extremes of order and disorder. Measures try to quantify this intermediate complexity.

  • Multidimensional - Conscious level or awareness may not be fully captured by a single scale or metric, but involves multiple dimensions or properties that vary across individuals and contexts.

  • Altered states - Psychedelic drugs profoundly alter conscious experience and brain dynamics, decreasing functional connectivity and information flow between regions. This correlates with heightened senses of integration and novel experiences.

  • Integration of experience - A true measure of consciousness needs to bridge neural mechanisms with universal properties of experience like integration and information content in a quantifiable way. No existing measure achieves this fully.

Here is a summary of the key points about integrated information theory (IIT) and the measure of Phi (Φ):

  • IIT proposes that consciousness is identical to integrated information (Φ), which measures how much information is generated by a system as a whole over and above the information generated by its parts independently.

  • Φ quantifies both information (how many alternative states are ruled out by the system’s global state) and integration (the system acts in an integrated, unified way rather than as separable parts).

  • A high Φ means a system is “more than the sum of its parts” in terms of information - it has a highly integrated, unified conscious experience.

  • A single element like a photodiode has low Φ since its state carries little information.

  • A distributed sensor array may carry information but has zero Φ since the individual elements act independently, not in an integrated way.

  • A “split brain” network divided into independent halves would have zero whole-system Φ even if the halves each had Φ, since the halves can be separated without information loss.

  • IIT claims consciousness is intrinsically identical to Φ - systems with high Φ experience rich, unified consciousness while those with zero Φ experience nothing at all.

  • IIT claims consciousness arises from integrated information (Φ) within a system. It measures the difference between a system’s behavior as a whole vs the sum of its parts.

  • IIT suggests a split brain would have two independent consciousnesses, not one spanning both hemispheres, since the hemispheres can be informationally separated. Similarly, individuals have their own consciousnesses, not a collective one.

  • IIT explains why the cerebellum likely has low consciousness despite its many neurons: its circuitry doesn’t integrate information well. Cortical wiring, by contrast, enables high Φ, and the loss of cortical integration during sleep or anesthesia makes Φ vanish.

  • IIT uses theoretical axioms about integration and information, rather than experimental data, to derive claims about consciousness mechanisms. It defines consciousness based on a system’s properties.

  • A major challenge is measuring intrinsic Φ, not just observable behavior. This requires knowing all a system’s possible states, which is difficult or impossible for brains without exhaustive knowledge.

  • Other challenges include identifying the right way to partition a system and determining the appropriate spatial/temporal scale for analysis when calculating Φ.

  • If correct, IIT claims any system with the right causal properties would have consciousness, regardless of composition. It leads to some unusual implications about consciousness distribution.
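The contrast between an integrated system and an independent sensor array can be illustrated with a toy calculation. The sketch below uses mutual information between the two halves of a two-bit system as a crude stand-in for Φ; real Φ involves searching over partitions and a full cause-effect analysis, so this is only a conceptual illustration:

```python
import math
from collections import Counter

def entropy(counts):
    """Shannon entropy (in bits) of an empirical distribution."""
    total = sum(counts.values())
    return -sum((c / total) * math.log2(c / total) for c in counts.values())

def integration(states):
    """Crude stand-in for Phi: mutual information between the two halves
    of each 2-bit state. Zero when the parts behave independently,
    positive when the whole carries information beyond its parts."""
    joint = Counter(states)
    left = Counter(s[0] for s in states)
    right = Counter(s[1] for s in states)
    return entropy(left) + entropy(right) - entropy(joint)

# Independent 'sensor array': each bit varies on its own -> zero integration.
independent = [(a, b) for a in (0, 1) for b in (0, 1)]
# Coupled system: the two bits always agree -> positive integration.
coupled = [(0, 0), (1, 1)]

assert abs(integration(independent)) < 1e-9
assert integration(coupled) > 0.9
```

This mirrors the photodiode-versus-camera-sensor argument: the independent array carries plenty of information in total, yet contributes nothing “over and above” its parts.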

The passage discusses some implications of integrated information theory (IIT) that could lead to seemingly strange predictions, even though the scenarios described may never actually occur.

Specifically, IIT predicts that conscious experience would subtly change even if new neurons in the brain were wired up but never actually fired, because the range of potential states the brain could occupy would increase. By the same logic, disabling already-inactive neurons so that they could not fire would also subtly alter experience, despite no change in actual brain activity.

Remarkably, new optogenetic technologies may make it possible to experimentally silence inactive neurons, providing a way to test IIT’s predictions. However, the scenarios described may never in fact be encountered or tested, since they rely on neuronal states that the brain never adopts.

In summary, the key point is that IIT claims consciousness would change based on potential, not actual, neuronal states - states that in the examples given may never occur in reality. This leads to predictions that could be difficult to experimentally assess.

  • The common view of perception is that the senses act as windows onto the world, detecting objective properties like color and shape and conveying this information to the brain to form perceptions. This is known as the “how things seem” view.

  • However, this view can be challenged. As Wittgenstein pointed out in his example of the sun orbiting the Earth versus the Earth rotating, how things seem is not necessarily how they are in objective reality.

  • The dominant view of how perception works is the “bottom-up” model, where sensory signals flow into the brain and are processed hierarchically to extract increasingly complex features, building up an inner representation of the world.

  • While this bottom-up model fits with brain anatomy and some experimental evidence, it may not tell the full story. The next part of the book will explore the idea that the brain acts as a “prediction machine”, generating perceptions based on its “best guess” of sensory inputs rather than simply processing inputs in a feedforward manner.

  • This suggests perceptions may be more of a “controlled hallucination” generated by the brain rather than a transparent reflection of objective reality captured by the senses. An alternative view to the “how things seem” perspective.

  • Wittgenstein was driving at the idea that even with a greater scientific understanding of how things work, like the solar system being heliocentric, some aspects of perception remain the same on the surface.

  • Perception appears to provide a window onto an external reality, but is actually constructed from top-down predictions and inferences by the brain based on past experiences. Sensory inputs serve to reduce errors in the brain’s predictions.

  • Earlier thinkers like Plato and Ibn al-Haytham proposed that perception involves inference rather than direct access to reality. Kant saw perception as filtering an unknowable “thing in itself” through mental frameworks.

  • In the 19th century, Helmholtz proposed perception is unconscious inference based on combining sensory signals and expectations. This idea influenced many 20th century theories, like predictive coding/processing theories that see perception minimizing prediction errors.

  • The author describes perception as “controlled hallucination” - internally generated predictions constrained by sensory inputs. Both normal perception and hallucination involve predictions, differing by degree of constraint from external causes. This blurs the line between perception and hallucination.

  • Colour perception depends not just on the light entering the eye, but on complex interactions between the light reflecting off surfaces and the ambient illumination.

  • The brain unconsciously “discounts the illuminant” to infer colors in a way that compensates for changes in light conditions. This allows an object to appear the same color even in different lighting.

  • There is no single property of “redness” in objects themselves. Color is a construct of the brain to track objects consistently across illumination changes.

  • Examples like “The Dress” photo show individual differences in how people discount illuminants can lead to varied color percepts of the same thing.

  • Visual illusions like Adelson’s Checkerboard demonstrate that context and prior expectations shape color perception, not just the light entering the eye. Our percept of an object’s shade depends on inferences about lighting conditions like shadows.

  • Perception involves the brain making its best guess of causes from sensory inputs, based on past experience, rather than directly representing the physical properties of the world. Color exists where the brain and world interact through this process of perceptual inference.
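A minimal sketch of “discounting the illuminant” is von Kries-style chromatic adaptation, in which each colour channel of the sensed value is divided by an estimate of the illuminant. The pixel and illuminant numbers below are invented for illustration; the brain’s actual inference is far richer:

```python
def discount_illuminant(pixel, illuminant_estimate):
    """Von Kries-style adaptation: divide each channel of the sensed
    RGB value by the estimated illuminant, so the inferred surface
    colour stays stable across changes in lighting."""
    return tuple(p / i for p, i in zip(pixel, illuminant_estimate))

# The same red surface sensed under white light vs bluish light.
surface_under_white = (0.8, 0.2, 0.2)   # illuminant (1.0, 1.0, 1.0)
surface_under_blue  = (0.4, 0.1, 0.3)   # illuminant (0.5, 0.5, 1.5)

a = discount_illuminant(surface_under_white, (1.0, 1.0, 1.0))
b = discount_illuminant(surface_under_blue, (0.5, 0.5, 1.5))

# After discounting, both percepts converge on the same surface colour.
assert all(abs(x - y) < 1e-9 for x, y in zip(a, b))
```

Cases like The Dress can be read in these terms: viewers who settle on different illuminant estimates divide by different values and so arrive at different colour percepts from identical pixels.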

The passage discusses examples that demonstrate how perception is influenced by top-down predictions rather than simply reflecting sensory signals. It describes a “Mooney image” which initially appears as scattered blobs but resolves into a coherent scene once the original image is seen, providing a prediction about the causes of the sensory input. Similarly, “sine wave speech” becomes intelligible once the original unprocessed speech is heard.

This shows that perception is an active, generative process where the brain builds interpretations of sensory signals based on its predictions, rather than passively receiving raw inputs. Even though the sensory signals don’t change, seeing the original image or hearing the original speech provides new predictions that alter conscious perception. This reveals perception to be a “controlled hallucination” or “proactive, context-laden interpretation” guided by top-down predictions rather than a transparent representation of the world.

  • Bayesian reasoning is a type of abductive reasoning that involves updating probabilities (beliefs) based on new evidence or data. It provides an optimal way to reach conclusions under uncertainty.

  • Priors are the initial probabilities before new data. Likelihoods represent the probability of data given different hypotheses. Posteriors are the updated probabilities after incorporating likelihoods and priors via Bayes’ rule.

  • An example involves seeing a wet lawn and determining if it rained or the sprinkler was left on. The sprinkler is initially a better hypothesis due to lower rain priors.

  • New data, like seeing the neighbor’s lawn also wet, changes the likelihoods and makes rain a better hypothesis via Bayesian updating of posteriors.

  • Reliability of data impacts updating - unreliable data has less influence. Bayesian inferences improve through continuous updating as new data becomes available in an endless cycle.

  • Bayesian inference involves using prior information and new evidence to update beliefs and make predictions. Each new observation informs the next guess.

  • If your lawn is wet two mornings in a row, your guess about the cause on the second day should take into account that it was also wet the first day. This prior information from the first observation allows you to refine your prediction for the second day.

  • Bayesian thinking has many applications, from medical diagnosis to searching. It also provides a way to understand the scientific method as an ongoing process of revising hypotheses based on new evidence.

  • Perception can be viewed as a Bayesian process, where the brain generates predictions and updates them based on sensory input and prior assumptions. Each new perception helps shape the next one. This allows the brain to continuously refine its understanding of the external world.
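The wet-lawn reasoning above can be written out as a single application of Bayes’ rule. The probabilities below are illustrative assumptions, not figures from the book:

```python
def posterior(prior, likelihoods):
    """Bayes' rule over a dict of hypothesis -> prior probability,
    given hypothesis -> P(data | hypothesis)."""
    unnorm = {h: prior[h] * likelihoods[h] for h in prior}
    z = sum(unnorm.values())  # normalizing constant P(data)
    return {h: v / z for h, v in unnorm.items()}

# Assumed priors: rain is rarer than a forgotten sprinkler.
prior = {"rain": 0.2, "sprinkler": 0.8}
# Assumed likelihoods of the new data (the neighbour's lawn is also wet):
# only rain would plausibly wet both lawns.
both_lawns_wet = {"rain": 0.9, "sprinkler": 0.1}

post = posterior(prior, both_lawns_wet)
assert post["rain"] > post["sprinkler"]  # the new evidence flips the verdict
```

Feeding today’s posterior back in as tomorrow’s prior gives the endless updating cycle the summary describes: each observation refines the next guess.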

The controlled hallucination view is a theory that builds on predictive processing and Bayesian inference to account for conscious experiences. It proposes that perception is not a bottom-up processing of sensory data, but a top-down “controlled hallucination” generated by predictive models in the brain.

Both predictive processing and the controlled hallucination view are based on the idea of prediction error minimization in the brain. By constantly minimizing prediction errors at all levels, the brain is essentially implementing Bayesian inference and approximating Bayes’ rule.

There are three key components to prediction error minimization: (1) generative models, which determine what can be perceived; (2) perceptual hierarchies, where predictions cascade across levels from abstract to concrete; and (3) precision weighting, where the brain adjusts the influence of sensory signals by changing their estimated precision/reliability, similar to attention.

Phenomena like inattentional blindness demonstrate how precision weighting can mean unattended sensory inputs have no influence on perception. Magicians and pickpockets exploit this through misdirecting attention.

Overall, perception and action are tightly coupled - perception evolves to guide effective action, not to represent the world as it is. The brain may generate actions first and then calibrate them using sensory signals to achieve goals.
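Precision-weighted prediction error minimization can be sketched for a single scalar percept: the prediction is nudged toward the sensed value in proportion to the signal’s estimated precision, so a low-precision (unattended or unreliable) signal barely moves the percept. The update rule and numbers are a toy illustration, not the book’s model:

```python
def update_prediction(prediction, sensory_input, precision, lr=1.0):
    """One step of prediction-error minimization: the prediction moves
    toward the sensed value by an amount scaled by the estimated
    precision (reliability) of the signal."""
    error = sensory_input - prediction     # prediction error
    return prediction + lr * precision * error

# A trusted signal pulls the percept quickly; an unreliable one barely moves it.
trusted, noisy = 0.0, 0.0
for _ in range(10):
    trusted = update_prediction(trusted, 1.0, precision=0.5)
    noisy = update_prediction(noisy, 1.0, precision=0.05)

assert trusted > 0.99   # high precision: percept converges on the input
assert noisy < 0.5      # low precision: the input is largely discounted
```

Inattentional blindness fits this picture as the limiting case: drive the precision weight toward zero and the sensory input stops influencing perception at all.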

  • In predictive processing, action and perception are underpinned by minimizing sensory prediction errors. Actions can help reduce prediction errors by altering sensory inputs to match existing predictions.

  • This process is called active inference - where the brain seeks out sensory data through actions that fulfill its perceptual predictions. Even complex actions like decisions ultimately involve chains of simpler bodily actions that impact sensory input.

  • Active inference relies on generative models that can predict the sensory consequences of potential actions. This allows the brain to choose actions most likely to reduce prediction errors.

  • Actions can help test competing perceptual hypotheses by gathering new sensory data. They also support long-term learning by revealing more about the causal structure of the world.

  • Remarkably, active inference views actions themselves as a form of self-fulfilling proprioceptive (body position) predictions. Actions allow predictions about body movements to override contradictory sensory evidence.

  • This underscores how action and perception are two sides of the same predictive coin. Both emerge from the brain’s optimal guessing about sensory causes through minimizing prediction errors.
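Active inference can be sketched as the mirror image of perceptual updating: rather than revising the prediction to fit the world, the system acts so that the sensed state comes to match the prediction. The toy loop below, with an invented “arm position” variable, is only a schematic of the idea:

```python
def active_inference_step(prediction, world_state, gain=0.5):
    """Active inference sketch: hold the (proprioceptive) prediction
    fixed and act on the world so the sensed state moves toward it,
    making the prediction self-fulfilling."""
    error = prediction - world_state
    return world_state + gain * error  # the action nudges the world

target_posture = 1.0   # predicted (desired) arm position
arm = 0.0              # current arm position
for _ in range(20):
    arm = active_inference_step(target_posture, arm)

assert abs(arm - target_posture) < 1e-4  # the prediction fulfilled itself
```

Side by side with the perceptual update, this shows the “two sides of the same predictive coin”: both loops minimize the same error term, one by changing the prediction, the other by changing the world.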

  • The concept of the “beholder’s share” was introduced by the Austrian art historian Alois Riegl and later developed by Ernst Gombrich. It refers to the role of the observer/perceiver in imaginatively “completing” a work of art, beyond what is physically present.

  • This idea is highly compatible with predictive theories of perception, which posit that perception involves unconscious inferential processes and top-down predictions from the brain. There is no “innocent eye” - perception always involves interpreting and classifying visual information.

  • Impressionist paintings like those by Monet, Cézanne and Pissarro attempt to remove the artist’s interpretation and present the raw materials of visual sensation, leaving space for the observer’s visual system to perform its interpretative work. This evokes the subjective experience of perception.

  • The paintings can be seen as experiments reverse-engineering how the visual system operates, from sensory input to coherent experience. They embrace the beholder’s share and phenomenology of perception.

  • The concept emphasizes the experiential, qualitative nature of perception that gets lost in purely mechanistic explanations of predictive processing using probabilities, errors, etc. It offers a phenomenological perspective.

  • The passage describes an experiment done by Yair Pinto, a former postdoc in the author’s lab, to test how perceptual expectations can influence conscious perception.

  • Using a technique called continuous flash suppression, Pinto presented participants with images of either houses or faces that were initially suppressed by flashing shapes. The images gradually increased in visibility.

  • Participants were cued beforehand with either the word “house” or “face”, creating expectations. However, the cues were only partially valid - a house might appear 30% of the time when expecting a face.

  • Results showed images appeared consciously faster and were recognized more accurately when they matched expectations. The difference was small (0.1 seconds) but reliable.

  • This supports the idea that perceptual expectations shape conscious experience by enhancing processing of expected objects. Other experiments have found similar effects using words and letters.
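The logic behind this result can be sketched with a toy Bayesian model (the numbers, the likelihood ratio, and the evidence-accumulation scheme are illustrative assumptions, not the experiment’s actual analysis): a prior favouring “face” means fewer glimpses of weak evidence are needed before a recognition threshold is reached.

```python
def steps_to_threshold(prior_face, likelihood_ratio=1.3, threshold=0.95):
    """Count glimpses of weak 'face' evidence needed until the posterior
    probability of 'face' exceeds a recognition threshold."""
    odds = prior_face / (1 - prior_face)
    steps = 0
    while odds / (1 + odds) < threshold:
        odds *= likelihood_ratio  # each glimpse favors 'face' by this ratio
        steps += 1
    return steps

cued = steps_to_threshold(prior_face=0.7)    # cued "face", face shown
uncued = steps_to_threshold(prior_face=0.3)  # cued "house", face shown
assert cued < uncued  # valid expectations speed conscious recognition
```

With these made-up numbers the cued condition crosses the threshold several steps sooner, mirroring the small but reliable head start measured in the experiment.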

  • The passage then discusses how the author took LSD recreationally and experienced vivid hallucinations, seeing patterns like faces that he could partially control. This experience supported the idea that all perception involves brain projections.

  • The author’s lab now uses VR/AR to study how perceptual priors generate experiences, building a “hallucination machine” that simulates overactive expectations through image algorithms.

  • The author developed a “hallucination machine” that uses a deep dream algorithm to project perceptual predictions onto panoramic video frames in virtual reality. This gives the video an exaggerated hallucinatory quality by making the brain’s top-down predictions more pronounced.

  • When the author tried it, the experience was like a mild hallucination with dog features emerging organically throughout the scene. The machine simulates the effects of top-down guesses about what is present.

  • Adjusting which layers of the deep learning model are fixed can generate different types of hallucinations, like fragmented parts or geometric patterns.

  • This is an example of “computational phenomenology” - using models to understand perceptual experiences like hallucinations and how they relate to mechanisms of predictive perception.
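A minimal sketch of the underlying idea (a hand-coded edge filter stands in for a layer of a trained deep network; everything here is an illustrative assumption, not the lab’s actual system): gradient ascent nudges an image so that a chosen feature detector responds more strongly, which is how deep-dream-style algorithms exaggerate perceptual predictions.

```python
import numpy as np

# Toy "deep dream" step: adjust an image to amplify the response of one
# hand-coded feature detector (a vertical-edge filter standing in for a
# layer of a trained network). Illustrative only.

rng = np.random.default_rng(0)
image = rng.random((8, 8))

def feature_energy(img):
    """Total squared response of a [1, -1] vertical-edge detector."""
    resp = img[:, :-1] - img[:, 1:]
    return float((resp ** 2).sum())

def dream_step(img, lr=0.05):
    """One gradient-ascent step on feature_energy (analytic gradient)."""
    resp = img[:, :-1] - img[:, 1:]
    grad = np.zeros_like(img)
    grad[:, :-1] += 2 * resp
    grad[:, 1:] -= 2 * resp
    return img + lr * grad  # ascend: make the detector "see" more edges

before = feature_energy(image)
for _ in range(20):
    image = dream_step(image)
after = feature_energy(image)
assert after > before  # the image now contains stronger "hallucinated" edges
```

Fixing different detectors (here there is only one) corresponds to fixing different network layers, which is why the real system can produce anything from geometric patterns to fully formed dog faces.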

  • The author argues that normal perception involves similar top-down predictive processes and can be seen as a form of “controlled hallucination”. He discusses how this can explain deeper features of perception like the perception of “objecthood”.

  • Experiments in VR further support the role of valid sensorimotor predictions in generating the experience of objecthood for virtual objects.

  • Sensorimotor predictions, or predictions about how our movements will affect sensory feedback, can influence conscious perception in measurable ways.

  • Experiments show that physical changes in the world are neither necessary nor sufficient to cause perceived changes. Perceived changes arise from inference, not direct registration of sensory changes.

  • Experiences of time also emerge through perceptual inference rather than an internal clock mechanism. Studies using neural networks and fMRI data show that estimates of time duration correlate with rates of change in visual processing, without an inner clock.
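A toy version of this clock-free account (purely illustrative; not the study’s actual network or fMRI analysis) estimates duration as accumulated change in a signal, so faster-changing scenes are judged to last longer:

```python
import random

def estimated_duration(frames):
    """Clock-free duration estimate: total accumulated change between
    successive 'frames' of a simulated visual signal."""
    return sum(abs(b - a) for a, b in zip(frames, frames[1:]))

rng = random.Random(0)
busy = [rng.gauss(0, 1.0) for _ in range(100)]   # rapidly changing scene
calm = [rng.gauss(0, 0.1) for _ in range(100)]   # slowly changing scene

assert estimated_duration(busy) > estimated_duration(calm)
```

Both simulated “scenes” have the same physical length (100 frames), yet the busier one yields a larger duration estimate, matching the finding that subjective time tracks the rate of perceptual change rather than an inner clock.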

  • New research aims to create “substitutional reality” through VR/AR to study conditions under which people experience their environment as real versus not real, and investigate disorders involving loss of reality perception. The goal is a system where people believe simulated input is real. This explores the deep structure of how perception constructs reality.

  • Wittgenstein observed that the sun still looks as if it goes round the Earth even when we know it does not. Similarly, even when we understand perceptions as controlled hallucinations, they will still seem veridical and to represent real properties of the external world.

  • Similarly, David Hume argued that causality is not a directly observable property of the world, but rather something we “project” onto it based on repeated perceptions of temporal succession. We “gild and stain” objects with internal sentiments like causality.

  • On the controlled hallucination view, perception evolved not to represent the world as it is, but to guide useful action and behavior for survival. Phenomenological properties like colors and causality seem real so we can respond quickly.

  • This “seeming real” adds to dualist intuitions about consciousness, fueling the “hard problem.” But if we realize phenomenal properties are inferential constructions, not direct correspondences, it helps dissolve the hard problem.

  • Progress is made by distinguishing different aspects of experience and accounting for them mechanistically, without need for a special “consciousness sauce.” Dissolving, not necessarily solving, the hard problem is the best approach.

  • The chapter then discusses several experiments providing evidence that perceptual experiences are influenced by top-down expectations and prior knowledge/beliefs.

  • The author’s mother experienced hospital-induced delirium while being treated for bowel cancer at a hospital in Oxford. Delirium is an acutely disturbed mental state characterized by restlessness, illusions, and incoherence.

  • Unlike dementia, delirium is usually temporary, waxing and waning over time. However, it can last for weeks.

  • Risk factors for developing delirium in the hospital include infection, major surgery, fever, dehydration, lack of food/sleep, medication side effects, and unfamiliar surroundings. Hospitals are disorienting places that can trigger delirium.

  • The author’s mother experienced intense hallucinations and delusions, thinking experiments were being done on her with her son as the “ringmaster.” She became paranoid, angry, and tried to escape. This was not like her normal self.

  • Delusions often have a twisted logic relating to the patient’s situation. The author’s fluid identity as son and doctor likely contributed to his mother’s delusion that he was involved in the “experiments.”

  • Up to 1/3 of elderly hospital patients develop delirium, and it can have long-term cognitive and health impacts if not treated. The author brought familiar items from home to help reorient his mother.

In summary, the passage describes the author’s experience of his elderly mother developing disorienting and frightening hospital-induced delirium while undergoing treatment, and the challenges this posed for her and their relationship.

  • The passage describes a thought experiment involving teletransportation technology that can scan and replicate a person exactly on Mars. This raises philosophical questions about personal identity.

  • Is the replicated person on Mars (Eva2) the same person as the original (Eva1)? On one hand they would have the same psychological continuity, but there are now two individuals.

  • The passage argues both Evas are real, as over time their identities would diverge based on different experiences and memories. This relates to how personal identity changes naturally over time for each individual.

  • The self is more complex than it seems. Elements include embodied selfhood tied to feelings of owning one’s body, emotions, and a basic feeling of being alive. There is also one’s first-person perspective and personal narrative of identity based on role, interests, etc.

  • The sense of a singular, immutable self is an intuitive bias but has been questioned by philosophers, psychologists, neuroscience showing the self can break down, and Eastern philosophies seeing no permanent self. Overall the passage aims to show the self is more multifaceted than it appears.

  • The passage discusses different aspects of selfhood, including the perspectival self (based on one’s first-person viewpoint), the volitional self (sense of free will and agency), and the narrative/personal self (based on autobiographical memories and sense of identity over time).

  • It describes Ernst Mach’s self-portrait which illustrates the perspectival self. It also discusses how experiences of selfhood can emerge prior to a developed sense of personal identity.

  • More complex emotions like regret involve the narrative self and sense of personal identity. Different aspects of selfhood interact and influence each other.

  • The social self involves how we view ourselves through the perceptions of others. It develops through social interactions and brings emotions like guilt and shame.

  • Normally these diverse aspects of selfhood feel unified, but experiments show this unity can break down in certain disorders or situations.

  • The rubber hand illusion experiment demonstrates how the sense of body ownership can shift to include a fake hand through synchronous touching. This shows body perception is flexible and inferred rather than fixed.

  • Similar virtual reality experiments can induce out-of-body experiences by manipulating visuo-tactile synchrony, suggesting the first-person perspective is also inferred rather than immutable.

  • Some people report unusual out-of-body experiences during epileptic seizures or other neurological disruptions. These experiences are thought to arise from disrupted activity in brain regions involved in balance, movement and integrating sensory information.

  • Altering these systems can lead to strange perceptions of one’s first-person perspective, even while other aspects of selfhood remain intact. Some people experience hallucinations of seeing their body from an outside perspective.

  • Advances in virtual reality are allowing researchers to artificially induce experiences similar to out-of-body experiences by manipulating perspectives between head-mounted displays. One study had people embody another person’s virtual body through sensorimotor synchronization.

  • While these illusions can be compelling, they don’t fully convince participants that they are in a different body or perspective. Susceptibility to such illusions correlates with hypnotizability, suggesting top-down expectations drive the experiences.

  • Clinical conditions like phantom limb syndrome provide stronger evidence that embodiment is a construction of the brain, as they involve far greater alterations of self-experience. Higher-level aspects of personal identity and social self are also dissociable from low-level embodiment.

  • Clive Wearing suffers from immense retrograde and anterograde amnesia, leaving him only able to recall events from around 7-30 seconds ago.

  • His autobiographical memory has been destroyed, meaning he has no episodic memory of events in his own life located in time and space.

  • His diaries show him repeatedly writing that he has just ‘woken up’ from unconsciousness, crossing out previous entries, as he has no memories of writing before.

  • However, other aspects of identity like his sense of self and ability to act voluntarily remain intact. His love for his wife also persists.

  • When playing music, Clive seems to regain a sense of wholeness and identity, described by Oliver Sacks as being “himself again and wholly alive.”

  • Overall, Clive’s condition is tragic as the destruction of his narrative self results in an erosion of his fundamental sense of personal identity and perception of self as continuous over time. His case shows how memory persistence is important for selfhood.

So in summary, Clive suffers one of the most profound amnesias documented, which annihilated his narrative self and sense of personal identity by preventing formation of new memories or recall of past events and experiences.

  • The experience of having a consistent self seems enduring even as the world around us changes. We feel like the “same old body” is always present.

  • However, our perceptions of self are constantly changing even if we don’t notice it. We become slightly different people over time through changes in our brains and bodies.

  • This “change blindness” regarding our own identity allows us to feel like the self is an immutable entity rather than a bundle of changing perceptions.

  • The exaggerated stability of our sense of self goes beyond just not noticing physical changes. It evolved not to discover or know ourselves, but to control and regulate ourselves in order to survive.

  • Our perceptions of self are designed by evolution primarily for physiological control and staying alive, not for accurately perceiving what is “out there”. Understanding this has implications for how we understand all conscious experiences.

In summary, while we experience a consistent sense of self, our perceptions are constantly changing in subtle ways we don’t notice. This evolved not for self-knowledge but for self-control and survival through physiological regulation.

Here is a summary of the key points from this section:

  • René Descartes rejected the Great Chain of Being model and instead divided the universe into just two domains: res cogitans (mind/thought) and res extensa (matter). This created problems about how the two domains interact.

  • Descartes viewed non-human animals as “beast machines” that lacked souls, consciousness, rationality, etc. Their bodies were just machines that moved automatically.

  • Later philosopher Julien Offray de La Mettrie extended this view to humans, arguing we are “man machines” too, denying any special immaterial status for the soul. This questioned religion and God’s existence.

  • The Cartesian view divides mind and life, while La Mettrie saw them as deeply connected - mind could be viewed as a property of life. Debates continue on if life and mind are continuous or discontinuous.

  • Emotions and moods are types of conscious content tied to interoceptive perception of the internal body state, like heart rate, digestion, breathing.

  • William James and Carl Lange argued emotions stem from perceiving bodily changes, not the other way around (as traditionally thought). Later appraisal theories incorporated cognitive evaluation of context.

  • The passage discusses appraisal theory, which proposes that emotions emerge from cognitive interpretations of bodily states.

  • It describes a famous 1974 study by Dutton and Aron that found men interpreted physiological arousal from crossing a rickety bridge as sexual attraction rather than fear. This supported the idea that arousal is cognitively appraised.

  • However, appraisal theory assumes a distinction between cognitive and non-cognitive processes that the brain does not actually make.

  • The author proposes “interoceptive inference” as an alternative, drawing on predictive processing theories. It treats emotions as predictions about the causes of internal signals, without needing separate cognitive evaluation.

  • Some evidence comes from experiments showing bodily perceptions depend on integration of interoceptive and exteroceptive signals. But more research is needed to directly test interoceptive inference.

  • From this view, emotions come from “inside out” rather than being shaped by external events. Understanding this helps explain how embodied self-perception is grounded in physiology.

  • Cybernetics provided insights about how biological systems can have internal “purposes” or goals, like homeostasis, providing a link between living and non-living systems.

  • Central heating systems use simple feedback control - if temperature is too low, turn heating on, otherwise turn it off (System A).

  • A more advanced system (System B) can predict how temperature will change based on house properties, weather, and adjust heating accordingly.

  • System B maintains temperature better than System A by having an internal model of the house and how temperature responds to actions.

  • An even more advanced System B could anticipate future temperature changes and preemptively adjust heating.

  • If System B has imperfect sensors, it must infer the actual temperature from sensor readings using its models. This is similar to how the brain infers states of the world from sensory signals.

  • System B’s goal is temperature regulation, not just perceiving temperature. Its perception is for control, not just knowledge, through active inference.
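The two systems can be sketched in a few lines (the one-line thermal model and all numbers are illustrative assumptions, not from the book): System A reacts to the current temperature, while System B inverts an internal model of the house to choose its action.

```python
def simulate(controller, steps=200, target=21.0, outside=5.0,
             leak=0.1, gain=2.0):
    """House leaks heat to the outside; controller sets heater in [0, 1].
    Returns mean absolute deviation from the target temperature."""
    temp, errors = 15.0, []
    for _ in range(steps):
        heating = controller(temp, target, outside, leak, gain)
        temp += leak * (outside - temp) + gain * heating
        errors.append(abs(temp - target))
    return sum(errors) / len(errors)

def system_a(temp, target, outside, leak, gain):
    """Reactive feedback: heat whenever the room is below target."""
    return 1.0 if temp < target else 0.0

def system_b(temp, target, outside, leak, gain):
    """Predictive: solve an internal model of the house for the heater
    setting that puts the *next* temperature on target."""
    heating = (target - temp - leak * (outside - temp)) / gain
    return min(1.0, max(0.0, heating))  # heater limited to [0, 1]

assert simulate(system_b) < simulate(system_a)  # model-based control tracks better
```

System B’s advantage comes entirely from its generative model: it acts on what the temperature is about to do, not just on what it currently is, which is the sense in which its perception is for control.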

  • Emotions and moods can be understood as control-oriented perceptions that regulate the body’s “essential variables” like temperature, preserving life.

  • Fear perceptions ready the body for danger through actions like running or physiological changes. The goal is control, not just perception.

  • Emotions don’t feel like objects because their goal is regulating essential variables, not just finding out states of the world or body. Their experience reflects the conditional predictions and control needed for homeostasis.

  • The core purpose of any living organism is to stay alive through physiological homeostasis. Brains evolved primarily to help regulate essential bodily variables and ensure survival.

  • Interoceptive signals provide information about the internal physiological state of the body, but the brain only has indirect access via prediction and inference. Interoceptive perception is a form of predictive control aimed at physiological regulation.

  • This process is called interoceptive inference or active inference. The brain makes predictions about the current and future internal state and acts to minimize prediction errors and fulfill predictions, both externally and internally.

  • Interoceptive inference supports anticipatory responses and actions to maintain stability through change (allostasis). Emotions and moods arise from interoceptive predictions and regulate the body.

  • The deepest level of selfhood experience is a formless perception of “just being” that predicts the ongoing viability of the physical body. All other perceptions emerge from this ground state of regulating physiological integrity.

  • Subjective experience of a stable, continuous self over time emerges from precise interoceptive predictions that become self-fulfilling. Experience of stability and reality of self similarly arise from predictive control structures oriented around physiological regulation.

In summary, the theory proposes that conscious experience of self and world arise from the brain’s predictive mechanisms for actively regulating and controlling the physical body to ensure survival as a living organism. All perception is ultimately grounded in this biological imperative.

  • The author argues that consciousness is rooted in and arises from the physiology of living organisms, not as some separate immaterial entity. Conscious experiences are shaped by and related to the body’s perceptions, internal states, and interactions with the world.

  • Thinking of consciousness as grounded in the material body breaks down the “hard problem” of how physical processes give rise to subjective experience. Our sense of self as a separate observer is just one aspect of perceptual inference, not something fundamentally apart from the natural world.

  • Theories of the body and mind as continuous, rather than separate, go back centuries but were controversial due to ideas of the immaterial soul. The author’s “beast machine” view of embodied selfhood as governed by predictive control processes echoes ancient conceptions of the soul as connected to life rather than rational thought.

  • We are better understood as “feeling machines” whose experiences emerge from regulating our internal states via interoceptive predictions, not as disembodied cognitive computers. This deepens the dissolution of dualist intuitions that see consciousness as some non-physical observer separate from the physical world.

Here is a summary of the key points about the role of the brainstem and the free energy principle from the passage:

  • The brainstem has traditionally been viewed as an “enabling factor” for consciousness, similar to a power cable for a TV. But it plays an active role in physiological regulation, leading some to suggest consciousness arises there rather than the cortex. However, most evidence links the cortex and thalamus to conscious states.

  • The free energy principle (FEP) proposes that all living systems actively resist increases in entropy/dispersion to maintain their identity over time. This means actively preserving internal order and boundaries from the external environment.

  • The FEP has very broad scope, aiming to explain many biological phenomena from single-cell organization up to evolution. It draws on diverse fields like biology, physics, statistics, neuroscience, and machine learning.

  • Understanding the FEP is challenging due to its complexity and scope. Researchers have struggled to fully comprehend Karl Friston’s articulation of the idea.

  • However, at its core the FEP is a simple notion - that living things must differentiate themselves from their surroundings to exist. The passage aims to provide a more accessible take on this fundamental principle.

In summary, the passage considers alternative views of the brainstem’s role in consciousness and introduces the free energy principle as a way to understand how biological systems maintain their distinct identity over time.

  • According to the second law of thermodynamics, all isolated physical systems trend toward maximum entropy/disorder over time. Living systems oppose this by maintaining a state of low entropy.

  • To resist entropy, living systems must occupy states they “expect” or statistically predict to be in based on their environment/circumstances. This keeps them alive and ordered rather than decomposing into disorder.

  • However, entropy of sensory states cannot be directly measured. The free energy formulation provides a measurable quantity that approximates sensory entropy.

  • Organisms minimize free energy by minimizing sensory prediction error through processes like predictive processing and active inference. This serves to keep the organism in expected low-entropy states compatible with survival.

  • By minimizing prediction error via modeling, organisms in effect maximize evidence for their own continued organized/living state of being, resisting entropy and disorder over time. This formulation provides a philosophical explanation for how life persists out of thermodynamic equilibrium.
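In standard variational notation (a textbook sketch, not the passage’s own formulation), free energy F decomposes into surprise plus a non-negative divergence term:

```latex
F(y, q) = \mathbb{E}_{q(x)}\big[\ln q(x) - \ln p(x, y)\big]
        = \underbrace{-\ln p(y)}_{\text{surprise}}
        + \underbrace{D_{\mathrm{KL}}\big[q(x)\,\|\,p(x \mid y)\big]}_{\ge 0}
```

Because the KL term cannot be negative, F is always an upper bound on the surprise of the sensory states y. Minimizing F therefore keeps the long-run average of surprise (the sensory entropy the passage describes) low, using only quantities the organism can actually evaluate.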

  • The Free Energy Principle is not directly testable but motivates and facilitates interpretation of theories like predictive processing that can be experimentally evaluated. It provides a conceptual framework rather than a falsifiable theory.

The passage argues that theories of embodied cognition and control-oriented perception, as well as the “beast machine” theory, can be understood through the lens of the free energy principle (FEP). The FEP provides three benefits:

  1. It grounds these theories in fundamental physics, specifically the second law of thermodynamics. This makes the theories more compelling and integrative.

  2. It retells the theories in reverse order, starting from basic existence and moving outward. This strengthens the intuition that the underlying story is coherent.

  3. It brings a rich mathematical toolbox that can be used to further develop and test the ideas. For example, it suggests we should seek new sensations to reduce future uncertainty, making us curious agents.

However, the FEP is not a theory of consciousness per se. Like predictive coding theories, it is a theory “for consciousness science” that aims to explain phenomenology mechanistically, not address the “hard problem” directly. Theories like the FEP and integrated information theory currently do not interact much, but experiments comparing their predictions may help integrate insights over time. The “beast machine” theory aims to bring together insights from both.

  • The debate over determinism vs free will is not as important once we reject the idea of “spooky” free will that magically intervenes in the causal flow of events. Determinism does not preclude the experience of free will.

  • Libet’s experiments on voluntary action showed neural readiness potentials starting hundreds of milliseconds before conscious intention or urges to act. This seems to undermine free will by suggesting actions are initiated unconsciously.

  • However, Schurger later realized readiness potentials may simply be fluctuations in brain activity that occasionally cross a threshold to trigger actions, rather than signatures of intention. They are seen preceding actions just because that’s when we look for them.
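Schurger’s reinterpretation can be reproduced in a toy simulation (all parameters are illustrative): accumulate pure noise, trigger an “action” whenever a threshold is crossed, and average the trials time-locked to that crossing. A readiness-potential-like ramp appears even though nothing in the model intends anything.

```python
import random

def trial(rng, leak=0.05, noise=0.3, threshold=2.0, max_t=5000):
    """One 'spontaneous action' trial: a leaky accumulator driven only by
    noise; the action fires when the threshold is first crossed."""
    x, trace = 0.0, []
    for _ in range(max_t):
        x += -leak * x + rng.gauss(0, noise)
        trace.append(x)
        if x >= threshold:
            return trace
    return None  # no crossing within this trial

rng = random.Random(1)
window = 100  # samples kept before each crossing
traces = []
while len(traces) < 200:
    t = trial(rng)
    if t is not None and len(t) >= window:
        traces.append(t[-window:])

# Average time-locked to the crossing: a slow ramp toward the "action",
# produced by nothing but selectively averaged noise.
avg = [sum(tr[i] for tr in traces) / len(traces) for i in range(window)]
assert avg[-1] > avg[0]  # the readiness-potential-like ramp
```

The ramp is an artefact of looking backwards from the crossing: averaging only the noise trajectories that happened to reach threshold guarantees a rising curve, which is exactly why readiness potentials precede actions without being signatures of intention.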

  • This supports viewing experiences of volition as forms of self-perception rather than causal factors. We feel intentions match our desires even though we don’t choose desires. We feel able to do otherwise even though given causes we may not be. Actions feel internal rather than imposed. So free will remains an experiential phenomenon even if not fundamental to causality.

  • Voluntary actions feel voluntary when we perceive they are caused from within by our beliefs, goals and desires, rather than external factors, and that we could have potentially acted differently.

  • Volition emerges from a distributed network in the brain implementing control over our many “degrees of freedom.” This network includes a “what” process selecting actions, a “when” process timing actions, and a “whether” process allowing cancellation.

  • Perceiving our control over actions through this network is what gives rise to the subjective experience of free will. However, free will is not an immaterial source of causation.

  • Experiencing volition is useful for guiding future behavior - it flags actions so we can learn from outcomes and potentially act differently next time. The feeling of alternative possibilities supports this future orientation.

  • While “spooky” free will is illusory, we do have a real capacity for voluntary control thanks to our brain’s management of degrees of freedom. However, this can be undermined by injuries or conditions. In summary, whether free will is an illusion depends on how it is defined.

  • The passage discusses difficult ethical and legal questions raised by cases where brain abnormalities like tumors seem to influence or cause violent/criminal behavior, as in the case of Charles Whitman. It questions whether such individuals should be fully responsible if their actions were influenced by biological factors outside their control.

  • It notes that as we understand more about the brain basis of behavior and voluntary control, the concept of completely free and uncaused will becomes harder to justify. However, it argues experiences of intending and willing action are still real and important for guiding behavior.

  • The passage discusses Benjamin Libet’s famous experiments showing unconscious brain preparations for action precede conscious awareness of willing that action. However, it notes this doesn’t undermine the experience of willing action or its role in controlling complex environments.

  • It argues volition should not be seen as an illusion but as perceptual inferences that fulfill intentions and allow learning from past actions. Experiences of intending action play a vital role in human survival and navigating the world.

  • In conclusion, it suggests exploring voluntary control and possibly consciousness in other animal species to better understand the origins and degrees of “free will” seen in humans.

  • From the 8th century to the mid-1700s, European ecclesiastical courts would try and prosecute animals for their actions, such as pigs being hanged for eating children. This shows that medieval thinkers viewed animals as having conscious minds, unlike Cartesian dualism which viewed animals as machines lacking consciousness.

  • While larger animals could be tried in court, smaller infestations like rats or locusts were issued written orders to leave an area. This still acknowledges some level of will or decision making in animals.

  • Animal consciousness likely exists in non-human species but will differ from humans. Tests for consciousness should not rely only on intelligence or language, as consciousness is distinct from intelligence.

  • The author believes all mammals are conscious given shared brain structures, activity patterns during sleep/wake states, and effects of anesthesia. However, conscious contents and experiences of selfhood will vary significantly between species due to perceptual and cognitive differences.

  • Tests of self-recognition in mirrors have shown some non-human species like great apes, dolphins and elephants can recognize themselves, indicating a level of self-awareness beyond just responding to stimuli.

In summary, the passage discusses the history of attributing consciousness to animals, cautions against solely equating it with intelligence, and argues mammals likely possess consciousness based on shared brain and sleep mechanisms, while contents of consciousness vary greatly between species.

  • The mirror self-recognition test developed by Gordon Gallup Jr. in the 1970s is used to test self-awareness in animals. It involves marking an anesthetized animal in a way it can’t see, then seeing if it looks at the mark in a mirror rather than just the mirror image.

  • Among mammals, great apes like chimpanzees, some dolphins/killer whales, and one elephant have passed the test. Many other mammals like pandas, dogs, and monkeys have failed.

  • Animals could fail for reasons other than lack of self-recognition, like disliking mirrors or not understanding them. New versions of the test are being developed tailored to different species.

  • While monkeys have failed the mirror test, spending time with them gives a compelling impression of other conscious selves. Videos also show emotions like indignation over unfairness. However, they are not “furry little people.”

  • Spending a week with octopuses in a lab left the author with a sense of their intelligence and consciousness being very different from humans, as their last common ancestor was over 600 million years ago. Their minds evolved independently without a backbone. This gives a glimpse of what an alien mind may be like.

Here is a summary of the key points about octopus consciousness and cognition from the passage:

  • Octopuses have a decentralized nervous system, with some of their brain situated in their arms rather than a central brain. Their brains lack myelin insulation found in mammals. This suggests their consciousness may be more distributed without a single center.

  • Octopus genetic information can undergo significant editing of RNA sequences before being turned into proteins, in effect letting them rewrite their genetic instructions on the fly without altering the underlying genome. This may underlie their impressive cognitive abilities.

  • Octopuses show intelligence in tasks like retrieving hidden objects, maze navigation, problem-solving, and observational learning. They have been observed using tools like shells for disguise.

  • Their camouflage abilities are aided by chromatophore sacs in their skin that can precisely match color and texture of surroundings, despite being colorblind. Some control may happen locally without the brain’s input.

  • Their decentralized nervous system and highly flexible arms pose challenges for experiencing a unified sense of body ownership like mammals. But they can distinguish self from non-self through a taste-based self-recognition system.

  • This decentralized nature may mean experiences of embodiment are hazy for the whole octopus but potentially present even for detached arms. Their consciousness is likely more distributed and different from our own.

  • The Cartesian view traces consciousness to physiological regulation and preservation of the organism. This suggests looking for evidence of awareness in how animals respond to painful events.

  • Studying animal consciousness is scientifically sensible and ethically motivated. Decisions about animal welfare should be based on their capacity for pain and suffering, not human-centered ideas of cognition.

  • Many vertebrates show adaptive responses to pain like tending injuries. Even zebrafish will accept pain relief in exchange for an unpleasant environment after injury.

  • Insects have pain relief systems but their harder bodies may experience less pain. Fruit flies show responses to injury resembling chronic pain in humans.

  • Anesthetics work across animals from microbes to primates, suggesting shared bases of consciousness. However, definitively proving consciousness in other species is difficult.

  • While some basic awareness may exist even in small-brained animals, intelligence can likely exist without consciousness. The possibility of “smart without suffering” machines leads to questions about artificial consciousness.

  • Studying animal minds humbles human assumptions and motivates minimizing all forms of suffering wherever they appear in nature. Consciousness is more about being alive than intelligence.

  • The assumption that machines could become conscious stems from two unsupported claims: 1) functionalism, which says consciousness depends only on input-output functions and not physical substrate, and 2) the idea that intelligence and consciousness are intrinsically linked and consciousness will emerge with advanced intelligence.

  • Functionalism being true is not sufficient for consciousness - information processing alone does not guarantee it. And intelligence is neither necessary nor sufficient for consciousness.

  • Conflating consciousness with intelligence reflects anthropocentrism - we assume our own experiences of being both intelligent and conscious map onto all systems.

  • Worries about conscious AI gaining control or turning on humans stem partly from exponential growth assumptions about technological change and impacts on jobs. They are fueled by popular science fiction tropes.

  • Claims that machines displaying any learning or goal-directed behavior are conscious represent an overextension of the term and don’t stand up to scrutiny.

  • While conscious AI cannot be ruled out, the common assumptions that it is imminent or that we know how it might arise are not well-justified given our current understanding of consciousness. More caution is needed in these claims and their implications.

  • The figure depicts consciousness and intelligence as separate, multidimensional concepts. Current AI systems are low on the intelligence scale and lack consciousness.

  • Researchers aim to develop artificial general intelligence similar to human intelligence, but this does not necessarily mean machines will become conscious. Different forms of intelligence may exist without consciousness.

  • Simply making computers smarter will not make them sentient. However, machine consciousness is not ruled out as a possibility with the right design. Different theories propose what would be required, like certain types of information processing or integrated information.

  • The beast machine theory connects consciousness to biological processes that maintain an organism’s physical integrity. A sophisticated robot mimicking these processes may appear intelligent and sentient externally but likely would not be conscious due to lacking the material underpinnings of life.

  • Even without conscious machines, advanced AI that appears conscious could still raise concerns. Films like Ex Machina show how tests of machine consciousness say more about how humans perceive and relate to machines than about the machines themselves. Sophisticated human-machine interactions may elicit feelings of the machine’s sentience without it actually being conscious.

The passage discusses the challenge of ascribing consciousness to artificial intelligence. It notes that while chatbots have passed simple versions of the Turing test by fooling humans, truly demonstrating human-level intelligence remains difficult.

The term “The Garland Test” is mentioned as gaining traction, referring to how the sci-fi story Ex Machina established a standard for showing a machine has genuine subjective experience (consciousness). Despite improvements like GPT-3, passing sophisticated versions of the Turing Test or the Garland Test remains an open challenge.

The passage then discusses roboticist Hiroshi Ishiguro’s humanoid “Geminoid” robots and the uncanny valley effect, where near-human machines elicit discomfort. Advances in deepfakes and virtual beings may help machines escape the uncanny valley by appearing fully human.

While some thinks limits exist, the passage argues convincingly passing tests for intelligence and consciousness is likely as technology progresses. Two open questions are if virtual beings can impact the real world, and if we will feel they are conscious even knowing they are code. This could psychologically impact humanity.

Finally, the passage notes the rise of AI sparks ethics discussions, from economic impacts to risks of bias and control. Discussions of machine consciousness ethics are important preemptively to shape how humanity views and treats sophisticated non-human agents in the future.

  • The prospect of machine consciousness is alluring but also raises significant ethical concerns that need consideration. Simply pursuing it blindly for purposes of recreation or progress could be misguided.

  • Creating artificial forms of subjective experience would pose unprecedented moral issues, as any entity with consciousness would require moral status and consideration of its potential suffering. But determining the nature of a machine’s consciousness could be challenging.

  • Emerging biotechnologies like organoids approach consciousness in a more material way than computers and cannotrule out primitive forms of awareness. “Farms” of organoids raise issues of scale.

  • Some see machine consciousness as enabling a “techno-rapture” where minds are uploaded and immortalized. But this discounts our biological nature and could foster detachment from nature.

  • The possibility of machine consciousness does not necessarily contradict a view of consciousness as grounded in biological organisms. Understanding it rightly places us more within, not apart from, nature. Preventative ethics are needed to guide any research in this area.

  • Humberto Maturana developed the theory of autopoiesis, which describes how biological systems can maintain and reproduce themselves through internal production of physical components.

  • Autopoiesis suggests a continuity between life and mind, implying there is more to mind and consciousness than just a system’s behavior.

  • The author met Maturana in 2019 in Santiago, Chile to discuss these ideas.

  • Turing’s original imitation game test involved differentiating a human from a machine pretending to be human. Shanahan later coined the “Garland test” while researching concepts related to embodiment and inner experience.

  • Chatbots have claimed to pass the Turing test through noisy promotional tactics, but others argue the humans were failed or the tests showed software’s lack of true intelligence.

  • Large language models like GPT-3 can generate coherent text but arguably do not demonstrate real understanding. Their abilities raise ethical issues if uses are not carefully constrained.

  • Techniques like deepfakes that synthesize realistic images/video also raise ethical concerns if abused. Careful regulation may be needed for technologies that manipulate human perception at scale.

  • Many emerging technologies involving AI, robotics, synthetic biology raise difficult questions about how to ensure they are developed and applied safely, ethically and for the benefit of humanity. Ongoing discussion and debate are important to help guide research responsibly.

  • The author had the opportunity to observe brain surgery on a child. They were in awe at the complexity and intricacy of the brain tissue visible through the microscope. It gave them a visceral understanding of how little we can comprehend from abstract knowledge alone.

  • After the successful surgery, the author reflected on the “hard problem” of consciousness - why and how physical brain processes give rise to subjective experience. Philosophy has proposed various answers but science requires building explanatory links between mechanisms and experience.

  • The “real problem” approach seeks to identify correlations between brain activity and experience, and explain them without dismissing experience. This echoes how life was explained by mechanisms after dismissing vitalism.

  • Predictive processing theories propose perception and selfhood emerge from the brain’s predictions and inferences. Experience constructs our world through hypotheses rather than accurately representing reality. The self also arises from internal predictions.

  • Ultimately, consciousness and mental life are shaped by the drive for physiological regulation and survival. This connects mind and life scientifically while distinguishing both from artificial intelligence. It reshapes our understanding of consciousness and our place in nature.

  • By investigating these phenomena, science continually expands our understanding while displacing human exceptionalism. It aims to solve not one “hard problem” but gain insights by empirically linking brain and experience. This has practical implications for conditions of disturbed experience.

  • The study of consciousness has the potential to shed light on our inner mental lives and experiences of the world. Understanding consciousness can give us new insights into how our inner selves are connected to the natural world.

  • Following the path of studying consciousness will lead us to understand many new things about conscious perceptual experiences and how we experience ourselves from within.

  • It may also help us make peace with the mystery of what happens or doesn’t happen when our conscious experience ultimately comes to an end in death. Some mystery may remain even after death.

  • Overall, pursuing the study of consciousness promises to deepen our understanding of the mind and experience, even if some open questions remain in the end. The journey of exploring consciousness is a personal one for each individual.

Author Photo

About Matheus Puppe