
Being There: Putting Brain, Body, and World Together Again (Andy Clark)


Matheus Puppe

· 70 min read

Summary of the preface and groundings sections:

  • The preface introduces the idea that intelligence arises from the interactions of mind, body, and world. It argues against a Cartesian view of mind as separate from body and environment. It suggests we should study how brains guide embodied action in the real world, not just solve abstract problems or puzzles.

  • The groundings section acknowledges several influences and precursors for the book’s perspective:

  • Phenomenology (Heidegger, Merleau-Ponty) emphasized mind’s embodiment and environmental embedding.

  • Soviet psychology (Vygotsky) focused on the role of action in cognition.

  • Piaget studied how children’s cognition develops through interaction and exploration.

  • Some cognitive science (Varela, Winograd, Flores) has looked at embodied and situated cognition.

  • Dreyfus critiqued classical AI for ignoring the body and environment.

  • Connectionism, neuroscience, and robotics have recently made progress in modeling embodied, environmentally embedded intelligence.

  • The author has been influenced and inspired by philosophers like Churchland and Dennett, and roboticists like Brooks.

  • The book aims to provide an integrative perspective on mind, brain, body, and environment.

The author argues that the CYC project is fundamentally flawed in its attempt to create artificial intelligence and understanding. The reason is that CYC lacks adaptive responses to the real world and a fluent coupling between the system and the environment. Even simple creatures like cockroaches display a kind of practical intelligence and robust flexibility that current computer systems lack. The author says that intelligence and understanding arise not from manipulating explicit knowledge structures but from tuning responses to the real world in a way that allows embodied creatures to sense, act, and survive.

The author acknowledges that this view is not new and that philosophical critics of AI have long argued that intelligence requires situated reasoning by embodied agents acting in the real world. However, the alternative to the disembodied manipulation of explicit knowledge is not a retreat from science but a pursuit of even harder science. The author points to research in neural networks, cognitive neuroscience, and simple robotics as providing evidence for an alternative vision of biological cognition focused on dynamics and feedback loops rather than logic and filing cabinets.

In summary, the key ideas are:

  1. Practical intelligence requires adaptive responses and fluent coupling to the real world.

  2. Even simple creatures like cockroaches display this kind of practical intelligence in a way that current AI systems do not.

  3. Understanding arises from tuning responses to the real world, not manipulating explicit knowledge.

  4. Situated, embodied reasoning is required for intelligence.

  5. The alternative to disembodied reasoning is new interdisciplinary science, not a retreat from science.

  6. This new science points to a vision of cognition based on dynamics and feedback rather than logic and data storage.

• The classical view of intelligence as a disembodied logical reasoning system is challenged by studies of biological cognition, which requires real-time responsiveness and embodied action in the real world. The cockroach escape response is a good example of such a challenge to the classical view.

• Biological agents achieve intelligent behavior not through accessing explicit symbolic knowledge and logical reasoning but through coupling with their environments and real-time responsiveness. The cockroach escape behavior shows this, as its responses are too fast to be based on explicit reasoning and knowledge access.

• New sciences of embodied cognitive systems are needed to understand intelligent behavior in biological agents. These sciences study how agents are coupled with their environments and generate appropriate real-time responses. Simulations and neural network modeling have been used to model cockroach escape behavior in this embodied and environmentally embedded way.

• The cognitive science focus on abstract reasoning and the mind as a logical reasoning system needs to shift to understanding how minds control bodily action in the real world. This requires understanding real-time responsiveness, embodiment, environment embeddedness, and adaptive behavior. The cockroach example shows the need for this shift in focus.

• There are layers of assumptions in traditional cognitive science, including the distinction between perception and cognition and the idea of executive control centers in the brain, that need to be questioned. A view of rationality as disembodied logical reasoning also needs to be rethought. Studying how minds control adaptive behavior in the real world can help in questioning these assumptions.

• The methodological focus on studying the mind and brain in isolation needs to shift to studying adaptive behavior that emerges from the coupling of minds, bodies, and environments. Interactions with the real world and opportunities for action must be considered. The cockroach example shows why this methodological shift is needed.


• Dante II is an autonomous robot that explored an active volcano in Alaska in 1994. It is part of a project to develop autonomous robots to explore other planets.

• Autonomous robots for space exploration need to be able to operate independently without constant communication with humans on Earth. They need to be able to sustain themselves, handle unexpected problems, and withstand damage. Developing such robust autonomous robots leads to rethinking ideas about adaptive intelligence.

• Early examples of autonomous robots were Elmer and Elsie, built around 1950 by W. Grey Walter. They were simple, but their behavior seemed purposeful. Herbert was an early robot built at MIT that collected empty soda cans in a lab. It used a subsumption architecture, with layers of competing behaviors triggered and arbitrated by the environment (a minimal sketch of this layered style follows this list).

• Attila is a small walking robot that uses multiple mini-brains, or finite state machines, to control its leg movements and navigate uneven terrain. It can walk, avoid obstacles, and explore on its own.

• In general, autonomous robots are moving from purely reactive behaviors to having some capacity for planning, learning, and making predictions. They are still limited but are becoming more flexible, adaptable, and robust.
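To make the subsumption idea concrete, here is a minimal sketch of a layered controller in the spirit of Brooks's architecture. The sensor names and behaviors are invented for illustration; this is not Herbert's or Attila's actual control code.

```python
# Subsumption-style control: higher-priority layers preempt ("subsume") lower
# ones; there is no central planner or shared world model. All sensor keys
# and behaviors here are hypothetical.

def avoid(sensors):
    """Highest priority: steer away from imminent obstacles."""
    return "turn_away" if sensors["obstacle_close"] else None

def collect(sensors):
    """Middle layer: approach a detected soda can."""
    return "approach_can" if sensors["can_visible"] else None

def wander(sensors):
    """Lowest layer: default exploratory behavior."""
    return "wander"

LAYERS = [avoid, collect, wander]  # ordered from highest to lowest priority

def control_step(sensors):
    for layer in LAYERS:
        action = layer(sensors)
        if action is not None:   # the first layer with an opinion wins
            return action

print(control_step({"obstacle_close": False, "can_visible": True}))  # approach_can
```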


  • The simulated cockroach Periplaneta computatrix uses a neural network controller for hexapod locomotion. Each leg has its own pattern generator, but the generators are coordinated through inhibitory links. It can demonstrate different gaits, such as the tripod gait or a metachronal wave.

  • Although just a simulation, its locomotion circuit has been implemented in a real robot hexapod. The hexapod can traverse rough terrain by carefully exploring footholds. This demonstrates insect-level intelligence.

  • A brachiation robot learns to swing from branch to branch using Q-learning, a form of reinforcement learning (implemented in that work with a neural-network function approximator). It learns the value of actions in different situations so as to maximize a reward signal. The trained robot can successfully brachiate and recover from misses (a minimal sketch of the Q-learning update follows this list).

  • The humanoid robot COG has many degrees of freedom and onboard processors for motor control and sensory processing. Its “brain” is composed of multiple nodes that can communicate in a limited fashion. It lacks a central shared memory or executive control, instead solving real-time problems through its embodied interactions.

  • These artificial agents reject a central planner with a complete world model. They avoid the “representational bottleneck” and time costs of translating between central representations and motor outputs.

  • However, internal models, maps and representations still have a role to play. They should not be rejected entirely, but used where local, limited representations can aid in embodied problem-solving. Representations and embodiment can be combined in an integrated system.

  • In summary, the new robotics revolution shows that adaptive success can be achieved without a central planner or detailed world model, through the coordinated problem-solving of multiple quasi-independent devices. But representations still have an important role when used judiciously.
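For reference, the value-update rule at the heart of Q-learning can be sketched in a few lines. The tabular version below is a simplified stand-in for the function-approximator version used in the brachiation work; the states and actions are placeholders.

```python
import random
from collections import defaultdict

# Tabular Q-learning (Watkins): learn the value Q(state, action) from reward.
ALPHA, GAMMA, EPSILON = 0.1, 0.9, 0.1
Q = defaultdict(float)   # Q[(state, action)] -> estimated long-run value

def choose_action(state, actions):
    if random.random() < EPSILON:                       # explore occasionally
        return random.choice(actions)
    return max(actions, key=lambda a: Q[(state, a)])    # otherwise exploit

def update(state, action, reward, next_state, actions):
    best_next = max(Q[(next_state, a)] for a in actions)
    # Nudge the estimate toward (reward + discounted best future value).
    Q[(state, action)] += ALPHA * (reward + GAMMA * best_next - Q[(state, action)])

# One hypothetical learning step after a successful swing:
update("hanging", "swing_early", 1.0, "caught_branch", ["swing_early", "swing_late"])
```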

The criticism of autonomous agents that employ integrated, symbolic models of the world is apt in some cases but an overgeneralization, for several reasons:

  1. The human brain does integrate information from multiple senses at times, such as when making eye saccades or manipulating objects in the dark. The brain also uses internal models, like emulators, that generate predictions to facilitate quick motor responses, showing that internal models are not always slow or prohibitive.

  2. Organisms, including humans, do not represent the entire, complex world in their brain. Instead, they focus on and become attuned to aspects of the environment that are specifically relevant or important to them, which reduces computational demands. This is known as “niche-dependent sensing.” For example, the tick only perceives aspects of the environment necessary to drop onto and feed from mammals. Similarly, humans perceive details in the environment biased toward human interests and needs.

  3. Human perception may be more limited than our experiences suggest. Although it seems we perceive a detailed, enduring model of the world, our perception may actually depend more on cues in the immediate environment than stable internal representations. For example, to catch a ball, we likely do not have an internal model of the ball’s trajectory but instead detect cues about its path and speed in each moment to guide our movement, giving the illusion of perceiving the entire, complex scene. Overall, the brain tends to take computational shortcuts, like relying on cues in the immediate environment, as much as possible.

In summary, the key ideas are:

  1. Internal models and multisensory integration do not necessarily hinder real-time success. Emulators can facilitate quick motor responses.

  2. Perception and cognition are highly niche-dependent. Organisms attune to specific, limited aspects of the world that matter to them.

  3. Human perception may be more cue-dependent and less model-based than it seems. We perceive less rich, enduring representations of the world than our experiences suggest.

  4. Recent research suggests that intercepting a moving object may not require actively computing its trajectory. A simpler strategy of running so as to keep the angular acceleration of the object’s elevation in your visual field at zero can achieve interception effectively. This avoids computational costs and relies on detecting only the minimal parameters required for the task (a toy simulation of this strategy follows this list).

  5. The “animate vision” approach suggests everyday visual problem solving uses many specialized routines and tricks rather than building a detailed 3D model of the world. Strategies like rapid saccades, using peripheral vision for coarse cues, and “using the world as its own model” by sampling visual information as needed can support adaptive behavior without high computational costs. Though we experience a detailed visual world, this may be an illusion created by our ability to rapidly gather visual information on demand.

  6. An analogy is our sense of touch - we don’t perceive “holes” between our fingertips because we are accustomed to exploring surfaces by moving our hands. We treat fragments of perception as guiding further exploration rather than indicating holes in the world. Perception may be an “action-involving cycle” of probing the world rather than passive reception of sensory information.

  7. Evidence that we do not have a “picture perfect” visual world comes from experiments showing we often don’t notice changes to scenes made during saccadic eye movements or to text outside our focal vision. Our visual experience seems to be a sequence of “visual semi-worlds” or “partial representations per glimpse” rather than a complete internal model.

  8. The “New Robotics” approach of building systems without central control or symbolic representations that nonetheless exhibit coherent, adaptive behavior poses two problems: 1) Discovering the fragmentary mechanisms that could underlie such systems without relying on human intuitions. 2) Maintaining coherence as systems become more complex and diverse. Solutions may involve learning from nature, simulated evolution, and learning.

  9. The motto “Fast, cheap, and out of control” captures the vision of fluent, robust systems that emerge from interactions between components and environment. Though seeming complex, human intelligence may also rest on ecological tricks and strategies rather than central world models. We must look to nature and simulation to gain insight into such unintuitive systems.
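A toy simulation can make the interception strategy of point 4 concrete. The simulated fielder below never computes the ball's trajectory; it only adjusts its running speed to keep the rise rate of the elevation cue constant (i.e., to null the cue's acceleration). All gains and initial conditions are invented for illustration.

```python
# Outfielder heuristic, toy version: run so the tangent of the ball's
# elevation angle rises at a constant rate. If the cue accelerates, the ball
# will overshoot you (back up); if it decelerates, it will fall short (run in).

g, dt = 9.81, 0.01
bx, bz, bvx, bvz = 0.0, 0.0, 15.0, 20.0    # ball: lands roughly 61 m downrange
fx, fv = 45.0, 0.0                          # fielder starts 45 m out, at rest
prev_cue = initial_rate = None
t = 0.0

while bz >= 0.0:
    bx += bvx * dt; bz += bvz * dt; bvz -= g * dt      # ballistic flight
    cue = bz / max(fx - bx, 0.5)                       # tangent of elevation angle
    if prev_cue is not None:
        rate = (cue - prev_cue) / dt
        if initial_rate is None:
            initial_rate = rate
        fv = 5.0 * (rate - initial_rate)   # speed up / back up to hold the rate
        fv = max(-9.0, min(9.0, fv))       # cap at a sprint
    prev_cue = cue
    fx += fv * dt
    t += dt

# In this toy setup the two positions should roughly coincide at landing.
print(f"ball lands at x = {bx:.1f} m; fielder reaches x = {fx:.1f} m at t = {t:.1f} s")
```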

  • Action loops refer to the intricate interplay between perception, cognition, and action. Our knowledge and problem solving emerge from the dynamic interactions between mind, body, and environment.

  • Research on infants’ responses to visual cliffs and slopes shows that infants acquire knowledge about the world in an action-specific manner. Their understanding depends on their current abilities and mode of locomotion (crawling vs. walking). Knowledge does not simply transfer from one action context to another.

  • Similar action-specificity is found in adults. Experiments with perceptual adaptation to sideways-shifting lenses show that people can adapt their perceptual-motor skills for a specific task (e.g. overhand dart throwing) but this adaptation does not transfer well to other tasks (e.g. underhand throwing or using nondominant hand). Adaptation seems tied to particular action loops.

  • Overall, these findings suggest that cognition emerges from the interdependent interactions between perception, action, and the environment. Knowledge and skills tend to be grounded in specific action contexts. The mind, the body, and the world are deeply intertwined.

Development can be characterized as “soft assembly” rather than following a predetermined “blueprint.” Soft assembly refers to development that is robust and adaptive to changes and individual differences. It emerges from the interaction of multiple internal and external factors, rather than being orchestrated by a single central mechanism.

Key factors in development include:

  • Bodily growth and maturation, including changes in physical abilities and constraints (e.g. leg mass affecting stepping in infants)

  • Environmental and experiential factors (e.g. presence of treadmills or water affecting infant stepping)

  • Learning and cognitive development

  • Historical and individual factors leading to variation between individuals

This multi-factor, interactive view suggests that development is decentralized rather than centrally controlled. Complex behaviors like walking emerge from local interactions between components, not according to a predetermined plan. This allows for flexibility, contextual adaptation, and compensation for changes. Soft assembly and decentralized solutions give rise to development that is robust yet still shaped by individual experiences and variations.

In summary, development can be seen as the gradual self-organization of cognitive, physical, and behavioral abilities through the multi-directional interaction of internal and external factors. There is no predetermined blueprint or central control mechanism dictating a fixed progression.

  • Soft assembly: Solutions emerge from the interactions of multiple, decentralized components rather than being centrally designed. This yields a mix of robustness and variability tailored to context.

  • Developmental problems differ for each child based on their intrinsic dynamics (e.g. activity level, muscle tone). The CNS learns to modulate parameters to harness these dynamics and achieve goals like reaching.

  • The CNS treats the body and environment as a set of springs and masses. It modulates factors like limb stiffness so energy combines with intrinsic dynamics to yield desired outcomes. Mind, body and world jointly determine behavior.

  • Scaffolding: Solutions often “piggyback” on reliable environmental properties. The CNS solves problems by assuming a backdrop of intrinsic bodily and environmental dynamics. Examples include using spatial layouts as memory aids and utensils that reduce degrees of freedom.

  • The environment reduces computational load, as per the “007 Principle”: Use the world as much as needed to get the job done, but no more. The world is its own best representation.

  • This contrasts a “mind as mirror” (detailed inner models) with a “mind as controller” (harnessing dynamics of mind, body and world). The embodied, embedded agent acts with the world, not just in it.

  • In sum, cognition exploits real-world action and structure to reduce computational demands. Adaptive responses emerge from the interaction of mind, body and world.

  • The mind is not confined within the brain but extends to the body and external environment. The brain interacts with the body and world in an iterative process of pattern completion.

  • The brain can be seen as an “associative engine” that completes patterns across internal and external structures. Human cognitive abilities emerge from the interaction of neural and external (e.g. cultural) resources.

  • Artificial neural networks are simplified models of biological neural networks. They are made up of interconnected nodes that can learn to detect patterns in data. Early neural networks were limited to simple tasks but modern deep learning networks have achieved human-level performance on complex tasks like image recognition and game playing.

  • Neural networks show that intelligence can emerge from the interaction of simple processing units (nodes) in a complex network. They demonstrate how the coupling of internal and external resources can enable sophisticated behavior. Neural networks extend into the external environment via their training data, just as human minds extend into the external environment via cultural artifacts and social interactions.

  • The extended mind thesis argues that the mind extends beyond the brain to include parts of the external environment, like notebooks, computers, and smartphones. These external structures are coupled with the brain in reciprocal causal interactions and collectively generate intelligent behavior. The extended mind is a distributed cognitive system spanning brain, body, and world.

  • The extended mind perspective suggests that human intelligence emerges from the interaction of neural and non-neural resources, both within and outside the body. The mind leaks into the world, and the world leaks into the mind. An understanding of human intelligence requires understanding how the mind couples with external resources to form extended cognitive systems.

  • Traditional AI relied on rule-and-symbol approaches, focusing on manipulating symbols according to rules and using large knowledge bases. Some examples are naive physics, STRIPS, and SOAR.

  • In the 1980s, connectionism and neural networks provided an alternative approach inspired by the brain. These models consist of interconnected units that simulate neurons. They learn by adjusting connection weights based on experience, rather than being programmed with rules.

  • An example is NETtalk, which learned to convert English text to phonetic speech. It had input units for letters, hidden units, and output units for phonemes. It started with random connection weights and learned by using a learning algorithm called backpropagation to adjust the weights and reduce error.

  • This learning involves gradient descent: adjusting weights so as to descend an “error surface” toward a solution, like finding the bottom of a basin. The end result is not just reproducing the training data but learning general features; e.g., NETtalk could handle new words. The knowledge is encoded in the connection weights, not in explicit rules (a miniature example of this learning scheme follows this list).

  • Connectionism and neural networks provided a new, biologically inspired approach that was fundamentally different from traditional symbolic AI. The models were more neurally plausible and made contact with neuroscience. Yet the models were still limited and simplified compared to the human brain.
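The learning scheme just described can be illustrated with a miniature network. The toy below learns XOR rather than text-to-phoneme mapping, but the ingredients are the ones the NETtalk summary mentions: random initial weights, hidden units, an error signal, and backpropagation performing gradient descent on the error surface.

```python
import numpy as np

# A tiny 2-8-1 network trained by backpropagation to compute XOR.
rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

W1 = rng.normal(0, 1, (2, 8)); b1 = np.zeros(8)    # random starting weights
W2 = rng.normal(0, 1, (8, 1)); b2 = np.zeros(1)
sigmoid = lambda z: 1 / (1 + np.exp(-z))
lr = 1.0

for step in range(10000):
    h = sigmoid(X @ W1 + b1)          # forward pass: hidden layer
    out = sigmoid(h @ W2 + b2)        # forward pass: output
    err = out - y                     # error signal
    # Backward pass: propagate the error and take a gradient-descent step.
    d_out = err * out * (1 - out)
    d_h = (d_out @ W2.T) * h * (1 - h)
    W2 -= lr * h.T @ d_out; b2 -= lr * d_out.sum(0)
    W1 -= lr * X.T @ d_h;   b1 -= lr * d_h.sum(0)

print(np.round(out.ravel(), 2))   # should approach [0, 1, 1, 0]
```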

• Neural networks encode knowledge as connection strengths between units, not as symbolic rules like in CYC and SOAR. This is a more biologically plausible format for how brains represent knowledge.

• Neural networks have been successfully applied in many domains, demonstrating their power and usefulness. However, their ability to illuminate human cognition depends on using biologically realistic input/output representations and problem domains. Much early neural network research used unrealistic, “vertical microworlds” that abstracted away from perception and action.

• Neural networks have strengths like tolerating noisy data, fast processing, and integrating many cues. But they also have weaknesses like crosstalk between similar patterns and difficulties with sequential reasoning. They are good at perceptual/motor tasks but bad at logic.

• Neural networks can overcome their weaknesses by relying on the environment, e.g. using pen and paper for complex multiplications. The environment becomes an extension of the mind. We can also learn to mentally simulate the environment, internalizing some originally external competencies.

• Early AI may have mistakenly attributed to internal computation what is really the result of basic pattern completers interacting with a structured environment. Classical AI models may reflect the abilities of embodied, environmentally-embedded neural networks rather than the innate power of disembodied reasoning.

  • There is a tendency to assume that the cognitive capabilities of an agent stem entirely from the agent’s naked brain. But in reality, an agent’s cognitive abilities emerge from the interaction between the brain and the external environment.

  • The human external environment is highly structured in ways that facilitate thinking, such as through language, logic, geometry, and culture. While not all animals can create or benefit from such structures, human brains are still special in that they can utilize these structures. The human cognitive advantage may stem from a small set of neuro-cognitive differences that allowed humans to develop and use simple cultural tools. These tools then built upon each other in a snowball effect.

  • Studies of planning and problem-solving show that agents interact with and manipulate the external environment in ways that shape the computational tasks for their brains. For example, physically arranging jigsaw pieces or Scrabble tiles in certain ways can prompt recall of solutions or words. The game Tetris also shows how players perform “epistemic actions” - actions aimed at changing their mental tasks rather than achieving a physical goal. Such actions suggest the brain itself may have limitations that are overcome through interacting with the external world.

  • External structures like words and physical symbols are special in that they allow for types of operations and manipulations that may not be possible within the brain alone. The external environment essentially expands the brain’s functional capabilities.

  • In summary, the classical view of the mind as an isolated information processor is mistaken. Cognition is deeply shaped by interactions between the brain, body, and external structures in the environment. The boundaries between mind and world are not as clear as traditionally assumed.

  • Slime molds are simple amoeboid organisms that can move and change shape.

  • They feed on bacteria and other microorganisms.

  • They have a simple life cycle with two main phases:

  1. Vegetative phase: Slime mold cells (myxamoebae) live separately and feed on bacteria. They grow and divide.

  2. Aggregative phase: When food is scarce, the myxamoebae aggregate into a multicellular slug-like structure called a pseudoplasmodium. This structure can move and climb to find a new food source.

  • The aggregative phase shows a primitive form of collective behavior and decision making. The myxamoebae have to coordinate to come together, choose a direction to move in, and navigate obstacles.

  • The slime mold Physarum polycephalum has been studied as a model system to understand collective intelligence and problem solving without a central control system. Experiments show that this slime mold can solve mazes, form efficient transport networks, and anticipate periodic events.

  • The slime mold’s collective behavior emerges from the interactions of individual cells following simple rules, without an overarching control system. This “swarm intelligence” provides insights into how complex group behaviors can arise from simple interactions.

The key points are that slime molds show primitive collective behaviors, navigation, and problem-solving abilities that emerge from the local interactions of individual cells. They provide a useful model system for understanding self-organization and swarm intelligence in biology.

Source: Morrissey, 1982.

  • Slime mold cells form clusters called pseudoplasmodia when food sources are scarce.

  • The pseudoplasmodia can move and are attracted to light, temperature and humidity. They help the cells find better environments.

  • Once a suitable location is found, the pseudoplasmodia differentiate into stalks and spore masses. The spores propagate, starting the cycle over.

  • Slime mold cells exhibit self-organization and emergence. There are no leader cells. Each cell releases chemicals that attract other cells, leading to aggregation. This is a form of positive feedback and results in the pseudoplasmodia.

  • There are two types of emergence:

  1. Direct emergence: Relies on properties and relations of individual elements. Examples are traffic jams and the properties of gases.

  2. Indirect emergence: Relies on interactions between elements as mediated by the environment. Examples are using an object as a memory aid or termite nest building, which uses stigmergic algorithms. The termites modify their environment, which then influences their further building behavior.

  • Termite nest building emerges from the termites’ responses to their environment, without central control or a plan. It requires no communication except through the environmental effects of their actions (see the chip-piling sketch after these bullets).

  • Collective phenomena can display complex, adaptive behaviors without leaders or central control. They can also have properties quite different from their individual components.

  • An example of human collective behavior and indirect emergence is ship navigation, where crew members respond to the environment and each other to coordinate their actions without an overall plan.
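The stigmergic logic mentioned above can be shown in a small simulation, loosely modeled on Resnick's well-known termite/wood-chip demo. The grid size, counts, and movement rules are invented for illustration: each simulated termite wanders at random, picks up a chip when it bumps into one empty-handed, and drops its chip when it bumps into another. No termite plans, communicates, or follows a leader, yet the chips end up concentrated in ever fewer piles.

```python
import random

SIZE, CHIPS, TERMITES, STEPS = 20, 80, 10, 50_000
grid = [[0] * SIZE for _ in range(SIZE)]          # chip count per cell
for _ in range(CHIPS):
    grid[random.randrange(SIZE)][random.randrange(SIZE)] += 1

termites = [[random.randrange(SIZE), random.randrange(SIZE), False]
            for _ in range(TERMITES)]             # [x, y, carrying?]

for _ in range(STEPS):
    for t in termites:
        t[0] = (t[0] + random.choice((-1, 0, 1))) % SIZE   # random walk
        t[1] = (t[1] + random.choice((-1, 0, 1))) % SIZE
        if not t[2] and grid[t[0]][t[1]] > 0:      # empty-handed on a chip: pick up
            grid[t[0]][t[1]] -= 1; t[2] = True
        elif t[2] and grid[t[0]][t[1]] > 0:        # carrying, bumped a pile: drop
            grid[t[0]][t[1]] += 1; t[2] = False

occupied = sum(1 for row in grid for c in row if c > 0)
print(f"chips started spread over ~{CHIPS} cells; now occupy {occupied} cells")
```

Because a chip is only ever dropped onto an already occupied cell, the number of occupied cells can never rise; the piling is built into the local rule plus the environment, not into any global plan.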

  1. Many of the specified duties for members of a navigation team are in the form “Do X when Y” - i.e. the team members respond to local environmental cues by performing certain behaviors. These behaviors then affect the environment for other team members, who respond in turn. This process continues until the overall task is completed.

  2. Although the team members have mental models of the overall process, no one member has fully internalized all the relevant knowledge and procedures. Much of the “work” is done by external artifacts and structures. These help simplify complex problems and make them more tractable for human cognition.

  3. The overall success of the system emerges from interactions between team members, artifacts, the environment, and spatial organization. The captain sets the goals but does not centrally control how the goals are achieved. Instead, knowledge and expertise are distributed across the system.

  4. In evolutionary systems, small opportunistic changes accumulate over time based on how well they enhance success. A similar process may occur in navigation teams facing new challenges. Members perform their basic functions and negotiate local changes to the division of labor. An equilibrium emerges that solves the problem without any explicit overall plan. This is more akin to evolution than rational design.

  5. A simpler example is finding optimal paths between buildings. Individuals creating tracks over time based on their needs can solve this problem without any explicit design or representation of the global solution space. The solution emerges from local interactions and accumulates opportunistically.

  6. To understand such opportunistic, extended cognitive systems, a methodology of rational reconstruction that seeks an abstract optimal solution will not suffice. We need a methodology that can capture how solutions emerge from interactions between embodied, environmentally embedded agents and their problem-solving environments. Local rules and the accumulation of small changes can lead to functional global solutions without global oversight or representation.

  7. Studying the embodied, embedded mind is challenging because nature’s solutions often confound our intuitions and the conceptual distinctions we rely on. The biological brain is both constrained and empowered in non-intuitive ways.

  8. The brain is constrained by evolution, which must build on existing structures and resources. It is empowered by the real-world environment, which provides opportunities for offloading computation and transforming tasks.

  9. This poses a problem for cognitive scientists trying to model and understand such systems. We must find ways to capture the complex interactions between brain, body, and world. But we must also avoid losing sight of the brain itself.

  10. One strategy is to study simulated evolutionary robots. This allows us to explore how embodiment and environmental embedding can shape the development of adaptive behavior and cognition. We can also manipulate the robot’s embodiment, environment, and rewards to better understand their influence.

  11. Studies of evolved robots suggest several lessons:

  • Embodiment and environment deeply influence the nature of adaptive solutions, often in non-obvious ways. Simple interactions can yield complex solutions.

  • Cognition “leaks out” into the physical world, with parts of the solution offloaded into the environment. But a central system still plays an important role.

  • There are trade-offs between general, flexible solutions and specialized, efficient ones. Evolution often produces a mix, with different components serving different functions.

  • The rewards and metrics of success shape the evolved solutions in crucial ways. Small changes to rewards or metrics can lead to very different outcomes.

  • Evolved solutions are cobbled together from available tools and prior achievements, constrained by a system’s evolutionary history and components. Cognition emerges from this bricolage.

  12. Simulated evolution of robots is a promising methodology for understanding embodied, embedded cognition. But it also has limitations, like the reality gap. Studying naturally evolved systems, like humans, is also crucial. An integrated approach can yield real insights.

The key points of the summary are:

  1. Naturally evolved systems often have solutions that seem messy and non-intuitive from an engineering design perspective. This is because evolution is constrained by the need to proceed incrementally, developing whole organisms at each stage. This leads to solutions that depend heavily on historical contingencies.

  2. Simulated evolution, using genetic algorithms, is one tool for better understanding naturally evolved systems. Genetic algorithms work by generating a population of candidate solutions, selecting the fittest, and producing new generations through recombination and mutation. This allows exploration of the space of possibilities in a way analogous to natural evolution (a bare-bones sketch appears after this summary).

  3. Simulated evolution has been used to evolve neural network controllers for simulated robots engaged in walking, seeing, and navigation. This helps in understanding natural adaptive strategies in these domains.

  4. An example is evolving insect-like walking. Beer and Gallagher evolved 11 good solutions for a simulated cockroach with 6 legs. All used a tripod gait, as in real insects. Solutions depended on close interaction of controller and environment, not pre-set motor programs. Some solutions were robust to sensory deprivation or structural changes, showing the power of evolution.

  5. In sum, simulated evolution is a useful tool for confronting the challenge of generating adaptive behavior in complex, dynamic settings - like that of real embodied organisms in natural environments. It helps counter biases from a “disembodied design perspective”.

So the key idea is really using simulated evolution as a way to gain insight into naturally evolved, embodied adaptive systems - especially those that at first seem opaque or messy when viewed from an engineering design stance. Simulated evolution, in effect, sets a tinkerer to catch a tinkerer.
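For concreteness, here is a bare-bones genetic algorithm of the kind point 2 describes. The toy fitness function (counting 1-bits in a genome) is a stand-in for a real score such as "how far did this controller walk"; in work like Beer and Gallagher's, the genome would instead encode neural-network parameters evaluated in simulation.

```python
import random

GENES, POP, GENERATIONS, MUT = 32, 40, 60, 0.02
fitness = lambda g: sum(g)    # toy stand-in for a behavioral score

# Initial population of random bitstring genomes.
pop = [[random.randint(0, 1) for _ in range(GENES)] for _ in range(POP)]

for gen in range(GENERATIONS):
    nxt = []
    while len(nxt) < POP:
        # Tournament selection: the fitter of two random individuals breeds.
        p1 = max(random.sample(pop, 2), key=fitness)
        p2 = max(random.sample(pop, 2), key=fitness)
        cut = random.randrange(1, GENES)                      # one-point crossover
        child = p1[:cut] + p2[cut:]
        child = [b ^ (random.random() < MUT) for b in child]  # bit-flip mutation
        nxt.append(child)
    pop = nxt

print("best fitness after evolution:", max(map(fitness, pop)))
```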

  • Experiments with simulated robots that have leg sensors show that they can walk smoothly when sensors are on, switch to clumsy walking without sensors, and even automatically adapt to changes like different leg lengths. This shows the power of simulated evolution to find solutions that balance sensory feedback and internal pattern generation.

  • However, simulated evolution has some key limitations:

  1. The problem space is often fixed, unlike in natural evolution where problems and solutions co-evolve.

  2. Neural and bodily architectures are fixed, unlike in natural evolution where they can evolve.

  3. There is a direct genotype-phenotype mapping, unlike in biology where the environment plays a key role in how genes are expressed.

  4. It is difficult to “scale up” simulated evolution to larger, more complex systems. Some ways to help are better genetic encodings and relying more on interactions with the environment.

  • There is debate over the use of simulations vs. real-world robotics. Simulations provide benefits like simplifying the problem and allowing study of large populations. However, real-world robotics shows that researchers often underestimate problem difficulty and miss simple solutions that depend on physical properties. An example is early fly-ball governors, where crude but real versions outperformed finely-engineered simulated versions because the real ones had physical damping effects that prevented unstable feedback loops.

  • In summary, while simulated evolution is a useful tool, researchers must be aware of its limitations and how solutions can depend in unexpected ways on real-world environments and interactions. The debate between simulations and real-world robotics continues, with value in both approaches.

• Highly sensitive devices can overreact to small changes in the environment or to noise within the system. Using less precise components can help avoid this by damping responses to insignificant perturbations.

• It may be better to think of sensors as filters rather than measuring devices. Their role is partly to filter out unimportant variations and allow the system to interact robustly with the environment. The physical properties of real components often provide this filtering naturally through friction and energy loss.

• Simulations lack these stabilizing physical effects and tend to oversimplify the environment and agent. They can miss cheap solutions that emerge from real physics. Simulations are useful for research but real-world tests are ultimately needed.

• Evolved biological systems are hard to understand because evolution exploits physical features, the environment, and existing mechanisms in opaque ways. Components and their functions are not neatly separated or engineered.

• Dynamical systems theory may be better than computational theories based on representations for understanding embodied, embedded agents. It focuses on coupled systems, feedback loops, and the evolution of states over time rather than separate computations and representations.

• The Watt governor example shows how a physical system can solve a control problem without explicit representation and computation. Its operation can be described by dynamical systems theory but not as a computational process.
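The contrast can be made vivid with a crude dynamical sketch. This is not Watt's actual physics, just a hypothetical pair of coupled state variables: arm angle tracks engine speed, and the throttle feeds back on arm angle. The loop regulates speed, yet no component measures, stores, or computes a representation of it; the coefficients below are invented for illustration.

```python
# Governor-style coupling as two coupled differential equations,
# integrated with simple Euler steps.
dt = 0.01
omega, theta = 1.0, 0.0      # engine speed and arm angle (arbitrary units)
TARGET = 0.5                 # arm angle at which the throttle admits design torque

for step in range(5000):
    d_theta = 0.8 * (omega - 1.0) - 0.5 * theta              # arms rise with speed
    d_omega = 1.2 * (TARGET - theta) - 0.3 * (omega - 1.0)   # throttle feedback
    theta += d_theta * dt
    omega += d_omega * dt
    if step == 2500:
        omega += 0.5         # disturbance: a sudden load change kicks the speed

# Like a real proportional governor, it settles near (not exactly at) its set
# point, and recovers from the disturbance without any explicit computation.
print(f"speed settles near {omega:.2f}; arm angle near {theta:.2f}")
```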

  • Componential explanation: Explaining a complex system by specifying the capacities and roles of its individual components and how they are organized. This is like traditional reductionistic explanation. It aims to explain higher-level phenomena in terms of lower-level components and interactions.

  • Emergent explanation: Explaining a system’s behaviors by showing how many relatively simple interactions at a lower level can combine to produce complex phenomena at a higher level. The higher-level properties arise from the system as a whole and are not predictable from the properties of the individual parts. Emergent phenomena depend on the organization and interaction between components, not just the components themselves.

  • Dynamical explanation: Focusing on the patterns of change over time exhibited by a system. It aims to capture the temporal trajectories and attractors of the system using mathematical tools like differential equations. This can provide a geometric sense of how the system will evolve and behave over time. However, scaling dynamical explanations up to highly complex systems can be challenging. Dynamical explanations also may fail to fully explain why a system behaves the way it does or the adaptive functions of its components.

  • An ecumenical approach: No single style of explanation is sufficient. We need componential, emergent, and dynamical explanations, as well as others. Componential and emergent explanations can complement each other, with componential explanations specifying how lower-level parts generate higher-level wholes and emergent explanations capturing how higher-level phenomena arise from and depend on the organization of those parts. Dynamical explanations provide insight into the temporal behaviors of systems but should be combined with componential and emergent explanations. A successful cognitive science will integrate multiple styles of explanation.

In summary, the key is recognizing that complex systems like the mind call for diverse, complementary explanatory tools. No single approach is enough. We must explore the interrelations between parts and wholes, components and organizations, micro-level mechanisms and macro-level dynamics if we want a satisfying understanding of such systems. An ecumenical, integrative theoretical framework is needed.

The key idea is that emergent explanation relies on identifying the interaction of multiple simple components within a system. The explanation focuses on how the collective behavior of these components gives rise to complex, self-organizing patterns that are not directly prescribed by any individual component. Emergent phenomena are characterized by circular causation, in which the overall behavior of the system guides the actions of the individual parts, even as the actions of the parts cause the overall behavior.

Emergent explanation aims to understand system behavior in terms of collective variables that track higher-level, relational features of the system rather than properties of individual components. By examining how these collective variables change over time and in relation to control parameters, we can come to understand the conditions under which different self-organizing patterns will emerge.

A simple example is convection rolls in heated liquid, which emerge from the collective effects of molecular interactions and circulation, even though no individual molecule orchestrates the process. A second kind of emergence arises from organism-environment interactions. For instance, a robot could arrive between two poles by having simple behavior systems for phototaxis and obstacle avoidance, without requiring explicit computation of its relative position. The target behavior emerges from the interaction of these simple systems with the environment (a toy sketch of the two-pole case appears below).

In both cases, emergence provides an alternative to more “componential” explanations that attribute system success primarily to the activity of individual controlling components within the system. Emergent explanation focuses on how adaptive success arises from the collective dynamics of a system and its interactions with the environment.
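Here is a toy version of the two-pole case, under the assumption that the poles act as beacons the robot can sense: a long-range "move toward what you sense" system (phototaxis) combines with a short-range repulsion (obstacle avoidance). Neither behavior system represents "the midpoint between the poles", yet that is where the robot comes to rest. All positions and gains are invented.

```python
import math

poles = [(5.0, 1.0), (5.0, -1.0)]    # two sensed beacons/obstacles
x, y = 0.0, 0.3                      # robot start position

for _ in range(5000):
    vx = vy = 0.0
    for px, py in poles:
        dx, dy = px - x, py - y
        r = math.hypot(dx, dy)
        # Unit-strength attraction (phototaxis) minus 1/r^3 repulsion
        # (obstacle avoidance) along the same line of sight.
        vx += dx / r - dx / r**4
        vy += dy / r - dy / r**4
    x += 0.02 * vx
    y += 0.02 * vy

print(f"robot comes to rest near ({x:.2f}, {y:.2f})")   # close to (5, 0)
```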

The key idea is that emergent phenomena are best understood by attention to collective variables - variables that track patterns resulting from interactions among multiple elements. This includes uncontrolled variables, which arise from complex interactions and are hard to manipulate directly. But it also includes controlled variables, such as a temperature gradient, that can be set from outside and govern system behavior.

Degrees of emergence depend on interaction complexity. Nonlinear, asynchronous interactions yield the strongest emergence. Simple linear interactions typically don’t require collective variables to understand and show little emergence.

Emergence is not just unexpected behavior. It’s a systems-level pattern that arises from interactions, whether expected or not. Emergence demands explanation in new vocabulary - the vocabulary of collective variables.

Examples show how emergence arises from interactions of simple rules or forces with environmental features. Termite wood-chip piling emerges from chip-moving rules plus the blocking effect of chip locations. Robot wall-following emerges from a simple “veer right, turn left on contact” rule plus the wall itself.

In summary, emergence arises from complex, collective interactions within and between system elements and environment. It yields new, functional patterns that can’t be reduced to individual parts and require higher-level explanation.


  • Emergent phenomena may arise from simple control parameters directing a system through a sequence of states that are best described by collective variables. Emergence is linked to finding the right explanatory variables that capture a system’s behavior.

  • Componential explanation often struggles with emergent phenomena for two reasons:

  1. Many emergent phenomena span an organism and its environment, requiring a framework that can model both. Computational frameworks suited to modeling information-processing components may struggle here.
  2. In some systems, the components are homogeneous and the interesting properties arise from their interactions. The explanatory burden falls more on organization than parts.
  • Dynamical systems theory provides an alternative framework that can span organism and environment and focus on organization. It explains behavior by describing the evolution of system states over time and the patterns that emerge, using concepts like attractors, bifurcation points, and phase portraits.

  • Though Dynamical Systems theory may seem just descriptive, it aims to provide real explanations by revealing the underlying principles that govern self-organization in complex systems. Both neural and bodily dynamics are seen as following the same principles.

  • A study of rhythmic finger movement shows how Dynamical Systems theory can explain a pattern of results. The theory looks at what variables and control parameters underlie the observed patterns of behavior over time, like the abrupt shift from anti-phase to in-phase movement at a critical frequency.

  1. The crucial variable that Haken et al. discovered was one that tracked the phase relationship between the fingers. This variable is constant over a range of frequencies but changes suddenly at a critical frequency. It is a collective variable that can only be defined for the whole system, not individual components.

  2. Frequency of movement is the control parameter for the phase relation. By plotting this, Haken et al. provided a mathematical model that described the system’s dynamics and state space, including attractors (a numerical sketch of this model follows this list).

  3. The model could reproduce results from interfering with the system and generated predictions about transition times. This shows it provides explanations, not just descriptions.

  4. However, these explanations lack details about how to build the systems. They differ from traditional computational models that decompose a task into basic components.

  5. Dynamical Systems explanations can span multiple interacting components and agent-environment systems. They apply to both internal and external factors, like dripping taps.

  6. The parameters in Dynamical Systems explanations can be far from the system’s actual structure and processing. They explore the system’s dynamics but not its actual constitution.

  7. Intermediate options relate abstract parameters to physical structures. Saltzman offered a Dynamical Systems explanation of speech coordination that used abstract dynamics defined in terms of constriction types, related to physical factors like lip aperture.

  8. A pure Dynamical Systems account isolates parameters to model a system’s unfolding over time and responses to new circumstances. It seeks mathematical models for observable phenomena but may lack details to build the system.

  9. A pure account provides a task analysis that handles interactions across brain, body and world. But it may not specify how overall dynamics arise from components. Computational accounts are closer to recipes for building target systems.

  10. A pure account accomplishes a sophisticated task analysis that is counterfactually useful but potentially wide, folding together external and internal factors. Identical dynamics can emerge from very different implementations.
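Points 1-3 refer to what is standardly called the Haken-Kelso-Bunz model. Its collective variable is the relative phase phi between the two fingers, and its dynamics are commonly written as dphi/dt = -a*sin(phi) - 2b*sin(2*phi), where the ratio b/a falls as movement frequency rises. The numerical sketch below shows the anti-phase attractor (phi = pi) losing stability near b/a = 0.25, producing the abrupt switch to in-phase movement.

```python
import math

def settle(phi0, b_over_a, a=1.0, steps=20000, dt=0.001):
    """Integrate dphi/dt = -a sin(phi) - 2b sin(2 phi) and return the endpoint."""
    phi, b = phi0, b_over_a * a
    for _ in range(steps):
        phi += (-a * math.sin(phi) - 2 * b * math.sin(2 * phi)) * dt
    return phi

# Start near anti-phase (phi = pi) and "speed up" by lowering b/a:
for b_over_a in (1.0, 0.5, 0.26, 0.2):
    phi = settle(math.pi - 0.05, b_over_a)
    mode = "anti-phase" if abs(phi - math.pi) < 0.3 else "in-phase"
    print(f"b/a = {b_over_a:.2f}: settles at phi = {phi:.2f} rad ({mode})")
```

Nothing in the equation mentions neurons, muscles, or fingers; it captures the whole coupled system in one collective variable, which is exactly why such models explain and predict patterns without telling us how to build the components.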

Key points regarding divisions of labor:

  1. A pure Dynamical Systems approach focuses on the evolution of overall system parameters and is well suited to modeling the complex interplay between multiple agent-side parameters and environmental ones. However, it often obscures the details of how various inner systems contribute to that coupling.

  2. Componential explanation and “catch and toss” explanation are better suited to explaining adaptive behavior by unraveling the contributions of specific agent-side components. They adopt both a modular/componential perspective and a representation-invoking perspective.

  3. Explanations at different levels (global dynamics vs. componential) capture different ranges of phenomena and provide different types of generalization and prediction. They should interlock to provide a full understanding.

  4. Damage and disruption studies require a componential perspective to understand how failure of inner systems affects overall behavior. Dynamical Systems theory alone cannot explain such effects.

  5. An example is Decision Field Theory, which describes the evolution of preference states over time using dynamical equations. It captures interesting phenomena but does not subsume other kinds of explanation, like those relevant to neuropsychological cases such as Phineas Gage (a minimal sketch of such preference dynamics follows this list).

  6. A radical “imperialist” view that favors Dynamical Systems theory over other perspectives is misguided. Multiple perspectives are needed for a full understanding of cognitive phenomena.
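A minimal sketch in the spirit of Decision Field Theory's dynamical equations (point 5): a preference state accumulates noisy valence input and decays toward baseline until it crosses a decision threshold. All parameter values are illustrative, not fitted values from the decision-making literature.

```python
import random

def decide(valence_a=0.52, valence_b=0.48, decay=0.02, noise=0.3,
           threshold=1.5, dt=0.1, seed=1):
    rng = random.Random(seed)
    p, t = 0.0, 0.0            # preference state: > 0 favors A, < 0 favors B
    while abs(p) < threshold:
        drift = (valence_a - valence_b) - decay * p       # input minus decay
        p += drift * dt + noise * rng.gauss(0, 1) * dt ** 0.5
        t += dt
    return ("A" if p > 0 else "B"), t

choice, t = decide()
print(f"chose {choice} after {t:.1f} time units of deliberation")
# Noise can push the state across the "wrong" boundary early, which is how
# such models capture fast errors as well as slow, deliberate choices.
```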


Neuroscientific research is crucial for a full understanding of the mind and cognition. Although some early work in cognitive science downplayed the importance of the biological brain, focusing instead on abstract computational models, connectionist models helped close the gap between computation and neural implementation.

Specifically, connectionist models were directly inspired by neural networks in the brain. They modeled cognition as arising from the interactions of many simple processing units, similar to neurons. This helped show how abstract cognitive functions could emerge from the brain.

However, connectionist models are still quite limited. They only capture some aspects of neural processing and cognition. Contemporary neuroscience is revealing a much more complex picture of the brain, with many interacting components and levels of organization. This includes:

  • Neural networks and connections
  • Regions and pathways
  • Neuromodulators that alter connectivity
  • Genetic, developmental, and environmental factors shaping the brain

A full understanding of the mind will require theories and models that integrate insights across these levels. Simply focusing on one level, like neural networks alone, is not enough. The brain, in all its biological richness, must be a central concern for cognitive science.

In summary, while early work in cognitive science downplayed the brain in favor of abstract models, contemporary neuroscience is demonstrating the necessity of understanding the complex, multilevel system that is the biological brain. Connectionist models helped reestablish the importance of neural networks, but a truly comprehensive understanding of cognition will require even more - integrating insights across genes, development, neural pathways, and environment. The brain, not any single model, must be the primary focus.

The connection between connectionist work and actual brain theory was often weaker than hoped. But as connectionism matured, attempts to bridge this gap increased. A synthesis of computational and neuroscientific views seemed possible.

Connectionist work was also moving toward embodied and embedded cognition. This should not diminish work on neurally plausible models. We need to understand neural systems and their interactions to fully understand extended cognitive processes. Focusing on organism-environment interactions should not avoid studying the biological brain.

The question is not whether to study the brain but how. Promising neuroscientific models have:

  1. Multiple, partial representations
  2. A focus on sensory and motor skills
  3. A decentralized view of the neural system

Examples:

  1. Monkey finger movements: The brain does not simply activate areas controlling individual fingers. More activity is required for precise than basic movements. Some neurons prevent unwanted finger movements. This suggests a focus on whole-hand synergies, with control of individual fingers built on top. This represents an elegant solution serving both basic needs and new abilities.

  2. Rat neurons: Some respond most to specific head orientations and landmarks, others to turning motions—agent-oriented and motocentric representations.

  3. Vision: Even simple vision involves staggering complexity. In macaques, there are 32 visual areas and 300 pathways. Areas include V1, V2, V4, and MT. Neurons are tuned to many features, with “tuning” changing over areas. Representations become less retinotopic and more complex over the visual hierarchy. Vision involves distributed, interacting representations and innate biases that likely evolved for perceiving and acting in complex environments.

In summary, current neuroscience suggests cognition relies on spatially distributed, interacting representations closely linked to perception and action, with complex control of basic abilities on which more sophisticated skills are built. This fits well with embodied/embedded approaches. The brain seems well designed for fluent real-time interaction, not abstract reasoning. Innate structure provides initial biases suited to natural environments.

  • Sensory information enters the visual system through two main pathways: the magnocellular (M) stream and the parvocellular (P) stream. The M stream handles rapid changes and motion, while the P stream handles form, color, and fine detail.

  • Information progresses through a hierarchy of cortical areas: V1, V2, V4, MT, MST, and IT. Higher areas respond to increasingly complex stimuli. Cells in IT respond best to complex shapes like hands, faces, and objects.

  • Cells at higher levels do not act as simple feature detectors. Their responses depend on context and attention. They act more like tunable filters that encode information along several dimensions.

  • Neural control structures help regulate information flow in the brain. They direct attention, allow flexible control of motion, and promote integration across senses. Examples include:

  • Controllers that route information between cortical areas. They allow flexible targeting of attention and control of motion.

  • Reentrant pathways that link cortical areas and allow coordination. They can encode higher-level properties through correlation.

  • Convergence zones that allow activation of multiple regions to represent an entity. Damage to convergence zones may lead to selective deficits where some knowledge is impaired but not other abilities.

  • In summary, the visual system is highly complex but can still be understood as separating, filtering, and routing information to make it available for perception and adaptive control of behavior. Neural control structures that modulate information flow are key to these functions.

The function of convergence zones is to allow the brain to generate widespread patterns of neural activity by sending signals back to multiple areas involved in early processing. When we access knowledge, we use these higher-level signals to reactivate patterns characteristic of the knowledge in question. Different types of knowledge may require activating different convergence zones. A hierarchy of convergence zones can explain how damage to different brain regions impairs different types of knowledge.

Lower convergence zones bind signals for basic categories. Higher zones bind more complex combinations, enabling knowledge of unique entities and events. The higher zones are in anterior temporal and frontal cortices. Knowledge of unique entities/events requires activating more basic areas and convergence zones. Knowledge of concepts requires activating several areas; simple features may only require one.

This framework proposes localized systems for sensory/motor info and convergence-zone control. Higher cognition emerges from multiple basic areas and convergence zones. Explanations require models showing complex interactions of components with feedback/feedforward. Classical componential analysis alone cannot explain phenomena arising from temporally evolving interactions of multiple components.

The framework is decentralized but retains componential and information-processing analyses. “Higher centers” trigger divergent retroactivation of lower areas; they do not store knowledge from lower areas. Opposition to information processing really targets “message-passing” models, not decentralized control and multiple representational formats, compatible with modularity and information processing.

Neural control hypotheses mix radicalism and traditionalism. They offer decentralized, non-message-passing higher cognition from time-locked basic processing; recognize complex dynamics. But they use componential, information-processing analyses, with neural components having specific content-bearing roles.

Contemporary neuroscience also mixes radicalism and traditionalism. It retains componential, information-processing analyses but with a decentralized, dynamic systems perspective. Notions of internal representation are being refined, based on research into neural populations’ complex response profiles, the role of environment, distributed representation, and neurons as multidimensional filters. Representation is an evolving process, not a pregiven map. Cognition emerges from a multidimensional field of locally encoded information, not a unified representational format.

  • Representation is a controversial concept in cognitive science. There are disagreements over its precise definition and necessary conditions.

  • A common view is that representation requires some kind of inner stand-in that coordinates behavior in the absence of environmental stimuli (Haugeland’s view). However, this may be too restrictive. Representation can still be useful as an explanatory concept even for inner states that cannot be decoupled from the environment.

  • What matters most for representation is the role that an inner state plays within the system, not its detailed properties. The key is that the inner state carries information that the system can use and exploit. Mere correlation between an inner state and an environmental factor is not enough. The system must somehow consume or make use of the information.

  • Representation should be understood broadly and flexibly. Static and dynamic properties, local and distributed states, accurate and inaccurate states can all potentially function as representations, depending on their role within the system. The strengths of representationalism lie in explaining how systems can register and utilize information about the environment and their own states. Its weaknesses come from narrow assumptions about what can serve as a representation.

  • In summary, representation depends on the contextual role of an inner state, not its detailed properties. Representation is a useful explanatory concept even when inner states cannot be decoupled from environmental input. What matters is that the system is set up to carry and utilize a specific kind of information.

The key arguments and evidence presented by Thelen and Smith show that learning skills such as walking and reaching emerge from the interaction of neural, bodily, and environmental factors rather than being controlled by fixed inner programs. For example, they show how stepping in infants can be elicited outside the normal developmental period by suspending the infants in warm water. They also show how individual differences in infants’ motor activity and energy levels lead them to confront different problems and find different solutions in learning to reach.

However, Thelen and Smith’s findings do not conclusively argue against computational and representational approaches to cognition. Their work suggests that we need to develop better computational and representational theories that properly account for the role of the body and environment in both posing problems for the brain to solve and contributing to the solutions. The body and environment help determine the specific problems an individual needs to solve. The physical properties of the body, like the spring-like quality of infants’ muscles, also contribute to solutions by enabling certain behaviors. The overall system of brain, body, and environment can therefore be a meaningful unit of analysis.

The key point is that Thelen and Smith’s work calls for improved, not eliminated, computational and representational theories of cognition that recognize the role of the body and environment. Their findings are consistent with a computational and representational approach, but suggest that such an approach needs to recognize that cognition emerges from the interaction of neural, bodily, and environmental factors rather than being controlled by fixed programs in the brain. Better theories will see the brain as learning to manipulate bodily and environmental factors to achieve cognitive functions, not just executing stored programs.

In summary, while Thelen and Smith provide compelling evidence against approaches that see cognition as controlled by fixed inner programs, their work is consistent with an embodied computational and representational perspective. Their findings call for the development of more sophisticated theories in this framework, not the abandonment of the framework altogether.
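
One way to picture how bodily dynamics “contribute to the solution” is an equilibrium-point-style toy model (my own sketch with made-up parameters, not a model from Thelen and Smith): the neural command merely resets the resting position of a spring-like muscle, and the limb’s spring-damper physics produces the smooth reach on its own.

```python
# Minimal mass-spring-damper "arm" (illustrative; parameters invented).
# The neural command only sets the spring's equilibrium position; the
# actual trajectory falls out of the limb's physical dynamics.

k, c, m = 4.0, 1.5, 1.0     # stiffness, damping, mass
dt = 0.01                   # integration time step (seconds)
x, v = 0.0, 0.0             # hand position and velocity
target = 1.0                # neural command: new equilibrium point

for _ in range(1000):
    a = (k * (target - x) - c * v) / m   # spring pulls toward target
    v += a * dt
    x += v * dt

print(f"final position: {x:.3f} (target {target})")
```

On this picture the brain’s contribution is a sparse “partial program”: it sets a goal parameter, while the spring-like body does much of the computational work.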

The arguments presented do not amount to an outright rejection of computationalism or representationalism. Rather, they call into question:

  1. The view that development is driven by a fully detailed advance plan.

  2. The logicist view of cognition as involving logical operations on propositional data structures.

In place of these claims, the alternative views are:

1*) Development (and action) exhibit order that emerges from the interaction of multiple components (bodily, neural, environmental). Solutions are “soft assembled” rather than programmed in advance.

2*) Even highly logical adult cognition relies on resources developed through real-time bodily experience, not just logical propositions.

These alternative views are compatible with computational explanation and the idea of partial programs - programs that rely on and interact with bodily and environmental factors. The degree to which a solution is programmed depends on how much it specifies versus merely prompts a desired result. At some stages, neural processes may specify solutions in computational terms. But even the most detailed neural stages may not constitute a full program, if bodily dynamics contribute substantially.

The arguments suggest a continuum of more or less programmed solutions, not a strict dichotomy. They call into question only certain specific versions of computationalism and representationalism, not these broader notions themselves.

  • Representational approaches have difficulties capturing the temporal aspects of adaptive responses. Early connectionist models could not intrinsically represent time or order. Recurrent networks improved on this but still focused on order, not real timing.

  • Real-time adaptive responses, like catching a moving bus, require sensitivity to unfolding temporal patterns and coordinated timed responses. This requires systems that can set their internal dynamics based on the real timing of inputs, not just their order.

  • One approach is to use “adaptive oscillators” - devices that generate their own periodic outputs but can entrain those outputs to match the timing of inputs. They do this using gradient descent learning based on the difference between their expected spike timing and the actual spike caused by an input.

  • Adaptive oscillators can entrain to the frequencies of inputs and maintain those frequencies briefly after the input disappears. They are insensitive to nonperiodic inputs. Complex systems can use many oscillators sensitive to different frequencies to entrain to more complex stimuli.

  • Adaptive oscillators show that internal processes with temporal features can explain some adaptive behaviors. They achieve a “fit” with external events through coupling, not representation. They parasitize the real timing of external systems rather than using arbitrary symbols. We need to consider both the oscillator and the coupled system it’s entrained to.

  • Despite the complexity, it’s still useful to see oscillators as representing temporal dynamics in the external world. Temporal features are just as real as other features we represent.

So in summary, capturing the temporal aspects of adaptive behavior may require going beyond classic representational approaches. Adaptive oscillators are one alternative that couples internal and external dynamics rather than using symbolic representation. But we can still understand them as representing temporal features of the world, even if in a very different way.
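
To give the flavor of entrainment in code, here is a toy adaptive oscillator (my own sketch; the update rule and all constants are invented for illustration, not taken from the models the text describes). The unit keeps its own rhythm but nudges its period and phase toward the timing of a periodic input:

```python
# Toy adaptive oscillator (illustrative sketch of entrainment).
# The unit keeps its own rhythm but adjusts period and phase toward
# the timing of a periodic input spike train.

period = 0.8          # oscillator's initial period (seconds)
phase = 0.0           # in cycles; the unit "fires" when phase wraps
lr = 0.2              # entrainment learning rate
dt = 0.001

input_period = 1.0    # the external beat to entrain to
next_input = 1.0
t = 0.0

while t < 30.0:
    t += dt
    phase = (phase + dt / period) % 1.0
    if t >= next_input:                  # an input spike arrives
        # Signed error between the actual phase and the expected
        # firing point (phase 0), measured toward the nearest wrap.
        err = phase if phase < 0.5 else phase - 1.0
        period += lr * err * period      # running fast -> slow down
        phase = (phase - lr * err) % 1.0 # pull phase toward the beat
        next_input += input_period

print(f"entrained period: {period:.3f} (input period {input_period})")
```

The key property is the one emphasized above: the unit’s fit with the external rhythm is achieved by coupling its own dynamics to the real timing of the input, and the acquired period persists when the input disappears, since nothing resets it.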

The key points in the summary are:

  1. Neural representations often take the form of processes with intrinsic temporal properties, rather than static vector or symbol structures. These process representations are not arbitrary or quasi-linguistic.

  2. We understand these process representations by understanding what features of the external world they are keyed to, and how other neural and motor systems use the information they carry.

  3. In some cases, especially where there are continuous reciprocal causal interactions between brain, body, and world, the representational approach may break down or be of limited value. In these cases, the target phenomenon emerges from the coupling of components, and is not localized in any one component.

  4. There are two possibilities in these cases of dense reciprocal causation:

  • We may still be able to identify certain neural subsystems as representational, if they can be decoupled from the environment and their dynamics used in imagination or downstream processing.

  • Alternatively, there may be no clearly representational subsystems, just a mutually adaptive equilibrium between organism and environment. These cases lie outside the domain where a representational approach is most applicable.

  5. Examples of continuous reciprocal causation include improvised music, dancing, conversation, driving, and interacting with complex machinery. In these cases, the environment is not just providing inputs and receiving outputs, but is mutually and continually influencing the system’s behavior.

  6. These cases of dense reciprocal causation are conceptually interesting, but do not undermine the general role of representation in cognitive science, since they lie outside the domain where representation applies. Representation remains crucial for understanding many aspects of cognition.

So in summary, while neural and cognitive representations often differ from standard symbolic conceptions, the notion of representation still applies. However, in some cases of continuous reciprocal causation between agent and environment, the representational approach may not apply, as adaptive behavior emerges from the coupling of components rather than being localized within the agent. But these cases do not challenge the importance of representation in general.

The key idea in this passage is that there are two types of problems that strongly demand representational explanations:

  1. Problems that involve reasoning about absent, nonexistent or counterfactual states of affairs. These include thinking about events distant in time or space and considering hypothetical scenarios and outcomes. In these cases, it seems representations are needed to stand in for the absent phenomena and guide appropriate behavior.

  2. Problems where the system must respond selectively to states of affairs with wildly varied physical manifestations. Examples are identifying valuable objects in a room or reasoning about the pope’s belongings. Here representations seem needed to capture the abstract property or category at issue and guide sensitivity to the appropriate range of inputs.

In both types of cases, representations compensate for a lack of reliable or usable environmental input. However, representational explanations may still be undermined if the systems involved are so complex that representational vehicles cannot be usefully isolated or individuated. The ultimate resolution depends on future empirical work exploring the limits of representational explanation. This may lead to a reconception of representations and computation that allows representational vehicles to include environmental elements and complex dynamics. But taking representations too far from descriptions directly tied to system construction risks fracturing their explanatory power.

In summary, representation-hungry problems strongly suggest a representational explanation, but complexity may still preclude identifying representational vehicles. Reconceiving representations to include environmental and dynamical elements may help but risks limiting their explanatory value if taken too far from system construction.

Here is a high-level summary:

The information-processing perspective assumes that intelligent behavior arises from the manipulation of abstract internal representations that mirror the objective structure of the external world. However, this view overlooks the tight coupling between perception and action in biological agents and the role of embodied interactions in shaping internal states.

Recent work in robotics, embodied AI, and computational neuroscience suggests internal representations are often highly action-oriented, geared to the control of real-time interactions with the environment. They do not provide action-neutral encodings of the objective structure of the world that then require extensive computational processing to generate appropriate responses. Rather, internal states emerge from and continuously modulate embodied engagements with the world.

This “embodied” perspective has roots in phenomenology, especially the work of Heidegger, Merleau-Ponty, and Gibson. Like them, it rejects a rigid separation of perception and action and sees cognition as emerging from repeated cycles of embodied interaction. However, it is more compatible with representationalist and computational frameworks. It aims to reconceptualize internal representations, not eliminate them.

In summary, the embodied perspective suggests intelligent systems do not first build detailed inner mirrors of objective realities and then figure out what to do. Rather, they enact a world in which internal representations, external objects, and embodied interactions are thoroughly and reciprocally interdependent.

The author argues that advanced human cognition depends crucially on our ability to distribute reasoning across external structures, both social and symbolic. Individual human reasoning is cast as a fast, pattern-completing neural process, much like that seen in other animals and robots. But humans are uniquely able to create and exploit social institutions, language, and other environmental structures that complement our basic neural capacities, allowing us to achieve more advanced forms of intelligence.

The example of choosing beans in a supermarket is used to illustrate this view. Classical economic theory suggests that such choices would involve applying a preexisting, consistent set of preferences to perfect knowledge of available options, and then maximizing expected utility. But in reality, choices are made using very limited neural resources, with most of the cognitive work being done by the external structures of the market environment. The diversity of brands, the constraints of price and packaging, and the shelving of goods provide frames that radically simplify the choice process for the individual. Human reason is thus dissipated across the wider system of commercial institutions and organizations. The market environment is doing much of the thinking.

In this view, human intelligence depends essentially on our ability to scaffold cognition by structuring our physical and social environments. Our individual brains are not fundamentally different from those of other animals, but we put them to work in far more powerful ways by embedding them in a web of linguistic and social intelligence. The truly smart system is the combination of brain, body, and world—not the brain alone. Human reason is distributed across the wider networks we inhabit.

The key ideas are thus: distributed and dissipated reasoning, neural pattern completion as a basic cognitive mode shared with other animals, scaffolding by means of environmental and social structures, and the web of embodied and embedded intelligence created by combining brain, body, and world. Human minds emerge from this wider nexus, not simply from the brain itself.

  • The theory of substantive rationality assumes that individuals make rational choices based on a fixed, ordered set of preferences. However, human cognition is bounded and imperfect. Given our limited cognitive abilities, it is surprising that traditional economics, which assumes perfect rationality, has been moderately successful.

  • Traditional economics works best when individual choice is highly constrained by institutional structures and policies. In these “highly scaffolded” choice environments, individual psychology matters little. The external constraints promote choices that maximize certain goals. For example, competitive markets constrain firms to maximize profits. Traditional economics fails when individual cogitation plays a larger role, as in consumer choice or voting behavior.

  • The degree of scaffolding depends on whether there are “structurally determined theories of interests.” When environments select for actions that conform to certain preferences, traditional economics applies. Individual psychology is less relevant. For example, two-party systems select for vote-maximizing parties. In contrast, individual voter preferences are diverse and less constrained.

  • Experiments with “zero-intelligence” traders show that institutional details, not individual rationality, drive efficiency gains in some markets. Up to 75% efficiency can arise from constraints alone. Humans only improve this by 1%. Different institutions can yield a 6% boost, showing their importance. (A toy version of this setup is sketched after this list.)

  • The results suggest traditional economics would be largely unchanged if we assumed random individual choice. They make sense if scaffolding is key, reducing individuals to interchangeable cogs. In the extreme, even a coin flipper could play the role.

  • External structure is vital given what we know about human cognition. Bounded rationality suggests the need for scaffolding. The “intelligent office” provides knowledge, reminders, and constraints enabling complex tasks. Similarly, firms, markets, and institutions scaffold individual choice. They embody knowledge and constrain options, enabling a “sort of distributed reasoning.”

In summary, the main argument is that institutional and environmental scaffolding, not individual rationality, explains the success and failures of traditional economics. Strong external constraints reduce the role of individual psychology, enabling the model of substantive rationality. Without such scaffolding, individual cogitation comes to the fore, and the model breaks down.
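
The zero-intelligence result is easy to get a feel for in simulation. The sketch below is my own simplification, loosely after Gode and Sunder’s ZI-C setup; all numbers are illustrative. Traders bid and ask at random, subject only to a no-loss constraint, inside a simple double-auction institution:

```python
# Toy zero-intelligence double auction (illustrative simplification).
# Traders are random, but the budget constraint (never trade at a
# loss) plus the market institution still yields high efficiency.

import random

random.seed(0)
values = [90, 80, 70, 60, 50]     # buyers' private values
costs  = [30, 40, 50, 60, 70]     # sellers' private costs

def zi_session(values, costs, rounds=2000):
    buyers, sellers = values[:], costs[:]
    surplus = 0.0
    for _ in range(rounds):
        if not buyers or not sellers:
            break
        b = random.choice(buyers)
        s = random.choice(sellers)
        bid = random.uniform(0, b)        # constrained: bid <= value
        ask = random.uniform(s, 100)      # constrained: ask >= cost
        if bid >= ask:                    # institution: cross -> trade
            surplus += b - s              # realized gains from trade
            buyers.remove(b)
            sellers.remove(s)
    return surplus

# Maximum possible surplus: match highest values with lowest costs.
max_surplus = sum(max(v - c, 0) for v, c in
                  zip(sorted(values, reverse=True), sorted(costs)))
eff = zi_session(values, costs) / max_surplus
print(f"efficiency of random-but-constrained traders: {eff:.0%}")
```

Most of the attainable surplus is realized even though no trader optimizes anything; the institution’s constraints do the work, which is the point of the zero-intelligence results.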

  • Connectionist models like artificial neural networks challenge the classical view of cognition as involving explicit rules and linguistic representations. They rely on pattern recognition and heuristics instead of logical reasoning.

  • These models show how external structures can complement and enhance bare individual cognition. External structures enable us to solve complex problems that require chaining together basic pattern-completing capacities and reusing intermediate results. Examples include using pen and paper for long multiplication and organizations/institutions that prompt and coordinate individual problem solving.

  • Individuals within organizations are like neurons in a brain - their rationality is bounded, but the overall organization can display a grander kind of reason. Much of human behavior in organizations involves “stigmergic algorithms” where external structures control and prompt individual actions, which then modify the structures to guide future actions.

  • We need to understand the complex interplay between individual cognition, cultural and artifactual evolution, and group communication. Simulations show how communication patterns affect a group’s collective problem solving. With dense early communication, groups tend to rush to a shared interpretation and show high confirmation bias. Restricting early communication gives individuals time to evaluate evidence, leading to less bias and better solutions when the group then communicates. (A toy model of this dynamic is sketched after this list.)

  • This suggests that the advantage of groups over individuals may decrease with more communication early on. The results highlight how external structures at multiple levels, from words and numbers to organizations, complement human cognition.

The key idea is that human rationality is bounded, but we have developed social and cultural structures that help transcend these limitations and enable complex problem solving. Communication within and between these structures plays an important role in either enhancing or diminishing their effectiveness. A multi-level analysis that considers how structures at different scales interact and influence each other will be needed to fully understand human cognitive abilities.
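
A toy model can illustrate the communication-density effect. This is my own sketch, not the simulations the text refers to, and all parameters are invented. Agents accumulate evidence about a hidden binary state; each round they either draw a noisy private signal or conform to the current majority view:

```python
# Toy model of how early communication density can shape group accuracy
# (illustrative only). Each agent keeps a running evidence tally; each
# round it either samples noisy private evidence or copies the majority.

import random

def run_group(early_comm, n=25, rounds=30, seed=0):
    rng = random.Random(seed)
    truth = +1
    tally = [rng.uniform(-1, 1) for _ in range(n)]   # evidence counters
    for t in range(rounds):
        p_comm = early_comm if t < 10 else 0.3       # talk density
        beliefs = [1 if s > 0 else -1 for s in tally]
        majority = 1 if sum(beliefs) > 0 else -1
        for i in range(n):
            if rng.random() < p_comm:
                tally[i] += majority                 # social pseudo-evidence
            else:
                tally[i] += truth if rng.random() < 0.65 else -truth
    return sum(1 for s in tally if s > 0) / n        # fraction correct

trials = 300
dense  = sum(run_group(0.9, seed=s) for s in range(trials)) / trials
sparse = sum(run_group(0.1, seed=s) for s in range(trials)) / trials
print(f"dense early talk:  {dense:.2f} fraction correct")
print(f"sparse early talk: {sparse:.2f} fraction correct")
```

In this toy setup, heavy early talk lets a random initial majority harden into accumulated pseudo-evidence that later private signals struggle to overturn; restricting early talk lets real evidence accumulate before consensus forms.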

• Language is commonly viewed primarily as a means of communication that allows us to share ideas. While this is true, it overlooks the role of language as a cognitive tool.

• Language acts as a tool that alters the computational demands of certain cognitive tasks, allowing our brains to perform these tasks more easily. In this sense, language exhibits a double adaptation: it is adapted to the capacities of the human brain, and it adapts the brain to perform new cognitive functions.

• Like any tool, language confers new cognitive capacities upon us that we do not naturally possess. It allows us to reshape difficult cognitive tasks into forms that are more suited to the basic computational abilities of our brains, such as pattern recognition and transformation.

• In this way, language acts as the “ultimate artifact” - it not only enables communication but expands our cognitive horizons by allowing us to exploit our mental capacities in new ways. Language is adapted to our cognitive abilities, and it adapts our cognition to new functions.

• The summary conveys the key idea that language should not be viewed solely as a medium of communication. Rather, it fundamentally acts as a cognitive tool that enhances our mental capacities by reframing difficult tasks into more tractable forms. In short, language boosts our cognitive success through a double adaptation to both our brains and our goals.

The passage discusses the idea that language serves purposes beyond just communication. Several thinkers have proposed that language profoundly shapes human thinking and problem-solving.

Lev Vygotsky argued that language and social interactions drive cognitive development in children. He proposed two key ideas: 1) private speech - children talk to themselves to guide their thinking and problem-solving; 2) scaffolded action - children can solve problems with the help of more experienced individuals that would otherwise be too difficult. Studies have found that children who talk to themselves the most while problem-solving also show the greatest improvement in mastering tasks.

Philosopher Christopher Gauker proposed that language should be seen as a tool for directly causing changes in the environment, not just for representing thoughts. He argued that the way humans and other animals understand language is by learning the causal relationships between producing certain linguistic signs and effects in the environment.

Peter Carruthers proposed that human thinking is often composed literally of inner speech - we think in words and sentences. This suggests that language serves as a medium for thinking, not just for expressing pre-formed thoughts. He argued that writing is also often a form of thinking, not just a way to record thoughts we’ve already had.

Daniel Dennett proposed that language inputs may actually reprogram brain computations. He suggested that human minds are like virtual serial machines implemented on the parallel hardware of the brain. Exposure to language, such as words, sentences, and texts, may reorganize brain computations in a way analogous to installing a new computer program. This reprogramming by language is what yields human consciousness and rationality.

In summary, these views propose that language shapes thinking in profound and complex ways. It serves not just to convey information between individuals but also as a tool for guiding our own behavior, restructuring our cognitive abilities, and enabling uniquely human forms of thought and problem-solving.

  • Daniel Dennett argues that human consciousness and advanced cognitive abilities arise from the interaction between the brain’s innate capabilities and cultural/linguistic influences. While humans have some biological differences from other animals, our advanced cognition depends heavily on learning from culture and language.
  • Exposure to culture and language alone is not enough; humans have a biological capacity to benefit from these external resources. But the changes to the brain may be relatively superficial. Language and culture provide tools that complement the brain’s native modes of thinking, rather than profoundly altering them.
  • One key way language augments cognition is by allowing us to “trade spaces” - to offload computational work onto external symbolic representations. This includes:
  • Using external symbols (writing, diagrams) to store information, reducing the memory burden.
  • Using signs and labels to simplify complex environments, reducing the inference required.
  • Using linguistic labels and categories to provide scaffolding for learning key concepts.
  • Coordinating action through language, by communicating plans and intentions.
  • “Chunking” information into linguistic representations, allowing problems to be broken into more manageable parts.
  • The ability to “trade spaces” depends on a capacity to learn from and work with external symbolic structures. But the brain’s core representational and computational abilities may remain largely unchanged. Language provides a set of tools that mesh with these native capacities, rather than fundamentally transforming them.

The summary outlines Dennett’s view that language and culture provide cognitive tools which complement the brain’s innate abilities, rather than profoundly restructuring them. A key way this works is by allowing us to offload computational work onto external symbolic representations, through mechanisms like storing information externally, using signs and labels to simplify environments, using language to scaffold learning, coordinating action through communication, and “chunking” problems into more tractable parts. Although language use depends on an ability to work with external symbols, the brain’s core capacities are seen as largely unchanged.

Explicitly planning our schedules and to-do lists helps coordinate our actions and reduces the cognitive load required for deliberating and reassessing options on the fly. Plans provide stability and guide our actions, though we revise them when needed in response to new information.

Mentally rehearsing speech plays a role in guiding our attention and allocating cognitive resources, even for experts. A study of Tetris experts found that their performance depends on both fast, pattern-completion reasoning and explicit higher-level policies which they monitor and use to manipulate their inputs and guide their reactive responses.

The linguistic encoding of thoughts enables communication with others, allowing ideas to be refined, critiqued, and built upon. Connectionist learning is highly path dependent, relying on the sequence of training; early exposure to complex ideas before basics can lead to getting stuck in local minima. Human learning also shows path dependence, with later ideas building on earlier ones. Education exposes minds to ideas in a sequence to enable progress.

Connectionist networks are constrained to explore at the edges of current knowledge, so current knowledge filters the space of options explored and possible new ideas. Linguistic communication allows traversing greater intellectual distances by building on others’ perspectives.

  • Language allows ideas to spread between individuals, enabling the construction of complex intellectual progressions that would be impossible for any single individual due to path dependence.

  • An idea that only one person can generate may flourish when shared with another person with a different intellectual background. Ideas can spread between people, navigating around individuals’ intellectual limitations.

  • The number of people in a linguistic community provides many possible trajectories for ideas. Shared language enables collective human cognition and helps overcome individual limitations.

  • Even blind searches can generate useful ideas. Shared language allows these ideas to spread and be improved on, exploring intellectual spaces that individuals could not.

  • This view fits with the idea that culture and cognition evolved together. Donald distinguishes “mythic” and “theoretic” scaffolding: the latter, which began with the Greeks, uses writing to record thinking processes and enable collective progress.

  • Language, speech, and writing are tools that shape how we think. They are not just expressions of thought but part of the mechanism of thought itself.

  • A “mangrove effect” operates in some kinds of thinking. We usually assume words express preexisting thoughts, but sometimes words shape thoughts. Poetry and complex arguments are examples, where the properties of words determine the thoughts.

  • Written words open up new possibilities for thinking by giving us a record we can revisit, rethink, and experiment with. The properties of text transform our space of possible thoughts.

  • Language may enable “second-order cognitive dynamics”: thinking about thinking, self-evaluation, criticism, and fixing our own flaws. This capacity for “thinking about thinking” seems distinctively human and may depend directly on language.

  • Public language and inner speech could generate second-order thoughts by giving us fixed points to attach further thoughts to, like mangrove aerial roots accumulating debris to build islands.

  • Formulating thoughts in words turns them into objects we can have further thoughts about. The act of linguistic formulation creates structures for subsequent thoughts to attach to.

  • Inner speech may make our own thoughts into objects we can reflect on, according to Jackendoff. Linguistic formulation is key to turning our thoughts into objects of attention and reflection.

  • Language allows us to engage in complex thinking by making thoughts available to focused attention and further mental operations. It allows us to isolate elements of thought, keep abstract ideas in working memory, and critically analyze our own thinking.

  • Language is well suited for this role because it evolved to enable efficient communication. It is context-independent, modality-neutral, and promotes rote memorization. This allows us to “freeze” thoughts in a memorable, abstract form that enables scrutiny and critical thinking.

  • The ability to reflect on our own thinking led to an explosion of cultural artifacts that extend and support human cognition. Writing and notation, for example, allow us to fix even more complex thoughts for attention.

  • Nonlinguistic forms of thought emerge to interact with and manage linguistic forms of thought. Linguistic and nonlinguistic thought thus coevolve to complement each other.

  • Language may be adapted to the human brain through a process of “reverse adaptation.” Language changes and evolves in ways that fit human learning and cognition. This could explain why language seems uniquely human, even if the human brain is not profoundly different from animal brains. Small changes enabled language learning, then language adapted to exploit human cognitive biases.

  • A “cultural phylogeny” model shows how language could change over generations to fit human learning. Errors and variations that are hard to learn disappear, while dominant and easier to learn forms prosper. This “snowballs” to produce major historical changes, as shown in the progression from Old English to Modern English. (A toy version of such a model is sketched after this list.)

  • In summary, language should be seen as a complementary cognitive artifact that extends human thinking. It is shaped by and shapes human cognition through a symbiotic, co-evolutionary relationship.
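
A minimal iterated-learning sketch (my own, loosely in the spirit of such cultural-phylogeny simulations; the verbs, frequencies, and exposure size are invented) shows the mechanism: forms that a generation of learners never encounters get regularized, so rare irregulars disappear while frequent ones survive.

```python
# Toy iterated-learning model of cultural phylogeny (illustrative).
# Irregular verb forms that a generation of learners never hears get
# regularized, and regularization is permanent.

import random

random.seed(1)
verbs = {"be": 100, "go": 60, "help": 5, "walk": 3, "climb": 1}
irregular = {v: True for v in verbs}   # all start with irregular pasts

EXPOSURE = 150                         # utterances heard per generation

for generation in range(20):
    heard = set(random.choices(list(verbs),
                               weights=list(verbs.values()),
                               k=EXPOSURE))
    for v in verbs:
        if irregular[v] and v not in heard:
            irregular[v] = False       # unwitnessed irregulars regularize

print("surviving irregulars:", [v for v in verbs if irregular[v]])
```

In this toy model, high-frequency verbs keep their irregular forms because every generation reliably hears them, echoing the tendency of frequent English verbs to resist regularization.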

  • The boundary between mind and world is porous and hard to define precisely. Our cognitive capacities are deeply entangled with and scaffolded by external resources like language, symbols, and tools.

  • Some actions, like manipulating external symbols, are epistemic actions that shape our own thinking. They are a kind of materially extended thought.

  • Damaging some external props, like a constantly used notebook for a neurologically impaired person, can constitute harm to the self or person in a very real sense. The self extends, in some ways, into the local environment.

  • While consciousness is brain-based, our evolving cognitive profiles and thinking processes depend on the interplay between brain, body, and world. Minds extend, in some ways, into the local environment.

  • However, mind extension must be constrained by considerations of constant access, ease of use, automatic endorsement, and so on. Not just any external resource counts. The self does not extend willy-nilly into the world.

  • There is a tension between our intuitive sense of mind as portable (the “naked mind”) and the degree of scaffolding provided by external resources. But ultimately, the portable mind view begs the question about what should count as truly “mental.”

  • In sum, while minds are rooted in biology, they are not confined to the brain. They extend, in principled ways, into the world. The complementarity between biology and extended cognitive scaffolding creates a kind of symbiosis.

  1. Biological systems are adept at exploiting and manipulating environmental structure to aid their own processing and problem solving. Dolphins and tuna provide striking examples of this. They do not simply negotiate an external problem domain; they actively mold and control it to their advantage.

  2. Brains should not be conceived as disembodied problem solvers. They are the control systems for embodied and environmentally embedded organisms. As such, they will often devote considerable resources to controlling and exploiting environmental structure, rather than directly solving problems.

  3. The problem-solving profiles of embodied, embedded agents should not be equated with the innate processing capabilities of the biological brain. Many advanced human capacities, like logic and science, rely heavily on external media and institutions. The brain need only interface effectively with these external resources.

  4. The nature and bounds of the intelligent agent are unclear. There may be no central executive in the brain, and the division between agent and world is porous. For some purposes, it may make sense to consider the intelligent system as spanning brain, body, and world.

  5. Cognitive science needs a wider, more ecologically oriented perspective that looks at interactions spanning multiple time scales, individuals, and environments. This demands new tools, combining insights from dynamical systems theory, robotics, and large-scale simulations with ongoing neuroscientific research.

  6. Perception, cognition, and action are deeply intertwined in the brain and in embodied, embedded problem solving. Clear divisions among them are unhelpful. Actions often play cognitive and computational roles, just as internal processes do.

  7. In sum, biological intelligence emerges from the dense reciprocal interconnections between brain, body, and world. Each shapes and transforms the other in deep and fundamental ways.

  • The true lesson of embodied and embedded cognition research is not that we can succeed without internal representation and computation. Rather, it is that the kinds of internal representation and computation we employ are adapted to complement the settings in which we act. We must consider both internal and external factors.

  • The brain’s activities are mostly not accessible to conscious awareness. What we do access are occasional glimpses and distorted shadows of the brain’s real work. These glimpses portray the products, not the processes, of the brain’s subterranean activity.

  • Access to the brain’s products is limited. What filters into conscious awareness is a rough summary of results useful for thought and action. We get the minimum needed to guide behavior.

  • The brain does not have our thoughts. We have our thoughts. The brain is one part of what enables thinking. Thinking arises from interactions between the brain, the body, and the external world.

  • The brain acts as a mediator in feedback loops between the person and the environment. Ideas emerge from repeated interactions across this system, not just from the brain alone.

  • The brain is not a single, unified “inner voice.” It consists of many parallel computational processes orchestrated by evolution to enable adaptive behavior. The metaphor of an inner voice is misleading.

  • We wrongly assume the brain represents the world as we do and thinks as we report our thinking. The brain’s organization is alien to our experience. It evolved to serve adaptive needs, not match our conceptual frameworks. Damage to the brain can impair some abilities while leaving others intact, demonstrating this alien nature.

  • In short, we must move past an internalist view of cognition that focuses on the brain alone. Cognition arises from embodied and embedded interactions across brain, body, and world. We must understand each part in its own terms, not by projecting our experience onto the brain.

  • The author argues that the notion of “concepts” that humans have does not match the actual nature of the cognitive and physical mechanisms that underlie them. The unity of concepts exists primarily due to the significance they have for human language and culture, not because of the nature of human thought itself.

  • The author argues that human imagination and introspection lead to a mistaken view that we have a stable, detailed internal model of the world. In reality, human cognition rapidly samples details from the environment as needed. We do not maintain a comprehensive internal model.

  • The author argues that almost nothing about human cognition works the way humans intuitively believe. Human cognition evolved primarily to guide behavior and action, not to produce human language and introspection. Those capacities emerged later and lead to mistaken assumptions.

  • The author is hopeful that new research techniques like brain imaging and work on artificial neural networks and robotics will provide better insight into how human cognition actually works, including how it gives rise to the human sense of self. But for now, the author argues that human introspection provides very little insight into the nature of human cognition. We remain “strangers” to the “Martian in our head.”

  • In summary, the author is arguing for a view of human cognition as embodied, environmentally embedded, fragmented, and alien to human introspection. The aspects of human cognition that shape language and culture emerge from mechanisms that originally evolved for guiding adaptive behavior.

  • Artificial neural networks provide a candidate for modeling real neural systems. However, real neural systems have many features not captured in most ANNs, like asymmetric and specialized connectivity. Despite differences, ANNs are closer to biological neural systems than classical AI. The key similarity is the reliance on associative memory and pattern completion. (A small demonstration of pattern completion is sketched after this list.)

  • NETtalk, a network that learned to pronounce written English text, is a classic example of an ANN; it is often contrasted with Digital Equipment Corporation’s DECtalk (model DTC-01-AA), a rule-based speech synthesizer built by hand rather than trained.

  • ANN functions are usually nonlinear, with output not directly proportional to input. Hidden unit responses can be sigmoid, Gaussian, etc. Backpropagation is a learning algorithm for training ANNs.

  • Recurrent neural networks can model temporal and sequential effects. They have feedback connections, allowing them to maintain state over time. Examples include Elman networks and Jordan networks.

  • Neural networks can model language acquisition and cognitive development. Rumelhart and McClelland’s model showed how learning the past tense in English can emerge from an ANN.

  • Creativity and imagination seem to involve going beyond the information given. ANNs can model this using their pattern completion abilities. Mental simulations may work by propagating patterns through neural networks.

  • Action and perception are deeply intertwined. We perceive to enable action, and act to enable perception. This suggests an “action-oriented” representation in the brain, encoded in neural connectivity. Some evidence for this view comes from neuropsychological cases of apraxia and agnosia.

  • Human problem solving is highly flexible, context-sensitive, and improvisational. It does not seem to fit with the classical “rational-planning” model. ANNs provide an alternative model, using spreading activation and pattern completion in a highly flexible, context-sensitive fashion.

  • Evolution and learning interact in complex ways. Genetic algorithms can be combined with neural networks to get the benefits of both evolution (to search a large space) and learning (to fine-tune solutions). However, the relationship between genes, environment, and development is highly complex. Simply combining GAs and ANNs neglects many of these complexities.

  • Cybernetics aimed to understand adaptive, self-regulating systems. Early work by Ashby and others helped inspire recent embodied/embedded approaches to cognitive science. These approaches view the agent/environment system as forming a feedback loop, with cognition arising from the interaction between the two.
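
Since pattern completion carries so much explanatory weight in this appendix, here is a minimal Hopfield-style associative memory (my illustration; not code from the book): a corrupted cue settles back into the nearest stored pattern.

```python
# Minimal Hopfield-style associative memory (illustrative sketch of
# pattern completion; not a model from the book).

import numpy as np

# Two stored patterns over 8 bipolar units (chosen to be orthogonal).
patterns = np.array([[1, -1, 1, -1, 1, -1, 1, -1],
                     [1, 1, 1, 1, -1, -1, -1, -1]])

# Hebbian storage: weights accumulate pairwise correlations.
n = patterns.shape[1]
W = sum(np.outer(p, p) for p in patterns) / n
np.fill_diagonal(W, 0)                   # no self-connections

# A corrupted cue: the first pattern with two bits flipped.
cue = patterns[0].copy()
cue[[0, 1]] *= -1

state = cue.astype(float)
for _ in range(10):                      # settle via repeated updates
    state = np.sign(W @ state)
    state[state == 0] = 1.0

print("recovered first pattern:", np.array_equal(state, patterns[0]))
```

The network never looks anything up; the stored pattern is re-created by the dynamics of the whole weight matrix, which is the sense of “associative memory” at issue here.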

  1. Complex systems, like the mind and brain, can be analyzed into smaller, “dumber” components that collectively give rise to intelligent behavior. These components are simple enough to actually build and implement physically. Digital circuits are an example of such “stupid” but implementable components.

  2. The mind and brain can be understood as composed of interacting parts, not as an indivisible whole. These parts can be studied individually to gain insight into the whole system.

  3. Emergence refers to properties of a system that arise from the interaction of parts but cannot be reduced to those parts. Emergence is a matter of degree, ranging from weak (e.g. a hi-fi system) to strong (e.g. the mind). Strong emergence arises from complex, nonlinear interactions.

  4. The neural code is distributed across many neurons, not contained in any single neuron. Information is represented in the pattern of activity over many neurons. There are hints of topographic mapping in some areas of the brain, but neural coding is more distributed than localized. (A small population-coding sketch follows this list.)

  5. Representation in the brain is not an all-or-nothing property but a matter of degree. Neural activity can represent in virtue of its causal-informational properties, not depending on arbitrary encoding conventions. Representation is shaped by natural selection and development.

  6. Connectionist models provide an alternative to classical computational theories of mind. They posit distributed representations and show how complex behavior can emerge from simple units and connections. Connectionism suggests a dynamical systems view of the mind.

  7. Some theorists argue against representational and computational views of mind in favor of a dynamical systems perspective focused on nonlinear interactions, coupling, intrinsic dynamics, and self-organization. Representation and computation are seen as the wrong abstractions or units of analysis. The vocabulary of dynamical systems is preferred.

  8. There are “online” and “offline” strategies for solving problems. Online strategies, used in everyday cognition, tend to be fast, automatic, and cue-based. Offline strategies are slower, more reflective, and tap into deeper knowledge. Representation should be understood in terms of the control of behavior, not arbitrary encodings in the head.
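
As a small illustration of distributed coding (my sketch; the tuning curves and noise levels are invented), here is a population-vector decoder in which no single neuron fixes the stimulus direction, but the weighted vote of the population recovers it:

```python
# Toy population code (illustrative): the stimulus direction is read
# out from the whole population, not from any single neuron.

import numpy as np

rng = np.random.default_rng(0)
n = 64
preferred = np.linspace(0, 2 * np.pi, n, endpoint=False)  # tuning peaks

def population_response(theta):
    # Rectified cosine tuning plus noise: each neuron fires most
    # strongly for directions near its own preferred angle.
    rates = np.maximum(np.cos(preferred - theta), 0)
    return rates + rng.normal(0, 0.05, n)

def decode(rates):
    # Population vector: sum each neuron's preferred direction,
    # weighted by its firing rate.
    x = np.sum(rates * np.cos(preferred))
    y = np.sum(rates * np.sin(preferred))
    return np.arctan2(y, x) % (2 * np.pi)

stimulus = 2.0                      # radians
estimate = decode(population_response(stimulus))
print(f"true direction {stimulus:.2f}, decoded {estimate:.2f}")
```

Knocking out any single neuron barely moves the estimate, which is one concrete sense in which the representation is distributed rather than localized.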

#book-summary