
Natural-Born Cyborgs: Minds, Technologies, and the Future of Human Intelligence, by Andy Clark


Matheus Puppe


The key idea is that the human mind should not be thought of as bounded by the skin or skull. Rather, thinking and reasoning systems span brain, body, and the various cognitive tools and media we interact with. We are cyborgs not in the science-fiction sense of having implants and prosthetics but in the subtler sense that our minds and selves are intimately geared to cues and prompts from technologies outside the biological body. The mind thus leaks out into the world not via implanted electrodes but via the habitual and skilled use of external props and aids. To properly understand what human beings are, then, we need to see the dense webs of reciprocal influence that tie biological organism and external, designed environment into unified, interacting wholes.

The human brain has evolved to solve complex problems related to survival and reproduction. However, the brain alone is not sufficient for the development of human intelligence and mind. Human intelligence arises from the brain’s ability to enter into deeply integrated relationships with nonbiological components like tools, culture, and technology. The familiar idea of humans as toolmakers and users is extended - many of the tools we create become integral parts of our cognitive systems. Our deep neural plasticity allows us to seamlessly merge with tools and technology.

The example of using pen and paper to multiply large numbers illustrates this. The brain relies on the external symbolic resource of pen and paper to accomplish a complex cognitive task that it alone could not perform. This kind of neural-external hybridization is what makes human minds so potent and human intelligence so flexible and abstract. We are natural-born cyborgs - our minds seek out and integrate with nonbiological props and resources.
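The pen-and-paper point can be made concrete. Long multiplication decomposes a task the unaided brain finds hard (multiplying large numbers) into a sequence of trivial steps (single-digit products, carries, and a final addition), with the paper holding all the intermediate state. The following sketch is purely illustrative (the function name and the digit-by-digit decomposition are mine, not the book's); the `paper` list plays the role of the external symbolic resource:

```python
def long_multiply(a: int, b: int) -> int:
    """Multiply the way pen and paper does: the brain only ever
    performs single-digit products and small additions; the 'paper'
    (a list of partial products) stores everything else."""
    paper = []  # external store: one partial product per digit of b
    for shift, digit_char in enumerate(reversed(str(b))):
        digit = int(digit_char)
        row, carry = [], 0
        for a_char in reversed(str(a)):
            prod = int(a_char) * digit + carry   # single-digit step
            row.append(prod % 10)
            carry = prod // 10
        if carry:
            row.append(carry)
        # shift the row left (append zeros), exactly as on paper
        paper.append(int("".join(map(str, reversed(row)))) * 10**shift)
    return sum(paper)  # the final column-wise addition

print(long_multiply(1234, 5678))  # → 7006652
```

No individual step here exceeds what an unaided human can do in their head; the hard part has been offloaded onto the external medium.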

New technologies like AI, ubiquitous computing, and brain-computer interfaces will push this cyborg nature to new extremes, as our tools and environments become more responsive, personalized, and tailored to our needs. But this cyborgization reflects something that has always been true of human minds - we were designed by evolution to constantly annex and assimilate nonbiological resources into our cognitive apparatus. Our sense of self and mind emerges from this fluid hybrid of biology, culture, and technology.

We must understand human minds as fundamentally open, plastic, and geared towards change and hybridization. While we have an ancestral evolutionary past, human cognition operates according to a “moving target” - we are constantly reinventing ourselves by creating and assimilating new tools, culture, and technology. The line between mind and world was always porous, and new technologies simply highlight this. To understand human nature, we must appreciate our tendencies towards change, prosthesis, extension, and transcendence of bodily and cognitive boundaries. We are natural-born cyborgs.

• In 1960, Manfred Clynes and Nathan Kline coined the term “cyborg” to refer to a cybernetically enhanced human that could survive in space.

• They proposed merging humans and machines to create “cyborgs” that could automatically regulate their physiological functions to adapt to space.

• Clynes and Kline were scientists working on cybernetics, the study of control and communication in humans and machines.

• Clynes suggested the term “cyborg” to describe a hybrid of an organism and a machine with an extended self-regulating control system.

• “Cyborg” stood for either “cybernetic organism” or “cybernetically controlled organism.” Cyborgs incorporated machine components to adapt to new environments.

• The authors proposed cyborg enhancements like implanted devices to regulate breathing, heart rate, metabolism, and temperature for space travel.

• The paper discussed bypassing lung-based breathing, compensating for weightlessness, altering heart rate and body temperature, and reducing metabolism and food needs.

• Clynes coined “cyborg” and Kline said it sounded “like a town in Denmark.” But the term stuck and spread into fact and fiction.

• Cyberneticists studied self-regulating systems where a system’s output is fed back to regulate its own activity, like a thermostat. Cyborgs extended this self-regulation.

  • The example of the rat with the Rose osmotic pump showed one of the first cyborgs that integrated an artificial system for homeostatic control.

  • Cochlear implants allow deaf people to hear again by electronically stimulating the auditory nerve. More advanced implants that penetrate into the brain stem can provide more nuanced hearing.

  • Kevin Warwick, a professor of cybernetics, implanted a silicon chip in his arm that allowed him to control doors and lights. He later had a 100-electrode array implanted in his arm that could tap into the signals passing between his brain and hand. This allowed signals to be sent in both directions, potentially enabling new kinds of sensation.

  • Warwick views technologies like these as allowing humans to transcend their biological limitations and transform themselves in radical ways. His experiments aim to explore how technology could be integrated into and expand human sensory capabilities.

  • These examples show some of the first real-world cyborgs that integrate technology and biological systems. While relatively simple, they point to the potential for much deeper fusion of the human and the artificial.
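The self-regulation the cyberneticists studied, in which a system's output is fed back to control its own activity, can be sketched in a few lines. The thermostat model below is a toy: the gain, heat-loss rate, and temperatures are illustrative assumptions, not values from the text.

```python
def thermostat_step(temp: float, setpoint: float, gain: float = 0.5) -> float:
    """One feedback cycle: the output (temperature) is compared with
    the set point, and the error is fed back to drive the heater."""
    error = setpoint - temp
    heating = gain * error     # proportional control: respond to the error
    drift = -0.1 * temp        # heat loss to the environment (toy model)
    return temp + heating + drift

temp = 10.0
for _ in range(50):            # repeated feedback drives the system to equilibrium
    temp = thermostat_step(temp, setpoint=21.0)
print(round(temp, 1))          # → 17.5
```

One quirk worth noting: a purely proportional controller like this settles a little below the set point (a steady-state error), which is why real thermostats add further corrective terms. The cyborg proposal simply extends this kind of loop to physiological variables like heart rate and metabolism.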

The key points in the summary are:

  1. Classic cyborg technologies involve direct neural or bodily implantation of electronic components. These technologies are advancing rapidly in areas like sensory prosthetics, brain-computer interfaces, and neurally controlled devices.

  2. However, the depth or invasiveness of an implant itself is not what really matters for determining how “cyborg-like” a technology is. What matters more is the potential for a deep, transformative integration between biological and nonbiological components, and how much the technology enhances or alters human capacities and experiences.

  3. Non-implant technologies like wearable or embedded computing can be just as cyborg-like as invasive implants, if they facilitate a seamless integration with and enhancement of human cognitive abilities. The modern aircraft flight deck is an example of how technology can become deeply integrated with human cognition without physical implants.

  4. In the near future, non-implant cyborg technologies may offer the greatest potential for transforming and enhancing human lives by bypassing biological barriers like skin and skull. These technologies can facilitate a symbiotic cognitive relationship between human and machine.

  5. While depth of interface was once a useful rule of thumb, what really matters for cyborg technology is the potential for profound transformation and enhancement of human experiences, projects, and capacities. Non-invasive technologies may ultimately offer the greatest bandwidth for such transformations.

  • Modern commercial aircraft rely on advanced automated systems and computers to assist human pilots. The responsibilities between pilots and computers vary but in many cases the computers impose limits on what pilots can do to ensure safety. This system could be seen as turning pilots into temporary cyborgs, with computer systems providing an “organizational system” to handle certain problems automatically.

  • In our daily lives, we also routinely rely on and integrate with advanced technologies in a way that makes us natural-born cyborgs. Things like alarm clocks, traction control and ABS in our cars, computers, cell phones and other digital tools become seamlessly integrated into how we live and work. Our brains readily adapt to relying on and working with these nonbiological tools and resources.

  • We tend to underestimate how seamlessly we integrate technology into our cognition and lives. We cling to the notion that our minds and selves can only truly operate within our biological brains and bodies. But in reality, our problem-solving and cognitive capabilities emerge from a matrix of brain, body and technology. Our brains are adept at becoming “team players” with the technological tools and props we have at our disposal.

  • Cell phones in particular have become deeply integrated into many people’s daily lives and routines, to the point of becoming like an extension of our minds and bodies. Cell phones and other digital tools enable a kind of nonpenetrative cyborg technology that does not require surgical implants or direct brain interfaces. This technology is poised to become even more seamlessly integrated into our lives in the coming years.

  • What matters most for human-machine merger is the potential for deep, fluid integration and personal transformation. While brain implants and bioelectronic interfaces may help achieve this, noninvasive tools and technology can be equally as compelling a route to cyborg existence. We as a society have already embarked on this route.

  • Ubiquitous computing and transparent technologies are increasingly integrating into and shaping our daily lives in intimate ways.

  • These tools and systems are becoming so well-fitted to our lives that they are almost invisible, like the pen we write with or the neural mechanisms that control our hand movements.

  • Mark Weiser envisioned ubiquitous computing as environments becoming increasingly infused with interconnected electronic devices, ranging from tiny tabs on books to pads to large boards. These devices would animate inert objects and the environment would become smarter and more responsive.

  • As the smart environment becomes more tailored to an individual and closely integrated with them, the line between the person and the environment blurs. The environment functions as an extension of the person’s mind.

  • Software agents that monitor a person’s online activity and interests, then provide recommendations are an example of how an electronically-infused environment can become intimately integrated with a person. Over decades of co-evolution, these agents could become almost like an extension of a person’s mind.

  • Humans are primed by evolution to incorporate external elements and structures into their extended minds. Cortical plasticity and an extended childhood enable humans to deeply integrate external operations and resources into their thinking.

  • The gradual merging of human minds and nonbiological resources mirrors the integration of distinct neural subsystems within a single brain. Parts of the brain learn to rely on other parts, just as the mind learns to rely on external tools.

  • The original vision of the cyborg was a human with machine-controlled homeostasis, freeing the human to focus on higher functions. But intimate integration of tools and the environment can also function as an extension of human minds.
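The software-agent idea in the bullets above can be made concrete with a tiny preference model that updates on each observed choice and then ranks unseen items. This is an illustrative toy, not any real recommender system; the class and item names are invented for the example.

```python
from collections import Counter

class RecommendationAgent:
    """Toy agent: counts the topics a user engages with and
    recommends unseen items whose topics match those habits."""
    def __init__(self):
        self.interest = Counter()   # learned profile of the user
        self.seen = set()

    def observe(self, item: str, topic: str) -> None:
        self.interest[topic] += 1   # each choice nudges the profile
        self.seen.add(item)

    def recommend(self, catalog: dict[str, str]) -> list[str]:
        unseen = [i for i in catalog if i not in self.seen]
        # rank by how strongly the item's topic matches learned interests
        return sorted(unseen, key=lambda i: -self.interest[catalog[i]])

agent = RecommendationAgent()
agent.observe("article-1", "cybernetics")
agent.observe("article-2", "cybernetics")
agent.observe("article-3", "gardening")
catalog = {"article-4": "cybernetics", "article-5": "gardening", "article-6": "cooking"}
print(agent.recommend(catalog)[0])  # → article-4
```

The point of the example is the loop, not the arithmetic: the agent's state is shaped by the user's behavior, and the user's choices are in turn shaped by what the agent surfaces, which is exactly the kind of reciprocal coupling the extended-mind argument rests on.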

The author visited the Los Alamos National Laboratory to give a talk on mind and technology. The lab contrasts old heavy technology with new lightweight technology. The old technology required huge machines, while the new technology only needs laptops and databases.

After the talk, the author visited a nearby restaurant and then the “Black Hole,” an antiques store filled with old technology from Los Alamos. The store is run by Ed Grothus, a former lab employee turned peace activist. He buys old lab equipment by the pound and sells it in his store. The store is both a protest against technology used for warfare and a celebration of the beauty of old technology.

The technology in the store is the opposite of ubiquitous computing—it is heavy, obtrusive, and does not configure to the user. Instead, the user has to configure to it. The technology is meant to impress with its bulk and complexity. The store is a graveyard for this old style of opaque, obtrusive technology that demands attention and configuration from the user.

In summary, the author contrasts the new lightweight, ubiquitous technology at the lab with the old heavy, obtrusive technology celebrated and protested in the Black Hole. The store represents an era when technology did not cater to humans and was meant to impress them with its power and complexity.

The key distinction here is between transparent technologies and opaque technologies. Transparent technologies integrate so well with human use that they become almost invisible, while opaque technologies remain obtrusive and require conscious effort to operate. Many technologies start as opaque but become more transparent over time, as they are designed to suit human cognitive abilities and as humans adapt their skills to the technologies.

The wristwatch provides a good example of this transition from opaque to transparent technology. At first, humans did not keep precise time and relied on natural cycles. Early timekeeping technologies like sundials and water clocks were unreliable or fixed in location. Tower clocks provided some “time obedience” by regulating the timing of activities like work, but people still lacked “time discipline” - the ability to factor time consistently into their own planning.

Wristwatches enabled this transition by providing cheap, accurate, and personal timekeeping. Over roughly 500 years, portable watches evolved into increasingly transparent technologies, integrated into human time perception and daily habits. They changed humans’ relationship to time by enabling new cultural practices and ways of thinking that depend on constantly factoring time into one’s thoughts and actions. In short, as technologies are designed to suit human needs and as humans adapt to technologies, the technologies can become increasingly transparent and symbiotic.

The passage discusses the notion of information appliances—intelligent devices designed to support specific activities by processing and communicating information in a transparent, easy-to-use fashion. Three central features characterize information appliances:

  1. They are tailored to particular activities and functions.

  2. They can communicate with each other, forming an interconnecting web.

  3. They are transparent technologies, poised to fade into the background.

The vision is one of ubiquitous computing, with small, intercommunicating, and unobtrusive devices embedded in homes, workplaces, and even our own bodies. When highly integrated into human life in this way, such technologies must be maximally transparent, adding little complexity to the tasks they support. The goal is a world in which information is ubiquitously and effortlessly available.

Examples of information appliances include tiny cameras for sharing visual information, wall displays of local weather, sensor-equipped shopping carts, smart eyeglasses, and implanted bodily monitors. The key idea is that of information poised for easy and ubiquitous access, like a natural extension of human capacities.

The key idea is that technology should fade into the background and become transparent in use, allowing us to focus on the task, not the tool. This does not mean the technology itself needs to be simple. In fact, it often requires highly complex technology to create devices that are transparent and taken for granted by users. When technology becomes transparent in use, it becomes “pseudo-neural” - we learn to use it unconsciously like we have learned to use our neural capacities.

Personal information appliances that function transparently and constantly will change how we live and work. In the future, we may have implants or wearables that provide information instantly without conscious effort. Our sense of self and knowledge reflects the technologies we currently use.

We need a rich web of technologies to support us, not just smart environments. Wearable computing attaches resources directly to users, providing information processing that is integral to the user. It should allow hands-free use and present information unobtrusively. Wearable computing is human-centered but attaches computing to the user rather than embedding it in the environment. An example is a “wearable remembrance agent” - a heads-up display and software that provides memory prompts. It combines personal information with context from the environment.

However, truly seamless and invisible technologies can be hard to control and resist intervention. Tangible computing aims to avoid making tools permanently invisible. It seeks natural and easy interactions but wants to allow users to focus on tools as objects when needed, not just see through them. The ability to engage and disengage with tools is important for using them effectively.

In summary, the key ideas are creating transparent technologies focused on tasks, developing wearable and ubiquitous computing for rich support, but also retaining the ability to disengage from and reengage with tools through tangible computing. The overall goal is enhancing human capacities through seamless interactions with technologies.

Here are the key basic skills and knowledge summarized from the passage:

  1. Familiarity and expertise with a variety of digital and informational technologies, e.g. graphical user interfaces, tangible user interfaces, virtual reality, augmented reality, etc.

  2. Knowledge of interface design and how different interfaces can facilitate human-technology interaction. The passage discusses how tangible interfaces that use physical objects can help translate digital abstractions into more intuitive interactions.

  3. Familiarity with concepts such as “ready-to-hand” and “present-at-hand” in relation to human use of tools and technology. The passage suggests that the best tools and interfaces allow for easy transition between these two modes of encountering technology.

  4. Knowledge of various information technologies like global positioning systems, eyeglass displays, handheld devices, etc. The passage gives examples of how these can be combined to create augmented reality systems.

  5. Creativity and imagination to envision new ways of combining information technologies to solve practical problems or enhance human capabilities. The passage discusses innovative prototypes and concepts for new interfaces and augmented reality systems.

  6. An interdisciplinary perspective that combines insights from fields like human-computer interaction, information technology, media studies, design, and more. The passage brings together ideas from multiple disciplines.

  7. An interest in pushing the boundaries of human-technology interaction and exploring new frontiers. The technologies and concepts discussed in the passage represent innovative and forward-looking work.

In short, these elements - technological fluency, interface-design expertise, conceptual grounding in notions like “ready-to-hand” and “present-at-hand,” creativity, and an interdisciplinary outlook - come together in the development of new interface prototypes and augmented reality systems.

Augmented Reality technologies allow digital information to be overlaid onto the physical world. This can provide useful information to help with tasks like surgery, repair, or education. Researchers have developed ‘mixed reality’ games and environments to help children become comfortable with the increasing integration of the physical and digital worlds.

While Invisible Computing and Tangible Computing seem opposed, they actually have a lot in common. Both aim for technologies that are easy and intuitive to use. The differences emerge at the extremes, with some technologies aiming to remain unseen while others exploit familiar physical objects. The choice depends on the purpose and needs of the technology.

Dynamic appliances are information technologies that actively learn about and adapt to individual users. Combining dynamic appliances and transparent interface technologies could lead to highly personalized and intuitive systems. These systems become skilled at the specific needs of each user through monitoring usage and adapting functions. This kind of dynamic, personalized adaptation is similar to how our own neural systems operate.

Overall, these emerging technologies point toward a future with seamless integration of digital information and the physical world, and highly personalized systems adapted to individual human needs and capabilities. These kinds of human-centered, transparent technologies are the hallmarks of the cyborg future.

  • Our sense of body and physical self is constructed by the brain and quite plastic. It is negotiable and open to rapid revision.

  • This is demonstrated through simple experiments like the “extended nose” illusion: a blindfolded subject taps another person’s nose while their own nose is tapped in perfect synchrony, and soon comes to feel as if their own nose has extended two feet in front of them.

  • The brain constructs our body image based on correlations in sensory input and experience. So providing correlated visual and tactile input, like in the mirror box for phantom limb patients, can lead the brain to revise its model of the body.

  • The implications of this plasticity and negotiability of our body image and sense of physical self are significant for how new technologies may be incorporated and come to feel like a natural extension of our minds and bodies.

  • The brain’s ability to readily incorporate non-biological tools and objects into our sense of embodied self, as with a blind person’s cane or a sports player’s racket, shows how technology may be profoundly incorporated into our sense of self and perception.

  • Our body image and sense of physical self, while feeling permanent and stable, is actually quite transitory and updated based on our ongoing experiences and interactions with the world. New technologies will provide further opportunities for updating and revising our sense of embodied self.

The key point is that our body image is highly malleable and constructed, and new technologies may be deeply incorporated into our sense of self in ways that feel quite natural, based on the brain’s tendency to construct correlations between experience, action, and perception. So our minds and bodies may become profoundly hybridized with new tools and technologies.

  • The human visual system has a small area of high-resolution processing (the fovea) that actively moves around a visual scene.

  • The brain does not actually build up a detailed internal representation of what it sees. It extracts less information than we intuitively expect.

  • Experiments show we often fail to notice major changes in visual scenes (known as “change blindness”). For example, subjects looking at text on a screen often don’t notice when most of the text is replaced with nonsense characters outside a small window that follows their gaze.

  • Similarly, subjects viewing a picture scene often don’t notice when major elements like the colors of objects are changed.

  • Even in real-world situations, people frequently fail to notice when the person they are interacting with is suddenly replaced by someone else.

  • The brain seems to operate based on a broad sense of what’s in the visual scene rather than a detailed internal model. It retrieves more details on demand by shifting gaze. This “neural opportunism” makes us open to intimate relationships with tools and technology.

Simons and Levin show that our failures to detect visual changes are not due to the artificial nature of experiments but rather arise from our lack of precise representation of the visual world. We encode only a rough sense of the current scene, enough to guide our attention and information retrieval as needed. Demonstrations using the flicker paradigm, where images rapidly alternate with changes occurring in between, show how hard it is to spot even large changes without motion cues.

This suggests the visual brain prefers “meta-knowledge” - knowing how to get information - over rich inner models. Like Rodney Brooks’ robots, our brains try to “let the world serve as its own best model” by relying on the external world and sampling it as needed rather than building detailed inner representations. While this seems to imply our visual experience is an illusion, the case is more complex. We readily ascribe knowledge to someone who can rapidly access information from memory, like a sports fan recalling statistics. Similarly, we have a rich visual database we can access, even if much of it is in the external world.
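Brooks’ slogan can be illustrated by the difference between caching a detailed snapshot of a scene and keeping only a coarse gist while re-querying the world whenever a detail is actually needed. The sketch below is purely illustrative (the dictionary-as-world and the function names are mine); note how the stale gist also reproduces something like change blindness:

```python
def make_scene(world: dict) -> dict:
    """Keep only a coarse 'gist' (what kinds of things are present);
    fetch details from the world itself, on demand, via a closure."""
    gist = set(world)                  # rough sense of what's in the scene
    def look_at(thing: str):
        return world.get(thing)        # a 'saccade': query the world directly
    return {"gist": gist, "look_at": look_at}

world = {"mug": "red", "book": "open", "clock": "3:15"}
scene = make_scene(world)

print("mug" in scene["gist"])          # broad awareness of the scene: True
world["mug"] = "blue"                  # the world changes behind our back
print(scene["look_at"]("mug"))         # on-demand detail is always current: blue
```

The gist never registers the change (it stored no detail to contradict), yet any directed “look” returns accurate, up-to-date information - which is roughly the profile the change-blindness experiments reveal in human vision.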

Infant brains also take advantage of external information, like language. Words act as a kind of cognitive technology, letting brains explore realms otherwise hard to access. Research on chimps shows how acquiring token associations enabled them to solve more abstract categorization problems that untrained chimps couldn’t. For humans, language provides a host of cognitive shortcuts, enabling us to convey and understand complex ideas. Our complex conceptual world depends profoundly on this seed technology of language. While the biological capacity for language is still debated, for infants language is simply part of the structured world they’re born into. Their opportunistic brains make the most of this tool, just as they do the visual world.

So in summary, the key conclusion is that human cognition depends profoundly on the external world, not just internal processing. Our brains readily incorporate information in the environment, like language, visual details, and more, using them as a kind of external memory to support thought and experience.

• The experimenters trained some chimps to associate plastic tokens with the concepts of “sameness” and “difference.” After this training, these chimps were able to solve problems involving relations between relations, which untrained chimps could not do.

• The tokens allowed the chimps to reduce the complex, abstract problems to simpler problems involving the tokens, which their brains could handle. The tokens acted as proxies for the abstract concepts.

• Humans have a similar ability to use words and labels as proxies for complex concepts. This allows us to reason about abstract ideas that would otherwise be difficult for our biological brains.

• Evidence shows that precise mathematical reasoning depends on language-specific representations of numbers. We have an innate approximate number sense, but exact calculation requires culturally learned number words and symbols.

• Studies show that short-term memory for numbers depends on the language. Speakers of languages with shorter number words can remember more numbers than English speakers.

• Advanced mathematics usually requires external tools like pen and paper in addition to number words. Our biological memory alone is not enough to handle complex calculations. These tools act as extensions of our minds.

• In summary, symbolic tools like words, numbers, and writing systems act as cognitive technologies that expand human reasoning capabilities. They allow us to “offload” cognitive work onto perceptible tools and representations outside our heads.

  • Human mathematical competence arises from the interaction of biological, cultural, and technological factors.

  • The biological brain gives us an intuitive sense of numbers and quantities.

  • Cultural innovations like number words and mathematical symbols augment our biological capabilities.

  • Technologies like the abacus, pencil and paper, and computers provide an external scaffolding that allows us to solve complex mathematical problems that we couldn’t solve with our brains alone.

  • This combination of biology, culture, and technology creates an extended cognitive system for mathematics that is much more powerful than the biological brain itself.

  • More generally, human cognition often depends on creating and using tools, media, and other nonbiological resources to complement the brain’s innate capacities.

  • These resources transform problems into ones the brain can solve by relying on its strengths like pattern-matching while minimizing its weaknesses like memorizing and executing long sequences of instructions.

  • Examples include using sketchbooks in artistic creation and using notes, outlines, and slides in preparing a presentation.

  • Such tools and practices create extended cognitive systems with different profiles than the biological brain alone. They augment and reshape human thinking and reasoning.

  • The author argues that language and culture have deeply transformed human cognition and allowed us to achieve feats of thinking that would otherwise be impossible for unaided biological brains.

  • There are two main ways this happens:

  1. Developmental loop: Exposure to external symbols (like words) during development adds new cognitive capacities. For example, learning number words allows us to represent and reason about quantities in new ways.

  2. Persisting loop: Ongoing use of cultural tools like language gears our neural activity to the presence of those tools. For example, being able to write down our thoughts allows us to then scrutinize and improve those thoughts.

  • The author argues that language in particular allows us to think about our own thinking in a new way, through “second-order cognitive dynamics.” Language gives us a way to represent our own thoughts and reasoning processes, which we can then reason about and improve.

  • This has allowed humans to actively build “better worlds to think in”—environments, institutions, and practices that scaffold and enhance our thinking. Things like schools, peer review, and artistic practices are examples of this.

  • The author uses the metaphor of mangrove trees building their own islands to illustrate this. Just as mangrove seeds build the land they grow on, our words and language may build the thoughts and concepts we then inhabit, rather than just expressing preexisting thoughts.

  • In summary, human cognition should now be seen as a “moving target” that depends profoundly on cultural and technological scaffolding, especially language. Biological brains are just one part of the larger “chameleon circuitry” of human thinking that flows through culture and the environment.

The human brain is exceptionally plastic and capable of constructive learning. This means our neural architecture can alter and expand as we learn and develop. The environments we grow up in help shape the structure of our brains.

The neural constructivist view sees brain development as experience-dependent, involving the growth of new neural connections, not just tuning existing ones. Learning changes the brain’s internal computational architecture. For example, deaf children lack neural connections for hearing. Cochlear implants allow rapid development of these connections. Visual cortex requires visual input during development or vision will be impaired.

The prolonged human childhood provides an opportunity for “cultural scaffolding” to guide cognitive development in new ways. This contradicts a “dualist” view of innate human biology and acquired culture. Humans emerge from a matrix of biology, culture, and technology. There is no fixed “human nature” separate from tools and culture.

Human brains, especially in childhood, are opportunistic and able to alter themselves to exploit environmental opportunities, including technologies. They change to take advantage of problem-solving opportunities. This is seen in physical adaptations like increased thumb dexterity in younger generations using mobile technology.

The developmental openness of human brains allows them to change themselves to maximize the benefits of environments, media, and technologies they encounter while learning. This has implications for policy and education. The goal should be to provide the richest possible set of opportunities during development.

Our sense of location is a construct formed by our awareness of our current potentials for action and engagement. New technologies can profoundly impact this fundamental sense.

In a thought experiment, philosopher Daniel Dennett imagines his brain removed but still controlling his body via radio links. He finds his sense of location shifts depending on levels of control, communication, and feedback. This shows our sense of location depends on more than just beliefs about our body’s location.

Scientists have used neural signals from a monkey’s brain to directly control a distant robot. The monkey gained a “600-mile-long virtual arm.” Such technologies could be used to alter our body image and sense of location by providing new types of feedback like visual, auditory or touch sensations from a distant device.

Telepresence technologies already allow a sense of remote presence using immersive virtual reality, robotics, and haptic feedback. These technologies stretch our abilities to act and engage at a distance, impacting our sense of location and self.

In sum, new technologies may profoundly reconstruct where and who we are by reshaping our embodied sense of potential action and by altering the feedback loops between self, world, and others. Our natural-born cyborg adaptations have only just begun.

• The term “telepresence” refers to technologies that provide a sense of being present at a remote location. It was coined in 1980 by Marvin Minsky.

• Telepresence requires high bandwidth, multisensory interaction and the ability for the user to act on and explore the remote environment. This interactivity is key to generating a sense of presence. Passive viewing of a remote location, like a stationary web camera, does not create a sense of telepresence.

• Our sense of presence depends heavily on the links between perception and action. Experiments showing adaptation to visually distorted environments demonstrate how our perception is tuned to support our ability to act. Adaptation can be specific to particular actions and contexts.

• Current telepresence technologies provide only limited opportunities for action and interaction. Examples include:

› Controlling a robot arm to excavate a sandbox.

› Controlling a robot bird in an aviary using virtual reality.

› Tending a remote garden using webcams and a robot arm.

› Exploring an art gallery through the eyes of a mobile robot named Tillie.

• These telepresence installations aim to invite participation and blur the line between local and remote, but they do not provide a highly immersive sense of really “being there.” More seamless multisensory interaction and control is needed for that experience.

  • Telerobotics allows a human operator to control a robotic device over a distance. Early systems were “teleoperators” where the operator directly guided the robot. Modern telerobotics incorporates more intelligent control in which the operator issues high-level commands and the robot executes them.

  • An example is controlling a robot to explore a distant location. The operator might command the robot to go to a room and show the cat. The robot would then navigate to the room, find the cat, and send visual/audio data back to the operator.

  • Telerobotics can provide a sense of telepresence, where the operator feels present at the distant location. This depends on the feedback provided. Systems providing rich sensory feedback and highly responsive control tend to provide a stronger sense of telepresence.

  • However, a strong sense of telepresence does not require that the operator directly control all details of the robot’s actions. Even when the robot acts with some autonomy to execute high-level commands, the operator can feel in control and present. This is similar to how we remain in control of and aware through our own bodies, even though much of the details of movement and decision making happen at an unconscious level.

  • An example is walking to the store. We make the high-level decision to go but don’t consciously control all the details of walking. Similarly, in a visual illusion experiment, people’s conscious perception was misled but their motor control was accurate. This shows how we can remain in control and aware through autonomous systems.

  • In summary, telerobotic systems that incorporate intelligent autonomy need not compromise telepresence. By operating at a high level, they can still provide a sense of direct engagement and transparent control, similar to how we experience control over our own bodies.

  • The visual system in humans has two distinct components: one for controlling motor actions and one for recognition, reasoning and conscious perception. The former is unaffected by visual illusions while the latter can be tricked by illusions.

  • In a visual illusion like the Titchener circles, where one circle is made to appear larger than it is, conscious perception is fooled but the reach itself remains accurate. The conscious mind selects which circle to reach for, acting as a “supervisory controller” that issues high-level commands, while the motor control system executes them unaffected by the illusion.

  • The interplay between the conscious mind and automatic motor systems is like the interaction between a human operator and a semi-autonomous robot in teleoperation. The automatic systems are like a “zombie in the brain” that contribute to thoughts, actions and skills.

  • Early teleoperation systems with direct one-to-one control and feedback gave the compelling sense of telepresence by closely and continuously correlating neural commands, actions and sensory feedback. This activates the brain’s ability to incorporate tools and technology into the body image and identify with them.

  • Simple telemanipulation systems provide the close coupling between commands, actions and feedback required to achieve telepresence. More complex systems disrupt this coupling with variable time lags that break the sense of presence.

  • The sense of telepresence from well-designed systems is not really an illusion. Either the feeling of presence itself is veridical or the underlying mechanisms that generate it are normative parts of human cognition. With practice, telepresence can become second nature.

  • Our sense of bodily presence and agency is constructed by the brain based on correlations between our intentions, actions, and sensory feedback.

  • If these correlations are disrupted or delayed, it can undermine our sense of presence and agency. Time delays in telepresence systems, for example, can make the system feel alien or like an opponent rather than an extension of ourselves.

  • However, the brain has mechanisms for overcoming delays, like neural emulators that predict the sensory feedback from our actions. Similar emulators can be built into telepresence systems to overcome delays and support a sense of presence.

  • Supervisory control, where we issue high-level commands, typically does not feel like an extension of our agency in the way that direct, detailed control of movements does. Providing ongoing sensory feedback and the ability to take over more direct control when needed may help address this.

  • Advanced telepresence systems that provide high-resolution sensors and actuators, low latency, and modalities like vision, hearing, and touch, could create a sense of presence by connecting distant “knots in space.” However, we should not assume these systems need to replicate face-to-face interactions. They may be better suited to enabling new forms of action and experience.

  • In summary, a sense of presence depends on reliable correlations between intention, action, and feedback that make a system feel transparent and directly extended. Advanced telepresence systems may be able to leverage neural mechanisms for overcoming disruptions to these correlations and enable new, uniquely technological forms of presence and agency.
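The benefit of a predictive emulator can be made concrete with a toy control loop (an illustrative sketch, not from the book; the function, parameters, and numbers are all invented). Commands take a few steps to act, mimicking the lag in a telepresence link; the “emulator” predicts where the commands already in flight will leave the limb, in the spirit of the neural forward models described above.

```python
# Toy delayed control loop (hypothetical illustration). Without the
# emulator, the controller reacts to stale feedback; with it, the
# controller acts on a prediction of where its in-flight commands
# will leave the system.
def control(target, steps=40, delay=3, gain=0.5, use_emulator=False):
    pos = 0.0
    pipeline = [0.0] * delay                 # commands still in transit
    for _ in range(steps):
        predicted = pos + sum(pipeline)      # where pos ends up once they land
        state = predicted if use_emulator else pos
        cmd = gain * (target - state)
        pipeline.append(cmd)
        pos += pipeline.pop(0)               # oldest command takes effect now
    return pos

with_emulator = control(1.0, use_emulator=True)
without = control(1.0, use_emulator=False)
print(abs(with_emulator - 1.0) < abs(without - 1.0))
```

With the prediction in the loop, the error shrinks geometrically and the system settles on the target; acting on delayed feedback alone, the same gain tends to produce the unstable overshoot that makes a laggy telepresence system feel alien rather than transparent.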

  • Written language provides a medium for exchanging and constructing ideas that is different from face-to-face communication, not inferior to it.

  • Hollan and Stornetta argue that we should build communication tools that enhance human abilities, like shoes, rather than merely compensate for deficiencies, like crutches. E-mail and text messaging are examples of tools that provide new functionality rather than just imitating face-to-face interaction.

  • Much research into virtual reality and telepresence aims to imitate physical proximity, but the most promising path is to expand and reinvent human senses and experiences.

  • Objections to telepresence that focus on the inability to replicate physical intimacy and depth miss the point that the goal could be to create new kinds of experiences, not just imitate the familiar.

  • Simple systems like the LumiTouch, Data Dentata, and inTouch show how people can develop a sense of personal contact with low-bandwidth telepresence.

  • The 3DDI project’s PRoPs (personal roving presences) aim to create personal robot representatives that become transparent interfaces, expanding human abilities rather than merely compensating for being apart.

The key idea is that we should not judge telepresence and virtual reality as failed attempts to replicate physical co-presence. Rather, we can use technology to expand human connectivity in new ways. The most exciting tools are like shoes rather than crutches - they create new functionality rather than just serving as impersonal substitutes for what we are used to. With an open and exploratory mindset, we can develop new kinds of intimacy and depth using the possibilities of electronic mediation.

  • Stelarc is an Australian performance artist who explores cyborgism and human-machine interfaces.

  • His “Third Hand” is an electromechanical prosthesis attached to his right arm that he controls using EMG signals from his abdomen and legs. Over time, he has learned to use it intuitively, as if it were a part of his biological body.

  • In “Involuntary Body” performances, Stelarc’s biological body is electronically stimulated and moved by other people using a touchscreen interface. This disrupts the unity of voluntary control and biological body.

  • Stelarc sees these performances as a way to explore new forms of embodiment, agency, intimacy, and control enabled by cyborg technologies. However, the social impacts of these kinds of human-machine interfaces are hard to predict.

  • New technologies often end up being used in unforeseen ways, so we can’t assume cyborg technologies will necessarily expand our sense of presence and agency as Stelarc hopes. Their effects depend on how they are incorporated into social practices.

Originally, some imagined that the main use of technologies like the telephone might be for public demonstrations where people would pay to see them. The telephone, for example, was thought to possibly be used to transmit daily news to crowds gathered around a single outlet. Similarly, the idea of anyone finding a use for a computer in the home was once considered laughable.

While the full implications of Stelarc’s work are still uncertain, his explorations suggest interesting possibilities. His Third Hand, for example, is indirectly controlled by signals from Stelarc’s brain, demonstrating a primitive form of “mind control.” More direct forms of mind control are being developed to allow paralyzed people to control robotic limbs just by thinking about moving them. These systems tap into the brain’s neural signals for different motions and use them to control robotics. Such systems could also allow remote control and telepresence.

Stelarc reports that even his simple Third Hand system gives him a sense of direct, effortless control. This shows how adaptable the human brain is in learning new ways to control actions. Expert car drivers, athletes, gamers, etc. also reach a point where the tools they use become transparent.

Direct brain interfaces have been used to control robotics, virtual creatures, cursors, and more. In some cases, neural implants in the brain detect signals for intended movements and use them to control external devices. This can allow paralyzed patients to gain control over technology. While slow and difficult at first, patients can learn to control devices “by thought” as a kind of “second nature.” This may not match science fiction notions of thought control but shows the potential for profound human-machine integration.

In summary, while originally unanticipated or disparaged, technologies like the telephone, computer, and cybernetics have demonstrated unforeseen potential. Likewise, Stelarc’s art and related research into mind control and human-machine interfacing suggest possibilities for enhancing and redefining human capabilities that were once unimaginable. The adaptive power of the human brain to master new interfaces and incorporate them into a sense of transparent, intuitive control offers a model for how future human-machine merging may feel quite natural.

  • After sufficient practice controlling neural implants, the mental reflex becomes second nature. Users can will the implants to act just as they would their own limbs.

  • The sense of direct control over implants can become as strong as over one’s biological body. The neural signals come to mean whatever reliably results from them, whether moving a cursor or a finger.

  • Research is progressing on both translating neural signals into action (e.g. controlling cursors) and translating sensory inputs into neural signals (e.g. artificial vision). Successful prototypes include the Dobelle Eye, which provides artificial vision by transmitting camera signals to the visual cortex.

  • The human brain is adept at learning to exploit new inputs, whether from neural implants, tactile vision devices, or other sources. What matters is providing a reliable flow of signals correlated with the user’s actions and environment.

  • Defense agencies and researchers envision a future with “wired people and wireless gadgets.” Pilots may control planes with neural signals or gaze. People may have medical data and control systems woven into their everyday technology.

  • Early self-experimentation, like Kevin Warwick’s implant allowing communication between his and his wife’s nervous systems, provides a glimpse of how neural implants might enable new forms of connection between people.

  • Overall, a variety of technologies seem to be converging toward more seamless integration of human minds and bodies with their tools and environments. The results could profoundly transform human experience and what it means to be human.

Our sense of self and identity is complex and elusive. One important factor is our sense of control and agency over our own bodies and actions. We experience our body parts as directly responsive to our will in a way that distinguishes them from external objects. Some philosophers suggest that if we gained a similar sense of direct control and responsiveness over detached external objects, for example through neuroprosthetic connections, this could challenge our default assumption that the self is limited to the biological body. In a world of increasing human-technology mergers and connections, the self may become “soft”—more distributed, permeable and plastic. There are several ways this could unfold:

  1. We incorporate nonbiological tools and technologies into our sense of self, as “transported body parts.” For example, a neuroprosthetic limb or car that we experience controlling as directly as our biological body.

  2. Parts of our cognitive processing are performed by nonbiological systems, like software agents or bots. We may come to experience these as extensions or outposts of our own minds.

  3. We form intimate connections with other beings, biological or not, through technological links like brain-computer interfaces. This could create a sense of “we-self” that transcends the individual.

  4. The self dissolves into a flow of information and control distributed across biological and nonbiological matrices. There is no single locus of identity or will.

In all these scenarios, the self becomes more plastic, dispersed and permeable. The boundaries between human, technology and world start to melt away. While this could be disorienting, it may open up new vistas of human possibility—new modes of intimacy, creativity, cognition and experience currently unimaginable. The key is that we do not assume technology will remain separate and subservient. We must be open to symbiotic fusions that transform both sides of the human-machine divide.

  • Our sense of self and agency arises from experiencing a sense of direct control over our actions and thoughts. This includes control of biological capacities as well as technologies that become transparent and incorporated into our sense of self, like Stelarc’s Third Hand.

  • The self has two dimensions: a physical sense of embodiment and a narrative sense of self built on our goals, values, and life projects. Both of these senses of self can incorporate and adapt to advanced biotechnologies that enhance our capacities.

  • As biotechnologies become more integrated into our daily lives and transparent in use, the line between what we know and what the technology makes available becomes blurred. For example, constant access to an augmented reality system providing information could give us a feeling of “already knowing” things, even though we have to actively retrieve that information. The system becomes part of our taken-for-granted knowledge and skills that shape who we feel we are.

  • There is a distinction between conscious awareness and nonconscious cognitive processes. Much of the continuity and cohesion of our sense of self over time emerges from nonconscious processes, not just the sequence of conscious thoughts we have. These nonconscious processes are equally shaped by biological and technological factors.

  • In summary, our deepest sense of self — our persisting identity as thinking beings — emerges from an interconnected set of biological and technological capacities, some of which we are consciously aware of but many of which operate outside of our conscious control and awareness. We are, fundamentally, biotechnological hybrids.

  • Our sense of self, mind, and agency emerges from a complex interplay between conscious cognition and nonconscious neural processes. Neither has ultimate control or authority.

  • Just as we comfortably accept that our autonomic bodily functions and fast, skillful actions are partly handled by nonconscious processes, so too should we accept that external tools and resources can be deeply integrated into our cognitive systems and sense of self. Our minds are biotechnological hybrids.

  • There is no single element—neural, bodily or technological—that constitutes the mind or self. We are shifting coalitions of tools, both internal and external, biological and nonbiological. We continuously adapt to incorporate new tools as aspects of our extended cognitive systems.

  • We tend to identify with our stream of conscious thought and decision making, mistakenly seeing the rest—much of our brains, bodies and technologies—as merely tools for that internal user. But there is no central self or mind. There are only tools (neural, bodily, technological) and the stories we spin to make sense of how they cooperate.

  • A good analogy is a self-organizing pile of sand. There is no central grain of sand directing the process. Complex order emerges from the interactions of many grains. We hallucinate a central mind or self, but there is only the play of tools and the stories they produce.

  • In short, tools are us. We are natural-born cyborgs, always open to annexing new tools and technologies as aspects of our extended cognitive systems and soft selves. The biological brain is not the monolithic controller or chooser it seems. There are only shifting coalitions of tools, some neural, some technological, and the stories they continuously co-produce.

  • The notion of a fixed, unitary self is mistaken. Our selves are “soft” - distributed, decentralized, and profoundly shaped by our environments, contexts, and technologies.

  • This realization has implications for how we understand cognition, morality, education, law, and policy. These institutions are slower to change compared to the rapid pace of biotechnological development.

  • An example illustrates this well: Alzheimer’s patients who rely heavily on cognitive props and aids in their home environment are able to live independently, even though standard tests suggest they should not be able to. Their environments are essentially extensions of their minds.

  • We should reconsider how we view cognitive rehabilitation and impairment. Removing Alzheimer’s patients from supportive home environments can constitute a harm to the person.

  • While evolutionary psychology suggests our minds are adapted to Pleistocene environments, human cognition is highly adaptable to changing environments and technologies. Our minds are like “chameleons” that readily incorporate new tools and structures.

  • New technologies that extend our minds and senses do not necessarily make us “posthuman” - they are consistent with the human capacity for adaptation and incorporating new tools that has existed throughout our history.

  • An example is how technologies like timekeeping, text, and new biomechanical attachments have profoundly shaped human self-conception and experience over time.

  • We should appreciate how deeply we have always depended on non-biological technologies and structures. This helps us avoid an opposition between humans and technology. We are “natural-born cyborgs”.

  • The author observes slug trails in a garden and reflects on their functionality. The trails facilitate the slugs’ movement, allow other slugs to follow, indicate direction, and promote the growth of algae that the slugs eat. The trails are a “smart world for slugs” that enhance their activity.

  • The author compares slug trails to the electronic trails humans leave, like search histories, purchases, and location data. These electronic trails can generate new knowledge and personalize our experiences, like shopping recommendations. The electronic “free lunch” has already begun.

  • The author discusses how Argentine ants use pheromone trails to find the shortest route between their nest and food sources. The more a trail is used, the more pheromone builds up, attracting more ants. This positive feedback helps the colony quickly find the best paths.

  • The author relates this to how websites like Amazon use “collaborative filtering” to suggest products based on what other people who bought the same item also bought. This automated process helps people discover new items suited to their tastes. It exploits “swarm intelligence” principles like the ants use.

  • These electronic trails and collaborative filtering techniques avoid simplistic categorization. They let consumer activity and choices emerge to form connections, rather than assigning items to categories and suggesting based only on those categories. The flexible, unplanned connections can better reflect consumer preferences.

  • The author argues that maximizing this kind of collective self-organization and flexible information use will be crucial to gaining the most benefit from the growing web of human knowledge. Overall, the key insight is that both natural and electronic trails can enhance and structure the activity of those who make and follow them.
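The ants’ positive-feedback loop can be captured in a toy simulation (an illustrative sketch, not a biological model; the route lengths and parameters are invented). Shorter routes are completed sooner, so pheromone accumulates on them at a higher rate, while evaporation lets the colony forget stale choices.

```python
import random

# Toy model of pheromone-trail route selection (hypothetical numbers).
def simulate(short_len=1.0, long_len=2.0, ants=1000, evaporation=0.05):
    pheromone = {"short": 1.0, "long": 1.0}          # start unbiased
    for _ in range(ants):
        # each ant picks a route in proportion to its pheromone level
        total = pheromone["short"] + pheromone["long"]
        route = "short" if random.random() < pheromone["short"] / total else "long"
        # shorter trips finish sooner, so deposit rate ~ 1 / route length
        length = short_len if route == "short" else long_len
        pheromone[route] += 1.0 / length
        for r in pheromone:
            pheromone[r] *= 1.0 - evaporation        # old trails fade
    return pheromone

random.seed(0)
p = simulate()
print(p["short"] > p["long"])   # positive feedback favors the shorter path
```

No ant compares the routes; the colony-level preference emerges from the interaction of deposit and evaporation rates, which is the “swarm intelligence” the text describes.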

The book Things That Think is miscategorized as “Non-Fiction: Computing,” which limits its potential readership. In today’s world, however, consumers can discover resources through their own trails of interest, bypassing traditional categorizations and labels.

For example, a music recommendation system could suggest new artists to a user based on the purchasing patterns of other customers with similar tastes. Such a system might track how users’ tastes change over time and provide recommendations to match a user’s current interests. These kinds of trail-laying techniques are also being used for phone and internet routing to dynamically adapt to usage patterns.
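The “customers with similar tastes also bought” mechanism can be sketched as item-to-item co-occurrence counting (a minimal illustration of the general idea; the shoppers and titles are invented, and real systems use far more elaborate weighting).

```python
from collections import defaultdict

# Hypothetical purchase histories: user -> set of items bought.
purchases = {
    "ann":  {"Kid A", "OK Computer", "Amnesiac"},
    "ben":  {"OK Computer", "Amnesiac"},
    "cara": {"Kid A", "Amnesiac", "Homogenic"},
}

def also_bought(item):
    """Rank other items by how often they co-occur with `item` in baskets."""
    counts = defaultdict(int)
    for basket in purchases.values():
        if item in basket:
            for other in basket - {item}:
                counts[other] += 1
    return sorted(counts, key=counts.get, reverse=True)

print(also_bought("Kid A"))   # "Amnesiac" co-occurs in two baskets, tops the list
```

No one assigned these items to categories; the connections emerge from the trails of actual consumer activity, exactly the flexibility the text credits to collaborative filtering.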

The World Wide Web could also employ these techniques to become more “swarm intelligent.” The Principia Cybernetica Web, for example, creates, enhances, and disables links based on usage. More popular links become more prominent while little-used links fade away. The system can also create new links between pages that are frequently accessed sequentially by users. In an extreme scenario, the link structure of the web could even be personalized for each user based on their usage patterns.
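A usage-adaptive link structure of the kind attributed to the Principia Cybernetica Web can be sketched with a simple reinforce-and-decay rule (a hypothetical toy, not their actual mechanism; the pages and parameters are invented): following a link strengthens it, and little-used links fade until they are disabled.

```python
# Toy usage-adaptive linking: (source, target) -> prominence weight.
weights = {("home", "faq"): 0.5, ("home", "archive"): 0.5}

def traverse(link, boost=0.2, decay=0.95, floor=0.05):
    for l in list(weights):
        weights[l] *= decay                  # every link fades a little
        if weights[l] < floor:
            del weights[l]                   # disable links nobody follows
    weights[link] = weights.get(link, 0.0) + boost   # the followed link grows

for _ in range(5):
    traverse(("home", "faq"))
print(weights[("home", "faq")] > weights.get(("home", "archive"), 0.0))
```

The rule is the same reinforce-and-evaporate dynamic as the ant trails: popular paths become more prominent simply because they are used.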

While the current web does not adapt to this degree, some search engines do exploit the knowledge embodied in its link structure. Early search engines relied on simplistic heuristics like counting occurrences of the query terms, but these often returned too much junk and missed the most relevant results. Newer engines like Google focus more on the hyperlink structure, which encodes communal knowledge about the most authoritative web pages on a given topic.

For example, a text-only search for “Harvard” returns over a million pages, but the Harvard University homepage is not necessarily the most authoritative result. The link structure helps identify it as such. Searches on broad topics like “censorship” or terms that do not actually appear on the most relevant pages (“search engines”) also benefit from this approach. The basic procedure starts with a standard text search to get an initial set of pages. It then expands to pages that link to and from those, under the assumption that authoritative pages will link to or be linked from the initial set. It then ranks the new pages by their connections to filter for the most authoritative results.

  • Kleinberg developed an algorithm that finds authoritative web pages on a topic using the links between pages. It identifies “hub” pages that link to many “authorities” on a topic and “authority” pages that are linked to by many hubs, and it returns far more relevant results than simple text-based search.

  • Powerful search tools like Kleinberg’s algorithm and Google have made the web more accessible and useful. They allow “soft assembly” of information, dynamically grouping resources in response to a query. This contrasts with traditional information packages like books that have fixed organization.

  • “Distributed Information Systems” refer to networked electronic resources and their interactions with communities of users. Early search tools for these systems were passive, inflexible, and had “fixed semantics.” “Active Recommendation Systems” like collaborative filters address these issues by tailoring searches to users and evolving over time based on community use. They enable a kind of human-machine symbiosis.
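Kleinberg’s hub/authority idea can be sketched in a few lines (the tiny link graph is invented; the update rules are the standard HITS iteration: a page’s authority score sums the hub scores of pages linking to it, a page’s hub score sums the authority scores of pages it links to, renormalized each round).

```python
# Minimal hubs-and-authorities iteration over a made-up link graph.
links = {                     # page -> pages it links to
    "hub1": ["auth1", "auth2"],
    "hub2": ["auth1", "auth2"],
    "hub3": ["auth1"],
    "auth1": [],
    "auth2": [],
}

pages = list(links)
hub = {p: 1.0 for p in pages}
auth = {p: 1.0 for p in pages}

for _ in range(20):                       # iterate toward the fixed point
    auth = {p: sum(hub[q] for q in pages if p in links[q]) for p in pages}
    hub = {p: sum(auth[q] for q in links[p]) for p in pages}
    for scores in (auth, hub):            # renormalize to keep values bounded
        norm = sum(v * v for v in scores.values()) ** 0.5 or 1.0
        for p in scores:
            scores[p] /= norm

best_authority = max(auth, key=auth.get)
print(best_authority)   # auth1: it is linked from all three hubs
```

Notice that nothing in the text of the pages is consulted; the ranking is recovered entirely from the communal knowledge embodied in who links to whom.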

  • TALKMINE is a software agent developed by Rocha to retrieve, select and filter documents for users at the Los Alamos National Laboratory.

  • It uses a question-answer routine to build a profile of user needs and interests. This information is used to guide current and future searches, and to update the system’s knowledge over time.

  • If users start combining new keywords, the system will create new associations to guide future suggestions. The system’s knowledge changes continuously based on user input.

  • TALKMINE facilitates “human-machine symbiosis” and the “rapid dissemination of relevant information and the discovery of new knowledge.”

  • Physical information packages like books may become less important as users can assemble information tailored to their needs. This could erode divisions between information sources.

  • Validation of information may separate from packaging. Electronic journals could approve articles, which then carry multiple approvals to reach wider audiences.

  • Electronic and physical documents each have advantages. Physical documents show usefulness through wear and tear. Some software mimics this by showing usage data and highlighting popular sections. However, over-reliance on highlighting could narrow attention and create path dependency. Awareness of these issues can help avoid problems.

  • Tools like StarLogo help develop thinking about swarm-like systems. They allow influencing many on-screen agents with simple rules, showing how complex effects emerge from interactions. These tools support simulations informing research and help understand decentralized complexity in systems like the web, human minds, etc.

  • StarLogo helps us know ourselves and our technologies better, but it is limited to decentralized systems of many similar agents. SimCity complements it: by running simulated cities, players see how individual choices shape larger outcomes.

  • Even simple swarm-like systems in which multiple entities obey the same rules can provide insight into complex, heterogeneous systems like human societies. SimCity teaches a related lesson: complex systems are steered indirectly, by nudging their dynamics rather than micromanaging them.

  • Technology could augment collaborative highlighting by giving more weight to certain users. This could create self-regulating knowledge communities.

  • We can have the benefits of paper books and digital technology. Researchers are developing ‘electronic ink’ and paper that can display digital information while preserving the benefits of paper. This allows rapid updating and personalization as well as the portability and readability of paper.

  • Open-source software like Linux shows how global information sharing can enable collaborative creation. Linux was created by a distributed, unorganized group and is freely available, with companies making profit from related services. However, there are risks around long-term support, privacy, and accountability.

  • In the future, smart devices and software agents will communicate and transact automatically over networks. For example, smart medicine cabinets, toilets, and fridges could monitor health, reorder supplies, and alert doctors, enabling lower-cost healthcare. Massive increases in network traffic will require efficient routing and self-organization.

  • Some believe ubiquitous computing and smart devices will reduce our need to actively access information by autonomously supporting our needs. However, human judgment and wisdom are still required, and we must be wary of over-reliance on automation.
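The lesson StarLogo teaches can be sketched without StarLogo itself. Below is a toy one-dimensional version of the classic “termites and wood chips” demonstration (every rule and number here is invented for illustration): many agents obey one local rule, pick up an isolated chip and drop it beside another chip, and piles form with no central planner.

```python
import random

random.seed(1)
SIZE = 30
chips = set(random.sample(range(SIZE), 12))   # scattered wood chips on a line

def step():
    # one agent grabs a random chip and re-drops it beside another chip
    chip = random.choice(sorted(chips))
    chips.remove(chip)
    anchor = random.choice(sorted(chips))
    for spot in (anchor - 1, anchor + 1):
        if 0 <= spot < SIZE and spot not in chips:
            chips.add(spot)
            return
    chips.add(chip)                            # nowhere free: put it back

def clusters():
    # count contiguous runs of chips; fewer runs means bigger piles
    return sum(1 for c in chips if c - 1 not in chips)

for _ in range(500):
    step()
print(len(chips), clusters())   # chips conserved; typically fewer, larger piles
```

No agent knows about piles; the aggregate order is an emergent effect of local interactions, which is exactly the kind of decentralized complexity the text says these tools make intuitive.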

The widespread adoption of human-centered technologies could lead to new inequalities between those who have access and can benefit, and those who do not. However, some technologies are becoming cheaper, more robust, and easier to use, opening opportunities for more people. Initiatives to provide free access to information and education could also help address inequality.

While connected technologies provide conveniences, they also enable intrusion into people’s lives by tracking their activities, movements, and preferences. People value privacy, quiet, and anonymous experiences. Giving governments and companies extensive access to personal data raises concerns over how that information might be used. Overall, we must ensure that human-centered technology respects humanity in all its diversity. It should empower and enrich lives rather than exploit or diminish them.

• Cookies and other online tracking methods allow companies and advertisers to monitor our online behavior and habits. They can then use this information to target us with ads and offers tailored to our interests. While seemingly innocuous, it raises major privacy concerns.

• New technologies like smart badges, ubiquitous computing, and swarm intelligence greatly expand the potential for surveillance by allowing our environments and devices to track our movements and activities. This data can then be aggregated and linked to our real identities.

• There are some countermeasures we can take like encryption, deleting online tracks, and opting out of some services. But taking such measures requires technical skill and often means losing out on some benefits like personalized recommendations and pricing.

• While some argue we should accept reduced privacy for increased benefits and efficiency, privacy is important and hard to regain once lost. We need to make sure new technologies are responsive to our needs for both connectivity and privacy.

• Most people have aspects of their lives they prefer to keep private, even if they are not outright illegal or harmful. We should not have to forfeit privacy just because we have “nothing to hide.” Monitoring and lack of privacy can negatively impact freedom and relationships.

There is a tension between wanting privacy in the digital realm and accepting our lives will become more visible as technologies advance. Attempts to gain control by limiting technology use or employing encryption may be futile. A better approach is to adopt a democratic optimism that as lives become more visible, social norms and ethics will adapt in a more liberal direction.

While some fear loss of control as we rely more on technology, control was always limited and indirect. Effective control comes through nudging systems with their own dynamics, not micromanaging them. We should view technology as enhancing our capacities, not diminishing our autonomy, and take a biological rather than managerial stance toward technology.

Information overload is a real concern and a source of wasted time and effort. The solution lies in a combination of intelligent filtering and a new communication etiquette that avoids lengthy, irrelevant messages sent to too many recipients. Unplugging completely is an extreme response and not feasible for most.

The summary touches on the key tensions and arguments around privacy, control, and overload in an increasingly technologically mediated world. The optimistic perspective is that rather than fearing these trends, we can adopt strategies and mindsets to address them in a balanced way. But there are no simple or perfect solutions.

  • The author gave an upbeat keynote address about human-centered technologies that blur the line between thinking systems and their tools for thought. While discussing some problems, the talk was mostly positive.

  • However, many audience members expressed ambivalence or fear about a future with increasing interactions with software agents instead of humans. One specific fear was that agent technologies may degrade how people value themselves and others.

  • John Pickering argued that interacting with software agents that mimic human social interaction could warp a child’s view of themselves and others. He worried that children may come to value interactions with software agents and humans equally, degrading human-to-human interactions.

  • Kirstie Bellman compared this to only using words a spell checker recognizes, limiting vocabulary. She imagined a child only expressing basic emotions in interactions with parents after interacting with simple software agents.

  • The author presented a dilemma: either software agents will be good enough to genuinely engage our social skills, in which case why worry, or they will be too limited to substitute for human interaction, in which case, like a pet, they cannot displace interactions with parents.

  • Bellman responded that, for a working mother like herself, software agents could supplement human interaction rather than replace it. Fears about software agents seemed like a “luxury item” worry for some professionals. Interactive toys that teach social skills could even aid development.

  • Educating people about technology’s nature and limits is key to successfully combining biology and technology. Steve Talbott is a critic who discusses technology’s dangers in his publication NETFUTURE. He argues that communication technology enables contact at a distance but also underwrites separation, generating the very problem it aims to solve.

Steve argues that while technologies like cell phones and email help connect us and bring us together, they also have a “darker side” that leads to disconnection and isolation. He points out that software agents and recommender systems, while helpful in suggesting new books or music based on our interests, can lock us into an echo chamber where we only experience things similar to what we already know and like. This can limit our exposure to new ideas and different perspectives.

Steve also discusses how some people use online communication technologies like chat rooms to deceive others by presenting personas that differ from their real identities. Some do this for amusement or exploration, while others have more malicious intents, like spreading racist propaganda. Deception also comes from “cyber-bots,” programs posing as humans. Researchers have created “CAPTCHA” tests to distinguish humans from bots in online spaces.
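The CAPTCHA idea reduces to a challenge–response check. A minimal, text-only sketch follows; the function names are mine, and a real CAPTCHA renders the challenge as a distorted image (that distortion, not the matching step, is what defeats the bots):

```python
import random
import string

def make_challenge(length=6):
    """Generate the string a human must retype.

    A real CAPTCHA would render this as a distorted image; returning
    plain text keeps the sketch self-contained.
    """
    return "".join(random.choice(string.ascii_uppercase) for _ in range(length))

def check_response(answer, response):
    """Accept the response only if it matches the challenge (case-insensitive)."""
    return response.strip().upper() == answer

challenge = make_challenge()
passed = check_response(challenge, " " + challenge.lower())  # a human-style reply
```

The asymmetry CAPTCHAs exploit is that generating and checking the challenge is trivial for a machine, while reading the distorted rendering is (ideally) easy only for a human.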

However, Steve notes that new communication technologies also have benefits like enabling the quick spread of important truths and bypassing traditional barriers like censorship. He gives the example of an email written by an Afghan-American after 9/11 that provided context about Afghanistan’s history and politics. In summary, while technologies provide both connection and disconnection, and enable both truth-telling and deceit, understanding how they work can help us maximize the benefits and minimize the downsides.

  • The author received an email within days of the 9/11 attacks arguing against bombing Afghanistan in retaliation. The email pointed out that Afghanistan had already suffered greatly from war and poverty.

  • The email spread widely, reaching millions of people, even though it lacked the authority or media backing of mainstream messages. It demonstrated the power of the internet to spread ideas based solely on their relevance and timing.

  • The internet enables both deceit and truth, and it is up to individuals and society to encourage the spread of truth. One concern is the lack of quality control on the internet, with so much information of varying quality and accuracy.

  • However, the internet also enables new forms of collective filtering and quality control. The website Slashdot developed a system where users rate each other’s posts, and users with highly-rated posts become moderators. Users can then filter posts based on the ratings. This harnesses “swarm intelligence” to filter information.

  • The author argues that while he values the role of the physical body, his support for telepresence and digital communication does not contradict that. Human minds are adapted to work with external structures, both in the body and the world. Technologies like telepresence are extensions of the mind’s natural tendency to interact with and incorporate external structures.

  • In summary, the internet enables both threats and benefits. But it also enables new collective and self-organizing solutions to the problems it creates. And mind-extending technologies do not contradict but rather build upon the mind’s natural interactions with body and world.
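The Slashdot-style rating-and-filtering scheme described above can be sketched as a tiny model. The −1..5 score range and the reader-chosen threshold follow Slashdot convention; the class and function names are illustrative:

```python
from dataclasses import dataclass

@dataclass
class Post:
    author: str
    text: str
    score: int = 1  # posts start at a baseline score

def moderate(post, delta):
    """A moderator nudges a post up (+1) or down (-1), clamped to the -1..5 range."""
    post.score = max(-1, min(5, post.score + delta))

def visible_posts(posts, threshold):
    """Each reader picks a threshold; only posts rated at or above it are shown."""
    return [p for p in posts if p.score >= threshold]

posts = [Post("alice", "insightful comment"), Post("bob", "off-topic spam")]
moderate(posts[0], +1)   # rated up by one moderator
moderate(posts[1], -1)
moderate(posts[1], -1)   # a second down-mod is clamped at -1
shown = visible_posts(posts, threshold=2)
```

The “swarm” part is that no single editor decides what is visible: many small moderation acts aggregate into a score, and each reader filters by it independently.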

  • The worries about “disembodiment” stemming from technology use are overblown. Technology use does not necessarily lead to isolation or disconnect from the physical world.

  • Online communities can provide connections for those with niche interests that would be hard to find in person. However, these communities also risk further marginalizing these groups from mainstream society.

  • The vision of the “self” as pure information that can be separated from the physical body is misguided. Our minds and selves arise from the interplay of our biological brains, bodies, and the technologies we interact with.

  • Technology like wearable computers, augmented reality, and richer interfaces will likely increase our mobility, connectivity, and engagement with the physical world rather than decrease it.

  • Individuals will likely identify with and participate in a wide range of social groups and communities, some online and some in person. Technology can enable more fluid transitions between these groups rather than locking us into just one community.

  • In summary, fears of disembodiment and isolation due to technology are overblown. New technologies are more likely to enable increased mobility, social connectivity, and engagement with the physical world rather than the opposite. Individuals will form complex social identities across both online and in-person communities.

Here is a summary of the three terms:

Disembodiment: A term discussed by Katherine Hayles referring to the feeling of detachment from one’s physical body that can arise in immersive digital environments and virtual reality.

Contact: Refers to interactions and connections between humans and technology. As technology becomes more integrated into our lives, the contacts between humans and machines are becoming more extensive and intimate. Things like brain-computer interfaces and augmented reality systems provide an increasing level of contact and connection.

Sexuality: There are concerns that increased contact and immersion in digital environments could negatively impact human sexuality and relationships. However, others argue that technology may simply provide new avenues for exploring and expressing human sexuality in virtual environments and via teledildonics and other means. So technology could potentially enhance as well as disrupt human sexuality.

In summary, disembodiment refers to the feeling of detachment from the body that technology can induce, contact refers to the increasing connections between humans and technology, and sexuality refers to how technology may impact human sexual experiences and relationships. The key is ensuring these new technologically mediated experiences do not isolate us but rather enhance embodied experience and bring us together.

Our minds and bodies adapt to changing circumstances in complex ways. Some adaptations are hardwired and happen automatically, as reflected in “phantom limb” cases where amputees continue to experience sensations from missing limbs. But adaptation also involves learning and expectation. Our perceptions are shaped by experience, context, task demands, and “online” strategies. Some examples:

• We perceive only a fraction of the information available in a visual scene. Our perceptual span is constrained by task demands like reading vs. scene perception.

• Magicians exploit our tendencies to perceive what we expect to perceive, not what is really there. They direct our attention and manipulate our expectations.

• In virtual reality, our perceptions adapt to the simulated environment. We can experience “presence”—the feeling of really being in the virtual space. This shows how perception depends on experience and expectation, not just sensory input.

• Our actions also depend strongly on context and task. They are not directly determined by sensory inputs and perceptual experiences. We learn specialized skills, like typing or playing an instrument, that depend on context and practice.

In all these cases, mind and world are deeply intertwined. Our mental lives unfold in a dense web of social, cultural, and technological scaffolding. Perception, cognition, and action emerge from this web of scaffolding, not just from what’s “inside” the brain. The brain adapts to and exploits the scaffolding, incorporating it into the fabric of the cognitive system itself.

O’Regan argues that visual perception depends heavily on the constructive use of information outside the brain. According to O’Regan, the visual world works as an “outside memory” that can be accessed as needed. This view contrasts with more traditional views that see visual perception as involving the construction of detailed inner representations.

Change blindness studies show how much we fail to notice changes to visual scenes. This supports the view that we do not have highly detailed representations of the visual environment in our mind. Some theorists argue that the world can serve as its own best representation, and we can perceive by using the world as an “outside memory.”

Brooks argues that intelligence does not require complex inner representations. Creatures can be intelligent by exploiting the properties of their environments and their own embodied experiences in those environments.

Clark, Dennett and others defend similar “extended mind” views. They argue that the mind depends on the environment and technological props, not just the brain. Mathematics, for example, is enabled by environmental props and sociocultural practices as much as brain-based abilities.

Early development may depend more on experience-driven neural growth than on innate knowledge. The mind is molded to the world it encounters. The brain develops into a sophisticated control system for embodied experience.

So in summary, these theorists argue for a highly interactionist vision of mind and environment. Visual perception, cognition, and even development depend crucially on environmental resources and real-time embodied encounters. The mind extends beyond the brain into the world of real-time embodied activity and environmental encounters. Inner representations are often either absent or surprisingly sparse and sketchy. The key idea is that the world itself can serve as its own best model. The mind is geared to use the world as an “outside memory.”

Here are the citations:

  1. W. Thach, H. Goodkin, and J. Keating, “The Cerebellum and the Adaptive Coordination of Movement,” Annual Review of Neuroscience 15 (1992): 403–42.

  2. V. S. Ramachandran and S. Blakeslee, Phantoms in the Brain: Probing the Mysteries of the Human Mind (New York: William Morrow, 1998), 59.

  3. K. Goldberg, ed., “Introduction: The Unique Phenomenon of a Distance,” The Robot in the Garden (Cambridge, Mass.: MIT Press, 2000).

  4. Ibid. See all the papers therein, especially Machiko Kusahara’s survey, “Presence, Absence and Knowledge in Telerobotic Art.”

  5. E. Kac, “Dialogical Telepresence and Net Ecology,” in Goldberg, The Robot in the Garden, 188.

  6. Hannaford, “Feeling Is Believing,” in Goldberg, The Robot in the Garden, 274.

  7. Jim Hollan and Scott Stornetta, Proceedings of the ACM (Association for Computing Machinery), ACM 0-89791-513-5/92/0005-0119 (1992): 119–25.

  8. Ibid., 125

  9. Ibid.

  10. This term was coined by Howard Rheingold in his classic Virtual Reality (London: Secker and Warburg, 1991).

  11. Hubert Dreyfus, “Telepistemology: Descartes’s Last Stand,” in Goldberg, The Robot in the Garden, offers a balanced and sophisticated treatment of such worries.

  12. If your partner was very familiar to you, an emulation circuit might help here, but that seems a little dramatic, even for my liberal tastes.

  13. Albert Borgmann, “Information, Nearness and Farness,” in Goldberg, The Robot in the Garden, calls this property of endless richness “repleteness.”

  14. Dreyfus, “Telepistemology,” in Goldberg, The Robot in the Garden, 62.

  15. A. Chang, B. Resner, B. Koerner, X. Wang, and H. Ishii, “LumiTouch: An Emotional Communication Device” (short paper), in Extended Abstracts of Conference on Human Factors in Computing Systems (CHI ’01), (Seattle, Washington, USA, March 31–April 5, 2001), (New York: ACM Press), 313–14.

  16. See S. Brave, A. Dahley, P. Frei, V. Su, and H. Ishii, “inTouch” in SIGGRAPH [Special Interest on Computer Graphics] 1998: Conference Abstracts and Applications of Enhanced Realities (New York: ACM Press, 1998).

  17. J. Canny and E. Paulos “Tele-Embodiment and Shattered Presence: Reconstructing the Body for Online Interaction,” in Goldberg, The Robot in the Garden, 280–81.

  18. This theme is also explored by M. Indinopulos “Telepistemology, Mediation and the Design of Transparent Interfaces,” in Goldberg, The Robot in the Garden.

  19. Canny and Paulos, in Goldberg, The Robot in the Garden.

  20. N. K. Hayles, How We Became Posthuman (Chicago: University of Chicago Press, 1999), 291.

  • The linked-dancers application is referenced, and Pattie Maes wrote about brain-like software agents.
  • The thought-controlled prosthesis is based on the work of Nicolelis, Mussa-Ivaldi, Birbaumer, and others.
  • The “prosthetic car” is inspired by DERA’s “Cognitive Cockpit.”
  • Footnotes reference Ed Regis’s Great Mambo Chicken and the Transhuman Condition, Daniel Dennett’s Elbow Room and Consciousness Explained, and Antonio Damasio’s The Feeling of What Happens.
  • Other references discuss narrative selves, cognitive processes below the threshold of consciousness, the role of environment in neural degeneration, and evolutionary psychology.
  • Examples of swarm intelligence and distributed problem solving include mucus production in snails, ant colony optimization, Google’s search algorithms, and hyperlink analysis to determine authoritative web sources.
  • A dynamic systems approach views development as the self-organized emergence of new forms of order and complexity.
  • Open-ended symbiosis: The recommendation system and the user are in a feedback loop, each influencing the evolution of the other.
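The hyperlink-analysis idea mentioned above (pages linked to by other important pages count as authoritative) can be sketched as a power-iteration PageRank. This is the textbook simplification, not Google’s actual production algorithm:

```python
def pagerank(links, damping=0.85, iterations=50):
    """Power-iteration PageRank over a dict mapping page -> list of outlinks."""
    pages = list(links)
    n = len(pages)
    rank = {p: 1.0 / n for p in pages}          # start with uniform rank
    for _ in range(iterations):
        new = {p: (1.0 - damping) / n for p in pages}
        for p, outs in links.items():
            if not outs:                        # dangling page: spread rank evenly
                for q in pages:
                    new[q] += damping * rank[p] / n
            else:                               # share rank among outlinks
                for q in outs:
                    new[q] += damping * rank[p] / len(outs)
        rank = new
    return rank

# Three pages: both A and B link to C, so C ends up the most "authoritative".
ranks = pagerank({"A": ["C"], "B": ["C"], "C": ["A", "B"]})
top = max(ranks, key=ranks.get)
```

No page declares its own importance; authority emerges from the link structure of the whole graph, which is what makes this a “swarm intelligence” style computation.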


Here is a summary of the table of contents:

  • Introduction
  • Chapter 1: Cyborgs Unplugged - Introduction to cyborgs, discussions of practical and theoretical implications.
  • Chapter 2: Technologies to Bond With - Discussion of technologies like wearable computing, virtual reality, and telepresence that facilitate human-technology merging.
  • Chapter 3: Plastic Brains, Hybrid Minds - How the brain can adapt and change based on technology use, and how this can impact our sense of self and cognition.
  • Chapter 4: Where Are We? - Discussions of the current state of human-technology mergers and predictions for the near future.
  • Chapter 5: What Are We? - Exploration of how technology may impact what it means to be human. Discussions of posthumanism and transhumanism.
  • Chapter 6: Global Swarming - How connecting humans and technology at a global scale may enable a kind of collective intelligence or “global swarming”.
  • Chapter 7: Bad Borgs? - Potential downsides and risks associated with increasing human-technology interdependence like loss of privacy, addiction, and inequality.
  • Chapter 8: Conclusions: Post-Human, Moi? - Final reflections on how technology may enhance and extend humanity rather than diminish it. Thoughts on embracing our cyborg nature.
  • Notes
  • Index

The central themes of the book appear to be exploring how technology and humanity are becoming increasingly intertwined, both theoretically and practically. The book covers how specific technologies can facilitate merging, how this impacts our minds and sense of self, predictions for where this trend may lead, potential pros and cons of “cyborgization”, and thoughts on what the future of human evolution may look like in an age of advanced technology. Overall it seems to take a fairly optimistic perspective on the potential benefits of embracing our cyborg nature.

#book-summary