
Switching Codes: Thinking Through Digital Technology in the Humanities and the Arts (edited by Thomas Bartscherer and Roderick Coover)


Matheus Puppe



  • This book aims to bring together scholars, scientists, and artists to reflect on the impact of digital technology on thought and practice in the humanities and the arts.

  • It is responding to the “two cultures” problem identified by C.P. Snow, where scientific and humanistic discourses have become increasingly isolated, to the detriment of both. Digital technology has further complicated this divide.

  • The book seeks to facilitate dialogue between these disciplines through contributions from specialists in each field. Contributors address topics they are actively researching that are also relevant to a broader audience.

  • The goal is for the works to be comprehensible to experts across many fields to encourage exchange between the digital disciplines and humanities/arts.

  • The epilogue by Richard Powers further sparks this dialogue by responding to the essays through a work of speculative fiction set in a future shaped by the anticipated technological developments.

  • Overall, the book aims to bridge the divide between the digital sciences and humanities/arts through cross-disciplinary exchanges on the impacts of technology.

The introduction discusses the structure and goals of the book “Switching Codes”. It is divided into four sections, with an interlude in the middle and an epilogue at the end. Each section concludes with responses from different disciplines to promote cross-disciplinary dialogue.

Several key themes emerge from the collection. As vast amounts of digital data become available, how will it be organized and filtered to create meaningful information? Collaborative and collective works are also increasingly common, changing what constitutes a work and social organization. The impact of technology raises questions about what it means to be human relative to machines.

The book aims to foster discussion on how digital technology is transforming scholarship, art, and culture through developing shared vocabulary and understanding across fields. It models cross-disciplinary conversation by publishing traditional essays alongside experimental works. Publishing in print format reflects how the book form nurtures certain modes of thought in contrast to digital publications.

  • The essays in the book Switching Codes constitute an exchange between the scholarly and creative cultures of computing, the humanities, and the creative arts. They build a conversation across these disciplines.

  • Digital technologies are stimulating bridge-building within and between cultures. Terms used to describe intellectual and creative work now derive from computing and are being understood in new contexts.

  • The essays examine concepts from different fields as they gain new meaning or currency in digital contexts. They address how technology impacts thinking and representation through specific examples.

  • The book aims to seek common ground between disciplines while maintaining the integrity of different worldviews. It speculatively links ideas and makes propositions to further cross-disciplinary debates.

  • New technologies are bridging gaps by making digital scholars and artists out of humanists, and vice versa. The integration of diverse media adds to enduring discourses on rhetoric and poetics.

  • Readers may discover and redefine aspects of these fields in their own work. The goal is to illuminate shifting relationships between ideas and methods across conversations and disciplines.

Here is a summary of the key points from the readings:

  • The Language of New Media by Lev Manovich (2001) examines the language and grammar of new media technologies like computer graphics, digital photography, the internet, and interactive installations. It argues that new media objects are composed of numerical representations and are therefore infinitely reproducible and modifiable.

  • Scholarship in the digital age by Christine Borgman (2007) discusses how the internet and digital technologies have transformed research practices and infrastructure. It explores issues around digital data, copyright and the economics of scholarly communication.

  • Holding on to reality by Albert Borgmann (1999) examines the impact of information technologies on culture and reality. It argues that virtual worlds blur the lines between real and not real.

  • Sorting things out by Geoffrey Bowker and Susan Leigh Star (1999) analyzes classification systems and their social and political consequences. It explains how categories shape research, organizations and perceptions.

  • As we may think by Vannevar Bush (1945) envisioned the future of personalized information management devices like the memex. It discussed how such technologies could augment human intellect.

  • The Blackwell guide to the philosophy of computing and information edited by Luciano Floridi (2004) is a collection of essays that explore philosophical questions around computing, information and digital technologies.

  • How we became posthuman by Katherine Hayles (1999) examines the relationships between bodies, technology and information from cybernetics to virtual reality. It discusses posthumanism and technological embodiment.

The summaries focus on the key themes, arguments and insights discussed in each reading.

For data to be truly accessible, it needs to be discoverable, understandable, and retrievable via the internet. However, achieving these goals can be challenging, especially with large and diverse datasets.

Two examples are given of past datasets that were available digitally but not truly accessible: NASA earth observation data that could initially only be accessed by ordering physical tapes, and social science survey data available through separate difficult-to-use websites requiring manual downloading and format conversion.

Standards are needed to avoid these issues as datasets increase, like those developed for astronomical data.

Technologies like GIS and 3D modeling help make data more understandable by enabling visualization and exploration. The social informatics data grid and Buddhist cave reconstruction project are given as examples.

Automated analysis allows vast amounts of accessible data to be compared and analyzed in new ways not possible manually. Examples of text analysis, scientific literature mining, and consensus tracking are provided.

Making data accessible for both human and automated access requires technologies like web services that define standardized access interfaces. This allows distributed analysis networks like the cancer biomedical grid.
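To make the idea of a standardized access interface concrete, here is a minimal Python sketch of a client querying a hypothetical data service over HTTP. The base URL, endpoint, parameter names, and response shape are invented for illustration and are not part of any system described in the book.

```python
import requests  # third-party: pip install requests

# Hypothetical standardized endpoint: the same documented interface can be
# used by a person's script or by an automated analysis pipeline.
BASE = "https://data.example.org/api/v1"

resp = requests.get(
    f"{BASE}/observations",
    params={"instrument": "satellite-1", "start": "2004-01-01",
            "end": "2004-01-31", "format": "json"},
    timeout=30,
)
resp.raise_for_status()

# Assumed response shape for the sketch: {"records": [{"timestamp": ..., "value": ...}]}
for record in resp.json()["records"]:
    print(record["timestamp"], record["value"])
```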

Computer modeling and simulation are also discussed as important research tools enabled by increased computing power, like modeling aircraft or forecasting weather without physical experimentation.

Here is a summary of the key points about the behavior of complex systems and how computer simulations are used to study them:

  • Complex systems exhibit emergent behavior that is difficult to predict from studying individual components. Researchers use computer simulations to model complex systems and study how their behavior is impacted by initial conditions or parameter changes.

  • Simulations are used across many fields, from modeling individual organisms to entire ecosystems in fields like biology and climate science. Running many simulations under varied conditions also helps researchers assess the robustness of their results.

  • The social sciences and humanities are less advanced in modeling, but pioneering studies show promise. In economics, computational implementations of theoretical models are used to study phenomena like production, consumption, and pricing; such models can capture rational expectations over time.

  • Agent-based models have become popular, where individual agents and their rule-based interactions are modeled. This allows researchers to observe emergent behavior at the system level across topics like societal development (a minimal sketch of such a model follows this list).

  • In summary, computer simulations are increasingly used by researchers across fields as tools to gain insights into the dynamics and sensitivities of complex systems that would be difficult to study through experimentation or analysis alone. They provide a way to systematically explore non-linear and emergent behaviors.
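As mentioned in the list above, agent-based models let system-level patterns emerge from simple local rules. The following toy Python model is purely illustrative and does not come from the book: agents on a ring adopt a neighboring opinion under a simple conformity rule, and clustering emerges without being programmed into any individual agent.

```python
import random

# Deliberately minimal agent-based model: 0/1 opinions on a ring of agents.
N_AGENTS, STEPS, NOISE = 100, 200, 0.02
random.seed(0)
opinions = [random.choice([0, 1]) for _ in range(N_AGENTS)]

for _ in range(STEPS):
    i = random.randrange(N_AGENTS)
    left = opinions[(i - 1) % N_AGENTS]
    right = opinions[(i + 1) % N_AGENTS]
    if random.random() < NOISE:
        opinions[i] = random.choice([0, 1])  # idiosyncratic change
    elif left == right:
        opinions[i] = left                   # local conformity rule

# System-level outcome that no single rule dictates directly.
print("share holding opinion 1:", sum(opinions) / N_AGENTS)
```

Changing the noise level or the neighborhood rule and re-running is exactly the kind of parameter exploration the summary describes.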

Advances in communication technology have the potential to greatly democratize research by allowing anyone with an internet connection to access vast amounts of data, tools, and expertise from around the world. This could allow inspirational teachers to reach an unlimited number of students.

By capturing data from online interactions in virtual worlds, social scientists can conduct new types of observational and experimental research into topics like group dynamics and knowledge sharing. However, this also raises significant ethical issues around privacy and consent.

As more scholars collaboratively tag and annotate data online, it creates opportunities for deeper collective analysis where analyses, opinions and computational processes can be hyperlinked to data and each other’s work, benefiting both human and automated understanding.

If programmed correctly, computational assistants could take on more routine research tasks, freeing up humans to focus on creative problem-solving. The goal is a symbiotic human-computer partnership where each plays to their strengths.

By mobilizing massive numbers of online participants, “citizen science” projects are able to harness “massively human processing” to tackle problems at a scale not possible before. This could transform how knowledge is generated from the data people naturally produce.

  • The goal of fully encoding information as unambiguous semantic statements is unlikely to be achieved soon, as humans still struggle to fully understand each other even within the same language. However, tools like ontologies and thesauri can help facilitate communication within specific domains where agreement exists on semantics.

  • We can develop methods to reason about, translate between, detect inconsistencies in, and automatically synthesize semantic statements to a limited extent within restricted domains (a toy crosswalk sketch follows the summary below).

  • Large amounts of data require automated analysis approaches since manual analysis is no longer feasible. Challenges include the exponential growth of data outpacing computational improvements and the quadratic complexity of many analyses. Approximate and probabilistic methods are needed.

  • Distributed systems and grid technologies are needed to integrate resources for large-scale collaborative analysis and simulation that exceed any single institution’s capabilities. Issues like service composition, trust, and provenance must be addressed.

  • Data access challenges include licensing costs restricting access and specialized expertise/equipment needed for large datasets. Public access systems and computing resources aim to broaden participation.

  • Incentives are needed for creating, curating and sharing digital information and data, which can be addressed through reputation and incentive systems.

  • Computational skills must be better incorporated into education to enable broader use of these methods in research.

So in summary, while fully unambiguous semantic encoding is far off, progress can be made within domains, and computational approaches are transforming research through automated large-scale data analysis, simulation, and resource sharing - but challenges around data access, system design and incentives remain.
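As a small illustration of the point about translating semantic statements within a restricted domain, here is a toy Python crosswalk between two hypothetical metadata vocabularies. The field names and the mapping are invented; a real system would rely on shared ontologies or thesauri rather than a hard-coded dictionary.

```python
# Illustrative only: where two communities agree on semantics, a simple
# crosswalk can translate records between their vocabularies and flag
# terms that have no agreed mapping.
crosswalk = {           # hypothetical mapping between two vocabularies
    "creator": "author",
    "title": "title",
    "issued": "publication_date",
}

def translate(record):
    translated, untranslatable = {}, []
    for field, value in record.items():
        if field in crosswalk:
            translated[crosswalk[field]] = value
        else:
            untranslatable.append(field)   # needs human (or richer) mediation
    return translated, untranslatable

rec = {"creator": "C. Borgman", "issued": "2007", "subject": "scholarly communication"}
print(translate(rec))
# ({'author': 'C. Borgman', 'publication_date': '2007'}, ['subject'])
```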

Here is a summary of the given text:

  • The impact of computation on research cannot be reduced to a simple formula. While computational methods have transformed fields like biology, physics, climate science, etc., other fields have seen little impact so far from computation.

  • When computational methods are applied to a research field, they have a profound impact on the methods, culture, and organization of research in that field. Resources must be diverted to computational tasks like software development, maintaining equipment, and data curation.

  • Research becomes more collaborative as these computational tasks require more people and resources. Factors like what constitutes a publishable result, graduate training, tenure criteria, and funding allocation within the field can all be affected by the application of computational methods.

  • In conclusion, computational methods have significantly changed some fields but have had little impact on others so far. When applied successfully, computation profoundly impacts the way research is conducted in terms of culture, methods, collaboration needs, and resource allocation within a research community.

  • Sensemaking is the process of understanding information and the world. Digital sensemaking does this with the help of digital tools and online information. However, current search engines and tools fall short for helping people make sense of all the available information.

  • People have “information diets” where they allocate their limited attention across different topics. Search engines are optimized for aggregate popularity but not individual interests.

  • Three main obstacles stand in the way of digital sensemaking: 1) tracking new information relevant to individual interests on an ongoing basis is difficult; 2) search engines prioritize older, popular pages over fresh content; and 3) news services categorize too broadly to fit the specialized interests in people’s information diets.

  • Social sensemaking strategies that harness collective knowledge, judgment and shared understanding may help counter these challenges by improving how people evaluate information quality, develop understanding, and apply information collectively.

  • Many people use RSS feeds and news alerts to keep up with important information across a wide range of topics, given their limited time. However, neither approach is perfect for sorting through the large volume of information.

  • There are three key challenges for making sense of digital information: 1) Tracking information in one’s core interests, 2) Discovering information at the edges of one’s interests, and 3) Understanding new subject areas.

  • For tracking core interests, better approaches are needed for developing a useful topical structure, organizing new information by topic, and presenting articles in a logical order within each topic.

  • Information at the “frontiers” or edges of one’s interests provides opportunities but exploring it requires help managing the larger volume of less familiar information.

  • When learning about a new subject area, improved approaches are needed to support rapid understanding of unfamiliar topics that suddenly become priorities.

In summary, better tools are needed to help people effectively track important information in their areas of focus, discover related information at the edges of their interests, and efficiently understand new subject areas.
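A minimal sketch of the “tracking core interests” task, using the third-party feedparser library to filter a feed against a personal watch list. The feed URL and keywords are placeholders, and a real tool would need the topical structure and ordering described above rather than crude keyword matching.

```python
import feedparser  # third-party: pip install feedparser

# Keep only feed items that mention topics on a personal watch list.
WATCH_LIST = {"digital humanities", "ontology", "text mining"}  # illustrative

feed = feedparser.parse("https://example.org/news/feed.xml")    # placeholder URL
for entry in feed.entries:
    text = (entry.get("title", "") + " " + entry.get("summary", "")).lower()
    if any(topic in text for topic in WATCH_LIST):
        print(entry.get("title"), entry.get("link"))
```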

Here is a summary of the key points from the given sections:

  • The approach uses a hierarchical generate-and-test algorithm to build fine-grained topic models and coarse-grained topic models for indexing content.

  • For fine-grained topics, it analyzes training pages to select seed words and generates combinatorial queries to identify patterns in the training examples. The top-rated query is chosen as the pattern for that subtopic (a rough sketch of this generate-and-test step appears after this list).

  • Coarse-grained models characterize topics based on word frequency profiles to be less sensitive to noise on web pages with mixed content.

  • As new content is collected, it is classified by subtopic by matching to the generated queries. When humans identify relevant examples not matched, they are added as positive examples to refine the queries.

  • The indexing methodology aims to maintain an “evergreen” index that can incorporate new material over time by re-running the machine learning on updated training data.

  • Social media approaches to determining interest/importance of items, like Digg, use voting and social networks. But positive feedback loops and group voting behavior can influence rankings and require modifications to the algorithms.

  • Digg is a social news aggregation site where users can vote on stories, with popular stories rising to the top. The types of articles that appear on Digg tend to be weighted towards technology topics like the internet, computers, games and videos. Political and religious topics are not well represented.

  • Controversial articles may get canceled out on Digg if there are equal numbers of votes for and against. This leads to certain viewpoints and topics being suppressed.

  • One issue with existing social rating systems is that they aggregate all user votes into a single pool, which can lead to “tyranny of the majority” where popular views dominate.

  • A better approach may be to organize users into multiple interest groups or communities, each with more homogeneous views. This would allow smaller groups to explore niche topics without getting overwhelmed by majority views.

  • Users could belong to multiple communities corresponding to their different interests. And communities covering similar topics from different perspectives could exist, like political communities from different viewpoints.

  • Organizing into communities addresses issues of certain topics or views getting suppressed, but it’s also important for communities to remain transparent to prevent narrow self-focus. Users should be able to see discussions in other related communities.
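As flagged in the list above, here is a rough Python sketch of the generate-and-test idea behind fine-grained subtopic queries: candidate conjunctive queries are generated from seed words and scored against positive and negative training pages. The example pages, seed words, and scoring rule are invented for illustration and are far simpler than the system the essay describes.

```python
import itertools

# Hypothetical training pages for one subtopic (positives) and off-topic pages.
positives = [
    "solar cells efficiency photovoltaic panel rooftop install",
    "photovoltaic panel pricing rooftop solar subsidy",
]
negatives = [
    "solar system planets orbit astronomy",
    "cell biology membrane protein",
]
seed_words = ["solar", "photovoltaic", "panel", "rooftop", "cell"]

def matches(query, page):
    # A "query" here is a conjunction of words that must all appear on the page.
    return all(word in page.split() for word in query)

def score(query):
    # Reward queries that hit positive examples and penalize false hits.
    hits = sum(matches(query, p) for p in positives)
    false_hits = sum(matches(query, n) for n in negatives)
    return hits - 2 * false_hits

# Generate candidate conjunctive queries combinatorially and keep the best one.
candidates = [c for r in (1, 2, 3) for c in itertools.combinations(seed_words, r)]
best = max(candidates, key=score)
print(best, score(best))
```

New pages would then be classified by testing them against the winning query, and human corrections would be folded back in as new training examples, which is the “evergreen” re-training loop described above.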

Here are the key ideas and trends synthesized from the passage:

  • Social indexing addresses common sensemaking challenges like tracking core topics, discovering frontier topics, and orienting oneself in new subject areas.

  • It reimagines traditional indexes as computational, social, and interconnected resources that can be trained on user feedback and activities.

  • Communities form around topics of interest, with their own indexes, members, information sources, and ratings. Neighboring communities provide information frontiers.

  • Frontier information can be surfaced and rated based on interest levels in neighboring communities, distance between communities, and topic match. Articles are organized by home community topics.

  • Multiple communities can interconnect, with some designated as frontier neighbors explicitly or algorithmically based on overlap. This forms networks and “constellations” of communities.

  • Orientation in new topics is supported by exploring community indexes, which embody expert understandings and provide layered topic overviews, questions, and approved answers/sources.

The key trend is harnessing social interactions and relationships between users and communities to develop more sophisticated computational indexes and pathways for collaborative sensemaking across interconnected information networks.
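To make the frontier idea concrete, here is an illustrative scoring rule combining the three ingredients named above: interest in the neighboring community, distance between communities, and topic match. The formula, weights, and sample data are assumptions made for the sketch, not the algorithm described in the essay.

```python
# Toy scoring rule for surfacing "frontier" articles from neighboring communities.
def frontier_score(neighbor_interest, community_distance, topic_match):
    """neighbor_interest: votes/ratings in the neighboring community (>= 0)
    community_distance: hops between communities in the social graph (>= 1)
    topic_match: 0..1 overlap between the article and the home index topic."""
    return topic_match * neighbor_interest / community_distance

articles = [  # invented examples
    {"title": "New e-ink display", "interest": 40, "distance": 1, "match": 0.9},
    {"title": "Fusion startup funding", "interest": 90, "distance": 2, "match": 0.4},
    {"title": "Local zoning ruling", "interest": 15, "distance": 3, "match": 0.1},
]
ranked = sorted(articles,
                key=lambda a: -frontier_score(a["interest"], a["distance"], a["match"]))
for a in ranked:
    print(a["title"])
```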

The philosopher asks the computer scientist if he thinks it’s possible to use the internet and web for humanities scholarship in meaningful ways, including publishing new work that stands the test of time, educating younger generations, and winning academic prestige - just as humanities scholars currently do through academic publishers and archives/libraries.

The computer scientist acknowledges they’ve been working on this issue for years, along with others. He references a book from 2000 about using technology for the Nietzsche project. While busy, he agrees to discuss the topic further with the philosopher over Skype.

  • Scholars need to understand the “conditions of possibility” or foundational principles that underpin scholarly work before building new digital infrastructures.

  • Three such principles identified are: quoting original sources, achieving consensus through peer review/recognition, and long-term preservation of scholarly outputs.

  • The technology exists to support these principles digitally, through tools like stable web addresses, cryptographic hashing to ensure text integrity (sketched after this section), online peer review models, and distributed preservation through multiple copies.

  • However, technology alone is not sufficient - policies and investment are needed to implement technologies and maintain them over time for the principles to be upheld digitally as they are in traditional scholarship. It requires engagement from both technical experts and scholars/policymakers.

So in summary, the technology exists, but it needs guidance from scholars about required features, along with a commitment to implement and sustain new digital scholarly environments and practices over the long term through joint technical and policy efforts. The principles of scholarship must inform our digital designs and uses.
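As noted in the list above, cryptographic hashing can underwrite the quoting principle. Here is a minimal Python sketch: a citation stores a stable address plus a SHA-256 digest of the quoted passage, so the quotation can later be checked against the cited text. The URL and passage are illustrative.

```python
import hashlib

cited_passage = "God is dead. God remains dead. And we have killed him."
citation = {
    "url": "https://example.org/nietzsche/gay-science/125",   # placeholder address
    "sha256": hashlib.sha256(cited_passage.encode("utf-8")).hexdigest(),
}

def quotation_intact(current_text: str) -> bool:
    # The quotation is trustworthy only while the cited text hashes to the same digest.
    return hashlib.sha256(current_text.encode("utf-8")).hexdigest() == citation["sha256"]

print(quotation_intact(cited_passage))        # True while the source is unchanged
print(quotation_intact(cited_passage + "!"))  # False if the cited text has drifted
```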

This discussion summarizes key points about infrastructures for digital scholarship in the humanities:

  • An infrastructure provides an underlying support system that facilitates an activity through coordinated buildings, equipment, services, models, etc. It establishes horizontal connections between elements.

  • Observing current scholarly practices is important to understand needs before proposing solutions. Scholars may not be aware of technological possibilities.

  • The humanities research infrastructure has historical precedents that developed over centuries, including physical structures like libraries and objects of study.

  • Elements include physical structures, distribution systems, and financial support. Open access and dissemination are important.

  • Formats and licensing are important for long-term preservation and access. Copyright can obstruct dissemination.

  • New digital tools and communities could enhance collaboration, but policy issues involving universities and publishers still need to be worked out.

  • Lessons from other infrastructures like air travel can provide metaphors, but the humanities’ needs require understanding current practices and constraints first.

Based on the conversation:

  • Paolo and Michele are discussing the traditional academic infrastructure and challenges it faces in the digital age.

  • Paolo outlines the key elements of the traditional model - organizational structure, logical structure, citation practices, etc.

  • Problems with the traditional model include slow and expensive access to materials, lack of collaboration at large scale, and issues with academic publishing monopolies and markets.

  • While digital technologies and the internet opened up possibilities, they have not fully been adapted or integrated in a way that addresses the structural problems.

  • Paolo argues that a new, structured digital infrastructure is needed to better organize scholarly knowledge online in a way that maintains standards of peer review and scholarly rigor.

So in summary, Paolo is engaging Michele in a discussion about the need to rethink and rebuild the academic infrastructure for the humanities in the digital age. He outlines issues with the current system and argues a new structured digital model could help address longstanding problems.

The discussion is about building a digital infrastructure for the humanities. The key points discussed are:

  • Primary sources (like original manuscripts) need to be distinguished from secondary sources (critical analyses, commentaries, etc.) as they have different epistemic value for scholarship.

  • Traditional libraries organize books to show these relationships, like placing critical works next to primary sources they analyze.

  • A digital infrastructure could do this better by allowing sources to be reconfigured dynamically based on different research contexts and ontologies.

  • General “ontologies” would define common relationships like “primary” and “secondary”. Domain-specific ontologies for fields like Nietzsche or Wittgenstein studies would define source types and relationships in those domains.

  • Scholars could access the infrastructure through different domain lenses and see sources organized according to the conventions of their research community.

  • This would preserve traditional scholarship structures while improving discoverability and flexibility compared to physical libraries. Interoperability across domains would also be possible.

The discussion outlines how ontologies and specialized digital libraries could help structure scholarship in a digital infrastructure for the humanities.
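A small sketch of how the general and domain-specific ontologies described above might be wired together, using the rdflib library. The namespaces, class names, and item identifiers are hypothetical; the point is only that a domain type such as “manuscript” can be declared a kind of “primary source” and then queried through that general lens.

```python
from rdflib import Graph, Namespace, URIRef
from rdflib.namespace import RDF, RDFS

# Hypothetical namespaces: a general scholarly ontology and a domain ontology.
SCHOLAR = Namespace("http://example.org/scholarsource/core#")
NIETZSCHE = Namespace("http://example.org/scholarsource/nietzsche#")

g = Graph()
g.bind("scholar", SCHOLAR)
g.bind("nz", NIETZSCHE)

# Domain classes anchored to the general primary/secondary distinction.
g.add((NIETZSCHE.Manuscript, RDFS.subClassOf, SCHOLAR.PrimarySource))
g.add((NIETZSCHE.Commentary, RDFS.subClassOf, SCHOLAR.SecondarySource))

# Two concrete (invented) items and the relation between them.
ms = URIRef("http://example.org/items/manuscript-1")
essay = URIRef("http://example.org/items/commentary-7")
g.add((ms, RDF.type, NIETZSCHE.Manuscript))
g.add((essay, RDF.type, NIETZSCHE.Commentary))
g.add((essay, SCHOLAR.comments_on, ms))

# A "domain lens": list everything that comments on a given primary source.
for subj in g.subjects(SCHOLAR.comments_on, ms):
    print(subj)
```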

Here is a summary of the key details about the cost of the proposed super-technological video game for scholars:

  • The interlocutors do not provide a specific cost estimate, since elements are still missing from a full development of the concept. When one interlocutor remarks that, with the missing element included, it “will cost much more,” he is being facetious.

  • The main missing element they discuss is a navigation system that would allow scholars to precisely reference other documents and sources, similar to traditional bibliographic citations.

  • Developing this navigation system, along with the necessary infrastructure to link sources together through citation analysis, would involve significant technological implementation relying on semantic web technologies, linked data, and automatic updating of bidirectional links between sources.

  • While the exact costs are never specified, it is implied the full realization of this vision would be an ambitious undertaking, arguably on the scale of building out an entirely new digital infrastructure for scholarly knowledge, not just a single “video game.”

So in summary, while no price tag is put on it, the discussion portrays this proposed scholarly tool as having extensive technical requirements that could make it a large and costly long-term project rather than a simple video game. The interlocutors seem more focused on conceptual design than monetary costs.
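For the navigation system sketched above, one small but essential ingredient is keeping citation links bidirectional. A toy Python data structure for this is shown below; the document identifiers are invented, and a real infrastructure would rely on linked-data standards and persistent identifiers rather than in-memory sets.

```python
from collections import defaultdict

cites = defaultdict(set)     # document -> documents it cites
cited_by = defaultdict(set)  # document -> documents that cite it

def add_citation(source_doc, target_doc):
    # Recording both directions keeps "cited by" views up to date automatically.
    cites[source_doc].add(target_doc)
    cited_by[target_doc].add(source_doc)

add_citation("essay-A", "source-X")
add_citation("essay-B", "essay-A")
print(sorted(cited_by["essay-A"]))  # ['essay-B']
```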

This passage discusses a dialogue between two individuals, referred to as P and M, about the potential for a new digital infrastructure for the humanities called “Scholarsource”.

Some key points:

  • P and M debate the pros and cons of traditional scholarship methods vs. more open/collaborative digital approaches like Wikipedia. P wants to preserve traditional methods but adapt them for the digital age.

  • They reference an author named Baricco and ideas about civilization vs barbarism in the context of technological/scholarly mutations.

  • P’s vision is for Scholarsource to be like an “archipelago” that maintains trusted scholarly methods online alongside open collaboration.

  • They discuss the need to find a name, secure funding, and streamline the organizational structure for their proposed digital infrastructure project.

  • The dialogue touches on issues like open access, peer review, collaborative knowledge sharing, and balancing old and new approaches in digital scholarship.

So in summary, it presents a discussion debating different visions for a new digital platform or infrastructure for research and knowledge sharing in the humanities called “Scholarsource”.

Here is a summary of the key points made in the article:

  • The essays by Foster, Stefik, and D’Iorio & Barbera predict that new technologies will enable significantly more data collection, analysis, modeling, and collaboration, allowing us to make better sense of information and knowledge. They suggest we will “really know” rather than just work at knowledge.

  • However, there is a question around who the “we” is in statements like “we will really know.” Foster refers to enthusiasts rather than just experts, but how does that reconcile with his own expertise?

  • Stefik’s essay is structured in a modular, chunked style similar to online documents. This supports the metaphor of an “information wilderness” that rewards autonomy. But it also conflicts with his emphasis on the role of experts and judgment.

  • Modular writing challenges the role of expert judgment and authority, as it enables collective and potentially unintelligent contributions like Wikipedia. It is in tension with visions of knowledge that rely on the “hard work of the few.”

  • In summary, the essays predict sweeping changes but don’t fully address tensions around who contributes to knowledge - experts or a more dispersed group of enthusiasts and laypeople, and how expert judgment is balanced with collaborative knowledge-making. The role of the “we” is not entirely clear.

The essay discusses the need to thoughtfully consider how technology is used within scholarship, drawing an analogy to kitchen gadgets. While some computer tools have undoubtedly helped research, humans still need to study each other in a human way, with human motives and concerns.

The essay argues that scholarship studies humans and their works, but should do so in a human manner befitting our interests in understanding each other better. Just as one must critically evaluate new kitchen devices to see if they enhance or impede cooking, scholars must evaluate technology to ensure it facilitates, rather than hinders, genuine humanistic study and exchange between people.

Overall, the essay calls for more reflection on how computers and digital tools can best support scholarship in keeping with its fundamental human aims and methods, rather than replacing human capacities or priorities with technological ones. The key point is that technology should augment, not substitute for, studying people in a thoughtful, motivated way characteristic of humanism.

  • The author acknowledges the importance of the three norms of quoting, consensus, and preservation in scholarship that D’Iorio and Barbera describe. However, there are other underlying concerns that are just as important.

  • Scholarship in the human sciences is based on imperfect and incomplete evidence, and conclusions are reached through inferences that can be overturned by new evidence. Therefore, scholars must clearly explain both the evidence and reasoning used to reach their conclusions.

  • Unlike transparent interfaces that directly manipulate finite, accessible systems, the objects of study in human sciences are not finite or immediately accessible. Our goal of accurately representing the past is a limit never fully attainable.

  • The author discusses issues like the vagueness of algorithms like those used by Google, the “long tail” distribution of data, and the idea of “essentially contested concepts” that challenge aims of fully encoding unambiguous information.

  • While developments like those described can make scholarship more accessible, they still face limitations of defeasibility, need for cognitive understanding of methods, adaptation to statistical properties of data, and interface design for representing incomplete knowledge. Accounting for these issues is important for faithful use of new techniques.

  • D’Iorio and Barbera suggest that when developing a new interface, one should start by talking to potential human users to understand their needs and perspectives. This human-centered approach is important for any design problem.

  • They propose starting a dialogue with humanists and scholars to understand how they currently search for and access information, and what could help them better. Gathering user input at the beginning is key to building an effective interface.

  • The goal is to develop technologies that enhance and augment human capabilities, not replace human judgment and nuanced thinking. Prioritizing user needs will help ensure the design supports rather than detracts from humanistic work.

This passage discusses some of the key challenges in using search engines and digital archives to identify and analyze dances depicted in videos. Specifically, it brings up:

  • The “semantic gap” between what computers can extract from multimedia data (like motion patterns) versus the rich human interpretations needed to understand activities like dancing.

  • Problems include recognizing human bodies and movements amid varying backgrounds, distinguishing individuals, tracking motion over time, and analyzing audio-visual combinations.

  • Goals for video analysis include automatically identifying dance types, styles, phases/figures, levels of expertise, and tracking changes over time or between regions.

  • This level of analysis requires clarifying definitions of dances and dance elements, as subtle distinctions may not be detectable from pixel data alone without deeper contextual knowledge.

  • Overall, the challenges reflect difficulties in bridging computational representation of movements as pixels with the rich cultural and historical understanding needed to truly interpret and compare dances. Advances in computer vision, pattern recognition, and ontology are needed.

  • Dance can be classified in different ways; UNESCO, for example, classifies it as intangible cultural heritage.

  • Dances evolve over time and across locations. Some begin as elite activities and later become popularized; traditionalists want to preserve original forms while innovators change the dances.

  • Dances contribute to both cultural diversity and a universal artistic language as they change and spread between cultures.

  • To analyze dance video content, ontologies are needed to bridge the semantic gap between visuals/audio and meaning. Two ontologies are required - one for real-world dance phenomena and another for how those phenomena appear in videos.

  • Building such complex dance ontologies poses challenges. Past ontology efforts have failed to accurately model reality or use expressive enough languages.

  • Philosophers like Roman Ingarden have done relevant work on the ontology of artistic works like music, which can inform dance ontology by distinguishing the dance work, performances, scores, and viewer experiences.

  • Theoretical frameworks for movement and dance analysis, like those developed by Delsarte, Alexander, Dalcroze and Laban, aim to understand the natural laws of bodily movement and develop systems to describe gestures, poses and overall movement. Movement annotation methods like choreology and Laban Movement Analysis can be used to notate dances.

  • Benesh notation offers a purely kinetic language that represents dance positions, steps, and movements objectively enough that reading it can trigger kinesthetic understanding.

  • Research in dance history has focused on single time periods or cultures rather than continuity across epochs. Understanding how dance terminology and practices have evolved over time remains challenging.

  • Advances in video analysis aim to recognize individual and group actions through layered modeling, but applications so far are limited compared to the complexity of social dancing events. Ontological contributions to video annotation are still relatively simple and need refinement to represent more complex domains like dance.

  • The passage discusses developing video event ontologies to represent physical objects and events in observed scenes using logical and temporal constraints. It describes using these ontologies to build systems for visual monitoring of banks and metro stations.

  • It provides a review of existing content-based video retrieval systems like QBIC and discusses the need for ontology standards to bridge differing multimedia standards and allow description of complex spatial-temporal details.

  • Standards like MPEG-7, MPEG-21, and OWL are discussed but noted to not yet be mature enough to represent all the needed details. Developing new data types for multimedia is suggested.

  • Building a digital repository and tools for analyzing/searching dance videos is presented as an example project that could help address technical challenges while improving access to cultural heritage. Understanding dances could provide new cultural and historical insights.

  • Tight integration of professionals from different fields is argued to be needed to develop innovative cross-disciplinary solutions. Resulting ontologies should be openly accessible to researchers.

Here is a summary of the key points from the article:

  • The author introduces the word “relevate”, meaning to make something relevant. He argues that the World Wide Web “relevated” hypertext by bringing it into mainstream use and changing society.

  • When the author first learned of this word, he realized research in knowledge representation (KR) needed “relevation” - to be made more relevant and impactful.

  • The author argues that research in the Semantic Web could potentially “relevate” artificial intelligence in the same way the web relevated information retrieval. Though maybe not to the same degree, it could boost AI beyond expectations.

  • However, the author then argues that many assumptions about representing knowledge for computers have actually held back understanding of how humans use knowledge and hindered the “relevation” of AI.

  • Specifically, he refers to the field of knowledge representation and reasoning (KR&R), but notes the arguments may go beyond this area as well.

  • In summary, the article introduces the concept of “relevation” and argues that research like the Semantic Web could significantly boost the relevance and impact of AI by challenging some current assumptions in knowledge representation.

Here is a summary of the key points about the World Wide Web:

  • The web started in the early 1990s and has grown enormously, with over a billion pages by the early 2000s and growing exponentially since then.

  • Traditional approaches to knowledge representation and reasoning in AI assumed a single, high-quality knowledge base representing a single entity’s view of the world.

  • The knowledge on the web is vastly larger in scale, comes from many different sources of unknown quality, and represents many different points of view. It is inconsistent, changing, and unreliable.

  • This posed major challenges for using traditional AI techniques on the web. Systems would need to be highly scalable, able to deal with inconsistency, and not assume a single coherent knowledge base.

  • Early uses of tagging and folksonomies in Web 2.0 applications showed promise but were limited in their ability to search and organize large amounts of data at web scale.

  • The social and collaborative aspects of many web applications, like sharing and virality, were an important factor in their success that was underappreciated by some proponents of formal knowledge representation.

  • Both formal semantics and social processes have roles to play in organizing knowledge on the massive, heterogeneous, and constantly changing web.

  • The article argues that even a little semantics/structure can go a long way in enabling interoperability and applications on the huge, unorganized web.

  • Unique identifiers/URIs for terms, along with social conventions to differentiate them, provides structure and avoids confusion even without detailed descriptions of relationships.

  • Asserting some basic relationships like equality/inequality of URIs has enabled numerous web applications and data mining, even if not fully precise.

  • Standards like SPARQL and GRDDL have made it easier to embed semantics into web applications.

  • ‘Bottom-up’ approaches that embed semantic technologies into existing web apps, rather than top-down construction of comprehensive ontologies, have proven successful.

  • The complexity and ‘messiness’ of human knowledge and intelligence challenges traditional AI assumptions and suggests the need for systems that can operate in real-world complexity.

  • Reinjecting empirical ideas from real-world domains, as von Neumann advocated for mathematics, could help reinvigorate knowledge representation and make AI more relevant.

  • Jones argues that small birds like swallows have abilities that surpass any plane, being able to migrate long distances but also perform precise maneuvers like catching insects or building nests. No plane can match such versatility.

  • Bird flight has become an active area of study to help improve engineering. The aerospace industry recognizes the potential to learn from natural flyers.

  • The author believes AI is in a similar situation, excelling in narrow domains like chess but unable to flexibly apply capabilities to new, unexpected problems like humans can. Initial goals of human-level AI have only been achieved in restricted areas, not open domains.

  • When studying the semantic web, scale and quality must be considered. There are billions of concepts involved in human thought and data instances on the semantic web. Ontologies vary greatly in size and quality.

  • Information on the semantic web can be contradictory or inconsistent, just as real world knowledge is. Traditional knowledge representation systems struggle with these characteristics of open, human-generated knowledge.

  • A new, “bottom up” approach is needed that uses different reasoning techniques tailored to different types of data and acknowledges limitations of any single approach. An “ecosystem” of reasoners working together could help process the diverse, messy semantic web data more effectively.

  • There is a debate around whether machines can be truly creative. Ada Lovelace argued in the 19th century that computers have no power of anticipation or creativity. Turing argued they could be unpredictable and therefore creative.

  • Attempts have been made to build creative machines like storytelling systems, automatic music composers, and painting machines, but with varying levels of success. Total randomness or complete rule-following both lack real aesthetic value.

  • The author argues there may be a “logic of creation” that allows for systematic and general methods between total predetermined generation and pure randomness.

  • Deduction alone is not sufficient for creativity as it is conservative. Induction involves generalization and a loss of specifics, paradoxical for creativity.

  • However, machine learning brings out the importance of conceptual mapping and structural matching in induction, rather than just loss of information. These operations were also part of Aristotle’s practical induction in natural science.

  • The author aims to analyze and define the logical status of creativity to provide justification for thinking creativity can be reconstructed through logical steps simulated on computers, building on empirical work in AI.

The essay investigates the role of conceptual mapping and structural matching in inductive reasoning in machine learning and human creativity. It is divided into five parts:

  1. An overview of artificial intelligence theories of creativity, including exploratory, mathematical, entropy-based, and compositional models.

  2. A background on classical theories of inductive reasoning.

  3. Discussion of structural induction in symbolic artificial intelligence.

  4. Inductive logic programming, which simulates induction by inverting deduction.

  5. An analogy between structural induction and Aristotelian induction as practiced in Aristotelian biology.

The essay also discusses how conceptual mapping allows matching of structures, like wings/legs/fins to locomotion. Lastly, it mentions research showing young children need underlying structural representations to be creative in drawing, storytelling or music. The role of conceptual mapping and structural matching is explored in both machine learning and human creativity.

  • Creativity is viewed as involving the reuse and recombination of memories or past experiences. Two key mechanisms are memory retrieval and case adaptation.

  • Memory retrieval allows past experiences to be evoked based on their associations with the current context. Case adaptation maps retrieved cases onto the present situation, then adapts and combines them to solve a new problem.

  • Mythological creatures from ancient bestiaries exemplify this approach, as they combine remembered elements in new ways. Computational models of creativity also utilize these mechanisms, like an artificial jazz player that improvises based on musical chunks retrieved from memory.

  • There is debate around the logical status of induction. Aristotle saw it as an inversion of deduction, while Mill viewed it as a type of deductive inference. Later thinkers have attempted to formally define inductive inference and analyze the theoretical limitations of inductive machines and learning. But the precise relationship between induction and deduction remains problematic.

So in summary, it discusses how creativity relies on memory retrieval and adaptation mechanisms, provides examples, and outlines philosophical debates around defining the logical nature of induction.
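A minimal Python sketch of the retrieve-and-adapt loop described above, loosely inspired by the artificial jazz player example. The cases, the similarity measure, and the “adaptation” step are all invented placeholders, far cruder than any real case-based reasoning system.

```python
# Case library: past musical "chunks" indexed by the context in which they worked.
cases = [
    {"context": {"tempo": "fast", "key": "C", "mood": "bright"},
     "solution": ["C", "E", "G", "A"]},
    {"context": {"tempo": "slow", "key": "D", "mood": "dark"},
     "solution": ["D", "F", "A", "C"]},
]

def similarity(a, b):
    # Count shared attribute-value pairs: a crude stand-in for associative recall.
    return sum(1 for k in a if k in b and a[k] == b[k])

def retrieve_and_adapt(context):
    # Memory retrieval: pick the stored case most similar to the current context.
    best = max(cases, key=lambda c: similarity(context, c["context"]))
    # Case adaptation: map the retrieved chunk onto the present situation
    # (here, naively "transposing" by swapping the first note into the new key).
    solution = list(best["solution"])
    solution[0] = context.get("key", solution[0])
    return solution

print(retrieve_and_adapt({"tempo": "fast", "key": "G", "mood": "bright"}))
```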

Here is a summary of the key points about induction from the passage:

  • John Stuart Mill formulated induction as a form of syllogism, where the major premise is supplied to complete the argument. The major premise is the principle of the uniformity of nature.

  • Typically, induction is viewed as a generalization procedure that leads to a loss of information, as it reduces particular examples down to a common description. This view sees induction as a decrease in information.

  • However, artificial intelligence techniques implement inductive procedures that do not necessarily reduce information. Structural induction in particular represents knowledge in a more structured way.

  • Traditional symbolic AI focuses on logical, exact representations, while numeric/machine learning uses approximation and uncertainty. However, these fields also combine symbolic and numeric approaches.

  • Many core inductive mechanisms used in numeric machine learning, like detecting correlations or separating examples, predate AI and were analyzed by philosophers. The novelty is more in how these mechanisms are combined and applied at scale.

  • Symbolic AI can represent structured knowledge using first-order logic, while numeric ML typically uses propositional conjunctions. This increases the complexity of the generalization space.

  • Structured examples in machine learning refer to predicates and functions, not propositions. Descriptions are built from terms that are always different, even if they describe similar properties.

  • Generalization cannot be defined as an intersection of common attributes. It requires considering possible mappings or matchings between the subparts of different descriptions (a small sketch of this appears after this list).

  • Researchers have worked for over 35 years to clearly define generalization of structured examples to enable logical foundations of inductive machine learning.

  • Inductive logic programming (ILP) relates to logic programming and resolution-based theorem proving techniques. It defines induction as an inversion of deduction and resolution.

  • Resolution is the basis for deductive inference and logic programming languages. It involves unification of terms and derivations from clause sets.

  • ILP formally defines induction as finding a hypothesis that, together with the background knowledge, entails the observed examples, by inverting the resolution relation.

  • Constraints are needed to limit the number of possible mappings during inversion, related to determinism, locality, structure, etc.

  • Structural matching and conceptual mapping were also important for Aristotle in biology for understanding new organisms, though he did not relate this to logic.

  • Mapping is also key to creative activities like imagination but many machine learning techniques do not incorporate this aspect of induction.
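As flagged in the list above, here is a small Python sketch of structural generalization in the spirit of Plotkin-style least general generalization: two structured descriptions are generalized by mapping mismatching subparts to shared variables, rather than by intersecting common attributes. The term encoding and the example (wings and fins mapped to locomotion) are illustrative.

```python
def lgg(t1, t2, var_map=None):
    """Least general generalization of two ground terms.

    Terms are nested tuples ("functor", arg1, ...) or constant strings.
    Mismatching pairs are replaced by variables; the same pair always gets
    the same variable, which is what preserves shared structure."""
    if var_map is None:
        var_map = {}
    same_functor = (
        isinstance(t1, tuple) and isinstance(t2, tuple)
        and t1[0] == t2[0] and len(t1) == len(t2)
    )
    if same_functor:
        return (t1[0],) + tuple(lgg(a, b, var_map) for a, b in zip(t1[1:], t2[1:]))
    if t1 == t2:
        return t1
    key = (t1, t2)
    if key not in var_map:
        var_map[key] = f"X{len(var_map)}"   # introduce a fresh shared variable
    return var_map[key]

# Two structured examples: a wing and a fin both serving locomotion.
e1 = ("has_part", "bird", ("organ", "wing", ("function", "locomotion")))
e2 = ("has_part", "fish", ("organ", "fin", ("function", "locomotion")))
print(lgg(e1, e2))
# ('has_part', 'X0', ('organ', 'X1', ('function', 'locomotion')))
```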

The passage discusses switching between different modes of thought or worldviews when relating concepts like logic, artificial intelligence, philosophy, etc. to areas like music composition.

It analyzes three essays that represent different stages of learning to embrace multiple worldviews rather than just one objectivist view.

The first essay, by Ceusters and Smith, is presented as having “got it”: they show how different conceptual codes or representations can be related and transformed to understand complex phenomena like dancing. They see knowledge as dynamic and aim to use ontologies as a tool for inquiry rather than static representation.

The second essay, by Hendler, is described as “getting it”: recognizing the errors of a single worldview and finding an alternative approach.

The third essay by Ganascia represents an early stage of understanding, showing the difficulties of transcending the technical rationalist worldview and glimpsing other possibilities.

The passage argues that technologists especially need to accept that there are “two kinds of people” with different valid perspectives, and that blending codes and switching between worldviews is needed to relate technology to other domains like the arts. Ceusters and Smith exemplify those who have successfully made this transition.

  • Ceusters and Smith advocate for developing a “kinetic language” to describe dance movements, focused on conveying the movements themselves rather than analytical descriptions. They recognize the need for different languages to model different domains of action.

  • Their framework understands dance as both a physical and cultural phenomenon, requiring multimedia representations of motion, tempo, sound, etc. This leads them to adapt standards that relate complex spatial-temporal details of dance.

  • They navigate balancing the worldview of dance as a lived experience versus technology as a tool. However, their paper could benefit from better frameworks relating data/information and learning to see/hear/move.

  • Overall, Ceusters and Smith successfully bridge the gap between modeling dance for human understanding versus computer processing. Hendler, meanwhile, describes his realization that the semantic web approach needed to shift from a logical/formal framework to an empirical one grounded in real-world use and multiple perspectives. His experience underscores the need to “switch codes” between different conceptual systems.

Here are the two kinds of people described in the passage:

  1. Technologists/researchers focused on building technological systems and tools. They value technical rationality, truth, expressivity, rigor, and formal representations of knowledge. Their perspective is one of creating knowledge models to capture and distribute expertise.

  2. Ordinary people interacting socially on the web. Their knowledge and intelligence emerge from sharing, interacting, commenting and building on each other’s work in diverse modalities beyond just language. The web reflects human intelligence in all its messiness, inconsistency, multitude of perspectives and goals.

The passage describes Hendler’s struggle to shift from the perspective of a technologist focused on building AI systems, to recognizing and embracing the more organic nature of human knowledge and interaction on the web. It traces his gradual realization that technology needs to work with and facilitate people, rather than try to capture or reform human qualities like the web.

  • Hendler used to believe that knowledge and technology required a formal or logical worldview. His new view is that it must also incorporate cultural and informal factors.

  • He recognizes the need to evaluate claims based on different communities’ worldviews, rather than pure logic/formalism. Automated reasoners need to figure out what fits best within different worldviews.

  • The semantic web involves multiple autonomous reasoners with different worldviews, possibly operating in different “worlds”.

  • Hendler hasn’t fully integrated the idea of worldviews into his work on the semantic web. He is still figuring out how to approach and think about the web.

  • His interest has shifted from pure technical rationality to also incorporating empirical and human/cultural factors. But he hasn’t fully articulated how formalism is just one approach among many.

  • The web involves personal expression, values, dynamics/evolution, and is impacting humanity and fields like AI in profound ways by questioning assumptions of technical rationality.

  • Hendler seems to be in the early stages of this transition but not fully committed yet to providing useful tools for others or truly integrating with different communities.

  • The real transformation requires meeting real-world needs of communities through partnership and participatory approaches.

The author analyzes computer-generated drawings by the AI program Aaron to better understand the limitations of a logic-based machine compared to human conceptualization. While the drawings vary in their configurations of objects, there are distinct patterns and categories that emerge. The author generated thousands of Aaron drawings and studied them to reverse-engineer the program’s ontology and generation rules.

While interesting to look at, the drawings have a closed set of possible variations due to the fixed conceptual space and predetermined categories used by Aaron. Humans, on the other hand, are not bound by today’s conceptions and can generate new categories and ways of conceptualizing over time through learning. The author argues this dynamic, improvised nature of human thought cannot yet be replicated in computer programs.

The process of understanding Aaron’s drawings involved collecting, sorting, analyzing, and describing them in a portfolio - demonstrating how human sense-making involves real-world interaction and manipulation of information, not just mental transformations. The author critiques Ganascia’s notion of creativity as occurring in a theoretical “placeless space” without consideration of the tools, skills, and adaptive processes involved in human conceptualization and problem-solving.

  • The passage discusses different modes of thought, particularly logical/rational vs ecological/contextual modes. It argues these are not opposing but rather complementary ways of coding information.

  • Logical thought focuses on objectivity, fixing ideas, distinctions, literal meanings, etc. Ecological thought focuses more on relationships, processes, metaphor, context, assimilation.

  • These modes are reflected in different types of activities, conceptualizations, treatments of knowledge, and conceptual frameworks. Neither is complete on its own.

  • The modes correspond to different kinds of coding in the brain and different ways of regulating human activities and communities. Both are needed.

  • The challenge is for technical/rational thinkers to recognize ecological thinking as another valid worldview rather than the only pathway to truth. Understanding different modes of thought is important for collaboration across disciplines.

  • Overall it argues these different modes are complementary rather than contradictory, and both are needed for a full understanding of human thought and knowledge. The key is recognizing multiple valid perspectives rather than one being viewed as superior.

  • As psychologists exploring concepts and symbols in the brain engaged with social scientists, some felt science itself was being undermined as definitions, rules, consistency, etc. became more fluid across disciplines.

  • Technical rationality, which requires formal models and truth correspondence, saw multiple worldviews as cultural relativism undermining absolute truth in science.

  • However, scientists studying complex systems like ecology adopted a constructivist epistemology over an objectivist one.

  • Rather than criticize technical rationality, the point is to provide a transcendent perspective that situates it alongside other approaches. Disciplines can mature in isolation then dialogue to solve practical problems or utilize new tools.

  • Differences in mentality across fields provide opportunities for intersection and synthesis, as seen in this volume. The web facilitates “relevation” across disciplines on a massive scale.

  • Unifications have occurred between sciences and philosophy all along. Short-lived approaches find their way into larger activities through this continual dialogue between objectification and relation of perspectives.

  • Newtonian physics provides an approximate explanation of reality, but more powerful scientific theories have been discovered that better capture reality through codes/formulas.

  • Scientific laws express general patterns, but to understand a specific situation you plug in actual variable values like mass, distance, etc. Computer simulations can model complex phenomena like air flow in more detail than calculations alone.

  • Computer science aims to model and capture complex phenomena like human intelligence through coding. Early attempts at general artificial intelligence failed, so current efforts focus on specific functions/aspects.

  • The “thickness” of a functional duplication refers to how many aspects/functions are replicated. No artificial system can fully duplicate a natural system due to irreducible complexity.

  • People are often satisfied with devices that provide certain functions, but are ultimately disappointed when the duplications are revealed as inauthentic.

  • Human intelligence has an irreducible depth and embodiment that escapes complete coding, similar to the “aura” of a real fireplace versus an artificial one. Passing the Turing test would still leave ambiguities about one’s actual experiences and identity.

  • Coding can capture systems at physical/chemical levels but not fully trace all the way up through biological organizational complexity to consciousness and experience.

  • The passage discusses the difference between the ontologies of things and devices. Things have an “aspect or sense” that speaks to us through context, while devices have a “function or commodity” defined by their design and machinery.

  • While it is possible to deepen the sense of a thing, if you probe too much into a commodity you encounter its underlying machinery and lose touch with the commodity itself.

  • Humans relate to things through intelligence and context in a way that devices cannot. However, aspects of the human ability can be captured as functions or commodities.

  • If an aspect becomes too thick, it is no longer a function but either an actual human being or a failed project. Some of the projects discussed in the essays may involve aspects too thick to be feasible.

  • Work at the leading edge of technology is compatible with understanding the depth of things. Ultimately, devices should be put in the service of things, not the other way around.
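
As a worked illustration of what “plugging in actual variable values” means (the figures below are standard approximate values, not taken from the book), Newton’s law of gravitation applied to the Earth-Moon system gives:

$$ F = \frac{G\,m_1 m_2}{r^2} \approx \frac{(6.67\times 10^{-11})(5.97\times 10^{24})(7.35\times 10^{22})}{(3.84\times 10^{8})^{2}} \approx 2\times 10^{20}\ \text{N} $$

The general formula stays the same; only the values for the masses and the distance change from one situation to another, which is the sense in which a simulation fills a general code with particular data.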

Here is a summary of three statements:

The process by which we come to understand the world remains painstaking, sometimes mind-numbingly slow, yet it has served throughout history as an important force for social cohesion. Scholarship constitutes the backbone of the European cultural tradition, and the creative practice of art is shifting away from manual craft toward conceptual engineering.

Here is a summary of the key points about digital panoramas and cinemascapes from the passage:

  • Digital panoramas and cinemascapes are interactive media works where users can navigate virtual environments that appear visually contiguous (like panoramas) or both spatially and temporally continuous (like moving shots).

  • They disrupt conventions of traditional panoramas and films by layering and compositing different elements that may not have coexisted naturally in the same space and time.

  • This allows for the simultaneous presentation of both syntagmatic (linear) and paradigmatic (relative) elements, combining exposition, poetry, narrative, and other forms of expression.

  • Users can follow or participate in the process of building arguments or expression, blending critical and creative modes of representation.

  • Scrolling through layered elements bridges critical and creative representation and turns passive viewing into an active user experience.

  • The differences between researcher, artist, and user may dissolve as they use the same tools to gather, compose, and interact with multimedia works.

  • Historically, 19th century painted panoramas provided viewers a panoptic view from the center that suggested dominance and control over the surrounding scene.

A panorama gives viewers the illusion of having a total commanding view, but it is actually impossible to grasp the entire scene in one glance. The panorama encompasses the viewer within its circular form and exotic content. To maintain the illusion of seamlessness, the image cannot have any breaks in continuity as the viewer turns.

Similarly, aerial views from skyscrapers appear to grant a sense of the whole city, perceiving it as a unified text. However, on the ground the experience is fragmented as one navigates different paths. Walking through the city is an active process of montage where meanings are continually reevaluated.

Just as panoramas use spatial continuity to interpret elements, film employs temporal continuity through devices like long takes and pans. This conveys a sense of verisimilitude by constructing an experience of an unbroken slice of time. Digital panoramas maximize the spatial metaphor of navigating within a screen environment, allowing for both continuous and discontinuous elements to coexist. This has led to the concept of “cinemascapes,” navigable visual environments incorporating cinematic materials. Both panoramas and cinemascapes present users with seemingly coherent yet expanding spaces to explore.

The passage discusses two works that employ interactive panoramic environments:

  1. Mysteries and Desire: Searching the Worlds of John Rechy - A CD-ROM collaboration between the author John Rechy and artists, exploring through symbols the experience of being a gay writer in LA across the 20th century. Users navigate QTVR panoramas linking to biographical materials, interviews, images, text, and more. The panoramic structure allows diverse representations to come together.

  2. What we will have of what we are: something past… - An online work by John Cayley presenting parallel naturalistic/subjective timelines. Users enter time-stamped London scenes and find links to dream-like black-and-white memories/whispers. While appearing chronological, the timelines are actually interlinked and non-linear. The work represents both technological and subjective experiences of time without disrupting the ongoing structure.

Both works employ interactive panoramic environments to provide wandering/discovery of hidden details and layered/interconnected narratives that blur conventional divisions of representation.

The passage discusses several digital works that use layered and nonlinear temporal structures to destabilize notions of continuity in images and narratives.

It describes works like Counterface and Something That Happened Only Once that composite multiple recordings over time into a single layered image or animation. This causes elements to appear and disappear in apparently continuous environments.

The Unknown Territories project layers different multimedia materials across interactive panoramic landscapes. Viewers can navigate nonlinear pathways through the materials, building custom documentaries. This allows inclusion of more context than traditional linear formats.

These works demonstrate how digital tools expand on techniques from film and early photography to manipulate time in composites. Rather than strict continuity editing, they allow parallel temporal modes and user choice in structuring narratives. The layered, networked structures give access to more context while questioning singular authoritative narratives imposed by recording/viewing devices.

  • Legible City (1989-1991) allows a user to bike through a virtual cityscape where words and sentences form the architecture. It embodied interactivity through physical navigation and exploration of the virtual world.

  • Distributed Legible City (1998) extended this to allow multiple users at remote locations to co-exist and interact in the virtual city simultaneously via avatars. This introduced a social/communicative aspect.

  • conFiguring the CAVE (1996) used an immersive VR environment (CAVE) with projections on all sides. The user interface was a life-sized wooden puppet the viewer could manipulate to control parameters in real-time and essentially “inhabit” the virtual worlds through a surrogate body.

  • Web of Life (year?) allowed users to traverse and impact large-scale virtual ecosystems through embodied navigation and the manipulation of “nodes.” This explored connections and interactions in complex systems.

The key themes are embodied/physical interfaces for virtual worlds, social/communicative aspects through multi-user systems, and using surrogate bodies/physical manipulation to inhabit and shape virtual spaces in real-time. Shaw’s works integrate art, technology and experience to construct new types of mediated spaces and interactions.

  • The artwork Web of Life uses palm scans from visitors to activate and modulate an audiovisual and thematic experience across networked installations. The varying palm lines merge and influence the projected 3D graphics, video sequences, spatialized sound, and architectural space.

  • The visuals are programmed as a self-organizing system using biological metaphors like neuronal growth. The theme of networked logic is core to the work, reflecting how connectivity emerges in digital networks rather than being based on single causes.

  • The artwork aims to both describe and evoke the experience of emergent behavior and shared connectivity through strategies that can “reembody” fragmented digital spaces. User interaction at any location communicates with and affects all installations in the networked artwork.

  • Legible City replaces the architecture of cities like Manhattan, Amsterdam, and Karlsruhe with text made up of words and letters.

  • Riding through the virtual city on a bicycle allows the viewer to follow different stories or “paths” depending on which letters they choose, making it an experience of reading as they navigate the space.

  • Shaw saw parallels between this and avant-garde traditions like lettrism, concrete poetry, and Situationist concepts of psychogeography and urban drift.

  • The technology of computer graphics in the late 1980s lent itself to displaying text in 3D, which inspired Shaw to use it as the material of the virtual cityscapes.

  • Shaw’s work explores tensions between unconstrained browsing of digital spaces and more traditional framed narratives. Works involving text create deliberate paths for interpretation, while those heavy with layered images invite more open-ended engagement.

  • This relates to broader tensions between the proliferation of unconstrained digital images and a desire to elevate language and conceptual constraints in virtual spaces. Shaw’s practice navigates both approaches.

  • Interactive new media artworks allow users to actively engage with and shape the work through their interactions. This placement of the user firmly within the artistic experience is a defining characteristic of the medium.

  • Successful interactive works will craft their underlying algorithms in a way that maintains coherence and expression of the artistic vision regardless of user inputs or circumstances.

  • Social media platforms operate more as frameworks shaped entirely by users, rather than artistic works modulated by user interaction like the examples discussed.

  • Points of View III and T_Visionarium are interactive installations that immerse users within virtual spaces rendered through images and sound. Users can shift their perspective and actively assemble narrative sequences.

  • EVE takes immersion further by projecting images all around a dome, tracking the user’s gaze to follow their viewpoint within the virtual environment.

  • Creating these types of interactive artworks often requires interdisciplinary collaborations between artists and technical experts like programmers and engineers. This challenges traditional notions of authorship and artistic roles.

  • The Place installations use a motorized platform in the center of a large cylindrical projection screen to allow users to rotate their view of interactive panoramic video environments. This provides a more embodied experience than a fixed CAVE system.

  • Place-Hampi adds stereoscopic 3D projection, which enhances the immersive quality by allowing users to perceive depth in the virtual space.

  • Narrative elements in Place-Hampi are initially pre-animated 3D characters composited into panoramic photographs. Future versions will use real-time tracking to allow virtual characters to react to users’ physical movements and position in the space.

  • This adds an element of machine intelligence where autonomous virtual agents are co-present and can interact with human users in real time. While technically complex, it aims to create a “co-space” merging the physical and virtual worlds.

  • The walking experience in these environments suggests an integration of film- and video-based narratives with interactive, responsive virtual elements.

  • Independent artists play an important critical role in providing a social perspective on technology development. They help ensure technology is humanized.

  • Historically, artists have been involved in developing new spatial representations like maps that influence how people understand and navigate the world.

  • During colonial times, visual cultures and meaning-making were violently attacked to destroy other cultures’ memories and impose new narratives.

  • Today, the contributions of early computer graphic artists are often marginalized despite their technical innovations underpinning modern 3D imaging.

  • To survive colonization, oppressed cultures developed strategies like syncretism to preserve cultural fragments through metaphor and collective memory despite discontinuities imposed.

  • Large-scale cultural disruption from colonization mirrored destructive cultural traumas that artists help address through critical engagement with new representational technologies.

So in summary, the passage discusses the role of artists in critically shaping new representational technologies and navigating cultural disruption, as seen from colonial history to digital media today. Artists provide an important humanizing perspective and work to preserve cultural meaning-making in the face of imposed discontinuities.

  • The Dadaists responded to WWI by making “anti-art” that fragmented and subverted the status quo, using techniques like collage and found objects. This was meant as criticism but was adopted by the mainstream.

  • Today, digital technology enables even more sophisticated “culture wars” through media that spread disinformation and censorship more widely and invisibly than ever before. There is an urgent need for alternative reconstruction to resist this.

  • While digital media promises connectivity, in reality it exacerbates issues like inequality and environmental problems. Greater inclusion of marginalized voices and connection to the natural world are needed to realize its potential for positive change.

  • Artists like Shaw and Coover are exploring how digital media can rupture colonial narratives and dominant power structures by incorporating new perspectives. However, access and participation remain limited due to socioeconomic and technological barriers.

  • Greater multicultural participation in digital media could help build bridges between fragmented cultures and memories, but global cultural fragmentation currently outpaces reconstruction efforts. Ongoing neocolonial tendencies also reinforce divisions.

  • Gary Hill’s artwork has moved from a focus on imagery and perceptual qualities toward a more conceptual approach involving the development of idea constructs.

  • His early works explored complex images and compositional/rhythmic structures, while more recent works involve “image-text syntax” - a kind of “electronic linguistics” using dialogue to manipulate a conceptual space.

  • The essay is discussing concepts/phenomena that exist in an ambiguous liminal zone between existing and not existing in electronic time/space. Their status is ontologically indeterminate.

  • By referring to these phenomena as “language”, Hill and the author aim to give them dimension and prevent misapprehension or trivialization as an art phenomenon, not to resolve debates around definitions.

  • The concept of “liminality” is important as it provides an alternative to positivistic/scientific perspectives that cannot account for phenomena that resist positive existence claims. A liminal lens is more appropriate.

  • The essay aims to explore this notion of “electronic linguistics” that Hill referred to as playing an important role in composing some of his works, both past and present, by taking him at his word rather than imposing other frameworks.

Here is a summary of the passage:

This passage discusses the development of language and linguistic concepts in Gary Hill’s early electronic artworks. It begins by focusing on his 1977 piece “Electronic Linguistic,” which featured abstract visuals and sounds generated through analog and digital electronic instruments. Though there was no apparent text or speech, Hill felt some of the emerging sounds seemed close to human voices, representing a more primal form of language.

Subsequent works like “Processual Video” and “Videograms” further developed this idea of machines “talking back” through unexpected sounds and patterns. The passage argues Hill’s work inquires into the nature of language as intrinsic to electronic technology. It situates his practice within the technological context of the late 1970s, when he began “dialoguing with technology” through real-time feedback between visuals and sounds.

Overall, the summary aims to convey how Hill’s work traced an early conception of language emerging from electronic signals and media, opening possibilities for both verbal and nonverbal forms of expression within electronic art practices.

Gary Hill’s Processual Video is a conceptual, minimalist video piece consisting of a single rotating white line on a black screen. The line revolves clockwise around the center as the artist recites a text in a monotone voice.

The visual of the rotating line and the spoken text seem independent yet obliquely related. It’s unclear if the visual reflects or recreates the text. Figuration and abstraction are two sides of the same configurative event.

While the rotating line is a pure geometric abstraction, the spoken text pulls the visual experience toward narrative elements that emerge moment to moment. This creates a liminal feedback between the visual and linguistic that foregrounds the materiality of both.

The automatic rotation of the line and the layered, ambiguous meanings in the text create an “axiality” - a reflexive referencing between visual and verbal figures that seems to physicalize language and instantaneously relate words to the geometric shape in incongruous ways.

This work reflects Gary Hill’s exploration of an “electronic linguistic” through the merging of mind, language and technology in a self-reflexive, feedback-based artistic process.

  • The piece discusses Gary Hill’s work Processual Video and its use of language and narrative structure. The narrative jumps between different perspectives and lenses, mirroring the works’ exploration of language.

  • Words become objects and vice versa, as the narration slides between discussing visuals and language. Syntax is treated like a journey across dimensional surfaces.

  • Hill’s work brings language into electronic media in a way that humanizes the experience, using vocalizing text to modulate and ride alongside wildly proliferating imagery.

  • In Videograms and Happenstance, the dynamic between image and text is reversed compared to earlier works. The Rutt/Etra scan processor used to make them informed the novel image world they generate.

  • Videograms consists of 19 short text vignettes that unfold slowly over 12 minutes. The images and texts seem to generate each other in parallel dimensions, with the prose poems narrating their own emergence at the threshold between virtual and incarnate.

  • The piece analyzes how Hill’s work explores the energizing and process-thinking connections between electronic phenomena, physical waves like in the body and ocean, and the dynamics of language use.

Here is a summary of key details about Gary Hill and his works Videograms (1980-1981) and Happenstance (part one of many parts) (1982-1983):

  • Gary Hill is an American visual artist known for pioneering works that combine video, language, and electronic signals.

  • Videograms (1980-1981) was an early work that manipulated video signals and generated text/language to get “inside the time of these transmogrifying signals and generate stories.” Hill would write language events and shift them to reflect the changing video images.

  • Happenstance (part one of many parts) (1982-1983) carried forward the brief abstract image events of Videograms but extended their quality of “poetic integrity.” The discrete image events seemed to speak for themselves without fully representing or illustrating the accompanying texts.

  • Hill sees the image events in works like Happenstance as “unfolding hyperglyphs” that point to multiple meanings simultaneously and developmentally, without exclusively referring to any site. They maintain a performative quality.

  • Hill’s works explored the relationship between electronic/video signals, language, narrative and investigated compositional possibilities at the intersection of technology and creative expression. He pioneered an approach known as “electronic linguistics.”

  • The text discusses the concept of “electronic linguistics” in Gary Hill’s artwork, where language and image emerge and interact dynamically through electronic/digital technology.

  • It focuses on self-generating and self-limiting processes, with language conceptualizing its own emergence without fixed references.

  • Concept and method give way to an “operative principle” that governs the work’s emergence in an open-ended process.

  • Key features highlighted include the speed of electronic image configuration, free-flowing self-generating language, oscillation between opposites at perceptual/conceptual thresholds, and physical/material grounding.

  • The work has no fixed center but only an instantly apparent present focus within a continuously moving/evolving center.

  • Language aims not just for communication but “communion” as a meeting place of energies in its singular expressions.

  • Electronic linguistics tracks life-emergence through nonlinear dynamics in a type of “Klein bottle space” where boundaries are perspectival.

  • It resists being pinned down through a “Hermes factor” that disrupts artificial constructions of language applied to situations.

The text discusses Gary Hill’s artwork as exploring self-generating processes of language and image emerging through electronic technology in unique and singular ways.

The passage describes an experiment in which Hill drew a graphite line on paper and placed it on a turntable underneath an overhead camera. The camera footage was passed through an outline/border generator circuit that converted it to a black-and-white digital signal. Due to peculiarities in the circuit, as the line approached a vertical position it would expand in the image beyond just outlining the borders.

This added complexity to the piece beyond simply showing the line disappearing as it turned horizontal; it introduced the idea of expansion at the vertical position and subsequent contraction. Hill refers to this as establishing a “subtext” and a greater complexity in what he calls “Processual Video.” The work explores concepts of process, emergence, and complexity in electronic art.

Here are the key points about the migration of the aura from the original to its facsimiles:

  • Something strange has happened to Holbein’s Ambassadors at the National Gallery - it has lost its depth and texture and now resembles a poster reproduction. The original seems to have been replaced by a flat, brightly colored facsimile.

  • A similar thing occurs with Veronese’s Nozze di Cana at the Louvre. The visitor remembers seeing a high-quality facsimile of it in Venice that felt true to the original, but the version in the Louvre feels disconnected and loses the original meaning/experience.

  • In both cases, the facsimiles have taken on an “aura” that makes them seem like the true originals, even though they lack the original texture, depth, context, etc. The originals seem to have been replaced by flat, brightly colored copies.

  • This points to how the “aura” or authenticity of the original can migrate to high-quality reproductions through techniques like laser scanning, digital stitching, and faithful material reproduction. The context and experience of the originals is lost in their museum displays.

So in summary, it’s about how advances in reproduction technology can cause the authentic experience and “aura” of the original work of art to shift to facsimiles, with the actual originals losing their impact and meaning.

  • The authors reflect on whether a high-quality facsimile could actually be considered more “original” than the original artwork, citing as an example the facsimile of Veronese’s Nozze di Cana in San Giorgio Maggiore versus the original in Paris.

  • They argue that our obsession with identifying originals only increases as copies become more accessible and better. Copies help drive the pursuit of the original by fueling passion and conspiracy theories about authenticity.

  • Rather than get stuck on whether something is an original or copy, we should consider the “whole trajectory” or “career” of a work of art, which includes originals, copies, adaptations over time.

  • They use the metaphor of a river system to represent this trajectory, with originals as the source or headwaters and the full career being the flow downstream, including tributaries and deltas.

  • Focusing solely on identifying originals vs copies misses this fuller perspective and phenomenon of reproductions fueling the longevity and interpretation of the original over time.

  • The quality of reproductions, whether good or bad, impacts the preservation and evolution of the original work. Good copies can enhance understanding and appreciation of the original.

The essay discusses the idea of the “aura” as it relates to works of art, and how it can migrate or disappear over multiple iterations or reproductions of a work.

For performing arts like theater, each new production requires similar resources as the original, so we don’t distinguish between original and copy. The aura is not attached to one specific production.

However, for visual artworks, there is a perceived difference between the original piece and any reproductions. This is due to the asymmetric effort/techniques involved - an original painting took much more effort than a simple photo reproduction, for example.

But the authors argue this gap depends more on differences in production techniques between iterations than on any essential difference in medium. In the past, manuscript copying versus the printing press introduced a huge technical gap; with digital techniques, that gap is closing.

If iterations can be produced with similar techniques, the distinction between original and reproduction blur, and the aura could potentially migrate between versions. The aura depends more on the trajectory of a work through different media over time, rather than being fixed to one original location or version. Even original paintings require ongoing care and re-presentation to maintain their aura over time.

The passage justifies art restoration by arguing works need to be preserved or they will deteriorate over time, like the art in the National Museum in Kabul. For a work to survive, it requires ongoing care and reproduction through techniques like facsimiles.

It criticizes how the Ambassadors painting was restored by only using photographs as reference rather than more accurate reproduction techniques. This made the original painting disappear forever. In contrast, the Nozze di Cana facsimile added originality without harming the original, by accurately recording its details in three dimensions.

It also discusses how elements of a work’s originality, like its intended location context or surface details, can be preserved through reproduction rather than being lost. Facsimiles allow these aspects to still be appreciated while protecting the original. Digital techniques have improved reproduction accuracy without risking the original. The key is using reproduction respectfully and originally, not just slavishly copying.

The passage discusses the process used by Factum Arte to create an accurate digital facsimile of Veronese’s large painting “Le Nozze di Cana” in the Louvre museum. They used non-contact color scanning and 3D scanning equipment, along with high resolution photography, to digitally record every detail of the original painting without physically interacting with it. Over 1500 individual scans were carefully stitched together using specialized software to align the data precisely. Color samples were also recorded on site for reference during printing. The digital data was then used to produce a highly accurate facsimile copy of the painting through large format printing. Creating the facsimile involved solving complex challenges around accurately aligning and merging the different digital datasets while maintaining color fidelity to the original. The process demonstrated how digital techniques, when applied carefully, can be used to reproduce cultural works at scale with minimal risk of damage.
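
To make the stitching step concrete, here is a minimal, illustrative sketch of how two overlapping scan tiles can be registered with off-the-shelf tools. It assumes Python with OpenCV and hypothetical file paths, and it is not a description of Factum Arte’s actual software, which handled far larger, color-managed datasets.

```python
# Illustrative only: align two overlapping scan tiles by matching features
# and estimating a homography, then warp one tile into the other's frame.
import cv2
import numpy as np

def register_tiles(tile_a_path: str, tile_b_path: str) -> np.ndarray:
    """Return tile B warped into tile A's coordinate frame."""
    a = cv2.imread(tile_a_path, cv2.IMREAD_GRAYSCALE)  # hypothetical file paths
    b = cv2.imread(tile_b_path, cv2.IMREAD_GRAYSCALE)

    # Detect distinctive keypoints in both tiles; the overlap supplies matches.
    orb = cv2.ORB_create(nfeatures=5000)
    kp_a, des_a = orb.detectAndCompute(a, None)
    kp_b, des_b = orb.detectAndCompute(b, None)

    # Match descriptors and keep the strongest correspondences.
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = sorted(matcher.match(des_b, des_a), key=lambda m: m.distance)[:500]

    src = np.float32([kp_b[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
    dst = np.float32([kp_a[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)

    # RANSAC rejects mismatched points so alignment uses only consistent ones.
    H, _ = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)
    return cv2.warpPerspective(b, H, (a.shape[1], a.shape[0]))
```

A production pipeline would also need to blend seams and keep color faithful to the on-site reference samples the passage mentions; the sketch only shows the geometric alignment idea of finding shared detail in the overlap and computing a transform.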

  • Latour and Lowe argue that there is no single original or authentic version of a work of art. Works exist through various social contexts and interpretations, not as a fixed original.

  • For poetry, recordings and performances offer richer textures than the written text alone, with elements like timbre, tempo, accent, and pitch that are essential parts of the poem. The “work itself” does not exist independently of its interpretations and versions.

  • Poems are “networked texts” that exist through various linked interpretations and possibilities, not a single original essence.

  • Authorship is complicated - the author dies but their work takes on a life of its own through different versions and contexts. There is no single determining authority over a work.

  • Digital spaces create new types of embodied experiences that can be as impactful as real-world experiences. Works take on new life through different media and formats beyond any single original. The focus is on evolving interpretations rather than a fixed point of origin.

  • The author describes holding an actual pamphlet from a plague year, which gave them a sense of its passage through history and connection to real events. Physical objects take on an identity and significance from their biography over time.

  • There is tension between valuing the original creation/intent of an artwork versus the marks/history it accumulates. This applies to restoration and reproduction.

  • Digital reproduction allows perfect, identical copies without visible signs of use or history. However, systems could be designed to introduce subtle variations or traces with each copy or viewing to mimic physical aging (a hypothetical sketch of this idea follows this list).

  • While digital systems could synthesize “imperfection,” it may not capture the uncontrollable, organic quality of marks left by true physical contact over long periods.

  • The “sterile cleanness” of digital may be refreshing now, but mechanisms are needed to articulate value in objects that naturally accumulate complexity through unpredictable interactions over time.

  • Simple reproduction is not analogous to the “fertile” creative interpretations that can come from performance works - influence between artists is a better analogy for paintings’ impact.
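
As a purely hypothetical sketch of the idea raised above (nothing like this is proposed in the book), a delivery system could derive a faint, repeatable “patina” for each digital copy from its copy identifier, so that no two copies are bit-identical even though the perturbation is synthetic rather than the organic residue of handling:

```python
# Hypothetical sketch: give each digital copy a subtle, stable perturbation
# derived from its copy ID, loosely mimicking the accumulation of marks.
import hashlib

import numpy as np
from PIL import Image

def age_copy(image_path: str, copy_id: str, strength: float = 2.0) -> Image.Image:
    """Return the image with faint per-copy noise added (illustrative only)."""
    img = np.asarray(Image.open(image_path).convert("RGB"), dtype=np.float32)

    # Seed the noise from the copy ID so the same copy always carries the same trace.
    seed = int.from_bytes(hashlib.sha256(copy_id.encode()).digest()[:4], "big")
    rng = np.random.default_rng(seed)

    noise = rng.normal(0.0, strength, size=img.shape)
    aged = np.clip(img + noise, 0, 255).astype(np.uint8)
    return Image.fromarray(aged)
```

As the passage itself cautions, such synthesized “imperfection” is deterministic and controllable, which is precisely what distinguishes it from the unpredictable marks left by real physical contact over time.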

  • The narrator describes the life of a boy born in 1989 who grows up entirely in the digital age. He learns to use computers from a very young age.

  • As technology advances rapidly, things like card catalogs become obsolete before he’s old enough to use them. By high school he has difficulty focusing on less than four things at once.

  • Through college and his career, he builds a life deeply intertwined with online social networks, data mining, and the growing flood of digital information.

  • He meets his wife through online compatibility algorithms. He makes a fortune developing programs to analyze the accuracy and interconnectedness of online information.

  • In retirement, he builds an immersive 3D model of his childhood city. However, his own son is uninterested.

  • Late in life, his son drops off the grid, leading the narrator to spend decades searching online communities to find any clues to his whereabouts.

So in summary, it traces the life of someone who came of age fully in the digital world and was profoundly shaped by emerging technologies and social trends around online data and connectivity.

  • The boy discovers by accident that a mysterious site has been tracking his significant movements and information since childhood. However, he later cannot find the site.

  • He is told many times by bots and humans to read Borges’ stories but never does in the life he actually leads.

  • His entire life could be accidentally generated from random web searches by someone else researching their own autobiography.

  • His life exists as a fictional series of data structures in an essay collection that will eventually be digitized and lost to oblivion.

  • After his death, various social networks will dynamically generate thousands of partly accurate obituaries customized for different readers.

  • His entire digitally restored life could be simulated based on extrapolations from semi-accurate data, experiencing moments of existential realization.

In summary, it discusses how the boy’s life information could be accidentally collected, generated, and analyzed digitally in various ways without his full comprehension or consent, raising questions about privacy, data ownership, and the fictional/simulated nature of digitized existence.

Here is a summary of the section on contributors:

The section profiles the volume’s contributors, providing brief biographies of thinkers from various fields including computer science, art, literature, and philosophy. Some of the contributors mentioned include Judith Donath, Ian Foster, Jean-Gabriel Ganascia, James Hendler, Gary Hill, Paolo D’Iorio, Sarah Kenderdine, Bruno Latour, Alan Liu, Adam Lowe, Richard Powers, George Quasha, Jeffrey Shaw, Barry Smith, and Vibeke Sorensen. The biographies highlight each contributor’s areas of work and research, notable publications and achievements, as well as their current affiliations. Overall, the section introduces a diverse group of individuals who have made significant contributions across the different domains the book brings together.

Here are summaries of the background information provided for each contributor:

  • Mark Stefik - Research fellow at the Palo Alto Research Center (PARC). His books include Breakthrough: Stories and Strategies of Radical Innovation, The Internet Edge: Social, Technical, and Legal Challenges for a Networked World, and Internet Dreams: Archetypes, Myths, and Metaphors. Fellow of the American Association for Artificial Intelligence and the American Association for the Advancement of Science.

  • Graham White - Lecturer in computer science at Queen Mary, University of London. He has published widely on topics in computer science, philosophy, and mathematics, and is the author of Luther as Nominalist and coeditor of several collections.

  • Eric Zimmerman - Independent game designer and visiting arts professor at NYU Game Center. Known for award-winning games created for entertainment, education, and art. Coauthor of the textbooks Rules of Play and The Game Design Reader, both from MIT Press. Founding faculty member at the NYU Game Center.

Here is a summary of the relevant sections:

  • Organization in scholarship and research has been enabled by computation and massive collaboration (interdisciplinary, cross-disciplinary exchange). Computing allows for automated analysis of large amounts of data, modeling/simulation, and is part of social networks.

  • Victor Cousin was a 19th-century French philosopher and historian of philosophy.

  • Creativity, both human and computational, is discussed in several passages. Models of human creativity include compositional, exploratory, and problem-solving approaches. Models of computational creativity include entropy-based (compression), exploratory, and mathematical approaches.

  • Some key aspects of culture discussed are its relationship to AI research, computational analysis of culture, facilitation of cross-cultural exchange through digital technologies, issues of cultural capital/oppression/fragmentation, and the role of artists in promoting cultural diversity.

  • Gary Hill’s work often explores concepts like memory, feedback, images/text relationships, narrative, sound, and poetics through video and media installations. Some of his major works discussed include Electronic Linguistic, Happenstance, and Processual Video.

  • Hill uses specific terminology in his work like “axial,” “electronic linguistics,” “language,” “limen,” “liminal,” “principle,” and “processual.”

  • His videos and installations often incorporate text, images, and sound to explore linguistic and perceptual concepts in an experiential way for viewers. Narrative and the experience of the viewer are important aspects of Hill’s work.

  • The passage discusses several of Hill’s artworks and installations in detail and analyzes some of the theoretical and methodological approaches he employs. It focuses on how Hill uses multimedia to investigate language and perception.

  • The work discusses digital artworks, interactive narratives, panoramas, and immersive environments created between the late 1990s-2010s. Artists mentioned include Jeffrey Shaw.

  • Shaw’s works like Legible City, Place - a user’s manual, and Distributed Legible City used digital technologies to create interactive, navigable art experiences blending text, images, and user choice. They explored relationships between text, images, and user experience.

  • Other topics discussed include how museums display art in context, digital preservation of cultural artifacts, conceptual mapping and topic modeling in machine learning, ontology definitions and uses in knowledge representation, and networked and immersive panoramas from the 19th century to now.

  • The work also discusses scholarly research in the humanities, including challenges of digital infrastructure, text reproduction and quoting, peer review, and ensuring preservation of primary sources and research objects/results. Traditional and “digital scholarship” models are compared.

  • Key figures mentioned include Lev Manovich, Jean-François Lyotard, Theodore Nelson, and George Quasha in relation to digital art, new media, and interactive/hypertext works.

Here is a summary of the key points from the passages:

  • The PLACE-Hampi project created an interactive digital reconstruction of the historic site of Hampi, India, using 3D modeling, virtual reality, and other techniques. It allowed users to explore the site across different historical periods.

  • The T_Visionarium and Points of View III projects created interactive virtual environments and experiences using virtual reality and other tools.

  • Schön’s concept of the reflecting practitioner emphasizes reflecting on experimental action to test new understandings and practices.

  • The Semantic Web involves relating data on the web through ontologies and the formal representation of knowledge, enabling machines to better understand and “reason” about the meaning, relationships, and truth of statements (a minimal illustrative sketch follows at the end of this list).

  • Ceusters and Smith discuss how ontology in biomedical sciences involves negotiating different perspectives and tacit assumptions between domain experts and ontology engineers.

  • Hendler outlines how the Semantic Web is being built from the bottom up through individual data owners linking their data to other data on the web through shared ontologies.

  • Ganascia discusses how machine learning and human creativity both involve exploring a space of possible inductive generalizations from observations or experiences, with humans having richer cognitive capacities.

  • Clancey analyzes how different modes of understanding like experiential and abstract thinking relate and help address different types of problems.

  • Borgmann argues that codes or conceptual frameworks ultimately have limits and blind spots that can only be addressed through open engagement with alternative perspectives.
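
As a minimal illustrative sketch of how such formal representation looks in practice (the namespace, class, and property names below are invented for the example, and the snippet assumes the Python rdflib library rather than anything cited in the book), facts can be stated as RDF triples against a small shared vocabulary that other data owners could reuse and link to:

```python
# Illustrative only: a tiny ontology plus one described resource, as RDF triples.
from rdflib import RDF, RDFS, Graph, Literal, Namespace, URIRef

EX = Namespace("http://example.org/art#")  # hypothetical shared vocabulary

g = Graph()
g.bind("ex", EX)

# Ontology layer: terms that many independent datasets could agree on.
g.add((EX.Artwork, RDF.type, RDFS.Class))
g.add((EX.createdBy, RDF.type, RDF.Property))

# Data layer: one resource described with those shared terms.
work = URIRef("http://example.org/art#LegibleCity")
g.add((work, RDF.type, EX.Artwork))
g.add((work, RDFS.label, Literal("Legible City")))
g.add((work, EX.createdBy, Literal("Jeffrey Shaw")))

print(g.serialize(format="turtle"))  # publishable, linkable representation
```

Because the terms live at stable URIs, another dataset can point at the same class or property, which is the kind of bottom-up linking through shared ontologies that Hendler describes.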

#book-summary