Self Help

Understanding Digital Humanities (David M. Berry)

Matheus Puppe

· 77 min read

BOOK LINK:

CLICK HERE

This edited collection addresses key issues and debates in the emerging field of digital humanities. Some main points:

  • It contains chapters by leading thinkers exploring diverse topics like the changing role of humanities in the digital age, digital history and archives, data analysis methods, legal informatics, film studies using computational tools, and debates around gender and Wikipedia.

  • The editor David Berry organized a workshop at Swansea University called “The Computational Turn” which brought together many of the contributors to discuss these themes in a rigorous manner.

  • The book aims to provide both interpretive and critical perspectives on digital humanities through examining foundational issues around power, technology, methods, aesthetics and theory.

  • It acknowledges the debates around digital humanities in terms of whether it represents a break from traditional humanities or builds on long-standing intersections between the humanities and technology.

  • The diverse range of case studies and topics addressed seeks to better understand the potentials and limitations of digital approaches to different aspects of the humanities.

In summary, the edited collection provides a forum for leading scholars to explore and debate key interpretive and critical questions arising at the intersection of the humanities and digital technology. It aims to further understanding of the emerging field of digital humanities.

Here is a summary of the acknowledgments:

  • The author thanks Stig Jarle Hansen and Mohamed Husein Gaas from Noragric and the Norwegian University of Life Sciences (Universitetet for miljø- og biovitenskap, UMB) for providing a visiting research stipend that allowed him to stay in Oslo from June to July 2011 to work on the manuscript.

  • Thanks are also given to Anders Fagerjord for the invitation to work at IMK at the University of Oslo.

  • The Research Council of Norway is acknowledged for providing a Yggdrasil research fellowship supporting a sabbatical leave in Oslo in 2012.

  • The author’s wife Trine is thanked for her continual help and support throughout the writing process.

  • Digital technology is playing an increasing role in how research is conducted across the university, affecting both epistemologies and ontologies underlying research.

  • While some disciplines rely more on digital tools than others, it is rare to find an academic today who has not used digital technology as part of their work in some way.

  • Digital libraries, databases, email, and search engines have become crucial research tools as more content is digitized and placed online.

  • The digital humanities has emerged as an area that warmly embraces these digital research methods and technologies.

  • Traditional humanist approaches focus on the close reading of a small canon of texts, but digital methods allow exploring much larger collections as collective systems using distant reading (a minimal sketch of this kind of analysis follows this list).

  • The huge amount of digitized content now available creates problems of abundance rather than scarcity for researchers to grapple with.

  • The digital humanities tries to develop techniques and methodologies to analyze this massive new digital archive and cultural data in meaningful ways.
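
To make the idea of distant reading concrete, here is a minimal sketch, not taken from the book, of the kind of frequency analysis such methods typically start from. The three-text corpus is a hypothetical stand-in; a real project would load thousands of digitized documents from files or a database.

```python
# Minimal "distant reading" sketch: compare word frequencies across a whole
# collection rather than closely reading any single text.
import re
from collections import Counter

# Hypothetical corpus; in practice these would be thousands of digitized texts.
corpus = {
    "novel_a": "It was the best of times, it was the worst of times.",
    "novel_b": "Call me Ishmael. Some years ago, never mind how long precisely.",
    "essay_c": "The archive is no longer a dusty room but a searchable database.",
}

def tokens(text):
    """Lowercase word tokens, ignoring punctuation."""
    return re.findall(r"[a-z']+", text.lower())

# Per-document and corpus-wide frequency tables.
per_doc = {name: Counter(tokens(text)) for name, text in corpus.items()}
overall = sum(per_doc.values(), Counter())

print("Most frequent words across the corpus:", overall.most_common(5))
for name, counts in per_doc.items():
    print(name, "->", counts.most_common(3))
```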

The passage discusses the evolution of the digital humanities from its early origins to its current state and proposes a path forward for a “third wave.” Originally called “humanities computing,” early digital humanities projects applied computers as technical support for humanities scholars. This transitioned to a “first wave” where computational techniques were used to analyze large text corpora.

The “second wave” focuses on born-digital works and expands what counts as an archive to include digital artifacts. It applies humanities methodologies to interpretive, experiential digital works. However, neither wave fully questioned the ontological foundations of humanities disciplines.

The passage argues for a “third wave” that examines digital culture through the underlying computational forms and processes within digital media. It draws from software and platform studies to analyze how technical specifics produce epistemic changes. A third wave would also problematize humanities assumptions and boundaries. In sum, it calls for a focus on how computer code has infiltrated all aspects of culture, memory, and even the humanities academy itself.

  • The author argues that understanding digital humanities requires understanding code and computationality. Code reflects cultural practices and possibilities in the digital age.

  • Historically, the university’s role was to produce knowledge through reason (Kant) and later culture (German Idealists like Humboldt). This provided a unifying idea for the university.

  • In the postmodern era, excellence replaced culture as the unifying idea, but digital culture could now serve this role.

  • Rather than just learning digital skills, we need a critical understanding of the “literature of the digital” through a form of digital Bildung (cultivation of character).

  • This develops a shared digital culture and digital intellect, versus just digital intelligence/skills. It provides regulation in the face of destabilizing digital knowledge, replacing the role of philosophy.

So in summary, the author argues digital humanities requires understanding code and computationality, and digital culture could now serve as the unifying idea for the postmodern university.

  • The digital age has introduced a state of information overload, requiring computational solutions and “super-critical modes of thought” to deal with vast amounts of available knowledge.

  • Online networks enable new forms of collaborative thinking and a potential “collective intellect”. Software and technology could support collaborative knowledge creation beyond individual blogs/feeds.

  • Computational methods may reshape disciplines by providing a shared “computational hard core”. Computer science could play a foundational role in supporting and directing other fields.

  • Relying more on technology could change how knowledge, wisdom and intelligence are understood. The individual memorization-based model of education may weaken in favor of just-in-time, networked, collaborative learning supported by computation.

  • Computational techniques could enhance human thinking abilities and information access, but may also exacerbate inequality if access is restricted by markets. Critical examination of technologies’ social impact is needed.

  • Emerging fields like digital humanities and computational social science leverage computational tools for analyzing large amounts of cultural and social data to advance research.

This passage from Understanding Digital Humanities provides a broad, in-depth, and large-scale analysis of the emergence and implications of digital humanities and computational approaches in the arts, humanities, and social sciences. Some key points:

  • Digital humanities projects require significant startup costs, money, organization, and sustainability plans, introducing new financial considerations.

  • Computational tools allow quantification and analysis of research that was traditionally qualitative, facilitating distant readings of large amounts of cultural, social, and political data.

  • This has profound effects beyond just new methods, impacting disciplines, knowledge structures, and the modular, post-disciplinary nature of the university.

  • Computational subjects trained in new critical reading practices (code, visualization, patterns, narrative) will be needed to make sense of increasing amounts of data and information in society.

  • The passage extensively explores the unprecedented breadth, depth, and scale that digital technologies and computational approaches bring to research in the humanities and social sciences.

This summary describes the increasing use of “charticles” or journalistic articles that combine multiple forms of media like text, images, video, and interactivity. This allows journalism to become non-linear and multidimensional.

It also discusses the need for developing computational literacy skills to navigate an increasingly digital world. One example is the emergence of real-time data streams instead of traditional static knowledge objects. Understanding and analyzing streaming data will require new methods like data visualization.

The passage then discusses how computational approaches are transforming disciplines by focusing them on computational elements of their topics. This can be viewed as creating new “ontological epochs”. Digital humanities can orient itself around questioning computationality as an “ontotheology” that establishes new understandings of entities.

Finally, it argues that philosophy should focus on the “whole landscape” to understand new ontotheological developments. And that examining computer code both ontically and ontologically will be important for digital humanities as code mediates contemporary culture.

Here is a summary of the key points from the provided references:

  • Clinamen (2011) discusses the procedural rhetorics used in the Obama 2008 presidential campaign, analyzing how the campaign integrated digital tools and processes into its persuasive strategies.

  • Flanders (2009) examines the “productive unease” that digital tools and approaches create within academic scholarship, pushing researchers in new directions.

  • Fuller (2006, 2008) discusses “software studies” as a new area of inquiry that examines culture and society through the analysis of software and computational processes.

  • Hayles (2012) analyzes how digital technologies are transforming human capacities and cognition.

  • Kirschenbaum (2009) advocates for “hello worlds” or small demonstration projects that can showcase new digital methods and discoveries.

  • Latour (1986, 2010) explores how visualization and quantification tools can enhance understanding of social and cultural phenomena.

  • Lazer et al. (2009) promote the new field of “computational social science” using large-scale digital data and modeling.

  • Liu (2003, 2011) discusses the role and identity of the humanities in light of digital tools, and the importance of cultural criticism.

  • Manovich (2008) and Manovich and Douglas (2009) examine new forms of cultural analytics and visualization enabled by digital methods.

  • McCarty (2009) reflects on the changing relationship between humans and machines in digital scholarship.

The references cover a range of topics around new computational and digital approaches in the arts and humanities. Major themes include new forms of scholarship, changing relations between humans and technology, and strategies for promoting digital humanities work.

  • Digital technology is challenging fundamental assumptions about human identity by blurring boundaries. Early cybernetics introduced the idea that human identity is constructed rather than given.

  • Interacting with digital technologies changes our brains over time. We develop a “hyper-attention mode” suited to electronic media rather than print’s “deep-attention mode”.

  • This is different from old notions of communication leading to spiritual connection between souls. Digital approaches can analyze large datasets and graphics to understand cultural trends in new ways.

  • The digital humanities offer new ways of studying and interpreting fields through quantitative analysis of large datasets and visualization tools. This includes mapping relationships in literary works, histories, and cultures.

  • Emerging areas of study include analyzing editor conflicts on Wikipedia articles over time and understanding concepts like open source, intellectual property, and digital creativity commons.

  • In summary, digital technologies are challenging concepts of human identity and interactions while also enabling new perspectives and fields of study within the humanities through data-driven and networked approaches.

This passage discusses some of the emerging research issues and debates surrounding the digital humanities. It touches on the tensions between traditional humanities research approaches and quantitative/computational methods. Some key points:

  • There is a gap between the critical thinking of humanities and the quantitative approach of computational studies. This has led to problems integrating the two.

  • Scholars debate questions around objectivity, visualization as rhetoric, opacity of computational methods, and the risk of certain research becoming obsolete or too expensive.

  • Formalizing research for computation reduces transparency and interpretability of methods.

  • Case studies show potentials but also challenges of computational modeling, text analysis, and tagging digital resources like photos.

  • Scholars argue for a more critical view of formalization and consideration of hybrid human-computer cognitive systems to enhance but not replace humanities research.

  • The field appears to be moving from an initial wave of applying digital tools to a new phase of more integrated and qualitative approaches. Debates continue around integration of methods and research traditions.

The authors discuss whether digital humanities are simply new tools for research or transformational to existing epistemology. They argue digital techniques could be used not just to illuminate the humanities, but for the humanities to challenge and illuminate the sciences through a sociological lens.

They distinguish two waves - the first uses digital tools qualitatively, while the second emerging field utilizes digital tools to address existing humanities concerns, revealing new analytical techniques.

The authors focus on the second wave and identify two movements within it: 1) Using digital technology to analyze existing humanities in new ways 2) Theorizing new digital humanistic concepts.

They argue opposition based on the field not addressing humanities concerns is misguided, using Hayles’ definition of digital humanities as employing computational analysis to qualitatively extend the scope of textual analysis as an example. While algorithms cannot replace human interpretation, computational analysis augments traditional humanities aims, topics and methods when driven by the same desire to know. This represents a transformation and expanded conception of what constitutes the humanities.

  • The field of digital humanities examines humanities topics and questions using digital methods and computational tools. This can provide novel approaches and insights that traditional humanistic analysis may miss.

  • Several papers profiled use techniques like data mining, visualization, and analysis of large cultural datasets to explore topics in narratives, magazines, texts, and history. While employing computational methods, the goals remain interpretive and focused on cultural/humanistic meaning.

  • Some argue this positions digital humanities firmly within the broader humanities by linking digital methods to traditional humanistic concerns. Others see it opening new conceptual areas for the humanities to evolve.

  • Specifically, some propose that examining software, code, and technology through approaches like literary criticism and deconstruction could reshape understandings of concepts like the human and our relationship with technology.

  • Visualization tools may also enrich traditionally text-dependent humanities fields. Overall, digital methods are argued to both tackle old questions differently and open possibilities for new humanistic theories and perspectives.

This section discusses how digital technologies have become integrated into everyday life and the humanities. It argues that humanities scholars cannot be detached from the world they study. As computational devices are now ubiquitous, they have become part of the “background of being” and affect how people experience and understand the world. The proliferation of digital information also means new methods are needed to address this information landscape. The authors discuss how Nietzsche’s typewriter influenced his writing style, showing how technologies can shape thought and production in the humanities. As the world increasingly consists of computational entities, scholars are oriented towards it computationally through daily device use. This opens new ways of interpreting and evaluating humanities subjects. Therefore, digital humanities methods are an appropriate and necessary response to studying the computational world we now inhabit.

  • Digital technology allows researchers to analyze vast amounts of information and visual materials in ways that were not possible before through traditional manual methods. This includes using software to exponentially speed up visual analysis compared to human eyes alone.

  • These new digital methods emerge from the digital world itself, which the humanities aims to study. Technology is now integral to the humanities, as research relies on computers rather than handwriting.

  • Digital methods are evolving from everyday practices as researchers and people in today’s digital world. Researchers now depend on technology similarly to how Nietzsche depended on his typewriter.

  • The integration of digital technologies is changing the form, content and products of research. However, changes to core philosophies and methods should be avoided with knee-jerk reactions to new technologies.

  • Technologies allow work to continue and impact working habits and outputs, as with Nietzsche. Digital technologies will change some ways humanities work is conducted and conclusions drawn, due to changes in the technological world.

  • Technology has always impacted fields in these ways and will continue to do so. Humanities research reflects changes in both the world and research practices/thinking in today’s digital era.

  • The digital humanities is a diverse field encompassing text encoding, digital editions, virtual reality reconstructions, archives, and digital arts/literature that draws on humanities traditions.

  • Scale is a major factor that transforms traditional humanities practices. Digitized texts now number in the hundreds of thousands, far beyond the roughly 25,000 books a scholar could read in a lifetime. This enables new types of queries and analyses.

  • The concept of “reading” is challenged. Distant reading allows interpretation without directly engaging individual texts, and algorithms may be said to “read” without the expectations a human reader brings. But algorithms still embody interpretive models.

  • There is a spectrum of views on machine vs. human interpretation. At one end is traditional interpretation, while at the other machines eschew human bias. Middle views see machines modelling understanding while exposing assumptions.

  • Large-scale analysis accepts multi-causal events influenced by many forces, human and non-human. So machine analysis could make visible patterns for human interpretation. But humans still create, implement, and interpret computational work.

So in summary, it outlines how scale, new concepts of reading, and debates around machine vs. human interpretation are transforming the field of digital humanities from traditional humanities practices.

The passage discusses the dynamic, mutually reinforcing relationship between digital technologies and the digital humanities. As we use computers more, we need their large-scale analysis capabilities to handle huge datasets. This drives the need for more data to be digitized and made machine-readable.

Large-scale historical events cannot be fully analyzed without computational modeling and simulations, as these make complex interactions and systems visible. However, digital tools also unsettle traditional humanities techniques like narrative history.

The relationship between algorithmic analysis and close reading should not be seen as oppositional. Often they interact synergistically, with each enriching the other. Digital media also influence print works and vice versa.

Pioneers in spatial and temporal data integration, like historian Philip Ethington, have undertaken an intellectual journey toward a “cartographic imagination.” This moves beyond linear causality to spatialized, multidirectional representations incorporating rich networks of connections. As digital projects grow in scale, visualization tools become increasingly important to make sense of massive datasets.

  • There is a debate in digital humanities around whether the discovery of patterns in data is sufficient, or if those patterns need to be linked to deeper meanings. Some argue just following the data is enough, while others like Ramsay argue meaning is important.

  • How digital humanities articulates with traditional humanities will influence if it becomes a separate field or remains deeply intertwined around questions of interpretation. This has implications for funding, disciplines, and academic prestige.

  • Collaboration is becoming more common and important in digital humanities given large project sizes and need for diverse skills. It allows for student contributions and crossing boundaries between academic and amateur experts.

  • Collaboration also presents challenges around credit, decision making, access controls, and establishing collaboration norms. Scientific lab models may not always apply to humanities.

  • Digital tools are influencing how research is conceived at both macro project levels and micro sentence/paragraph levels, offering new ways of organizing and presenting information and arguments.

  • The database format was proposed as a way to liberate contradictory threads from the constraints of coherent argumentation in a linear narrative. Database elements can be combined in many ways depending on how the reader navigates the interface.

  • Lloyd and Loyer created a database visualization of topics as potatoes in a field that readers could “dig” through to navigate. This provided a richer context and challenged readers.

  • Digital work often requires more effort from readers but offers an enhanced understanding of complexity.

  • Databases allow for different interface designs tailored to different user or scholar needs. The same data can support multiple uses.

  • Collaborations enabled by databases can change relationships, like Daniel’s work giving voice to marginalized groups.

  • Digital humanities projects incorporate a range of multimedia, where elements have conceptual and emotional meaning, not just decorative functions.

  • Projects highlight media creation as a core aspect of digital humanities work. This can involve both academic and commercial partners or tools.

The passage discusses tensions between poststructuralist approaches and productive theory in digital humanities work environments. It notes that digital humanities projects require collaboration between human and machine cognition, necessitating that humans write executable code for machines and machines structure communications legibly for humans.

This puts pressure on digital literacy and writing styles, requiring clarity of purpose in code and favoring modular prose. Scholars also feel pressure from needing expertise in both humanistic fields and computer languages. While engagement with digital technologies is now widespread, only about 10% of humanists currently conduct serious digital scholarship beyond basic IT uses.

However, rates are much higher among younger scholars, suggesting digital humanities practices will continue increasing. Several factors point to critical mass being reached, like growth of digital humanities centers and programs. If so, new presuppositions about knowledge production and dissemination in the humanities could reshape traditional disciplines over the next 10-15 years.

  • Several scholars interviewed believed digital technologies have transformative potential to reimagine humanities practices, program development, and the relationship between higher education and communities/global conversations.

  • Kenneth Knoespel of Georgia Tech pointed to cooperative ventures between humanities and STEM fields using digital tools as common ground, allowing humanities to enhance technical projects with interpretation and analysis skills. This reimagines higher education.

  • Alan Liu of UC Santa Barbara spearheads digital initiatives like Transcriptions to reimagine and revitalize humanities research through collaborations across disciplines and providing support for traditional text-based research.

  • Tara McPherson believes digital humanities can reimagine higher education by finding funding and infrastructure to move humanities into new multimodal projects and research visions.

  • When faculty adopt these approaches, not just research but also pedagogy and relationships with other fields are reimagined, with students involved in digital projects.

  • Engaging fully with digital humanities issues can help ensure humanities’ continuing vitality and relevance in the 21st century by rethinking practices and values as the “Age of Print” passes.

Here is a summary of the key points from the provided article:

  • Digital artifacts and online interactions have created a massive amount of cultural and social data available for analysis. Computational tools hold promise for analyzing this abundance of digital material at large scales that would not be possible manually.

  • Computational methods can process much larger corpora, zoom seamlessly from micro to macro levels of analysis, reveal patterns and structures not discernible to the naked eye, and potentially probe into meanings.

  • Researchers are increasingly turning to automated methods to explore digital artifacts and understand the human realities associated with them. Computational analysis provides the ability to reconcile breadth and depth.

  • Examples of computational analysis methods in use include textual analysis techniques that classify narrative structures in film scripts, content-based image retrieval systems for analyzing image collections, and network analysis of online social interactions (a small network-analysis sketch follows this summary).

  • While promising, digital methods also pose challenges related to issues of representation, scale, interpretation, and evaluation that researchers are working to address. Computational tools must be carefully designed and their limitations understood.

In summary, the article discusses how computational methods are increasingly being used in the humanities and social sciences to analyze large amounts of cultural and social data available in digital form, but cautions that such digital methods also present challenges that require attention.
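
As an illustration of the network analysis mentioned above, the following is a minimal sketch, not drawn from the article, of how social ties can be modelled as a graph and probed for structure. The names, the edges, and the use of the networkx library are assumptions made for the example.

```python
# Minimal social-network-analysis sketch using networkx (pip install networkx).
import networkx as nx

# Each edge links two people who interact (e.g. reply, mention, godparenthood).
ties = [
    ("Ana", "Bento"), ("Ana", "Clara"), ("Bento", "Clara"),
    ("Clara", "Duarte"), ("Duarte", "Elisa"), ("Elisa", "Ana"),
]

G = nx.Graph(ties)

# Centrality measures suggest who occupies structurally important positions.
print("degree centrality:", nx.degree_centrality(G))
print("betweenness centrality:", nx.betweenness_centrality(G))

# Density and connected components give a coarse picture of overall structure.
print("density:", nx.density(G))
print("components:", list(nx.connected_components(G)))
```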

  • Network analysis of godparent-godchild relationships is used at the University of Coimbra to uncover social stratifications in 17th-18th century Portugal.

  • While some digital humanities research projects analyze huge datasets comparable to projects in physics, most tools used by humanists are for “auxiliary” functions like communication, organization, and pedagogy rather than “heuristic” discovery of new knowledge.

  • Truly heuristic digital methods that are integrated into research methodologies can profoundly impact how scholars work and understand their materials, but best practices for many new computational methods are still emerging in the humanities.

  • Natively digital methods tend to take a more inductive, exploratory approach to analyzing large datasets to uncover patterns rather than deductively testing hypotheses. However, all methods encode theoretical assumptions and choices that impact results.

  • A key question is how computational methods will reshape the legitimate production and justification of knowledge in the humanities. More debate is needed around their epistemological and methodological status.

  • Digital tools and computational methods are gaining significant interest in the humanities, but it’s important to critically assess how these new methods may impact scholarship.

  • There is a risk of pursuing “objectivity” by relying too heavily on automated/mechanical processes without sufficient human interpretation. This could echo past ideals like “mechanical objectivity” that have been questioned.

  • Visual outputs from digital tools like networks, timelines and maps can be persuasive but also reductive. They emphasize some aspects over others, implicitly guiding interpretation. Their aesthetic qualities contribute to their argumentative power despite reductions.

  • Issues like inherent biases, choices of formalization/algorithms, and presentation of results reintroduce questions of bias/subjectivity even as computers were meant to remove them.

  • Scholars need to examine the conceptual models and assumptions embedded in the tools used, and make methodological decisions explicit rather than relying on impressive evidence alone. Interpretive capacities remain important alongside computational capacities.

In summary, it calls for a reflective approach to critically assessing the impacts of digital methods on knowledge production and scholarship, without an uncritical embrace or fear of technology. Key is understanding how the tools may shape outcomes and interpretations.

The passage discusses some challenges and issues related to the use of computational tools and digital methods in humanities research. It addresses three main points:

  1. The persuasiveness of scientific images comes from their imagined link to an external reality and the obscurity of their production process. This can be addressed through a more process-based approach that focuses on the methodological steps and interpretation rather than presenting images as finished evidence.

  2. Technological “black-boxing” may occur if digital methods become more widespread, obscuring the transparency, scrutiny, and reproducibility expected in academic scholarship. This can happen at the level of inaccessible source code, lack of “code literacy” to understand algorithms, and experimental methods whose outputs cannot be mapped back to inputs.

  3. Some computational approaches imported from computer science may never be fully understood like traditional statistical concepts because there is no “manual” equivalent. While this may seem to apply more to deductive methods, it remains a problem for interpretive tools as every tool directs research in some way. Different tools should be used and their assumptions integrated into interpretations.

The passage discusses some theoretical and methodological issues that arise from the introduction of digital tools and computational methods in the humanities. It identifies five areas of concern:

  1. The distinction between internal and external factors becomes difficult to uphold as tech influences social and institutional parameters and vice versa.

  2. Assembling interdisciplinary teams and creating shared concepts leads to a “social overhaul” that impacts results. Skill requirements may also shift significantly.

  3. Computerized tools could render traditional manual methods obsolete over time due to their speed, efficiency and ability to handle large datasets. This favors project-based funding models.

  4. There is a strong current of universalism in computational discourses seeking unified theories and applicable rules. This poses issues like corroboration over challenging models.

  5. When applied undogmatically, digital and traditional methods can correct each other’s blindspots, uncovering new perspectives.

Rather than discouraging digital exploration, the authors advocate critical involvement - both applying methods and reflecting methodologically. The issues presented are tasks to address technically, theoretically and institutionally going forward through an approach of “critical technical practice.”

  • The author argues that questions of methodology need to play a larger role in digital humanities projects in order to make results more transparent, critiquable and challenging.

  • To achieve this, they recommend companion websites documenting methodological choices, tools, technical aspects, and providing raw data. This allows other researchers to experiment and arrive at different results.

  • They also suggest drawing on agile project management methods from software development to enable continuous iteration between research questions, the epistemological level and technical implementations.

  • This helps identify obstacles, address complications, and potentially overcome the division of labor between researchers and programmers.

  • Overall, confronting computational tools can help humanities scholars develop a critical understanding of software’s role in knowledge production and contemporary society. While challenges remain, this engagement could inform theories around digital infiltration across many fields.

  • The author draws a parallel between the intellectual traditions that influenced Marx (British political economy, German idealism, French socialism) and the current influences on digital humanities (German media theory, Italian political theory, French philosophy).

  • The author argues that digital humanities could benefit from engaging more directly with the theoretical underpinnings of these intellectual traditions, rather than just focusing on practical applications of digital tools and methods.

  • The chapter focuses specifically on the work of Wolfgang Ernst and his concept of “media archaeology.” Ernst is a prominent German media theorist who has pushed the analysis of archives in new directions relevant to media studies and digital humanities.

  • Ernst’s theory views archives not just as stores of past information but as shaping our modern cultures of memory and information in technical media environments.

  • The author argues Ernst’s media archaeological approach can provide new insights for understanding data and calculation in digital humanities from a technical, software/hardware perspective rather than just a quantitative one.

  • In summary, the chapter promotes drawing on influential media theories like Ernst’s to develop a stronger theoretical foundation for digital humanities grounded in intellectual traditions like German media studies.

Here are the key points regarding the relevance of media archaeological debates to contemporary archival and memory cultures:

  • Media archaeology offers concepts for thinking about media temporally and excavating past media, not just writing histories narratively. Concepts like remediation, recurrence, variantology, and an-archaeology are useful.

  • Media archaeology views media (particularly technical media like recordings, software, computing) as modes of recording and inscription that take on agency themselves, not just as objects of human history. This challenges dominant narratives.

  • Storage and archiving processes are seen as cultural features linked to power. How information can be stored shapes what can be expressed and remembered.

  • For Ernst, technical media like recordings, software, computing represent dominant forms of cultural memory today rather than literature-based narratives. This demands rethinking memory in algorithmic, computational terms rather than just representation.

  • Software and databases constitute new ontologies of memory as counting, lists, algorithms rather than just narratives. This influences how cultural memory is organized and expressed.

  • Media archaeology offers a lens for excavating past technical media and understanding how present archival/memory forms are shaped by those past inscription technologies rather than just human narratives.

So in summary, media archaeology provides theoretical frameworks to critically examine technical media’s role in cultural memory and how archival practices have co-evolved with changing technologies of recording information.

This passage discusses Wolfgang Ernst’s concept of “ic media” and its implications for understanding cultural memory and history. Some key points:

  • Ernst argues we need to understand memory/history not just through narrative frameworks but also through “ic media” as the non-discursive technical base. Ic media refers to the underlying computing/algorithmic processes that media objects are based on.

  • Computing operates through temporal algorithms and sequential processing of information, not just narratives. This highlights the calculational/operational aspects of digital culture.

  • Storage of cultural artifacts in digital archives is mathematically defined through retrieval algorithms, not just interpretive semantics. Archives are addresses defined by protocols/software, not just spaces.

  • This shifts understanding of cultural institutions like museums - they become “non-places” managed through computational processes/patterns of retrieval rather than physical spaces.

  • Ernst emphasizes the various micro-temporalities inherent in digital technologies that must inform our understanding of cultural memory and temporality, beyond just human-scale narratives. Time is critical to technical media on computational levels.

So in summary, Ernst argues we must understand cultural memory and history through both narrative and non-discursive “ic media” dimensions like underlying computing processes and temporal operations. This has implications for how we view cultural institutions in the digital age.

  • The passage discusses Wolfgang Ernst’s approach to media archaeology, which focuses on the concrete materiality and operational aspects of media technologies, rather than just symbolic or textual meanings.

  • Ernst draws from other German media theorists like Kittler who questioned humanist assumptions in media studies. He advocates an approach grounded in the technological infrastructure and “aisthesis medialis” rather than human perception.

  • Methodologically, Ernst emphasizes examining the “operative diagrammatics” of media - understanding how technologies function internally through their circuits and operations.

  • He has a collection of old media technologies called the “Media Archaeological Fundus” that are still operational, not just archived as dead objects. This allows investigating how technologies connect different historical time periods.

  • The goal is to understand media as processes and events rather than static forms or meanings. Ernst’s approach demands engaging directly with the material functioning of media to explore how they shape experience across time in technological rather than just human terms.

  • Media theorists like Ernst view media as archives that transmit cultural memory and technical memory between past and present. Technologies like phonographs archive more than just human meanings - they also record noises, sounds, and other raw phenomena.

  • Media archaeology pays attention to this “noise” level as much as the intended messages. It focuses on the mathematical and technical aspects of media rather than just content.

  • Ernst advocates a “microtemporal” focus on the layers and patterns of how media synchronize and channel cultural life, rather than a macro view of history. This involves opening up technologies experimentally through practices like hardware hacking.

  • The “diagram” is important for Ernst - it maps how technologies circuit temporality and processes on a micro level, revealing how society operates through machines. Diagrams show how machines work and how power/knowledge circulates through technical media cultures.

  • For media studies, the archive refers not just to collections but how machines constantly operationalize social functions algorithmically through their internal “microtemporal archives” of time-critical processes.

  • The chapter discusses approaches that place the archive at the center of digital culture and examines how this requires rethinking archives and the humanities in the digital age.

  • Media theorists like Ernst emphasize the technical, operational dimensions of media and argue this should be incorporated into media archaeology. However, their approaches often lack analysis of political economy and power structures.

  • Integrating insights from political theory could help articulate how knowledge is produced through technical media and its connection to micropolitical power. This would provide a more comprehensive understanding of digital culture.

  • Social media platforms are increasingly relying on new conceptions of personal archives for data mining and monetization. They are also remediating past cultures in new ways like YouTube archiving music/popular cultures.

  • Approaches like software and platform studies emphasize analyzing specific technical components of machines over vague notions of “virtuality” or “digital.” This provides a materialist analysis aligned with media archaeology.

  • The chapter argues digital humanities should dynamically revive theoretical foundations and integrate critical analysis of politics, economics to engage fully with contemporary media transformations.

Here is a summary of the key points from the passage:

  • The computational turn is leading to various claims being made about how digital artifacts and practices should be defined, categorized, and situated within academic disciplines and fields of study.

  • These claims involve rendering computational frameworks, recalibrating traditional objects of inquiry, and relocating fields of study across new terrain or dimensions.

  • Claims are being made implicitly and explicitly about the appropriate lenses and frameworks for analyzing digital technologies. This includes determining languages, tools, taxonomies, scales and levels of analysis.

  • These claims amount to disciplinary positioning and plays for ownership/exploration rights over emerging fields related to the digital humanities.

  • There are territorial disputes occurring between traditional humanities subjects as domains shift with the computational turn.

  • However, the computational recalibration of frameworks and objects of study may lead to more radical transformations than anticipated, as fields mutually influence each other in asymmetric ways.

  • The potential delegation of neo-liberal values through digital technologies importing them into humanistic domains is also a factor to consider.

  • Digital/computational technologies encourage connection, blending of genres/forms/media, and challenging of boundaries. However, academic disciplines still seek to maintain boundaries around digital objects and practices.

  • Simulation is a key characteristic of new media. Through simulation, disciplines can claim digital media fits within traditional frameworks focused on text/representation. But this reconfigures research objects and disciplines in subtle ways.

  • There are ongoing tensions as disciplines seek to incorporate, expand to, or exclude digital/computational areas. Various techniques are used like constructing taxonomies, defining new fields, proclaiming exclusions/qualifications, or writing authorized histories.

  • Debates emerge about what constitutes “new” media versus the evolution of older forms. Terms like “Web 2.0” are used both to define new platforms but also pull together existing technologies. This provides opportunities for both projecting new perspectives and retrospection on lengthening histories of computing/new media.

This passage discusses debates around digital narratives versus games. It notes that this debate has played out not just at the level of analyzing specific works, but also in terms of disciplinary boundaries and framings.

Some scholars, like Espen Aarseth, have argued strongly for distinguishing games from narratives and for establishing “games studies” as its own field separate from narratology or media studies. Narrative is framed as an outdated paradigm that can’t adequately analyze games, which are characterized as computational and based on action rather than representation.

The passage then discusses Marie-Laure Ryan’s intervention in this debate from around 15 years ago. Ryan acknowledges the shortcomings of traditional interactive narratives but argues the notion of narratology used by “ludologists” is too reductive. She proposes rethinking narratology to integrate both top-down design and user actions/interactivity, mapping different narrative types along axes of internal/external and exploratory/ontological dynamics.

In summary, it outlines long-running debates between narrative and games as analytical frameworks, and one scholar’s attempt to reconceptualize narratology to encompass both designed structures and user participation in digital narratives. The debates also took on disciplinary significance regarding relationships between fields like games studies, media studies, and narratology.

The passage discusses Katherine Hayles’ efforts to develop a canon and definition of electronic literature. Some key points:

  • Hayles aims to improve on an existing definition by surveying the entire field, identifying major genres and theories. She also wants to expand what counts as literary works.

  • Her book serves as a definitive survey of electronic literature so far, with an accompanying DVD collection clearly establishing an emerging canon.

  • The works included extend beyond hypertext pieces already considered canonical, to include media-rich immersive pieces and dispersed locative works for mobile platforms.

  • Hayles justifies this extension by arguing new media works are already deeply informed by print literature traditions. She also notes the transition from print to digital means print works are now largely digital productions.

  • There is no true division between “old” and “new” media, as printed books exist in a digital format on the cusp of further digital transformation.

So in summary, the passage discusses Hayles’ efforts to develop a canon and definition of electronic literature by surveying the field, identifying genres/theories, and expanding what counts as literary works in light of digital transformations across media.

  • Hayles argues that the shift to digital media and computation is largely unacknowledged, especially in traditional humanities disciplines. However, others argue this claim is hard to credit given existing research on transformations in printing industries, cultural materials, and impacts of digital technologies.

  • Hayles is likely addressing only a restricted audience within traditional humanities. She calls for a shift away from seeing computers as “soul-less” and non-literary.

  • Hayles explores the possibility of “cognitive partnerships” between humans and machines through electronic literature. This blurs distinctions between embodied cognition and machine processes, as well as between the literary and non-literary.

  • This raises new perspectives on questions of “soul” and cognition, and recasts electronic literature as a site to explore human-machine dynamics in everyday digital interactions rather than as derivative of traditional literature.

  • However, Hayles gives little attention to how these new material interactions might raise questions about the workings of social power and ideology.

  • Others, like Neil Badmington and Matthew Fuller, feel traditional humanities disciplines can no longer adequately address digital media and computation. Fuller’s “Software Studies” project aims to directly study digital operations rather than their representational forms.

The passage describes the Software Studies: A Lexicon project and its approach. It aims to study software by focusing on its operations, processes, actors, and functions rather than on cultural products. This shifts the perspective to how software enables and shapes cultural interactions.

Key aspects include:

  • Exploring the tensions between the formalism of code and the inexactitude of human language and practices.

  • Entries tackle software from different perspectives like algorithms, languages, interfaces, etc.

  • Codeworks that are both machine-executable and human-readable are seen as valuable for bridging the gap.

  • Contributors have a “two-way intelligence” of both technical skills and critical perspectives from other disciplines.

  • There is a tension addressed between software’s technical potentials and its reality as a commodified market product.

  • The goal is to trace software’s alternative history outside just market influences and explore how use/execution shapes outcomes beyond just coded outputs.

So in summary, it lays out an interdisciplinary field of software studies focused on software’s own terms and cultural interactions, not just products or representations.

  • The passages discuss canonical approaches and categorization strategies in digital media and humanities. There is a debate between more formalist vs critical approaches, with some North American scholars like Ryan and Hayles taking a more depoliticized formalist stance compared to cultural critics like Fuller.

  • Software studies is presented as an attempt to establish a grand field and canonize certain works, taking a radically optimistic view. Canonization raises questions of procedure and theory.

  • Categorization strategies set up new divisions and taxonomies of digital media works, but also require certain forms of engagement and disable others. This can impact disciplinary claims over new media.

  • Considering materiality of both texts and human-machine interactions could lead to new forms of literary study beyond distant reading, or a “beyond text” approach that moves past traditional humanities.

  • The computational turn prompts rethinking digital humanities to engage more with questions of materiality and engagement. Maintaining a critical view is important when analyzing new media technically.

Here is a summary of the provided sources:

  • Juul (2001) discusses the relationship between games and narratives, arguing that games have their own logic and rules separate from storytelling.

  • Latour (1996) explores the development of a failed public transit system called Aramis through the lens of technological determinism. Latour (1992) also discusses how technologies shape society.

  • Lister (1995) examines the changing nature of photographic images in digital culture.

  • Mumford (1946) provides a wide-ranging analysis of the relationship between technology and civilization.

  • Moretti (2009) uses computational methods to analyze patterns in 7,000 18th century British novels.

  • Plant (1996) discusses how digital technologies are changing concepts of culture.

  • Ryan (2006) analyzes the role of avatar-based representation in interactive digital narratives.

  • Selzer (2009) applies philosopher Mark Seltzer’s work on trauma and publicity to new media technologies.

  • Sterne (2006) discusses MP3 files as cultural artifacts.

  • Turner (2006) traces the rise of digital utopianism through the work of futurist Stewart Brand and his Whole Earth Network.

  • Thornham (2011) ethnographically examines representations of gender and narratives in video games.

  • Williams (1975) analyzes television as a technology that shapes cultural forms.

  • Zielinski (2006) presents an alternative history of media and technology focused on discontinuities and innovations throughout deep history.

  • Zylinska (2009) applies ethical frameworks to philosophical issues raised by new media technologies.

The chapter discusses Alan Turing’s 1937 paper which developed the concept of a universal computing machine by abstracting from the practice of human computation, where mathematicians were hired to perform calculations. Turing’s analysis of the human computing process focused on how humans use writing to augment their limited memory in performing complex calculations.

The chapter then discusses how Turing’s later work in the 1940s on whether machines could think obscured the human roots of computation. His proposal of the “Turing Test” involved concealing physical gender indicators through communication by teleprinter, shifting judgments of intelligence inward.

More broadly, the chapter argues that efforts to overcome limits of human memory and cognition through technology have aimed to make memory and information more exterior and collective. While writing was initially problematic for secrecy, modern memory technologies conflate writing and memory, creating new anxieties about information revelation and concealment. The chapter traces this issue back to ancient accounts of the Druids’ preference for human over written memory for secret teachings.
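
To make the tape-and-symbol model concrete, here is a minimal sketch, not from the chapter, of a machine in Turing’s spirit: it reads and rewrites symbols on a tape, using the tape as the external memory that augments its few internal states. The transition table shown (a binary-increment rule set) is a hypothetical example chosen purely for illustration.

```python
# Minimal tape-machine sketch: memory is symbols on an unbounded tape.

def run(tape, head, state, rules):
    """Run the machine until it reaches the 'halt' state; return the tape."""
    tape = dict(enumerate(tape))          # sparse tape: position -> symbol
    while state != "halt":
        symbol = tape.get(head, "_")      # "_" stands for a blank cell
        write, move, state = rules[(state, symbol)]
        tape[head] = write
        head += 1 if move == "R" else -1
    return "".join(tape[i] for i in sorted(tape)).strip("_")

# (state, read symbol) -> (symbol to write, head move, next state)
increment_rules = {
    ("carry", "1"): ("0", "L", "carry"),  # 1 + carry = 0, carry continues
    ("carry", "0"): ("1", "L", "done"),   # 0 + carry = 1, carry absorbed
    ("carry", "_"): ("1", "L", "done"),   # ran off the left edge: prepend 1
    ("done",  "0"): ("0", "L", "done"),   # walk left over remaining digits
    ("done",  "1"): ("1", "L", "done"),
    ("done",  "_"): ("_", "R", "halt"),
}

print(run("1011", head=3, state="carry", rules=increment_rules))  # -> 1100
```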

  • The metaphor of memory as a storehouse or repository has been criticized, as memories cannot truly be stored when not perceived and then retrieved later. However, the metaphor persisted as information technologies advanced.

  • Early computers described their memory as a type of storage, like Charles Babbage describing the memory of his Analytical Engine. Von Neumann also used metaphors comparing computer and human memory.

  • However, computer and human memory are quite different. Computer memory is volatile and prone to errors, unlike the portrayal in early metaphors. It was not designed to protect secrets like human memory.

  • Turing conceived of memory as symbols on a tape, which could theoretically be inspected or altered from outside. Subsequent work tried to retrofit computer memory with properties like individual ownership and control that characterize human memory.

  • The idea of “hiding” emerged as a strategy to address the growing complexity of computing tasks and diversity of users/programmers. Pioneers like Butler Lampson designed layered, modular systems to isolate processes and allocate controlled access to resources, in effect “hiding” the lower layers and other processes.

  • Early computing systems in the 1960s employed “powerful protection” and “enforced interactions” to guard against accidental actions by users sharing the “bare machine” and memory. While security was not a major concern then, the systems aimed to prevent unintended access.

  • The Cambridge CAP computer from the late 1960s implemented tight control over memory access, restricting each program to only the required data. Processes were organized hierarchically by level of access privileges.

  • Around the same time, David Parnas proposed the concept of “information hiding” in software design, where each module conceals internal details and reveals only a minimal interface. This aimed to improve programs as complexity increased (a minimal sketch of the idea appears after this list).

  • Later works emphasized strategies like protection, layering and individuation to simulate private memory ownership by software “individuals.” Fully preventing unintended memory access remained a core challenge in security.

  • Object-oriented programming emerged as a dominant metaphor, blending technical needs like security with claims about human cognition. It offered structures for determining memory access through abstraction away from the “bare machine.”

  • Object-oriented programming views computation as objects actively passing messages to each other, rather than just a sequence of calculations. This gives computation a metaphorical social order of autonomous objects.

  • Early languages like Simula aimed to provide security by allowing attributes to be hidden from other objects. This developed into an anthropomorphic perspective of individualized, communicating processes.

  • Languages like C++ further developed protection mechanisms and the idea that information is private by default with access control. This gave objects an even more developed social life.

  • The reveal/conceal dynamic in code traverses binaries like hidden/shown and inside/outside as different imperatives motivated their creation. Ultimately code viewing computation as a social order hints at a proto-anthropomorphic perspective.

  • However, truly hiding things in software is difficult. Marking something as ‘hidden’ does not actually hide it but makes another mark. From certain perspectives, hidden boundaries between objects dissolve. The hidden is often transparent.
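
As a concrete gloss on Parnas-style information hiding and the object-oriented encapsulation described above, here is a minimal sketch, not from the chapter: internal storage sits behind a small public interface, and, in keeping with the last point above, Python “hides” it only by convention and name mangling rather than by enforcement.

```python
# Minimal information-hiding sketch: callers use the public interface and
# never depend on how the data is stored internally.

class Ledger:
    """Public interface: deposit, withdraw, balance. Storage stays internal."""

    def __init__(self):
        self.__entries = []            # name-mangled: discouraged, not enforced

    def deposit(self, amount):
        if amount <= 0:
            raise ValueError("deposit must be positive")
        self.__entries.append(amount)

    def withdraw(self, amount):
        if amount > self.balance():
            raise ValueError("insufficient funds")
        self.__entries.append(-amount)

    def balance(self):
        return sum(self.__entries)

acct = Ledger()
acct.deposit(100)
acct.withdraw(30)
print(acct.balance())                  # 70
# The internal list could be swapped for a database or a running total
# without changing any caller, which is the point of hiding it.
```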

Here is a summary of the key points from the provided text:

  • The text discusses the meaning and interpretation of legal texts, and how this may be impacted by computational analysis and data mining of large collections of legal documents.

  • It notes that positive law encoded in legal texts carries authority and can have real consequences, so legal interpretation is a normative undertaking that requires elements of predictability, coherence, and compliance with constitutional safeguards.

  • The proliferation of legal texts online makes comprehensive human review and interpretation impossible. Data mining and analytics may start doing more of this work to find relevant arguments and precedents.

  • This raises questions about the difference between human and computational interpretation, and if the meaning of law could be impacted. Data mining may surface patterns in past decisions that human review would miss.

  • The text focuses on how the data mining process must disambiguate texts into machine-readable data, and the algorithms used will co-determine the outcomes, with patterns potentially invisible to human review in courts.

  • This poses due process concerns if invisible computational analyses start substantively impacting legal arguments and decisions. The text aims to explore these issues around the “computational turn” in legal interpretation.

  • Sculley and Pasanek developed recommendations for interpreting the results of data mining legal texts to avoid unwarranted deference to hegemonic legal systems and open results to scrutiny.

  • Their recommendations aim to provide protection against issues that could arise from data mining, like overreliance on computational results without understanding their limitations and complexity.

  • Legal interpretation requires relating computational findings back to concepts like due process to ensure techniques like data mining do not undermine legal norms and principles.

  • Properly interpreting data mining results requires lawyers, legislators and citizens to scrutinize computational “knowledge” and understand the complexity of data mining operations.

  • This relates to debates on balancing computational decision-making with principles of legal interpretation, normativity, integrity and notions of justice in legal systems.

  • The law struggles with issues of measurability - can we quantify concepts like justice, causality, culpability, and punishment? Actuarial approaches try to calculate probabilities and predictions but have faced criticism.

  • Data mining legal texts introduces challenges around translating real-world events into discrete data points and interpreting patterns mined from large databases. Visualization tools may help understand these processes.

  • Early AI systems used production rules, case-based reasoning, neural networks, and logical approaches to model legal reasoning, but faced difficulties in ambiguous concepts like determining case similarities.

  • Modern techniques combine methods, like neural networks with fuzzy logic, to better analyze legal texts through clustering, annotation, and correlation of linguistic features to predictive outcomes.

  • However, human legal reasoning depends on context, viewpoints, and desired outcomes in perceiving analogies and similarities. Any analogical system would introduce a constitutive normative bias through how it frames and manipulates comparisons.

  • This reflects broader findings on the impact of emotion and bounded rationality on decision-making, challenging ideas of a purely objective or rational legal system and highlighting the need for scrutinizing normative biases in these quantitative approaches.

Here is a summary of the key points from the excerpt:

  • Data mining of legal texts aims to classify them to enable instant retrieval, analogous reasoning, or prediction. This involves “distant reading” of texts rather than close analysis.

  • The goal is for machine learning systems to detect patterns that help lawyers argue cases or scholars develop legal theories, rather than manually analyzing large amounts of individual cases.

  • However, “distant reading” raises questions about how it differs from traditional close reading of legal texts and how it may impact the normative bias lawyers use when interpreting laws.

  • Legal practice involves negotiating competing demands of certainty, justice and purpose, establishing a normative bias. Machine learning could expose inconsistencies but also unintentionally introduce new biases.

  • There are concerns about opaque algorithms and trade secrets restricting scrutiny of assumptions behind machine learning systems.

  • Lessons from digital humanities research suggest machine learning generates new ambiguous texts rather than clear answers, sending analysts back to original sources.

  • For legal mining, transparency, scrutiny of methods and reporting failed experiments are important to prevent domination by biases and protect minority positions, in line with principles of due process and fair trials.

  • Due process in criminal law signifies the effective right to contest a specific legal interpretation. In the era of automated legal text mining, lawyers need to develop an understanding of how data mining processes work in order to properly scrutinize and contest them when needed.

  • Citizens also have a due process right to understand how their actions may be legally interpreted and judged, and to contest determinations of guilt or liability. To uphold legitimacy and effectiveness, citizens must be empowered to understand the standards and data mining techniques that determined the outcome of their case.

  • In the future, autonomous legal systems may be used to directly argue cases, so techniques are needed to disclose how legal text mining influenced court reasoning, e.g. through visual analytics. This allows citizens to participate in scrutinizing knowledge construction technologies affecting their lives.

  • When developing legal knowledge systems through text mining, assumptions and methodologies must be made explicit, all results reported, the data and methods must be transparent, and a dialogue around different approaches should inform legal theory and debates. Oversight of these systems is needed to safeguard principles of due process and legitimacy.

Here is a summary of the key points from the chapter:

  • The chapter takes a somewhat different perspective on the relationship between the humanities and digitality compared to common understandings of digital humanities.

  • It argues that if the digital humanities encompass the study of software, writing and code, they need to critically investigate the role of digitality in constituting the concepts of the ‘humanities’ and the human.

  • It draws on the concept of ‘originary technicity’ from thinkers like Heidegger, Derrida and Stiegler who see technology as constitutive of the human, not just a tool used by humans.

  • Originary technicity critiques the Aristotelian view of technology as instrumental, and sees it as forming knowledge, culture and philosophy by providing inscription and memory.

  • The chapter aims to demonstrate through a deconstructive reading of software/code how the digital and human mutually co-constitute each other.

  • It suggests this perspective has implications not just for humanities and media studies but for concepts of disciplinarity.

In summary, the chapter argues for a deeper understanding of the co-constitution of technology and the human as essential to digital humanities work, drawing on the idea of ‘originary technicity’ and software studies.

The author argues that the digital humanities needs to engage more deeply with the concept of the ‘digital’ rather than viewing technology in a purely instrumental way. Technology, including digital technology, plays a fundamental role in constituting the human and should be acknowledged as such.

While fields like digital media studies have reflected on digitality for over a decade, focusing on concepts like software and code, technology has been a key concept in media and cultural studies at least since McLuhan. However, media studies traditionally studied technology ‘culturally’ by examining representations, social meanings, and user practices rather than technology itself.

The author suggests digital humanities should supplement this perspective with a more direct investigation of how digital technologies like software work underneath user practices and discourses. This does not mean directly observing software but understanding it involves more than just cultural approaches. The definition of software should include computer programs as well as related technical documentation and methodologies of software engineering. In short, digital humanities needs a deeper engagement with digital technology as a technical object, not just a cultural one.

  • The author defines software broadly as any technology with cultural significance, not just computer code. This allows them to analyze software through a cultural lens.

  • They want to ask how readable or legible software is, and consider software a form of writing since software engineering involves writing code. Concepts like reading, writing, documents, and text can be applied to software, not just metaphorically.

  • To truly read software, its distinctive forms of writing and reading must be understood. A functional reading only makes it work as intended by engineers.

  • The author proposes a “deconstructive” reading influenced by Derrida - examining what concepts in software must remain unthought for the system to function. This reveals how the software’s conceptual system is constituted.

  • Looking at how software engineering emerged helps understand how software came to be defined in relation to concepts of language, writing, and temporality. Its development involved controlling unpredictable consequences.

  • This history must be analyzed for specific software, not assumed universally. Software’s nature cannot be reduced but involves both concepts and escape from them.

  • Such a reading informs how we conceptualize digital knowledge and the humanities, not just software itself. New technologies shape the nature and content of academic knowledge.

  • The author was asked to give the closing plenary speech at the 2010 Digital Humanities Conference in London as an early career scholar within the field.

  • She reflects on the challenges of giving such a high-profile speech, particularly as a younger scholar, and the responsibility of speaking to the whole DH community.

  • She considers how new technologies have affected academic knowledge and disciplines. While technologies require rethinking traditional frameworks of knowledge, we cannot rely solely on those frameworks to understand technology’s impact.

  • A critical attitude is needed towards disciplinarity. The digital humanities should pursue deep self-reflection to keep its own disciplinarity and commitments open to scrutiny.

  • Drawing on Derrida, the author argues this self-reflection could be enabled by establishing a productive relationship between digital humanities and deconstruction as a questioning reading practice.

  • She proposes reconceptualizing digital humanities as understanding the digital in terms other than just technical or media/cultural studies frameworks. A close engagement with digital technologies and software itself is needed.

  • Examining specific software instances could show how software works and the conceptual issues involved, bringing opacity and technology’s constitutive role in knowledge formation to the fore.

This passage summarizes key aspects of a presentation the author was preparing to give at an international conference in their field. Some key points:

  • The author was very nervous about giving a plenary presentation to colleagues and friends at the conference, where it could make or break their young career.

  • Presentation styles have changed significantly in recent years with streaming and social media, requiring a more engaging presentation than just reading out a chapter.

  • The author decided to discuss the Transcribe Bentham project at UCL, which uses crowdsourcing to transcribe Jeremy Bentham’s manuscripts.

  • This would demonstrate digital humanities work, issues in the field like proving value amid budget cuts, and what practitioners can do better like maintaining impact and identity.

  • Transcribe Bentham and the competitive relationship between UCL and King’s College London provide context for discussing digital humanities challenges and next steps.

  • The Transcribe Bentham project has had promising initial success, with over 200 registered users and 200 completed transcriptions after only one month. There are dedicated regular transcribers.

  • However, important questions remain around sustainability, both funding ongoing work and maintaining the digital resources long-term. Legacy data issues also need addressing as transcriptions are converted to standard formats.

  • The project highlights the digital humanities field’s dependence on both primary sources/expertise and modern technology. It also deals with maintaining and updating existing as well as new digital resources.

  • Sustainability is a major concern for all digital humanities work. Having a strong digital presence and identity is also crucial for projects like Transcribe Bentham that exist primarily online. Addressing these emerging issues is important as the field continues to progress.

  • The author discusses strategies for engaging audiences on social media like Twitter and Facebook for their Transcribe Bentham project to transcribe and digitize the work of Jeremy Bentham. They hope to encourage interaction and draw people to the project website.

  • Digital presence and digital identity are becoming more important in digital humanities. Interacting in online spaces like Twitter brings complexity. They aim to harness social media to properly promote Transcribe Bentham.

  • The project embraces working with a wider audience and random, open input that may not be perfectly polished. They have to accept less than absolute perfection.

  • Publications are still important for academic “impact” and careers, though digital projects may have more actual impact.

  • There is debate around requiring PhDs for digital humanities roles focused more on technical skills. Career paths are unclear and the field has become more exclusive.

  • Young scholars face difficulties entering the field and lack of opportunities is a frustration amplified on Twitter.

  • Economic uncertainties loom with major funding cuts predicted for higher education, threatening research projects like Transcribe Bentham.

  • The speaker felt it was important to discuss job insecurity in the humanities at King’s College London, where all humanities academics faced redundancy threats in 2009-2010. Palaeography in particular is a field close to the speaker’s heart.

  • Digital humanities projects need to demonstrate their relevance, impact, and why continued funding is important. However, humanities scholars have historically struggled to make this economic case. DH scholars also need to clearly articulate the transferable skills involved in their work.

  • Not just the humanities but the whole higher education sector in the UK is in a precarious financial state. Universities are making brutal cuts. This impacts the continuation of projects, students, early career scholars, and job security.

  • If outsiders are scrutinizing academics and making value/funding judgments, digital humanities does not hold up well regarding digital identity, impact, and sustainability. Association websites are outdated and lacking in technology usage and best practices. DH is also poor at articulating relevance and impact.

  • DH needs to lead in preserving its heritage but has only recently started addressing gaps like missing conference abstracts and journals. This undermines claims that DH knows how to sustain disciplines.

  • For DH to improve status and demonstrate excellence, relevance and strength, it needs to take its digital presence and articulation of impact more seriously. Associations also need renewed participation and leadership.

  • For individual scholars, we should be prepared to clearly articulate what we do, why it matters, and why digital humanities should be supported. We need to think about the impact of our work and how it is relevant.

  • Individuals can find support through networks and communities. We should take our digital identity and online presence seriously and promote DH through publications and advocacy.

  • For those in management positions, support digital scholarship within departments and push for it to count towards hiring, promotion, and tenure. Promote teaching programs to train new scholars.

  • Institutions should build records of DH individuals and projects to facilitate research collaborations. Support digital outputs and encourage cross-institutional collaboration.

  • ADHO and affiliated organizations should maintain digital resources to promote DH successes and best practices. Foster collaboration and support young scholars.

  • Funding agencies have supported DH and should continue to do so to ensure research comes to fruition, though we recognize constrained finances. Support is needed for infrastructure and sustainability of projects.

  • As individuals, departments, institutions, organizations, and agencies, there are proactive steps we can take to further establish DH within the humanities and demonstrate our value.

Here is a summary of the key points from the article:

  • Patterns are frequently mentioned in digital humanities research but rarely rigorously defined or problematized as an epistemological concept. Pattern is often used interchangeably with related concepts like structure, shape, graph, and repetition.

  • In digital humanities, patterns usually refer to new structures or relationships uncovered through computational analysis of texts that would not be readily apparent to human readers. However, “structure” carries philosophical baggage so pattern is used instead.

  • While intuition and pattern recognition are central to knowledge generation across disciplines, there is no rigorous philosophical tradition specifically examining patterns. More work is needed to understand patterns as an epistemological object.

  • Patterns on their own cannot justify knowledge claims. They are part of the research process, not the end product. Knowledge from patterns is contextual rather than generalizable.

  • Patterns may be justified as a methodology in interventionist research like design and action research where value judgments about utility are made. They are less applicable as a standalone approach in other digital humanities areas.

  • In summary, the article argues that patterns can inform inquiry but not validate knowledge on their own, and their use depends on the research context and goals. More conceptual work is needed to understand patterns epistemologically.

Here is a summary of the key areas of development in digital humanities according to the provided text:

  • Humanities computing: Uses computers and information technology to create tools, infrastructure, standards, and digital collections for humanities research. Focuses on applying computing to transform traditional humanities approaches.

  • Blogging humanities: Leverages networked media and internet collaboration capabilities for humanities projects. Emphasizes collaboration and sharing through technologies like blogs.

  • Multimodal humanities: Integrates scholarly tools, databases, networked writing, peer commentary, and studies the technologies being used. Brings together different modes and formats.

The chapter then focuses particularly on computational humanities, where computing is applied to analyze individual texts/media or conduct meta-analysis of digitized archives using metadata and databases. It outlines the development of the chapter in five sections exploring psychological perspectives on pattern recognition, patterns in systems theory research, locating patterns epistemologically, Charles Sanders Peirce’s concept of abduction, and proposing an instrumental view of patterns.

  • Gregory Bateson identified common patterns or structures that occur across disciplines in the human and natural world. He called these “meta-patterns” and advocated for a “meta-science” to study them using a multidisciplinary approach.

  • Ecologist Tyler Volk has recently revived Bateson’s concept of meta-patterns as a way to unify knowledge across science and humanities. Examples include spheres, binaries, and sheets.

  • Christopher Alexander popularized patterns as a design methodology in architecture. He saw patterns as representations of underlying social, cultural, and physical forces that create livable spaces.

  • All of these approaches view the world from a systems perspective and see patterns as emergent structures within complex systems. Studying patterns allows insight into the forces and processes of these systems without fully modeling or abstracting them.

  • Patterns provide an empirical and intermediate way to understand systems by documenting common structural elements, rather than creating models or interpretations. They reveal optimal solutions that emerge from natural or social evolution.

  • The concept of patterns can be applied to understand systems across disciplines, but identifying meaningful patterns in fields like history or culture requires understanding the underlying structures and forces in those domains as well.

  • The context discussed is humanities computing and different research methodologies used in digital humanities, including positivist/empiricist, humanities, and action research approaches.

  • Action research/design research aims to effect change through intervention, so findings are only valid for that specific context and time period. Interpretative approaches aim for more generalizable claims but still require subjective judgements.

  • There can be a bias toward more scientific/positivist methods in digital humanities due to the researchers’ backgrounds and the tools/data-focused nature of the work. But care must be taken not to conflate humanities inquiry with empirical sciences.

  • Pattern-based approaches like Alexander’s patterns sit within the action research category, focusing on practical effectiveness over universal validity. Patterns are a design tool rather than fixed knowledge.

  • Peirce’s abductive reasoning framework describes how patterns and hypotheses emerge through intuition and are then tested using deduction and induction. Care must be taken to validate patterns and not fall into “apophenia” of seeing false connections.

  • The discussion frames pattern analysis within different research philosophies and highlights challenges of balancing interpretative and empirical approaches in digital humanities work.

  • Patterns found in data can be either discovered organically or designed/retrofitted by researchers based on preconceptions. Discovered patterns require validation to avoid false patterns, while designed patterns risk fitting the data to a preselected model.

  • Spotting patterns requires care not to influence the data with preconceptions or see patterns where there are none. Inductive or deductive reasoning is also needed to validate patterns.

  • Rather than defining what patterns are, a methodology is needed for appropriately using patterns in line with Peirce’s abductive-deductive-inductive method.

  • Action research provides an established epistemological framework justifying interventionist methods like pattern analysis. It focuses on practical problem-solving over detached explanation or prediction.

  • Process philosophy and pragmatism view knowledge as emerging from iterative processes rather than static truths, supporting the use of patterns within a research cycle rather than as fixed outcomes.

  • Patterns have validity as part of research processes like inspiration, but cannot be abstractly justified on their own. More questions remain around defining patterns across disciplines and developing a methodology.

This passage provides a formal analysis of the concept of patterns in digital humanities and related fields. Some key points:

  • Contemporary digital humanities aims to make patterns visible through data visualization, rather than just understand hidden conceptual structures as structuralism sought to do. This raises questions about whether we find patterns that aren’t really there.

  • Pattern recognition is important in fields like cybernetics, information theory, cognitive psychology that view systems and information processing models of the mind/world. Digital humanities now draws on these concepts.

  • Are the patterns actually emerging from evolutionary systems, or are researchers imposing systems/patterns? What is the nature of the conceived systems - connections vs. physical/social structures?

  • Researchers’ intentions in digitizing and systematizing texts need consideration. Are observed patterns just descriptive, or are they used to support theoretical models in a way that may not be justified?

  • Controlled experimentation is difficult given the large-scale, human nature of many systems studied. Patterns may be useful for local structures but not describe whole underlying systems.

  • Questions about patterns are interrelated to larger concepts of information, systems, and epistemology. As these concepts increase in importance, patterns will be too in digital humanities and other disciplines.

So in summary, it provides a formal analysis of key conceptual issues regarding patterns and related concepts in digital humanities methodology.

Here is a summary of the key points from the article:

  • The Digital Formalism project aimed to gain new insights into Russian filmmaker Dziga Vertov’s formally structured style through frame-by-frame analysis of his films. It involved archivists, film scholars, and computer scientists.

  • Films were digitized frame-by-frame to create digital image sequences without interpolated frames. Manual annotation was done using software to provide ground truth for automated analysis.

  • Challenges included the theoretical vagueness of Russian Formalism and differences between humanities and sciences terminology. Automated analysis also had limitations due to film condition and Vertov’s techniques.

  • Close study of Vertov’s writings and films was needed to understand his intended structures, rather than just formal analysis. His films employ dissolves, superimpositions, and tricks that complicate analysis.

  • The project aimed to determine which cinematic elements are key to analysis, how computer methods can support or enhance manual annotation, and how data can be visually represented to reveal hidden structures.

  • Visualization is discussed as a way to represent large quantities of data or structure in films to provide new insights for film scholars. Two collaborators, Lev Manovich and Yuri Tsivian, represented different approaches to visualization.

  • Cinemetrics is an online database created by film scholar Yuri Tsivian for analyzing the formal aspects of films, such as shot lengths and structures. Early filmmakers and scholars counted frames and shots to study rhythm and style.

  • The database allows contributors to manually annotate films with shot details. For a project on Dziga Vertov’s films, the author needed precise frame-level annotations to analyze short shots. Some data was input via Excel. Discussions were held regarding reel changes and film structure.

  • Lev Manovich and his researchers created visualizations of the film data, starting with basic shot parameters and adding other metrics like shot types and motion. Visualizations showed shot contents, structures, and montage variations across reels.

  • Automated analysis has limitations as it cannot interpret semantics or distinguish camera from object motion. Face recognition also has issues. Manual annotation and discussion with experts is still needed to interpret computer results properly.

  • Visualizations can be useful for studying how technical innovations impacted style, comparative analyses across time/place, and analyzing how politics influenced certain filmmakers’ ability to experiment, as seen in some of Vertov’s purportedly “uninteresting” later films.
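
As a hedged illustration of the kind of formal metric Cinemetrics-style analysis works with, the following Python sketch computes average shot length and its spread per reel from shot annotations. The data, frame rate, and reel structure here are invented for the example, not taken from the project.

```python
from collections import defaultdict
from statistics import mean, stdev

FPS = 24  # assumed projection speed; silent-era frame rates vary in practice

# Hypothetical manual annotation: (reel number, shot length in frames)
shots = [(1, 48), (1, 12), (1, 96), (2, 8), (2, 10), (2, 9), (3, 240), (3, 36)]

by_reel = defaultdict(list)
for reel, frames in shots:
    by_reel[reel].append(frames / FPS)  # convert frame counts to seconds

for reel, lengths in sorted(by_reel.items()):
    asl = mean(lengths)                                   # average shot length
    spread = stdev(lengths) if len(lengths) > 1 else 0.0  # variation within the reel
    print(f"reel {reel}: {len(lengths)} shots, ASL {asl:.2f}s, sd {spread:.2f}s")
```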

  • The passage discusses analyzing controversy and its resolution on Wikipedia, as the encyclopedia allows open editing from multiple perspectives.

  • It analyzes a sequence in the film “wing” in which a formally structured passage appears in the middle, disrupting the conventional opening structure.

  • It also touches on limitations of automated analysis for interpreting genres, semantics, and artistic signatures/styles, noting that human input is still needed.

  • It discusses opportunities for computer science to aid film studies, like facilitating data mining, shot recognition, and digitization to compare film versions held in different archives. While computational methods provide new insights, human expertise remains important for developing meaningful analysis categories.

  • The contribution of digital humanities to this project and film studies more broadly could include making patterns and micro/macro structures within films more visible through automated extraction and visualization of film data. However, categories for analysis still require human knowledge and judgement.

This chapter argues that previous research on Wikipedia has focused too much on evaluating the accuracy and quality of articles, rather than examining Wikipedia as a dynamic collaborative process. It reviews studies that have looked at the quality and reliability of Wikipedia articles compared to traditional encyclopedias. While some studies found issues like omissions, others found Wikipedia articles to be quite accurate, with errors quickly corrected by the community.

The chapter asserts that Wikipedia should be understood as a socio-technical system, not just a collection of articles. Technical features like software, discussion pages, history pages, and protections/oversight from administrators and bots all mediate and facilitate the collaborative editing process. Controversy and differing perspectives during this process can actually be positive and lead to article improvements over time.

The chapter proposes analyzing controversial editing events and visualizing how articles and their associated discussion networks evolve. It will use the Wikipedia article on Feminism as a case study to explore how controversy is resolved and the role it plays in the collaborative development of articles on Wikipedia.

  • The chapter proposes using Wikipedia’s editing history and processes of managing controversy to study how societal concerns are reflected in article revisions over time. Controversial topics within articles can provide insights into how human and non-human actors shape article quality.

  • Tommaso Venturini provides a definition of controversy as a contestation between various actors (human and non-human) that challenges established assumptions or aggregates. Controversies both open up “black boxes” for discussion and construct them.

  • Venturini outlines five pointers for locating controversies on Wikipedia: they involve diverse actors; show the dynamic social form; are reduction-resistant; bring previously unnoticed objects into discussion; and are conflicts between worldviews.

  • To analyze controversies on Wikipedia, the chapter examines tools like talk pages, editing histories, warning templates, and hierarchical privileges that govern how disputes are handled. Features like forking and splintering of content also reflect controversies.

  • The chapter argues controversies can be fruitfully studied on Wikipedia to understand how consensus and quality emerge through the participation and perspectives of diverse editors over time.

  • The study analyzes the Wikipedia article on “Feminism” to understand controversy and debate around this topic on the platform. It uses various digital tools to collect and visualize editing data.

  • The sample focuses on “Feminism” as it is relatively mature and covers a historically controversial issue. Accusations of gender bias on Wikipedia also provide context.

  • Macro analysis looks at the network of articles linked to “Feminism” through bi-directional links. It finds the articles receiving the most edits and charts editing activity over time for some articles.

  • Micro analysis examines two narratives - the two peaks of activity in edits to “Feminism” between 2005-2007, and the splitting of the “Criticisms of Feminism” heading into “pro-feminism” and “anti-feminism” sections.

  • The study aims to understand if periods of highest activity indicate controversy and whether the heading split resolved disputes by analyzing editing patterns before and after these events.

  • Findings provide insights into controversy across the “Feminism” article ecology over time, distribution of editors, and relationship between activity levels and article maturity/controversy.

  • The comments in the editing history of the Feminism Wikipedia article were analyzed to see which Table of Contents headings received the most edits during two periods of peak activity.

  • During the first peak from 2006-2007, the “Criticisms of Feminism” section received the most edits, showing high interest in debating that topic. Other frequently edited sections included “Feminism and Science”, “Famous Feminists”, etc.

  • During the second peak in mid-2007, “Criticisms of Feminism” remained a focus, and the new section “Feminism and Religion” also saw a lot of edits as it was introduced. There was also activity on the heading “Issues Defining Feminism”.

  • The splitting of the “Criticisms of Feminism” heading into separate “Pro-feminism” and “Anti-feminism” sections in 2007 corresponded with a decline in editing activity, suggesting it helped resolve controversy. The split headings also saw fewer changes over time than the original single heading.

  • Key editors during periods of controversy either continued focusing on related topics or were eventually banned, indicating Wikipedia’s hierarchical protocols at work in resolving disputes.

  • Tracing edits across related article topics and headings within the Feminism article provided insights into how controversies emerged, migrated, and were potentially resolved on Wikipedia.

  • The study analyzes the Wikipedia article on “Feminism” to map controversies and track how they change over time.

  • It uses digital tools and methods like the Wikipedia edit scraper and network analysis tool to analyze edits, editors, links, and discussion page activity (a minimal sketch of pulling such revision data appears after this list).

  • In the early years, the most controversial topic was “Criticisms of Feminism.” This was relegated to smaller sections by 2007 as interest declined.

  • Editors like Anacapa and Bremskraft were blocked due to disruptive editing patterns around feminism topics.

  • The data shows how controversy management shapes both the content and structure of the article over time. Heated topics are highlighted and refined through disputes.

  • Mapping controversies provides insights into what issues are most important to editors and reveals complex dynamics that influence the article’s content maturity.

  • Further studies could use similar methods to analyze real-time controversies by tracking issues and editors as debates unfold on Wikipedia.
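
For readers curious how this kind of editing data can be gathered, the sketch below (not the chapter’s actual scraper) pulls revision metadata for the “Feminism” article from the public MediaWiki API and counts edits per year — the sort of activity-over-time series behind the peaks discussed above.

```python
import requests
from collections import Counter

API = "https://en.wikipedia.org/w/api.php"

def iter_revisions(title):
    """Yield revision metadata for a page, following API continuation tokens."""
    params = {
        "action": "query", "format": "json", "prop": "revisions",
        "titles": title, "rvprop": "timestamp|user|comment",
        "rvlimit": "500", "rvdir": "newer",
    }
    while True:
        data = requests.get(API, params=params, timeout=30).json()
        page = next(iter(data["query"]["pages"].values()))
        yield from page.get("revisions", [])
        if "continue" not in data:
            break
        params.update(data["continue"])  # follow the continuation token

edits_per_year = Counter(rev["timestamp"][:4] for rev in iter_revisions("Feminism"))
for year, count in sorted(edits_per_year.items()):
    print(year, count)
```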

Here is a summary of the key points from the article:

  • With the rise of digital media and social media, humanities researchers now have access to huge collections and repositories of digitized cultural data, far more than any individual could examine in a lifetime. This represents both an opportunity and a challenge for how scholars study culture.

  • Collections of amateur/user-generated media like photos, videos, and designs now run to millions or billions of items, greatly exceeding the size of traditional cultural datasets studied by humanists.

  • The authors developed computational methods for analyzing and visualizing large collections of digital images and media. This includes automatic digital image analysis to extract numerical descriptions of visual features, and visualizations that organize the complete dataset based on these features.

  • As a test case, they applied this method to analyze over 1 million pages of manga images downloaded from a manga hosting website, extracting various visual features from each page using computation.

  • The goal is to demonstrate the need for computational approaches to explore such large collections and identify patterns, as well as introduce their specific methodology combining image analysis, feature extraction and visualization.

The passage discusses the limitations of traditional humanities research in analyzing large cultural datasets through a small sample size or “close reading” approach. To properly analyze large collections of images from fields like manga, we need automated methods that can precisely compare and find patterns across sets of millions of images.

The author uses the example of comparing stylistic differences between pages from two popular manga series, One Piece and Vampire Knight. While we can note some distinctions by examining two sample pages, this approach does not scale to analyzing the thousands of pages in each full series, or millions of pages across different manga genres and authors. Looking at a small random sample provides no certainty that observations will generalize.

New computational methods that can automatically analyze visual elements of large image datasets at scale are needed. This would allow mapping the full graphical spectrum used across commercial manga, understanding stylistic evolution over time, comparing long-running versus short series, and objectively testing hypotheses about stylistic differences between gender/age demographics. The key is developing interfaces that make precisely comparing and exploring millions of cultural images feasible.

  • Examining only a small sample of images from a large dataset poses limitations. The sample may not be representative of variations in style, content, etc. across the full dataset. It also makes it difficult to analyze gradual transitions or changes over time.

  • Even looking at a moderate sized dataset of 100 images, solely using visual inspection has limitations. The human eye is not very good at registering subtle differences between images. Two images that are only slightly different may appear the same to our eyes.

  • To properly analyze large image datasets, a more systematic approach is needed, such as annotating/tagging each image according to visual characteristics using a controlled vocabulary. This allows statistical analysis of patterns in the tags across the full dataset.

  • However, manually tagging a dataset as large as 1 million images would require an unrealistic amount of time, taking one person around 6 years to complete. Crowdsourcing can help but has its own limitations for visual analysis.

So in summary, visual inspection alone is insufficient for large datasets, but manual annotation does not scale either - new computational approaches are needed to analyze visual patterns and changes at large scales.

  • It is difficult to notice small visual differences between images using human visual analysis or annotation methods alone. Natural languages also lack terms to describe all possible visual variations.

  • The method used involves digital image processing to automatically measure visual characteristics (features) of images, like color, texture, lines, etc. This quantifies visual qualities as numeric values.

  • The average values of features for each image are then used as coordinates to position images on 2D scatter plots based on the selected features. This visualizes differences that are hard for humans to perceive directly.

  • It bypasses issues with human perception and description by translating visual qualities directly into quantitative values and using those to position images in visual space. This “describes images with images” rather than relying solely on human perception or language.

  • The core technique is scatter plots that map averaged feature values of images to Cartesian coordinates to display and compare images based on selected visual dimensions quantified numerically.
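
As a rough sketch of this approach (assuming the page images sit in a local folder, here `pages/*.png`), the following Python code reduces each image to two numbers and plots the whole collection in that feature space.

```python
import glob

import matplotlib.pyplot as plt
import numpy as np
from PIL import Image

xs, ys = [], []
for path in glob.glob("pages/*.png"):            # placeholder location for the page images
    pixels = np.asarray(Image.open(path).convert("L"), dtype=float)
    xs.append(pixels.std())                      # x: spread of greyscale values (contrast)
    ys.append(pixels.mean())                     # y: average brightness of the page

plt.scatter(xs, ys, s=8)
plt.xlabel("std. dev. of greyscale values")
plt.ylabel("mean greyscale value")
plt.title("Pages positioned by two quantified visual features")
plt.show()
```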

Here are the key points about using digital image processing and visualization techniques to analyze a large collection of manga images, as summarized from the provided text:

  • The researcher uses techniques like scatter plots, line graphs, and a new “image plot” method to visualize images according to quantitative descriptions of their visual properties like brightness, saturation, etc.

  • Image plots superimpose actual images onto data points in graphs, allowing visualization of both numerical values and image content.

  • Case studies analyze two manga titles, Abara and NOISE, plotting pages based on standard deviation of brightness values and entropy. This reveals stylistic variation between titles and outlier pages.

  • Temporal changes in styles are studied by plotting visual features like entropy against page sequence numbers. This shows how styles varied over the narrative sequence.

  • Both titles showed pages becoming more stylistically varied over time, though patterns differed between titles.

  • The methods allow large image collections to be explored and compared based on quantitative visual features extracted from the images. This provides new insights into graphic narratives beyond traditional qualitative analysis.

  • The y-axis represents the mean (average) greyscale value of all pixels in a page, where greyscale value is calculated as (Red value + Green value + Blue value) / 3.

  • An example is provided of visualizing 342 pages from the webcomic Freakangels over 15 months of weekly publication. The visualization shows the mean greyscale value changes very gradually and systematically over the publication period, revealing an unexpected pattern.

  • Two popular manga series are compared - Naruto (8037 pages) and One Piece (9745 pages). Scatter plots map each page based on standard deviation of greyscale values on the x-axis and entropy calculated from greyscale values on the y-axis.

  • The plots show quantitative rather than qualitative differences between the visual languages of the two series. One Piece tends to have more detailed pages while Naruto’s visual language is more diverse.

  • Time-based visualizations reveal patterns in stylistic development over long publication periods, from years down to monthly/weekly scales.

  • Visualizing over 1 million manga pages covers the full space of graphical possibilities more densely than individual series, questioning concepts like discrete artistic “styles”.
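
For concreteness, here is one common way to compute the greyscale measures mentioned above: the mean via (R + G + B) / 3, the standard deviation, and Shannon entropy of the 256-bin greyscale histogram. The authors’ exact implementation may differ, and the filename is hypothetical.

```python
import numpy as np
from PIL import Image

def greyscale_features(path):
    rgb = np.asarray(Image.open(path).convert("RGB"), dtype=float)
    grey = rgb.mean(axis=2)                          # (R + G + B) / 3 for every pixel
    hist, _ = np.histogram(grey, bins=256, range=(0, 256))
    p = hist / hist.sum()                            # histogram as a probability distribution
    p = p[p > 0]                                     # drop empty bins before taking logs
    entropy = -(p * np.log2(p)).sum()                # Shannon entropy in bits
    return grey.mean(), grey.std(), entropy

mean_grey, std_grey, ent = greyscale_features("page_001.png")  # hypothetical filename
print(f"mean {mean_grey:.1f}, std {std_grey:.1f}, entropy {ent:.2f} bits")
```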

  • Formalization is seen as key to introducing computational methods in the humanities, but experiences have shown that rigidly imposing formalization from a computational perspective does not work well for humanities research communities.

  • The authors conceptualize different approaches to formalization in humanities research and computing as “cultures of formalization” to highlight epistemic variety.

  • They present four case studies exploring how formalization is already a part of humanities research without necessarily thinking of computation.

  • Analyzing cultures of formalization can identify aspects of research that could be better supported through suitable computing approaches, enriching the computing research agenda and enabling more constructive interactions between humanities and computing.

  • Considering formalization as having epistemic variety instead of being a singular concept is important for computational programs in the humanities to succeed. It leads to more symmetrical and collaborative relationships between different stakeholders.

  • The authors’ explorations and findings influenced developments within their Alfalab virtual research environment project to take into account different cultures of formalization.

This section discusses four examples of formalization in different areas of the humanities. The first case study explores using computational techniques like topic maps and rhetorical structure theory to formalize and visualize the hypothesis dependency structure in historical interpretation. This would make the complex networks of arguments and evidence more explicit and evaluable. However, formalizing arguments poses challenges as it moves them away from their original styled text into abstract symbolic representations. This risks alienating historians.

The second case study looks at analyzing and comparing the use and functions of names in literary texts from different genres and time periods. Studying all the names in a text or body of work, rather than just significant ones, allows for a deeper understanding of how names contribute to the “onymic landscape”.

In summary, these case studies show different approaches to formalization across humanities domains like history and literature. While computation can facilitate new analyses, formalization also risks distancing researchers from their original practices and styles of argumentation. Care must be taken to avoid alienating scholars and promote formalization approaches that establish trust.

This case study looks at how researchers in visual fields like sociology and anthropology use Flickr to study empirical material related to graffiti and street art. While primarily a photo sharing platform, Flickr allows users to build personal archives, browse others’ content, and engage in social networking activities. Researchers use Flickr as a supplemental source to illuminate materials gathered through fieldwork by connecting different pieces of empirical data. They are able to do this because of the huge amount of user-uploaded content on Flickr that is searchable through tags and labels. Even though researchers are critical of overly descriptive text for photos, they still add titles and tags to their own Flickr photos to facilitate search and connection to other relevant materials. The act of tagging contributes to the informal formalization of how content is categorized and understood on the platform, which in turn shapes how it can be constituted as an empirical research source.

  • The case studies illustrate different modes and motivations for formalization in humanities research, rather than seeing it as a unitary principle driven by computation.

  • Formalization can take technical/cultural, heuristic, emergent, and other forms depending on the research domain and goals.

  • Computational approaches should be developed to better serve researchers’ needs and ambitions, not dictate them.

  • Recognizing different cultures of formalization can help researchers learn from each other and cross-fertilize ideas.

  • The Alfalab project aimed to foster interdisciplinary collaboration but found a shared infrastructure was not enough - virtual research environments were tailored to specific research workflows.

  • Computational humanities should maintain humanities traditions and not claim its methods are universal or superior. Formalization also impacts the cognitive process of research.

  • Understanding formalization practices can help align technology development with humanities needs in a more nuanced way.

So in summary, it advocates recognizing the diversity of formalization in the humanities and tailoring computational approaches accordingly, rather than imposing a single model.

The project Alfalab aimed to develop digital resources to support humanities research, but found that generic computational approaches were not suitable as they did not align with specific research topics and methodologies.

Instead, Alfalab created three targeted virtual research environments (VREs): TextLab for text-based research, GISLab for geospatial data analysis, and LifeLab for historical life course analysis. These VREs provided domain-specific tools and workflows tailored to the needs of relevant research communities.

Workshops and documentation helped familiarize researchers with the VREs. An InterfaceLab group ensured collaboration between humanities scholars, computer scientists, and science and technology studies experts to develop the VREs.

While acknowledging different epistemic cultures and formalization approaches, one commonality identified across disciplines was the concept of annotation. Therefore, a fourth demonstrator was created as an annotation discovery tool to link annotations across the three VREs.

This aligned computational approaches with specific research needs while also identifying common grounds to enable some interdisciplinary collaboration through digital means like shared annotation capabilities. More knowledge of varied humanities research practices could further boost successful networking of digital resources.
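
To illustrate what a tool-agnostic annotation record might look like, the sketch below defines a minimal annotation that a discovery service could harvest and cross-link across environments such as TextLab, GISLab, and LifeLab. This is a hypothetical schema for illustration, not Alfalab’s actual data model.

```python
from dataclasses import dataclass, asdict
import json

@dataclass
class Annotation:
    annotation_id: str
    target: str        # URI of the annotated resource (text, map layer, life-course record)
    selector: str      # how the annotated fragment is addressed (e.g. line range, bounding box)
    body: str          # the scholar's note
    creator: str
    source_vre: str    # e.g. "TextLab", "GISLab", "LifeLab"

note = Annotation(
    annotation_id="urn:example:anno:42",
    target="urn:example:textlab:letter-1873-04-12",
    selector="lines 12-18",
    body="Spelling variant worth comparing with the GISLab place-name layer.",
    creator="researcher@example.org",
    source_vre="TextLab",
)

# Serialised as JSON, records from different environments can be harvested
# and cross-linked by a discovery tool on shared fields such as `target`.
print(json.dumps(asdict(note), indent=2))
```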

Here is a summary of the key points from the given text:

  • Transdisciplinarity goes beyond multidisciplinarity and interdisciplinarity by involving integration and collaboration across different sectors of society in addition to academic disciplines.

  • Funding agencies are encouraging transdisciplinary and data-driven research in the humanities to address complex societal issues. This involves large-scale data sharing, computational analysis methods, and collaboration across disciplines.

  • e-Research tools and cyberinfrastructure aim to facilitate this type of data-intensive and collaborative research. However, developing such tools requires transdisciplinary work integrating the expertise and perspectives of different fields.

  • The paper uses the example of developing text-mining tools for social scientists to illustrate the transdisciplinary nature of digital humanities research enabled by e-Research.

  • Transdisciplinary research presents challenges in integrating knowledge and practices from different domains. But it also signals major changes and opportunities for conducting humanities research at large computational scales and through inter-sector collaboration.

In summary, the text discusses the shift towards transdisciplinary digital humanities research supported by e-Research infrastructure, using the case study of developing text-mining tools to demonstrate this transdisciplinary approach in practice.

This passage discusses transdisciplinarity in the context of digital humanities research. Some key points:

  • Mode 2 research refers to knowledge production that is application-oriented, involves diverse stakeholders from various sectors, and crosses disciplinary boundaries. Transdisciplinarity is key to Mode 2.

  • Transdisciplinarity goes beyond interdisciplinarity by involving a more dynamic interaction between disciplines and developing a shared conceptual framework.

  • Several studies are discussed that use qualitative methods like ethnography and interviews to study collaboration and knowledge sharing across disciplines in projects involving areas like nanoscience, computing grids, and different academic fields.

  • The case study examined is an 18-month project customizing text mining tools for social science research. It involved collaboration between domain experts and text mining developers.

  • The project aimed to demonstrate the usefulness of text mining for humanities research but ended up providing insights into how computational tools are actually implemented and situated within real interdisciplinary projects and the negotiations between different methodologies and practices this involves.

  • Text mining refers to the discovery of new information from text sources by automatically extracting and linking information using techniques like data mining, machine learning, natural language processing, etc. It aims to gain insights and knowledge that cannot be achieved manually.

  • Textual analysis traditionally involves close reading of texts by researchers to interpret and code themes. It is labor intensive and limited to small corpora sizes.

  • The project aimed to develop text mining tools to assist large scale textual analysis of newspaper articles.

  • The process involved scoping the domain, preparing datasets, developing algorithms to build corpuses of different sizes (5,000 vs 200-300 articles), and acquiring a codebook from researchers to categorize exemplar documents.

  • There were differences in how the researchers and text miners built the corpuses - researchers carefully selected a small set while text miners built a larger indiscriminate set for greater insights from mining algorithms.

  • Adopting text mining tools represents a paradigm shift from interpretative analysis to algorithm-based, statistics-led practices using large corpuses for knowledge discovery.

  • Text miners claimed that larger corpora (datasets of documents) would allow users to reduce noise by ignoring common words, and that larger datasets are better for finding patterns through training text mining algorithms. A sensible clustering typically needs 2,000-4,500 documents.

  • The project involved a domain researcher analyzing a smaller corpus of 200-300 news articles qualitatively as a baseline for training the text mining tools. They met periodically for the researcher to provide details on their analysis process.

  • The text mining system developed tools for term extraction, document clustering, named entity recognition, and sentiment analysis. These required deductive tagging of documents unlike the researcher’s inductive coding process.

  • Iterative user testing revealed the tools sometimes highlighted irrelevant words/phrases and clustering produced many documents to review. Researchers wanted more explanation of the algorithms.

  • While user-friendly, the “black box” nature of the tools and lack of data management features hampered validity judgments and efficiency compared to manual analysis. Overall it was a useful feasibility study but improvements were needed.
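
As a hedged illustration of the kind of clustering and term extraction described above, the code below groups a handful of documents with TF-IDF and k-means and lists the top-weighted terms per cluster. This is a generic scikit-learn sketch, not the project’s actual system, and the toy articles are invented.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.cluster import KMeans

# Stand-ins for the corpus of newspaper articles
articles = [
    "Ministers debate new funding for university research programmes.",
    "University budgets face cuts as government spending is reviewed.",
    "Local team wins the championship after a dramatic final match.",
    "Fans celebrate the championship victory in the city centre.",
]

vectorizer = TfidfVectorizer(stop_words="english")   # ignore common words, as the text miners suggested
X = vectorizer.fit_transform(articles)

km = KMeans(n_clusters=2, n_init=10, random_state=0).fit(X)

terms = vectorizer.get_feature_names_out()
for label in range(km.n_clusters):
    # the highest-weighted terms in each cluster centroid act as crude "extracted terms"
    top = km.cluster_centers_[label].argsort()[::-1][:3]
    print(f"cluster {label}: {[terms[i] for i in top]}")
```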

  • Users saw potential value in document clustering and term extraction for preliminary analysis of large corpora, but would need extensive domain expert training and validation of results. Quality assurance comes from expert training, not understanding the algorithm.

  • Named entity recognition was straightforward to use but had limitations like not catching all entities and providing little value for qualitative social research.

  • Sentiment analysis was also easy to use but social scientists questioned its validity due to issues with interpreting language and sentences based only on individual words.

  • This raised fundamental issues around what semantic content from texts would be most useful to qualitative researchers and how to present tools to build trust in valid results.

  • While text mining can quickly process large amounts of text, users lost confidence when long lists of undifferentiated search results stretched across many screens. Confidence normally comes from iterative close reading.

  • There is a continuum from highly structured limited-domain texts that quantitative methods may suit, to unstructured unlimited-domain texts requiring more interpretative effort. Some argue a discontinuity exists on this continuum.

  • Adopting text mining for analysis denotes transdisciplinarity, but challenges include the extent of shared theoretical frameworks and the degree of buy-in from both computer scientists and social scientists. Tools risk instrumentalizing the humanities with a big data/AI mindset.

  • The integration of expertise from scientific domains like genetics, physics and biology into the humanities has caused unease among domain experts. Pieri (2009) notes that many social scientists are unaware of the promises attached to connecting large datasets with new analytical tools.

  • To balance this, she calls for discussing limitations of e-science tools and negotiating shared values across stakeholder groups. Better communication is needed for transparent debate and research policy negotiation.

  • Existing literature emphasizes collaboration as key to interdisciplinary success. Strategies include common learning, modeling, expert negotiation, and leader integration.

  • A case study of a text mining project showed disciplinary integration was difficult, hindering transdisciplinarity. Domain experts and text miners struggled to communicate assumptions and appear user-centered.

  • Lessons include collaboration being central, not incidental, to interdisciplinary projects. While automatic coding may aid research, human intelligence remains needed to make sense of results. Efforts continue to harness what computers can do like processing large data to support new social science advances.

Here are the key points from the citations provided:

  • Many of the citations discuss concepts related to new knowledge production, interdisciplinarity, and collaboration within and across academic fields as digital technologies have emerged and evolved. Areas discussed include e-research, digital humanities, computational methods, knowledge discovery/data mining, and information/communication technologies.

  • Several look specifically at how ICTs and digital methods are taken up differently across various research domains and specialisms. Some focus on specific case studies like corpus linguistics.

  • Some address changes to scholarly/scientific communication, practices, and infrastructure through digital/electronic media. Topics covered include open access/open science.

  • Many discuss changing models/frameworks for understanding knowledge production and research dynamics in contemporary digital environments, with some drawing on theories like post-production of knowledge.

  • Some engage conceptual frameworks like interdisciplinarity, framing, paradigms to analyze cross-pollination of ideas and practices between fields.

  • A few analyze sociotechnical aspects like the role of institutions, infrastructure, collaboration in digital research transformation.

  • Text/data mining, digital methods, computational approaches are discussed in relation to new forms of inquiry across disciplines.

Here is a summary of the key terms:

  • Ecologies, networks, media, digital humanities studies, theory, methodology were frequent topics.

  • Print culture, production, memory, reading, narrative, language, and text were areas of focus.

  • Tools, software, data analysis, visualization, encoding, and metadata were techniques discussed.

  • Platforms like Twitter, Wikipedia, and YouTube were referenced in relation to current research.

  • Broader themes included process, practice, skills, power relations, public spheres, and the roles of universities.

  • Key figures mentioned were Turing, Vertov, Tarde, Stiegler, and Williams in relation to foundational ideas.

  • Qualitative and quantitative social science methods were applied across cultural, linguistic, and computational studies.

  • The transformation of knowledge and scholarship through digitization and technical developments was a widespread theme.

#book-summary