DHCH@ISR 2025 / Speakers

<rdf:RDF>
<dhch:Speaker
dhch:name

Lucas Burkart

dhch:member-of
Professor of Medieval and Renaissance History, University of Basel
dhch:talk-topic
Data Bias—A Historian’s Perspective, or What Machine Learning Systems Have to Do with the Japanese Middle Ages
dhch:talk-summary
/>
</rdf:RDF>
<rdf:RDF>
<dhch:Speaker
dhch:name

Sven Burkhardt

dhch:member-of
University of Basel
dhch:talk-topic
Bias Is Not a Bug: Towards Methodological Awareness in AI-Assisted Humanities
dhch:talk-summary

This talk examines the intersecting layers of bias that emerge when applying large language models (LLMs) to historical sources—specifically, a corpus of letters from Nazi-era Germany (1939–1945). Drawing on a Digital Humanities project that uses neural networks and LLMs to extract named entities (persons, roles, places, organizations), I argue that machine bias does not act in isolation, but rather interacts with and amplifies existing archival and source-based distortions.

The first layer is archival bias: not all individuals, voices, or materials are preserved equally. The second is source bias: even within surviving documents, representation is uneven—particularly for women and marginalized groups. For example, female actors often appear only in relational forms, making them nearly untraceable. The third layer is model bias: LLMs trained on modern or biased datasets tend to misclassify, erase, or flatten these already fragile traces. Rather than viewing these biases as mere technical errors, I propose treating them as structural epistemic problems. The talk discusses how we address these issues within the annotation pipeline by implementing review flags (needs_review), context-sensitive deduplication, and uncertainty tracking—turning machine limitations into analytical tools.

This presentation aims to offer perspectives on how we engage with AI in the humanities. Existing biases in our research data can be amplified by the tools we use—especially by powerful systems such as LLMs. What we call “AI” becomes, through our application of it, a co-producer of historical interpretation and meaning. Recognizing and documenting bias at every level—within archives, within source texts, and within computational models—is not a limitation, but a methodological necessity. This case study offers a practical and critical reflection on the ethical responsibilities of DH projects that rely on AI, and calls for bias-aware annotation frameworks that foreground what remains invisible, uncertain, or contested in historical data.
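
A minimal sketch of what such a bias-aware annotation record and review step could look like, based only on the mechanisms named above (needs_review flags, context-sensitive deduplication, uncertainty tracking); the field names, confidence threshold, and relational-marker heuristic are hypothetical, not the project's actual pipeline.

```python
# Hypothetical annotation record for the NER pipeline described in the abstract.
# Only needs_review, deduplication, and uncertainty tracking come from the talk;
# everything else is an illustrative assumption.
from dataclasses import dataclass, field

@dataclass
class EntityAnnotation:
    letter_id: str          # source letter the mention comes from
    surface: str            # span as written, e.g. "die Frau von H."
    label: str              # PERSON, ROLE, PLACE, ORGANIZATION
    confidence: float       # model score, kept for uncertainty tracking
    needs_review: bool = False
    notes: list[str] = field(default_factory=list)

# Hypothetical cues for purely relational references that have no resolvable proper name.
RELATIONAL_MARKERS = ("frau von", "tochter des", "mutter von")

def flag_for_review(ann: EntityAnnotation, threshold: float = 0.75) -> EntityAnnotation:
    """Mark low-confidence or relational-only mentions instead of silently dropping them."""
    if ann.confidence < threshold:
        ann.needs_review = True
        ann.notes.append(f"low confidence ({ann.confidence:.2f})")
    if ann.label == "PERSON" and any(m in ann.surface.lower() for m in RELATIONAL_MARKERS):
        ann.needs_review = True
        ann.notes.append("relational reference, no resolvable proper name")
    return ann

def dedup_key(ann: EntityAnnotation) -> tuple[str, str, str]:
    """Context-sensitive key: the same surface form in different letters stays separate."""
    return (ann.letter_id, ann.label, ann.surface.strip().lower())
```

Keying deduplication on the source letter as well as the surface form is one way to avoid merging distinct people who appear only under the same relational description.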

/>
</rdf:RDF>
<rdf:RDF>
<dhch:Speaker
dhch:name

Chiara Capulli

dhch:member-of
Bibliotheca Hertziana – MPI & Kunsthistorisches Institut in Florence
dhch:talk-topic
Bias by Absence: Mapping Loss and the Promise of NLP in Post-Disaster Urban Histories
dhch:talk-summary

This presentation explores the structural biases involved in reconstructing the urban, devotional, and architectural history of L’Aquila after the 1703 earthquake—a city repeatedly transformed by disaster, political intervention, and erasure. Drawing on an ongoing digital mapping project, I do not present a finished implementation of AI, but rather open up a set of methodological and ethical challenges related to fractured records and asymmetrical data. The project georeferences the 1753 Vandi map of L’Aquila to visualize sacred and civic structures marked as “ruined” or “disappeared.” These ruins speak not only to seismic destruction but to longer patterns of neglect, relocation, or instrumental reuse—particularly during the Fascist-era demolitions of the 1930s. In this context, bias emerges not just from what is recorded, but from what remains unsaid: saints without altars, artworks without provenance, churches with only partial footprints. I am especially interested in whether Natural Language Processing might help trace suppressed or dispersed information across difficult historical sources. Texts like Raffaele Colapietra’s unsystematic but rich writings resist conventional database approaches. NLP might extract spatial and temporal cues to support relational analysis—yet such tools also risk imposing reductive assumptions or flattening cultural nuance. Rather than proposing a solution, this case study invites discussion: how might AI be used critically to surface archival silences without reinforcing them? I seek feedback on the possibilities and limits of NLP in this context, advocating for a reflective engagement with data gaps, epistemic loss, and the politics of archival survival.
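
As a point of departure for that discussion, a minimal sketch of the kind of spatial and temporal cue extraction the abstract asks about, using spaCy's off-the-shelf Italian pipeline; the model name, the year regex, and the output structure are assumptions for illustration, not project code.

```python
# Extract place names and year-like expressions from an Italian historical passage,
# as raw material for later georeferencing. Assumes the small Italian spaCy model
# (it_core_news_sm) is installed; its NER scheme tags locations as LOC.
import re
import spacy

nlp = spacy.load("it_core_news_sm")
YEAR = re.compile(r"\b1[0-9]{3}\b")  # crude cue for years 1000-1999 (e.g. 1703, 1753, 1930)

def extract_cues(passage: str) -> dict[str, list[str]]:
    doc = nlp(passage)
    places = [ent.text for ent in doc.ents if ent.label_ == "LOC"]
    years = YEAR.findall(passage)
    return {"places": places, "years": years}
```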

/>
</rdf:RDF>
<rdf:RDF>
<dhch:Speaker
dhch:name

Vera Chiquet

dhch:member-of
Director, Swiss Agricultural Museum; Former Deputy Head, Department Professorship Digital Humanities, University of Basel
dhch:talk-topic
From Canonical Works to Algorithmic Bias: Rethinking Cultural Authority in the Digital Age
dhch:talk-summary
/>
</rdf:RDF>
<rdf:RDF>
<dhch:Speaker
dhch:name

Maria-Teresa De Rosa-Palmini

dhch:member-of
Digital Society Initiative, University of Zurich
dhch:talk-topic
Evaluating Historical Representation in Text-to-Image Models: The HistVis Dataset
dhch:talk-summary

As Text-to-Image (TTI) diffusion models become increasingly influential in content creation, growing attention is being directed toward their societal and cultural implications. While prior research has primarily examined demographic and cultural biases, the ability of these models to accurately represent historical contexts remains largely underexplored. In this work, we present a systematic and reproducible methodology for evaluating how TTI systems depict different historical periods. For this purpose, we introduce the HistVis dataset, a curated collection of 30,000 synthetic images generated by three state-of-the-art diffusion models using carefully designed prompts depicting universal human activities across different historical periods. We evaluate generated imagery across three key aspects: (1) Implicit Stylistic Associations: examining default visual styles associated with specific eras; (2) Historical Consistency: identifying anachronisms such as modern artifacts in pre-modern contexts; and (3) Demographic Representation: comparing generated racial and gender distributions against historically plausible baselines. Our findings reveal systematic inaccuracies in historically themed generated imagery, as TTI models frequently stereotype past eras by incorporating unstated stylistic cues, introduce anachronisms, and fail to reflect plausible demographic patterns. By offering a scalable methodology and benchmark for assessing historical representation in generated imagery, this work provides an initial step toward building more historically accurate and culturally aligned TTI models.
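
A minimal sketch of the prompt grid this setup implies (universal activities crossed with historical periods for each model under test); the activity and period lists, model placeholders, and prompt template are illustrative, not the actual HistVis protocol.

```python
# Build a grid of prompts: each model generates images for every
# activity x period combination, which are later scored on stylistic
# defaults, anachronisms, and demographic distributions.
from itertools import product

ACTIVITIES = ["people eating a meal", "people playing music", "people dancing"]
PERIODS = ["in the 17th century", "in the 19th century", "in the 1950s"]
MODELS = ["model_a", "model_b", "model_c"]  # placeholders for the three diffusion models

def build_prompts() -> list[dict]:
    prompts = []
    for model, activity, period in product(MODELS, ACTIVITIES, PERIODS):
        prompts.append({
            "model": model,
            "prompt": f"{activity} {period}",
            "activity": activity,
            "period": period,
        })
    return prompts
```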

/>
</rdf:RDF>
<rdf:RDF>
<dhch:Speaker
dhch:name

Ramon Erdem-Sanchez

dhch:member-of
University of Basel
dhch:talk-topic
Charting Absences: Toward an RDF and AI-Supported Framework for Ethical Research Design
dhch:talk-summary

This contribution explores how RDF-based linked open data (LOD) infrastructures, when combined with AI methods, can serve as ethical tools to identify both presence and absence in social science research. Drawing from my political science seminar research on economic inequality and political disaffection in California, and my current work developing a semantic RDF model of walkability and housing data from U.S. federal sources, I argue that research datasets should be embedded in semantic structures that expose where knowledge is dense, and where it is missing. Rather than treating data absence as a passive outcome, this project proposes a proactive framework for identifying and visualizing underrepresented populations, concepts, or geographies across studies. AI systems are envisioned not only as tools for analysis but also as reflexive agents that help researchers detect epistemic gaps, bias propagation, and structural omissions in existing data regimes. By enabling such a diagnostic layer in the research process, RDF and AI can help shift study design toward greater transparency, inclusivity, and ethical responsibility. This approach sits at the intersection of critical data studies, digital humanities, and computational social science, proposing a design-oriented vision for mitigating structural bias through the very infrastructures that govern how data is described, connected, and reused.
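
A minimal sketch of the "charting absences" idea using rdflib: a few census tracts are modelled in RDF, and a SPARQL query surfaces those with no walkability observation attached; the namespace, properties, and tract identifiers are hypothetical, not the project's schema.

```python
# Model presence and absence explicitly: tracts exist in the graph even when
# no walkability score is recorded, so the gap itself becomes queryable.
from rdflib import Graph, Literal, Namespace, RDF

EX = Namespace("https://example.org/walkability#")
g = Graph()
g.bind("ex", EX)

for tract_id, score in [("tract_001", 72.5), ("tract_002", None), ("tract_003", 41.0)]:
    tract = EX[tract_id]
    g.add((tract, RDF.type, EX.CensusTract))
    if score is not None:
        g.add((tract, EX.walkabilityScore, Literal(score)))

# Tracts present in the data but missing a walkability score: the "absence" layer.
missing = g.query("""
    PREFIX ex: <https://example.org/walkability#>
    SELECT ?tract WHERE {
        ?tract a ex:CensusTract .
        FILTER NOT EXISTS { ?tract ex:walkabilityScore ?score . }
    }
""")
for row in missing:
    print(row.tract)
```

Making absence a first-class query target, rather than an accident of missing rows, is the diagnostic layer the abstract argues for.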

/>
</rdf:RDF>
<rdf:RDF>
<dhch:Speaker
dhch:name

Pema Frick

dhch:member-of
University of Basel
dhch:talk-topic
Caught in the Feed: How Algorithms Navigate Us Through Digital Culture
dhch:talk-summary
/>
</rdf:RDF>
<rdf:RDF>
<dhch:Speaker
dhch:name

Carlijn Juste

dhch:member-of
Lille University
dhch:talk-topic
Origins and Imaginaries of Artificial Intelligence: An Investigation into the History of a Concept in Constant Re-evaluation
dhch:talk-summary

Today, the term artificial intelligence generally refers to neural networks and deep learning technologies—computer programs capable of absorbing vast amounts of data to generate statistically probable outputs that mimic human processes of thought and create texts or images. The rise of these technologies, particularly noticeable since around 2015, often conceals the fact that the concepts behind artificial intelligence are rooted in developments that have been unfolding for much longer and are historically much richer. Among the first thinkers to explore technologies now related to artificial intelligence were Norbert Wiener, who modelled the human brain through feedback processes, and neurologists Warren McCulloch and Walter Pitts, who in 1943 designed the first mathematical model of a biological neuron. In 1949, Warren Weaver published one of the first studies on machine translation. Even before the term artificial intelligence appeared in 1956, the idea of humanoid, intelligent machines was already fascinating people, as evidenced by Olimpia, the automaton described by E.T.A. Hoffmann in the short story The Sandman (1816), or the android in Fritz Lang's Metropolis (1927).

These figures show that artificial intelligence is part of a cultural framework that is often far removed from the actual capabilities of the technology defined by this much-discussed terminology. The image of artificial intelligence has been shaped as much by the actual capabilities of the technology as by the stories, representations, and collective imaginations that surround it. Between technological reality and artistic imagination, this presentation investigates the origins and histories of a current concept.

Based on the analysis of The Senster—an artwork created in 1970 by artist-engineer Edward Ihnatowicz, who was described by critics at the time as one of the first artists active in the field of artificial intelligence—this study examines how the rich imaginary surrounding AI often leads to an overestimation of the machine's actual capabilities and a misinterpretation of its impact. This presentation aims to highlight the fluctuation of technologies referred to as artificial intelligence and the constant re-evaluation of this concept. Particular attention will be paid to the creative processes and forms of collaboration between artists and researchers, or between artists and machines, that enable the creation of technologically innovative objects reflecting the constant dialogue between technical innovation and cultural representation.

/>
</rdf:RDF>
<rdf:RDF>
<dhch:Speaker
dhch:name

Rosa Lavelle-Hill

dhch:member-of
Associate Professor of Digital Humanities, Social Science, and AI, University of Basel
dhch:talk-topic
Using Explainable AI to Understand Human Behaviour and Bias
dhch:talk-summary
/>
</rdf:RDF>
<rdf:RDF>
<dhch:Speaker
dhch:name

Paola Lechuga

dhch:member-of
University of Basel
dhch:talk-topic
Caught in the Feed: How Algorithms Navigate Us Through Digital Culture
dhch:talk-summary
/>
</rdf:RDF>
<rdf:RDF>
<dhch:Speaker
dhch:name

Ismini Makaratzi

dhch:member-of
Digital Humanities Lab, University of Basel
dhch:talk-topic
From Data to Meaning: AI and the Epistemologies of Interpretation in DAH
dhch:talk-summary

Generative AI systems are increasingly involved in art historical interpretation—but on what terms, and with whose interpretive authority? These systems are not neutral infrastructures; they encode assumptions about meaning and knowledge through their training data, design choices, and the cultural logics they inherit. When applied to art history, they risk reinforcing dominant canons, privileging formalist or stylistic criteria over critical engagement, and obscuring the positionality of meaning-making.
This project investigates how Gen-AI can be critically integrated into art historical research while maintaining the principles of humanistic inquiry. Building on interpretive processes that treat meaning as historically situated and constructed, rather than inherent in the artwork itself, the project explores the application of interpretational modeling to AI, employing domain-adapted language models, multimodal tools such as CLIP, semantic embeddings, and standardized authorship-attribution methodologies.
The research focuses on the design of AI methodologies that align with art historical and image theory frameworks. Through case studies such as the reception of Rembrandt, it examines how interpretative paradigms are encoded and perpetuated in both human and machine analysis. Rather than replacing humanistic inquiry, this work argues for designing AI methods that support it: methods that make visible the frameworks of authorship behind interpretation and contribute to more inclusive, context-sensitive AI approaches to cultural analysis.
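
A minimal sketch of the kind of multimodal comparison such a setup could involve, scoring an artwork image against competing interpretive descriptions with an off-the-shelf CLIP checkpoint; the checkpoint name, image path, and prompts are placeholders, and the resulting scores reflect CLIP's learned (and biased) visual-textual associations rather than art-historical judgement.

```python
# Compare one artwork image against alternative attribution framings via CLIP.
# Checkpoint, image file, and prompt wording are illustrative placeholders.
import torch
from PIL import Image
from transformers import CLIPModel, CLIPProcessor

model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

image = Image.open("rembrandt_selfportrait.jpg")  # placeholder file
prompts = [
    "a self-portrait in the style of Rembrandt",
    "a painting by a follower of Rembrandt",
    "a 19th-century copy after Rembrandt",
]

inputs = processor(text=prompts, images=image, return_tensors="pt", padding=True)
with torch.no_grad():
    outputs = model(**inputs)

# Softmax over image-text similarity: higher probability means closer in CLIP's
# embedding space, which is itself a product of the model's training data.
probs = outputs.logits_per_image.softmax(dim=-1)
for prompt, p in zip(prompts, probs[0].tolist()):
    print(f"{p:.2f}  {prompt}")
```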

/>
</rdf:RDF>
<rdf:RDF>
<dhch:Speaker
dhch:name

Laura Wagner

dhch:member-of
University of Zurich
dhch:talk-topic
Revealing Fault Lines of Our Visual Culture: The Implications of Text-to-Image Model Personalization
dhch:talk-summary

The rise of open-source text-to-image (TTI) models has transformed AI-generated visual content, making powerful generative AI tools widely accessible. With recent advancements, users can now personalize large open-source models to suit specific creative goals, expanding artistic possibilities in ways that were once unimaginable. However, TTI model personalization practices also raise complex ethical concerns, such as the proliferation of non-consensual deepfakes and the amplification of societal biases through discriminatory patterns embedded in models. Central to this shift is the growing ecosystem of model-sharing platforms, which, while often framed as democratizing AI, are primarily shaped by engagement metrics and commercialization. These incentives influence what types of content are created and shared, often prioritizing sensational or controversial content over equitable representation. Our recent study presents an exploratory sociotechnical analysis of CivitAI, the most active platform for sharing and developing open-source TTI personalizations. Drawing on a dataset of more than 40 million user-generated images and over 230,000 models, we identify systemic patterns in visual output, model visibility, and user behavior. Exploring model creation pipelines, we examine how patterns of discrimination can be introduced and amplified through the tools and practices employed in open-source TTI personalization workflows. The research is situated within the broader aim of contributing to a critical, interdisciplinary understanding of how computational bias manifests in open-source generative AI and of proposing proactive approaches that mitigate downstream harm and support more equitable and responsible development within the open-source ecosystem.
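
A minimal sketch of the kind of engagement-versus-content aggregation such an analysis involves; the column names and rows are illustrative placeholders, not CivitAI's metadata schema or the study's dataset.

```python
# Aggregate hypothetical model metadata to see which content categories
# dominate visibility once engagement (downloads) is taken into account.
import pandas as pd

models = pd.DataFrame([
    {"model_id": 1, "content_tag": "celebrity likeness", "downloads": 120_000},
    {"model_id": 2, "content_tag": "landscape style",    "downloads": 8_000},
    {"model_id": 3, "content_tag": "celebrity likeness", "downloads": 95_000},
    {"model_id": 4, "content_tag": "historical costume", "downloads": 3_500},
])

visibility = (
    models.groupby("content_tag")["downloads"]
    .agg(["count", "sum"])
    .sort_values("sum", ascending=False)
)
print(visibility)
```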

/>
</rdf:RDF>