DH Access, Language, and Ethics
DH 500 - 2025-11-25
The Global Divide: DH500 on Access, Deepfakes, and the AI Responsibility Gap
Class 11 of DH500 delivered a powerful and, at times, unsettling look at the ethical minefields and systemic biases currently defining the Digital Humanities. From the literal decay of digital projects to the legal fiction of AI chatbots, this session underscored a crucial point: Digital is not synonymous with accessible or infallible.
The Great Barriers: Who Gets to Do DH?
The student presentations highlighted how DH, despite its promise of openness, is riddled with barriers:
Geographical and Linguistic Dominance: The field is structurally biased. Most active DH labs are concentrated in the Global North (North America, the UK, and Europe), and scholars from the Global South face immense challenges securing visas and funding simply to attend conferences. This is compounded by the field's monolingualism: English is the default setting, and tools routinely fail for most of the world's 7,000+ languages.
Anglo-Scientificity: This core concept, the implicit assumption that English is the baseline for scholarship, means that tools and models (such as OCR and NLP pipelines) often perform poorly on non-English languages, making DH training fundamentally exclusive.
Accessibility: Our own resources often fail basic accessibility standards (no alt text, cluttered pages, small interactive areas), excluding disabled people from participating fully. The solution requires strict adherence to WCAG guidelines and an international strategy that respects and funds non-English content creators.
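The missing-alt-text problem above is one of the few WCAG failures that is easy to check mechanically. The sketch below, using only Python's standard library, flags images with no alt attribute at all; it is an illustration, not a substitute for a full audit tool, and the sample page markup is invented for the example.

```python
# Hypothetical sketch: audit HTML for <img> tags missing alt text,
# one small piece of a WCAG check. A real audit would use a dedicated
# accessibility tool; this only illustrates the idea.
from html.parser import HTMLParser

class AltTextAuditor(HTMLParser):
    """Collect <img> tags that lack an alt attribute entirely."""
    def __init__(self):
        super().__init__()
        self.missing_alt = []

    def handle_starttag(self, tag, attrs):
        if tag == "img":
            attr_dict = dict(attrs)
            # Decorative images may legitimately use alt=""; here we flag
            # only images with no alt attribute at all.
            if "alt" not in attr_dict:
                self.missing_alt.append(attr_dict.get("src", "(no src)"))

page = '<img src="map.png"><img src="chart.png" alt="Bar chart of funding">'
auditor = AltTextAuditor()
auditor.feed(page)
print(auditor.missing_alt)  # images that need alt text added
```

Note that alt="" is valid for purely decorative images, which is why the check distinguishes an absent attribute from an empty one.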
Digital Entropy and the Deepfake Threat
The presentations also forced us to confront the fragility of the digital sphere:
The Problem of Project Decay: The average DH project becomes unusable within 5 to 10 years due to short-term funding, obsolete codebases, and a lack of preservation plans. We lose cultural and historical knowledge when a project dies without proper documentation of its legacy. The solution requires institutions to commit to ongoing funding and projects to adopt standardized, long-term preservation formats (such as XML).
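The appeal of XML as a preservation format is that it stays human-readable even after the original software dies. A minimal sketch, using Python's standard library; the element names here are invented for illustration, and a real project would follow an established schema such as TEI or METS.

```python
# Illustrative sketch: serializing minimal project metadata to plain XML,
# the kind of standardized format recommended for long-term preservation.
# Element names are invented; real projects would follow TEI, METS, etc.
import xml.etree.ElementTree as ET

def project_record(title: str, year: int, maintainer: str) -> str:
    root = ET.Element("project")
    ET.SubElement(root, "title").text = title
    ET.SubElement(root, "year").text = str(year)
    ET.SubElement(root, "maintainer").text = maintainer
    # Plain-text XML can be read decades later without special software.
    return ET.tostring(root, encoding="unicode")

print(project_record("Example DH Project", 2025, "Example Lab"))
```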
Deepfakes and Synthetic Reality: Deepfakes, AI-generated synthetic media, pose a direct threat to the integrity of DH archives because they create a reality that "feels more real than the authentic one." They are already being used for harassment (especially against women) and to undermine evidence-based medicine by fabricating data. As DH practitioners, we must prioritize source-critical habits: stay skeptical, verify sources, and never take digital artifacts at face value.
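One source-critical habit that archives can automate is fixity checking: recording a cryptographic hash when an item is accessioned and comparing it later. This does not detect a deepfake at creation time; it only shows whether an artifact has been silently altered since it entered the archive. A minimal sketch with Python's standard library (the sample bytes are placeholders):

```python
# Fixity checking: detect silent alteration of an archived artifact by
# comparing its hash against the value recorded at accession time.
# This cannot identify synthetic media, only post-archival tampering.
import hashlib

def sha256_of(data: bytes) -> str:
    return hashlib.sha256(data).hexdigest()

archived = b"placeholder bytes of an oral-history recording"
recorded_hash = sha256_of(archived)  # stored when the item enters the archive

later_copy = b"placeholder bytes of an oral-history recording"
print(sha256_of(later_copy) == recorded_hash)  # True only if unchanged
```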
The Air Canada Case and the Responsibility Gap
The session concluded with a sharp look at AI ethics and the rapid creation of “responsibility gaps.”
Instructor Geoffrey Rockwell discussed the Air Canada chatbot case, in which a customer sued after the airline's AI chatbot gave false information about a bereavement discount. Air Canada's "remarkable submission" was to argue that it was not responsible, effectively suggesting the chatbot was a "separate legal entity."
This legal maneuvering evokes the historical concept of the limited liability corporation, a legal fiction designed to limit investor liability. The rapid development of agential AI systems (systems that act on a human's behalf) creates massive responsibility gaps, leaving vulnerable people exposed when systems misbehave (as with the Australian welfare algorithm that miscalculated debts).
Geoffrey suggested that a feminist ethics of care, focused on relationships, power imbalances, and the responsibility to care for the vulnerable, is the most appropriate framework for assigning responsibility in the age of AI.
Final Administrative Notes:
We were reminded that the Short Paper (5 to 10 pages) should be tightly argued around a strong, even controversial, thesis. Students may use AI (such as ChatGPT) for literature reviews, but must disclose its use and verify every reference, as AI frequently fabricates bibliographic sources.