$ cat mla2026_session82.txt

MLA 2026 Session 82 - Open Source Humanities in the Age of AI

Date: Thursday, 8 January 2026
Time: 3:30 PM - 4:45 PM
Location: MTCC - 206F

// Description

With edtech approaches to generative AI increasingly taking center stage on campus, how can we learn from ethical, feminist, and open-source efforts in digital humanities and electronic literature to build meaningful alternatives? Participants share brief provocations with examples ranging from material zines and feminist poetics to software platforms and sustainable publishing, followed by a discussion of strategies and tools.

// Speakers

Anastasia Salter (Presider)
U of Central Florida
Kavi Duvvoori
U of Waterloo
Lai-Tze Fan
U of Waterloo
Leonardo Flores
Appalachian State U
John Murray
U of Central Florida
Kiera Obbard
U of Guelph
Lee Skallerup Bessette
Georgetown U

// Abstracts and Bios

Learning Mycelial Strategies in Algorithmic Culture from Voz Pública
Kavi Duvvoori

Dora Bartilotti's Voz Pública weaves relational and affective networks through web-based storytelling and community workshops, linking narratives in both JavaScript algorithms and woven textiles to amplify the voices of survivors of sexual violence in urban Latin America (LASTESIS 2023; D'Ignazio 2024). This project exemplifies how digital networks can enable socially just world-building. Using mycelial networks (fungal threads in soil) as a guide, we reimagine digital communication systems as spaces for cultivating diverse, sustainable, and interconnected forms of community and expression amidst the challenges of the algorithmic age.

Rather than accepting AI as it is sold, as an inevitable replacement of human creativity, labor, and sociality, we argue for using mycelial thinking (Tsing 2015) and glitch feminism (Russell 2020) to design systems that support cultural plurality, ecological repair, and social control (Tandon et al. 2022). Such design confronts the power dynamics shaped by fantasies of domination, dehumanization, and replacement prevalent in tech bro cultures and AI imaginaries (Rhee 2023), fantasies that often displace consideration of the material infrastructures and social impacts of "AI" (Crawford 2021). We argue that interconnectedness, remediation, and detoxification (Brown 2017, drawing on Boggs 2012) are vital tactics for creating habitable computational contexts. We offer interconnectedness to ask: could linguistic algorithms play a role in activating and supporting social mobilizations if algorithms are composed as an entangled component rather than a techno-fix opposed to sociality? We offer remediation to ask: can computational communicative contexts be made habitable? And, last, we offer detoxification to ask: can the homogenization of language be resisted by using these systems against their intent?

Works Cited
Bartilotti, Dora. n.d. Voz Pública. https://vozpublica.net/.
Bender, Emily M., et al. 2021. "On the Dangers of Stochastic Parrots: Can Language Models Be Too Big? 🦜" Proceedings of the 2021 ACM Conference on Fairness, Accountability, and Transparency.
Boggs, Grace Lee, and Scott Kurashige. 2012. The Next American Revolution: Sustainable Activism for the Twenty-First Century. University of California Press.
Brown, Adrienne Maree. 2017. Emergent Strategy: Shaping Change, Changing Worlds. AK Press.
Chun, Wendy Hui Kyong. 2021. Discriminating Data: Correlation, Neighborhoods, and the New Politics of Recognition. MIT Press.
Crawford, Kate. 2021. The Atlas of AI: Power, Politics, and the Planetary Costs of Artificial Intelligence. Yale University Press.
D'Ignazio, Catherine. 2024. Counting Feminicide: Data Feminism in Action. MIT Press.
LASTESIS. 2023. Set Fear on Fire: The Feminist Call That Set the Americas Ablaze. Verso.
Russell, Legacy. 2020. Glitch Feminism: A Manifesto. Verso Books.
Tandon, Udayan, Vera Khovanskaya, Enrique Arcilla, Mikaiil Haji Hussein, Peter Zschiesche, and Lilly Irani. 2022. "Hostile Ecologies: Navigating the Barriers to Community-Led Innovation." Proceedings of the ACM on Human-Computer Interaction 6 (CSCW2): Article 443.
Tsing, Anna Lowenhaupt. 2015. The Mushroom at the End of the World: On the Possibility of Life in Capitalist Ruins. Princeton University Press.
BIO:
Kavi Duvvoori is a PhD candidate in English at the University of Waterloo, on the Haldimand Tract. Their research on computational rhetoric explores how algorithms mediate communication, drawing on linguistics, research-creation, and feminist, queer, and ecological media studies. They are an editor for Taper magazine and have shared computational writing in various publications.

Human-Centered AI Literacy: Open Humanities Approaches for the Classroom
Lai-Tze Fan

This talk presents a human-centered, social sciences and humanities (SSH) approach to improving AI literacy among youth and non-specialists, grounded in a series of participatory workshops I have delivered to over 250 middle-school students and adapted into a new research program on AI-synthetic image literacy. While public discourse around AI education tends to focus on technical skills or computational proficiency, these workshops demonstrate how SSH-driven methods—critical media literacy, visual rhetoric, bias awareness, and participatory discussion—can significantly enhance young people's confidence, discernment, and trust when engaging with AI-generated content. Activities include simplified versions of technical image-auditing methods, hands-on "spot-the-fake" practices, guided ethical reflection, and collaborative meaning-making that foreground students' lived experiences online.
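
As a minimal sketch of what one simplified image-auditing exercise could look like in code (an illustrative possibility, not the workshop's own materials; the filename is a placeholder), students might inspect an image's embedded metadata with Pillow: camera photographs usually carry EXIF fields such as the device model, while many AI-generated or heavily re-processed images carry none.

    # Hypothetical classroom exercise: check whether an image file carries
    # camera EXIF metadata (its absence is a clue, not proof, of synthesis).
    # Requires: pip install Pillow
    from PIL import Image
    from PIL.ExifTags import TAGS

    def audit_metadata(path: str) -> dict:
        """Return any readable EXIF tags found in the image at `path`."""
        img = Image.open(path)
        exif = img.getexif()
        return {TAGS.get(tag_id, tag_id): value for tag_id, value in exif.items()}

    tags = audit_metadata("sample.jpg")  # "sample.jpg" is a placeholder filename
    if tags:
        print("Embedded metadata found:", tags)
    else:
        print("No EXIF metadata found: worth a closer look, though not proof of AI generation.")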

Although designed for middle and high school students, this work also speaks directly to a growing challenge within higher education: the rapid expansion of AI-driven surveillance tools in learning management systems and academic integrity platforms. These systems, often proprietary and opaque, are raising urgent questions about privacy, consent, fairness, and institutional trust. I argue that digital humanities and electronic literature scholars—long attuned to questions of authorship, mediation, and power—are uniquely positioned to lead critical, open, and participatory alternatives.

Talking through the design of the youth workshops, I illustrate how SSH-grounded AI literacy practices can inform university-level responses to surveillance-driven edtech, including by emphasizing transparency, interpretability, open-source thinking, and public-facing literacy as counterweights to opacity and control. In these ways, we can use open humanities frameworks to shape more ethical pathways for AI in educational spaces.

BIO:
Lai-Tze Fan is the Canada Research Chair in Technology and Social Change and Associate Professor at the University of Waterloo, Canada, and Associate Professor II at the University of Bergen, Norway. She directs The U&AI Lab at UWaterloo, which uses interdisciplinary approaches toward the design of responsible AI, and she is the Co-Director of Waterloo's TRuST Scholarly Network with Nobel Laureate Donna Strickland, focusing on public trust in science and technology. Fan is an Editor of the open-access journals electronic book review and the digital review, and she serves on the Board of the Electronic Literature Organization. She is Co-Editor of the forthcoming open-access collection EnTwine: A Critical and Creative Companion to Teaching with Twine (2026) from Amherst College Press.

Communities and Open Weights: The Ends of Software and "Open Source"
John Murray

The advances in AI agents have been as profound as they have been rapid. Three years after the introduction of ChatGPT, we have seen practitioners adapt their workflows to the new tools, and to the new dangers, that LLMs and generative AI present. In this talk, I argue that the historic open source software community has a major opportunity to transform itself, given access to open-weight models and the rising value of technique and knowledge over software and source code.

The most profound implication of recent capabilities is that local models can largely replace existing proprietary software. Instead of embedding knowledge and workflows in UIs and programs developed by teams over the course of years, local agents can write and execute code on behalf of users without a traditional UI. This is made possible by recent advances from large companies such as OpenAI and Anthropic. Learning to work with frontier models—even proprietary ones—builds transferable skills that apply equally to open-weight alternatives. This matters even more as local-friendly models are released, such as IBM's Granite 4.0 Nano and versions of OpenAI's gpt-oss. We can avoid the trap of handing our data to ed-tech vendors when every instructor is empowered to build local equivalents of their high-cost offerings, without the attendant risks, since everything executes locally.

The idea of LLMs taking actions in an environment is not necessarily new, but with every major LLM provider introducing models that are better trained to take actions, this capability casts a completely different light on traditional economic models of software, especially in academia. The digital humanities have relied on grant funding to support large, monolithic software projects that must be maintained and that sometimes cannot be sustained after funding dries up. I propose that these newer models mark a shift toward researcher-users of LLM agents who share techniques and knowledge instead of source code, and that the old barriers of cost and expertise no longer restrict the digital humanities as they once did. We can finally stop building platforms and start doing the work.
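
As a minimal sketch of the kind of local workflow described above (assuming a machine with the model weights already downloaded; the model identifier and prompt below are illustrative assumptions, and a smaller local-friendly model such as one of IBM's Granite 4.0 Nano releases could be substituted), an instructor might run an open-weight model entirely on their own hardware with the Hugging Face transformers library, keeping student text off third-party servers:

    # Illustrative local text-generation call; no data leaves the machine.
    # Requires: pip install transformers torch (and a locally cached model).
    from transformers import pipeline

    # The model name below is an assumption for illustration; swap in any
    # open-weight model that fits your hardware.
    generator = pipeline("text-generation", model="openai/gpt-oss-20b")

    prompt = ("Draft a short Python function that counts word frequencies "
              "in a plain-text file and prints the ten most common words.")
    result = generator(prompt, max_new_tokens=300)
    print(result[0]["generated_text"])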

BIO:
John T. Murray is an Associate Professor of Games and Interactive Media and core faculty in the Texts and Technology Ph.D. program at the University of Central Florida. He is co-author of Adventure Games: Playing the Outsider (Bloomsbury 2020) and Flash: Building the Interactive Web (MIT Press 2014). His current research focuses on Virtual Reality Interactive Narratives and democratizing creative tools through natural language interfaces.

Visualizing Feminist Poetry with Open-Source Python Animation and AI systems
Kiera Obbard

As generative AI becomes increasingly centred in not only educational technology, but also contemporary explorations of art and technology, this paper explores ethical, feminist, and open-source approaches to AI-driven kinetic poetry, offering a meaningful alternative to proprietary, black-box AI systems.

Drawing from feminist and digital humanities methodologies that centre difference (Ahmed; McPherson), and employing creative analytic practices (Berbary), this project resists AI as an extractive force and instead embraces open-source tools as a method for technologically facilitated artistic expression. Using Manim, an open-source Python library for mathematical animations, and open-source natural language processing (NLP) tools like spaCy, I will map poetic structures to dynamic animations, where words appear, fade, and shift in response to AI-assigned rhythm scores. The result will be kinetic poetry visualizations that bring movement and life to the feminist poetry on the page. This phase of visualizations is the first in a longer project that seeks to explore and compare the use of non-generative AI and generative AI (GenAI) to visualize feminist poetry in ways that uphold the ethics and values of intersectional feminism.
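
As a rough sketch of how such a pipeline could be wired together (an illustrative stand-in, not the project's own code: the syllable-count heuristic below substitutes for the AI-assigned rhythm scores described above, and the sample line is a placeholder), spaCy can tokenize a line of verse and Manim can animate each word with timing scaled by its score:

    # Illustrative kinetic-poetry sketch: spaCy parses the line, a crude
    # syllable estimate stands in for an AI-assigned rhythm score, and Manim
    # fades each word in and out for a duration scaled by that score.
    # Requires: pip install manim spacy && python -m spacy download en_core_web_sm
    # Render with: manim -pql kinetic_sketch.py KineticLine
    import re

    import spacy
    from manim import FadeIn, FadeOut, Scene, Text

    nlp = spacy.load("en_core_web_sm")

    def rhythm_score(word: str) -> int:
        """Placeholder heuristic: estimate syllables by counting vowel groups."""
        return max(1, len(re.findall(r"[aeiouy]+", word.lower())))

    class KineticLine(Scene):
        poem_line = "a placeholder line of feminist poetry"  # sample text only

        def construct(self):
            for token in nlp(self.poem_line):
                if not token.is_alpha:
                    continue
                word = Text(token.text)
                duration = 0.4 * rhythm_score(token.text)  # linger on heavier words
                self.play(FadeIn(word), run_time=duration)
                self.play(FadeOut(word), run_time=duration / 2)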

In this panel, I will discuss the first phase of visualizing feminist poetry for poetic rhythm, rhyme, and emotional tone while prioritizing transparency, open-source tools, and creative agency. By employing open-source tools to create AI-powered kinetic poetry, this work envisions a future where GenAI is used not to replace creativity but to enhance participatory, ethical, and critical engagements with language in a way that fosters a "poetics of dissent" (Milne). Unlike mainstream AI-driven edtech that prioritizes efficiency and automation, this project foregrounds interpretation, embodiment, and play—critical components to feminist creative analytic practices.

BIO:
Kiera Obbard (she/her) is a poet and a SSHRC Postdoctoral Fellow at the University of Waterloo where she studies the feminization and denigration of women's writing and associated publishing technologies. Her current book project, The Instagram Effect (forthcoming WLU Press), critically evaluates the economic, technological, literary, social, and cultural risks of a globally popular literature being shared and read on third-party proprietary platforms. She was previously the Michael Ridley Postdoctoral Scholar in Digital Humanities at the University of Guelph.

What the FOSS? Lessons from the Electronic Literature Community on Creating Open Access Software
Leonardo Flores

In this presentation I argue that the electronic literature (elit) community offers a vital model for how the humanities can ethically navigate generative AI by embracing free and open-source software (FOSS) principles. Drawing on Montfort and Wardrip-Fruin's 2004 white paper Acid-Free Bits, I identify the practices they highlight for creating and sustaining creative work despite waves of technological obsolescence, such as using open-source platforms, documentation, code validation and porting, favoring plain text over compiled code, and more. Using the computational poetry journal Taper as a case study, the talk shows how publishing software-like literary works under libre licenses fosters community, remix culture, and creativity in ways that will outlast work created with proprietary, corporately developed software. Extending these lessons to the AI era, it proposes that adopting open-source methodologies and engaging with accessible models such as OLMo 2 can help humanists shift from passive consumption to transparent, collaborative, and community-driven knowledge production. The FOSS ethos thus equips humanities research not only to survive technological change but to help shape a more equitable and open digital ecosystem.

BIO:
Professor Leonardo Flores is Chair of the English Department at Appalachian State University and President of the Latin American Electronic Literature Network, Lit(e)Lat. His research areas are electronic literature, with a focus on e-poetry, digital writing, and the history and strategic growth of the field. He is known for I ♥ E-Poetry, the Electronic Literature Collection, Volume 3, "Third Generation Electronic Literature," and the Antología Lit(e)Lat, Volume 1. He was a member of the MLA-CCCC Joint Task Force on AI and Writing, is now part of the MLA Task Force on Generative AI Initiatives, and is available to offer talks and workshops on AI and its impact on education, policy, scholarship, and creativity. He is also a cyborg digital writer with a thriving creative coding practice. For more information on his current work, visit leonardoflores.net.

Make It a Zine: Resisting AI in the Writing Classroom
Lee Skallerup Bessette

Writing is a technology. Our ability to create symbols on surfaces required technological innovation, and our ability to circulate and then reproduce those symbols required further innovation. Within those cycles of innovation around the creation and circulation of texts, as the cost of producing text on paper came down, small, self-published works proliferated. The history can be traced at least as far back as pamphlets in the late 19th century, but the booklet format gained popularity in the 1920s during the Harlem Renaissance with a publication called Fire!! Early on, zines were also popular in science fiction communities. They have always been used to connect smaller, marginal communities, providing inspiration, resources, and an outlet for outrage and resistance.

Fast-forward to a writing classroom in the fall of 2024. I have long been an early adopter of technology in the classroom, but recently I have become more and more disenchanted with so-called "innovations." In an effort to help students understand what AI does when it generates text, and what they do when they write, we look at the history of writing technology as well as computing metaphors and tech hype. As an instructor, how could I ask students to create a final project that would be fodder for the AI machine and have limited (if any) impact on the world around them? Enter THE ZINE. Inspired and informed by the SlowDH, MakerDH, and MiniComp movements, and by subversive zines from times past, I wanted to inspire students to explore and engage critically with technology through analogue outputs, resisting AI and other extractive ed-tech tools. This talk examines the rhetorical and subversive elements of the zine, and the zine as a tool of resistance to both AI and our current political moment.

BIO:
Lee Skallerup Bessette is the Assistant Director of Digital Learning at the Center for New Designs in Learning and Scholarship at Georgetown University. She is the editor of Affective Labor and Alt-Ac Careers.
user@mla2026:~$