PRAGMATIC AGENTIC AI IN DIGITAL HUMANITIES

A guided tour through agentic tools for humanities scholars

The chatbot interface reminds me of what Brenda Laurel called “chocolate-covered broccoli” (Laurel): gamification rather than meaningful play. The chatbot reduces the friction between prompt and outcome and encourages reductive production, just as Math Blaster had no real connection between the math problems and the game mechanics.
Screenshot of Math Blaster, a 1980s edutainment game
Image: Math Blaster! (Davidson & Associates, 1983)
AI slop is something that “requires less effort to produce than it does to consume” (Furze). It’s possible to use AI in a way that is labor-intensive and requires expertise—but a chatbot won’t get you there, because the chatbot interface isn’t designed for sustained intentionality.
Microsoft Copilot terms of service stating it is for entertainment purposes only
Image: Microsoft Copilot Terms of Service
Agentic tools require expertise; they reward and extend it, working as extensions of ourselves (McLuhan) in ways fundamentally different from what a chatbot offers. In return, they demand significant context, knowledge input, guidance, and management.
Ethan Mollick post about tireless computer people completing tasks in 15 minutes
Image: More from Mollick, “Management as AI Superpower”
Karamanis draws a sharp distinction between an expert building a research project with Claude Code and a grad student using it as a shortcut—“the paper looks identical but the scientist doesn’t.” Agentic outputs reflect expertise; chatbot outputs reflect training data.
Lincoln Mullen's 'Behind, ahead' blog post about discovering agentic coding
Image: Mullen, “Behind, Ahead”
Mollick writes that how we use AI is being decided by risk-averse departments, which he sums up as “the IT department: where AI goes to die.” These are weird tools that cannot be easily regulated or defined, with serious potential to augment creativity and research.
Ethan Mollick post about technology deskilling and deliberate choices about which skills to keep
Image: More from Mollick, “The IT Department: Where AI Goes to Die”
Mr. Chatterbox is a language model trained from scratch on over 28,000 Victorian-era texts. Built using Claude Code, it is not a frontier model and can be brought into a classroom without data risks (Venturella). Wouldn’t we rather have students who are makers and creators than passive consumers of a chatbot’s output? (A toy sketch of what “from scratch” can mean follows the image.)
Mr. Chatterbox Victorian chatbot roleplay conversation
Image: Venturella, “Mr. Chatterbox”
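
To make “trained from scratch” concrete, here is a deliberately tiny illustration of the maker idea: a word-level Markov chain text generator built from a local corpus in plain Python. This is a minimal sketch, not Venturella’s method, and the corpus filename victorian_texts.txt is a hypothetical stand-in for whatever texts a class assembles.

```python
import random
from collections import defaultdict

def train(text: str) -> dict:
    """Map each word in the corpus to the words observed to follow it."""
    words = text.split()
    model = defaultdict(list)
    for current, following in zip(words, words[1:]):
        model[current].append(following)
    return model

def generate(model: dict, seed: str = "The", length: int = 30) -> str:
    """Walk the chain from a seed word, sampling each successor at random."""
    word, output = seed, [seed]
    for _ in range(length):
        followers = model.get(word)
        if not followers:
            break  # dead end: the word never appeared mid-sentence in the corpus
        word = random.choice(followers)
        output.append(word)
    return " ".join(output)

if __name__ == "__main__":
    # Hypothetical corpus file; any plain-text collection works.
    with open("victorian_texts.txt", encoding="utf-8") as f:
        corpus = f.read()
    print(generate(train(corpus)))
```

Even a toy like this makes the point: students can build and interrogate a generator whose every behavior traces back to texts they chose, with nothing leaving the machine.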
An agentic tool has three components: the LLM, the reasoning layer, and the harness. Crucially, the harness can work with local models running entirely on your desktop (Raschka). These harnesses and local models would still exist if the big AI companies shut down tomorrow. (A minimal sketch of such a harness follows the image.)
Ted Underwood post comparing AI disclosure to putting 'Web' after internet citations
Image: Ted Underwood, 2026
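
Here is a minimal sketch of that three-part architecture: a local model serving as the LLM, a loop doing the reasoning, and a thin harness exposing one tool. It assumes a local Ollama server at its default address; the model name, the word_count tool, and the TOOL: convention are illustrative inventions, not any particular product’s interface.

```python
import json
import urllib.request

OLLAMA_URL = "http://localhost:11434/api/generate"  # assumes a local Ollama server
MODEL = "llama3"  # any model you have installed locally

def ask_model(prompt: str) -> str:
    """Send a prompt to the local model and return its text reply."""
    payload = json.dumps({"model": MODEL, "prompt": prompt, "stream": False}).encode()
    request = urllib.request.Request(
        OLLAMA_URL, data=payload, headers={"Content-Type": "application/json"}
    )
    with urllib.request.urlopen(request) as response:
        return json.loads(response.read())["response"]

def word_count(path: str) -> str:
    """The one tool this toy harness exposes: count words in a local file."""
    with open(path, encoding="utf-8") as f:
        return str(len(f.read().split()))

def run_agent(task: str, max_steps: int = 5) -> str:
    """The harness: loop between the model and the tool until the task is done."""
    transcript = (
        "Answer the task directly, or reply exactly in the form\n"
        "TOOL: word_count <path>\n"
        "to have the file counted for you before you answer.\n"
        f"Task: {task}\n"
    )
    reply = ""
    for _ in range(max_steps):
        reply = ask_model(transcript).strip()
        if reply.startswith("TOOL: word_count"):
            result = word_count(reply.split()[-1])  # run the tool locally
            transcript += f"{reply}\nRESULT: {result}\n"  # feed the result back
        else:
            break  # the model answered without requesting the tool
    return reply

if __name__ == "__main__":
    print(run_agent("How many words are in novel.txt?"))
```

Nothing here depends on a frontier model or a vendor account: swap the URL and you swap the brain, while the harness, the loop, and the tool stay yours.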
“In this wider context, vibe coding and diligent analysis can coexist” (Cohen). If you’ve ever had projects you wanted to build but lacked the time or resources for, agentic tools are a way to unlock those side projects and finally build the things you’ve been dreaming about.
Map of Bankhead-Jones grasslands generated with Claude, demonstrating agentic research tools
Image: Heppler, “Vibing Digital History”