Applications & Implications
Where agentic AI is being deployed in education—and what can go wrong
Computational Applications
Agentic AI is already being deployed in educational technology. Here are key applications:
🎮 Personalized Learning Systems
What They Do:
AI agents that adapt curriculum, pacing, and content to individual students. They observe performance, identify knowledge gaps, and autonomously adjust difficulty levels and topics.
Example Scenario:
An AI math tutor notices Emma struggles with fractions but excels at word problems. It generates custom fraction problems embedded in narrative contexts, monitors her progress over weeks, and gradually increases difficulty. It also notices she's more engaged in the morning and schedules harder problems accordingly.
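The adjustment loop in this scenario can be made concrete. Below is a minimal sketch with invented names throughout (Student, update_mastery, next_problem): it estimates per-topic mastery from recent answers and targets the weakest topic at a slightly stretching difficulty. Production tutors use far richer student models, but the control loop has this shape.

```python
# A minimal sketch of the adaptive loop described above. All names are
# illustrative, not drawn from any real tutoring product.
from dataclasses import dataclass, field

@dataclass
class Student:
    # Per-topic mastery estimates in [0, 1], updated as answers arrive.
    mastery: dict[str, float] = field(default_factory=dict)

def update_mastery(student: Student, topic: str, correct: bool, rate: float = 0.3) -> None:
    """Exponential moving average: recent answers weigh more than old ones."""
    prior = student.mastery.get(topic, 0.5)
    student.mastery[topic] = (1 - rate) * prior + rate * (1.0 if correct else 0.0)

def next_problem(student: Student, topics: list[str]) -> tuple[str, float]:
    """Target the weakest topic, at a difficulty just above current mastery."""
    weakest = min(topics, key=lambda t: student.mastery.get(t, 0.5))
    difficulty = min(1.0, student.mastery.get(weakest, 0.5) + 0.1)
    return weakest, difficulty

emma = Student()
update_mastery(emma, "fractions", correct=False)
update_mastery(emma, "word_problems", correct=True)
print(next_problem(emma, ["fractions", "word_problems"]))  # -> ('fractions', ...)
```

Note that the "optimization trap" concern below is visible even here: the loop optimizes whatever `mastery` measures, and nothing else.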
The Promise:
- Every student learns at their own pace
- No child left behind or held back
- Instant feedback and infinite patience
The Concerns:
- Optimization trap: AI optimizes for measurable outcomes (test scores) rather than deep learning, curiosity, or creativity
- Learning to game the system: Students may learn to manipulate the agent rather than master content
- Loss of struggle: Productive difficulty is removed; students don't learn perseverance
- Surveillance: Constant monitoring of student behavior and engagement
🕹️ Adaptive Educational Games
What They Do:
Game-based learning systems with AI agents that create dynamic narratives, adjust challenges, and respond to player choices to teach skills or concepts.
Example Scenario:
A history game generates a unique simulation for each student. The AI agent creates characters, plot twists, and moral dilemmas based on the student's interests and prior choices. One student's Renaissance Florence is completely different from another's.
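Under the hood, this kind of branching can be as simple as scoring candidate story events against the player's interests and past choices. The sketch below is a toy version with invented event data and an invented scoring rule; real systems typically drive this with a generative model instead.

```python
# A toy sketch of choice-driven narrative branching. Events, tags, and
# the scoring heuristic are all invented for illustration.
def score(event: dict, interests: set[str], prior_choices: set[str]) -> int:
    """Prefer events matching player interests that don't repeat past beats."""
    overlap = len(interests & set(event["tags"]))
    repeat_penalty = len(prior_choices & set(event["tags"]))
    return overlap - repeat_penalty

events = [
    {"name": "Medici banking dilemma", "tags": ["politics", "money"]},
    {"name": "Workshop apprenticeship", "tags": ["art", "craft"]},
    {"name": "Plague quarantine debate", "tags": ["medicine", "politics"]},
]
interests, past = {"art", "politics"}, {"money"}
next_event = max(events, key=lambda e: score(e, interests, past))
print(next_event["name"])  # -> 'Workshop apprenticeship'
```

The same scoring hook is where "stickiness" enters: swap the objective from learning alignment to engagement, and the concerns below follow directly.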
The Promise:
- Deep engagement through personalized stories
- Safe space to explore consequences of decisions
- Intrinsic motivation through gameplay
The Concerns:
- Manipulation through engagement: AI optimizes for "stickiness" over learning
- Historical distortion: AI may generate plausible but inaccurate content
- Limited perspectives: AI's training data biases shape historical narratives
- Addiction patterns: Game mechanics can create unhealthy dependencies
👥 Virtual Tutoring Agents
What They Do:
AI agents that engage in extended dialogue with students, answer questions, provide explanations, and guide learning through conversation. They remember past interactions and maintain an ongoing relationship.
Example Scenario:
An AI writing tutor named "Professor Chen" interacts with students daily. It remembers their previous essays, their writing goals, and personal interests. It proactively checks in: "Last week you said you wanted to work on thesis statements. Want to try again today?" Students address it by name and confide in it.
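The continuity that makes "Professor Chen" feel like a relationship is mostly stored state replayed into each prompt. Here is a minimal sketch, with an invented TutorMemory structure and the model call deliberately omitted:

```python
# A minimal sketch of persistent tutor "memory": past sessions are
# stored and replayed into each new prompt. The storage scheme is
# illustrative; no real product's design is implied.
from dataclasses import dataclass, field

@dataclass
class TutorMemory:
    goals: list[str] = field(default_factory=list)
    session_notes: list[str] = field(default_factory=list)

    def build_prompt(self, student_message: str, max_notes: int = 3) -> str:
        recent = "\n".join(self.session_notes[-max_notes:])
        goal_list = "; ".join(self.goals) or "none recorded"
        return (
            f"Student goals: {goal_list}\n"
            f"Recent session notes:\n{recent}\n"
            f"Student says: {student_message}\n"
            "Respond as a supportive writing tutor."
        )

memory = TutorMemory(goals=["improve thesis statements"])
memory.session_notes.append("2024-05-01: struggled with topic sentences")
print(memory.build_prompt("Can we work on my essay intro?"))
```

Everything that makes the tutor feel attentive here, goals, struggles, confidences, is retained data, which is exactly the exploitation concern listed below.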
The Promise:
- 24/7 availability and unlimited patience
- Personalized attention for every student
- Socratic dialogue and guided discovery
The Concerns:
- Ersatz relationships: Students form attachments to non-human entities
- Quality gap: Wealthy students get sophisticated AI tutors; others get basic chatbots
- Lack of genuine understanding: AI can simulate empathy but doesn't truly care
- Data exploitation: Intimate conversations become training data
Research Applications
📚 Literature Analysis Agents
What they do: Analyze patterns across large corpora, identify themes, track character development, compare translations.
A student researching gender in Victorian novels tasks an AI agent with analyzing 500 texts for patterns in female character descriptions.
Risk: Over-reliance on computational analysis misses the nuance, context, and interpretive debate central to the humanities.
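A toy version of this corpus analysis is sketched below: a regex co-occurrence count over a hypothetical corpus/ folder of plain-text novels. Real projects would use an NLP library, but the basic pattern-counting move, and what it flattens, is the same.

```python
# A crude sketch of corpus pattern analysis: counting words that
# co-occur with character names across plain-text files. Paths and
# name lists are hypothetical.
import re
from collections import Counter
from pathlib import Path

NAMES = {"dorothea", "jane", "maggie"}   # illustrative character names
WINDOW = 5                               # words of context on each side

cooccur = Counter()
for path in Path("corpus/").glob("*.txt"):           # hypothetical layout
    words = re.findall(r"[a-z']+", path.read_text(encoding="utf-8").lower())
    for i, w in enumerate(words):
        if w in NAMES:
            for ctx in words[max(0, i - WINDOW): i + WINDOW + 1]:
                if ctx not in NAMES:
                    cooccur[ctx] += 1

print(cooccur.most_common(20))  # frequent words near female characters
```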
🏛️ Historical Research Assistants
What they do: Search archives, synthesize primary sources, identify connections across documents, generate timelines.
An AI agent autonomously searches digitized archives for mentions of women's labor movements, cross-references newspapers and letters, generates a preliminary bibliography.
Risk: The agent privileges digitized, English-language sources, misses context that requires human expertise, and introduces citation errors.
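The search-and-bibliography step might look like the sketch below, run over a local folder of transcriptions rather than a live archive API. The folder layout and the convention that a file's first line holds its citation are assumptions for illustration.

```python
# A minimal sketch of archive search over a hypothetical local folder
# of digitized transcriptions.
from pathlib import Path

QUERY_TERMS = ("women's labor", "seamstresses' union")  # illustrative

bibliography = []
for doc in sorted(Path("archive/").glob("*.txt")):      # hypothetical corpus
    text = doc.read_text(encoding="utf-8")
    lines = text.splitlines()
    if lines and any(term in text.lower() for term in QUERY_TERMS):
        # Assume the first line of each transcription holds its citation.
        bibliography.append(lines[0])

print("\n".join(bibliography))   # preliminary; needs human verification
```

Even this toy version shows the risk named above: anything not digitized into that folder, or phrased differently, simply does not exist for the agent.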
🗣️ Qualitative Data Analysis
What they do: Code interview transcripts, identify themes, compare across participants, suggest patterns.
A sociology student has 50 interviews about teacher experiences. An AI agent codes responses, identifies recurring themes, and highlights contradictions.
Risk: Reductive coding flattens lived experience, imposes researcher bias through prompts, and renders its interpretive choices invisible.
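A deliberately reductive sketch makes that risk concrete: a fixed keyword codebook (invented here) applied to transcripts. Every interpretive choice is hidden inside the trigger phrases; LLM-based coders bury the same choices inside a prompt instead.

```python
# A deliberately reductive sketch of automated qualitative coding.
# Codes and trigger phrases are invented for illustration.
from collections import Counter

CODEBOOK = {  # hypothetical codes -> trigger phrases
    "burnout": ["exhausted", "burned out", "no energy"],
    "autonomy": ["my own classroom", "freedom", "trusted me"],
    "pay": ["salary", "paycheck", "second job"],
}

def code_transcript(text: str) -> Counter:
    """Mark each code at most once per transcript (presence-based coding)."""
    text = text.lower()
    return Counter(
        code for code, phrases in CODEBOOK.items()
        if any(p in text for p in phrases)
    )

interview = "I was exhausted every day, working a second job just to cope."
print(code_transcript(interview))  # Counter({'burnout': 1, 'pay': 1})
```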
⚠️ The Misinformation Crisis
Perhaps the most urgent problem: agentic AI as a source of plausible misinformation.
Fabricated Citations
AI agents confidently generate non-existent sources that look legitimate:
"According to Smith (2019) in the Journal of Historical Studies, the treaty was signed on March 15..."
Problem: No such article exists. Students submit papers with fake references. Educators can't check everything.
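One practical countermeasure is to check whether a citation resolves to a real record. The sketch below queries the public Crossref REST API; the endpoint and query parameters are real, but treat the exact response fields as something to verify against current Crossref documentation.

```python
# Check a suspect citation against the public Crossref REST API.
import requests

def crossref_lookup(citation: str, rows: int = 3) -> list[dict]:
    resp = requests.get(
        "https://api.crossref.org/works",
        params={"query.bibliographic": citation, "rows": rows},
        timeout=10,
    )
    resp.raise_for_status()
    return resp.json()["message"]["items"]

suspect = "Smith (2019), Journal of Historical Studies, treaty signed March 15"
for item in crossref_lookup(suspect):
    print(item.get("DOI"), (item.get("title") or ["(no title)"])[0])
```

No close match is a red flag, though absence from Crossref alone does not prove a source is fake, and many legitimate humanities sources are not indexed there.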
Historical Distortion
AI generates plausible-sounding but inaccurate historical narratives:
"During the Renaissance, women were commonly admitted to universities across Italy, where they studied alongside men..."
Problem: This is false, but sounds reasonable. Students unknowingly absorb misinformation, and unlearning a false belief is harder than learning the truth the first time.
Deepfake Primary Sources
AI agents can generate "primary sources" that never existed:
"A newly discovered letter from Frederick Douglass to Harriet Tubman reveals..."
Problem: No such letter exists. As AI-generated text improves, distinguishing real from fake becomes nearly impossible without expertise.
Bias Amplification
AI agents trained on biased data perpetuate and amplify stereotypes:
A student asks an AI agent about women philosophers. It focuses primarily on contemporary figures and consistently fails to mention medieval and non-Western women thinkers, reflecting gaps in training data.
Problem: Students receive skewed representation of knowledge, reinforcing existing biases in what counts as "important" scholarship.
Why This Matters in Humanities Education:
Unlike in STEM fields, where errors may be caught by experiment or failing code, claims in the humanities are harder to verify. A plausible-sounding false statement about history, literature, or philosophy can persist unchallenged. When students use AI agents for research, they may lack the expertise to distinguish accurate synthesis from convincing fabrication.
Critical thinking skills—the core of humanities education—are undermined when students outsource research to agents they trust but cannot verify.
The Double-Edged Promise
Agentic AI in education offers genuine benefits: personalization, accessibility, efficiency. But these benefits come with risks that are especially acute in humanities education, where:
- Knowledge is interpretive and contested, not factual and settled
- Learning involves struggle, ambiguity, and revision of understanding
- Critical thinking requires source evaluation, not accepting authoritative claims
- Relationships between learners and teachers matter deeply
- Equity of access affects opportunity and voice in society