History & AI

Resources and strategies for integrating artificial intelligence into history education, research, and public engagement.

Ethics & Critical Perspectives

AI tools are powerful but not neutral. Historians have a professional responsibility to engage with these technologies critically. This page outlines key ethical considerations and ongoing debates.


Bias and Representation

Training Data Problems

Large language models are trained on internet-scale text corpora that reflect existing power structures. Consequences for historical work include:

What Historians Can Do


Accuracy and Hallucination

AI models generate text that is statistically plausible, not verified. “Hallucination”—the confident generation of false information—is a well-documented phenomenon. In historical contexts, this can include:

Mitigation Strategies

  1. Never cite AI output as a source. Use it as a starting point for research, not an endpoint.
  2. Verify every factual claim. Cross-reference with established secondary sources and, where possible, primary sources.
  3. Ask the AI to flag uncertainty. Prompts like “indicate where you are less confident” can help, though models may still present false information with high confidence.
  4. Use retrieval-augmented generation (RAG) when possible—tools that ground AI responses in specific, verified document collections.
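The retrieval step behind strategy 4 can be sketched in a few lines. This is a hedged, minimal illustration, not a production RAG system: the corpus excerpts are invented, and token overlap stands in for the embedding search a real tool would use. The point is the shape of the technique — rank verified passages against the question, then instruct the model to answer only from those passages.

```python
# Minimal sketch of RAG-style grounding. All corpus text below is
# invented for illustration; a real system would use embeddings and
# a vector store rather than token overlap.

def tokenize(text):
    return set(text.lower().split())

def retrieve(query, corpus, k=2):
    """Rank verified passages by word overlap with the query."""
    q = tokenize(query)
    return sorted(corpus, key=lambda p: len(q & tokenize(p)), reverse=True)[:k]

def grounded_prompt(query, corpus):
    """Build a prompt that confines the model to the retrieved passages."""
    context = "\n".join(f"- {p}" for p in retrieve(query, corpus))
    return (
        "Answer using ONLY the sources below. If they do not contain "
        "the answer, say 'not found in the provided sources.'\n"
        f"Sources:\n{context}\n\nQuestion: {query}"
    )

corpus = [
    "The 1921 town census lists 4,312 residents.",
    "The mill closed in 1932 after the flood.",
    "Local schools were consolidated in 1948.",
]
print(grounded_prompt("When did the mill close?", corpus))
```

The explicit fallback instruction ("not found in the provided sources") matters as much as the retrieval: it gives the model a sanctioned way to decline rather than hallucinate.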

Citation and Attribution

Should You Cite AI?

Professional norms are still emerging. Key considerations:

In the Classroom

Be explicit with students about your expectations for AI citation. See Teaching with AI for sample syllabus policies.


Data Privacy and Archival Ethics

Uploading Sources to AI Services

When you paste text into a commercial AI chatbot, consider:

Best Practices
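One practical safeguard worth sketching is redacting obvious personal identifiers before pasting material into a commercial service. The snippet below is a rough illustration, not a complete solution: the sample text is invented, the regexes catch only common patterns (email addresses, US-style phone numbers), and genuinely sensitive archival material still requires human review before upload.

```python
import re

# Hedged sketch: scrub common identifiers from text before sending it
# to a third-party AI service. Pattern-matching is a first pass only;
# it will miss names, addresses, and context-dependent identifiers.

PATTERNS = {
    "EMAIL": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "PHONE": re.compile(r"\b\d{3}[-.\s]\d{3}[-.\s]\d{4}\b"),
}

def redact(text):
    """Replace each matched identifier with a bracketed label."""
    for label, pattern in PATTERNS.items():
        text = pattern.sub(f"[{label}]", text)
    return text

sample = "Contact Jane at jane.doe@example.org or 555-867-5309."
print(redact(sample))
# Contact Jane at [EMAIL] or [PHONE].
```

Keeping the bracketed labels (rather than deleting matches outright) preserves the sentence's readability for the model while withholding the identifying detail.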


Labor and Environmental Concerns


Key Questions for Reflection


Further Reading

