From Tools to Agents

Most of us are familiar with AI tools—spell checkers, search engines, recommendation systems. You use them when you want, they do what you ask, and they stop when you're done.

Agentic AI is different. An agent is a system that:

🎯 Has Goals

An agent pursues objectives, not just tasks. It wants to achieve something.

🔄 Acts Autonomously

It makes decisions and takes actions without constant human direction.

👁️ Perceives & Adapts

It observes its environment and adjusts its behavior to reach its goals.

⏰ Persists Over Time

It continues working toward goals across multiple interactions and contexts.

The Agent Loop

Think of an agent as constantly cycling through:

  1. Perceive: Observe the current situation
  2. Decide: Choose an action based on goals
  3. Act: Take action in the environment
  4. Learn: Update understanding and strategy
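
In code terms, the loop is simply a cycle that repeats until the goal is met. The sketch below is a toy illustration, not a real agent: the environment is a single number the agent nudges toward a target, and every name in it (ToyEnvironment, ToyAgent, and their methods) is invented for this example.

    # A minimal, runnable illustration of the perceive-decide-act-learn loop.
    # All names here are invented; real agents face far richer environments,
    # but the cycle is the same.

    class ToyEnvironment:
        def __init__(self, start: int):
            self.state = start

        def observe(self) -> int:
            return self.state

        def apply(self, action: int) -> int:
            self.state += action
            return self.state

    class ToyAgent:
        def __init__(self, goal: int):
            self.goal = goal      # the objective the agent pursues
            self.memory = []      # persists across iterations of the loop

        def perceive(self, env: ToyEnvironment) -> int:
            # 1. Perceive: observe the current situation
            return env.observe()

        def decide(self, observation: int) -> int:
            # 2. Decide: choose an action based on the goal
            return 1 if observation < self.goal else -1

        def act(self, env: ToyEnvironment, action: int) -> int:
            # 3. Act: take action in the environment
            return env.apply(action)

        def learn(self, observation: int, action: int, result: int) -> None:
            # 4. Learn: record what happened to inform future decisions
            self.memory.append((observation, action, result))

        def run(self, env: ToyEnvironment) -> None:
            # The loop itself: keep cycling until the goal is reached
            while env.observe() != self.goal:
                observation = self.perceive(env)
                action = self.decide(observation)
                result = self.act(env, action)
                self.learn(observation, action, result)

    agent = ToyAgent(goal=5)
    agent.run(ToyEnvironment(start=0))
    print(agent.memory)  # five (observation, action, result) steps

A real agent would replace the toy decide step with planning or a language model, but the cycle itself carries over unchanged.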

Examples in Education

Not an Agent: Grammar Checker

  • Passive until you use it
  • Responds to specific text
  • No goals beyond highlighting errors
  • Doesn't persist or follow up

Agent: AI Writing Tutor

  • Goal: Improve student's writing over time
  • Observes patterns in student work
  • Proactively suggests exercises
  • Adapts difficulty based on progress
  • Initiates check-ins and interventions
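
To make the contrast concrete, here is a rough sketch in Python. All of the names are invented for illustration: the grammar checker is a stateless function that forgets each request, while the tutor keeps a goal and a memory across interactions and decides when to intervene.

    # Tool: a pure function. Same input, same output, no memory.
    def check_grammar(text: str) -> list[str]:
        # Flags repeated words in one piece of text, then forgets it
        words = text.split()
        return [w for prev, w in zip(words, words[1:]) if prev == w]

    # Agent: a goal and a memory that persist across interactions.
    class WritingTutor:
        def __init__(self):
            self.goal = "reduce repeated-word errors over time"
            self.error_counts: list[int] = []  # one entry per draft seen

        def review(self, draft: str) -> str:
            errors = check_grammar(draft)           # perceive
            self.error_counts.append(len(errors))   # remember
            if len(self.error_counts) >= 2 and \
               self.error_counts[-1] >= self.error_counts[-2]:
                # adapt: no improvement since the last draft, so intervene
                return "Let's do a focused exercise on repetition."
            return "Nice progress, keep going."

    tutor = WritingTutor()
    print(tutor.review("the the cat sat"))  # first draft: no baseline yet
    print(tutor.review("the the cat cat"))  # no improvement: tutor intervenes

The difference is not intelligence; it is statefulness, goals, and initiative.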

Not an Agent: Search Engine

  • Responds to queries
  • Returns results passively
  • No ongoing relationship

Agent: Research Assistant AI

  • Goal: Complete research project
  • Searches multiple sources autonomously
  • Synthesizes findings
  • Identifies gaps and pursues new leads
  • Adjusts strategy based on what it finds

Why Does This Matter?

When AI moves from tool to agent, the power dynamics shift dramatically:

  • Less human control: You're not directing every action—the agent is making decisions.
  • Opacity: Complex agent behavior can be hard to predict or explain.
  • Goal alignment: What if the agent's goals don't perfectly match educational values?
  • Relationship changes: Students may form attachments to agents or become dependent on them.
  • Accountability: When an agent makes a harmful decision, who's responsible?

In education, these questions become especially urgent because they involve children, power imbalances, and the fundamental purpose of learning.

A Humanities Perspective

From a humanities standpoint, agentic AI raises profound questions:

📖 Literature & Philosophy

What does it mean to learn from a non-human agent? How does this change the nature of mentorship, wisdom, and knowledge transfer?

🏛️ History & Politics

Who controls these agents? What historical patterns of technological adoption and inequality are we repeating?

🎭 Arts & Creativity

Can an agent nurture genuine creativity, or does it optimize for measurable outcomes at the expense of exploration?

⚖️ Ethics & Values

What values are embedded in agents? How do we ensure educational AI serves human flourishing, not just efficiency?

Ready to explore specific applications and risks?

Continue to Applications →