GenAI in Education: From Classroom Tool to Intelligent Agent
02/10/2024
Author: Mtro. Juan Gaspar Hernández
Position: Professor, DELC

In my Global Perspective classes and other courses, I have consistently described Artificial Intelligence as a tool. This perspective was reinforced by various authors, including ISO (2024) and Laher (2023), who penned an article aptly titled "Why AI is a Tool and Not a Replacement for Human Originality". Their arguments were compelling, and I accepted this view without much hesitation. However, my position on the nature of AI has been dramatically shifted by one of the world's most influential thinkers: Yuval Noah Harari.

Recently, I purchased Harari's latest book, Nexus: A Brief History of Information Networks from the Stone Age to AI. Eager to dive into his insights, I began reading it as soon as it was delivered to my house. To my surprise, Harari presented a perspective that challenged my long-held beliefs about AI. On page xxii of the Prologue, he writes:

"...AI is the first technology in history that can make decisions and create new ideas by itself. All previous human inventions have empowered humans, because no matter how powerful the new tool was, the decisions about its usage remained in our hands. Knives and bombs do not themselves decide whom to kill. They are dumb tools, lacking the intelligence necessary to process information and make independent decisions. In contrast, AI can process information and make independent decisions by itself, and thereby replace humans in decision making. AI isn't a tool – it's an agent."

This last statement struck me like a bolt of lightning, prompting me to reconsider my understanding of AI and its role in our world, particularly in education.

But what exactly is an agent, and how does this redefinition change our relationship with AI? According to the Cambridge Dictionary, an agent is a person who acts for or represents another, or a person or thing that produces a particular effect or change. The Merriam-Webster dictionary expands on this, defining an agent as one that acts or exerts power, a means or instrument by which a guiding intelligence achieves a result, or a computer application designed to automate certain tasks.

Viewing AI as an agent rather than a tool fundamentally alters our perspective on its capabilities and potential impacts. While a tool is passive, waiting for human direction, an agent can act independently, make decisions, and potentially influence outcomes without direct human input. This shift in understanding carries profound implications for how we interact with and utilize AI, especially in educational settings.
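For readers who think in code, the distinction can be made concrete with a minimal sketch. The example below is entirely my own illustration, not something drawn from Harari or from any real AI library: a "tool" function sits idle until a human calls it with explicit input, while a toy "agent" loop receives only a goal and then chooses its own steps and decides for itself when to stop.

def translate(text: str) -> str:
    """A tool: it does nothing until a human calls it with explicit input."""
    return text.upper()  # placeholder for a real translation step


def simple_agent(goal: str, max_steps: int = 3) -> list[str]:
    """A toy agent: given only a goal, it chooses its own steps and decides when to stop."""
    actions = []
    for step in range(1, max_steps + 1):
        # The agent, not the user, picks the next action based on the goal and its history.
        actions.append(f"Step {step}: gather and summarize material relevant to '{goal}'")
        if step == max_steps:  # placeholder for the agent judging that the goal is met
            break
    return actions


print(translate("a tool waits to be used"))  # the human drives every call
for action in simple_agent("outline an essay on AI in education"):
    print(action)  # the agent drove the intermediate steps itself

The point of the sketch is only architectural: with the tool, every decision happens outside the code, in the user's head; with the agent, part of the decision making moves inside the system.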

In my classes, students primarily use Generative AI, particularly Large Language Models (LLMs), for research and as an aid in writing in English as a second language. I've always emphasized the importance of the three Cs: Communication, Critical Thinking, and Creativity. However, with this new understanding of AI as an agent, I realize I must place even greater emphasis on Critical Thinking and Creativity.

Critical thinking becomes paramount when interacting with an AI agent. Students must learn to carefully evaluate and verify the information provided by AI, understanding that while powerful, it's not infallible. They need to approach AI-generated content with the same scrutiny they would apply to any other source.

Creativity, too, takes on new importance. When interacting with an AI agent, the quality and creativity of the instructions or prompts we provide can significantly impact the outcomes. This reminds me of a lesson from my time at a military academy in the United States. In Leadership School, we were drilled on the importance of both giving and following instructions precisely. The principle was clear: to give effective orders, one must first understand how to follow them meticulously.

This military experience now finds a surprising parallel in our interactions with AI. Just as a soldier must give clear, precise orders to their subordinates, users of AI must learn to craft thoughtful, specific prompts to get the best results from these intelligent agents. This skill – the ability to communicate effectively with AI – is becoming increasingly crucial in our AI-augmented world.
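Purely as an illustration of what "clear, precise orders" can mean in practice, and not as any official or prescribed template, the sketch below compares a vague prompt with a more deliberate one; the wording of both prompts is my own invention.

# Illustrative only: two ways a student might phrase the same request to an LLM.
# The prompts are hypothetical examples, not a recommended or official template.

vague_prompt = "Write about climate change."

precise_prompt = (
    "You are helping a student who writes in English as a second language.\n"
    "Task: write a 200-word paragraph on how climate change affects coastal cities.\n"
    "Constraints: use simple (B1-level) vocabulary and do not invent statistics.\n"
    "Format: one paragraph, followed by a single open question for class discussion."
)

print(precise_prompt)  # the clearer the order, the more predictable the result

The contrast mirrors the military lesson above: the person giving the order carries much of the responsibility for how well it can be carried out.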

The shift from viewing AI as a tool to recognizing it as an agent also raises important questions about accountability and ethics. If AI can make independent decisions, who is responsible for those decisions and their consequences? How do we ensure that AI agents act in ways that align with human values and societal norms? These are complex questions that we, as educators and citizens, must grapple with as AI continues to evolve and integrate into our daily lives.

As I continue reading "Nexus" and exploring this new perspective, I anticipate gaining further insights to share with my students and readers. This shift in understanding from AI as a tool to AI as an agent is not merely an academic exercise; it has practical implications for how we teach, learn, and interact with technology.

In conclusion, Harari's perspective has profoundly impacted my approach to teaching about and with AI. As educators, we must prepare our students not just to use AI, but to engage with it as an intelligent agent – one that can assist, challenge, and potentially even surprise us. We must foster in our students the critical thinking skills to evaluate AI-generated content, the creativity to effectively direct AI agents, and the ethical framework to navigate the complex landscape of human-AI interaction.

As a teacher, I too am an agent of change in this twenty-first century. By embracing this new understanding of AI, I hope to better equip my students for a future where the line between human and artificial intelligence grows increasingly complex and intertwined.

Image generated by Google Gemini: the traditional classroom with AI as a tool, and the futuristic classroom with AI as an agent.