
July 21, 2025
By Sarah Quesen
Artificial intelligence (AI) is no longer a far-off idea. It’s already in the classroom.
From adaptive feedback tools like Duolingo to AI-powered tutors like Khanmigo, students and educators are interacting with AI systems every day. But these systems, while increasingly powerful, are still what we call “narrow AI.” They can simulate intelligence in specific tasks, but they don’t reason, understand, or make decisions like a human does.
As part of a presentation at the Council of Chief State School Officers 2025 National Conference on Student Assessment, I walked through the current reality of AI in education, what might be coming with the emergence of Artificial General Intelligence (AGI), and what we can do now to ensure AI serves students and educators well.
What Can AI Do Today?
We often talk about AI like it’s a single technology, but it’s really a collection of systems. Today’s AI can
- provide adaptive tutoring feedback (e.g., Khanmigo),
- simulate conversation practice (e.g., Duolingo),
- help students revise writing in real time (e.g., Grammarly, Writable), and
- recommend lessons or identify students who may benefit from additional support (many learning management systems and early-warning tools rely on rule-based algorithms, rather than deep AI, to track engagement or performance; a simple sketch of one such rule appears below).
These systems are useful. But they are narrow. They don’t understand student intent or learning context in the way a teacher does. They pattern match and generate. They can be assistive. But they aren’t autonomous decision-makers.
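To make "rule-based" concrete, here is a minimal sketch of the kind of early-warning check such platforms might run. The field names and thresholds are invented for illustration and not taken from any particular product; the point is how little intelligence is involved.

```python
from dataclasses import dataclass

@dataclass
class StudentWeek:
    student_id: str
    logins: int               # sessions this week
    avg_quiz_score: float     # 0-100 scale
    missing_assignments: int

def needs_support_flag(week: StudentWeek) -> bool:
    """Flag a student for human follow-up using fixed, hand-written thresholds."""
    return (
        week.logins < 2
        or week.avg_quiz_score < 60
        or week.missing_assignments >= 3
    )

# A student who barely logged in gets flagged even if their scores are fine;
# the rule has no sense of context, intent, or why engagement dropped.
print(needs_support_flag(StudentWeek("s-101", logins=1, avg_quiz_score=82.0, missing_assignments=0)))  # True
```

Rules like these can be useful triage, but they only compare numbers against thresholds. The judgment about what the flag means, and what to do about it, still belongs to a teacher.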
What Is Artificial General Intelligence?
AGI refers to a system that could perform any intellectual task that a human can. It could learn across contexts, reason through problems, and plan in real time without retraining. Think of narrow AI as a highly skilled specialist (like a rubric-based scoring system that expertly evaluates five-paragraph essays), while AGI would be a generalist who could evaluate any student work, from poetry to math proofs to multimedia projects, understanding intent, creativity, and reasoning just like an experienced teacher can.
While AGI isn’t here yet, how we design and govern today’s narrow AI systems may set the trajectory. The potential is enormous, but so are the risks.
Experts like Geoffrey Hinton and Sam Altman have warned that AGI could arrive sooner than we think. Predictions vary, but many believe we are within a decade of major breakthroughs.
Why Are Experts Concerned?
The concern is not just about what AGI will be able to do but also how quickly it might do it. Some of the most cited risks are listed below:
- Unpredictability: An AGI could behave in ways we don’t understand or expect.
- Speed: It could begin writing and improving its own code faster than other systems can respond.
- Power concentration: Development and control may increasingly rest with a small number of actors.
- Misalignment: Even well-intentioned goals could be executed in harmful ways if the system misinterprets human values.
- Educational risk: In schools, unchecked automation could displace educators, reinforce existing bias, and diminish student agency.
These risks aren’t abstract. In education, they translate to real classroom concerns. Power concentration could mean a handful of tech companies controlling what and how millions of students learn, widening gaps between districts that can afford premium AI tools and those that cannot. Misalignment might look like an AI tutor that optimizes for test scores at the expense of creativity or a system that interprets “help struggling students” as providing answers rather than building understanding. Even unpredictability becomes concerning when AI makes decisions about student placement, resource allocation, or learning pathways. The speed of AGI development means we might not have time to course correct once these systems are deployed at scale.
A Framework for Responsible AI in Education
Whether we’re talking about narrow AI today or the possibility of AGI tomorrow, we need a clear framework to guide responsible use. Drawing on guidance from the U.S. Department of Education and the National Institute of Standards and Technology, we focus on four core principles: transparency, privacy and security, fairness, and human oversight. To these, I add a fifth principle: student agency. While many frameworks focus on making AI safe and accountable, they don’t explicitly address what happens to student learning when we optimize away struggle and exploration.
- Transparency: Students and teachers deserve to know when AI is in use. Label systems clearly, explain their capabilities, and avoid black-box decision-making, especially when the stakes are high.
What you can do: Involve educators and students from the beginning when implementing new AI systems.
- Privacy and Security: AI tools must respect student data. Collect only what’s necessary for educational purposes, encrypt what you store, and avoid third-party tracking that could expose student information.
What you can do: Treat every AI tool as if it were handling sensitive student records.
- Fairness: Bias can be embedded in training data or emerge from model outputs. Audit regularly and correct problematic patterns if and when they appear.
What you can do: Don’t just tweak the prompts. If bias persists, consider retraining the model or using another approach.
- Human Oversight: AI should support educators, not replace them. Ensure teachers can override AI suggestions, build in checkpoints for human review, and maintain educator authority over final decisions.
What you can do: Build systems in which human judgment remains central.
- Student Agency: Too much automation can flatten curiosity and constrain creativity. Preserve spaces for exploration, encourage productive struggle, and let students own their learning journey.
What you can do: Use AI to scaffold choice, not to script every move.
Why Student Agency Matters
Student agency is foundational to learning. When students make choices, reflect on their reasoning, and struggle through uncertainty, they build cognitive and emotional skills that go beyond content knowledge. It’s important to recognize that productive struggle looks different for different learners. The goal is to use AI tools to scaffold learning appropriately by removing barriers while preserving challenge and discovery.
Many AI systems are designed to optimize for speed, correctness, and efficiency. Because they rely on pattern recognition and probabilistic reasoning, they often reinforce the most common answers and suppress outliers or creative divergence.
This can lead to what some have started calling “blandification.” As models are trained on and reinforce the average, originality begins to flatten. When the internet is saturated with AI-generated content, newer models are often trained on what previous models produced. The cycle continues, and outputs converge further toward the middle.
Models trained to predict correctness tend to reward safe answers over novel ones. When AI tools get it almost right, students may not push back. They may just accept the output. Without care, we risk raising a generation of compliant prompt followers instead of creative thinkers.
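As a toy illustration of that feedback loop (my own sketch, not drawn from the presentation), imagine a “model” that is nothing more than the mean and spread of its training data, retrained each generation on the previous generation’s slightly-too-safe outputs. The spread of what it produces shrinks generation after generation:

```python
import random
import statistics

random.seed(42)

# Generation 0: "human-written" data with plenty of variety
data = [random.gauss(0, 1) for _ in range(1000)]

for generation in range(1, 9):
    # Fit a very simple "model": just the mean and spread of the current data
    mu = statistics.fmean(data)
    sigma = statistics.pstdev(data)
    # The next generation is trained only on this model's outputs, and the model
    # samples a bit conservatively (the 0.9 factor is an arbitrary stand-in for
    # a preference for safe, high-probability outputs)
    data = [random.gauss(mu, 0.9 * sigma) for _ in range(1000)]
    print(f"generation {generation}: spread of outputs = {statistics.pstdev(data):.2f}")

# With a 10% "play it safe" shrinkage each generation, the spread falls to
# roughly 0.43 of the original by generation 8: outputs converge toward the middle.
```

The numbers are contrived, but the dynamic is the concern: each pass rewards the typical and discards the unusual, and the effect compounds.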
What Happens Next Is Up to Us
What does responsible AI implementation look like in practice? Start small. Choose one AI tool your school uses and audit it against these five principles. Form a committee that includes teachers, students, and parents to review AI policies. Create assignments in which students compare their own work to AI-generated content and reflect on the differences.
Most importantly, preserve spaces where AI doesn’t belong. Not every problem needs an algorithmic solution. Not every struggle should be optimized away. Sometimes the messy, inefficient, deeply human process of learning is exactly what students need.
The future of AI in education isn’t predetermined. But the choices we make today, in our classrooms and districts, will shape whether these tools expand human potential or constrain it.
Navigating the Promise and Reality of AI With WestEd
The fields of education and human development are being reshaped by AI. WestEd aims to support practitioners, policymakers, and other educational professionals with practical solutions, critical resources, and pioneering research as they make innovative, efficient, and safe use of AI.
Navigate the promise and reality of AI with WestEd as your trusted partner.
Sarah Quesen is an expert in statistics and psychometrics with a keen interest in emerging technologies. As Director of Assessment Research and Innovation (ARI), she leverages her understanding of assessment systems to lead rigorous, transformative research and provide evidence-based technical assistance to states, districts, and commercial organizations.