
WestEd’s Leading Together Webinar Series: Selecting GenAI Tools That Build Healthy Classroom Culture Transcript

Featured Speaker

  • Dr. Saroja Warner, Senior Director of Responsive and Sustaining Education, WestEd

Host

  • Danny Torres, Associate Director of Events and Digital Media, WestEd

Danny Torres:

Hello, everyone, and welcome to the 24th session of our Leading Together series. In these 30-minute learning webinars, WestEd experts are sharing research and evidence-based practices that help bridge opportunity gaps, support positive outcomes for children and adults, and help build thriving communities. Our topic: “Selecting Generative AI Tools That Build Healthy Classroom Culture.” Our featured speaker today is Dr. Saroja Warner, Senior Director of Responsive and Sustaining Education at WestEd. Thank you all very much for joining us. My name is Danny Torres. I’m Associate Director of Events and Digital Media for WestEd, and I’ll be your host.

Now before we move into the contents of today’s webinar, I’d like to take a brief moment to introduce WestEd. As a non-partisan research, development, and service agency, WestEd works to promote excellence, improve learning, and increase opportunity for children, youth, and adults. Our staff partner with policymakers, district leaders, school leaders, communities, and others, providing a broad range of tailored services, including research and evaluation, professional learning, technical assistance, and policy guidance. We work to generate knowledge and apply evidence and expertise to improve policies, systems, and practices. Now I’d like to pass the mic over to Dr. Warner. Dr. Warner, take it away.
 
Dr. Saroja Warner:

Thanks, Danny, and welcome everyone. Thank you for joining us today. Again, my name is Saroja Warner, and I’m thrilled to have educators, school and district leaders, and EdTech developers all in the same room for this very important conversation.

I started my career as a high school social studies teacher, and since then I have worked extensively in educator preparation and in educator education, (chuckles) in higher education programs at the state level and in the policy arena.
Today’s session isn’t about demonstrating a new AI tool or making predictions about the future. Instead, it’s about a question many of us are already grappling with, and that is: How do we make sure the technologies entering classrooms actually help teaching and learning rather than distract from it? For the next 25 minutes, we’re going to step back from individual products and look at patterns. We’ll talk about why so many well-intentioned EdTech tools fail to take hold and why GenAI, despite its promise, faces the same risk. The goal is to leave you with a lens you can use in your own role, whether you’re selecting tools, using them, or building them.

Before we get started, I’d like you to consider this question. Based on your experience and observations, when new EdTech tools fail to gain traction in schools, what is most often the root cause?

Please take a minute to respond to the poll. The responses include: insufficient training or professional development, lack of time for teachers to use the tool, the tool doesn’t align with classroom culture and instructional values, technical or integration issues, and lastly, the tool doesn’t demonstrate clear impact on learning.

So the majority of people chose insufficient training or professional development, and then it seems like it’s a close tie for all of the others. I’m pretty sure, I’m going to make a bet that if I had an “all of the above,” that might have been the majority of responses as well.

Thanks for participating in that. I think it really affirms what a lot of folks’ experiences have been, and it reflects what the research is starting to show us about the use of EdTech tools in schools. I’m going to move on and get us started, because this is a really great way to introduce the framing of why this matters now.

We are certainly living through another major wave of educational technology adoption. GenAI tools are appearing quickly, and schools are under real pressure to respond sometimes before there’s time to fully reflect. That urgency feels new, but the situation itself isn’t. Education has been here before many times. Most of us have seen this pattern. A tool is adopted with excitement and high expectations. There’s training. There’s rollout, and then over time, use becomes uneven. Eventually, the tool is quietly set aside. I laugh, but I think it’s the reality a lot of us have experienced.

What’s important is that these tools usually didn’t fail because they were broken or ineffective in theory. They failed because they didn’t fit. They didn’t align with how teachers teach, how students learn, or what schools value most. And if we don’t pay attention to that with GenAI, we risk repeating the same cycle only faster and with higher stakes.

So today I want to focus on a factor that’s often overlooked: whether our technologies reflect the culture and values of classrooms and schools. Tools are not neutral. They shape decisions, roles, and relationships.

Our ambitious objectives over the next 22 minutes or so are to: identify why EdTech and GenAI tools often fail to achieve lasting impact in classrooms, understand how cultural and value alignment shapes effective technology use in schools, and use a cultural-fit lens to make better EdTech and GenAI adoption decisions. To achieve these objectives, (coughs) excuse me, we will explore briefly why EdTech often fails to deliver impact. Then I’ll review GenAI opportunities and risks, and explain what I mean by cultural fit in classrooms and schools. Then I’m going to preview a new resource we are developing at WestEd aimed at supporting districts and schools in evaluating cultural fit in practice. And lastly, I’ll address the implications for different people in various roles involved in GenAI adoption decisions, and offer a call to action in closing.

To understand what’s at stake with GenAI, it helps to look back, not because the technology is the same, because it isn’t, but because the adoption patterns are remarkably consistent. EdTech history gives us a set of warning signs that are easy to miss when enthusiasm is high.

Over the past few decades, we’ve seen a repeated cycle. A new technology is introduced with big promises. It’s positioned as a solution to longstanding problems. Schools invest time and money, expectations rise, and then reality sets in. After rollout, use becomes uneven. A small group of educators adopt deeply, others use it minimally, and many stop using it altogether. Within a few years, the tool is replaced or quietly retired, often without a clear sense of why it didn’t deliver.

Imagine a middle school teacher who prioritizes discussion, student voice, and flexible pacing. Her school adopts an adaptive learning platform that requires students to move through content in a fixed sequence and discourages deviation. Technically, the tool works well, but it quietly reshapes her classroom. Over time, she uses it less and less, not because it’s broken, but because it conflicts with how she believes learning should happen.

This pattern has played out across many tools: learning management systems, adaptive learning software, personalized learning platforms, and data dashboards. The specifics change, but the outcome is often the same: limited impact relative to the original promise.

We often explain these failures in familiar ways: not enough training, as you all identified; not enough time; not enough support. Those factors do matter, but they don’t fully explain why some tools stick while others don’t, even when resources are similar.

A deeper issue is that many tools are designed around an idealized version of teaching rather than the realities of classrooms. They assume certain beliefs about instruction, efficiency, and decision-making that don’t always match how educators see their work.

Every tool encodes assumptions about what good teaching looks like, how learning should happen, and who gets to make judgments. When those assumptions clash with school culture or professional values, educators adapt the tool or disengage from it.

The key lesson from EdTech history is this: Impact isn’t just about whether a tool works; it’s about whether it fits. As we turn to GenAI, this question becomes even more important, because these tools don’t just support instruction, they actively shape it.

GenAI isn’t just another EdTech tool. It represents a shift. Earlier tools focused on content delivery or task management. GenAI generates language, feedback, and suggestions for instructional activities. This changes not just efficiency, but whose judgment shapes learning.

GenAI doesn’t just support instruction, it participates in it. In other words, it’s influential but not substitutive. GenAI doesn’t teach students, teachers do, but the tools we adopt shape how teaching is experienced. GenAI won’t replace good teaching, but it will shape learning unless educators shape it first. It’s not a substitute for teaching. It is a force that reshapes it, for better or for worse.

Educators are turning to GenAI for understandable reasons: faster feedback for students, support for bilingual learners, help differentiating instruction at scale. And schools see a lot of promise in GenAI: expanding access to academic supports and reducing teacher workload in under-resourced contexts.

These benefits are real and important, especially from an equity perspective. The question is not whether GenAI can support equity, but how it defines quality and success when it does.

Imagine a high school English teacher in a culturally and linguistically diverse school. The teacher uses a GenAI writing assistant to provide feedback on student narratives. The tool is marketed as supporting clear academic writing and college readiness. Students receive feedback encouraging standardized grammar and sentence structure and removal of idiomatic or community-based language, applying a narrow definition of formal voice.

Over time the teacher notices patterns: multilingual students’ home languages are flagged as errors, and culturally grounded storytelling styles are marked as off topic. Students even begin to censor voice and identity to satisfy the tool. In this scenario, the tool wasn’t just correcting writing. It was quietly defining, maybe not so quietly, which voices belonged.

Now imagine a similar classroom, but with a differently designed GenAI writing tool. This tool is explicitly built with linguistic diversity as a design goal, not a problem to fix. Instead of flagging home language use as errors, the tool asks students what audience and purpose they’re writing for. It recognizes multiple English varieties and translanguaging practices, and it labels feedback, (coughs) excuse me, as suggestions, not corrections. When students use community-based language or bilingual expressions, the tool explains how that language functions rhetorically and offers options: keep this voice, or revise for a different audience.

The teacher notices different outcomes. Students maintain voice while learning code switching as a skill, not a mandate. Multilingual learners see their language as an asset. The tool supports rather than replaces the teacher’s instructional values.

Both tools use GenAI. The difference isn’t the technology. It’s the values guiding the design.

This contrast shows why effectiveness alone isn’t enough. Two tools can work technically, but only one fits the culture and values of the classroom.

This reveals risks that are easy to miss. GenAI may reinforce dominant cultural norms under the label of equality. Bias doesn’t always show up as explicit harm. It shows up as erasure, and teachers spend time mediating between the tool and their values. The pressure is subtle. AI feedback sounds neutral and objective, and students trust it more than human guidance. When AI defines what “good” looks like, it shapes who feels successful.

GenAI systems embed assumptions about language, knowledge, and intelligence, what counts as rigor, and whose ways of knowing are legitimate. If these assumptions conflict with culturally responsive teaching, adoption will stall or harm will occur. That’s why cultural fit must be examined before tools enter classrooms.

This brings us to a critical question. How do we evaluate whether GenAI tools align with the values and cultures of our schools?

So what do we mean by cultural fit? What do I mean? When we say culture, we’re not talking about personality or comfort. We mean shared beliefs about how students learn, norms around authority, autonomy, and relationships, values about equity, voice, and what counts as success. Every classroom and school has a culture, even if it’s never written down.

Cultural fit is the degree to which a technology’s assumptions, values, and ways of working align with the beliefs, practices, and priorities of the classroom, schools, and communities in which it is used. Culture is how teaching and learning actually happen day to day.

I think it’s worth repeating. Cultural fit is the degree to which a technology’s assumptions, values, and ways of working align with the beliefs, practices, and priorities of the classroom, schools, and communities in which it is used.

EdTech tools don’t enter neutral spaces. They interact with instructional philosophies, professional identities, and community values. Tools encode assumptions such as: Is learning about efficiency or exploration? Is the teacher a facilitator, an expert, or a monitor? Is variation expected or treated as a problem?

When these assumptions align with school culture, tools feel supportive. When they don’t, tools feel intrusive or are quietly ignored.

Many adoption failures aren’t about training or fidelity. They’re about value conflict. Teachers feel tools push practices they don’t believe in, and students experience learning as less human or less relevant. With GenAI, misalignment is amplified because tools generate guidance, not just content. And AI feedback can override local norms and professional judgment. The result is superficial use, workarounds, and eventual abandonment.

The key takeaway here is that if a tool conflicts with how educators define good teaching, it won’t stick no matter how powerful it is. Cultural fit is not just about comfort, it’s about equity. When tools assume a narrow set of norms, certain students’ language, identities, or ways of knowing are privileged, and others are treated as deviations to be corrected. Schools serving diverse communities feel this most acutely.

A mismatch between tool values and community values can undermine trust, reinforce inequities, and create hidden harms even when intentions are good.

So this reframes a familiar question, not just does the tool work, but work for whom, in what context, and according to whose values? Cultural fit isn’t a soft add-on. It’s a condition for meaningful sustained impact, especially with GenAI. So the question becomes: How do we actually evaluate cultural fit when choosing tools?

Now, traditional procurement processes typically ask questions like, “Does it work? Is it compliant? Is it affordable?” But what’s often missing is a way to ask, does this tool reflect how we believe teaching and learning should happen here?

Cultural fit rarely fails loudly. It fails quietly through low use and workarounds. To address that gap, we at WestEd are developing a cultural fit scoring and discussion protocol paired with a facilitator guide that will be ready for primetime before the clocks go forward this year.

Our protocol exists to answer a simple but often skipped question. Not just does the tool work, but does it reinforce how we believe teaching and learning should happen here? The protocol is intentionally not a technical evaluation or a compliance checklist. It’s a structured way to surface assumptions and trade-offs before tools enter classrooms or before they’re scaled. It’s designed to bring educators, leaders, and technical staff into the same conversation using a shared language.

At a high level, the protocol follows four steps. First, the group aligns on context: What problem are we solving, and what matters most here? Second, participants rate alignment individually, which prevents groupthink. Third, differences in scores drive discussion. Disagreement is treated as useful data. And finally, the group looks for patterns and frames next steps rather than forcing a simple yes or no.

The categories are deliberately broad and human centered. They include not just instructional values, but educator agency, student experience, equity and inclusion, transparency, and sustainability. The goal is to understand how this tool behaves in real classrooms over time, not just how it demos.

This matters especially for GenAI because these tools don’t just automate tasks. They generate guidance, feedback, and norms. If their embedded values don’t align with local culture, that misalignment shows up quietly, again, low use, workarounds, or subtle harm.

So again, as we come to the end, I can’t believe that 23 minutes went by that fast, (chuckles) it’s important we also address the roles and responsibilities for different people in the system. For school district leaders and school leaders, remember that procurement is a value-shaping act, not just a financial one. Cultural fit helps explain why investments succeed or quietly fail. For principals and teachers, remember your professional judgment is not a barrier to innovation. It’s essential data. Naming values upfront protects instructional integrity and student experience. And for EdTech developers, always remember that adoption depends on trust and alignment, not just features. Making values explicit is a design strength, not a liability.

As GenAI tools continue to enter schools, the question isn’t how fast we can adopt; it’s how intentionally we can choose. My recommendation is to start small: use our protocol in a meeting, in a pilot, or in one decision in the coming year, and notice what becomes visible when values are named.

So, in closing, I offer a few reflection questions. What is one upcoming EdTech or GenAI decision where evaluating cultural fit could change the outcome? What value in your classroom, school, or district feels most important to protect as GenAI tools expand? What assumption about teaching or learning is most important for GenAI tools to make explicit rather than hide? Whichever one resonates most, that’s a good place to start, because asking better questions is how we make better technology decisions.

Remember, the future of EdTech won’t be shaped by tools alone, but by the values we choose to design, adopt, and protect.

I want to thank everybody for joining the webinar today, and encourage you to please stay connected. If anyone is attending SXSW EDU in Austin, Texas this spring, shameless plug, I invite you to join the session that my colleagues and I are facilitating called Designing Belonging: Centering Identity in Math EdTech on March 9th. We’ll be presenting also at the ISTE+ASCD conference in June in Orlando, and I really look forward to staying in touch with all of you. With that, I’m going to close and send this back to Danny.
 
Danny Torres:

Well, thank you, Dr. Warner, for a great session today. And thank you to all our participants for joining us. We really, really appreciate you being here.

Please feel free to reach out to Dr. Warner via email if you have any questions about the work we discussed today. You can reach her at [email protected]. And to learn more about AI at WestEd, visit us online at wested.org/ai.

And you can check out recordings of our past Leading Together webinars online. We’ve covered a range of topics, including literacy, assessment, special education, mathematics, and other sessions on artificial intelligence. We also have another session on artificial intelligence this week and one on math at the end of January. To access our Leading Together webinar series recordings and to register for upcoming events, visit us online at wested.org/leading-together.

And finally, if you’re interested in learning more about WestEd and staying connected, you can sign up for WestEd’s email newsletter to receive updates on research, free resources, services, and more. Subscribe online at wested.org/subscribe, or you can scan the QR code displayed on the screen here. You can also follow us on LinkedIn and Bluesky.

With that, thank you all very, very much. We’ll see you next time.