Last July, the Peninsula School District in Washington state was among the first school systems in the country to put out guidance on using artificial intelligence in the classroom.
Since then, the district’s website has become a busy corner of the internet, with its “Artificial Intelligence—Principles and Beliefs” garnering more than 600 hits a month. That’s likely because the document has been shared by nonprofit organizations—such as TeachAI and the Consortium for School Networking—seeking to highlight AI guidance models for other districts.
Kris Hagel, the Peninsula district’s executive director of digital learning, who has been making the rounds of webinars and forums on using AI in K-12 schools, doesn’t hide that he had an unusual co-author on the AI guidance: ChatGPT.
The AI tool helped develop Peninsula’s first draft, using data Hagel provided. Then Hagel and his team edited and improved on the document—a writing process Hagel thinks could become more common as generative AI takes hold.
Peninsula is unusual. Nearly 80 percent of educators say their district has not crafted clear policies on using AI in the classroom, according to a survey of 924 educators conducted by the EdWeek Research Center in November and December.
Hagel believes that school districts need to learn to use the technology themselves—and get their students and teachers to use it, too. Peninsula’s guidance embraces AI’s potential to help educators with routine tasks and cater to students’ individual learning needs by helping to present information in different formats. It also acknowledges AI’s flaws—including algorithmic bias—and says humans must have ultimate authority over any decisions informed by AI tools. And it recommends teaching students to use AI appropriately.
Education Week spoke with Hagel over Zoom about setting expectations around the use of AI.
This interview has been edited for brevity and clarity.
Why did your district release an official policy on AI so far ahead of everyone else?
Teachers [using AI with students] asked for it, so that they were not the ones sticking their necks out. They said, “The district needs to tell us something so that I have cover if a parent complains, or a student doesn’t want to do it, or my principal is evaluating me and doesn’t like what I’m doing.”
It’s easier for teachers to move forward if the district says this is something that we believe in and we embrace.
How did you go about crafting guidance on AI?
Right about the time that [teachers] asked for guidance, the U.S. Department of Education came out with a document around teaching and learning with artificial intelligence. That was one of the documents I used as a basis for our guidance.
I highlighted [ED’s] 75-page PDF with everything that I thought was important. And then I took all of those highlights, and another couple of documents that were out at the time around generative AI in education, and I dumped them all into ChatGPT and said, “Give me a rough draft of a principles and beliefs document.”
Then I took four pieces of my own writing, had ChatGPT analyze those, and had it rewrite the draft in my voice and tone.
That became the first draft. Then we took it to the teaching and learning staff and had them look it over and give feedback. And then I took it to our teachers that have been working with AI and had them provide feedback. And at that point, we just [released] it, saying, “This is what we think we’re going to do. If you care, let us know.”
What’s your advice to other districts looking to craft their own AI guidance?
The first thing I tell people is to just steal ours. [Laughs.]
Honestly, I tell them not to spend a ton of time thinking this through. There’s a ton of examples out there. You can take ours. You can take the guidance from TeachAI. There are six states now that have guidance out.
One of the things that’s really hard with AI right now is that you don’t know what it’s gonna be next year, and you don’t know what it’s gonna be next week. Every day, any big tech company could come out with some amazing new AI thing. And then you’re like, “Oh, wow, that throws everything up [in the air] that we’ve already been talking about.”
So, I tell people to just get started. Historically, in education, everyone wants to set policy and set guidelines, and then set professional development, make adoptions, and buy programs. And we’re kind of doing it backwards.
Let’s try it. And then we’ll figure out what works. And then we’ll write rules around it. And a lot of people are really hesitant about that.
There are a lot of superintendents I’ve talked to who say, “I want standards, I want guidelines, I want the state to tell me what to do. And then I’m going to do that in my district.”
I think the problem with doing that with AI is you have the possibility of falling so far behind. You’re gonna have that whole class of 12th graders that never got anything around AI. They’re gonna leave your system. As you continue to wait because you’re not ready, kids you could have done something with are going to leave your system.
Where are you in your adoption of AI in the classroom?
Have you seen the curve of innovation where you have your early adopters, and then your fast followers, then most everybody else? And your laggards at the end? We’re in the second [fast followers] phase of that now.
Our early adopters have latched onto AI. And now we’re trying to figure out how to get the bulk of our teaching staff to embrace it. About 10 percent of our staff is pretty comfortable using AI in the classroom now.
I just met with a bunch of our teachers yesterday, and they’re expanding and growing what they’re doing. They’re starting to change the way that they’re teaching in class. We had a high school chemistry and astronomy teacher talk about how he’s expecting every single student to use AI. And he’s suspicious if, when they submit work, they don’t also include the link to their ChatGPT conversations [for reference].
We embrace AI as a district. Our board embraces us moving forward with AI. We’ve set basic beliefs around AI. We have essentially the right foundation, and now it’s a matter of figuring out how to expand and grow this.