The AI schools framework is right about the goal — but there's a simpler path
The Australian Framework for Generative AI in Schools gets the diagnosis right. Students need to understand how AI works, not just how to use it. Teachers need to be able to explain what’s happening under the hood. Schools need to think carefully about privacy, fairness, and the wellbeing of their students. Six principles, twenty-five guiding statements — hard to argue with any of it.
But while spelling out the problem is important, we need solutions too.
As Lucinda McKnight and Leon Furze argued in The Conversation, the framework places an “extraordinary onus” on teachers. It asks them to conduct risk assessments of algorithms, ensure “explainability” of AI systems, revise assessments, consult communities — all within already-stretched workloads, without additional funding, and in many cases without the technical background to know where to start. The implicit assumption is that teaching AI literacy means buying edtech products, navigating data privacy agreements, and somehow becoming an expert in algorithmic auditing on top of everything else.
There’s a much simpler way in.
Build the thing
LLMs Unplugged takes a different approach: instead of interacting with a commercial AI product and trying to explain what it’s doing, students build their own language model from scratch. Pen, paper, dice. No screens, no accounts, no data leaving the room.
The activity works like this. Students take a short text — a few sentences of a picture book, say — and count how often each word follows each other word. They fill in a table. They roll dice to sample from those frequencies. Out comes new text, generated by their model. It’s often funny, sometimes nonsensical, and occasionally eerily plausible. The point isn’t the output — it’s the understanding that comes from having built the machinery yourself.
This is not a simplified analogy for how language models work. It is how they work — at a smaller scale, obviously, but the core mechanism is the same. Modern LLMs use exactly this approach (modelling language as weighted distributions over sequences) at vastly greater scale and with learned rather than hand-crafted statistics. When a student rolls a die to pick the next word based on probabilities they counted by hand, they’re doing what GPT does for every token it generates — except GPT’s probabilities come from billions of learned parameters rather than a hand-filled table.
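For readers who want to see the mechanism spelled out, here is a minimal sketch of the same idea in Python — count which word follows which, then “roll dice” by sampling in proportion to those counts. The function names and example text are illustrative, not taken from the activity materials:

```python
import random
from collections import defaultdict

def build_table(text):
    """Count how often each word follows each other word (the paper table)."""
    words = text.split()
    table = defaultdict(lambda: defaultdict(int))
    for current, nxt in zip(words, words[1:]):
        table[current][nxt] += 1
    return table

def generate(table, start, length=10, seed=None):
    """Roll the 'dice': pick each next word in proportion to its count."""
    rng = random.Random(seed)
    word = start
    output = [word]
    for _ in range(length - 1):
        followers = table.get(word)
        if not followers:
            break  # dead end: this word only ever appeared at the end
        choices, counts = zip(*followers.items())
        word = rng.choices(choices, weights=counts)[0]
        output.append(word)
    return " ".join(output)

text = "the cat sat on the mat and the cat ran"
table = build_table(text)
print(generate(table, "the", length=8))
```

A student with pen and dice is executing exactly this loop by hand; the only differences in a real LLM are the scale of the table and the fact that the weights are learned rather than counted.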
What the framework asks for, without the overhead
The framework’s transparency principle calls for “explainability” — that teachers and students should be able to understand and explain how an AI system reaches its outputs. This is genuinely difficult when the system in question is a black-box hosted chatbot built on hundreds of billions of parameters. But a model you built yourself on a sheet of A3 paper? Every step is visible. Every decision is traceable. Explainability isn’t an aspiration — it’s a structural property of the activity.
The privacy and safety principle is similarly straightforward. Most paths to AI literacy in schools involve students interacting with online tools, which means accounts, data collection, acceptable use policies, and the ever-present question of what happens to student conversations. LLMs Unplugged sidesteps all of this. There’s no software to vet, no data to protect, no terms of service to parse. The most sensitive piece of technology in the room is a six-sided die.
Then there’s fairness. The framework rightly asks that AI in schools should be accessible regardless of a school’s resources. But the edtech pathway creates exactly the inequity it’s trying to prevent — well-resourced schools get the good tools, everyone else gets whatever’s free. An activity that requires paper and dice works the same in every classroom, in every school, in every state.
And the wellbeing principle? No risk of students developing over-reliance on a chatbot, no chance of harmful or inappropriate AI-generated content, no parasocial relationships with a language model. The model they built doesn’t talk back.
The framework’s teaching and learning principle emphasises that AI should support teacher expertise, not replace it. I’d argue it should also not overwhelm it. Asking teachers to become algorithmic auditors is not support — it’s an unfunded mandate dressed up as professional development.
LLMs Unplugged is designed so that any teacher can run it, regardless of their technical background. You don’t need to understand neural networks or transformer architectures. You need to be able to count words, fill in a table, and roll dice. The activity builds the teacher’s understanding alongside the students’ — and it maps directly to existing curriculum outcomes in maths, digital technologies, and English.
Running LLMs Unplugged doesn’t mean there are no deep and nuanced questions left to wrestle with in the classroom. That your very small language model can generate occasionally coherent text is marvellous — and yet it’s vastly less impressive than the real large language models now available to anyone with an internet connection, often for free. Why the gap? What does scale buy you? Those are exactly the conversations worth having.
The simplest path is often the best one
To be clear, I’m not arguing that schools should never use AI tools (I use Claude Code all the time for software development), or that the framework is wrong to ask for care in how they do. But there’s a version of AI literacy that doesn’t require procurement processes, data protection impact assessments, or teachers becoming overnight experts in machine learning. It requires paper, dice, and a willingness to get stuck in.
The AI schools framework asks schools to do hard things. Some of those things genuinely are hard. But understanding how language models work doesn’t have to be one of them.
If you’d like to try LLMs Unplugged in your school, get in touch.