Agentic Tool Use
Turn your language model into an agent by giving it access to external tools—people, objects, or resources in the room—that it can call when it needs information beyond what’s in the grid.

You will need
- a completed model from Training
- pen, paper, and dice as per Generation
- people or things to serve as “tools” (see examples below)
Your goal
Generate text where the model acts as an agent. Stretch goal: design your own tool and integrate it into your model.
Key idea
What makes a language model an “agent”? In practice, it comes down to tool use—a model that can recognise special tokens triggering external actions, pause generation, call a tool, and incorporate the result before continuing. That loop of generate → call tool → incorporate result → keep generating is the core of agentic AI.
Setting up tools
Before generation, choose a person or object to role-play as the “tool”. Each tool has:
- a trigger word that appears in your model’s vocabulary
- a capability (what it can do)
- a return format (what it gives back)
Example tools
| Trigger word | Tool | Capability | Returns |
|---|---|---|---|
| VOTE | the room | ask a yes/no question | “yes” or “no” (majority) |
| POLL | show of hands | ask a multiple-choice question | the winning option |
| LOOKUP | someone with a phone | search for a fact | a short answer |
| CALCULATE | someone with a calculator | do arithmetic | a number |
| TIME | a clock or watch | check the current time | the time |
| COLOUR | a physical object | observe something | a colour word |
| ASK | a designated expert | answer a domain question | a short phrase |
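In code, a setup like the table above might be a small registry keyed by trigger word (a sketch; the entries and callables are toy stand-ins for the people and objects in the room):

```python
# Each tool mirrors the table's columns: a capability description, a return
# format, and a callable standing in for whoever executes it.
TOOLS = {
    "CALCULATE": {
        "capability": "do arithmetic",
        "returns": "a number",
        "run": lambda question: str(eval(question, {"__builtins__": {}})),
    },
    "VOTE": {
        "capability": "ask a yes/no question",
        "returns": "'yes' or 'no' (majority)",
        "run": lambda question: "yes",  # stand-in for counting hands
    },
}

def call_tool(trigger, question):
    """Look up the tool registered for a trigger word and execute it."""
    return TOOLS[trigger]["run"](question)
```

The restricted `eval` here is only a shorthand for "someone with a calculator"; in the classroom activity, a person fills this role.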
Algorithm
- Add trigger words to your model’s vocabulary (new rows and columns).
- Train or manually add counts so trigger words can appear in generation.
- During generation, when you sample a trigger word:
- pause generation
- formulate a question or request based on context
- the tool “executes” and returns a result
- write down the result as the next word(s)
- continue generation from there
This is the agentic loop: your model generates until it hits a trigger, hands off to a tool, gets a result, and keeps going. Real AI agents do exactly the same thing—just faster and with more tools.
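The loop above can be sketched in a few lines of Python (a toy sketch: the bigram table and tools are invented stand-ins, matching the example session below):

```python
import random

# Toy bigram table: current word -> possible next words.
BIGRAMS = {
    "The": ["answer"], "answer": ["is"], "is": ["VOTE"],
    "yes": ["."], ".": ["We"], "We": ["should"], "should": ["CALCULATE"],
}

# Tools keyed by trigger word; each returns the word to write down.
TOOLS = {
    "VOTE": lambda: "yes",      # stand-in for asking the room
    "CALCULATE": lambda: "56",  # stand-in for the calculator person
}

def generate(word, max_steps=10):
    """Generate text; on a trigger word, call the tool and write its result."""
    out = [word]
    for _ in range(max_steps):
        options = BIGRAMS.get(word)
        if not options:           # no known continuation: stop
            break
        word = random.choice(options)
        if word in TOOLS:         # agentic step: pause generation,
            word = TOOLS[word]()  # call the tool, take its result
        out.append(word)
    return " ".join(out)          # (punctuation gets a space; it's a toy)
```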
Example session
Model trained on: “The answer is VOTE. We should CALCULATE the total.”
Agentic generation with tools:
- Start with “The”
- Sample → “answer”
- Sample → “is”
- Sample → VOTE (trigger!)
- Pause. Ask the room: “Should we continue?”
- Room votes: majority says “yes”
- Write down “yes”
- Sample from “yes” row → “.”
- Continue: “We” → “should” → CALCULATE (trigger!)
- Pause. Ask: “What is 7 times 8?”
- Calculator person says: “56”
- Write down “56”
- Sample from “56” row (if it exists) or treat as end
Generated text: “The answer is yes. We should 56”
(The grammar breaks down, but that’s fine—the point is demonstrating the agentic mechanism.)
Instructor notes
Designing good tool triggers
For the activity to work well:
- add trigger words to rows where they make sense contextually (e.g., VOTE after “is” or “the”)
- design tools where all the “outputs” are tokens in the model, e.g. yes/no, multiple choice (one cheeky trick is to have the tool return its result followed by a “.”, because there’s no reason a tool can’t return multiple tokens)
- have the tool operator ready before you start generating
Discussion questions
- when should an agent use a tool vs try to answer itself?
- what happens if a tool returns something unexpected?
- how does the model “know” to call a tool? (it doesn’t—it just samples the trigger word)
- what tools would be most useful for different kinds of text?
- could a tool’s response change what the agent generates next?
Classroom variations
Simple version: use just one tool (VOTE) and have the whole class participate. The model generates until it hits VOTE, then you ask a question and count hands.
Advanced version: set up multiple tools around the room. Different students operate different tools. The agent doesn’t know which tool will be called next.
Multi-step version: chain tool calls together—the result of one tool becomes the context for calling another. This is closer to how real AI agents plan and execute multi-step tasks.
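The multi-step version might be sketched as a fixed chain where each tool’s answer feeds the next question (a sketch; the tools, questions, and the looked-up fact are invented for illustration):

```python
def chain(tools, steps, context=""):
    """Run a chain of tool calls; each result becomes input to the next."""
    for trigger, make_question in steps:
        question = make_question(context)
        context = tools[trigger](question)  # result feeds the next call
    return context

# Toy tools: stand-ins for the phone-holder and the calculator-holder.
tools = {
    "LOOKUP": lambda q: "8",  # pretend search result for the question below
    "CALCULATE": lambda q: str(eval(q, {"__builtins__": {}})),
}

# A two-step plan: look up a number, then do arithmetic with it.
steps = [
    ("LOOKUP", lambda _: "How many legs does a spider have?"),
    ("CALCULATE", lambda legs: f"7 * {legs}"),
]
```

Running `chain(tools, steps)` looks up “8”, then computes “7 * 8”, returning “56”.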
Connection to current LLMs
“Agentic AI” has become a buzzword, but in practice it really just means tool use in a loop. As Simon Willison puts it, an LLM agent is something that “runs tools in a loop to achieve a goal”—and that’s exactly what your model is doing.
Tool use (also called “function calling”) is how modern AI assistants perform actions in the world:
- the agentic loop: generate → detect tool call → execute tool → feed result back → continue generating, exactly like your trigger-word cycle
- examples: web search, code execution, API calls, database queries, image generation
- structured calls: modern models output JSON-formatted tool calls (function name, parameters) rather than just trigger words, but the mechanism is the same
- chaining: real agents chain multiple tool calls to complete complex tasks—planning, executing, observing results, and adjusting
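A structured call of this kind might look like the following (the JSON shape is illustrative; real provider APIs differ in the details):

```python
import json

# An illustrative JSON tool call, as a modern model might emit it.
raw = '{"name": "calculate", "arguments": {"expression": "7 * 8"}}'

call = json.loads(raw)

# Dispatch table: function name -> implementation (a toy stand-in).
functions = {
    "calculate": lambda expression: str(eval(expression, {"__builtins__": {}})),
}

# Execute the named function with the model-supplied arguments;
# the result is fed back into the model's context, like the trigger words.
result = functions[call["name"]](**call["arguments"])
```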
The key insight: the model doesn’t “know” anything the tool returns—it just learns when to ask. Your classroom tools demonstrate this perfectly: the model samples VOTE not because it knows the answer, but because the training data included VOTE in that context. The actual knowledge comes from outside the model.
This is why tool-using AI agents can do things like search the web for current information, run calculations they couldn’t do in their head, or control robots and software. The model’s job is to know when to call a tool and how to use the result—not to contain all knowledge itself.