Designing the LLMs Unplugged brand mark
Every project should feel like a natural extension of your brand, and for a project about demystifying how language models work it felt right to make the brand mark itself an explainer. So the LLMs Unplugged brand mark is built from the thing that language models actually operate on—tokens.
# Tokenising the title
If you feed the string “LLMs Unplugged” into OpenAI’s tokeniser (the cl100k_base vocabulary used by GPT-3.5/4), you get five tokens:
| Token | ID |
|---|---|
| LL | 3069 |
| Ms | 5765 |
| Un | 1252 |
| plug | 37729 |
| ged | 2004 |
These splits are a nice illustration of tokenisation in practice. “LLMs” doesn’t
stay as one piece—it becomes LL + Ms, which makes a certain kind of sense
if you squint (capital letters are rare enough that the tokeniser treats them
separately). “Unplugged” becomes Un + plug + ged, with that leading space
on Un showing how BPE
tokenisers encode word boundaries. None of this is obvious until you actually
look at it, which is kind of the point of the whole project.
# The word mark
Each of those five tokens becomes a coloured brick in the word mark, arranged on two lines to spell out the title. Each brick cycles between its text label and a 4×4 dot pattern encoding the token ID in binary—more on that below.
# The favicon
The favicon takes this one step further. Each token ID is just a number, and
numbers can be written in binary. Token 3069 (“LL”) in 16-bit binary is
0000 1011 1111 1101. Lay those 16 bits out in a 4×4 grid, colour the 1s in
gold and leave the 0s dim, and you get a tiny visual fingerprint of the token.
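That layout is easy to reproduce. A minimal Python sketch (the `#`/`.` characters are my own stand-ins for gold and dim dots, not the site's actual rendering):

```python
# Render a token ID as the favicon's 4x4 bit grid.
# "#" stands in for a gold dot, "." for a dim one.
def token_grid(token_id: int) -> str:
    bits = format(token_id, "016b")               # 16-bit binary, MSB first
    rows = [bits[i:i + 4] for i in range(0, 16, 4)]
    return "\n".join(r.replace("1", "#").replace("0", ".") for r in rows)

print(format(3069, "016b"))  # 0000101111111101, as above
print(token_grid(3069))
```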
If you watch the favicon for a moment you’ll notice it isn’t static—it cycles
through the bit patterns for all five title tokens on a 15-second loop, with
smooth transitions between each pattern. The animation is pure CSS @keyframes,
no JavaScript involved. Each of the 16 circles gets its own keyframe animation
based on how its bit differs across the five tokens, so some circles hold steady
while others flicker between gold and dim.
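The classification step behind that can be sketched in Python, using the token IDs from the table above (the actual CSS generation is omitted; this just shows which circles need their own `@keyframes` rule and which can stay static):

```python
# For each of the 16 circles, collect its bit across the five title tokens.
# Circles whose bit never changes can be emitted as static CSS; the rest
# each need a @keyframes rule cycling through their bit sequence.
TOKEN_IDS = [3069, 5765, 1252, 37729, 2004]  # LL, Ms, Un, plug, ged

def bit_tracks(ids):
    patterns = [format(i, "016b") for i in ids]
    # tracks[pos] is the bit at grid position pos for each of the five tokens
    return [[int(p[pos]) for p in patterns] for pos in range(16)]

tracks = bit_tracks(TOKEN_IDS)
steady = [pos for pos, t in enumerate(tracks) if len(set(t)) == 1]
flicker = [pos for pos in range(16) if pos not in steady]
print(f"{len(steady)} circles hold steady, {len(flicker)} flicker")
```

Running this shows only a handful of circles keep the same bit across all five tokens; the rest flicker.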
It’s the kind of thing that nobody will ever notice unprompted (pun intended), but once you know what it is, you can’t unsee it. Every dot pattern on the site is a real token ID rendered the same way.
# The five-up
There’s also a static version that lays all five token grids side by side in a
single row—no animation, just the five bit patterns sitting next to each
other. It reads left to right as LL · Ms ·  Un · plug · ged.
Because it’s completely static it works nicely on things like t-shirts and
stickers where you can’t rely on animation.
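The side-by-side layout can be mocked up in text the same way (again, the `#`/`.` characters are my own stand-ins):

```python
def five_up(ids):
    # Each token becomes four 4-character rows; join the grids row by row.
    grids = []
    for token_id in ids:
        bits = format(token_id, "016b")
        grids.append([bits[i:i + 4].replace("1", "#").replace("0", ".")
                      for i in range(0, 16, 4)])
    return "\n".join("  ".join(g[r] for g in grids) for r in range(4))

print(five_up([3069, 5765, 1252, 37729, 2004]))
```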
The first version matches the favicon colour scheme—gold dots on a dark background, uniform across all five tokens:
There’s also a tinted variant where each token gets its own gold-brick background, matching the colour scheme used in the title logo:
# The animated version
There’s also an animated version of the brand mark which starts with the title “LLMs Unplugged” already assembled from five gold bricks, sitting against a field of around 270 dimmed background bricks. After a pause the title tokens dissolve back into the grid, blending in with the rest. The bricks reshuffle into new random positions, then the five title tokens light up again and slide into place to re-spell the title. Each cycle the bricks are re-randomised, so the title tokens emerge from different positions every time.
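The re-randomisation each cycle amounts to dealing every brick a fresh cell. A hypothetical sketch (the grid dimensions are my assumption to get roughly 270 cells, not the site's actual values):

```python
import random

GRID_COLS, GRID_ROWS = 25, 11   # assumed layout, ~275 cells total
N_TITLE_TOKENS = 5

def shuffle_cycle(rng):
    # Deal every cell a fresh position each cycle; the first five cells
    # drawn are where the title tokens will light up before sliding home.
    cells = [(c, r) for c in range(GRID_COLS) for r in range(GRID_ROWS)]
    rng.shuffle(cells)
    return cells[:N_TITLE_TOKENS], cells[N_TITLE_TOKENS:]

title_cells, background_cells = shuffle_cycle(random.Random(0))
print(title_cells)
```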
The whole thing scales to whatever container you put it in—no pixel breakpoints or resize listeners needed. The animation captures the core visual metaphor of the project: meaningful text emerging from what looks like a random jumble of tokens. It’s basically what a language model does, just at a pace humans can follow.
# Background mode
There’s also a stripped-back variant that just shows the shuffling brick grid without ever highlighting or assembling the title tokens. It’s useful as a visual backdrop—we use it behind divider slides in our workshop presentations—where you want the token texture without the full animation competing for attention.