On the Meter
Common tiers, common classrooms

In the first episode of Black Mirror's seventh season, a primary school teacher named Amanda is mid-conversation with a struggling student when she stops, smiles, and recommends a faith-based counselling service. The recommendation isn't hers. It is an advertisement, served through a brain implant her husband can no longer afford to keep ad-free. She doesn't notice it happening. The student does. The principal will, soon.
The episode is called Common People, and it is bleak in all the obvious ways. What lingered for me was the smaller cruelty under the dystopia: that Amanda kept walking into her classroom anyway. Her tier had changed. Her teaching, in some quiet structural sense, had not been allowed to.
Less than a year after the episode aired, Sam Altman told a room of infrastructure investors at the BlackRock summit that intelligence would soon be a utility — like electricity or water — bought from his company on a meter. He added, hopefully, that the aim was for it to be "too cheap to meter". The clip travelled fast. The most cited reaction was the obvious one. This sounds like a Black Mirror episode. Specifically, it sounds like that one.
The framing is doing more work than it admits. Calling intelligence a utility makes it sound civic — like the grid, like the tap. But utilities, when they work properly, are common. Universal access is the point. What is being proposed is a meter on a thing that is privately built, privately rate-limited, and privately marked up. The analogy borrows the moral weight of the public good while keeping the business model.
What's already true: ChatGPT sells access through Free, Go, Plus, two different Pro plans, Business, and Enterprise, a gradient that has visibly multiplied across three years, with ads appearing on the lower plans since February. Claude and Gemini have their own tier systems that quietly throttle context windows, reasoning depth, and message volume. Free users get the model that answers shorter and forgets faster. Paid users get the model that thinks longer and remembers more. Pro users get the model that does what the others can't. The gradient from Common to Plus to Lux is no longer satire. It is a comparison chart.
In classrooms this lands as a problem of asymmetric cognition. A student on the free tier and a student paying twenty dollars a month are not, anymore, doing the same task. The paying student has a longer working memory at their elbow, a sharper draft partner, fewer rate limits at midnight when the assessment is due. They are also, often, the student whose family could already afford the older advantages: a quiet room, a parent who reads, a tutor on weekends.
UK survey data from HEPI and UCAS finds the divide between students who can pay for premium AI and those who can't is widening an existing gap rather than creating a new one. EDUCAUSE's 2025 landscape study reports the institutional shape of the same problem: around half of US institutions still don't provide enrolled students with any institutional AI access, with cost the most-cited reason. So students self-fund. Unevenly.
Meanwhile our assessment policies tend to address the cohort as if it were one tier. AI use permitted with citation. AI use prohibited. AI use must be declared. These rules assume a level field. They sit on top of an unlevel one.
We end up grading the difference between subscription levels and calling it the difference between students.
There is something else in the episode's central metaphor, the Rivermind implant subscription, that pricing pages don't quite catch. Amanda's brain didn't fail. The contract around her brain failed. The company kept reorganising the tiers, redefining what she had paid for, retroactively turning yesterday's Plus into today's Common. Anyone who has built a course around a particular tool, only to watch a vendor rename, restrict, deprecate, or paywall the feature mid-semester, knows this floor, and knows it is moving.
Teaching is already a relational act made under conditions of structural drift. We are now adding a second drift: the cognitive prosthesis a student brings to class can be quietly downgraded by an email from a company they have never met.
I don't have a tidy answer. The honest ones I've heard from colleagues are about reducing what the tools are asked to bear, not increasing it. Assessing process more than product. Holding more conversations and fewer essays-in-the-dark. Treating AI use as something to be discussed in the open and shaped together, rather than policed across an invisible line. None of this neutralises the meter. It just refuses to let the meter design the relationship.
Amanda kept walking into her classroom, and that is the part I find hardest to put down: less the dystopia of the meter, and more the ordinary act of us arriving in it anyway. The relational floor of teaching holds up a lot, including, for now, the consequences of someone else's pricing strategy. There is a question, somewhere in here, about how much weight that floor was ever supposed to carry.
References
- Brooker, C. & Ali, B. K. (2025). Common People. Black Mirror, S7E1. Netflix.
- Altman, S. (2026, March). Remarks at BlackRock US Infrastructure Summit.
- HEPI & UCAS (2025). Student Generative AI Survey.
- EDUCAUSE (2025). AI Landscape Study: Into the Digital AI Divide. See also Inside Higher Ed, April 2025.