Reflections 3rd May
AI is not the problem we need to address…
I have been thinking about the news that GitHub Copilot is moving to token-based billing. It is a small story, on the face of it. A subscription becomes a meter rather than a licence, and the price of using AI, which has been hidden inside a monthly fee subsidised by enormous quantities of venture capital, is becoming visible.
Where GitHub leads, other vendors will follow; the move is structural rather than transitional, as Ed Zitron has been at pains to point out in this excellent article. The economics have never been in doubt; the marketing effort is evolving as business need turns to addiction; and the bill is now arriving.
What has been turning over in my head is not the price story itself; it is what lies beneath it. For three years we have been told, in roughly equal measure, that AI is a transformative resource and that it is essentially free at point of use. And we have chosen to believe it. We have been encouraged to use it for everything; organisations have built workflows on the assumption that an AI session costs roughly nothing; whole categories of professional task have quietly been outsourced to it on the implicit understanding that the costs would never come due. The tokens were treated as if they came from a tap, and the tap was assumed to be infinite.
They are not, and it is not. The interesting question is what becomes visible when this is finally admitted.
The idea I cannot quite let go of? That the binding constraint on what we can do with AI was never its price.
And we have been here before.
Consider antibiotics. We discovered them, a century ago, as something close to a miracle; and then proceeded to use them with the casualness of people who believed the abundance would last forever. We gave them to livestock as a growth promoter; prescribed them for viral infections they could not touch; built whole protocols of acute medicine on the assumption that the next antibiotic would always be along. Roll forward, and the World Health Organisation talks about a post-antibiotic era, and a resistance ledger being paid by people who never personally misused a single dose. Nobody in that story was a villain. Everybody was rational. The window in which the resource could have been husbanded with care closed without anyone noticing it was closing.
Or consider Concorde, and the Apollo programme. We had supersonic passenger aviation between 1976 and 2003; and crewed lunar capability between 1969 and 1972. In each case the capability was real, was used, became familiar, and then went away. Technological progress is not unidirectional; the capability available today is not guaranteed to be the capability available in 2030. Anyone working in 1990 to imagine the world of 2025 would have got a great deal wrong, and one of the things they would have got wrong is the assumption that what they had would still be there.
Both precedents are sobering, but neither is quite the point.
The point is that something else is going on, underneath the price story, that has very little to do with AI specifically and everything to do with how scarcity behaves over time. Whenever a previously scarce capability becomes abundant, the scarcity migrates; it does not disappear. Cheap books did not abolish the scarcity of thinking; they relocated it to attention and discernment. Cheap information did not abolish the scarcity of knowledge; it relocated it to the time needed for evaluation and synthesis. Cheap generative output will not abolish the scarcity of creativity; it will relocate it to the capacity to know what is worth making.
The migration of scarcity is the historical constant. The location of scarcity is what changes. And the part that should be giving any honest practitioner of AI a slight chill at the back of the neck is that the new location of scarcity in the AI era turns out to be exactly the thing we have spent a century training people not to develop.
Intelligence is multi-faceted. When one looks at it carefully, it is not a property of agents; it is a property of realisations. Capacity that is not realised, in a particular situation, by an agent who has something at stake in the outcome, is not intelligence in any sense that bears on the world. The Greeks had a word, mētis, for the practical wisdom that knows what to do in the situation at hand; it was always understood to be a thing that could not be possessed in the abstract, only exercised. It exists in the moment of judgment or it does not exist.
Recall and process can be made cheap by removing degrees of discretion, and AI is making them cheaper still. But mētis has not become cheaper at all. It may, if anything, have become more expensive, because the conditions in which mētis is formed (sustained engagement with consequence, the willingness to be wrong in ways that matter, the long apprenticeship of a particular occupation) have been steadily stripped from working life over the last century. Even if AI were free, the binding constraint on what useful work the system could do would be the supply of mētis-capable agents.
We have spent a century creating industrialised education and training for the kinds of intelligence whose realisation is becoming cheap, and almost none for the kind whose realisation is becoming load-bearing. This is the deeper scarcity. It is not silicon; it is the human capacity to know what silicon is for.
There is a story from the early eighteenth century, in the streams of Minas Gerais. The Brazilian gold prospectors, the garimpeiros, working the river beds for alluvial gold, kept finding peculiar pebbles among the gravel; hard, awkward little stones that were not gold and were not anything else they recognised. They used them, the records say, as gambling chips. They used them as weights for fishing lines.
The pebbles were diamonds. The European market at the time was calibrated to Indian supply and the recognition had not yet reached Minas Gerais; the garimpeiros were not stupid, and they were not primitives. Theirs was what Daniel DeNicola calls an ignorance of place; they simply had no frame in which their pebbles were what their pebbles actually were. Once the recognition arrived, of course, the price followed; the diamonds were in time priced into a market that the garimpeiros had no part in, and the men who had been weighting their lines with them were left with their lines.
I suspect we are the garimpeiros, right now, using diamonds as fishing weights.
The substance of AI is real, but our individual use of it is incommensurate with that substance. The window in which we get to learn what AI actually is remains open partly because the price has not yet caught up. When the price does catch up, and it will, we will either have learned to use these things as something more than weights, or we will be priced out of them before we ever learned to.
There is a question in the room which the framing above leaves alone, and which deserves at least a moment. Whether AI is genuinely intelligent or only an elaborate simulation of intelligence is one I find I can hold both ways at once. For the realisations that bear on drafting, summarising, surfacing pattern, the question seems to me beside the point; the realisation serves the situation and the silicon nature of the realising agent is incidental. For the realisations that matter most, though, the ones that require an agent who will live with the consequences, I am not sure the question is beside the point at all; mētis is constituted in part by the agent’s relation to what is at stake, and an agent with nothing at stake cannot, as far as I can see, supply it. Both moves seem to me to be true at once, and this is not the place to resolve them.
What this might be the place for, perhaps, is a small set of questions worth carrying around for a fortnight.
If the price of AI rose tomorrow to its true cost, what would your organisation stop doing? What would it be glad to keep doing?
What would your team look like if AI were a metered input you had to budget for, like legal advice or laboratory time, rather than a flat-rate utility?
What kind of judgment is worth a million tokens? Who in your organisation has it?
If we are the garimpeiros, what are the diamonds we are using as weights?
Would we recognise diamonds if we trod on them, while we are busy finding work for an intelligence that is becoming a commodity?
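The million-token question above invites a back-of-envelope sum. Here is a minimal sketch of what metered usage might cost a small team over a year; every figure in it (the blended price per million tokens, the tokens per session, the session counts) is an illustrative assumption of mine, not a vendor's number.

```python
# Back-of-envelope cost of metered AI usage for a small team.
# Every figure here is an illustrative assumption, not vendor pricing.

PRICE_PER_MILLION_TOKENS = 15.00  # assumed blended price, $ per 1M tokens
TOKENS_PER_SESSION = 8_000        # assumed tokens consumed per working session
SESSIONS_PER_DAY = 20             # assumed sessions per person per day
TEAM_SIZE = 10                    # people on the team
WORKING_DAYS = 220                # working days per year

tokens_per_year = TOKENS_PER_SESSION * SESSIONS_PER_DAY * TEAM_SIZE * WORKING_DAYS
cost_per_year = tokens_per_year / 1_000_000 * PRICE_PER_MILLION_TOKENS

print(f"{tokens_per_year:,} tokens/year, roughly ${cost_per_year:,.2f} at the assumed price")
```

Change any assumption and the annual figure moves linearly, which is rather the point: once the meter is visible, every one of these numbers becomes a budgeting decision rather than a rounding error inside a flat fee.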
I wonder about the answers to these, for my own work and for anyone else's, though I am in no particular hurry to find out. The diamonds will get repriced soon enough. The point, while we have them lying around, is to recognise them and to work out what we do with them.