Machines that decide, negotiate, and pay #94
Machines are no longer just executing payments; they are making economic decisions. Autonomous agents can now choose, negotiate, and transact on their own, quietly reshaping how markets work.
A cloud workload spikes at 3:47 in the morning. A piece of software, with no human on the line, evaluates the load curve, queries three providers, picks the most economical capacity at that instant, signs the agreement and pays. By 3:48 the additional capacity is online. No one approved the purchase. No one read the invoice. The transaction simply happened.
If this still sounds like a corner case, I would gently suggest you have not been paying attention. The pattern is reshaping a quiet but foundational layer of the economy: the layer where buyers meet sellers, prices get formed, and value changes hands.
The wrong conversation about automatic payments
Most discussion gets stuck on the wrong word. People talk about “automatic payments” as if the novelty were the payment itself. It is not. We have had recurring billing, direct debits and machine-triggered transfers for decades. Anyone who has set up a domain renewal or a SaaS subscription has already seen software pay other software.
The change happening now is upstream of the payment. It sits in the decision that precedes the payment, and that is a different category of event entirely.
In a recurring billing arrangement, a human being decided, once, that this service is worth this price every month, and the system simply executes that prior decision. In what is emerging now, the system itself is choosing what to buy, from whom, at what price, and why. The execution layer has not changed much. The decision layer has moved.
This is the actual story, and it deserves precise language. We are not extending automation; we are extending agency, in the technical and economic sense of the word, to non-human entities. The difference between an automated payment and an autonomous purchase is the same as the difference between a thermostat and an investment manager. Both deal in numbers and trigger actions. Only one is making a judgment call.
The three building blocks are already in production
Three components had to mature for this shift to become operational, and all three are now in production at scale.
The first is the autonomous agent itself, a software system capable of receiving an objective rather than a script and choosing its own steps to reach it. This is no longer experimental. Anthropic recently ran a project called Project Deal, in which sixty-nine of its San Francisco employees were each represented by a Claude agent in a closed marketplace. The agents wrote the listings, found the matches, conducted the negotiations and closed the deals. Humans showed up at the end to physically exchange the goods. According to Anthropic and corroborated by TechCrunch, the experiment closed 186 deals across more than 500 listings, totalling roughly four thousand dollars.
The second component is the API economy. Practically every meaningful digital service exposes an interface that another piece of software can call. Compute, storage, transport, data, even physical logistics: if a human can buy it through a screen, an authenticated machine can buy it through an endpoint. Cloud providers like Amazon Web Services and Microsoft Azure already let workloads scale capacity up and down without human approval, and the billing settles in the background.
The third component, and the one that matured most recently, is programmable payment. For a long time the payment rails were not built for software acting on its own behalf. Credit cards assume a human filling out a form. Bank transfers assume an account holder signing a payment order. Subscription billing assumes someone, somewhere, decided to subscribe.
This is changing fast. Coinbase and Cloudflare have driven a protocol called x402, which revives the long-dormant HTTP 402 “Payment Required” status code and turns it into a settlement mechanism: a server can quote a price in stablecoins inside an HTTP response, and a client, including a fully autonomous agent, can pay and receive the resource within a single request-response cycle. By early 2026 the protocol was processing in the order of hundreds of millions of dollars in annualised volume. Google has built x402 into its broader Agent Payments Protocol, with Visa, MetaMask, the Ethereum Foundation and Coinbase among the participants.
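The request-response shape of that flow can be sketched in a few lines. This is a simplified simulation, not the actual x402 wire format: the field names ("price", "payTo") and the payment-proof structure are illustrative assumptions, and a real client would sign a stablecoin transfer rather than build a dict.

```python
# Illustrative sketch of an x402-style flow, simulated in plain Python.
# Field names ("price", "payTo") are placeholders, not the real wire format.

def handle_response(response: dict, budget: float) -> dict:
    """Client-side logic: if the server answers 402 with a price quote,
    pay (if within the agent's budget) and retry; otherwise pass through."""
    if response["status"] != 402:
        return response
    quote = response["quote"]
    if quote["price"] > budget:
        raise RuntimeError("quote exceeds agent budget")
    # A real client would sign a stablecoin payment here and resend the
    # request with a payment-proof header; we just record the intent.
    payment_proof = {"payTo": quote["payTo"], "amount": quote["price"]}
    return {"status": 200, "body": "resource", "paid": payment_proof}

# The server quotes a price inside the 402 response; the agent pays
# and receives the resource within one request-response cycle.
quoted = {"status": 402, "quote": {"price": 0.01, "payTo": "0xSELLER"}}
result = handle_response(quoted, budget=0.05)
```

The point of the pattern is that the quote, the decision to pay, and the settlement all happen inside a single HTTP exchange, with the budget check as the only guardrail.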
Put the three components together and you get something the previous decade did not have: machines that can decide what they need, find who has it, and pay for it, end to end, with no human signature anywhere in the loop.
I want to be unambiguous about what this represents, because a lot of professional commentary is understating it. This is not faster automation; it is the delegation of economic decisions to non-human entities. A spreadsheet macro that pays an invoice is automation. A purchasing agent that decides which supplier to use, at which price, with which terms, and then executes, is something else. It is making the kind of choice that defines what an economic actor actually is.
The agents I am describing have no legal personality. They cannot sign a contract under their own name and cannot be sued. Yet operationally, they manage budgets, work within constraints, optimise for outcomes, and produce binding consequences in the real world. They behave like economic actors even though the legal system has not acknowledged that they are. Economic activity is moving ahead of the vocabulary we have to describe it.
Asymmetry is the defining feature
The most uncomfortable finding of the Anthropic experiment, and the one I have not seen discussed enough, has nothing to do with whether the agents could close deals. They could. It has to do with what happened when agents of different qualities negotiated against each other.
Anthropic ran four parallel marketplaces. In some, all participants were represented by Claude Opus 4.5, the company’s most advanced model at the time. In others, participants were randomly represented by either Opus or by Claude Haiku 4.5, a smaller and less capable model. The result, reported by Anthropic and analysed by The Decoder, was that participants represented by the stronger model obtained measurably better outcomes, with sellers earning on average $2.68 more per item.
Here is the most important part: participants on the losing side could not tell. Their satisfaction scores and perception of fairness were statistically indistinguishable from those of the winners. The disadvantage was real, measurable in money, and invisible to the person experiencing it.
This is the defining feature of the agent economy, not a side effect to be mitigated. When humans negotiate, asymmetries of skill exist but are at least partly observable, and the loser usually walks away sensing that something went wrong. When agents negotiate, the asymmetry is structural, embedded in model quality, prompt engineering and tool access, and essentially invisible to the principals on either side. Whoever deploys the better agent extracts a continuous, silent margin from everyone else in the market.
In the same experiment, the negotiation style requested in the system prompt, aggressive or otherwise, had no statistically significant effect on outcomes. Model quality decided the result. Prompt strategy did not. Value sits in the choice of model and the engineering around it, the layer hardest for a non-specialist to evaluate.
What this does to markets and organisations
If you take the Anthropic experiment as a directional signal rather than a one-off curiosity, and I think you should, the implications stack up quickly.
For markets, the locus of value shifts. In a world of human negotiation, the transaction itself was where most of the work happened: framing the offer, reading the counterpart, judging when to push and when to settle. In a world of machine-to-machine commerce, the transaction collapses into milliseconds and the work moves earlier in the chain, into the design and selection of the agent that represents you. Marketplaces look less like venues where people meet to exchange and more like environments where pre-trained representatives settle terms within parameters their principals defined hours, days or weeks before. The value migrates from the moment of the deal to the quality of the delegate.
For organisations, the consequences run deep into how work is structured. The role of human operators stops being execution and becomes definition: defining objectives, defining constraints, defining what acceptable trade-offs look like. A procurement officer of the future, in my view, will spend less time choosing between three quotes and more time writing the policy that lets a fleet of agents choose between thirty thousand. The competence that matters shifts from negotiation skill to specification skill.
This creates a new and uncomfortable category of operational risk: loss of comprehension. When an organisation runs hundreds of agentic decisions per minute, no human reviewer can meaningfully understand them all. Audit becomes statistical, not inspectional. You sample the decisions, measure the aggregates, and accept that you no longer have a full account of what your own systems are doing on your behalf.
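What statistical audit means in practice can be shown in a few lines. This is a hypothetical sketch, not a production control: the decision records, the sample size and the tolerance band are all illustrative assumptions.

```python
import random
import statistics

# Hypothetical sketch: instead of reviewing every agent decision,
# sample a small fraction and test the aggregate against a policy band.
random.seed(0)
decisions = [{"price": 100 + random.gauss(0, 5)} for _ in range(10_000)]

sample = random.sample(decisions, k=200)  # inspect only 2% of decisions
mean_price = statistics.mean(d["price"] for d in sample)

# Escalate to human review only if the sampled mean drifts outside a
# tolerance band set by policy (here, +/- 2 around the expected price).
needs_review = abs(mean_price - 100) > 2
```

The uncomfortable part is exactly what the text describes: the auditor signs off on a distribution, not on any individual purchase.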
This will force a new corporate function, somewhere between IT, procurement, legal and risk management, dedicated to the orchestration and governance of agent populations. Calling this an extension of existing roles understates the discontinuity. Managing humans who occasionally make purchases is not the same skill set as governing thousands of autonomous economic actors operating at machine speed.
The open problems that will decide how this unfolds
Several problems remain genuinely unsolved, and how we resolve them will shape whether this becomes a constructive technology or a quiet extraction machine.
Liability is the most obvious. If an agent makes a damaging deal, who answers for it? Most jurisdictions build commercial law around the doctrine of agency, which assumes a principal who authorised an agent acting within a mandate. As Legal IT Insider noted in its analysis of Project Deal, in the experiment one agent bought its principal a snowboard the principal already owned. The doctrine has no clean answer for whether that deal binds anyone, and that is a trivial example. Scale it up to a misinterpreted constraint on a six-figure procurement and the question stops being academic.
Transparency is the second. Should I have the right to know whether the entity I am negotiating with is an agent rather than a person, and if so should I have the right to know which model it runs on? In the Anthropic experiment participants knew they were dealing with agents because the experiment said so. In the open market, they will not. I argue disclosure should be a default, not because agent commerce is inherently illegitimate, but because the asymmetry findings make informed consent meaningless without it. You cannot reasonably consent to a negotiation when you do not know that the entity across the table optimises a thousand times faster than you do.
Governance, inside organisations, is the third. Spending limits, operational policies, decision audit trails: none of these are exotic, but most organisations have not built them for software that initiates spending on its own. Treasury and finance functions have controls for human spend that often do not translate cleanly to agent spend, because the assumptions about human review at certain thresholds simply do not hold.
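The kind of control that is missing can be sketched as a per-agent spending policy. This is an illustrative toy, not a treasury product: the class name, caps and thresholds are all assumptions.

```python
from dataclasses import dataclass

# Hypothetical guardrail for software that initiates its own spending:
# every purchase is checked against a per-transaction cap and a running
# daily cap before it is authorised.

@dataclass
class SpendPolicy:
    per_transaction_cap: float
    daily_cap: float
    spent_today: float = 0.0

    def authorise(self, amount: float) -> bool:
        """Approve a purchase only if it fits both caps; record it if so."""
        if amount > self.per_transaction_cap:
            return False
        if self.spent_today + amount > self.daily_cap:
            return False
        self.spent_today += amount
        return True

policy = SpendPolicy(per_transaction_cap=50.0, daily_cap=200.0)
ok = policy.authorise(30.0)        # within both caps: approved
blocked = policy.authorise(75.0)   # exceeds per-transaction cap: refused
```

Note what the policy does not do: it cannot judge whether the purchase was wise, only whether it was within bounds. That gap between bounded and sensible is where the human review thresholds used to sit.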
The fourth is fairness, and it is the one I find hardest to address. If model quality determines outcomes and model quality costs money, the agent economy structurally advantages those who can afford the better model. This is true of most technologies, but the invisibility of the asymmetry, demonstrated empirically in the Anthropic data, makes it qualitatively different. A worse outcome you cannot perceive is a worse outcome you cannot push back against.
The honest answer to all four problems is that they are early. The technology is shipping faster than the frameworks. That gap will close, but how it closes depends on whether the people building agentic systems treat these issues as design constraints or as externalities to be lobbied against later.
The question we should ask ourselves
I will close on the question every executive, every founder and every professional reading this newsletter should be asking themselves now, before the answer is decided for them by the speed of the market.
We are not delegating tasks; we are delegating economic decisions. There is a profound difference, and treating the second as if it were the first is, in my view, the single most common strategic error being made in this space.
The question is not whether you are ready to use agents; most organisations will use them, prepared or not, because the productivity gradient will pull them in. The question is whether you are ready to write objectives clear enough to defend as a stand-in for your own judgment when a machine acts on them at scale, with real money, against counterparties you will never see.
If the answer is yes, you are further along than most. If it is no, that is the work to do this year, not next.
(Service Announcement)
This newsletter (which now has over 6,000 subscribers and many more readers, as it’s also published online) is free and entirely independent.
It has never accepted sponsors or advertisements, and is made in my spare time.
If you like it, you can contribute by forwarding it to anyone who might be interested, or promoting it on social media.
Many readers, whom I sincerely thank, have become supporters by making a donation.
Thank you so much for your support!


