The CommonsBlog


What If... AI Compute Costs Soar?

Many groups are investing heavily in Claude Code, OpenAI Codex, and similar tools that apply generative AI to software development, especially actual code creation. I have played with Claude Code and OpenCode (for use with local Ollama models), and it is genuinely impressive what they can accomplish. The core argument for leaning heavily on genAI is cost-effectiveness. One pundit wrote:

And the economics are deliberately provocative: roughly ten dollars an hour of compute versus a developer’s cost of around $150 an hour.

However, that statement makes two assumptions:

  • Developers cost $150/hour, which is only true in select locations and for select people; the cited source of that value says it is an estimate for “a professional senior software developer in the United States”

  • That the cost of AI compute will remain steady, increasing roughly in line with the increase in the cost of software developers

The economics may not work out so well if either of those assumptions is unfounded. The first one certainly is, as many software developers over much of the world cost a lot less than $150/hour. And my bet is that the second assumption is also flawed, and that the cost of AI compute will climb drastically in the next few years.

“If Something Cannot Go On Forever, It Will Stop”

Uber, and to some extent Lyft, built their businesses up by spending investor cash on subsidizing rides:

Uber passengers were paying only 41% of the actual cost of their trips; Uber was using these massive subsidies to undercut the fares and provide more capacity than the competitors who had to cover 100% of their costs out of passenger fares.

(from “Can Uber Ever Deliver? Part One – Understanding Uber’s Bleak Operating Economics”)

While extremely aggressive, this is a well-known play in the VC-backed startup playbook. The subsidies help in (at least) two ways:

  1. They get your customer base “hooked” on using your product or service, by making it so cheap that they do not think twice about using it

  2. They help build a moat, by preventing competition from entering the market (unless they too use the same subsidy approach), until such time as you are so large that competition is much less of a risk

There are signs that major AI vendors are doing the same thing, and frankly it would be surprising to me if they were not doing that. Ed Zitron wrote:

Based on an analysis of many users’ actual token burn on Claude Code, I believe Anthropic is burning anywhere from $3 to $20 to make $1, and that the product that users are using (and the media is raving about) is not one that Anthropic can actually support long-term.

Partly, that is based on the Anthropic financial reports. Partly, that is based on a crowdsourced look at how many tokens you get to use with one of the Claude monthly subscriptions, such as Claude Max. Effectively, you get a “bulk rate” discount with those subscriptions, so Anthropic gets less per-token revenue from subscription usage than they do from full-price API tokens.

If that analysis is correct, then the “ten dollars an hour of compute” really needs to be more like $30 to $200 per hour, just for Anthropic to break even. It might have to go higher if Anthropic wishes to turn a profit.
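For the record, the arithmetic behind that range is just the quoted hourly price multiplied by the burn ratio. A minimal sketch, using only the figures quoted above (the class and method names here are mine, not from any of the cited analyses):

```java
// Break-even arithmetic for the figures quoted above: if a vendor
// spends burnRatio dollars to earn each $1 of revenue, the advertised
// hourly price must scale by that ratio just to cover costs.
public class BreakEven {
    static int breakEvenPrice(int quotedPricePerHour, int burnRatio) {
        return quotedPricePerHour * burnRatio;
    }

    public static void main(String[] args) {
        int quoted = 10; // the "ten dollars an hour of compute" claim

        System.out.println("Break-even range: $"
                + breakEvenPrice(quoted, 3) + " to $"
                + breakEvenPrice(quoted, 20) + " per hour");
        // prints "Break-even range: $30 to $200 per hour"
    }
}
```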

(Not So) Tiny Bubbles

That, of course, assumes that Anthropic survives long enough to raise the prices.

Lots has been written about the prospect that the US stock market is in an “AI bubble”. Much of the recent run-up of the market has been tied to the “Magnificent Seven” firms: Alphabet, Amazon, Apple, Meta, Microsoft, NVIDIA and Tesla. And a lot of that is on the back of a financial ouroboros, with AI Company A investing in AI Company B, which turns around and agrees to buy a bunch of AI Company A’s stuff.

Whether due to AI-related issues (e.g., insufficient demand for all the data centers being built) or other impacts (e.g., war), there is a distinct chance that the bubble pops. If we wind up with a 2008-style financial crisis, what happens to OpenAI, Anthropic, and other similarly-placed firms?

It is certainly possible that they could survive, especially if they jack up their revenue by raising prices a lot. It is also possible that they will wind up being acquired by firms that already have hosting prowess (e.g., Alphabet, Amazon, Meta, Microsoft), with a side-effect of reducing competition in the generative AI market. And it is not out of the question that some AI model creators will simply crash and burn… which also reduces competition, by eliminating competitors the hard way.

Oligopolies Gonna Oligopple

(note: “oligopple” is not a real word, though it should be)

When there are few competitors in a market, the resulting oligopoly often results in higher prices. To an extent, that is the point of trying to establish an oligopoly in the first place.

It’s not like there are a lot of hosted frontier model providers. Right now, it feels like a race for market share and to build the moat to preclude other competitors from entering the market. But, eventually, the oligopoly will start raising rates, just because they can. If competition shrinks due to the AI bubble popping, the smaller oligopoly may get to the point of raising rates faster.

A lot of what Cory Doctorow calls “enshittification” stems from oligopoly: too few competitors in the market and few reasonable substitutes for what they offer. The resulting market power lets those few competitors do all sorts of things that are bad for their customers, like jacking up rates, because those customers have nowhere else to turn.

So, what happens if that Claude Max subscription climbs to $300/month? What if it climbs to $2,000/month? What if the cost of additional tokens rises commensurately? What if the other surviving oligopoly members raise their prices along the same lines? And what if it is no longer Claude Max but Microsoft Copilot Max Powered By The AI Formerly Known As Claude, due to a shakeout as part of an AI bubble pop?

(yeah, that name is long, but branding is hard)

Even at today’s prices, the economics of using frontier models for code generation only make sense when software developers are expensive. In a lot of the world, that is not the case today. With a serious bump in the price of AI compute, the economics will narrow the use cases even further.

What I’m Doing

I have serious ethical issues with hosted frontier models, so while I have a Claude Pro subscription (presently $17/month), my attention is more on local models. Qwen 3.5 has some promise, and it runs comfortably on a 64GB Mac. It is not nearly as capable as, say, Claude Opus 4.6, but some of that can be addressed by advance planning. I am experimenting with OpenCode commands to provide better prompts for Qwen 3.5, for things that I want done routinely. If I run into OpenCode limitations, I can use Koog and set up my own lightweight bespoke coding agent.

I am not trying to have something like Qwen 3.5 completely replace Claude Opus 4.6. I am trying to minimize my use of Claude, though, so I want to find ways that local models can do some of the routine (“grunt”) work. I will reserve Claude for that handful of places where it might be faster than me. As a result, my techniques will handle a massive spike in AI compute cost: I would drop Claude and rely on Natural Intelligence™ (🧠) for its role, while still getting benefit from the local model ecosystem.

Mar 06, 2026


Random Musings on the Android 17 Beta 2

Less than two weeks after the release of Android 17 Beta 1 (and its associated random musings), Beta 2 dropped. And since the “Platform Stability milestone” is slated for March, we could get that in just a few days.

😑

What You Should Test Very Soon

While the whole “bubbles” thing seems cute, and while Google thinks that just supporting multi-window is sufficient… I recommend testing how your app actually behaves in a bubble.

If your app might need to talk to devices on the local network, I also recommend examining ACCESS_LOCAL_NETWORK closely. Google’s documentation on this is very muddled.

What Seems Nice

The ACTION_PICK_CONTACTS API, for getting user-specified field-level access to contact data, seems promising.

The ACTION_OPEN_EYE_DROPPER API, for getting a specific on-screen color from the user, is interesting.

What Seems Cute But Largely Pointless

There is a new “handoff” API, centered around setHandoffEnabled() on Activity, that enables “handing off” running apps between devices, such as a phone and a tablet. My guess is that the percentage of Android app users owning 2+ devices is very low and will not be increasing much.

What Might Be Bubblicious

A task can now have a TaskLocation, “that consists of the host display identifier and rectangular bounds in the pixel-based coordinate system relative to host display”.

What Is Going Away

There had been a setContentCaptureEnabled() function on ContentCaptureManager to opt out of content capture. That is now deprecated – use FLAG_SECURE.
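For what it is worth, FLAG_SECURE is a long-standing window flag, and it is a much blunter instrument: it blocks screenshots and screen recording for the entire window, with content capture suppressed as a side effect. A minimal sketch of applying it (the activity name here is hypothetical):

```java
import android.app.Activity;
import android.os.Bundle;
import android.view.WindowManager;

public class SecureActivity extends Activity {
    @Override
    protected void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);

        // FLAG_SECURE blocks screenshots and screen recording for this
        // entire window, and content capture is suppressed as a side
        // effect. Unlike the deprecated setContentCaptureEnabled(), it
        // cannot be scoped more narrowly.
        getWindow().addFlags(WindowManager.LayoutParams.FLAG_SECURE);
    }
}
```

So, apps that only wanted to opt out of content capture, while still allowing screenshots, lose some granularity with this migration.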

The Bluetooth SCO support in AudioManager is largely deprecated, as they are routing you to newer “communication device” APIs.
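Those replacement APIs have been around since Android 12 (API Level 31). A rough sketch of routing call audio to a Bluetooth headset via the newer approach, with error handling omitted for brevity (the class name is mine):

```java
import android.content.Context;
import android.media.AudioDeviceInfo;
import android.media.AudioManager;

// Sketch of the "communication device" routing that replaces the
// deprecated startBluetoothSco()/stopBluetoothSco() calls.
public class ScoMigration {
    public static boolean routeToBluetoothHeadset(Context context) {
        AudioManager audioManager =
            (AudioManager) context.getSystemService(Context.AUDIO_SERVICE);

        for (AudioDeviceInfo device :
                audioManager.getAvailableCommunicationDevices()) {
            if (device.getType() == AudioDeviceInfo.TYPE_BLUETOOTH_SCO) {
                // returns false if the platform rejects the request
                return audioManager.setCommunicationDevice(device);
            }
        }

        return false; // no SCO-capable device is attached
    }
}
```

When the call ends, clearCommunicationDevice() restores the default routing.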

isCredential() and setIsCredential() on View are deprecated, as “the purpose and usage expectations of this property were never clearly defined”. Frankly, we can say that about a lot of the Android SDK.

What Else Seems Interesting

There is a new nativeService attribute, presumably going on the <service> manifest element. This will bypass the Android Runtime for the process hosting the service and instead load a native library of yours. For isolated services that eschew the Android SDK, this avoids the overhead of the Android Runtime.
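Purely as a guess, the manifest usage might look something like this. Only the nativeService attribute name comes from the API differences report; the value syntax, the library name, and the pairing with isolatedProcess are all assumptions on my part:

```xml
<!-- Hypothetical sketch: only the android:nativeService attribute name
     comes from the API differences report; the value syntax and the
     pairing with isolatedProcess are assumptions -->
<service
    android:name=".EndpointService"
    android:isolatedProcess="true"
    android:nativeService="libendpoint.so" />
```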

There are 7 new permissions, though most are role-specific.

There is a new Notification.Metric class and related classes like Notification.Metric.FixedFloat, but it is unclear what they are for.

DocumentsProvider now offers an API for trash.

A network security policy can now request domain encryption modes.

Feb 27, 2026


25 February 2026 Artifact Wave

In new artifacts in this wave, Lifecycle ViewModel Compose added many more multiplatform targets, and XR got a new testing artifact:

  • androidx.lifecycle:lifecycle-viewmodel-compose-iosarm64
  • androidx.lifecycle:lifecycle-viewmodel-compose-iossimulatorarm64
  • androidx.lifecycle:lifecycle-viewmodel-compose-js
  • androidx.lifecycle:lifecycle-viewmodel-compose-linuxarm64
  • androidx.lifecycle:lifecycle-viewmodel-compose-linuxx64
  • androidx.lifecycle:lifecycle-viewmodel-compose-macosarm64
  • androidx.lifecycle:lifecycle-viewmodel-compose-mingwx64
  • androidx.lifecycle:lifecycle-viewmodel-compose-tvosarm64
  • androidx.lifecycle:lifecycle-viewmodel-compose-tvossimulatorarm64
  • androidx.lifecycle:lifecycle-viewmodel-compose-wasm-js
  • androidx.lifecycle:lifecycle-viewmodel-compose-watchosarm32
  • androidx.lifecycle:lifecycle-viewmodel-compose-watchosarm64
  • androidx.lifecycle:lifecycle-viewmodel-compose-watchosdevicearm64
  • androidx.lifecycle:lifecycle-viewmodel-compose-watchossimulatorarm64
  • androidx.xr.projected:projected-testing

The roster of 800+ updated artifacts can be found here!

Feb 25, 2026


Random Musings on the Android 17 Beta 1

Last time around, with Android 16, we had two developer previews before a beta. This time, not so much. We went straight to Beta 1 for Android 17. Not only did we lose months of developer preview time, but Beta 1 came out three weeks later than did Android 16 Beta 1.

The good news is: there is not all that much in this release. The bad news is: the point of the release seems to be to break apps. Whether it’s immediately or a year out when the Android 17 targetSdk becomes required, there is quite a bit in this release that might screw up your app, even as things like adaptive screens and background audio hardening get all the attention.

There is a fair bit of documentation, but it is a bit disjointed. For example, the Android 17 Beta 1 announcement blog post mentions a change to the size of custom notification views that takes effect when you move to target Android 17… but the page on behavior changes for targeting Android 17 does not mention it.

Other documentation bugs include:

  • The API differences report claims a lot of changes were made in API Level 1 when that does not appear to be the case

  • They claim to have added a new DEVICE_PROFILE_FITNESS_TRACKER companion request, but that seems to have been around for years

  • While some places in the docs indicate that Android 17 will result in an API version of 37, note that Build.VERSION_CODES.CINNAMON_BUN is 10000 for Beta 1, continuing a long-standing tradition of using high temporary values for new version codes until the API changes stabilize

(though I do appreciate the choice of cinnamon bun as the tasty treat for this release! 😋)

My usual random musings focus on changes in the API differences report that do not seem to be covered elsewhere. Android 17 Beta 1 is a tiny release in terms of API changes, so there is not much to report:

  • FINGERPRINT_SERVICE is being officially removed from Context. Since that system service has not been around in a while, hopefully this will not affect you.

  • There is a new STREAM_ASSISTANT volume level defined in AudioManager

  • There is a new ACTION_SUPERVISION_SETTINGS in Settings. This action string may “show screen to manage supervision settings”. While the documentation does not state this, please assume that any given device might not offer this screen and plan accordingly. Personally, I worry about the potential ramifications of Android getting capabilities like X-ray vision that would be enabled in a super-vision screen.

  • Wait… I am now receiving word that “supervision settings” may not refer to actual super-vision powers. Carry on!

  • startSafeBrowsing() on WebView is now deprecated. Unsafe browsing presumably is still supported.
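Circling back to ACTION_SUPERVISION_SETTINGS: since any given device might not offer that screen, the usual defensive-launch pattern applies. A sketch, assuming the action string lives on Settings as the Beta 1 API differences report suggests (the class and method names are mine):

```java
import android.content.ActivityNotFoundException;
import android.content.Context;
import android.content.Intent;
import android.provider.Settings;

// Defensively launch the supervision settings screen, since some
// devices may not offer it. The Settings.ACTION_SUPERVISION_SETTINGS
// constant name is taken from the Beta 1 API differences report.
public class SupervisionLauncher {
    public static boolean openSupervisionSettings(Context context) {
        try {
            context.startActivity(
                new Intent(Settings.ACTION_SUPERVISION_SETTINGS));
            return true;
        } catch (ActivityNotFoundException e) {
            return false; // nothing on this device handles the action
        }
    }
}
```

Catching ActivityNotFoundException, rather than pre-checking with resolveActivity(), also sidesteps the package visibility filtering that can make resolveActivity() lie on Android 11+.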

Feb 14, 2026


11 February 2026 Artifact Wave

There were only two new artifacts this week:

  • androidx.media3:media3-effect-lottie
  • androidx.media3:media3-inspector-frame

The roster of 600+ updated artifacts can be found here!

Feb 11, 2026


Older Posts