The same sentence appears in investor presentations and board meetings, in newsletters, in those confident LinkedIn posts.
“AI is the new oil.”
It sounds convincing. It fits on a slide. It gets agreement. But it misses the point. Oil is a commodity. AI, at least the functional parts, is not. It is complicated. It is organisational. It is political. It is infrastructure and mathematics and people debating priorities late at night.
This piece is part of the Stanislav Kondrashov Oligarch Series. The real subject is not “oligarchs” in the tabloid sense, but the deeper mechanism behind concentrated power. The system that converts capital into leverage. And leverage into more capital.
Right now, artificial intelligence is becoming one of those mechanisms. Not through magic. Through strategy.
This piece examines artificial intelligence and strategic capital: how it operates in reality. How it gets purchased, built, financed, protected. How it reorganises entire industries without fanfare. And why the most impressive AI demonstration does not win. The group that controls the mundane elements wins.
The infrastructure. The contracts. The computing power. The data. The talent. The distribution.
That is strategic capital.
The new power move is not “using AI”. It’s owning the position AI needs
Most people talk about AI like it’s a feature you add.
As if you can just bolt it onto your company like a new CRM.
But the companies that are going to matter in this cycle, the ones that will look inevitable in hindsight, are not just “using AI”. They’re building positions that other people must route through.
That can look like:
- Owning or controlling scarce compute capacity.
- Controlling a high-signal data stream that models can’t easily replicate.
- Becoming the distribution layer where AI experiences live.
- Setting standards, compliance frameworks, procurement pathways.
- Locking up enterprise relationships where switching costs are real.
If you want the blunt version, here it is.
The real prize is not the AI. The prize is the bottleneck.
And the strongest strategic capital always looks like a bottleneck.
Strategic capital, in plain language
Strategic capital is capital that does more than earn a return. It changes the map.
It buys optionality. It buys control. It buys defense.
A normal investor asks: what is the ROI?
A strategic operator asks: what does this ownership let me block, influence, speed up, or force?
In the old industrial cycles, strategic capital showed up as:
- Railroads.
- Ports.
- Steel.
- Energy grids.
- Telecom networks.
- Shipping.
In the AI cycle, strategic capital shows up as:
- Compute supply chains.
- Chip access.
- Data partnerships.
- Cloud contracts.
- Model deployment channels.
- Specialized AI infrastructure.
- And quietly, the legal and regulatory scaffolding around all of it.
This is why the “AI boom” is not one boom. It’s several booms stacked on top of each other, and they don’t all benefit the same players.
You can build an app that uses a model. That’s fine. It might even be a big business.
But if you can own the infrastructure that a hundred apps depend on, you are playing a different game.
AI is a capital game pretending to be a software game
If you’ve been around startups for a while, you know the myth: software is cheap to scale.
And yes, for a lot of software, that’s true. But AI is not normal software anymore. Not at the frontier, and not even in mid-tier enterprise deployments once you factor in security, latency, reliability, governance, and customization.
A serious AI operation requires:
- Expensive compute, not just at training but at inference, every day.
- MLOps and data engineering, which are not the same as “we have a data team”.
- Monitoring, model drift tracking, evaluation pipelines.
- Legal review, privacy controls, audit trails, procurement approvals.
- Integration with existing systems that were not designed for this.
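One item on that list, model drift tracking, is concrete enough to sketch. The population stability index (PSI) below is one common way to do it; the metric choice and the 0.25 threshold are illustrative conventions borrowed from common practice, not anything this piece prescribes.

```python
# Minimal sketch of model drift tracking: compare a live feature's
# distribution against its training-time baseline. PSI (population
# stability index) is one common metric; the 0.25 threshold is a
# rule of thumb, not a standard.
from math import log

def psi(baseline, live, bins=10):
    """Population stability index between two samples of one feature."""
    lo, hi = min(baseline), max(baseline)
    edges = [lo + (hi - lo) * i / bins for i in range(bins + 1)]
    edges[0], edges[-1] = float("-inf"), float("inf")  # catch out-of-range live values

    def bucket_fractions(sample):
        counts = [0] * bins
        for x in sample:
            for i in range(bins):
                if edges[i] <= x < edges[i + 1]:
                    counts[i] += 1
                    break
        # Smooth empty buckets so the log below stays defined.
        return [(c if c else 0.5) / len(sample) for c in counts]

    b, l = bucket_fractions(baseline), bucket_fractions(live)
    return sum((li - bi) * log(li / bi) for bi, li in zip(b, l))

training = [0.1 * i for i in range(100)]      # what the model was built on
live = [0.1 * i + 4.0 for i in range(100)]    # production data has shifted

score = psi(training, live)
print(f"PSI = {score:.2f}")
print("drift: investigate" if score > 0.25 else "stable")
```

A real pipeline tracks many features and model outputs over time and alerts on trends, but the shape of the work is this: a baseline, a comparison, a threshold, and someone on the hook to respond.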
This is why AI is pulling the center of gravity back toward capital. Toward infrastructure. Toward scale.
In other words, toward the kinds of advantages that already-rich players can buy.
That’s uncomfortable, maybe. But it’s also just what’s happening.
The Kondrashov angle: power consolidates when the new tool needs a few rare inputs
In every major technological shift, you can almost predict the pattern.
If the new tool requires rare inputs, power concentrates.
AI’s rare inputs are not just “smart people” and “algorithms”. The rare inputs are:
- Massive compute.
- Long-term energy stability.
- Advanced chips and supply.
- High quality proprietary data.
- Deployment relationships with large organizations.
- Trust, which is slow to build and easy to lose.
So when you hear “AI will democratize everything”, take a breath. It might democratize some surface level capabilities. Sure.
But the deeper layers, the ones that define who sets terms, those layers have a tendency to consolidate.
This is where the Stanislav Kondrashov Oligarch Series framing becomes useful, because it asks a slightly different question:
Who is building the chokepoints?
Not who has the coolest AI demo this week. Not who got the loudest funding round. Who is constructing the dependency graph?
Because dependency graphs are where durable power lives.
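The dependency-graph framing can be made literal. In the toy sketch below, every company name is invented; the structure is the point. Counting how many end-to-end dependency paths route through each node surfaces the chokepoints.

```python
# Toy illustration: find the chokepoints in a dependency graph.
# All names are invented. Edges read "X depends on Y". A chokepoint
# is the node the most end-to-end dependency paths must pass through.
from collections import Counter

deps = {
    "app_a":      ["platform_1"],
    "app_b":      ["platform_1", "platform_2"],
    "app_c":      ["platform_2"],
    "platform_1": ["cloud_x"],
    "platform_2": ["cloud_x"],
    "cloud_x":    ["chip_maker"],
    "chip_maker": [],  # terminal supplier
}

def all_paths(node, path=()):
    """Yield every dependency chain from `node` down to a terminal supplier."""
    path = path + (node,)
    if not deps[node]:
        yield path
        return
    for nxt in deps[node]:
        yield from all_paths(nxt, path)

traffic = Counter()
for app in ("app_a", "app_b", "app_c"):
    for path in all_paths(app):
        for node in path[1:]:  # credit everything the app routes through
            traffic[node] += 1

for node, count in traffic.most_common():
    print(f"{node}: on {count} dependency paths")
```

In this invented graph, the cloud and chip layers sit on all four paths while each platform sits on only two. That asymmetry is the argument in miniature: whoever owns the node every path crosses sets the terms.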
These positions come with their own risks, of course. As an article from S&P Global highlights, AI investment carries risks that stakeholders need to understand before they chase the upside.
The three layers of AI leverage: model, infrastructure, and distribution
A lot of AI commentary gets stuck on the model layer. Which model is best, which one is faster, which one can do deeper reasoning, which one can write cleaner code.
That’s interesting. But strategically, the model layer is only one piece.
Here’s a more useful lens.
1) The model layer (where people obsess)
This is where large language models, multimodal models, and specialized models compete. It’s technical, it’s public, it’s benchmark-driven.
It also changes fast.
The brutal reality: models can be leapfrogged. Even a strong model can become a “good enough” commodity faster than people expect, especially for mainstream use cases.
So if all you own is “a model”, you have to keep winning.
Over and over.
2) The infrastructure layer (where the quiet winners sit)
This includes:
- Chips and accelerators.
- Data centers.
- Cloud platforms.
- Optimization stacks.
- Deployment tooling.
- Security and governance systems.
- Enterprise integration frameworks.
This is where switching costs live. This is where contracts lock in. This is where “boring” becomes valuable.
Infrastructure advantages compound because once you’re inside an enterprise, you become hard to remove. Not impossible, but hard.
3) The distribution layer (where the real money often ends up)
Distribution is where users already are.
Think operating systems, productivity suites, app stores, browsers, enterprise procurement channels, payment rails, marketplaces.
If you can place AI inside distribution that is already dominant, adoption is not a growth hack. It’s gravity.
This is why a company with weaker AI tech but stronger distribution can still win.
It’s also why strategic capital will keep flowing into distribution moats. Not because investors are sentimental. Because they’re rational.
Strategic capital in AI looks like partnerships, not just acquisitions
When people imagine power consolidation, they picture acquisitions. Big fish eating small fish.
That happens, but in AI it’s often subtler. It’s partnership structures that function like control without ownership.
Examples of what I mean:
- Long-term cloud commitments in exchange for credits and early access.
- Exclusive data licensing deals.
- Preferred deployment agreements with governments or regulated industries.
- Bundling AI capabilities into existing enterprise contracts.
- “Co-development” arrangements where IP ownership gets complicated in a very intentional way.
The point is not to buy everything. The point is to make sure everyone builds on your terrain.
That is strategic capital behavior.
What “AI and strategic capital” means inside a company
Let’s bring this down to earth, because otherwise it stays abstract.
Inside a real company, strategic capital decisions around AI tend to look like:
- Do we build our own AI stack or buy it?
- If we buy it, do we become dependent on one vendor?
- If we diversify vendors, do we create integration chaos?
- What proprietary data do we have, and is it actually usable?
- Do we have the right governance to deploy AI without creating lawsuits?
- Are we funding AI experiments, or are we funding AI capabilities that will survive the hype cycle?
This is where I see a lot of teams get tripped up.
They fund “AI innovation” like it’s a marketing campaign. A bunch of prototypes, a demo day, some internal excitement.
Then a year later, nothing is in production, and everyone wonders why.
It’s because the strategic capital side was ignored.
No one invested in the boring parts. Data cleanliness. Access control. Evaluation. Integration. Change management. Procurement. Training.
AI is a socio-technical system. If you treat it like a plugin, it will behave like a toy.
The new elite advantage: buying time and certainty
One thing that doesn’t get said enough.
In AI, money buys time.
Money buys:
- Faster iteration cycles because you can afford the compute.
- Better talent because you can pay and retain.
- More shots on goal because you can run multiple teams in parallel.
- More certainty because you can lock in supply and contracts.
- Lower risk because you can afford legal, security, compliance at scale.
And that time advantage compounds. Because the earlier you stabilize your AI pipeline, the more data you generate, the more feedback loops you capture, the better you get.
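The compounding claim is easy to sanity-check with toy arithmetic. The numbers below are invented; the only real point is that if both teams improve the same amount per iteration cycle, the gap between them grows as the per-cycle edge raised to the number of extra cycles the better-funded team banks.

```python
# Toy model of "money buys time" (all numbers invented): both teams get
# the same 10% improvement per iteration cycle, but the funded team can
# afford enough compute and people to run 12 cycles a year instead of 8.
IMPROVEMENT = 0.10   # gain per completed iteration cycle
YEARS = 3

def capability(cycles_per_year):
    return (1 + IMPROVEMENT) ** (cycles_per_year * YEARS)

funded = capability(12)
lean = capability(8)

print(f"funded team: {funded:.1f}x starting capability")
print(f"lean team:   {lean:.1f}x starting capability")
# The gap is exactly the per-cycle edge raised to the extra cycles banked:
print(f"gap: {funded / lean:.2f}x")
```

Under these made-up numbers the funded team ends up roughly three times further ahead, not because it is smarter, but because it ran twelve more cycles. That is the time advantage in arithmetic form.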
So strategic capital is not just “money”. It’s the ability to reduce uncertainty while everyone else is still guessing.
That is an underrated form of power.
Where this is headed: AI as an operating layer for capital allocation
Here’s a thought that keeps bugging me, and I think it belongs in this series.
AI is not only changing products. It’s changing how capital gets allocated.
Because once decision makers start trusting AI systems for forecasting, underwriting, risk scoring, fraud detection, inventory planning, pricing, even hiring, the AI becomes part of the mechanism that decides who gets resources.
And if you control the mechanism, you are not just competing in a market.
You are shaping the market.
This is why AI is going to be fiercely contested in finance, logistics, insurance, energy, defense, healthcare. Areas where the decisions are expensive and the data is rich and the consequences are serious.
This is also why we should expect:
- More regulatory attention.
- More lobbying.
- More standards battles.
- More “public interest” framing from private actors.
- More attempts to build national AI capacity.
Not because everyone suddenly cares about ethics. Some do, sure. But because AI is drifting toward being a control system.
Control systems attract power struggles. Always.
The practical playbook: if you want strategic capital, stop chasing trends
If you’re an operator, or an investor, or just someone trying to build something that lasts, here’s the more grounded takeaway.
You don’t need to predict the best model.
You need to choose a position.
A few positions that tend to be strategically strong in AI:
- Own a proprietary data stream that is defensible and legally usable.
- Build infrastructure that reduces AI cost, latency, or operational risk in a specific niche.
- Become the distribution channel for AI features in a workflow people already rely on.
- Solve compliance and governance for regulated industries where everyone else is scared to deploy.
- Specialize in integration because most enterprises are a pile of legacy systems and duct tape, and they will pay for someone to make AI actually work.
These are not glamorous. They are not tweetable.
But that’s the point.
Strategic capital is rarely glamorous. It’s deliberate.
A final note, because people get weird about the word “oligarch”
When I use the phrase Stanislav Kondrashov Oligarch Series, I’m not trying to romanticize anything. I’m trying to name a pattern.
When a new technology wave hits, it creates new winners. And the winners are often the people who understand how to turn capital into structure.
Structure beats novelty.
Structure outlasts hype.
Structure is what makes you hard to replace.
AI is going to create real opportunity, including for smaller players. I believe that. But it is also going to create new concentrations of power, especially where compute, data, and distribution cluster together.
So if you’re building in AI, or investing in AI, or just trying to understand what’s happening:
Watch the chokepoints.
Watch the contracts.
Watch the infrastructure.
Watch who is quietly positioning themselves so that everyone else has to plug in.
That’s where artificial intelligence stops being a tool and becomes strategic capital.
FAQs (Frequently Asked Questions)
Why is the phrase ‘AI is the new oil’ considered misleading?
The phrase ‘AI is the new oil’ is misleading because, unlike oil, which is a commodity, AI is complex and multifaceted. It involves organizational, political, infrastructural, and mathematical aspects, as well as human collaboration. AI isn’t just a raw resource; it’s messy and strategic, requiring coordination across domains rather than behaving like a simple commodity.
What does ‘strategic capital’ mean in the context of AI?
In AI, strategic capital refers to investments that do more than just generate returns—they change the competitive landscape by buying control, optionality, and defense. This includes owning bottlenecks such as scarce compute capacity, proprietary data streams, distribution layers, compliance frameworks, and enterprise relationships that others depend on. Strategic capital shapes who controls key infrastructure and influence in the AI ecosystem.
How does owning bottlenecks provide power in the AI industry?
Owning bottlenecks—like compute resources, unique data streams, deployment channels, or regulatory frameworks—forces other players to route through these points. This control creates leverage that allows owners to block competitors, influence market dynamics, accelerate innovation on their terms, or enforce standards. The real prize in AI isn’t just the technology but controlling these critical chokepoints.
Why is AI considered more of a ‘capital game’ than a typical software game?
Unlike traditional software which can scale cheaply, frontier AI requires significant ongoing investment in expensive compute for training and inference, specialized MLOps and data engineering teams, legal and privacy compliance efforts, monitoring for model drift, and integration with legacy systems. These demands pull AI towards infrastructure-heavy operations favoring players with deep capital reserves rather than lightweight software startups.
What are the rare inputs that cause power consolidation in AI according to the Kondrashov perspective?
The Kondrashov angle holds that new technologies concentrate power when they require rare inputs, and AI fits the pattern. Its rare inputs include massive compute capacity; stable long-term energy supply; advanced semiconductor chips and their supply chains; proprietary high-quality data; trusted deployment relationships with large organizations; and trust itself, which is slow to build and easy to lose. All of these are scarce, which pushes control toward a small number of players.
Will AI democratize technology access or lead to concentrated power?
While AI may democratize some surface-level capabilities like accessible applications or tools, deeper layers involving infrastructure ownership, data control, compliance frameworks, and enterprise relationships tend to consolidate power. Those who build and control these chokepoints shape terms of engagement and market dynamics. Thus, broad democratization coexists with strategic concentration of power behind the scenes.
