An operator's guide to building in the AI era

How to Start a Startup in the AI Era

The way to find a startup idea in the AI era is not to think about AI. It's to notice things that are broken, then ask: could AI fix this in a way that wouldn't have been possible two years ago?

— The thesis of this guide
© 2026 rameshnuti.com · Ramesh Nuti · Operator First · Svyam Ventures · ActionEDI

Paul Graham wrote "How to Get Startup Ideas" in 2012. The principles he laid down were so sturdy that most of them still hold. Work on problems you have yourself. Find the narrow well of users who desperately need what you're building. Become the kind of person who notices what's missing, rather than someone who brainstorms ideas in a vacuum.

But the terrain has shifted. The arrival of large language models, AI agents, and cheap inference has done something unusual: it changed which problems are solvable, almost overnight. Problems that required 20 engineers in 2021 can be prototyped by one person with a credit card and a good prompt in 2026.

That changes the filter. Not the direction of the filter — you still need a real problem, a real user with urgent need, a path to a big market. But it changes the set of problems that are now within reach. And it changes the kind of founder who can reach them.

This is a guide written from the operator's seat — for founders who've run something, who understand execution friction from the inside, and who want to apply the AI moment to real, gnarly, difficult-to-fix problems.

01

Problems Before Technology

The biggest mistake I see in the AI era is founders who start with a model. They discover that GPT-4 can summarize documents, so they build a document summarizer. They see that Claude can write code, so they wrap it in an IDE. This is backwards.

Graham's original insight holds, but it needs to be said louder in a world where the technology is suddenly, visibly, dramatically impressive: start with the problem, not the capability.

"Most AI startups I pass on are in love with the technology. The ones I fund are in love with the problem."

The right question is never "What can I build with AI?" It's "What breaks, hurts, or costs too much in my industry — and is there now an AI-powered approach that makes a real dent?"

The AI is a new ingredient in your pantry. But first you need to decide what you're cooking, and for whom.

The founder test

Write down the three most painful operational problems you faced in your last job or company. Now ask, for each one: was this problem technically unsolvable before 2023? If the answer is yes — even partially — you're holding a possible AI startup idea.

This is not brainstorming. This is memory. The best ideas are sitting in your own scar tissue.

Graham said the best startup ideas are ones you yourself want, can build, and that few others realize are worth doing. In the AI era, add a fourth condition: the solution is now technically feasible in a way it wasn't. That fourth condition is the unlock. It's why now is different from 2019.

02

Operator First, Investor Second

This is my personal investment identity, but it applies to founding too. The operators — the people who have run something, who understand what it actually takes to onboard a customer, close a contract, deal with compliance, hire and fire, manage cash — they see AI startup ideas that pure technologists miss.

Why? Because the hard problems aren't technical. They never were. The hard problems are the ones with messy human processes behind them. Procurement workflows. Claims adjudication. Compliance documentation. EDI transaction processing. These aren't glamorous. But they're enormous, they're painful, and they're sticky in ways that consumer social apps never are.

The Operator Advantage

You know where the bodies are buried

An operator who spent years in healthcare revenue cycle management knows exactly which step in the workflow causes 80% of the denials. A pure technologist would take 12 months of customer discovery to get to the same insight — if they ever do. Your operational experience is your unfair advantage at the idea stage. Don't undervalue it in favor of building something that looks more technically impressive.

The best AI founders I've backed or evaluated are not AI researchers. They're domain experts who learned enough about AI to recognize where the technology could eliminate the most painful friction in their domain. That combination is rare and enormously valuable.

If you're a programmer who has never operated anything, that's fine — but go find a co-founder who has. Not a business co-founder in the generic sense. A domain operator. Someone who has lived the problem from the inside.

03

The Narrow Well Still Wins

Graham's metaphor of the well versus the shallow lake remains the single most useful filter for evaluating startup ideas. You want a small number of people who urgently need what you're building — not a large number who could vaguely imagine using it someday.

In the AI era, this is actually more true, not less. The surface area of AI capabilities is vast and shallow. Millions of people can imagine some vague use for an AI assistant, a co-pilot, a summarizer. None of them need it urgently enough to pay for it, switch from existing tools, or become advocates.

Find the hundred people who will be furious if you shut your product down. Build for them first. The million users come later, or they don't — but you need that hundred first.

What does a narrow well look like in practice? It looks like a compliance officer at a mid-market manufacturer who currently spends three days a month on EDI mapping errors. It looks like a VP of Finance at a Series B company who is rebuilding every financial model from scratch every time the board changes a projection. It looks like a solo medical biller who is getting 40% of claims rejected on technicalities they can't track manually.

These people have urgency. They have budget. They have a specific workflow that is broken and costing real money. They are the narrow well.

Well-depth test

Describe your target user in one sentence. If that sentence contains the words "anyone," "people who," or "businesses that want to," your well is too broad. Try again. Add an industry, a role, a specific pain point, a specific consequence of not solving it. Keep adding specificity until the sentence sounds almost embarrassingly narrow. That's the well.
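The well-depth test above is mechanical enough to sketch in code. This is a toy heuristic, assuming only the three broad phrases named in the test; the function name and examples are illustrative, and real validation happens with customers, not string matching.

```python
# Toy heuristic for the well-depth test: flag the vague phrases the
# section warns against in a one-sentence user description.
BROAD_PHRASES = ["anyone", "people who", "businesses that want to"]

def well_depth_check(target_user: str) -> list[str]:
    """Return the broad phrases found in the description (empty = narrow)."""
    lowered = target_user.lower()
    return [phrase for phrase in BROAD_PHRASES if phrase in lowered]

# A too-broad well vs. an embarrassingly narrow one:
print(well_depth_check("Anyone who wants to be more productive with AI"))
# ['anyone']
print(well_depth_check(
    "A compliance officer at a mid-market manufacturer who loses "
    "three days a month to EDI mapping errors"
))
# []
```

An empty result doesn't prove the well is deep; it only means the sentence cleared the most obvious vagueness check. Keep adding specificity either way.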

04

The New Surface Area

One thing has genuinely changed: the scope of what one or two founders can build has expanded dramatically. In 2012, Graham said you should learn programming because software was eating the world. In 2026, the advice is different — you should learn how to direct AI, because AI-assisted building has collapsed the cost of creating sophisticated software.

This means the set of valid startup ideas is larger. Problems that once required a 10-person engineering team can now be prototyped by a domain expert with strong taste and the ability to iterate quickly with AI tools. The bottleneck has shifted from "can we build this?" to "do we deeply understand the problem we're solving?"

| Dimension | 2012-era software startup | 2026-era AI-native startup |
| --- | --- | --- |
| Core requirement | Ability to write code | Ability to deeply understand a problem domain |
| Prototype speed | Weeks to months for an MVP | Days to weeks with AI-assisted development |
| Moat source | Technical differentiation, network effects | Proprietary data, workflow depth, trust, distribution |
| Key bottleneck | Engineering capacity | Domain insight and customer trust |
| Competitive threat | Better-funded team with more engineers | Foundation model providers building down-market |
| Best founder profile | Technical founder with domain exposure | Domain operator who can direct AI development |

This table is not an argument for non-technical founders. Technical literacy still matters — probably more than ever, because you need to understand what AI systems can and cannot do, where they hallucinate, where they fail, and what the reliability thresholds are for your specific use case. But "technical" in 2026 means something different than it did in 2012. You don't need to be able to write a database schema from scratch. You need to be able to evaluate a system critically and direct its construction precisely.

05

Live in the Future, Build What's Missing

Graham's most elegant formulation: "Live in the future, then build what's missing." It came from Paul Buchheit's observation that people at the leading edge of a rapidly changing field live in the future. Their daily experience is of a world that doesn't exist yet for most people. The startup ideas are the gaps between that future and today.

This has a specific meaning in the AI era. If you are a heavy user of current AI tools — not just ChatGPT for writing emails, but actually using agents, evaluating models, building workflows, understanding where the edges are — you will naturally encounter the places where the tools break, the processes that aren't yet automated, the gaps where a real product is missing.

# The right question to ask yourself daily:

$ What did I do today that I expect AI to do for me in 2 years?
$ What frustrated me about current AI tools?
$ What would I have built if I had 30 engineers?

# If you can't answer these, you're not living close enough to the frontier.

Living in the future in the AI era means staying close to the research, yes — but more importantly it means being a power user of AI systems in your domain of expertise. The convergence of domain depth and tool familiarity is where the organic startup idea lives.

This is how I found ActionEDI. I wasn't looking for a startup. I was living in the operational reality of EDI transaction processing — something most people have never heard of, something that runs under the surface of billions of dollars of B2B commerce — and the gap became obvious. The technology had reached the point where it was solvable. The timing was right. The insight came from living in the problem, not from brainstorming.

06

The New Timing Question

Graham talks about the "prepared mind" — the founder who is at the leading edge of a fast-changing field, so when an external stimulus arrives, they recognize the opportunity where others don't.

In the AI era, timing has a new dimension. The question isn't just "is this a real problem?" It's also: "is this the exact moment when solving this problem has become feasible, and when the market is ready to pay for a solution?"

The AI moment has created a specific type of timing opportunity: the infrastructure is now available, but the vertically-integrated application layer does not yet exist. The models are commodity. The deployment tools are commodity. The gap is the domain-specific, workflow-deep, trust-earning application layer that serves a specific industry or function with something genuinely better than the existing workflow.

  1. The technology crossed a threshold. Something that required unreliable, expensive bespoke ML in 2021 can now be done reliably with off-the-shelf models. Find where that happened in your domain.

  2. The incumbents are flat-footed. Legacy software vendors in most B2B verticals are 18-36 months behind on AI integration. Their customers know this and are actively looking for alternatives.

  3. The regulation clock is ticking. In some verticals, AI adoption is being actively pushed by regulatory changes — or conversely, the compliance burden is becoming a forcing function for automation. Both create urgency.

  4. Labor dynamics have shifted. In specific industries, the talent shortage for certain skilled roles has reached a point where AI augmentation isn't a nice-to-have — it's existential for the businesses that need those roles filled.

If you can check two or more of these boxes for your target problem, the timing is real. If you can only check one, the timing may be early — which means you're right but you may run out of runway before the market catches up.
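The four signals and the two-box threshold above can be sketched as a simple scoring function. The signal names are my own shorthand for the numbered list; the function is an illustration of the rule, not a diligence tool.

```python
# The four timing signals from the checklist above, as a scoring rule:
# two or more checked boxes means the timing is real; one means you may
# be early; zero means there is no timing thesis yet.
TIMING_SIGNALS = (
    "technology_crossed_threshold",
    "incumbents_flat_footed",
    "regulation_clock_ticking",
    "labor_dynamics_shifted",
)

def timing_verdict(signals: set[str]) -> str:
    """Apply the two-box threshold to a set of checked timing signals."""
    checked = sum(1 for s in TIMING_SIGNALS if s in signals)
    if checked >= 2:
        return "timing is real"
    if checked == 1:
        return "possibly early -- watch your runway"
    return "no timing thesis yet"

print(timing_verdict({"technology_crossed_threshold", "incumbents_flat_footed"}))
# timing is real
```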

07

New Filters to Turn Off

Graham identified two filters to turn off: the schlep filter (fear of tedious work) and the unsexy filter (distaste for unglamorous problems). These still apply in the AI era. If anything, the schlep filter is more important to override now, because the most valuable AI applications are in exactly the messiest, most compliance-heavy, most process-dense domains that nobody wants to deal with.

But there are new filters to watch for:

New filters to turn off in the AI era

F1
The "just a wrapper" filter

The fear that your product is "just" calling an API and therefore not defensible. Most great B2B applications are workflow systems built on top of commodity infrastructure. That's fine. The value is in the workflow depth, not the model.

F2
The "AI will solve it" filter

The assumption that foundation model providers will eventually build what you're building. They might. But they also might not — they're optimizing for horizontal platform reach, not vertical workflow depth. Specialization is your edge.

F3
The "not technical enough" filter

The belief that because you're not an ML researcher, you can't build a serious AI company. Domain expertise paired with the ability to direct AI systems is often more valuable than research depth for application-layer companies.

F4
The "hallucination blocker" filter

The assumption that because AI makes mistakes, it can't be deployed in high-stakes workflows. The question is whether the AI-plus-human workflow is better than the human-only workflow. In most cases, the bar is low enough that it is.

The "just a wrapper" filter deserves special attention because it's particularly insidious. Founders who have been around long enough to remember the SaaS era know that "Salesforce is just a database" or "Dropbox is just a folder" were the wrong dismissals at the time. The application-layer company that goes deep on a specific workflow and earns customer trust will be difficult to displace even as the underlying models commoditize.

08

Competition in an AI Market

Graham's advice on competition holds: don't be deterred by finding competitors. A crowded market means real demand and no solution good enough. Your job is to find the specific insight that existing competitors are missing — the dimension on which you can be clearly better for a specific set of users.

In the AI era, the competitive dynamics have a specific wrinkle: the barriers to entry for a first version are very low, but the barriers to becoming the default are very high. Anyone can spin up a GPT wrapper in a weekend. But becoming the system of record for a specific workflow — the thing people depend on daily, the thing with deep integrations, the thing with trained models on proprietary data — takes years.

The race in AI is not to launch first. It's to reach operational depth first. The startup that understands a workflow deeply enough to get embedded in it wins, regardless of who launched first.

This means your thesis about what competitors are overlooking should be articulated in terms of workflow depth, not feature breadth. "We go deeper on X workflow" is a credible thesis. "We do everything the incumbent does but with AI" is not.

The specific patterns I look for when evaluating whether a startup has a real competitive thesis:

  1. They can name the exact step in the workflow where their product is 10x better than the existing solution — and they can demonstrate it live.

  2. They have customers who have gotten value and can explain specifically what changed for them — not "it saves time" but "it reduced our DSO by 12 days" or "we went from 40% denial rate to 14%."

  3. They have a theory about why the incumbent can't copy this: regulatory complexity, data moat, distribution advantage, or the fact that the incumbent's architecture makes this kind of deep workflow integration impossible.

09

Moats That Actually Hold

This is the question I spend the most time on as an investor, and the one founders are most likely to get wrong. In the AI era, many things that look like moats aren't. And some things that look like commodities are actually deep competitive advantages.

What doesn't hold as a moat: Model quality (commoditizing fast), prompt engineering (replicable), basic automation of a known workflow (low switching costs), speed (temporary).

What does hold as a moat:

Durable AI-era moats

M1
Proprietary data generated by use

Every transaction, correction, and human review makes your model better in ways a competitor starting from scratch cannot replicate. This compounds. Start designing for data flywheel from day one.

M2
Workflow depth and integration cost

The deeper you are in a customer's operational workflow, the more expensive it is to rip you out. Be the system of record, not the nice-to-have add-on. Charge accordingly.

M3
Trust and regulatory compliance

In regulated industries, being the vendor that cleared compliance review is an enormous barrier to replacement. Get certified early. It's a schlep, and that's exactly why it creates a moat.

M4
Network effects from multi-party workflows

If your product sits between multiple parties — buyer and supplier, provider and payer, employer and employee — and it gets better as more nodes join, you have a real network effect. These are rare and extremely durable.

The moat question I always ask founders: if OpenAI or Anthropic decided to build your product next quarter, how long would it take before they could replace you with your best current customers? If the answer is "under 18 months," you have a product, not a company. If the answer is "they'd have to replicate 3 years of domain-specific training data and a dozen enterprise integrations," you have a business.

10

Practical Recipes

Graham was skeptical of recipes — he believed the organic approach (becoming the kind of person who has good ideas) was superior to deliberate brainstorming. I agree with the underlying principle. But when you need to pressure-test an idea you already have, or when you're stuck and need a direction, the following frameworks have worked for me.

Recipe 01

The Process Archaeology Approach

Go back to your last job or company. Write down every process that involved a human doing something that felt like it should be automated. Focus especially on the ones where the reason it wasn't automated was complexity, not neglect. Those are the ones that are now solvable.

Recipe 02

The Bottleneck Map

Pick an industry you know well. Draw the value chain from input to output. Circle every step where a knowledge worker currently acts as a bottleneck — reviewing, approving, translating, escalating. Each circle is a potential AI startup. Now ask which circle has the highest cost per transaction and the most room for AI assistance. Start there.

Recipe 03

The Angry Expert Test

Find five domain experts in a target industry. Ask them what tool, workflow, or process makes them the most irrationally angry. Not annoyed — furious. The thing they rant about at conferences. That frustration is usually evidence of a gap between what exists and what should exist. Now ask whether AI has crossed a threshold that could fill that gap.

Recipe 04

The Time Machine Question

Imagine it's 2030 and you're looking back. What products exist that everyone uses daily and says "I can't believe we didn't have this in 2026"? Write down five. For each one, ask: could I build the 2026 version of that today, for the early adopters who already see the gap? One of those is probably worth starting.

The Last $5K Rule, applied to your own idea

Before you commit to building something, ask yourself: if this were the last $5,000 I could ever invest — of my own time, my own credibility, my own relationships — would I still do this?

No credit for vision or a pretty deck. No glossing over the execution risk. If you can honestly say yes with a full view of the downside, you've passed your own gut check. That's the bar.

The final thing I'll say about recipes: the best way to validate an AI startup idea is not to survey potential customers. It's to build a rough version and try to sell it. Not fake it, not show a mockup — actually build a version that does the core thing, go to someone who has the problem, and try to get them to pay for it. Everything else is speculation. The market will tell you faster than any framework.

Graham quotes Paul Buchheit: "The best technique for dealing with bad ideas is to tell the founder to go sell the product immediately." In the AI era, the cost of building a rough first version is low enough that there's almost no excuse to wait. Build it, sell it, find out if it's real. The answer will come back within weeks, not years.

* * *

What I Look for as an Investor, Reframed as Founder Advice

I evaluate every deal through a 10-criteria framework at Svyam Ventures. When I flip it around and ask what it implies for founders, it reduces to a few principles:

  1. Be the founder who is obviously right about the problem. When you explain the problem you're solving, anyone who knows the industry should be nodding immediately. If you have to convince people the problem is real, you're in trouble.

  2. Show asymmetric upside at a reasonable valuation. Don't price yourself like you've already won. Give early investors and early customers a reason to take the risk with you. Generosity at the early stage creates advocates who will matter later.

  3. Have a specific, falsifiable traction milestone for the next 90 days. Not a vague goal. A number. A customer name. A contract. Something that can be checked. Founders who think in milestones are founders who are operating, not just vision-casting.

  4. Know your unit economics direction early, even if the numbers are small. You don't need to be profitable at Series A. But you need to know whether your margins improve or compress as you scale, and why. Founders who don't know this have usually built the wrong thing.

  5. Pass the Last $5K Rule. Would you bet your last dollar on this? If not, don't expect anyone else to. That conviction — grounded in reality, not optimism — is what separates founders who last from founders who pivot into oblivion.
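Point 4 above, knowing your margin direction, can be made concrete with a back-of-the-envelope model. Every number below is invented for illustration; the structure is the point: if per-unit inference cost is roughly flat but the share of transactions needing human review falls as your model learns from your data, margins improve with scale.

```python
# Hypothetical margin-direction sketch. All numbers are invented.
# Per-unit cost = flat inference cost + human-review cost, where the
# review *rate* falls as volume trains the model on proprietary data.
def gross_margin(price: float, inference_cost: float,
                 review_minutes: float, review_rate: float,
                 cost_per_minute: float) -> float:
    """Gross margin per transaction, given the share still human-reviewed."""
    unit_cost = inference_cost + review_minutes * review_rate * cost_per_minute
    return (price - unit_cost) / price

# Early: 60% of transactions reviewed. At scale: 10%.
early = gross_margin(price=2.00, inference_cost=0.05,
                     review_minutes=4, review_rate=0.60, cost_per_minute=0.50)
scaled = gross_margin(price=2.00, inference_cost=0.05,
                      review_minutes=4, review_rate=0.10, cost_per_minute=0.50)
print(f"early margin:  {early:.0%}")   # 38%
print(f"scaled margin: {scaled:.0%}")  # 88%
```

If the review rate in your model doesn't fall with volume, or inference cost grows with transaction complexity, margins compress instead, and that's exactly the "why" investors will ask for.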

The AI era has lowered the cost of starting and raised the ceiling of what's possible. That's a generational opportunity. But the fundamentals — real problems, real urgency, deep domain understanding, honest conviction, operational grit — haven't changed. They never do.

Build something you'd be furious not to have. Find the narrow well. Stay close enough to the frontier that the gaps become obvious. The rest is execution.