On Fixing Accelerators

TL;DR: Accelerators prepare founders for conversations investors don’t want to have until founders have done the work accelerators don’t teach.

Over the past 6 months, I’ve been asked by two different pre-seed accelerator programs to look at what’s working and what’s not. In both cases, I see significant mismatches between what the founders need, what the market is asking for, and the structure of the programs.

However, this post isn’t about picking on these specific programs, as they’re all modeled after the same Ur-program.

The model was pioneered by Y Combinator twenty years ago: a dozen startup teams (2-3 founders each) relocate to San Francisco for 12 weeks. Through immersion, camaraderie, and intensity, they emerge at Demo Day with a highly polished investor pitch. The price of this experience is typically 7% of your company.

Today, some programs take no equity, some are fully remote, and some run 6 months or a year. They exist in every moderately sized city from Minneapolis to Boulder to Columbus to Mobile.

These programs are typically run by venture capital firms, so they’re reliant on investors believing this model will provide outsized returns. In a low-interest-rate environment, say 2009-2022, this argument was easy to make, as more traditional investment vehicles weren’t providing the target returns and money was cheap. The investment argument goes something like: “We’re going to invest a little bit of your money across a wide portfolio of startup ideas. Most will fail, but one will be so wildly successful you’ll thank us. Besides, where else are you going to get your returns from?” In that environment, speculative investments like pre-revenue, pre-product, pre-seed tech startups were much more attractive.

This leads us to the first modern-day problem with the current accelerator model: Demo Day.

The original idea behind Demo Day was to get a bunch of seed-stage investors in a room, have a dozen freshly polished founders pitch, and trust it leads to follow-on investment. Win-win.

Unfortunately, since 2022 fewer and fewer investors have been attending Demo Days, and accelerator programs have noticed. The end-of-program event has shifted from investor pitches to a trade show / science fair / showcase for the general public. I’ve attended both formats from multiple accelerators – as personable as the showcase events are, I still find them anticlimactic by comparison – even if they’re more honest about the chances of follow-on funding happening immediately.

According to 2023 Census.gov numbers, there were 30.4M businesses in the US with zero employees – whether a 1-person business or a partnership of 2-3 people (the kind that would be accepted into these accelerator programs). Also in 2023, there were 13,608 VC deals nationwide, with 2,040 of those at the Angel/Pre-Seed stage.

That means just 0.0067% of these businesses secured pre-seed funding, which by delightful coincidence is about the same odds as a 3+1 Powerball win.
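For the curious, the arithmetic behind that percentage checks out in a few lines. (The 1-in-14,494 figure for Powerball’s 3+1 prize tier is my addition here, taken from the published odds table, not from the original numbers above.)

```python
# Back-of-the-envelope check of the funding odds above.
zero_employee_businesses = 30_400_000   # 2023 Census.gov count
pre_seed_deals = 2_040                  # 2023 Angel/Pre-Seed VC deals

share = pre_seed_deals / zero_employee_businesses * 100
print(f"{share:.4f}% of zero-employee businesses secured pre-seed funding")
# → 0.0067% of zero-employee businesses secured pre-seed funding

# Published odds of the 3-white-balls-plus-Powerball prize: 1 in 14,494
powerball_3_plus_1 = 1 / 14_494 * 100
print(f"3+1 Powerball odds: {powerball_3_plus_1:.4f}%")
# → 3+1 Powerball odds: 0.0069%
```

Close enough to call it a delightful coincidence.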

For one of the programs I dug into, just 6% of their founders secured follow-on investment. Better odds, but still, that’s 1 startup out of every 16.5.

It’s not just investors stepping back; ‘mentors’ are as well. Mentors are the volunteer network of the accelerator programs. Their implied goals are to understand the startups’ value propositions, provide introductions and connections to help the startups, and support the startups in a way complementary to the accelerator team.

Perhaps you can already spot the mismatches:

  • The accelerator program needs 12 startups to profitably run the program.
  • These 12 startups need to be roughly similar in their stage, roughly aligned to the investment hypothesis sold to investors.
  • The mentors may not have any subject matter expertise directly relevant to the startup team. One of the persistent founder complaints I heard in my research was how mentors’ advice just wasn’t relevant to the stage founders were at.
  • Programs promise follow-on investment, but depending on the startup’s value proposition and the investor hypothesis, that may take an additional 9-12 months post-program.
  • 30% of these companies shutter each year (“this program accelerates everything” – I still remember one managing director telling a new cohort.)
  • The 1-company-to-return-the-portfolio mentality means ideas that could be very sustainable small businesses are pressured into incompatible growth trajectories.
  • Today’s early-stage investors want evidence of sales and sales velocity, not just an idea.

This last point aligns with the most consistent regret I heard from founders in my research: “I didn’t talk to enough customers.”

They also admitted part of the problem: not knowing how to define their customer, not knowing how to find them once defined, and not knowing how to have that conversation once they did.

With some businesses – for example, a marketing service for restaurants – you could take an afternoon and walk into restaurant after restaurant, potential customer after potential customer. Customers in other domains are more widely distributed and more difficult to identify – say, the buyer for food manufacturing supply chain automation software.

Either way, another systematic mismatch.

This doesn’t surprise me, for as we’ve already established, these programs are tuned for investor pitches, not customer discovery or sales conversations. Some startup founders have hired me to sit in on conversations with their prospects – every single time, my first note to them is, “Why are you pitching for the first 20 minutes of a 30-minute call?”

Oh. Right. Pitch practice.

Thankfully, a different model is emerging in 2026.

One tightly focused on a specific domain, and because of this focus – ready access to executive buyers clamoring for new solutions.

No need for a Demo Day.

No need for volunteer mentors unfamiliar with the domain.

No need to fill cohorts with sub-optimal teams.

Just build something these executives will buy.

My 2026 Position on LLMs

Claude and I talk regularly.

I’ve reached the end of multiple context windows with Claude.

I credit Claude with helping me articulate the long-running thread through my career and personal interests (e.g. “Find what we forgot”) and clarifying the current positioning of my business (e.g. “When technology breaks your revenue model – call me.”). Once every couple of days I’ll paste an email draft into Claude and say, “I’m having a difficult time saying what I’d like to say succinctly,” and I’ll often take the suggestions.

Additionally, I have a long-running chat in Gemini where it scores and ranks companies I offer it against the characteristics of my best clients.

I have the same 4,000 characters of instructions in both Gemini and Claude.

I don’t use ChatGPT anymore.

I would characterize my current usage as:

  • Rubber ducking
  • Summarizing qualitative patterns
  • English-to-English translations
  • Ensuring I’m being comprehensive / appreciating conventional wisdom.
  • Raising the minimum quality of communication

Overall, I’d characterize this as “Help me minimize common mistakes”. Claude calls it “Sharpening”.

The more I work with Claude, the more it feels like the Mirror of Erised, simply reflecting back whatever you give it in a more narcissistic package.

There’s still nothing new here and I’m still of the position that “AI is amazing about the thing I know nothing about….but it’s absolute garbage at the stuff I’m expert in.”

I don’t pay for Claude, which to me is a sign it hasn’t yet become an indispensable co-worker. The reports I’ve read of Claude CoWork and other ‘agentic assistants’ mainly make me wonder why the human operator even bothered with automating the task in the first place.

The greatest value of Copilot and Claude Code seems to lie in the assumption that all software must start from scratch in Node. As soon as you say, maybe I don’t need to start from scratch, suddenly an entire world of options opens up. Admittedly, those options aren’t distinct applications that can be discretely monetized. On the flip side, someone else is maintaining the system and has already fixed and squashed thousands of bugs you’ve yet to even experience.

But creating software hasn’t been hard for more than 25 years. There have always been piles of frameworks and environments to accelerate the creation of software: HyperCard, Visual Basic, Ruby on Rails, WordPress, etc.

The challenge is in maintaining it, hardening it, justifying why it should continue to exist – that’s the perennial problem.

And that’s the first question I ask: should we create this custom software at all? Or can we frame the problem differently and solve it in a different way altogether, minimizing our overall maintenance costs while still getting the same output…or eliminating the problem completely?

Recently I upgraded all my Google Homes to Gemini…and now the voice is even more tediously verbose. When I set a timer, I want confirmation, not a conversation.

I picked up a copy of Bots when it first came out and was enamored by the promise of software applications appearing so human and useful. This promise, like software maintenance itself, has held constant over the past 30 years. Perennially unsolved.

Netflix: Bottleneck Driven Innovation

On the news of Netflix acquiring Warner Bros, I’m reminded of how good Netflix has been at innovating their business model.

Over the past 27 years, their business model has changed multiple times, and each evolution appears to be a direct response to the bottleneck to growth at the time, from maintaining DVD inventory to acquiring global streaming rights.

Year      | Business Model                                   | Bottleneck to Growth
----------|--------------------------------------------------|-------------------------------------------------------------
1998      | Sell DVDs over the internet                      | Need to continually replenish DVD inventory
1999-2006 | Rent DVDs over the internet                      | USPS delivery & return times
2007      | Stream movies over the internet                  | Acquiring US streaming rights to a massive library of movies
2009      | Start producing movies (Netflix Originals)       | Number of subscribers watching Netflix Originals
2010-2012 | Global expansion: Canada, South America, Europe  | Maintaining rights globally
2025      | Acquire Warner Bros Discovery                    |

Calendars are for Commitments or How I Use a Calendar in 2025

Patrick reminded us a decade ago that Everything Happens in Time, said another way: “everything happens at a time – whether deliberate or not.”

It’s been three and a half years since I wrote about my ever-evolving calendaring practice. Everything I wrote about in 2022 is still true. The biggest change in that time has been my heavy use of Cal.com as a scheduling service for conversations with clients, prospects, customers of clients, friends, everyone and everything.

Heavily using a scheduling service like Cal.com is great, but I’ve found it requires a couple of non-obvious changes to my calendaring practice.

  1. My commitments are now split across 2 calendars: “Committed” and “Tentative”. As I described in 2022, all my hard commitments – my morning run, prep & next steps for client meetings, travel time to & from in-person meetings, fixed personal commitments, whether to myself or others – go into “Committed”. Everything else goes into “Tentative”. Cal.com integrates with “Committed”. Yes, this means stuff on “Tentative” may get overridden – and that’s OK. If there’s nothing on “Committed”, I’m not wondering how to make the best use of the newfound time; I’ve got a plan. There’s probably a cleaner way to solve this, but at this point, it’s working fine.
  2. Some people prefer I declare the date & time. So, if only to keep things moving, I’ve needed to create an Event Type within Cal.com for the times when I handle the scheduling. By scheduling it myself – rather than creating a normal calendar event and inviting the person – I still get Cal.com’s auto-population of a video conferencing URL, reminders the day before, and a rescheduling link.

    I have 3 standard durations: 30min, 45min, 60min. At a certain level of clarity, every conversation can fit within a 30min or 45min block.

The great thing about Cal.com (and other scheduling tools) is setting a minimum time between bookings. I set it at 15min, and I’ve found it so helpful that I now even space my own commitments by 15min.

Reminder: There’s Always Room for Improvement Somewhere

Person sweeping up dirt in an unfinished room with exposed wooden framing.

A 5 gallon bucket with a spout that’s also a paint tray and a dustpan.

At the risk of evoking all of my past writing on chindogu, this is a nice reminder that there’s always room for improvement.

It’s just a matter of determining whether developing that improvement is worth your opportunity cost. It might not be. But that doesn’t mean everything’s been invented.

I’m also reminded of the 2010 Design of the Decade winner, the Clear Rx prescription bottle design:

Deborah Adler Design | Clear Rx Medication System

Mike Doughty on Art & Computer Technology (AI)

I recommend AI: How/Why I Use It in its entirety; here are just a couple of my favorite passages:

“As any musician knows intimately, the most interesting part of a new musical technology is its glitches: the inventors of the synthesizer hoped to position it as a replacement for strings or horns, but what we loved is the weird blorps; the amplifier was invented just to make a guitar more audible, but we loved distortion; Autotune et al. were invented to correct bad notes, but we loved crazy space-laser voices.”

“Every day the AIs “improve” their ability to make images (actually, I use one of my go-to AIs because it is hilariously bad). I believe that eventually the uncanniness will be refined away, and AIs will evolve from fascinatingly odd to comprehensively mediocre.”

“Expertise will not be sufficient to make a living…Hacks are in trouble. If somebody is making work that is uninspired, and unindividual, then they can indeed be replaced by a machine that just spits up boring chunks of mid-ness.”

An Increasingly Worse Response

Vintage 1970’s collapsible push-paddle giraffe puppet

Generative AI and LLMs continue to provide the least controversial answer to any question I ask them. For my purposes, this makes them little more than a calculator for words, a generator of historical fiction short stories.

As I mentioned two years ago, this doesn’t make LLMs useless, but it does greatly shrink their usefulness – to those places where you want a general idea of the consensus…whether or not it’s correct, accurate, or legal. An average doesn’t necessarily represent any individual datapoint.
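That last point is easy to demonstrate. A toy sketch, with ratings invented purely for illustration, shows how a “consensus” value can match no individual opinion:

```python
# Hypothetical ratings: a polarized audience, half loved it, half hated it.
ratings = [1, 1, 1, 5, 5, 5]

# The "consensus" is the arithmetic mean of the ratings.
consensus = sum(ratings) / len(ratings)
print(consensus)              # → 3.0
print(consensus in ratings)   # → False: nobody actually rated it a 3
```

The average lands exactly where no datapoint exists, which is the same trap as trusting an LLM’s averaged-out answer.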

For the more training data the generative AI providers shovel into their models, the further the generated consensus drifts from credibility toward absurdity.

It’s one thing to train the models on all the scientific research. It’s another to train on all the books ever published (copyright issues aside for the moment). It’s quite another to train on Reddit and Twitter. It’s yet another thing altogether to treat all data as equal, independent of parody, satire, or propaganda.

18 years ago, I figured out that a 3 in Netflix’s then 5-star rating system meant “looks good on paper, but probably not very good”. The same seems to be true of the nondeterministic responses from LLMs, an avalanche of the Gell-Mann Amnesia Effect or Knoll’s Law of Media Accuracy: “AI is amazing about the thing I know nothing about….but it’s absolute garbage at the stuff I’m expert in.”

Again, there are use cases for this (e.g. getting familiar with the basics of a topic in record time), but the moment you expect quality, credibility, or specifics…it collapses like a toy giraffe.

A toy giraffe that, when a person engages with it, can only – collapse.

As a metaphor for new technologies, this toy giraffe’s message is worth considering, “we break when any pressure is applied.”

General purpose LLMs will only get worse the more data they digest. Special purpose LLMs trained only on a specific context, a specific vertical, a rigidly curated & edited set of sources may achieve the expert level these applications are hyped up to be.

But we may never know they exist because the most valuable use cases – national defense, cybersecurity, fraud detection – will never need (or desire) the visibility the general purpose LLMs require.

More Jobs and More Automation than Ever

When I first got a robot vacuum cleaner,

the first thing I noticed was how much worse its cleaning quality was compared to a person’s.

The second thing I noticed: it added a job to the household (one that came with zero training):

robot vacuum maintenance

Just as we handwash dishes even though we have an automatic dishwasher, we still vacuum, albeit less frequently, with a non-robotic vacuum.

The introduction of these machines only shifted the labor.

It didn’t eliminate the labor.

Hardware eventually breaks.
Software eventually works.

The work still needs to get done.
That’s why we have people.

Not to mention Jevons Paradox or the Bureau of Labor Statistics.

Total employment, 2003 to projected 2033

Calm Tech Principles

  • Technology should require the smallest possible amount of attention
  • Technology should inform and create calm
  • Technology should make use of the periphery
  • Technology should amplify the best of technology and the best of humanity
  • Technology can communicate, but doesn’t need to speak
  • Technology should work even when it fails
  • The right amount of technology is the minimum needed to solve the problem
  • Leverage familiar behaviors to introduce new ones

https://www.calmtech.institute/calm-tech-principles