The Simple Tactic To Win With AI: Try Again (In 6 Months)

Why most AI failures are just timing problems

"I tried AI for that. It didn't work."

And then they move on. They cross it off the list. They tell their team, "AI can't do that," and everyone nods and goes back to doing it manually.

As someone who spends way too much time reading comments on LinkedIn, I've noticed this is a big problem for many people adopting AI.

In fact, this is the most common AI mistake I'm seeing right now.

Too many people treat AI capability as static when it's actually growing exponentially.

So today, I want to break down a simple rule that fixes this thinking:

  • Why "AI can't do that" is almost always wrong

  • The 6-month rule (and why it works)

  • What you should retry right now

The Common Mistake

Before we get into the framework, let me lay the groundwork a bit more.

What's happening is this: people find a problem in their job that they want to automate with AI, the first attempt doesn't work, and they write it off permanently.

Don't get me wrong: hype-y demos, "AI psychosis," and improper AI training are all real issues, and I'm not dismissing them. But I do think this mindset is setting your future self up for failure.

The main problem with writing off AI capability is that the conclusion is anchored to outdated data.

Here's a (totally made-up) example of what I mean:

Bob tries ChatGPT for a specific task, Cursor for code generation, or an agent for some workflow automation. It doesn't quite work, but maybe it's 70% there, maybe 40%, maybe it fails completely.

And then Bob concludes: "AI can't do that."

This actually isn't wrong, for a point in time. But if Bob had instead concluded "AI can't do that... yet," it'd be both more accurate and more helpful.

Let's go back a couple of years. Back then, if you asked ChatGPT something that required data, knowledge, or content that wasn't used in its pre-training, it didn't know anything about it. So it might just hallucinate, even on a simple question like "What company did Meta acquire last week?" Then the industry introduced the idea of tools to LLMs, including the very useful ability to do a web search. With that, ChatGPT (and others) could answer a broad range of questions pretty well.

So, the AI applications got a lot better with the introduction of tools.
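To make the tool idea concrete, here's a minimal sketch of the pattern using the OpenAI Python SDK's function-calling interface. The `web_search` tool below is hypothetical (I'm defining it just for illustration; in a real app you'd wire it to an actual search API), but the flow is the one that unlocked questions like the Meta example above.

```python
import json
from openai import OpenAI  # assumes the openai v1.x Python SDK

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# A hypothetical web_search tool. We describe it to the model;
# the model decides when it needs to call it.
tools = [{
    "type": "function",
    "function": {
        "name": "web_search",
        "description": "Search the web for up-to-date information.",
        "parameters": {
            "type": "object",
            "properties": {
                "query": {"type": "string", "description": "The search query."},
            },
            "required": ["query"],
        },
    },
}]

messages = [{"role": "user",
             "content": "What company did Meta acquire last week?"}]

response = client.chat.completions.create(
    model="gpt-4o",  # assumed model name; use whichever you have access to
    messages=messages,
    tools=tools,
)

msg = response.choices[0].message
if msg.tool_calls:
    # Instead of guessing, the model asks us to run a search.
    call = msg.tool_calls[0]
    print(call.function.name)                   # -> "web_search"
    print(json.loads(call.function.arguments))  # -> {"query": "..."}
```

In a full loop, you'd run the search, append the result as a `tool` message, and call the model again for a grounded answer. That round trip is what turned "I have no idea" into a useful response.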

Back to Bob…

The problem is that Bob is treating that capability as a permanent state... thinking if AI couldn't do it in January, it won't be able to do it in June.

But that's not how exponential curves work.

The thing that failed six months ago might work perfectly today. The agent that couldn't handle your workflow in Q1 might crush it in Q3.

So, my answer when I hear people writing off AI capability is always the same: it doesn't work for that right now.

On an exponential curve, you need to retry it. And it might just work.

The 6-Month Rule

I encourage people at HubSpot to do this, and I mean it literally -- though they don't always take me literally.

Here's the rule:

If it was important enough to try, put a date on your calendar for six months from now and try it again.

That's it. That's the rule.

  • You tried using ChatGPT to automate some part of your workflow, and it didn't quite work? Calendar it for six months from now.

  • You tested Claude Code for a specific coding task, and the output was too unreliable? Calendar it.

  • You experimented with an agent for customer support, but the accuracy wasn't there? Calendar it.

I mean this literally. Don't just think "I should retry this someday." Put an actual event on your calendar six months out and try it again.

Why six months?

Because this stuff moves so quickly. On an exponential curve, six months is enough time for meaningful capability jumps -- models get better, tools get more refined, the thing that was 40% reliable becomes 90% reliable.

But it's also short enough that you haven't completely forgotten the original use case or why it mattered.

What to Retry Right Now (And Why It Matters)

By the way, this isn't just about getting AI to work for specific tasks. It's about building the right mental model for operating in a world where capability expands exponentially.

Here's what you should do:

Go back through your "AI didn't work for this" list.

If you experimented with AI six months ago for a problem you really wanted to automate and haven't revisited it, that's your homework.

What did you try in Q3 or Q4 of last year that failed? What workflows did you attempt to automate that weren't quite ready?

Try them again. It’s likely that at least one of them works now.

I'll give you a personal example from HubSpot. We have an AI tool in HubSpot called Breeze Assistant: a conversational interface that lets you ask questions about marketing, about the data in your HubSpot account, and about how to do things in HubSpot. Six months ago, it didn't work that well for some of the prompts our customers would type in.

But the models got a lot better, and so did Breeze Assistant. And it shows up in the numbers: the CSAT score measurably improved.

Some more use cases you may have tried that didn’t work:

  • Code generation for a specific type of function

  • Automated summarization of meeting notes

  • Customer support ticket routing

  • Content repurposing across formats

  • Data analysis that required too much cleanup before

Whatever it was, the models have improved. The tools have gotten better. The integrations are more robust.

Most importantly, the failure rate from six months ago doesn't predict the failure rate today.

The people who win aren't the ones who get it right on the first try. They're the ones who keep finding new problems to automate, testing new tools, and retrying systematically.

They're the ones who understand that "doesn't work yet" is fundamentally different from "doesn't work."

So here's the action item: Set a recurring 6-month reminder titled "Retry AI experiments that failed."

When it goes off, go back through everything you tried that didn't quite work, pick the three most valuable ones, and try them again.

And if one still doesn't work? Calendar it for another six months.

On an exponential curve, persistence beats perfection.

—Dharmesh (@dharmesh)

P.S. If you're looking for a more systematic framework for how to approach AI experimentation in the first place, I wrote about the 60/30/10 rule a while back -- spending 60% of your AI usage on proven workflows, 30% on iteration, and 10% on pure experimentation. Hope it’s useful!

What'd you think of today's email?

Click below to let me know.
