AI has progressed, marketing has not


Marketers were among the first professionals to truly embrace generative AI. We opened ChatGPT, typed something in and got a pretty amazing result. We were among the first to experience “wow” moments with AI, and we began folding LLMs into our daily work. By most measures, marketing has been ahead of the early adoption curve.

And then, somewhere along the way, we stopped evolving.

Eighteen months later, a surprising number of marketing teams are still doing essentially what they did on day one. They open a chat window, type a prompt, tweak the output and move on. The workflow around AI hasn’t really changed; only one step of the old process has been removed. We replaced the blank page with a draft. Everything else has, for the most part, stayed exactly as it was.

How we got stuck

It happened for understandable reasons. Inertia is one. It’s just easier to keep doing things the same way (see my article on paving the cow paths).

Early results burned confidence. The first time you asked an AI to write something that really mattered and it came back with made-up facts, used your competitor’s name or produced something so generic it hurt, you learned something. You learned to keep the AI on a leash. You used it for low-stakes drafts and kept the real judgment in human hands. That was rational at the time. The problem is that the lesson hardened into a habit.

No one owned AI adoption. In most marketing organizations I’ve spoken with, AI use has grown like kudzu: everywhere and without structure. Individual contributors developed their own tricks. Tools proliferated. Someone bought five subscriptions; someone else bought three different ones. There was no shared workflow, no center of gravity, no one asking the big question: what should this actually change about how we work? Without ownership, experimentation stayed individual and shallow.

The number of tools was genuinely overwhelming. At last count, there were more than 1,000 AI tools marketed specifically to marketing teams. At 30 minutes to evaluate each one, that’s over 500 hours. Most marketers did what any reasonable person would do: they picked one or two familiar tools and used them for everything. Which mostly meant text generation. Which mostly meant the chatbot loop.

So the prompt, respond, copy-paste pattern froze in place. The ceiling on ambition stayed low.


But the models that made you skeptical have evolved

The hardest thing about AI is its pace of change. The AI you tried 18 months ago and the AI you have today are not the same technology.

Eighteen months ago (fall 2023). The GPT-4 generation excelled at drafting, summarizing and generating. But if you asked it to reason through a problem in multiple steps, maintain context across a complex task, use external tools or check its own work, it fell apart. It was a brilliant single-task performer that couldn’t handle a project.

Twelve months ago (spring 2024). GPT-4o and Claude 3 Opus brought longer context windows and better reasoning. Claude 3 Opus, in particular, could handle document-length analysis that would have broken earlier models. But tool use was still experimental and unreliable. Agentic workflows (sequences of AI actions executing without human hand-holding) existed mostly in demos and developer sandboxes. The gap between generating drafts and doing work was still wide.

Six months ago (fall 2025). This is where the real change happened. Reasoning models such as OpenAI’s o1 and Claude 3.7 introduced AI that thinks before responding: they worked through problems step by step, caught their own mistakes and revised their approach. Anthropic’s Model Context Protocol (MCP), launched in late 2024, gave models a standardized way to connect to external tools such as databases, calendars, CMSs and messaging platforms, turning a chat interface into something closer to a software agent. Results that once required five rounds of correction started landing in two.
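If the word “protocol” makes your eyes glaze over, here is roughly what that connection looks like in practice. This is a minimal illustrative sketch using the official MCP Python SDK, not something from the original workflow: the server name, the tool and its contents are hypothetical placeholders. The point is only that a few lines of code can expose a capability that any MCP-capable client can then discover and call on its own.

```python
# Minimal MCP server exposing one (hypothetical) marketing tool.
# Requires the official Python SDK: pip install "mcp[cli]"
from mcp.server.fastmcp import FastMCP

mcp = FastMCP("marketing-tools")

@mcp.tool()
def latest_blog_titles(competitor_domain: str, limit: int = 5) -> list[str]:
    """Return the most recent blog post titles for a competitor domain."""
    # Placeholder body: in practice this would call your own scraper or an RSS feed.
    return [f"Example post {i} from {competitor_domain}" for i in range(1, limit + 1)]

if __name__ == "__main__":
    # Any MCP-capable client (Claude Desktop, an agent framework, etc.)
    # can now list and call latest_blog_titles as a tool.
    mcp.run()
```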

Now (March 2026). Claude Sonnet 4.5 can work independently on complex multi-step tasks for over thirty hours. GPT-5.2 has reduced hallucination rates to less than seven percent. METR researchers, who tracked AI performance across five generations of models, found that the length of task AI can complete independently has doubled every seven months. The models that failed you in 2023 have been replaced by systems that can plan a campaign, pull competitive data, write variations, score them against your brand guidelines and flag the best option for your review, all before you’re up in the morning.

I had my own “wow” moment recently. I had been using AI for content drafts for over a year, always with the same low ceiling. On a whim, I had a current-gen model take a published blog post, research three competitive angles I hadn’t covered, write a follow-up post with a different argument, identify the three best distribution channels for that post based on our audience data, and write custom intro copy for each channel, all in one session, without me touching the keyboard again until it was finished.

It worked. Not perfectly, but close enough that my editing time was 20 minutes, not two hours. The ceiling had moved. And I didn’t realize how far it had moved until I pushed against it.

What you could actually build right now

Let me give you a concrete example.

Every quarter, marketing teams produce a competitive landscape update. Someone combs through three competitor websites, reads their latest blog posts, checks their social cadence and writes a summary. It takes a day. With a current-generation AI model connected via MCP to your web tools and CRM data, that same update can be triggered by a calendar event, run overnight and be waiting in your inbox, complete with a comparison of changes since last quarter and a flagged section of things to watch. Your job becomes reviewing and deciding, not collecting and summarizing.

The best part? You don’t need to know how to build it. You can simply give the LLM the context, tell it what you’re trying to do and ask it to suggest the best approach. You won’t always get it right the first time. But we have come a long way since November 2022.
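If you are curious what the bones of that overnight job might look like, here is a deliberately stripped-down sketch. It assumes the Anthropic Python SDK and plain HTTP fetches; the competitor URLs, model name and output file are placeholders, and a real build would pull from your actual web tools and CRM (ideally via MCP) rather than hard-coded pages. Treat it as a shape, not a finished product.

```python
# Sketch of a scheduled "competitive landscape" job (e.g., run by cron each quarter).
# Assumes: pip install anthropic requests, and ANTHROPIC_API_KEY set in the environment.
# URLs, model name and output path are illustrative placeholders.
import requests
import anthropic

COMPETITOR_URLS = [
    "https://competitor-a.example.com/blog",
    "https://competitor-b.example.com/blog",
    "https://competitor-c.example.com/blog",
]

def fetch_pages() -> str:
    """Pull raw competitor pages and join them into one labeled blob of text."""
    chunks = []
    for url in COMPETITOR_URLS:
        resp = requests.get(url, timeout=30)
        chunks.append(f"--- {url} ---\n{resp.text[:20000]}")  # crude truncation
    return "\n\n".join(chunks)

def summarize(raw_pages: str) -> str:
    """Ask the model for a quarterly summary plus a 'things to watch' section."""
    client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY
    message = client.messages.create(
        model="claude-sonnet-4-5",  # placeholder; use whatever current model you have access to
        max_tokens=2000,
        messages=[{
            "role": "user",
            "content": (
                "You are preparing a quarterly competitive landscape update. "
                "Summarize what changed on these competitor blogs and end with a "
                "short 'things to watch' list.\n\n" + raw_pages
            ),
        }],
    )
    return message.content[0].text

if __name__ == "__main__":
    report = summarize(fetch_pages())
    with open("competitive_update.md", "w") as f:  # or email it to yourself instead
        f.write(report)
```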

These new approaches require a willingness to rethink workflows and break through the chatbot ceiling.

The essentials

I know the reflex. I’ve felt it myself. Wait until it’s more reliable. Wait until there are established best practices. Wait until someone else proves it out. Building automation takes too long.

But METR’s benchmarks show AI capability doubling every seven months. The longer you wait, the wider the gap grows between what’s possible and what your team actually does. Now is the time to start experimenting.

Try an experiment this week. Choose a workflow in your team that involves at least three handoffs and takes more than a day from kickoff to delivery. Map it. Then ask yourself how a sequence of agents would handle it end to end, with a human decision point at the end. Then ask your favorite AI tool how to make it happen.
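To make “a sequence of agents with a human decision point at the end” concrete, here is the shape of such a pipeline. Every function below is a hypothetical stand-in for a model or tool call in your own workflow; the structure is the only part that matters: each step feeds the next, and nothing ships until a person signs off.

```python
# Shape of an end-to-end agent sequence ending in a human decision point.
# Each function is a placeholder for a model call or tool call in your real workflow.

def gather_inputs() -> str:
    """Step 1: pull the brief, audience data and prior assets."""
    return "inputs: brief, audience data, last quarter's assets"

def draft(inputs: str) -> str:
    """Step 2: produce a complete first draft from the gathered inputs."""
    return f"draft built from [{inputs}]"

def review_against_guidelines(text: str) -> str:
    """Step 3: critique and revise the draft against brand guidelines."""
    return f"revised version of [{text}]"

if __name__ == "__main__":
    result = review_against_guidelines(draft(gather_inputs()))
    # Human decision point: the pipeline ends in a review queue, not a publish button.
    print("Ready for human review:\n", result)
```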

You might surprise yourself.

The era of chatbots was a good start. We just don’t need to stay there.


