As AI does more of the work, are we developing the right leaders?


AI is accelerating how marketing teams analyze performance, but it’s also changing how future leaders are trained. As more work moves toward automated systems, fewer analysts are exposed to the fundamental, messy issues that shape their judgment. It’s easy to overlook this trade-off until it translates into real decisions. This is one of those times.

It’s April 2026, and the first-quarter results are in. You’re in a conference room with your team of managers and senior analysts, people who will soon be running meetings like this themselves. The year-over-year figures are on the screen, and the team is ready to present its findings and recommendations.

On the surface, the quarter looks solid. But you hesitate, knowing you’re still facing the same measurement realities you’ve been struggling with for years. Data is fragmented, taxonomies are inconsistent, metrics don’t always align across platforms, and definitions vary. Some relationships are modeled, others estimated, and still others are simply not comparable. AI has not solved this problem; if anything, it may have obscured these issues or amplified biases already embedded in the data.

You remember how last year’s numbers came together. You increased your investments in podcasting, retail media, and the creator economy, even though none of them fit neatly into your framework. You added attention metrics while standards were still evolving, new streaming partners launched mid-year, and tracking errors emerged late. Some campaigns were mislabeled and then corrected, and an identity issue from the third quarter carried forward into the year-end reports. Throughout the year, your team revisited naming conventions and classifications to eliminate inconsistencies across systems, and you built your 2026 plans on that work.

Before the meeting moves on to ideas and next steps, you pause and ask, “Before we get too comfortable comparing Q1 to 2025, have any of the same issues resurfaced?”

Senior leaders look at each other, knowing exactly what you mean. They explain where estimates were used, where gaps existed, and what assumptions were made. Across the table, the junior analysts lean in and listen. They are smart and tool-savvy, but this conversation is different. It’s not about what the system surfaced; it’s about what the system missed. That makes this a moment of leadership, where experience, judgment, and context matter more than what appears on the screen.

AI cannot replace the experience that builds judgment

Q1 results came together faster than a year ago because AI handled most of the modeling and surfaced suggested actions, letting the team move straight to analysis. The efficiency is real: AI is now embedded in planning, forecasting, anomaly detection, and reporting, even as most organizations are still calibrating where automation actually adds value.

The problem is not that AI is doing more of the work. The problem is what happens to analysts who never learned to do that work without it. The first-quarter discussion required senior leaders to remember what happened in 2025, understand the ripple effects of the identity disruption, and recognize that a number can be technically correct and still incomplete.

That knowledge does not come from looking at a dashboard. It comes from assembling datasets, correcting labeling errors, restructuring taxonomies, and rebuilding assumptions when frameworks fail to hold. Junior analysts increasingly learn in environments where much of that reconstruction happens before they ever see the data, leaving them with less exposure to hands-on problem-solving and, in some cases, unaware that the underlying measurement problems exist at all.

If an analyst is primarily trained to look at results, they may become excellent at reading what is in front of them without ever understanding how the report was constructed, where the assumptions lie, how fragile those assumptions may be, or how gaps in the data should be filled.

Senior leaders can question last year’s figures because they lived through tracking failures, identity disruptions, and structural reclassifications. They championed investments that were directionally right but hard to prove, and they adapted when external changes wiped out their benchmarks. They learned that clear reporting does not always mean accurate reporting. AI reduces the need for emerging practitioners to do that same hard work, so we need to ask honestly which experiences will shape their judgment as they move forward.

Develop leaders purposefully

Are we exposing developing analysts to what’s underneath the dashboard, giving them the context needed to spot anomalies, identify embedded biases, and recognize when something has been mislabeled or tracked incorrectly?

Can they connect the dots between systems and understand how these issues shape the bigger picture?

Or are we allowing AI efficiencies to quietly restrict the experiences that build true leadership capacity?

These are not rhetorical questions. These are decisions that team leaders, hiring managers, and organizational designers must make deliberately, because the default path, without intention, produces analysts who are ill-prepared when something goes wrong.

AI will continue to handle more of the operational workload and there is no going back. The real question is whether the next generation understands what is behind the results, knows when the results need to be revisited, and can recognize when something is wrong rather than assuming the system is correct.

If we are deliberate, AI can elevate the industry by allowing leaders to focus on strategy and growth while strengthening their ability to diagnose and solve complex problems. If we don’t, we risk developing a generation of leaders who are systems-savvy but ill-prepared when metrics fail, classifications drift, or data simply doesn’t make sense.

What this looks like in practice

Making this choice deliberately starts with a few concrete actions:

  • Assign junior analysts to data-correction work, not just reporting. When tracking breaks or classifications need to be rebuilt, treat it as a development opportunity, not just a cleanup task. The analysts who do this work build experience in a way that dashboard reviewers simply won’t.
  • In reviews, don’t just present conclusions; tell the story of how you got there. Walk your team through what looked wrong, what you explored further, and which assumptions you decided to challenge. That running commentary is exactly what developing practitioners need to hear.
  • Establish “below the dashboard” checkpoints in your workflow. Before results are finalized, require a structured review of where estimates were used, what discrepancies exist, and which assumptions were made. This keeps critical thinking in the process rather than assuming the AI handled it upstream.
  • Rethink how you evaluate talent development. If your performance frameworks only measure how well a person works within the system, you will never know whether they can recognize when the system goes wrong, nor will you develop that ability in them.

Remember, there will always be new tools and capabilities, but successful leaders won’t just know how to use AI. They will know how to question it. This ability develops over time, through experience, not through a dashboard. Invest in the next generation the same way you invested in yourself, by providing them with experiences that truly strengthen their judgment.


