AI, YouTube and the future of human culture



Imagine if human history had been filtered through the logic of advertising revenue.

Shakespeare’s plays, with their bawdy humor and dense monologues, would have been considered too risky and uncommercial. Darwin’s On the Origin of Species – a book that reshaped science – might never have found a publisher, because who would sponsor such dry, technical and controversial content? Toni Morrison’s novels, with their difficult themes and uncompromising artistry, would have been flagged as “not advertiser-friendly.”

Now let’s translate this thought experiment into video, the dominant medium of our time. On YouTube, the world’s largest cultural platform, this is essentially what happens. Each potential upload passes through an invisible economic filter: if it cannot be justified by revenue (advertising, sponsorships or an algorithmic boost), it will almost certainly never be made – or at least it will not be sustainable.

This is not just a challenge for creators trying to make a living. It is a profound change in the way humanity records, shares and remembers knowledge.


The profitability trap

On the surface, YouTube looks like a free-for-all. Billions of videos, billions of hours, endless choices. But as you dig deeper, you’ll see the gravitational pull of the algorithm.

To “win” on YouTube today, a video must meet three overlapping requirements:

  • High-arousal engagement
    Outrage, laughter, fear, disgust: whatever stimulates your neurochemistry. Calm, nuanced, or complex content simply can’t compete with the brain’s appetite for dopamine hits.
  • Advertiser-safe content
    Entire topics – politics, sex, grassroots activism, marginalized perspectives – are penalized because brands don’t want their ads anywhere near them.
  • Scalable formats
    Creators are rewarded for predictable, repeatable formats: reaction videos, tier lists, Shorts, unboxings. Depth, experimentation and slowness don’t fit the treadmill.

This means that even though “anyone can upload”, very few types of content can survive. The ecosystem rewards what is profitable, not what is meaningful.


The shrinking of culture

This dynamic has three cascading effects that extend far beyond individual creators.

  1. Loss of knowledge
    Imagine a historian who wants to publish a 10-part documentary series about obscure but important cultural traditions. Without a viral hook, the series will struggle to find its audience. Without sponsorship or monetization, the historian cannot justify the hundreds of hours of work. That knowledge, never recorded, stays lost.
  2. Creative conformity
    Creators quickly learn that some formats and topics work while others don’t. Over time, they self-censor, bending their vision to fit the algorithm. We end up with endless variations on the same templates (reaction content, drama breakdowns, tier lists) while innovation withers.
  3. Algorithmic monoculture
    Audiences are funneled into loops of predictable, emotionally charged content. Instead of exploring the immensity of human knowledge, we are trained to crave the next hit of outrage or fear. Our collective attention span shrinks.

It is a cultural monoculture: effective in the short term, but impoverishing in the long term.


How AI will reshape the equation

Artificial intelligence will accelerate this trend, but it also contains the seeds of its reversal.

The dark trajectory

  • Ultra cheap production — AI video generators will make it possible to produce an infinite amount of “profitable” content on a large scale. Imagine 24/7 news reaction channels with synthetic hosts, optimized for algorithmic trends, never asleep, never tired.
  • Hyper-optimization — AI tools trained on massive engagement datasets will fine-tune scripts, thumbnails, pacing, and even tone of voice for maximum retention. The result? Videos designed not to inform, but to capture us neurochemically.
  • Synthetic creators — Entire personalities – avatars, voices, even backstories – will be created to front these channels. They will not need to “care” about meaning or truth; they will just need to keep you watching.

In this scenario, the profitability filter becomes absolute. The system doesn’t just discourage “unprofitable” content, it completely drowns it out.

The hopeful trajectory

  • Lower barriers — A high school teacher could use AI tools to produce professional-quality lessons on ancient philosophy without needing a film crew or a studio budget.
  • Preserved niches — Local historians, scientists, and storytellers could finally produce polished work cheaply enough that they don’t need a mass audience to justify it.
  • New discovery systems — AI-driven curation could help match niche content to niche audiences, bypassing the brute force of YouTube’s engagement algorithm.

Which path dominates depends less on the technology itself than on the business models we build around it.


Beyond profitability: rethinking the content economy

If profitability remains the only filter for video, our future cultural record will be a hall of mirrors: optimized, engaging and superficial. But we have alternatives.

  • Patronage models — Platforms like Patreon or Substack show that people will pay directly for valuable but niche content. Scaling these models could protect creators outside of the algorithmic mainstream.
  • Public funding — Just as libraries, museums and public broadcasters preserve knowledge, governments could support digital equivalents. Imagine a publicly funded YouTube channel for every discipline, from anthropology to zoology.
  • Decentralized platforms — Web3-like platforms, or even federated systems like Mastodon, could allow communities (and not advertisers) to decide what survives.
  • Policy interventions — Regulators could require platforms that dominate global media to reinvest part of their revenues into supporting cultural and educational content.

These are not silver bullets, but they point toward a world where “unprofitable” does not have to mean “never made.”


A fork in the road

Here is the choice before us.

On one path, we accept the attention economy as destiny. YouTube, and platforms like it, become the most powerful entertainment engines in history: vast, addictive, and hollow. Billions of hours of video, designed to hold our gaze but not to enrich our minds.

On the other path, we recognize that these platforms are not just businesses; they are the de facto libraries of our time. And libraries, unlike casinos, are built to preserve, not just to profit.

AI makes both futures possible. It can flood us with synthetic, engagement-maximizing junk food, or it can enable humans to create, preserve, and share knowledge on an unprecedented scale. The deciding factor is not technology but the willingness of audiences, creators and policymakers to support content that matters.

The real danger is not that humanity is running out of content. It’s that we will drown in it, while losing the kinds of stories, knowledge, and visions that are not profitable.

The future of human culture could be boiled down to a deceptively simple question: Will we build a digital ecosystem that feeds our desires – or one that preserves our meaning?
