Why we trust AI when it invents things


I went to a developer conference and, by accident, learned something profound about human nature. It all started innocently enough: the “All Things AI Conference” in Durham, North Carolina, had a title too good to pass up.

What I didn’t expect was to be the only marketer among 2,500 developers, nodding along as whurley (yes, that’s his real name), CEO of the quantum computing company Strangeworks, took a deep dive into quantum computing and AI. I was in over my head. But sometimes that’s where the best ideas are hidden.

It wasn’t until Luis Lastras, director of language and multimodal technology at IBM, started talking about “small models” that something finally clicked. Luis said something that struck me, something I hadn’t realized: “The hallucinations are intentional.”

Say what?

The answer is…

According to Luis, hallucinations are a way for developers to learn how models work. Because the models operate autonomously, they don’t filter what they generate – at least not yet. Think of it as letting your grandfather, the one who lost his filter, loose at a dinner party.

This is one of the things IBM learned from working with small models: the models validate their outputs at certain points as they generate them, which reduces hallucinations.
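To make that idea concrete, here is a minimal sketch of what a generate-then-validate loop can look like. This is not IBM’s implementation; the model call, the validator, and every function name here are hypothetical stand-ins.

```python
# A minimal, hypothetical sketch of generate-then-validate.
# generate_answer and validate_claims are placeholders, not real APIs.

def generate_answer(prompt: str) -> str:
    """Stand-in for a call to a small language model."""
    return "Mars has two moons, Phobos and Deimos."


def validate_claims(prompt: str, answer: str) -> bool:
    """Stand-in check: a real validator might verify each claim
    against a trusted source and reject content the prompt
    never asked for."""
    return "moon" in answer.lower()


def answer_with_validation(prompt: str, max_retries: int = 3) -> str:
    """Regenerate until an answer passes validation, up to a limit."""
    for _ in range(max_retries):
        answer = generate_answer(prompt)
        if validate_claims(prompt, answer):
            return answer
    return "Could not produce a validated answer."


print(answer_with_validation("How many moons does Mars have?"))
```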

Anyone who has worked with AI has run into hallucinations – from made-up sources to statistics that are just plain wrong. Lastras said these are often bits of extra information that the AI thinks are useful but that weren’t asked for in the prompt.

He showed a demo of a prompt asking how many moons Mars has. The answer came back with two and their names, plus an added bonus: the distance from Earth, which was not asked for. That distance might have been correct, but verifying it would have required another step, and it could just as easily have been wrong.
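You can see the shape of the problem in a few lines. The sketch below is purely illustrative (it is not the demo’s code, and the answer text and key-noun check are invented for the example): it flags any sentence in an answer that never mentions the thing the prompt actually asked about.

```python
# Illustrative only: a crude proxy for spotting "extra information
# that wasn't asked for" - flag any sentence in the answer that
# never mentions the question's key noun.

question_key_noun = "moon"  # the prompt asked about Mars's moons
answer = (
    "Mars has two moons, Phobos and Deimos. "
    "Mars is about 225 million kilometers from Earth on average."
)

for sentence in answer.rstrip(".").split(". "):
    if question_key_noun not in sentence.lower():
        print("Unrequested extra:", sentence)
```

A real validator would check claims against a trusted source rather than keywords, but even this toy version shows how the unrequested addition is exactly the part most likely to slip past a reader.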

How it evolved

Humans, however, tend to assume that AI is always right.

In a study of 500 US adults who use AI, conducted last year by Elon University, nearly 70% of respondents said they believe AI models are at least as intelligent as they are, and 26% said the models are “much smarter.”

What’s more concerning is that we think AI thinks like a human. A Wall Street Journal article on why even smart people believe AI is actually thinking put it this way: “Our cognitive biases have evolved to help us survive in complex social environments… (We have) evolved to view language proficiency as an indicator of intelligence, and engagement and usefulness as indicators of trustworthiness.”

The same tendency that led us to trust our most articulate peers in order to survive now leads us to trust systems that seem to listen, understand, and want to help.

So the more AI tools and bots act like humans, the more likely we are to trust them. Which brings us back to hallucinations: the more useful an AI tool appears, the more likely we are to miss that “extra bit” of information nobody asked for.

Conclusion

The convergence of intentional hallucinations and our deeply ingrained human instinct to trust smooth, helpful communicators creates a perfect storm of misplaced trust.

As AI tools become more sophisticated and human-like, our evolutionary instincts will only make it more difficult to maintain the critical distance needed to detect the errors, embellishments, and unasked-for additions that slip through.

The good news is that awareness is the first step. Whether it’s IBM’s small models validating results in real time or simply slowing down to check what AI is giving us, the antidote to a cognitive bias millions of years in the making is something refreshingly simple: a healthy dose of human skepticism.
