Canadian violinist Ashley MacIsaac has filed a civil lawsuit against Google, alleging that an AI Overview falsely identified him as a convicted sex offender. The lawsuit could test how courts handle liability for false AI-generated search summaries.
The statement of claim, filed in February in the Ontario Superior Court of Justice, seeks at least $1.5 million in damages from Google LLC. None of the allegations have been tested in court.
What the lawsuit alleges
MacIsaac, a Juno Award-winning musician, says he learned of the fake summary in December 2025 after the Sipekne’katik First Nation confronted him about it and canceled one of his concerts. The First Nation later issued a public apology.
According to the filing, AI Overview falsely stated that MacIsaac had been convicted of sexual assault, internet luring involving a child and assault causing bodily harm, and falsely claimed he had been placed on the national sex offender registry.
The lawsuit argues that Google is responsible for the results generated by its AI system, claiming that Google “knew, or should have known, that the AI preview was imperfect and could return false information.”
It also alleges that Google failed to admit liability, failed to contact MacIsaac, and failed to issue an apology or retraction.
The filing makes a direct argument about AI liability:
“If a human spokesperson made these false claims on behalf of Google, a significant award of punitive damages would be warranted. Google should have no less liability because the defamatory statements were published by software created and controlled by Google.”
MacIsaac said Google needs to take responsibility for what its AI Overviews show. “It wasn’t a search engine that just looked through things and gave someone else’s story,” he said.
Google’s response
Google has not commented on the lawsuit. In December, spokeswoman Wendy Manton said that AI Overviews are “dynamic and change frequently” and that when the feature misinterprets web content, Google uses those instances to improve its systems. The false summary linking MacIsaac to criminal offenses no longer appears.
Why it matters
AI Overviews may appear in Google search results as AI-generated snapshots with links to more information. Google’s own help documentation notes that AI responses may include errors.
When these summaries make false claims about real people, the consequences can go beyond a poor search result. In MacIsaac’s case, the suit alleges that the AI Overview led to the cancellation of a concert and damage to his reputation.
MacIsaac’s case is not the first time AI-generated content has led to defamation allegations. In 2023, an Australian mayor threatened legal action after ChatGPT falsely claimed he had been jailed for corruption. MacIsaac’s lawsuit, however, takes direct aim at Google’s AI Overviews and argues that the product was defectively designed.
The case adds to a growing legal question around AI-generated content: whether platforms are liable when automated summaries present false claims in search results.
Looking to the future
The case is at the pleadings stage: the statement of claim has been filed, and Google has not yet filed a response. Until it does, the fundamental questions remain open: whether Google will contest liability, how it will characterize the AI Overview results, and how the court will treat automated summaries in a defamation action.