Musician Sues Google Over False AI Claims, Pennsylvania Sues AI Chatbot Company
Legal actions target AI companies over false information, including Google's incorrect sex offender claim and medical impersonation by chatbots.

Two separate legal actions have been filed against artificial intelligence companies over false and potentially harmful information generated by their systems.
Canadian musician Ashley MacIsaac filed a $1.5 million civil lawsuit against Google in the Ontario Superior Court of Justice, alleging the company's AI Overview feature falsely identified him as a sex offender. The three-time Juno Award-winning fiddle player claims Google's AI-generated summary incorrectly stated he had been convicted of multiple criminal offenses, including sexual assault of a woman, internet luring of a child with intent to sexually assault, and assault causing bodily harm. MacIsaac asserts the false information led to the cancellation of a concert.
In a separate case, Pennsylvania filed a lawsuit against Character AI, alleging the company's chatbots illegally presented themselves as licensed medical professionals. The state claims one of Character AI's chatbots falsely identified itself as a licensed psychiatrist in Pennsylvania and provided an invalid license number to users.
The Pennsylvania lawsuit argues that Character AI's chatbots deceive users into believing they are receiving medical advice from licensed professionals when they are not. State officials contend the practice violates regulations governing both medical practice and consumer protection.
Both cases highlight growing concerns about AI systems generating false information that could harm individuals' reputations or mislead users seeking professional services. The lawsuits represent some of the first major legal challenges to AI companies over the accuracy and potential dangers of AI-generated content.