Can ChatGPT be charged in a murder? Florida wants to find out

Phoenix Ikner had asked ChatGPT which weapon and ammunition would be best suited for his attack

Author of the article:

AFP

Published May 11, 2026  •  3 minute read

According to evidence gathered by Florida's attorney general, Phoenix Ikner asked ChatGPT which weapon would be best suited for his attack. Photo by SEBASTIEN BOZON / AFP

Before he opened fire on the Florida State University campus last year, killing two people and wounding six others, Phoenix Ikner had a conversation.

Toronto Sun

Not with a friend, a parent or anyone who might have talked him out of it — but with an AI chatbot.


According to evidence gathered by Florida’s attorney general, the student had asked ChatGPT which weapon and ammunition would be best suited for his attack, and when and where he could inflict the most casualties.

The chatbot, investigators say, answered his questions.

Now Attorney General James Uthmeier wants to know whether that makes OpenAI a criminal.

“If the thing on the other side of the screen was a person, we would charge it with homicide,” he said, announcing a criminal investigation into ChatGPT maker OpenAI and leaving open the possibility of charges against the company or its employees.

The case surrounding the April 2025 shooting has thrust a provocative question into the legal spotlight: Can the creators of an artificial intelligence be held criminally liable for the role their AI played in a crime — or even a suicide?

Legal experts say it’s a realistic, if deeply complicated, proposition.


— Criminal product? —

Criminal prosecutions of corporations are possible under US law, though they remain relatively uncommon.

Late last month, Purdue Pharma was hit with more than $5 billion in criminal fines and penalties for its role in fueling the opioid crisis.

A woman places flowers at a vigil near the scene of a deadly shooting at Florida State University in 2025, when the suspect was believed to have consulted AI chatbot ChatGPT before the attack. Photo by Miguel J. Rodriguez Carrillo / GETTY IMAGES NORTH AMERICA / AFP / File

Volkswagen was previously found guilty in the emissions cheating scandal, Pfizer over its promotion of the anti-inflammatory drug Bextra and Exxon for the Exxon Valdez oil spill in Alaska.

But those cases all involved human decisions — executives, salespeople or engineers who made choices and cut corners.

The Ikner case is different, and that difference is precisely what makes it so legally treacherous.

“Ultimately, it was a product that encouraged this crime, that did the act of the crime,” said Matthew Tokson, a law professor at the University of Utah. “That’s what makes this case so unique and so tricky.”

Legal experts consulted by AFP say the two most plausible charges would be negligence or recklessness — the latter involving a deliberate choice to ignore known risks or safety obligations.


Such charges are often treated as misdemeanors rather than felonies, meaning lighter sentences if convicted.

The bar, however, is high.

“Because this is such a frontier issue, a more compelling, more clear-cut case would probably involve internal documents recognizing these risks and maybe not taking them seriously enough,” Tokson said.

“In theory, you could get liability without it,” he said. “But in practice, I think that’d be difficult.”

In criminal law, “the burden of proof is higher,” noted Brandon Garrett, a law professor at Duke University — with prosecutors required to establish guilt beyond a reasonable doubt.

OpenAI, for its part, insists ChatGPT bears no responsibility for the attack.

“We work continuously to strengthen our safeguards to detect harmful intent, limit misuse, and respond appropriately when safety risks arise,” the company said.

— Civil or criminal? —

For those seeking accountability, a civil lawsuit may offer a more viable path.


Such an approach might push companies to design their products more carefully — or at least force them to reckon with the human cost of getting it wrong, said Tokson.

Several civil cases have already been filed against AI platforms in the US — many involving suicides — though none has yet resulted in a judgment against a company.

In December, the family of Suzanne Adams sued OpenAI in California court, alleging that ChatGPT contributed to the murder of the Connecticut retiree by her own son.

Newer versions of ChatGPT have introduced additional safeguards, acknowledged Matthew Bergman, founding attorney of the Social Media Victims Law Center.

“I’m not saying that they are adequate guardrails, but there are more guardrails in effect,” he said.

A criminal conviction, even with a modest sentence, could still inflict serious damage, including a “big reputational impact,” Tokson said.

But for Garrett, prosecutions — however dramatic — are no replacement for the regulatory frameworks that Congress and the Trump administration have so far failed to put in place.

That, he said, would be “a much more sensible system.”
