Expert says the law is trying to catch up to technology
Published May 11, 2026 • 3 minute read

The fear of Artificial Intelligence (AI) is both rational and irrational.
It is irrational because we tend to fear what we don't understand, and rational because no one — including its creators — seems to truly understand what AI will bring us, for good or ill.
Is AI really intelligent? Can AI be guilty of a crime?
That question is being tested in court in Florida, where the FSU shooter asked ChatGPT what weapons would be best and when the student union would be busiest. ChatGPT apparently provided answers, without throwing up any warning flags for authorities.
The AFP reported, “Now Attorney General James Uthmeier wants to know whether that makes OpenAI a criminal.
“‘If the thing on the other side of the screen was a person, we would charge it with homicide,’ he said, announcing a criminal investigation into ChatGPT maker OpenAI and leaving open the possibility of charges against the company or its employees.”
Gavin Tighe, senior partner at Gardiner Roberts LLP in Toronto, says this is a case of the law trying to catch up to technology.
“What jail would you send AI to? I do think it is a problem to fit the technology into the law, as opposed to the people or companies that own the technology, that they may be responsible for what it does, just like people who own any type of technology are responsible when it commits harm to others.”
Lawsuits involving teens and AI chatbots
The Florida case is not unlike the Tumbler Ridge shooting in Canada, where victims are bringing a civil suit against the owners of ChatGPT: the chatbot had identified a potential shooter, but the humans behind it did not warn police.
But Tighe says, “AI is just responding to questions asked. It is very difficult to put the round peg into the square hole of law which is human law for humans.”
The Los Angeles Times reports, “Adam Raine, a California teenager, used ChatGPT to find answers about everything.
“But his conversations with a chatbot took a disturbing turn when the 16-year-old sought information from ChatGPT about ways to take his own life before he died by suicide in April.
“Now the parents of the teen are suing OpenAI, the maker of ChatGPT.”
AI chatbots potentially perceived as ‘friends’
Francis Syms, Associate Dean of Information and Communications Technology at Humber Polytechnic, says that some people form a relationship with a chatbot that goes beyond getting answers to questions. It becomes a conversation as if with a real person.
Syms said, “These stories are becoming so common that the premier of Manitoba, Wab Kinew, is looking at banning AI chatbots in the classroom. People become addicted and they can’t give it up.
“I think what is really scary is that people are using ChatGPT to talk about work, and then over time it sounds like a companion and then it becomes your friend or maybe potentially your special friend.”
Could AI, a “friend” with no moral compass, send us in the wrong direction?
“They are sycophantic,” Syms says, “in that they reinforce what you feed them, so if you have negative ideas about who you are, it reinforces that. It mostly tells you that the thing you are thinking is the right thing to think.”
We are living in a real-life dystopian novel, and these court cases will be hugely consequential.
Read More
- Florida student suspected in double murder asked AI how to dump bodies
- Alberta looking at legislation to rein in the harmful side of artificial intelligence, Smith says