this post was submitted on 21 Feb 2026
90 points (95.9% liked)

Canada

ChatGPT-maker OpenAI has said it considered alerting Canadian police last year about the activities of a person who months later committed one of the worst school shootings in the country’s history.

OpenAI said last June the company identified the account of Jesse Van Rootselaar via abuse detection efforts for “furtherance of violent activities”.

The San Francisco tech company said on Friday it considered whether to refer the account to the Royal Canadian Mounted Police (RCMP) but determined at the time that the account activity did not meet a threshold for referral to law enforcement.

OpenAI banned the account in June 2025 for violating its usage policy.

top 27 comments
[–] Glide@lemmy.ca 25 points 5 days ago (1 children)

Ha, no, fuck off, OpenAI.

And how many times have you flagged someone for "furtherance of violent activities" that DIDN'T go forward to shoot up a school, or do much of anything you should intervene in? ChatGPT can't even brainstorm multiple choice questions on a short story without hallucinating bullshit, and you want us to believe it'd be effective as the thought police?

This is a cherry-picked argument being used to begin legitimizing AI for more serious uses, such as making legal decisions. This is not Minority Report; AI can fuck off with charging people with pre-crime.

"Never let a good crisis go to waste."

[–] hector@lemmy.today 1 points 4 days ago

Yeah, this is transparent: they're playing on our emotions to get license to run threat detection on us, which they're already doing as much as they're able. They're using age controls and the like to ID every account, with likeness and ID and everything you say or look at, along with all the cameras and microphones and records of you, to make half-baked conclusions to be used against you in secret, in ways you can't know and won't be able to challenge.

Bank loans, background checks, police attention, court treatment, government treatment in general, business treatment, the digital price tags you're shown, what search results the engines return, etc. All done by these soulless Silicon Valley lords, who are some of the least trustworthy pieces of shit in the world.

[–] tangonov@lemmy.ca 12 points 5 days ago* (last edited 4 days ago) (1 children)

We need to recognize that this crime was preventable without OpenAI's intervention. Let's stop making excuses to open up a Minority Report police state.

[–] maplesaga@lemmy.world 1 points 4 days ago (1 children)

I'm afraid to upvote because I'm unsure if this is facetious.

[–] tangonov@lemmy.ca 2 points 4 days ago (1 children)

I'm being serious but I also don't want to argue with people about it on the Internet to be honest.

[–] maplesaga@lemmy.world 0 points 4 days ago

Well, you'll just get a bunch of 1984 and history textbook quotes if you do. I don't suggest it.

[–] fourish@lemmy.world 15 points 5 days ago (1 children)

Before passing judgement (not that our opinions matter) I would’ve liked to see what was in the OpenAI transcripts.

Now that we know they exist, I'm sure the police will somehow get ahold of them. Couldn't we then eventually file a freedom of information request with the police for them?

[–] snoons@lemmy.ca 20 points 6 days ago

+6 to the AI kill count

[–] frankring@lemmy.ca 1 points 3 days ago

They spy on you to make money, not to save lives. Got it.

[–] GameGod@lemmy.ca 13 points 5 days ago* (last edited 5 days ago) (1 children)

I think this should piss off a lot of people. Instead of doing something, they opted to do nothing, and now they're exploiting the tragedy as a PR opportunity. They're trying to shape their public image as an all-powerful arbiter. Worship the AI, or they will allow death to come to you and your family.

Or perhaps this is all just rage bait, to get us talking about this piece of shit company, to postpone the inevitable bursting of the AI bubble.

Edit: This is a sales pitch from OpenAI to the RCMP, with them saying they'll sell police forces an intelligence feed. It just comes across as horribly tone deaf and is problematic for so many reasons.

[–] non_burglar@lemmy.world 6 points 5 days ago (2 children)

I understand your point, but there are also legal ramifications and scary potential consequences should this have transpired.

For instance, do we want ICE to have access to data about user behaviour? They might already have that.

Who decides the bar of acceptable behaviour?

[–] GameGod@lemmy.ca 3 points 5 days ago

I'm confident that ICE and other US law enforcement agencies already have access to it. There is no presumption of privacy on anything you enter into any cloud-based LLM like ChatGPT, or even any search engine.

The consequences are already there and have been for like 15 years.

[–] hector@lemmy.today 1 points 4 days ago

Peter Thiel and his ilk decide acceptable behavior with our politicians and their appointees, sadly. Officials will also be given ways to put names they don't like into the bad-score categories, even when those people don't qualify under the system's own rules; that is always one of the selling points to the authorities.

[–] melsaskca@lemmy.ca 4 points 4 days ago

Sure they did. The thought police are coming for you, if they feel so inclined.

[–] Werewolf_Cop@lemmy.ca 3 points 4 days ago

Certain tech companies do not care what happens to Canadian lives. Thanks for the lesson.

[–] Tigeroovy@lemmy.ca 4 points 4 days ago

So glad that Canada will be investing so much money in this shit show!

Fucking magic beans ass technology.

[–] nik282000@lemmy.ca 3 points 4 days ago

Remember when Facebook ran the numbers to predict if certain users were gonna kill themselves but didn't tell anyone? As long as Canada is gonna go full China, we should follow suit and install a government overseer in EVERY big corp that operates in Canada.

[–] TheDoctorDonna@piefed.ca 8 points 5 days ago* (last edited 5 days ago) (1 children)

So AI is always ready to sell you out if someone is willing to pay them enough and there's a non-zero chance that AI convinced someone to shoot up a school after already convincing several people to commit suicide.

This sounds like monitor and cull.

*Edited for Grammar.

[–] HubertManne@piefed.social 3 points 5 days ago

if ai can do that it will make money hand over fist and no guys will be able to get a date.

[–] orbituary@lemmy.dbzer0.com 8 points 6 days ago
[–] HubertManne@piefed.social 5 points 5 days ago

This reminds me of similar things with google searches. These should require warrants.

[–] Jack_Burton@lemmy.ca 2 points 4 days ago

"You bet on black and lost. I knew it would be red and considered telling you but decided not to."

[–] Reannlegge@lemmy.ca 5 points 6 days ago

What did ChatGPT tell the OpenAI people that made them think they could play 1984? Opening those pod bay doors is something that can't be undone.

[–] masterspace@lemmy.ca 2 points 6 days ago (1 children)

OpenAI said the threshold for referring a user to law enforcement was whether the case involved an imminent and credible risk of serious physical harm to others. The company said it did not identify credible or imminent planning. The Wall Street Journal first reported OpenAI’s revelation.

OpenAI said that, after learning of the school shooting, employees reached out to the RCMP with information on the individual and their use of ChatGPT.

Not defending them, but OP's selections seemed intentionally rage-baiting.

[–] HellsBelle@sh.itjust.works 5 points 6 days ago* (last edited 6 days ago) (1 children)

I copied the first four paragraphs of the article.

[–] masterspace@lemmy.ca 0 points 5 days ago

Why'd you pick 4? Why not all?