this post was submitted on 27 Feb 2026
28 points (100.0% liked)

O’Leary said that “with the benefit of our continued learnings,” the company “would refer the account banned in June 2025 to law enforcement if it were discovered today.”

The changes come after OpenAI’s head of policy, Chan Park, O’Leary and five others from the company met with members of Carney’s Cabinet on Tuesday in Ottawa — a meeting ministers later described as “disappointing.”

“We are reviewing OpenAI’s letter carefully and will have more to say in the coming days,” a spokesperson for Canada’s AI Minister Evan Solomon said in a statement on Thursday.

Canada’s Liberal government threatened to regulate AI chatbots if tech companies, like OpenAI, cannot demonstrate they have safeguards to protect Canadian users.

“Of course a failure occurred here. I mean, look what happened. This is a horrific tragedy,” Solomon said Wednesday.

[–] givesomefucks@lemmy.world 11 points 2 days ago (1 children)

The company said it is making it harder for banned users to sneak back onto the platform and that it will refer users to local resources when it appears they are in distress or “pursuing prohibited behavior.”

Still not buying it...

At first it felt like they were trying to get around privacy laws by disclosing what you tell a chatbot.

Now it seems like they also want to use it to ID users, like Discord and other platforms started doing recently.

They're not worried about lawsuits and definitely not worried about ethics. They see tragedy as an opportunity to get people to agree to shit we normally wouldn't.

[–] WhatAmLemmy@lemmy.world 10 points 2 days ago* (last edited 2 days ago)

Mass surveillance has also been a complete security failure that has made everyone significantly more vulnerable to attack from the most dangerous and destructive foreign and domestic enemies, including surveillance capitalism and "intelligence" agencies (authoritarians, fascists, pedophiles, terrorists, and other criminally corrupt psychopaths).

Now we know that our secret police (aka "intelligence") either created, enabled, or (at the very least) allowed Epstein's child sex trafficking — for decades — under grounds of "national security", and our tax dollars have been used to rape children and protect pedophiles the entire time. So all their claims about security, safety, and child protection are equivalent to a pedo-fox in the hen house claiming it's chaining up the chickens to protect their chicks.