this post was submitted on 22 Sep 2025

Machine Learning | Artificial Intelligence


cross-posted from: https://lemmy.sdf.org/post/42723239

Archived

Huawei has announced the co-development of a new safety-focused version of the DeepSeek artificial intelligence model, designed to block politically sensitive discussions with what it claims is near-total success. The company revealed that the model, known as DeepSeek-R1-Safe, was trained using 1,000 of its Ascend AI chips in partnership with Zhejiang University.

The updated system was adapted from DeepSeek's open-source model R1, although neither DeepSeek nor its founder, Liang Wenfeng, was directly involved in the project. Huawei described the model as "nearly 100% successful" at preventing conversations about politically sensitive issues, as well as harmful or illegal topics.

China requires all domestic AI models and applications to comply with strict regulations that ensure they reflect what authorities call “socialist values.” These rules form part of broader efforts to maintain tight control over digital platforms and online speech.

[...]

RobotToaster@mander.xyz 1 point 5 months ago

I agree, this just happens to be the first time I've seen the press actually call it censorship.