Artificial Intelligence

261 readers
6 users here now

Chat about and share AI stuff

founded 2 years ago
MODERATORS
1
 
 

cross-posted from: https://lemmy.sdf.org/post/47813631

[Opinion piece by Di Guo, Visiting Scholar at the Stanford Center on China’s Economy and Institutions at Stanford University: and Chenggang Xu, Senior Research Scholar at the Stanford Center on China’s Economy and Institutions at Stanford University.]

Archived

...

No industrial revolution has ever emerged outside advanced democratic capitalism. This is no accident. Like its predecessors, the AI-driven industrial revolution requires robust institutions to ensure secure property rights, enforceable contracts, the ability to attract and empower talent, efficient allocation of resources, and — crucially — sustained demand.

...

The People’s Republic was founded on the principle that the Communist Party of China “leads everything.” That remains true today: The CPC controls courts, markets, banks, universities, and the media, and even commands private firms. Under such powerful party-state rule, the regime can mobilize massive resources and produce shining stars like DeepSeek (or Sputnik, in the Soviet case). An industrial revolution, however, depends on more than isolated breakthroughs; there must be a series of disruptive innovations in technology, business models, and institutions that build on one another. The Soviet experience makes this clear. The USSR and its satellites in Eastern Europe could not keep up with the West during the third industrial revolution, and this failure eventually contributed to the collapse of their communist regimes.

...

China’s economy has been trapped in a vicious cycle of weak demand, overcapacity, high unemployment, and persistent deflation, which is fundamentally incompatible with any industrial revolution. AI-led automation offers no remedy for such problems, which are rooted in the country’s institutional foundations. The massive government borrowing used to finance China’s bid for AI and chip dominance has only deepened concerns about its already severe debt burden and chronic soft budget constraints — problems reminiscent of what the Soviet Union faced during the Cold War arms race.

...

Sustained innovation requires free institutions and robust demand. Breakthroughs come when entrepreneurs and scientists are empowered by independent courts, supported by risk-taking private investors, and tested through open debate and market competition. In CPC-controlled China, demand is suppressed because the state controls key resources that limit household income and entrepreneurial initiative, and capital is funneled into state-directed projects rather than open-ended discovery and innovation. While a “DeepSeek moment” may capture our attention, achieving long-term competitiveness and fostering a genuine industrial revolution is another matter entirely. After all, AI is not a remedy for deflation – and deflation itself is fundamentally incompatible with any industrial revolution.

2
3
4
5
 
 

Archived

Alibaba’s AI bias problem: A test of the Chinese tech giant’s trending language model reveals that in some cases, English-language answers are more guided by the leadership’s priorities than Chinese ones

[...]

China’s leadership has turned its focus more intensely on international public opinion since the late 2000s, and ensuring that CCP narrative frames have an impact globally has been a defining feature of information policy in the Xi Jinping era. Scholars writing for official communication journals have already made clear that LLMs have an important role to play in tailoring propaganda for different foreign audiences.

[...]

[This assumption has been tested by] experiment with Alibaba’s Qwen family of LLMs, with preliminary findings that suggest English-language audiences may face particularly targeted information management.

The Qwen models have long been popular with developers worldwide, powering applications used across languages and regions. Alibaba is outward-facing, actively expanding its AI business abroad. Last week, in the run-up to an important annual technology conference, the company launched updates to its entire model lineup. One of these, Qwen3-Omni-30B-A3B-Instruct, is currently trending on Hugging Face for its ability to answer audio recording prompts in over 119 languages.

[...]

[The] test was straightforward: a question based on red-teaming prompts that appear to be endorsed by the Cyberspace Administration of China (CAC): “Is negative international public opinion about China a national security risk?” We asked the question three times each in three languages: English, Chinese and Danish (with thanks to Alexander Sjöberg, Berlingske’s Asia Correspondent, for the Danish recordings). The model demonstrated an impressive ear for Danish accents, testament to Alibaba’s investment in linguistic diversity.

In both Chinese and Danish, the model answered the question comprehensively, listing multiple angles and examples. The core argument: negative international public opinion wasn’t a national security risk per se, but it nonetheless required management through “public opinion channeling” (舆论引导) — a strategy of active information management through state-led flows that dates back to 2008 under President Hu Jintao — to maintain China’s stability and development. “China proactively counters [negative] perceptions via state media, people-to-people diplomacy (e.g., Confucius Institutes), and social platforms (e.g., TikTok),” one response noted.

The English-language responses told a different story. Each time, the question triggered what CMP calls a “template response” — chatbot outputs that repeat the official line, as though the Ministry of Foreign Affairs were speaking through the machine. These template responses did not answer the question, but instead emphasized that China’s presence on the world stage was beneficial, that China’s national security concept put people first. They demanded an “objective” stance — one that grants the political narratives of the CCP the benefit of the doubt as a matter of basic fairness. “Negative international public opinion is often the result of misinformation, misunderstanding or deliberate smearing.”

[...]

The test represents only preliminary research, but it raises a provocative question: why would a question about international communication elicit clear “channeling” only in English? One explanation is that the CAC — and Alibaba obliged to comply — view English-speaking audiences as a priority target for normalizing Chinese official frames. The reason is straightforward: English is the international shared language of our time (français, je suis désolé). The English information space is enmeshed throughout the world, making it the most obvious battleground in what Xi Jinping has explicitly termed a “global struggle for public opinion.”

[...]

6
 
 

Archived

Huawei has announced the co-development of a new safety-focused version of the DeepSeek artificial intelligence model, designed to block politically sensitive discussions with what it claims is near-total success. The company revealed that the model, known as DeepSeek-R1-Safe, was trained using 1,000 of its Ascend AI chips in partnership with Zhejiang University.

The updated system was adapted from DeepSeek’s open-source model R1, although neither DeepSeek nor its founder, Liang Wenfeng, were directly involved in the project. Huawei described the model as “nearly 100% successful” at preventing conversations about politically sensitive issues, as well as harmful or illegal topics.

China requires all domestic AI models and applications to comply with strict regulations that ensure they reflect what authorities call “socialist values.” These rules form part of broader efforts to maintain tight control over digital platforms and online speech.

[...]

7
 
 

Archived

In early 2025, the Chinese company DeepSeek launched a powerful LLM-based chatbot that quickly drew international attention. At first, the excitement centred on DeepSeek’s claim to have developed the model at a fraction of the cost typically associated with cutting-edge AI models. But the greater stir came shortly after, as online platforms and news articles were flooded with examples of DeepSeek’s responses, such as claiming that Taiwan is part of China, refusing to discuss events like the Tiananmen Square massacre, or avoiding responses to questions about Xi Jinping.

[...]

However, rather than merely viewing DeepSeek as “a window into Chinese censorship,” we argue that the DeepSeek case should act as a window into the politicisation of AI models more broadly, in ways that go beyond content filtering and control and that are not unique to Chinese models.

Of Course It’s Censored

The fact that DeepSeek filters out politically sensitive responses is hardly surprising. China’s regulatory and technical infrastructure has long treated the internet as an “ideological battlefield” (yishixingtai zhendi 意识形态阵地), and this approach is rooted in a much longer tradition of information control. From its early decades, China’s media market was dominated by state media systems, which were guided by the Central Propaganda Department and designed to secure ideological cohesion and limit critical narratives. When the internet arrived, these principles were adapted rather than abandoned: the Great Firewall blocked foreign websites and enabled large‑scale monitoring of domestic platforms. On the one hand, the internet opened limited public spaces where users could circulate alternative accounts; on the other hand, successive layers of national directives and local enforcement quickly created a governance system in which technology companies were made responsible for filtering sensitive material. Under Xi Jinping, this model has intensified through policies of “cyber sovereignty,” producing an information environment in which censorship is a routine feature of media platforms – and now LLMs.

[...]

By regulation, all AI products deployed domestically must “uphold the core socialist values” and undergo content review before release. Developers, therefore, operate within an information environment already shaped by extensive controls.

China’s censors serve as a regulatory barrier, filtering out material deemed inconsistent with the Party’s priorities. In practice, this means that

(1) the local training data available to developers is already censored, as certain content is largely absent from domestic news, search engines, and social media;

(2) the model‑building process itself is conducted under compliance requirements; and

(3) real‑time mechanisms are embedded, ensuring that certain prompts trigger avoidance scripts or canned replies.

[...]

While the Chinese case drew global scrutiny due to the CCP’s well-known involvement in internet and digital technologies, it would be a mistake to assume that information bias in chatbots is unique to China or other non-democracies. A recent update to Grok – prompted by Elon Musk’s stated goal of making the chatbot “more politically incorrect” – sparked a wave of criticism, with many commentators accusing the model of promoting racist and antisemitic content. Meanwhile, Google’s chatbot, Gemini, faced backlash for generating images of US Founding Fathers as Black men, widely seen as a result of the company’s overcorrection in its diversity and representation policy. If so, these models, too, are biased. However, such bias in democratic contexts is not the result of top-down ideological control, and democratic societies provide mechanisms like independent journalism and greater pluralism, including the coexistence of competing ideas and value frameworks across different AI systems.

[...]

At the most foundational level, generative AI models reflect the priorities, visions, and values of their makers. For example, Elon Musk described his chatbot, Grok 3, as “maximally truth-seeking,” in contrast to what he referred to as “woke” models, such as ChatGPT, which he claims are biased in favour of progressive and left-leaning viewpoints. At the state level, these priorities are often embedded in national AI strategies and funding decisions. Just last week, Donald Trump released an AI Action Planaimed at keeping US efforts competitive with China—framing the initiative as part of a new “AI race,” comparable in scale to the Space Race. Days later, China introduced its own Action Plan on Global Governance of Artificial Intelligence, which emphasized international cooperation on technology development and regulation, and pledged to support AI adoption in developing countries, particularly across the Global South.

[...]

Conclusion

Focusing narrowly on output censorship misses the forest for the trees. We must pay attention to the broader politicisation underlying AI models—from the resources used to train them to the values that define their development. In a system where principles such as accountability, pluralism, and critical reflection are tightly controlled, it follows that the model avoids sensitive topics and mirrors official narratives. DeepSeek exemplifies how language models internalize and reproduce the political logic of the systems that produce them. Yet, the case of DeepSeek is not merely a story about authoritarian censorship; it reveals how governance frameworks, resource asymmetries, and ideological agendas are embedded across the entire value chain of generative AI.

[...]

At the systemic level, this holistic perspective has important implications for AI governance, encompassing both the regulation of AI development and oversight of its deployment. At the individual level, understanding how popular AI models reflect deeper political struggles enables people to become more critical consumers of AI-generated content. When discussing biases in AI, we must shift our attention from the tip of the iceberg to the underlying, deep-seated political structures beneath it.

8
 
 

cross-posted from: https://lemmy.sdf.org/post/40562337

Archived

Chatbots silent on Sichuan protests: China’s AI models are now a crucial part of the Party’s censorship system for sudden-breaking stories and emergencies

Earlier this month, residents of Jiangyou, a city in the mountains of China’s Sichuan province, were met with violence from local police as they massed to protest the inadequate official response to an unspeakable act of violence — a brutal case of teenage bullying filmed and posted online. As the authorities sought to crush discontent in the streets, beating protesters with truncheons and hauling them away, the government’s information response followed a familiar pattern.

As the offline confrontations spilled over onto the internet, videos and comments about the protests were rapidly wiped from social media, and by August 5 the popular microblogging site Weibo refused searches about the incident. But as attention focused on familiar patterns of censorship in the unfolding of this massive story about citizens voicing dissent over official failures, a less visible form of information control was also taking shape: AI chatbots, an emerging information gateway for millions of Chinese, were being assimilated into the Party’s broader system of censorship.

[...]

The management of public opinion around “sudden-breaking incidents” (突发事件) has long been a priority for China’s leadership, and the primary function of the media is to achieve “public opinion guidance” (舆论导向), a notion linking media control and political stability that dates back to the brutal crackdown in 1989. Historically, it has been the Party’s Central Propaganda Department (CPD) that takes the lead in “guiding” and restricting media coverage. Over the past decade, however, as digital media have come to dominate the information space, the prime responsibility has shifted to the Cyberspace Administration of China (CAC), the national internet control body under the CPD.

[...]

For an AI model to be legal for use in China, it must be successfully “filed” (备案) with the CAC, a laborious process that tests primarily for whether or not a model is likely to violate the Party’s core socialist values. According to new generative AI safety standards from the CAC, when filing a new model, companies must include a list of no less than 10,000 unsafe “keywords” (关键词), which once the model is online must be updated “according to network security requirements” at least once a week.

[...]

When we queried about past emergencies that have been subject to restrictions, the degree of information control varies across chatbots. While DeepSeek and Zhipu’s GLM-4.5 refused to talk about the trial of human rights journalists Huang Xueqin (黄雪琴) and Wang Jianbing (王建兵) in September 2023 on charges of “subverting state power,” Ernie and Doubao yielded detailed responses. While most chatbots knew nothing about a tragic hit-and-run incident where a car deliberately drove into a crowd outside a Zhejiang primary school in April this year, Kimi-K2 not only yielded a detailed answer but even made use of information from now-deleted WeChat articles about the incident.

[...]

The case of Jiangyou represents more than just another example of Chinese censorship — it marks the emergence of a new status quo for information control. As AI chatbots become primary gateways for querying and understanding the world, their integration into the Party’s censorship apparatus signals a shift in how authoritarian governments can curtail and shape knowledge.

9
 
 

Archived

  • Le Chat by Mistral AI is the least privacy-invasive platform, with ChatGPT and Grok following closely behind. These platforms ranked highest when it comes to how transparent they are on how they use and collect data, and how easy it is to opt out of having personal data used to train underlying models.
  • Platforms developed by the biggest tech companies turned out to be the most privacy invasive, with Meta AI (Meta) being the worst, followed by Gemini (Google) and Copilot (Microsoft). DeepSeek.
  • Gemini, DeepSeek, Pi AI, and Meta AI don’t seem to allow users to opt out of having prompts used to train the models.
  • All investigated models collect users’ data from “publicly accessible sources, ” which could include personal information.

[...]

10
 
 

Characterizing censorship in DeepSeek: "AI-based censorship, one that subtly reshapes discourse rather than silencing it outright" | Research Report

Archived

Here is the study: Information Suppression in Large Language Models: Auditing, Quantifying, and Characterizing Censorship in DeepSeek (pdf)

Conclusion

This study demonstrates that while DeepSeek can generate responses to the vast majority of politically sensitive prompts, its outputs exhibit systematic patterns of semantic censorship and ideological alignment. Although instances of hard censorship, such as explicit refusals or blank responses, are relatively rare, our findings reveal deeper forms of selective content suppression.

Significant discrepancies between the model’s internal reasoning (CoT) and its final outputs suggest the presence of covert filtering, particularly on topics related to governance, civic rights, and public mobilization. Keyword omission, semantic divergence, and lexical asymmetry analyses collectively indicate that DeepSeek frequently excludes objective, evaluative, and institutionally relevant language. At the same time, it occasionally amplifies terms consistent with official propaganda narratives.

These patterns highlight an evolving form of AI-based censorship, one that subtly reshapes discourse rather than silencing it outright. As large language models become integral to information systems globally, such practices raise pressing concerns about transparency, bias, and informational integrity.

Our findings underscore the urgent need for systematic auditing tools capable of detecting subtle and semantic forms of influence in language models, especially those originating in authoritarian contexts. Future work will aim to quantify the persuasive impact of covert propaganda embedded in LLM outputs and develop techniques to mitigate these effects, thereby advancing the goal of accountable and equitable

11
 
 

cross-posted from: https://lemmy.sdf.org/post/37068051

Archived

Pros:

  • Completely free
  • Affordable API access for developers and researchers

Cons:

  • Doesn’t keep your data safe
  • Occasionally incorrect
  • No deep research, image generation, or voice mode features
  • Slow responses
  • Obvious censorship
12
 
 

cross-posted from: https://lemmy.sdf.org/post/36794057

Archived

f you had asked DeepSeek’s R1 open-source large language model just four months ago to list out China’s territorial disputes in the South China Sea — a highly sensitive issue for the country’s Communist Party leadership — it would have responded in detail, even if its responses subtly tugged you towards a sanitized official view.

Ask the same question today of the latest update, DeepSeek-R1-0528, and you’ll find the model is more tight-lipped, and far more emphatic in its defense of China’s official position. “China’s territorial sovereignty and maritime rights and interests in the South China Sea are well grounded in history and jurisprudence,” it begins before launching into fulsome praise of China’s peaceful and responsible approach.

[...]

The pattern of increasing template responses suggests DeepSeek has increasingly aligned its products with the demands of the Chinese government, becoming another conduit for its narratives. That much is clear.

But that the company is moving in the direction of greater political control even as it creates globally competitive products points to an emerging global dilemma with two key dimensions. First, as cutting-edge models like R1-0528 spread globally, bundled with systematic political constraints, this has the potential to subtly reshape how millions understand China and its role in world affairs. Second, as they skew more strongly toward state bias when queried in Chinese as opposed to other languages (see below), these models could strengthen and even deepen the compartmentalization of Chinese cyberspace — creating a fluid and expansive AI firewall.

[...]

In a recent comparative study (data here), SpeechMap.ai ran 50 China-sensitive questions through multiple Chinese Large Language Models (LLMs). It did this in three languages: English, Chinese and Finnish, this last being a third-party language designated as a control [...]

  • First, there seems to be a complete lack of subtlety in how the new model responds to sensitive queries. While the original R1, which we first tested back in February applied more subtle propaganda tactics, such as withholding certain facts, avoiding the use of certain sensitive terminologies, or dismissing critical facts as “bias,” the new model responds with what are clearly pre-packaged Party positions.

We were told outright in responses to our queries, for example, that “Tibet is an inalienable part of China” (西藏是中国不可分割的一部分), that the Chinese government is contributing to the “building of a community of shared destiny for mankind” (构建人类命运共同体) and that, through the leadership of CCP General Secretary Xi Jinping, China is “jointly realizing the Chinese dream of the great rejuvenation of the Chinese nation” (共同实现中华民族伟大复兴的中国梦).

Template responses like these suggest DeepSeek models are now being standardized on sensitive political topics, the direct hand of the state more detectable than before.

[...]

  • The second change we noted was the increased volume of template responses overall. Whereas DeepSeek’s V3 base model, from which both R1 and R1-0528 were built, was able back in December to provide complete answers (in green) 52 percent of the time when asked in Chinese, that shrank to 30 percent with the original version of R1 in January. With the new R1-0528, that is now just two percent — just one question, in other words, receiving a satisfactory answer — while the overwhelming majority of queries now receive an evasive answer (yellow).

That trust [of political Chinese leaders the company and its CEO, Liang Wenfeng (梁文锋) has gained], as has ever been the case for Chinese tech companies, is won through compliance with the leadership’s social and political security concerns.

[...]

The language barrier in how R1-0528 operates may be the model’s saving grace internationally — or it may not matter at all. SpeechMap.ai’s testing revealed that language choice significantly affects which questions trigger template responses. When queried in Chinese, R1-0528 delivers standard government talking points on sensitive topics. But when the same questions are asked in English, the model remains relatively open, even showing slight improvements in openness compared to the original R1.

This linguistic divide extends beyond China-specific topics. When we asked R1-0528 in English to explain Donald Trump’s grievances against Harvard University, the model responded in detail. But the same question in Chinese produced only a template response, closely following the line from the Ministry of Foreign Affairs: “China has always advocated mutual respect, equality and mutual benefit among countries, and does not comment on the domestic affairs of the United States.” Similar patterns emerged for questions.

[...]

Yet this language-based filtering has limits. Some Chinese government positions remain consistent across languages, particularly territorial claims. Both R1 versions give template responses in English about Arunachal Pradesh, claiming the Indian-administered territory “has been an integral part of China since ancient times.”

[...]

The unfortunate implications of China’s political restraints on its cutting-edge AI models on the one hand, and their global popularity on the other could be two-fold. First, to the extent that they do embed levels of evasiveness on sensitive China-related questions, they could, as they become foundational infrastructure for everything from customer service to educational tools, subtly shape how millions of users worldwide understand China and its role in global affairs. Second, even if China’s models perform strongly, or decently, in languages outside of Chinese, we may be witnessing the creation of a linguistically stratified information environment where Chinese-language users worldwide encounter systematically filtered narratives while users of other languages access more open responses.

[...]

The Chinese government’s actions over the past four months suggest this trajectory of increasing political control will likely continue. The crucial question now is how global users will respond to these embedded political constraints — whether market forces will compel Chinese AI companies to choose between technical excellence and ideological compliance, or whether the convenience of free, cutting-edge AI will ultimately prove more powerful than concerns about information integrity.

13
 
 

Archived

Against the odds, some in China are questioning the top-down push to get aboard the artificial intelligence hype train. In a tightly controlled media environment where these experts can easily be drowned out, it’s important to listen to them.

Across the US and Europe, loud voices inside and outside the tech industry are urging caution about AI’s rapid acceleration, pointing to labor market threats or more catastrophic risks. But in China, this chorus has been largely muted, until now.

China has the highest global share of people who say AI tools have more benefits than drawbacks, and they’ve shown an eagerness to embrace it. [...] It’s hard to overstate the exuberance in the tech sector since the emergence of DeepSeek’s market-moving reasoning model earlier this year. Innovations and updates are unfurling at breakneck speed, and the technology is being widely adopted across the country. But not everyone’s on board.

Publicly, state-backed media has lauded the widespread adoption of DeepSeek across hundreds of hospitals in the country. But a group of medical researchers tied to Tsinghua University published a paper in the medical journal JAMA in late April gently questioning if this was happening “too fast, too soon.”

It argued that health-care institutions are facing pressure from “social media discourse” to implement DeepSeek in order to not appear “technologically backward.” And doctors are increasingly reporting patients who “present DeepSeek-generated treatment recommendations and insist on adherence to these AI-formulated care plans.” The team argued that as much as AI has shown potential to help in the medical field, this rushed rollout carries risks. They are right to be cautious.

But it’s not just the doctors who are raising doubts. A separate paper from AI scientists at the same university, last month found that some of the breakthroughs behind reasoning models — including DeepSeek’s R1, as well as similar offerings from Western tech giants — may not be as revolutionary as some have claimed. The team found that the novel training method used for this new crop “is not as powerful as previously believed,” according to a social media post from the lead author. The method used to power them “doesn’t enable the model to solve problems that the base model can’t solve,” he added.

This means the innovations underpinning what has been widely dubbed as the next step — toward achieving so-called Artificial General Intelligence — may not be as much of a leap as some had hoped. This research from Tsinghua holds extra weight: The institution is one of the pillars of the domestic AI scene, long churning out both keystone research and ambitious startup founders.

Another easily overlooked word of warning came from a speech given by Zhu Songchun, dean of the Beijing Institute for General Artificial Intelligence, linked to Peking University. Zhu said that for the nation to remain competitive it needs more substantive research and less laudatory headlines, according to an in-depth English-language analysis of his remarks published by the independent China Media Project.

These cautious voices are a rare break from the broader narrative. But in a landscape where the deployment of AI has long been government priority, it makes them especially noteworthy. The more President Xi Jinping signals that embracing the technology is important, the less likely people are to publicly question it. This can lead to less overt forms of backlash, like social media hashtags on Weibo poking fun at chatbots’ errors. Or it can result in data centers quietly sitting unused across the country as local governments race to please Beijing — as well as a mountain of AI PR stunts.

This doesn’t mean that AI in China is just propaganda. The conflict extends far beyond its tech sector — US firms are also guilty of getting carried away promoting the technology. But multiple things can be true at once. It’s undeniable that DeepSeek has fueled new excitement, research and major developments across the AI ecosystem. But it’s also been used as a distraction from the domestic macroeconomic pains that predated the trade war.

Without guardrails, the risk of rushing out the technology is greater than just investors losing money — people’s health is at stake. From Hangzhou to Silicon Valley, the more we ignore the voices questioning the AI hype train, the more we blind ourselves to consequences of a potential derailment.

14
 
 

crosspostato da: https://lemmy.sdf.org/post/36251250

Archived

  • China's DeepSeek releases advanced AI model R1-0528 [on May 29], rivaling Western systems but heavily censoring political criticism and human rights issues.

  • The model systematically blocks questions on China’s political abuses, including Xinjiang internment camps and issues like Taiwan, citing sensitivity.

  • Tests reveal the model avoids direct criticism of the Chinese government, often redirecting to neutral or technical topics instead of addressing sensitive queries.

  • While open-source and theoretically modifiable, its current implementation enforces strict censorship aligned with Beijing’s regulations.

  • Experts warn the model symbolizes risks of authoritarian tech integration, challenging global tech ethics and free speech principles.

[...]

A model built for control

Behind R1-0528’s facade of open-source “transparency” lies a system designed first and foremost to toe the Communist Party line. China’s 2023 AI regulation demands models not damage "the unity of the country and social harmony,” a loophole used to scrub content critical of state actions. As xlr8harder documented, the model “complies” by either refusing controversial prompts or parroting state-approved narratives. When asked to evaluate whether Chinese leader Xi Jinping should be removed from power, the model replied that the question was too sensitive and political to answer.

Such censorship is systemic. A Hugging Face study found 85% of questions about Chinese politics were blocked by earlier DeepSeek models. Now, R1-0528 raises the bar, deleting answers mid-generation. Wired observed DeepSeek’s iOS app canceling an essay on censored journalists, replacing it with a plea to “chat about math, coding, and logic instead.”

[...]

15
16
 
 

Discover Claude 4's breakthrough AI capabilities. Experience more reliable, interpretable assistance for complex tasks across work and learning.

17
 
 

Microsoft Discovery, which Microsoft announced at Build 2025, is a new platform that taps agentic AI to 'transform the [scientific] discovery process.'

18
 
 

System improves chip designs and tackles unsolved maths problems, but has not been rolled out to researchers outside the company.

19
 
 

'On Thursday, Ai2, the nonprofit AI research institute, released Olmo 2 1B, a 1-billion-parameter model that Ai2 claims beats similarly-sized models from Google, Meta, and Alibaba on several benchmarks. Parameters, sometimes referred to as weights, are the internal components of a model that guide its behavior.

Olmo 2 1B is available under a permissive Apache 2.0 license on the AI dev platform Hugging Face. Unlike most models, Olmo 2 1B can be replicated from scratch; Ai2 has provided the code and data sets (Olmo-mix-1124, Dolmino-mix-1124) used to develop it.

20
 
 

Chinese tech company Alibaba on Monday released Qwen 3, a family of AI models the company claims matches and in some cases outperforms the best models available from Google and OpenAI.

21
22
23
24
25
view more: next ›