Fuck AI

3019 readers
663 users here now

"We did it, Patrick! We made a technological breakthrough!"

A place for all those who loathe AI to discuss things, post articles, and ridicule the AI hype. Proud supporter of working people. And proud booer of SXSW 2024.

founded 1 year ago

I want to apologize for changing the description without telling people first. After reading arguments about how overhyped AI is, I'm not that frightened by it anymore. It's awful that it hallucinates and spews garbage onto YouTube and Facebook, but it won't completely upend society. I'll keep posting articles about AI hype, because they're quite funny, and they give me a sense of ease knowing that, even though blatant lies are easy to tell, it's way harder to fake actual evidence.

I also want to account for people who think there's nothing anyone can do. I've come to realize that there might not be a way to attack OpenAI, MidJourney, or Stable Diffusion. These people, whom I'll call Doomers after an AIHWOS article, are perfectly welcome here. You can certainly come along and read the AI Hype Wall Of Shame, or about the diminishing returns of deep learning. Maybe one of you will even become a mod!

Boosters, or people who heavily use AI and see it as a source of good, ARE NOT ALLOWED HERE! I've seen Boosters dox, threaten, and harass artists over on Reddit and Twitter, and they constantly champion artists losing their jobs. They go against the very purpose of this community. If I hear a comment on here saying that AI is "making things good" or cheering on putting anyone out of a job, and the commenter does not retract their statement, said commenter will be permanently banned. FA&FO.


Alright, I just want to clarify that I've never modded a Lemmy community before. I just live by the mantra of "if nobody's doing the right thing, do it yourself". I was also motivated by u/spez's decision to let an unknown AI company use Reddit's imagery. If you know how to moderate well, please let me know. Also, feel free to discuss ways to push back on AI development, and if you have evidence of AI Bros being cruel and remorseless, make sure to save it for the people still on the fence. Remember, we don't know if AI is unstoppable. It takes loads of energy and tons of circuitry to run. There may very well be an end to this cruelty, and it's up to us to begin that end.


It seemed impossible. I set up this instance just to browse Lemmy from my own server, but it was slow as hell the whole week. I got new pods, put Postgres on a different pod, pictrs on another, etc.

But it was still slow as hell. I didn't know why until a few hours ago: 500 GETs in a MINUTE from ClaudeBot and GPTBot. Wth is this? Why? I blocked those user agents with a blocking extension on NGINX, and now it works.
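For anyone hitting the same crawler flood, user-agent blocking can also be done with plain NGINX directives, no extension required. A minimal sketch (the bot names are the two from this rant; the hostname and upstream are placeholders, and your instance's config will differ):

```nginx
# Flag requests whose User-Agent matches a known AI crawler.
map $http_user_agent $blocked_ai_bot {
    default        0;
    ~*ClaudeBot    1;
    ~*GPTBot       1;
}

server {
    listen 80;
    server_name lemmy.example;          # placeholder hostname

    # Reject flagged crawlers before they ever reach the backend.
    if ($blocked_ai_bot) {
        return 403;
    }

    location / {
        proxy_pass http://lemmy_backend;  # placeholder upstream
    }
}
```

The `map` runs once per request and keeps the matching logic out of the `location` blocks; a bare `return` inside `if` is one of the few uses of `if` that is safe in NGINX.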

WHY? So Google can say that you should eat glass?

Life is hell now. If before you could at least host a website, now even that is painful.

Sorry for the rant.


Sen. Ted Cruz (R-Texas) wants to enforce a 10-year moratorium on AI regulation by making states ineligible for broadband funding if they try to impose any limits on development of artificial intelligence.


A day after announcing new AI models designed for U.S. national security applications, Anthropic has appointed a national security expert, Richard Fontaine, to its long-term benefit trust.

In a statement, Anthropic CEO Dario Amodei said... “Richard’s expertise comes at a critical time as advanced AI capabilities increasingly intersect with national security considerations,” Amodei continued. “I’ve long believed that ensuring democratic nations maintain leadership in responsible AI development is essential for both global security and the common good.”

Fontaine, who as a trustee won’t have a financial stake in Anthropic, previously served as a foreign policy adviser to the late Sen. John McCain and was an adjunct professor at Georgetown teaching security studies. For more than six years, he led the Center for A New American Security, a national security think tank based in Washington, D.C., as its president.

Anthropic has increasingly engaged U.S. national security customers as it looks for new sources of revenue. In November, the company teamed up with Palantir and AWS, the cloud computing division of Anthropic’s major partner and investor, Amazon, to sell Anthropic’s AI to defense customers.

To be clear, Anthropic isn’t the only top AI lab going after defense contracts. OpenAI is seeking to establish a closer relationship with the U.S. Defense Department, and Meta recently revealed that it’s making its Llama models available to defense partners. Meanwhile, Google is refining a version of its Gemini AI capable of working within classified environments, and Cohere, which primarily builds AI products for businesses, is also collaborating with Palantir to deploy its AI models.


Source (Bluesky)


On Thursday, Anthropic unveiled specialized AI models designed for US national security customers. The company released "Claude Gov" models that were built in response to direct feedback from government clients to handle operations such as strategic planning, intelligence analysis, and operational support. The custom models reportedly already serve US national security agencies, with access restricted to those working in classified environments...

Anthropic joins other major AI companies competing for lucrative government work, reports TechCrunch. OpenAI is working to build closer ties with the US Defense Department, while Meta recently made its Llama models available to defense partners. Google is developing a version of its Gemini AI model that can operate within classified environments. Business-focused AI company Cohere is also collaborating with Palantir to deploy its models for government use.

The push into defense work represents a shift for some AI companies that previously avoided military applications. These specialized government models often require different capabilities than consumer AI tools, including the ability to process classified information and work with sensitive intelligence data without triggering safety restrictions that might block legitimate government operations.


20 likes, 1200 comments, 2 Adam's apples


Source (Bluesky)


Artist (Bluesky), Source (Bluesky)


Source (Facebook)


One use of this ANTIVIBE tool and all the vibe coders around you will flee.


Artificial intelligence is on full display at an exposition in Washington, DC, where one of the main focuses is how to incorporate AI into weapons systems. Organisers say the technology will lead to a better future, but critics are warning of the dangers that come with the high-tech advances. Al Jazeera’s Shihab Rattansi shows us around.

Youtube: https://www.youtube.com/watch?v=HYTpd1fr8xE


I sent a co-worker a link to a file on my OneDrive via Teams and he got this warning. What the hell is up with that warning artwork? I feel like a human wouldn't have designed it that way.
