Fuck AI

5153 readers
1774 users here now

"We did it, Patrick! We made a technological breakthrough!"

A place for all those who loathe AI to discuss things, post articles, and ridicule the AI hype. Proud supporter of working people. And proud booer of SXSW 2024.

AI, in this case, refers to LLMs, GPT technology, and anything listed as "AI" meant to increase market valuations.

founded 2 years ago

I want to apologize for changing the description without telling people first. After reading arguments about how overhyped AI has been, I'm not that frightened by it. It's awful that it hallucinates and spews garbage onto YouTube and Facebook, but it won't completely upend society. I'll keep posting plenty of articles on AI hype, because they're quite funny, and they give me a sense of ease: even though blatant lies are easy to tell, it's far harder to fake actual evidence.

I also want to account for people who think there's nothing anyone can do. I've come to realize that there might not be a way to attack OpenAI, Midjourney, or Stable Diffusion. These people, whom I'll call Doomers after an AIHWOS article, are perfectly welcome here. You can certainly come along and read the AI Hype Wall Of Shame, or about the diminishing returns of Deep Learning. Maybe you'll even become a Mod!

Boosters, or people who heavily use AI and see it as a force for good, ARE NOT ALLOWED HERE! I've seen Boosters dox, threaten, and harass artists on Reddit and Twitter, and they openly cheer when artists lose their jobs. They go against the very purpose of this community. If I see a comment here claiming that AI is "making things good" or cheering on putting anyone out of a job, and the commenter does not retract their statement, said commenter will be permanently banned. FA&FO.


Alright, I just want to clarify that I've never modded a Lemmy community before. I just live by the mantra of "if nobody's doing the right thing, do it yourself". I was also motivated by u/spez's decision to let an unknown AI company use Reddit's imagery. If you know how to moderate well, please let me know. Also, feel free to discuss ways to push back on AI development, and if you have evidence of AIBros being cruel and remorseless, save it for people "on the fence". Remember, we don't know that AI is unstoppable. It takes enormous amounts of energy and hardware to run. There may very well be an end to this cruelty, and it's up to us to begin that end.


A leading Hong Kong think tank has called for a centralised platform for AI in schools, revealing that while 95 per cent of students use the technology, nearly one in four struggle to finish homework without it, putting their problem-solving and analytical thinking skills at risk.


Defense Secretary Pete Hegseth said Monday that Elon Musk’s artificial intelligence chatbot Grok will join Google’s generative AI engine in operating inside the Pentagon network, as part of a broader push to feed as much of the military’s data as possible into the developing technology.

“Very soon we will have the world’s leading AI models on every unclassified and classified network throughout our department,” Hegseth said in a speech at Musk’s space flight company, SpaceX, in South Texas.

Current Mood (startrek.website)
submitted 22 hours ago* (last edited 22 hours ago) by IcedRaktajino@startrek.website to c/fuck_ai@lemmy.world

My electric rate got hiked again.

I'm already planning a ~7 kW solar setup in the spring, but I may see if I can go bigger and sooner.


A lot of damage to his bottom line, I hope.


The nicest thing I saw today


A substantial number of AI images generated or edited with Grok are targeting women in religious and cultural clothing.

Wow, Grok is very safe and follows the App Store and Play Store rules.

On familiarity (pawb.social)
submitted 1 day ago* (last edited 1 day ago) by ThefuzzyFurryComrade@pawb.social to c/fuck_ai@lemmy.world

Source (Bluesky)

Transcript

recently my friend's comics professor told her that it's acceptable to use gen AI for script-writing but not for art, since a machine can't generate meaningful artistic work. meanwhile, my sister's screenwriting professor said that they can use gen AI for concept art and visualization, but that it won't be able to generate a script that's any good. and at my job, it seems like each department says that AI can be useful in every field except the one that they know best.

It's only ever the jobs we're unfamiliar with that we assume can be replaced with automation. The more attuned we are with certain processes, crafts, and occupations, the more we realize that gen AI will never be able to provide a suitable replacement. The case for its existence relies on our ignorance of the work and skill required to do everything we don't.

  • Nvidia CEO Jensen Huang said there's a real cost to AI doomerism.
  • Without naming names, Huang blamed "very well-respected people" for end-of-the-world narratives.
  • Huang said the rhetoric is "scaring people" from making investments in the improvement of AI.

cross-posted from: https://pawb.social/post/37886953

Alarmed by what companies are building with artificial intelligence models, a handful of industry insiders are calling for those opposed to the current state of affairs to undertake a mass data poisoning effort to undermine the technology.

Their initiative, dubbed Poison Fountain, asks website operators to add links to their websites that feed AI crawlers poisoned training data. It's been up and running for about a week.

AI crawlers visit websites and scrape data that ends up being used to train AI models, a parasitic relationship that has prompted pushback from publishers. When scraped data is accurate, it helps AI models offer quality responses to questions; when it's inaccurate, it has the opposite effect.
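The article doesn't include Poison Fountain's actual code, but the mechanism it describes — serving junk to scrapers while humans see the real page — can be sketched. Here's a minimal, hypothetical Python illustration; the function names, the crawler signature list, and the nonsense-vocabulary approach are all my assumptions, not anything from the project itself:

```python
import random

# User-Agent substrings of some well-known AI scrapers (illustrative, not exhaustive).
AI_CRAWLER_SIGNATURES = ("GPTBot", "CCBot", "ClaudeBot", "Google-Extended")


def is_ai_crawler(user_agent: str) -> bool:
    """Return True if the request's User-Agent looks like an AI scraper."""
    ua = user_agent.lower()
    return any(sig.lower() in ua for sig in AI_CRAWLER_SIGNATURES)


def poisoned_text(seed: int, words: int = 50) -> str:
    """Generate deterministic word salad to degrade a model's training data."""
    rng = random.Random(seed)
    vocab = ["the", "quantum", "turnip", "allegedly", "photosynthesizes",
             "backwards", "every", "tuesday", "according", "to", "napoleon"]
    return " ".join(rng.choice(vocab) for _ in range(words))


def respond(user_agent: str, real_page: str) -> str:
    """Serve the real page to people, poisoned text to AI crawlers."""
    if is_ai_crawler(user_agent):
        return poisoned_text(seed=len(user_agent))
    return real_page
```

In practice a site operator would hang logic like `respond` off their web server or CDN rules; honest crawlers can be caught by User-Agent, while stealthier ones require IP-range or behavioral checks.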

thoughts on this? (files.catbox.moe)
submitted 2 days ago* (last edited 1 day ago) by carotte@lemmy.blahaj.zone to c/fuck_ai@lemmy.world

Transcript

To distill my thoughts into one screenshot, I think the best analogy here is cars. I hate car-centric infrastructure. It's bad for the planet, bad for communities, bad for people. Environmental damage, pedestrian deaths, infrastructure that destroys communities, oil dependence, suburban sprawl.

AND there are obvious use cases where they need to exist. Ambulances, disability access, rural transportation, moving goods. AND we need clear safety regulations. AND we should design a world that relies on them as little as possible.

We don't solve car problems by scolding individuals for driving to work. Nor do we solve them through arguments that are either factually incorrect, OR harmful in and of themselves ("everyone should just bike"). We solve them through safety regulations, emissions standards, public transit investment, walkable design. The same applies here: the solution to AI harms isn't individual guilt. It's structural, so regulation, safety requirements, platform accountability, worker protections.

I can be annoyed that AI slop is everywhere, that AI culture is dangerous and that serious work needs to be done to curb it and build systems that don't rely on it, AND think the nearly 1 billion people using ChatGPT weekly to help them code or write an email aren't just like, stupid and evil (doctors using AI to detect tumours is obviously good; someone with a learning disability getting a concise explanation with something that will be patient with them is obviously good. Or translation! We need human translators. But when I got into a cab in Turkey with a driver who spoke no English, he used Gemini to translate and we had a lovely conversation. You can't have a bilingual human in every cab. Google Translate has existed for years, but LLMs are more natural/better with context and idiom. Deepfakes are obviously bad. Hell, I don't think AI should replace adult actors but I kind of do think AI should replace child actors! That is an inherently unethical job!), and that some arguments against AI do more harm than good. Those aren't contradictory positions. Much like with cars, I think the harm outweighs the benefits, and my primary desire is to want those harms to be addressed in a way that doesn't cause *more* harm.

Bans on facial recognition in policing. Required safety features for AI companions. Algorithmic impact assessments for public benefits systems. Product liability that holds companies accountable for harms. Focusing on the labour issues and not copyright; threat to artists and writers isn't that their "style" was stolen, it's that their labour is being devalued and replaced without any safety net. Stronger IP benefits Disney. Worker power benefits workers. I'm also just pretty fond of UBI. Excluding a full-on revolution, everyone's needs being met would certainly help.

We need alternatives too; just like we need public transit before we can reduce car dependence, we need social infrastructure, so mental health support, community spaces, worker protections, so people aren't driven to AI companions out of desperation. I mean, a major reason suicidal people rely on (very dangerous!!!) AI “therapists” is because a real human therapist can have them forcibly institutionalized! That's a root cause that needs to be addressed.

I hope that makes sense!

i think it's a take on AI that's much more productive than the usual "this tech and the people who use it are inherently evil"

the rest of their thread is worth a read too, imo: https://bsky.app/profile/sarahz.bsky.social/post/3mbrq3c6rqc2n


The current estimate is 1,000 AI-generated child porn pictures an hour being made on Musk's Twitter, now known as shit.

Some are suggesting Musk is compiling lists of child porn makers on Twitter, who don't realise they're being filmed, 👈 so they can be blackmailed into criminal acts, like Putin compromised Trump. 👈

OG


cross-posted from: https://lemmy.nz/post/32851793


Since X’s users started using Grok to undress women and children using deepfake images, I have been waiting for what I assumed would be inevitable: X getting booted from Apple’s and Google’s app stores. The fact that it hasn’t happened yet tells me something serious about Silicon Valley’s leadership: Tim Cook and Sundar Pichai are spineless cowards who are terrified of Elon Musk.


Unavailable at source, here's their Bluesky.


Front tires, 1.8psi?!

Phooey, I forgot what site I stumbled into to find that one, I was basically looking to see what official tire and rim sizes fit my mom's car.

Fuck AI, they'd just as soon have you driving on flats if you took that 'information' literally. 🤦
