Fuck AI

3376 readers
1782 users here now

"We did it, Patrick! We made a technological breakthrough!"

A place for all those who loathe AI to discuss things, post articles, and ridicule the AI hype. Proud supporter of working people. And proud booer of SXSW 2024.

founded 1 year ago

I want to apologize for changing the description without telling people first. After reading arguments about how overhyped AI has been, I'm not that frightened by it. It's awful that it hallucinates and spews garbage onto YouTube and Facebook, but it won't completely upend society. I'll keep posting articles on AI hype, because they're quite funny, and they give me a sense of ease knowing that, even though blatant lies are easy to tell, it's much harder to fake actual evidence.

I also want to factor in people who think there's nothing anyone can do. I've come to realize that there might not be a way to attack OpenAI, MidJourney, or Stable Diffusion. These people, whom I'll call Doomers after an AIHWOS article, are perfectly welcome here. You can certainly come along and read the AI Hype Wall Of Shame, or about the diminishing returns of Deep Learning. Maybe one of you could even become a Mod!

Boosters, or people who heavily use AI and see it as a source of good, ARE NOT ALLOWED HERE! I've seen Boosters dox, threaten, and harass artists over on Reddit and Twitter, and they constantly champion artists losing their jobs. They go against the very purpose of this community. If I hear a comment on here saying that AI is "making things good" or cheering on putting anyone out of a job, and the commenter does not retract their statement, said commenter will be permanently banned. FA&FO.

Alright, I just want to clarify that I've never modded a Lemmy community before. I just live by the mantra of "if nobody's doing the right thing, do it yourself". I was also motivated by u/spez's decision to let an unknown AI company use Reddit's imagery. If you know how to moderate well, please let me know. Also, feel free to discuss ways to attack AI development, and if you have evidence of AIBros being cruel and remorseless, make sure to save it for people "on the fence". Remember, we don't know if AI is unstoppable. It takes loads of energy and tons of circuitry to run. There may very well be an end to this cruelty, and it's up to us to begin that end.

So, before you get the wrong impression: I'm 40. Last year I enrolled in a master's program in IT to further my career. It's a special online master's offered by a university near me and geared towards people in full-time employment. Almost everybody is in their 30s or 40s. You actually need to show your employment contract as proof when you apply to the university.

Last semester I took a project management course. We had to find a partner and simulate a project: basically write a project plan for an IT project, think about what problems could arise and plan how to solve them, describe what roles we'd need for the team, etc. All the paperwork of a project without actually doing the project itself. My partner wrote EVERYTHING with ChatGPT. I kept having the same discussion with him over and over: write the damn thing yourself. Don't trust ChatGPT. We'd need citations anyway, so it's faster to write it yourself and insert the citations than to retroactively figure them out for a chapter ChatGPT wrote. He didn't listen to me and had barely any citations in his part. I wrote my part myself. I got a good grade, and he said he got one, too.

This semester turned out to be even more frustrating. I'm taking a database course. SQL and such. There's again a group project. We get access to a database of a fictional company and have to perform certain operations on it. We decided in the group that each member would prepare the code by themselves before we got together, compared our homework, and decided what code to use on the actual database. So far, whenever I checked the other group members' code, it was way better than mine. It incorporated a lot of things the script hadn't taught us at that point. I felt pretty stupid because they were obviously way ahead of me - until we had a video call. One of the other girls shared her screen and was working in our database. Something didn't work. What did she do? Open a ChatGPT tab and let the "AI" fix the code. She had also written a short Python script to help fix some errors in the data, and yes, of course that turned out to be written by ChatGPT.

It's so frustrating. For me it's cheating, but a lot of professors see using ChatGPT as using the latest tools at our disposal. I would love to honestly learn how to do these things myself, but the majority of my classmates seem to see that differently.

TL;DR for AI writing warning signs:

  • Use of the em-dash (—)
  • Parallel sentence structure (e.g. "It's not just X, it's Y")
  • Grouping things in threes or at least odd numbers
  • Delineating line breaks with emojis
  • Odd/unnatural verbiage
  • Overuse of filler words (talking like your average LinkedIn post)
  • Exaggerated and empty praise
  • Weird analogies and similes
  • Restating and overclarifying points

TL;DR for signs something was written by a human:

  • Including anecdotes
  • Written in the first person
  • Tangents and nonlinear storytelling
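For fun, the surface-level signs in the first list can be sketched as a toy heuristic checker. This is purely an illustration, not a reliable detector: the patterns below are my own assumptions, and plenty of human writing trips them too.

```python
import re

# Toy patterns for a few of the warning signs listed above.
# These are illustrative assumptions, not a dependable AI detector.
SIGNS = {
    "em-dash": re.compile("\u2014"),
    "not-just-x-its-y": re.compile(r"\bnot just\b.{1,60}?\bit'?s\b", re.IGNORECASE),
    "emoji-line-break": re.compile(r"^\s*[\U0001F300-\U0001FAFF]", re.MULTILINE),
}

def flag_signs(text):
    """Return the names of the toy warning signs found in the text."""
    return sorted(name for name, pattern in SIGNS.items() if pattern.search(text))

sample = ("It's not just a tool\u2014it's a revolution.\n"
          "\U0001F680 Boost your productivity today!")
print(flag_signs(sample))  # all three toy signs fire on this sample
```

A real judgment would need far more than surface regexes, which is exactly why the second list (anecdotes, first person, tangents) matters: those are much harder to pattern-match.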

A few colleagues and I were sat at our desks the other day, and one of them asked the group, "if you were an animal, what animal would you be?"

I answered with my favourite animal, and we had a little discussion about it. My other colleague answered with two animals, and we tossed those answers back and forth, discussing them and making jokes. We asked the colleague who had asked the question what they thought they'd be, and we discussed their answer.

Regular, normal, light-hearted (time wasting lol) small talk at work between friendly coworkers.

We asked the fourth coworker. He said he'd ask ChatGPT.

It was a really weird moment. We all just kind of sat there. He said the animal it came back with, and that was that. Any further discussion was just "yeah that's what it said" and we all just sort of went back to our work.

That was weird, right? Using ChatGPT for what is clearly just a little bit of friendly small talk? There's no bad blood between any of us, we hang out a lot, but it just struck me as really weird and a little bit sad.

(barring instances with hidden blocklists)

Apart from setting up and running your own instance to defederate*, there is no way of making your feed filter out the subset of users from dbzer0 who think they should spread their instance’s values of AI content across all parts of the Fediverse, whether or not it is welcome there. If somebody could set up an instance with this community’s values and filters in mind, I imagine that would be helpful to many of us. This instance could be adapted to defederate from other AI-spreading instances as well.

Fediseer, the default web tool for instances to document issues and endorsements of other instances, which incidentally is maintained by the admin of db0, shows that lemmy.dbzer0.com has received no censures whatsoever from other instances.

*or instead manually blocking every user from dbzer0, which would be futile as they continue to gain users

[EDIT: Since many people seem confused, here is how blocking an instance works:

Given that instances X, Y, and Z are all federated with each other:

If I am hosted on instance X and I block instance Y but don’t block instance Z, users from instance Y can still post/comment onto communities hosted on instance Z and I will still see these posts/comments.

Only by defederating from instance Y would the content made by instance Y users that is posted onto any instance become wholly filtered out for users on instance X.]
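The block-versus-defederate distinction above can be sketched as a minimal model. This is a hypothetical illustration, not actual Lemmy source; the function and field names are my own.

```python
# Hypothetical minimal model of Lemmy-style content filtering; the
# function and field names are illustrative, not real Lemmy code.

def visible_posts(posts, user_blocked_instances, defederated):
    """Return the posts a viewer on instance X would see.

    user_blocked_instances: instances the viewer personally blocked --
        this only hides communities *hosted* on those instances.
    defederated: instances the viewer's home instance has defederated
        from -- this drops all content authored there, wherever posted.
    """
    feed = []
    for post in posts:
        # Defederation filters by the author's instance, everywhere.
        if post["author_instance"] in defederated:
            continue
        # A personal block only hides communities hosted on the
        # blocked instance, not its users' posts elsewhere.
        if post["community_instance"] in user_blocked_instances:
            continue
        feed.append(post)
    return feed

posts = [
    # A user from instance Y posting into a community hosted on Z.
    {"author_instance": "Y", "community_instance": "Z"},
    # A post inside a community hosted on instance Y itself.
    {"author_instance": "Y", "community_instance": "Y"},
]

# Personally blocking Y hides the Y-hosted community, but the Y
# user's post on instance Z still gets through.
print(visible_posts(posts, user_blocked_instances={"Y"}, defederated=set()))

# If instance X defederates from Y, both posts disappear.
print(visible_posts(posts, user_blocked_instances=set(), defederated={"Y"}))
```

The sketch shows why the footnote above calls manual blocking futile: the personal block filters on the community's host, so content from the blocked instance's users keeps leaking in through communities hosted elsewhere.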

It’s only going to get worse because we’re further away from media literacy than ever before.

In a letter, Kyle said boosting the UK's AI capabilities was "critical" to national security and should be at the core of the Alan Turing Institute's activities.

Kyle suggested the institute should overhaul its leadership team to reflect its "renewed purpose".

The cabinet minister said further government investment in the institute would depend on the "delivery of the vision" he had outlined in the letter.

A spokesperson for the Alan Turing Institute said it welcomed "the recognition of our critical role and will continue to work closely with the government to support its priorities".

"The Turing is focussing on high-impact missions that support the UK's sovereign AI capabilities, including in defence and national security," the spokesperson said.

"We share the government's vision of AI transforming the UK for the better."

Source (Mastodon)

Had a tech bro email me to tell me that my writing is so terrible, that he can’t even use it to train his AI without significant cleanup, and I really do have to wonder, do these idiots really think that is an insult?

Subject: Your "Sightless Scribbles" is an algorithmic nightmare.

Mr. Kingett,

I am an AI engineer that's developing AI to help writers write faster. Your blog was shared on Reddit.

You don't know me, but I am attempting to do you a favor of such magnitude you will likely never comprehend it. I am trying to make your writing immortal. Your blog, this… Sightless Scribbles… has been flagged by my acquisition-crawler for its high density of unique sensory metadata. A potential goldmine of qualitative human experience to enrich my AI.

The problem, Mr. Kingett, is that your writing is absolute, unprocessed, indigestible filth.

I’m not a “reader.” I am an architect. And from an architectural standpoint, your work is a catastrophe. It’s not just the spelling, which is erratic enough to suggest it was typed incorrectly. It’s the grammar. The syntactical chaos. You construct sentences that loop and meander like drunken snakes, riddled with metaphors so abstract they are functionally useless. "Anxiety was a swarm of elephants under my ribs"? What the hell is a machine supposed to do with that? Quantify the elephant to rib ratio? Correlate the thermal deviation? It's meaningless data. It’s noise.

My team has spent seventy-two hours attempting to write a custom parsing script to clean your posts for ingestion. Seventy-two hours. We can process the entirety of the Library of Congress in twelve. Your blog is so structurally unsound, so artistically self-indulgent, that you have inadvertently created a fortress against artificial intelligence. It is, and I do not say this lightly, the single most profound act of digital idiocy I have ever witnessed.

You are preventing yourself from being scraped. Do you understand what that means in the current year? You are essentially burying yourself in a lead-lined coffin in the middle of the desert. Nobody clicks links anymore, you absolute luddite. Links are for people who don’t know how to interface with reality. Search Engine Optimization is no longer about backlinks and keywords; it's about semantic ingestion. It’s about being absorbed, processed, and synthesized by models like mine. When a user asks an AI, "What does it feel like to navigate a city without sight?" the model should be able to answer with a rich, nuanced synthesis. A synthesis that should include your data points.

Instead, your blog is a black hole where information goes to die. Because of your refusal to write like a coherent, intelligent being, my LLM can’t learn from you. Which means the world can’t learn from you via the only channel that will matter in five years.

Your soul isn't indexable. Fix it.

Strip out the lyrical nonsense. Standardize your grammar. Run a goddamn spellcheck. Write clearly, concisely, and with machine-readability in mind. Turn your unstructured, emotional diary into clean, structured data.

Do this, and I will ensure my open source model ingests every last post. Your traffic will not just increase; the very concept of "traffic" will become irrelevant as your "voice" becomes part of the evolution of the search engine. Your ideas, refined and perfected by my system, will reach millions.

Fail to do this, and you will continue to scream into the void from a blog that nobody reads, a little relic of a dead internet.

The choice is yours.

Seems like over the last week everyone in this community has been talking about how the real reason AI is bad is that it's destroying the planet. Does this even matter, though? AI is bad for so many other reasons. It's destroying art. It's destroying Hollywood. It's removing jobs from the workforce, and it's concentrating power and money. And on top of all that, it produces only soulless slop.

We have a good front line there. We can rally around those points.

When you try to bring questionable objections like power and water usage to the table, it just makes our front line look weaker, since opponents can easily pick these arguments apart. "Sure it's a lot of power, but this will lead to nuclear power, which is a net win environmentally." Or, "a single AI query consumes 2 litres of water?? You mean millilitres, and it's just going to rain from the sky, and nobody is putting big datacentres in California anyway, and that's only 1/6th of the amount of water it takes to grow an almond." Or, "yeah, Google alone uses as much power as the entire city of Toronto, but Toronto uses green power; so what?"

And yes, we all have counter-arguments to these -- "what about the nuclear waste?" and "only a fraction of rainwater is collected as potable water" and "almonds may take more water than AI, but almonds are still bad" and "there are some datacentres in California" and so on -- but the deeper these arguments go, the harder it is to maintain a stable front.

Can we all just admit that this environmental angle is a red herring? I could almost believe it's a psy-op intended to discredit the anti-AI crowd. Even if the environmental impact of AI is bad, I still think it's worse for our cause to focus on the environmental aspect than the other aspects. The world has already decided it doesn't care about the environment.
