this post was submitted on 12 Mar 2026
743 points (94.1% liked)


A user asked on the official Lutris GitHub two weeks ago, "is lutris slop now", noting an increasing amount of "LLM generated commits". The Lutris creator replied:

It's only slop if you don't know what you're doing and/or are using low quality tools. But I have over 30 years of programming experience and use the best tool currently available. It was tremendously helpful in helping me catch up with everything I wasn't able to do last year because of health issues / depression.

There are massive issues with AI tech, but those are caused by our current capitalist culture, not the tools themselves. In many ways, it couldn't have been implemented in a worse way. But it was not AI that bought all the RAM, it was OpenAI. It was not AI that stole copyrighted content, it was Facebook. It wasn't AI that laid off thousands of employees, it's deluded executives who don't understand that this tool is an augmentation, not a replacement for humans.

I'm not a big fan of having to pay a monthly sub to Anthropic, I don't like depending on cloud services. But a few months ago (and I was pretty much at my lowest back then, barely able to do anything), I realized that this stuff was starting to do a competent job and was very valuable. And at least I'm not paying Google, Facebook, OpenAI or some company that cooperates with the US army.

Anyway, I was suspecting that this "issue" might come up so I've removed the Claude co-authorship from the commits a few days ago. So good luck figuring out what's generated and what is not. Whether or not I use Claude is not going to change society, this requires changes at a deeper level, and we all know that nothing is going to improve with the current US administration.

[–] Crozekiel@lemmy.zip 30 points 21 hours ago (4 children)

AI is actively destroying the environment and harming people. Data centers have been caught using methane burner generators (which the EPA bans for this use), which significantly increase health risks for residents who live nearby (cancer and asthma rates have already risen significantly). Then there are the ridiculous effects it is having on computer hardware markets, and on energy and water infrastructure and prices.

Then, after all of that, the AIs themselves are hallucinating somewhere in the neighborhood of 25% of the time, and multiple studies have found that people who use them regularly are losing their own skills.

I can't figure out why people would choose to use them, or why programming is the one place where people who might otherwise be considered experts in their field are excited to use them. Writers, artists, lawyers, doctors: in basically every other professional field that AI companies have pitched these tools for, they get trashed by the field's experts for producing garbage. I have a hard time believing the only thing AI can do well is write code when it sucks so badly at everything else. Does development suck this much? Do developers have so little idea what they're doing that this seems like a good idea?

[–] Netrunner@programming.dev 1 points 7 hours ago* (last edited 7 hours ago)

You can't fathom why someone would use AI and maybe hurt the environment a little, while 18 single-occupant F-150s drive past, you sip through a paper straw, and a billionaire's private jet flies overhead to the neighboring city.

Okay. Sounds like jealousy that you're dressing up as social justice.

Ever been a developer? If you have, it's very easy to see why AI giving you a second wind on a project you'd given up on is such a massive boon.

I've been a developer my entire life and AI is amazing. Sorry you hate it. Does it make mistakes? Yes. Can I fix them? Yes. Can I build skyscrapers now? Yes.

[–] antihumanitarian@lemmy.world 8 points 15 hours ago (2 children)

If you're honestly asking: LLMs are much better at coding than at any other skill right now. On one hand, there's a ton of high-quality open source training data that was appropriated; on the other, code is structured language, so it's very well suited to what these models "are". Plus, code is mechanically verifiable. If you have a bunch of tests, or have the model write tests, it can check its work as it goes.
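The "mechanically verifiable" point is worth making concrete: unlike prose, generated code can be checked automatically, pass or fail. A minimal sketch (the `slugify` function and its checks are purely illustrative, not from the thread) of the kind of deterministic check a model can iterate against:

```python
# Toy illustration: a generated function plus mechanical checks.
# A failing assertion produces a concrete error message that an
# LLM (or a human) can use to revise the code and retry.

def slugify(title: str) -> str:
    """Convert a title to a lowercase, hyphen-separated slug."""
    # Replace every non-alphanumeric, non-space character with a space,
    # then split on whitespace and rejoin with hyphens.
    cleaned = "".join(c if c.isalnum() or c.isspace() else " " for c in title)
    return "-".join(word.lower() for word in cleaned.split())

# Mechanical verification: these either pass or fail deterministically.
assert slugify("Hello, World!") == "hello-world"
assert slugify("  Lutris:  Open Gaming  ") == "lutris-open-gaming"
print("all checks passed")
```

The key property is that the feedback loop needs no human judgment: the same assertions that verify a human's code verify the model's, which is exactly what prose, art, and legal writing lack.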

Practically, the new high-end models, GPT 5.4 or Claude Opus 4.6, can write better code faster than most people can type. It's not like two years ago, when the code mostly wouldn't build; now they can write hundreds or thousands of lines that work on the first try. I'm no blind supporter of AI, and it's emotionally complicated watching this after years of honing the craft, but for most tasks it's simple reality that you can do more with AI than without it, whether that means higher quality, higher volume, or integrating knowledge you don't have.

Professionally I don't feel like I have a choice, if I want to stay employed in the field at least.

[–] veniasilente@lemmy.dbzer0.com 1 points 4 hours ago

Professionally I don’t feel like I have a choice, if I want to stay employed in the field at least.

On the contrary!

I've seen quite a number of "AI cleanup specialist" job postings so far, and even a few consulting positions for training juniors away from using AI in development.

(No, I have not seen any position open on training management away from using AI...)

[–] Arkthos@pawb.social 1 points 7 hours ago

Now, software architecture on the other hand? Oh boy, Claude Opus and the rest suck ass at that.

My own experience has been that it works pretty well on relatively isolated, discrete chunks of code, and it's really nice for reviewing as well. But just unleash it on a whole code base and you'll end up with a massive mess.

[–] tonytins@pawb.social 7 points 20 hours ago* (last edited 20 hours ago) (1 children)

Thank you. Another issue, one that sort of overlaps with the hallucination problem, is that the model is basically referring to a snapshot in time. Based on my past attempts, no amount of searching the web will improve results, because it has no way to account for future changes the way actual programmers can. Meaning it isn't very flexible and can't adapt to new, breaking, or quality-of-life changes.

Programming is a hobby for me and my preferred language is C#. I work on the bleeding edge for fun and so I can benefit from .NET's recent quality of life changes. Naturally, I'm Microsoft's target audience. And yet for the reasons stated above, these chatbots can't work for me in the long run.

[–] G_M0N3Y_2503@lemmy.zip 2 points 13 hours ago* (last edited 13 hours ago)

For the reasons you're stating, the snapshot is actually a boon. More often than I'd like to admit, I've had to write something that has been done many times before, just with slight structural differences, and of course there isn't a library flexible enough, nor enough time to write that library. Instead of mindlessly grinding out busywork that should already exist, you can slop it out quickly, then spend the time you would have spent writing it on refining it into something maintainable, with the new changes that are actually interesting and useful improvements. I see it as raising the bar of the starting point.

That said, I just license my own stuff as MIT because I want to raise the bar for everyone, though I know it's likely the AI companies haven't respected the wishes of those who don't do/want that.

[–] thedeadwalking4242@lemmy.world 2 points 19 hours ago (1 children)

I'm honestly sure the failure rate is higher than 25%; those tests they boast about are curated.

[–] FauxLiving@lemmy.world 0 points 5 hours ago (1 children)

A rational person would ask themselves why, when confronted with evidence against their beliefs, they conclude the evidence is wrong rather than their beliefs.

It could indicate that the person's beliefs are not built on rational grounds.

[–] thedeadwalking4242@lemmy.world 1 points 4 hours ago

Because in my personal experience using them, 25% doesn't seem quite right.

Besides, these companies have a monetary incentive to make sure LLMs score high on these tests. One of the most widely used tests (SWE-bench Verified) is itself a curated selection of problems. In real-world usage the failure rate is going to be much higher.

A rational person trusts but verifies, and at least for me the verification doesn't hold up to even a tiny bit of scrutiny, so having doubts is a perfectly healthy thing to do.

Just because someone disagrees with a data source does not make them irrational. There are some extremely well-verified truths that it would be irrational to dismiss, but not all data sources and studies have had that amount of rigor applied to them. Data can tell a story, but it doesn't always tell the whole truth. People manipulate data to their own benefit.

People confuse the scientific method and academic research with "this one academic source says it, so it must be true," when really you need more than that.