this post was submitted on 26 Mar 2026
57 points (80.0% liked)

Steam Hardware


A place to discuss and support all Steam Hardware, including Steam Deck, Steam Machine, Steam Frame, and SteamOS in general.

As Lemmy doesn't have flairs yet, you can use these prefixes to indicate what type of post you have made, eg:
[Flair] My post title

The following is a list of suggested flairs:
[Deck] - Steam Deck related.
[Machine] - Steam Machine related.
[Frame] - Steam Frame related.
[Discussion] - General discussion.
[Help] - A request for help or support.
[News] - News about the Deck.
[PSA] - Sharing important information.
[Game] - News / info about a game on the Deck.
[Update] - An update to a previous post.
[Meta] - Discussion about this community.

If your post is only relevant to one hardware device (Deck/Machine/Frame/etc) please specify which one as part of the title or by using a device flair.

These are not enforced, but they are encouraged.


So when the news circulated recently that the Lutris developer was using Claude to help write the code (and the angry posts/articles appeared) I figured I'd reach out to Mathieu to hear his side of things.

I chatted with him a little, asking for his side of the story. He goes into some depth on how he uses it as part of his workflow, transparency in open-source projects in general, licensing and ownership of code that AI writes, safety, and so on. Plenty of answers from Lutris, if you're curious about the topic. As ever, you can find the link here:

https://gardinerbryant.com/mathieu-comandon-explains-his-use-of-ai-in-lutris-development/

all 11 comments
[–] Fizz@lemmy.nz 2 points 17 hours ago

AI is becoming a very good tool in the software industry. I think people are going to have to really consider their AI stance and home in on what they actually find to be the unethical parts, because it will be so widespread that you need to fight against those parts instead of it as a whole.

For me, it's the copyright asymmetry and the hostile integration with existing life. I don't want to live in a world where OpenAI can train a model off all works but I can't do the same. I don't want OpenAI to scrape every website relentlessly while I get blocked from scraping any large website.

For power usage, I don't care. That's a local government issue. If they choose to let an AI data center drive up costs and water usage, then they suck and I'll hate them for approving it. There are plenty of places to put a data center where power isn't an issue.

For art, it's awful: one, because it's trained non-consensually off artists' works, and two, because it has no intention behind its creation. I've come to believe that the reason we appreciate art is the human intention that goes into its creation. That's why there is objectively bad art that we resonate with more than a perfect still life: the artist has a story alongside the piece that gives it unique value AI could never truly replicate.

This is why I can accept AI usage in software development and still hate AI. If it's built off an open-source model, it's fine, but I don't want to support development using these closed-source models and end up in a world where American megacorps control the tools to create software.

[–] CoyoteFacts@piefed.ca 44 points 2 days ago (1 child)

I already read a lot of the Lutris devs' honest feelings about AI, and their willingness to obfuscate what they're doing with it, in the initial issues/discussions. No offense, but I'm not all that interested in watching them attempt to whitewash and downplay what happened after they've had time to figure out how to spin it.

[–] Alaknar@sopuli.xyz 1 points 1 hour ago

The only reason they decided to obfuscate the use of Claude was due to the community starting wars and sending them death threats over it. Nobody is downplaying anything, they literally stated that they did that because managing shit-tier Issues that were all basically "why use AI" was becoming too damaging to the project.

[–] Zedstrian@sopuli.xyz 53 points 2 days ago* (last edited 2 days ago)

Also, there is enough open source code available that I would hope Anthropic doesn’t feel the need to train their models on potentially litigious code base.

The problem with this statement is twofold. Firstly, it is unrealistic to assume that leading AI companies are staying entirely above board in terms of code licensing. With how widespread AI is, this makes it all the harder for developers to enforce their licenses when many developers inevitably violate their terms without knowing.

Secondly, even if that code is open source, licensing terms typically require attribution that an AI is unlikely to provide for every segment of code cobbled together. When the developers who had their code taken and reused have no way of knowing who reused it, it is disingenuous to work under a 'take first, ask later (if found out)' mentality.

[–] rozodru@piefed.world 40 points 2 days ago

After reading the interview (great job btw), I can see he's utilizing Claude Code in the correct way. As someone whose contracting day job is to code-review and report on the various fuck-ups companies make utilizing AI, his statement that it's used more as a sort of rubber duck or pair-programming partner is, honestly, like it or not, the correct way to utilize these tools.

Now, him stating that he hopes Anthropic won't feed on what he's produced... I wouldn't bet on it, bud. Your code base has already been utilized.

[–] memphis@sopuli.xyz 30 points 2 days ago

Cancelled my patreon membership over this