this post was submitted on 31 Mar 2026

Change My View


A place to learn something new, or strengthen your own position. Progress is impossible without a willingness to change.

#Rules

  1. Remain civil and friendly. Personal attacks, excessive snark, or similar will not be tolerated. Downvoting based on disagreement (rather than quality of discourse) may also be bannable.

  2. All posts should contain a view as the title, and should have an explanation of the reasoning in the body.

  3. All top level comments should address the original viewpoint, either challenging it, or seeking clarification.


Generative AI has a number of uses that are already widespread, and I don't see them going anywhere: things like clip art, stock art, and first-contact customer support. AI automates these jobs, making them far, far cheaper than hiring a human to do the same work. One higher-end PC can do a job a human would otherwise have to be paid for. The economic incentive is already there.

Furthermore, generative AI is a genie that's been let out of the bottle, and I don't see it ever being put back in. These models are just files, which have already been replicated and become widespread. Sure, progress may slow as the "we're making a general-purpose AI" bubble bursts, but if these tools work, they'll continue to be developed, and people will keep getting better at manipulating and augmenting them. I don't see any reason generative AI would stop existing from this point forward.

Generative AI isn't going anywhere, and will replace a number of jobs.

Change my view.

[–] mindbleach@sh.itjust.works 2 points 16 hours ago (1 children)

It does the thing it's for.

People have to get used to the idea of computers doing things they previously couldn't. Neural networks are a whole new kind of software, driven by examples instead of comprehension. They're already powerful enough for gigabyte-sized models to write code, animate cartoons, and photoshop images. Results vary wildly, but they're improving all the time, and it doesn't have to be genuinely intelligent to drag whatever you provide closer to whatever you describe.

Local models will be expected software. An OS lacking an offline do-as-I-say chatbot will be like one shipping without a word processor and a music program.

Which is why - in line with rule three - it's gonna become a professional tool. Spreadsheets drastically changed accounting, to the point small firms could suddenly be one guy and an Apple II, doing the week's work by lunch on Monday. What happened next was not fewer accountants. What happened next was not less accounting. To whatever extent an LLM's ability to code is like a junior coder, you can hire a fully sentient human being to manage a bunch of junior coders. Or maybe it'll be a human instructing a single model, but getting an hour of fiddly changes within seconds, then spending ten minutes ensuring it did what you friggin' asked. For video game art, we can simply return-with-a-v to team sizes that fit in an elevator, and release games once a year for sensible quantities of money. Will a workflow with qwen-image-3 be exactly what an artist intended? No. But neither is a square meter of concrete that required three weeks of notes and revisions, inside a team with a hundred fucking people, for a live-service game that became a tax write-off after two months.

The people most worried about this technology are animators. Animation suuucks. Animation takes for-fucking-ever. Which is why everyone who fantasized about nuking Adobe HQ, after the Animate erasure fakeout, is liable to come around and quietly start using a program that simply fills in between whatever they draw by hand. Did it get things wrong? Alright, draw more stuff in-between. Once people can manually animate a guy, and then draw a medal on his jacket and have it Just Work for every other frame of the shot, you'll hear a lot less hue and cry about the stupid models that make a static picture dance real sexy.

[–] shads@lemy.lol 1 points 14 hours ago (1 children)

So I'm curious about a few points you mentioned. Neural networks are a whole new thing? You know the prior art for that goes back almost 90 years, right? Neural networks at scale, yeah, maybe.

Do you believe that the publishers will realise the limitations of generative AI before or after they have completely decimated the gaming industry? Because as much as we like to talk about what devs are doing with generative AI, they aren't the people with the money.

Across multiple industries we are seeing people forced to engage with tools that actively slow them down; there has been study after study showing that use of generative AI has negative impacts on memory, cognitive ability, and productivity. Who is going to wrangle these "virtual junior devs" when everybody drops out of a fraught industry to breed alpacas or hand-make timber furniture?

Very soon I suspect we will see the end of free access to current-gen models: there will be a generation or two of diminished free models, and then the screws will start to turn. Simply put, the companies doing the training will want to monetise usage to offset the training costs. They can't afford to lose money hand over fist forever, and every local model that doesn't phone home to rack up charges will be considered a "lost sale".

BTW, I don't consider myself amazingly computer savvy, but my OS doesn't "come with" a word processor or music player; I need neither on a day-to-day basis. I can and have installed them as needed. I think you will find that a lot of very capable people in tech view generative AI the way they view smart home devices. What's that old joke? "The only smart device in my house is a printer, and I keep a loaded gun next to it in case it starts acting funny."

Oh, and let's be honest: Amazon would rather you were renting a PC in the cloud and accessing it via what is essentially a modern equivalent of a dumb terminal. Why would they let you run a "local" model when they could be chucking the generative AI fee onto your monthly rental?

You seem to be seeing end-state solutions here; I'd contend that we are a ways off from any of this stuff working as it's currently being sold to us. The problem I foresee is that the bubble bursting isn't going to be a simple "Oops, looks like we unbalanced the economy... silly us." It seems to me it will be more like a whole bunch of rich people trying to explain to the general populace why they deserve to keep all the money after manipulating the worldwide economy to feather their nests. Especially if the collapse happens fast enough to crater stock markets across the Western world.

If the US economy wasn't grift-maxxing these days, I am reasonably sure assessed risk would have escalated past potential gains long before now, and fiscal watchdogs would be asking some very difficult questions of Nvidia, Oracle, Microsoft, etc. Unfortunately, generative AI hit its stride just as we were recovering from Covid-era economic instability and the market-manipulation presidency was starting.

[–] mindbleach@sh.itjust.works 2 points 5 hours ago

Deep neural networks didn't work until quite recently. The theory was there. Single-layer models existed, but were limited to toy applications like single-character OCR. Now there's a whole ecosystem to go from 'what's Python?' to a working prototype within the week. The durable product of this trillion-dollar bubble will be a mountain of whitepapers for how to efficiently design and train models of bewildering complexity.

If the big boys stop releasing local versions, they will cease to matter. They've already created the tools for interested randos to continue development after the bubble bursts. If they'd like to become irrelevant while they still have funding, that's their prerogative. Qwen Image 2 might never come out, at this rate, but we already know it's a fraction of the size of prior models and outperforms all of them... so there's no point pursuing big-iron mainframe models once the community has to roll its own.

Pessimistic studies are mostly overblown. 'Doctors using detector get worse at eyeballing things,' no mention of accuracy whilst using that detector. 'Expert programmers slowed down by virtual amateur,' yeah I'll bet, like with a real amateur. 'Artist unimpressed by automated version of thing he's good at,' okay seriously - why do we keep asking professionals about these tools? They already learned things the hard way, at the highest level humans can reach. If they were getting shown-up, there'd be nothing to discuss.

I'm seeing videos where 'and then I vibe-coded the mechanical integration' is mumbled like a punchline. If you truly understand what you want then existing models can probably just do that. It turns doing things the normal way into a fallback. Like whining that you have to do the dishes by hand, when the dishwasher breaks.

The gaming industry has been a hellscape for decades. (Same with buying gizmos that spy on you.) This hype cycle obviously has not helped, but shit's been fucked since before that. If civilization on the whole is turbo-fucked then it's not primarily attributable to spicy autocomplete.