this post was submitted on 22 Feb 2026
43 points (70.9% liked)

Asklemmy

53268 readers
370 users here now

A loosely moderated place to ask open-ended questions


If your post meets the following criteria, it's welcome here!

  1. Open-ended question
  2. Not offensive: at this point, we do not have the bandwidth to moderate overtly political discussions. Assume best intent and be excellent to each other.
  3. Not about using Lemmy or Lemmy support: for that, see the list of support communities and community-finding tools instead
  4. Not ad nauseam inducing: please make sure it is a question that would be new to most members
  5. An actual topic of discussion


founded 6 years ago
[–] danciestlobster@lemmy.zip 118 points 3 days ago (3 children)

Even for people who generally like the function of AI (who seem to be fairly rare here), the absolutely obscene climate impact, the implications for people's jobs and livelihoods, the privacy breaches, and the general enshittification of the internet are surely reason enough to be against it.

[–] hanrahan@slrpnk.net 35 points 2 days ago (2 children)

The jobs thing I don't understand: it's the distribution of productivity gains that's the issue. Why we keep voting for the same politicians who ensure those gains go to the wealthy is the real mystery.

[–] Juice@midwest.social 2 points 1 day ago

The distribution of productivity gains and the development of new technology are intrinsically and historically connected. New technology is only developed in order to exploit workers: either to individualize labor that was previously socialized, or to directly replace workers with industrial advances; and in many cases both.

Marx said it best: Machines were the weapon employed by the capitalist to quell the revolt of specialized labor.

This was true for the Luddites and it is true today.

[–] danciestlobster@lemmy.zip 17 points 2 days ago (2 children)

Oh, I absolutely agree. But currently, the people in charge of making those decisions have demonstrated moral bankruptcy and will absolutely ensure the productivity gains funnel to the top. Until that changes, AI's impact on jobs will likely be devastating.

And I'm all for changing it. It's just going to be a long and/or violent process.

[–] Juice@midwest.social 3 points 1 day ago* (last edited 6 hours ago)

It isn't moral bankruptcy; it's systemic. The capitalist who produces profit stays in business; the capitalist who does not goes bankrupt. It isn't the morals of individuals: the dehumanization of the poor by the rich is a symptom of a system that prioritizes profits over humanity.

Capitalism is, among other things, a system of forced competition.

I'm glad to hear you are on the right side of it. But in order to be effective we have to name the actual problems. I am above all a humanist, and certainly the capitalist class contains some vile and hateful individuals. That is clearer now than ever before. But we are not made rich or poor by our morality; our morality comes from the conditions that dictate whether we are rich or poor.

Even individualism is structural.

[–] runsmooth@kopitalk.net 6 points 2 days ago

Productivity gains are not across the board, and they remain a subject of scrutiny and debate.

But what AI has really done is redistribute American wealth to a smaller group of people, and therefore a smaller pool for US politicians to focus on satisfying. If the AI bubble pops, market watchers suspect there is actually no other American sector to offset what would otherwise be a recession.

[–] iceberg314@midwest.social 7 points 2 days ago (1 children)

That is why I like small, specialized, locally hosted AI. It runs acceptably fast and quietly on my gaming PC, it's private, and I can feed it knowledge in small doses for specific topics and projects.

[–] ctrl_alt_esc@lemmy.ml 3 points 2 days ago (1 children)

Which model do you use, and what are your specs? I ran a couple on an RTX 5060 with 16 GB, and it's too slow to be usable for larger models, while the smaller ones are mostly useless.

[–] iceberg314@midwest.social 2 points 2 days ago (1 children)

I also have a 5060 (Ti) with 16 GB of VRAM. I tend to use GPT-OSS:20B or Qwen3:14B with a context of ~30k. I have a custom system prompt in Open WebUI for the style of response I like. That takes up about 14 GB of my 16 GB of VRAM.

But yeah, it is slower and not as "smart" as the cloud-based models. Still, I think the inconvenience of the speed and having to fact-check/test code is worth the privacy and environmental trade-offs.
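For anyone wondering why ~30k context on a ~20B model lands around 14 GB: a rough back-of-envelope estimate is quantized weights plus the KV cache for the context window. A minimal sketch; the layer count, KV-head count, and head dimension below are illustrative assumptions, not the actual GPT-OSS:20B architecture, and real runtimes add overhead on top:

```python
def estimate_vram_gb(params_b, weight_bits, n_layers, n_kv_heads,
                     head_dim, context_len, kv_bytes=2):
    """Rough VRAM estimate: quantized weights + fp16 KV cache.

    params_b     -- parameter count in billions
    weight_bits  -- bits per weight after quantization (e.g. 4)
    kv_bytes     -- bytes per KV cache element (2 for fp16)
    """
    weights = params_b * 1e9 * weight_bits / 8                        # bytes
    # KV cache: K and V (factor 2) per layer, per KV head, per position
    kv = 2 * n_layers * n_kv_heads * head_dim * context_len * kv_bytes
    return (weights + kv) / 1e9

# Hypothetical figures for a ~20B model at 4-bit with a 30k context:
print(round(estimate_vram_gb(20, 4, 24, 8, 64, 30_000), 1))  # → 11.5
```

With runtime overhead and activation buffers on top, landing in the ~14 GB range on a 16 GB card is plausible; the KV cache term also shows why doubling the context window costs real memory.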

[–] Hexarei@beehaw.org 3 points 2 days ago

I've had good success on similar hardware (5070 + more RAM) with GLM-4.7-Flash, using llama.cpp's --cpu-moe flag: I can get up to 150k context with it at 20-ish tok/sec. I've found it to be a lot better for agentic use than GPT-OSS as well; it seems to put in a much deeper reasoning effort, so while it spends more tokens, it seems worth it for the end result.

[–] errer@lemmy.world 4 points 2 days ago (1 children)

It has its uses, but it feels like more of a 10-20% productivity boost when used effectively, not the 500%, "let's have OpenClaw replace my whole company!" kind of BS being pushed by AI companies.

[–] black0ut@pawb.social 5 points 2 days ago (1 children)

If it is a productivity boost for you, it comes at the cost of someone else who has to proofread and test everything you produce. LLMs (and genAI in general) are useless.

[–] errer@lemmy.world 2 points 2 days ago

It's no more work than proofreading any other code I write. Sounds like someone just slopped out code with an LLM and didn't do the due diligence of checking it themselves. Using an LLM doesn't mean doing no work; assuming it does is when people get into trouble.