this post was submitted on 08 Mar 2026
694 points (95.5% liked)

Off My Chest

I’ve been working with so many students who turn to AI as a first resort for everything. The second a problem stumps them, it’s AI. Their first source for research is AI.

It’s not even about the tech; there’s just something about not wanting to learn that deeply upsets me. It’s not really something I can understand. There is no reason to avoid getting better at writing.

[–] kautau@lemmy.world 3 points 1 week ago

Yeah, this is the important bit. I’m switching roles to principal engineer, AI, at my company. It cannot be a crutch. We’re building multi-agent frameworks that second-guess and push back. A real issue here is that OpenAI models are trained to “make the user happy” and don’t push back.

Anthropic models, while not perfect either, can become augmentations and learning tools when structured the right way: primed to admit what they don’t know, and primed to push back if it seems like the person doesn’t understand what they’re really asking. The problems are generally the classic PEBKAC and blindly trusting AI, and that’s a human training thing. It’s been in the software world for years: people blindly pasting StackOverflow code into their repos because they don’t grasp the problem and just want the quick fix.

Unfortunately, as we’ve seen with with openclaw, it’s a lot of people with an aggressive end goal and no understanding about the tools they are working with, the importance of the human in the loop. Like I said, it’s not perfect but the problems are also just humans getting positive feedback from models designed to do that and now those models are going to be used for autonomous weapons and surveillance.