Off My Chest
I'm in software development and land on both sides of this argument.
Having to review or maintain AI slop is infuriating.
That said, it has replaced traditional web searching for me. A good assistant setup can run multiple web searches for me, distill the useful info cutting through the blog spam and ads, run follow up searches for additional info if needed, and summarize the results in seconds with references if I want to validate its output.
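The loop described above can be sketched roughly like this. Everything here is a stand-in (assumption): `search` and `needs_follow_up` would be a real search API and a model call in an actual assistant setup; they are stubbed so the control flow (search, follow up, summarize with references) is the point.

```python
# Minimal sketch of a search-then-summarize assistant loop.
# `search` is a stand-in for a real web-search backend (assumption).

def search(query):
    # Stand-in: a real call would hit a search API; returns (url, text) pairs.
    fake_index = {
        "python asyncio": [("https://example.com/a", "asyncio runs coroutines on an event loop")],
        "asyncio event loop": [("https://example.com/b", "the event loop schedules tasks")],
    }
    return fake_index.get(query, [])

def needs_follow_up(results):
    # Stand-in heuristic: run another search if we found too little.
    return len(results) < 2

def research(query, follow_up_query, max_rounds=2):
    """Search, optionally follow up, and return a summary with references."""
    results = search(query)
    if needs_follow_up(results) and max_rounds > 1:
        results += search(follow_up_query)
    refs = [url for url, _ in results]
    summary = " ".join(text for _, text in results)
    return {"summary": summary, "references": refs}

report = research("python asyncio", "asyncio event loop")
print(report["summary"])
print(report["references"])
```

In a real setup the summarization step would also be a model call; keeping the references alongside the summary is what makes the output checkable.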
There was a post a couple days ago about it solving a hard math problem with guidance from a mathematician. Sparked a discussion about AI being a powerful tool in the right hands.
That hasn’t been my experience. If it’s trivial then sure, but trivial stuff could easily be looked up.
If it’s an actual problem, then chances are it’s gonna send me down a rabbit hole full of red herrings.
Don’t get me wrong, it sometimes works better than a google search, but it’s not often enough or good enough to justify the cost, and that’s with all the free investor money.
We've solved the problem of enshittification of the web by having robots consume the shit for us!
And create an equal amount, if not more shit. Take that entropy!
I think part of the problem is that web search has enshittified over the years. Back in the day you would enter the relevant keywords and get the info you needed in the top results most of the time; nowadays it's all ads. Now AI gets to the point, but it's less reliable. Almost like Gemini trying to solve a problem that Google itself created.
Well, AI was also quite instrumental in making web search useless. It made it trivial to create infinite spam pages, which search engines have to filter out. Naturally, too much will get filtered out as a result, meaning you can't find a lot of useful results anymore either.
You trust it to "distill the useful info"? How do you know it's not throwing out important pieces just to lead you down the garden path, or, maybe because it "thinks" you wouldn't be interested because of all it "knows" about you? If you need to check everything it does, why not just do it yourself?
I don't use it much as a dev, but sometimes a response to a question, while not correct, will guide me to a solution. The trick is that you have to have the knowledge to know what's right or wrong. I will also use it to troubleshoot code when I have a red squiggly because something is wrong. It can find a missing bracket, a missing semicolon, or a function I've called incorrectly.
If AI just up and disappeared tomorrow, I'd be so happy, but I can't discount some of its benefits. Things I used to find on Stack Overflow can now be done directly within my IDE with context from my project. I never accept an AI response outright; instead I type everything out so that I know it's doing what I want and so it doesn't modify any of my code.
Linters have been finding missing brackets and extra semis since forever.
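To the point above: even a plain syntax check, no LLM involved, catches the missing-bracket class of bug. A toy illustration using Python's built-in `compile()` (real linters like flake8 or eslint go much further, but the mechanism is the same):

```python
# Toy syntax checker: parse the source and report the first syntax error.

def quick_syntax_check(source):
    """Return None if the source parses, else a short error description."""
    try:
        compile(source, "<snippet>", "exec")
        return None
    except SyntaxError as err:
        return f"line {err.lineno}: {err.msg}"

print(quick_syntax_check("total = sum([1, 2, 3)"))   # mismatched bracket -> error string
print(quick_syntax_check("total = sum([1, 2, 3])"))  # parses fine -> None
```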
Truth. This does a bit more than a typical linter, that was just a simple example I riffed off. Sometimes it helps me find logic errors as well. I'll highlight a block of code, ask why it's doing or not doing the thing I expect, and go from there. I've probably only used it a dozen times for basic troubleshooting over the past 6 months when I get stumped on something.
Yeah, so I've not used Claude but have used a number of models from Hugging Face.
I haven't used them extensively.
In my experience, they provide a great starting point for things I haven't interacted with much. So I might spend 10,000 hours with js, but never touched a firefox extension, or maybe a docker container, or nix script. With js an LLM is not much more productive than just coding by myself with non-AI tools. With the other things it can give you a really good leg up that saves a heap of effort in getting started.
What I have noticed though is that it's not very good at fine tuning things. Like your first prompt might do 80% of the job of creating a docker file for you. Refining your prompt might get you another 5% of the way, but the last 15% involves figuring out what it's doing and what the best way to do it might be.
With these sorts of tasks models really seem to suffer from not knowing what packages or conventions have been deprecated. This is really obvious with an immature ecosystem like nix.
IMO, LLMs are not completely without virtue, but knowing when and when not to use them is challenging.
This is where custom setups will start to shine.
https://github.com/upstash/context7 - Pull version specific package documentation.
https://github.com/utensils/mcp-nixos - Similar to above but for nix (including version specific queries) with more sources.
https://github.com/modelcontextprotocol/servers/tree/main/src/sequentialthinking - Break down problems into multiple steps instead of trying to solve it all at once. Helps isolate important information per step so "the bigger picture" of the entire prompt doesn't pollute the results. Sort of simulates reasoning. Instead of finding the best match for all keywords, it breaks the queries down to find the best matches per step and then assembles the final response.
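The step-by-step idea that server implements can be sketched as a loop. `answer_step` here is a stand-in (assumption) for a model call; the key detail is that each step only sees its own instruction plus the distilled facts so far, not the whole original prompt:

```python
# Rough sketch of sequential decomposition: answer steps one at a time,
# carrying forward only distilled results instead of the full prompt.

def answer_step(step, prior_facts):
    # Stand-in for a model call that receives just this step and the
    # facts accumulated so far (assumption).
    return f"result({step})"

def solve_sequentially(problem, steps):
    facts = []  # only distilled facts carry over between steps
    for step in steps:
        facts.append(answer_step(step, facts))
    return " -> ".join(facts)

plan = ["parse requirements", "pick approach", "draft solution"]
print(solve_sequentially("build a Dockerfile", plan))
```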
https://github.com/CaviraOSS/OpenMemory - Long conversations tend to suffer as the working memory (context) fills up so it compresses and details are lost. With this (and many other similar tools) you can have it remember and recall things with or without a human in the loop to validate what's stored. Great for complex planning or recalling of details. I essentially have a loop setup with global instructions to periodically emit reinforced codified instructions to a file (e.g., AGENTS.md) with human review. Combined with sequential thinking it will identify contradictions and prompt me to resolve any ambiguity.
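The external-memory loop described above, in miniature. Both pieces are dumb stand-ins (assumptions): a real setup would have the model do the summarizing and would persist to a file like AGENTS.md, but the shape is the same — working context gets compressed while reviewed notes survive it:

```python
# Sketch of human-reviewed external memory alongside a shrinking context.

def compress(messages, keep_last=2):
    # Stand-in for context compression: drop older messages, on the
    # assumption their substance was already distilled into memory.
    return messages[-keep_last:]

def remember(memory, note, approved=True):
    # Human-in-the-loop gate: only store notes the reviewer approves.
    if approved:
        memory.append(note)
    return memory

memory = []  # plays the role of the AGENTS.md file mentioned above
context = ["msg1", "msg2", "msg3", "msg4"]

memory = remember(memory, "project uses Docker")          # approved
memory = remember(memory, "wrong guess", approved=False)  # rejected
context = compress(context)  # working memory shrinks, notes survive

print(memory, context)
```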
The quality of the output is like going from 80% to damn near 100% as your knowledge base grows from external memory and codified instructions in files. I'm still lazy sometimes and will use something like Kagi assistant for a quick question or web search, but they have a pretty good baseline setup with sequential thinking in their online tooling.
It's really not that different from a traditional web search under the hood. It's basically a giant index and my input navigates the results based on probability of relevance. It's not "thinking" about me or deciding what I should see. When I say a good assistant setup, I mean I don't use Gemini or ChatGPT or any of the prepackaged stuff that tries to build a profile on you. I run my own setup, pick my own models, and control what context they get. If you check my post history I'm heavily privacy conscious, I'm not handing that over to Google or OpenAI.
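The "giant index navigated by probability of relevance" description above can be shown with a toy ranker. Real engines use far richer signals than keyword overlap, but the shape is the same: index in, scored and ranked results out, nothing about the user in the loop:

```python
# Toy relevance ranking: score documents by keyword overlap with the query.

INDEX = {
    "doc1": "rust borrow checker ownership lifetimes",
    "doc2": "python asyncio event loop coroutines",
    "doc3": "rust async tokio event loop",
}

def rank(query, index):
    """Return document ids sorted by fraction of query terms matched."""
    terms = set(query.lower().split())
    scores = {
        doc: len(terms & set(text.split())) / len(terms)
        for doc, text in index.items()
    }
    return sorted(scores, key=scores.get, reverse=True)

print(rank("rust event loop", INDEX))  # doc3 matches all three terms
```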
The summary helps me evaluate whether my input was good and the results are actually relevant to what I'm after, without wading through 20 minutes of SEO garbage to get there. For me it's like getting the quality results you used to get before search got enshittified. It actually surfaces stuff that doesn't even show up on the front page of a traditional search anymore.
Yeah, this is the important bit. I'm switching roles to principal engineer: AI at my company. It cannot be a crutch. We're building multi-agent frameworks that second-guess and push back. A real issue here is that OpenAI models are trained to "make the user happy" and don't push back.
Anthropic models, while not perfect either, when structured in the right way become augmentations and learning tools: primed to admit what they don't know, and primed to push back if it seems like the person doesn't really understand what they're asking. The problems are generally the classic PEBKAC and blindly trusting AI, and that's a human training issue. It's been in the software world for years: people blindly pasting Stack Overflow code into their repos because they don't grasp the problem and want the quick fix.
Unfortunately, as we've seen with openclaw, it's a lot of people with an aggressive end goal and no understanding of the tools they are working with, or of the importance of the human in the loop. Like I said, it's not perfect, but the problems are also just humans getting positive feedback from models designed to give it, and now those models are going to be used for autonomous weapons and surveillance.