this post was submitted on 01 Mar 2026
73 points (100.0% liked)

United States | News & Politics


Clearly the whole drama with the Pentagon making a big deal of showing that they're trying to force AI companies to build autonomous AI killing machines and spy on citizens is completely manufactured.

Anthropic was always going to comply, and the goal is just to create a marketing campaign portraying them as heroically resisting. All the media has been running the story of a plucky Anthropic defying the US military to defend ethical AI and protect humanity.

[–] venusaur@lemmy.world 1 points 20 hours ago* (last edited 17 hours ago) (6 children)

I switched from ChatGPT to Claude last month and let me tell you, I am not impressed. What are y’all using besides hosting your own model? I don’t have the money for GPUs.

EDIT: this comment is not for the anti-AI circlejerk. I’m only looking for actual recommendations. Calling anything slop is hypocritical given how monotonous and low effort the circlejerking is.

[–] RedstoneValley@sh.itjust.works 5 points 16 hours ago (1 children)

You gotta be really tone-deaf to be asking for AI advice in a thread about the government weaponizing AI capabilities.

[–] venusaur@lemmy.world 1 points 14 hours ago* (last edited 14 hours ago) (1 children)

You’re reaching. The topic is “unethical AI company do bad thing”, so a question about what people are using in light of this is absolutely on point. What’s silly is expecting Lemmy users to actually have anything to offer on the topic besides stroking each other off.

[–] RedstoneValley@sh.itjust.works 1 points 11 hours ago (1 children)

Well you can always vibe code yourself some friends if Lemmy isn't to your liking.

[–] venusaur@lemmy.world 0 points 9 hours ago

It makes sense somebody who uses Lemmy for friends would partake in whatever the popular circlejerk at the time is. People just want to belong.

[–] deacon@lemmy.world 9 points 19 hours ago (1 children)
[–] venusaur@lemmy.world 1 points 17 hours ago (1 children)
[–] deacon@lemmy.world 2 points 17 hours ago

Oh, well I do beg your pardon.

[–] Zedd00@lemmy.dbzer0.com 6 points 17 hours ago (1 children)

I'm running Ollama with qwen2.5:3b in Docker on an RTX 3050 with 8GB of VRAM. I also use DeepSeek.

[–] venusaur@lemmy.world -1 points 14 hours ago* (last edited 14 hours ago) (1 children)

Thanks! Can you explain what you just wrote? Do you own these GPUs? Are you in China?

[–] Zedd00@lemmy.dbzer0.com 3 points 8 hours ago (1 children)

No problem. My desktop has an NVIDIA RTX 3050 card with 8GB of VRAM on it. It's a basic, modern-ish video card. Ollama is an open source framework for running large language models. The model I'm using is Qwen 2.5 with 3 billion (3b) parameters, which is basically the size of the LLM. Docker is a program that lets you run smaller dedicated computers inside your computer.

I am not in China. I'm an American living in Albania. I recommended DeepSeek because it's free, works well, and if a company is going to have the information on what you're chatting about, it might as well be one that isn't in the same country as you.
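Roughly, the setup looks like this. This is a sketch, not a copy of my exact config: the `ollama/ollama` image, port 11434, and the `qwen2.5:3b` tag are the standard published ones, but the `--gpus` flag assumes you have NVIDIA's Container Toolkit installed so Docker can see the card.

```shell
# Start the official Ollama container with GPU passthrough,
# persisting downloaded models in a named volume
docker run -d --gpus=all -v ollama:/root/.ollama -p 11434:11434 \
  --name ollama ollama/ollama

# Pull the ~3B-parameter Qwen 2.5 model and open an interactive chat
docker exec -it ollama ollama run qwen2.5:3b
```

Once the container is up, the same API is also reachable over HTTP on localhost:11434 if you want to script against it instead of chatting.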

[–] venusaur@lemmy.world 1 points 44 minutes ago

Thanks for all the info! I’d love to run a model locally, but I don’t have the money for a decent enough setup right now, though I know it’s getting close. How effective is the 3b model? Does it do the job for you, or does it feel lacking? Are requests pretty slow on that machine?

[–] TheOubliette@lemmy.ml 5 points 18 hours ago (2 children)

What do you even get out of it? Chatbots are too often full of shit to do anything consistently useful except send manager-coddling emails.

[–] gnuthing@lemmygrad.ml 3 points 18 hours ago (1 children)

The only legit use case is to use the semantic ablation of AI to disguise your writing voice to thwart potential lexical fingerprinting

[–] TheOubliette@lemmy.ml 2 points 5 hours ago

Hell yeah.

And by that I mean "what a great point, you are absolutely correct".

[–] venusaur@lemmy.world 0 points 17 hours ago* (last edited 17 hours ago) (2 children)

All kinds of stuff! Coding, automation, research. It’s a tool just like anything else. If you went on Google and just read the headlines of search results you’d be pretty dumb. Arrogance and virtue signaling are just as bad as using GPTs.

[–] TheOubliette@lemmy.ml 1 points 5 hours ago (1 children)

Emphasis on consistently.

Coding: AI slop gives devs undue confidence to introduce glaring bugs, security holes, and unmaintainable structures, as they are not accustomed to doing proper code reviews (which is now their role: reviewing bad junior-dev code). It works great at first, seemingly, and then racks up a massive cost later in the form of fixing its problems. Of course, you can just not fix those problems and live with terrible security and constantly rewriting half the codebase to try to implement a single feature. LLMs can reproduce patterns but can't really think. You will end up spending just as much time, if not more, building something half decent using it, but then likely end up not properly understanding what was built. And God help you if you want to implement using version 4.3 of some library rather than the much more publicly documented version 3.x.

Automation: I dunno, the only IRL examples of automation I have seen have been catastrophes because the person trusted a broken implementation. They were real excited at first and then had a bad time a couple months later. But I'm sure there are examples where "good enough" meshes reasonably well with the capabilities of LLMs.

Research: Oh, I strongly discourage this. These are pattern-regurgitation machines; they will reproduce what is common, and that is not the same as what is true. And that is before accounting for "hallucinations", which are really just more pattern-making: the same as the non-hallucinations, just more obviously wrong rather than subtly wrong. This is a surefire way to unlearn how to do good research and adopt false ideas without even knowing it.

Re: reading and believing headlines: yes, that will also lead you astray. That doesn't make the lie-regurgitation machine a good idea for most topics.

Re: "Arrogance and virtue signaling" I have absolutely no idea what you are referring to.

[–] venusaur@lemmy.world 1 points 47 minutes ago

These are all examples showing that you don’t know how to use GPTs effectively. You’re not even trying. It’s a tool. It’s not a replacement brain.

[–] trilobite@lemmy.ml 1 points 16 hours ago (2 children)

Folks, is the debate about whether we should use it or not, or is it about how to use it? The point really is that every time we use these tools, we are training them to become better. The question is rather how smart we want these tools to become. Stick good AI on good robots and Terminator won't be sci-fi in a few years ... lol. I'm already starting to develop my anti-robot nuke system .... :-)

[–] venusaur@lemmy.world 3 points 14 hours ago* (last edited 14 hours ago)

A lot of people on here think their brains belong in a jar next to Einstein. These models are gonna be trained to be just as smart without you.

[–] deacon@lemmy.world 2 points 15 hours ago

The problem is that, already and forevermore, the consumer and the AI have divergent definitions of “better”.

AI is not being served to us in a neutral space; it is largely developed and controlled by companies that also control important algorithms, and that is no coincidence.

Use AI if you want to, but essentially you’re the product and the cost is only the environment. Non-billionaires who think there is an actual value prop for them are basically concussed, they are so short-sighted.

[–] bluesheep@sh.itjust.works 1 points 14 hours ago (1 children)

calling anything slop is hypocritical with how monotonous and low effort the circlejerking is.

Maybe, but at least I'll be able to say I actually did it myself instead of asking a glorified chatbot to do it for me.

[–] venusaur@lemmy.world 2 points 14 hours ago

Nobody is asking GPTs to write comments for them. What is this paranoia?

[–] bobs_guns@lemmygrad.ml 1 points 18 hours ago

I dunno, I only use models if I'm getting paid to do it.