this post was submitted on 27 Feb 2026
503 points (98.6% liked)

Technology

Hacker News.

The Department of War has stated they will only contract with AI companies who accede to “any lawful use” and remove safeguards in the cases mentioned above. They have threatened to remove us from their systems if we maintain these safeguards; they have also threatened to designate us a “supply chain risk”—a label reserved for US adversaries, never before applied to an American company—and to invoke the Defense Production Act to force the safeguards’ removal. These latter two threats are inherently contradictory: one labels us a security risk; the other labels Claude as essential to national security.

Regardless, these threats do not change our position: we cannot in good conscience accede to their request.

It is the Department’s prerogative to select contractors most aligned with their vision. But given the substantial value that Anthropic’s technology provides to our armed forces, we hope they reconsider. Our strong preference is to continue to serve the Department and our warfighters—with our two requested safeguards in place. Should the Department choose to offboard Anthropic, we will work to enable a smooth transition to another provider, avoiding any disruption to ongoing military planning, operations, or other critical missions. Our models will be available on the expansive terms we have proposed for as long as required.

[–] XLE@piefed.social 3 points 1 day ago (1 children)

Anthropic's "ethical" concerns were performative. They only fearmonger about fictional things that will make their product sound powerful (read: worth throwing money into).

They try to scare people with fictional stories of AGI, a thing that isn't happening, while ignoring widespread CSAM and sexual harassment generation, a thing that is happening.

[–] Iconoclast@feddit.uk -1 points 1 day ago (1 children)

Are we not moving toward AGI? Because from where I stand, I only see three scenarios: either AI research is going backwards, no progress is being made whatsoever, or we're continuing to improve our systems incrementally - inevitably moving toward AGI. Unless, of course, you think we're never going to reach it, which I view as quite an insane claim in itself.

If we're not moving toward it, then I'd love to hear your explanation for why we're moving backwards or not making any progress at all.

Whether we're 5 or 500 years away from AGI is completely irrelevant to the people who worry about it. It's not the speed of the progress - it's the trajectory of it.

[–] XLE@piefed.social 4 points 1 day ago* (last edited 1 day ago) (1 children)

We are not "moving towards AGI" in any way with any modern technology, in the same way that we are not "moving towards FTL travel" because a car company added cylinders to an engine.

The real "AI" dangers are people like Eliezer Yudkowsky, a man who scares vulnerable people, sexually abuses them, and has spawned at least one murderous cult.


Dario is one of the biggest AGI bullshit peddlers.

In October 2023, Amodei joined The Logan Bartlett show, saying that he “didn’t like the term AGI” because, and I shit you not, “...because we’re closer to the kinds of things that AGI is pointing at,” making it “no longer a useful term.” He said that there was a “future point” where a model could “build dyson spheres around the sun and calculate the meaning of life,” before rambling incoherently and suggesting that these things were both very close and far away at the same time. He also predicted that “no sooner than 2025, maybe 2026” that AI would “really invent new science.”

[–] Iconoclast@feddit.uk 1 points 23 hours ago* (last edited 23 hours ago) (1 children)

We are not “moving towards AGI” in any way with any modern technology

So that means you believe AI research is completely frozen still or moving backwards. Please explain.

Comparisons to faster-than-light travel are completely disingenuous and bad faith - that would break the laws of physics and you know it.

You can also keep your red herrings to yourself. I'm discussing ideas here - not people.

[–] XLE@piefed.social 1 points 23 hours ago (1 children)

According to Dario Amodei, this is the year we are getting New Science. And apparently he believes in Dyson Spheres too. How do we feel about that?

Anthropic is not special. They're doing the LLM thing like everybody else. The Godfather of AI, Yann LeCun himself, said LLMs were a dead end on this front. But even if he hadn't chimed in, it's your job to show how they'll lead to AGI, not my job to show you they won't.

[–] Iconoclast@feddit.uk 1 points 22 hours ago (1 children)

If you're just gonna keep ignoring every single point I make and keep rambling about unrelated shit, then there's nothing left to discuss here. If you actually had an argument, you would've made it by now.

[–] XLE@piefed.social 1 points 22 hours ago* (last edited 22 hours ago) (1 children)

Your claim: AI seems to be getting better, therefore AGI will happen

My rebuttal: they aren't linked

Other important things you must reconcile with: the sexual abuse, the death toll, etc from the True Believers

Does that clear matters up?

[–] Iconoclast@feddit.uk 1 points 22 hours ago (1 children)

My argument is that we'll incrementally keep improving our technology like we have done throughout human history. Assuming that general intelligence is not substrate-dependent - meaning that what our brains are doing can be replicated in silicon - and that we don't destroy ourselves before we get there, it's just a matter of time before we create a system that's as intelligent as we are: AGI.

I already said that the timescale doesn't matter here. It could take a hundred years or two thousand - doesn't matter. We're still moving toward it. It does not matter how slow you move. As long as you keep moving, you'll eventually reach your destination.

So, how I see it is that if we never end up creating AGI ever, it's either because we destroyed ourselves before we got there or there's something borderline supernatural about the human brain that makes it impossible to copy in silicon.

[–] XLE@piefed.social 1 points 22 hours ago (1 children)

So do you think Dyson Spheres are inevitable too? Because things advance?

You're also shifting your goalposts tremendously. First you were implying that today's AI would bring about AGI and now you're saying that something, somewhere, might happen in some sci-fi future.

I'm not sure you're actually worried about present-day destruction, though, because you didn't seem to like it when I brought up what the AGI true believers are doing to the vulnerable people who flock to them. Dario is on board with Trump's fossil-fuel, anti-green buildout too.

If you believe so much in AI, but allegedly believe in the things you've talked about, perhaps it's time to start criticizing the people you hold so dear.

[–] Iconoclast@feddit.uk 1 points 21 hours ago (1 children)

So do you think Dyson Spheres are inevitable too?

I'm less certain about that than I am about AGI - there may be other ways to produce that same amount of energy with less effort - but generally speaking, yeah, it seems highly probable to me.

First you were implying that today’s AI would bring about AGI

I've never made such a claim. I've been saying the exact same thing since around 2016 or so - long before LLMs were even a thing. It's in no way obvious to me that LLMs are the path to AGI. They could be, but they don't have to be. Either way, it doesn't change my core argument.

people you hold so dear

C'mon now.

[–] XLE@piefed.social 1 points 21 hours ago (1 children)

I've been saying the exact same thing since around 2016 or so - long before LLMs were even a thing

You really aren't beating the Yudkowsky/LessWrong allegations with this one, you know.

If you really think LLMs might mean nothing at all when it comes to actually achieving AGI, then maybe you should speak out against the environmental destruction they're doing today with full endorsement from Anthropic and all the other corporate AI perverts.

[–] Iconoclast@feddit.uk 1 points 21 hours ago (1 children)

That doesn't have anything to do with my claim about the inevitability of AGI.

[–] XLE@piefed.social 1 points 21 hours ago (1 children)

It has everything to do with your claim about its inevitability, because we're witnessing real life in the present day, not some fantasy prediction of the future. If people like Dario and Eli get their way, there will be no future in which to get AGI.

... I am growing increasingly concerned you really are a Yudkowskist rationalist

[–] Iconoclast@feddit.uk 1 points 21 hours ago (1 children)

You don't seem very interested in sticking to the topic, do you? This conversation has been all over the place, complete with ad-hominems, concern-trolling, red herrings, strawmen, gish galloping - as if you're trying to break some kind of record.

It's pretty clear you've built up a cartoon-villain version of me in your head and now you're fighting that imagined version like it's real. I made a pretty simple claim about AGI, you've piled an entire story on top of it, and now you're demanding I defend views I don't even hold.

I've been trying to have a good-faith conversation here, but if this is what you're going to keep doing, then I'll just move on.

[–] XLE@piefed.social 1 points 20 hours ago

The topic of...LLMs? Because that's what this thread is. If you come in here and you start talking about something that's entirely unrelated to LLMs (what was that about red herrings?) I'll point it out.

And if it's based on Yudkowskism, all the more reason to call it out. You're aware of the sexual abuse and death Eliezer Yudkowsky is either directly or indirectly responsible for, right?