this post was submitted on 14 Mar 2026
154 points (95.8% liked)

Technology

top 13 comments
[–] chunkystyles@sopuli.xyz 7 points 3 hours ago

I hate how these kinds of things are always framed. The implied message is always that "AI" can autonomously decide to go off the rails, similar to the Moltbot craze. The agents have to be told to do the things they do. They don't have free will.

Using a combination of network science and large language models, the same underlying technology that powers systems like ChatGPT, the researchers created and monitored synthetic bot agent personas, their posts, and their interactions with one another, simulating what a coordinated AI-powered social media network might look like.

So yeah, LLMs can be used nefariously to great effect. They're essentially more sophisticated bots.
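The dynamic the quoted study describes, a small set of coordinated agents amplifying one another until they look like a grassroots movement, can be sketched as a toy simulation. Everything here is invented for illustration (agent counts, posting probabilities, and the amplification rule are assumptions, and no real LLM is involved); it only shows why mutual reposting inflates apparent volume.

```python
import random

random.seed(0)

# Toy model: a few coordinated "bot" agents versus many independent
# organic users. All parameters below are invented for illustration.
N_ORGANIC, N_BOTS, ROUNDS = 50, 5, 10

organic_posts = 0    # genuine mentions of the narrative
amplifications = 0   # bot posts plus bot-to-bot reposts

for _ in range(ROUNDS):
    # Organic users mention the narrative only occasionally (5% chance).
    for _ in range(N_ORGANIC):
        if random.random() < 0.05:
            organic_posts += 1
    # Coordinated bots always post, and each reposts every other bot,
    # yielding N_BOTS posts plus N_BOTS * (N_BOTS - 1) reposts per round.
    amplifications += N_BOTS + N_BOTS * (N_BOTS - 1)

print(f"organic posts: {organic_posts}")
print(f"bot-driven impressions: {amplifications}")
```

Even with only five bots against fifty real users, the quadratic repost term dominates the organic signal, which is the "appearance of a massive grassroots movement" the article warns about.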

[–] J3N5T4R@thelemmy.club 1 points 1 hour ago

Automated psyop brain rot robot.

[–] aceshigh@lemmy.world 2 points 2 hours ago

Is AI stealing influencer jobs?

[–] ChunkMcHorkle@lemmy.world 5 points 6 hours ago* (last edited 6 hours ago)

Imagine it is two weeks before a major election in a closely contested state. A controversial ballot measure is on the line. Suddenly, a wave of posts floods X, Reddit, and Facebook, all pushing the same narrative, all amplifying each other, all generating the appearance of a massive grassroots movement. Except none of it is real. ...Trust in the information people encounter on X, Facebook, and Reddit, already eroded, could fall even farther.

It's much more difficult to be propagandized by any means, including autonomous AI, when you're not freely offering up your own time and devices daily to have it fed to you. That feed is individualized just for you by means of your own data, which you're also donating to the very cause of propagandizing you.

I get why people stay; there are lots of good reasons. But at a certain point the bad outweighs the good, and there's no time like the present to make a change.

So if you're reading this and you are still interacting with these centralized corporate-owned propaganda sites regularly, maybe it's time to rethink that strategy.

[–] subignition@fedia.io 22 points 15 hours ago (2 children)

A somewhat more hopeful take is that this strategy could be weaponized against misinformation too.

[–] Tiresia@slrpnk.net 16 points 12 hours ago (1 children)

The truth has the advantage of objective evidence and the disadvantage of needing to be more complicated to incorporate objective evidence.

When it comes to news from out of town, there is no objective evidence, only appeal to authority. The few people willing to personally travel somewhere to testify that it is real can be written off as paid actors (or as AI-generated if you aren't seeing their testimony live).

So in almost all scenarios with this technology, the truth would have the disadvantage but not the advantage. An arms race between pro-truth and anti-truth AI would be won by the anti-truth AI, because it can tell the more convenient lie.

My hopeful take is that it will make proper citation an essential life skill, with everyone who believes stories without citation getting scammed until they know better and everyone who doesn't cite sources being disbelieved. And that, as such, people will organically build up transparent citation networks that they rely on for information, meaning they can more effectively filter out advertisement, propaganda, memes, and lies.

[–] IronBird@lemmy.world 3 points 9 hours ago

if it's like everything else LLM-generated, the quality of propaganda will drop noticeably, maybe to the point where the normies catch on.

arguably legacy media has been falling there for a while, as evidenced by their cratering revenue streams. maybe this will just accelerate things even further

[–] oozy7@piefed.social 5 points 13 hours ago* (last edited 13 hours ago)

It's already happening. Aren't spam bots somewhat like AI agents?

[–] UnderpantsWeevil@lemmy.world 13 points 16 hours ago (2 children)

Excited to see smear campaigns that become increasingly surreal and disturbing

[–] prex@aussie.zone 1 points 4 hours ago

While also making genuine bad press easier to dismiss.
eg: "fake news" says the worlds ugliest person.

[–] Steve@startrek.website 12 points 16 hours ago

At least 30% of the population will never notice

[–] bibbasa@piefed.social 7 points 16 hours ago (1 children)

well shit, first writing hit pieces, now this.

[–] zd9@lemmy.world 4 points 15 hours ago

first writing hit pieces, then hitting targets with drones