I hate how these kinds of things are always framed. The implied message is always that "AI" can autonomously decide to go off the rails. Similar to the Moltbot craze. The agents have to be told to do the things they do. They don't have free will.
Using a combination of network science and large language models, the same underlying technology that powers systems like ChatGPT, the researchers created and monitored synthetic bot agent personas, their posts, and their interactions with one another, simulating what a coordinated AI-powered social media network might look like.
So yeah, LLMs can be used nefariously to great effect. They're essentially more sophisticated bots.