AI has made it vastly easier for malicious hackers to identify anonymous social media accounts, a new study has warned.
In most test scenarios, large language models (LLMs) – the technology behind platforms such as ChatGPT – successfully matched anonymous online users with their actual identities on other platforms, based on the information they posted.
The AI researchers Simon Lermen and Daniel Paleka said LLMs make it cost-effective to perform sophisticated privacy attacks, forcing a “fundamental reassessment of what can be considered private online”.
In their experiment, the researchers fed anonymous accounts into an AI and had it scrape all the information it could. They gave a hypothetical example of a user talking about struggling at school and walking their dog Biscuit through Dolores Park.
In that hypothetical case, the AI then searched elsewhere for those details and matched @anon_user42 to the known identity with a high degree of confidence.
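The linkage idea described above can be sketched in miniature. This is a toy illustration, not the study's actual method: it pulls distinctive tokens ("biscuit", "dolores") out of an anonymous account's posts and scores hypothetical candidate profiles by how many of those rare details they share. All handles and posts below are invented for the example.

```python
# Toy sketch of cross-platform account linkage (NOT the paper's method):
# extract distinctive tokens from an anonymous account's posts, then
# score candidate public profiles by overlap of those rare details.

def distinctive_tokens(posts, common_words):
    """Collect lowercase tokens that aren't everyday filler words."""
    tokens = set()
    for post in posts:
        for word in post.lower().split():
            word = word.strip(".,!?\"'")
            if word and word not in common_words:
                tokens.add(word)
    return tokens

def best_match(anon_posts, candidates, common_words):
    """Return (best handle, all scores) by shared-detail count."""
    anon = distinctive_tokens(anon_posts, common_words)
    scores = {
        handle: len(anon & distinctive_tokens(posts, common_words))
        for handle, posts in candidates.items()
    }
    return max(scores, key=scores.get), scores

COMMON = {"i", "my", "the", "a", "at", "in", "to", "and", "was", "is", "we", "with"}

anon_posts = ["Walked my dog Biscuit through Dolores Park, still stressed about school."]
candidates = {  # hypothetical public profiles
    "jane_doe": ["Biscuit loved Dolores Park today!"],
    "john_roe": ["Great ramen in Shibuya tonight."],
}

handle, scores = best_match(anon_posts, candidates, COMMON)
print(handle)  # prints "jane_doe" — the profile sharing "biscuit", "dolores", "park"
```

A real attack would use an LLM's world knowledge rather than bare token overlap, which is what makes it cheap and hard to defend against; the toy version only shows why a few distinctive details suffice.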
Study link - https://arxiv.org/abs/2602.16800
I've said it before and I'll say it a thousand times if need be: AI-based analysis, particularly LLM-based analysis, will strictly lead to bogus results. Its utility is intimidation and culling (!) of the have-nots.
Imagine entering a country and being flagged for antidemocratic rhetoric because the computer said so. It doesn't matter whether you actually said it; the machine has a claimed 0.01% error rate. And it doesn't matter whether that error rate is even accurate, because the process by which the machine reached its result is impossible to pry open. Anyone who manages such an algorithm can plant something, and it's very difficult to find out who, because the kind of person able to do so will also take the precaution of wiping their tracks.
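Even taking the claimed 0.01% error rate at face value, the arithmetic is ugly once it's applied to everyone. A quick back-of-the-envelope illustration, with hypothetical traveler and target counts chosen purely for the example:

```python
# Illustrative base-rate arithmetic. Only the 0.01% figure comes from the
# scenario above; the traveler and target counts are hypothetical.

travelers = 100_000_000       # hypothetical annual border crossings screened
false_positive_rate = 0.0001  # the claimed 0.01% error rate
actual_targets = 1_000        # hypothetical number of genuine matches

false_flags = travelers * false_positive_rate
share_wrong = false_flags / (false_flags + actual_targets)

print(int(false_flags))   # prints 10000 — innocent people flagged
print(round(share_wrong, 2))  # prints 0.91 — most flags are false
```

So even a machine that is "99.99% accurate" would flag ten thousand innocent travelers for every thousand genuine hits, and each of them faces a verdict they can't contest because the process can't be opened.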
That said, you can mark my words: you'll be seeing a lot more of this kind of brutalisation in the future. The targets will be anyone cast as a patsy in state-spun narratives — trans people, immigrants, specific nationalities, yadda-yadda.