this post was submitted on 09 Mar 2026
73 points (86.9% liked)
Privacy
We’re Training Students To Write Worse To Prove They’re Not Robots, And It’s Pushing Them To Use More AI
If students have to use AI just to make it look like they're not using AI, what on earth will a system like this do to people? How it could possibly read the intent of people's actions without throwing up a huge number of false positives is something I don't understand.
And I'm not sure what workers are supposed to do when they receive an 'alert' of this nature. Go up to the individual and tell them their behaviour has been flagged as suspicious? Way to make me feel more anxious in public.
No, it already does. Facial-ID systems already throw hundreds of false positives.