Take a good long look at your bot-ass username and then ask me again why you think you're getting suspended.
That is the YouTube video ID for Rick Astley's 'Never Gonna Give You Up'. It's understandable if moderators unknowingly suspend an account as a bot because of such a username, but the lack of clarification or any reason given for the suspension is incompetence on their part.
Think about an actual bot farm trying to infiltrate a website. They're creating dozens of new accounts a minute using random strings of characters as usernames, and eventually to reduce the load on their admins the website implements algorithmic username screening: if it doesn't follow at least some rules of a known language, the username is kicked out and the account banned.
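Purely to illustrate the kind of screening I mean (a toy sketch of my own, not anything I know Mastodon actually ships), such a filter might just demand a plausible vowel ratio and penalise the case/digit churn typical of random ID strings:

```python
def looks_like_a_language(username: str) -> bool:
    """Toy 'follows at least some rules of a known language' check:
    require a plausible vowel ratio and reject the upper/lower/digit
    churn that random ID-style strings tend to have."""
    letters = [ch for ch in username if ch.isalpha()]
    if not letters:
        return False
    vowel_ratio = sum(ch.lower() in "aeiou" for ch in letters) / len(letters)
    case_flips = sum(a.isupper() != b.isupper() for a, b in zip(letters, letters[1:]))
    digit_flips = sum(a.isdigit() != b.isdigit() for a, b in zip(username, username[1:]))
    return vowel_ratio >= 0.2 and case_flips <= 3 and digit_flips <= 2

print(looks_like_a_language("dQw4w9WgXcQ"))     # False: no vowels, lots of case/digit churn
print(looks_like_a_language("sunflower_dave"))  # True
```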
So now the owner of the bot farm realizes that essentially none of their bot usernames are getting through, but maybe this is an opportunity! They could use the bots to try to overwhelm the admins with reports/appeals: tie them up handling those, and maybe they can sneak some human-generated usernames past the goalie and wreak some havoc while the admins are working through the garbage in the appeal backlog.
At this point, the admins could either turn off new signups for a while, until the bot farm owner gets bored and tries somewhere else (unfortunately this would probably mean a bunch of real human users also getting bored and trying Bluesky or Threads), or they could use the information the bot farm is helpfully providing against it: grab the IP address from every nonsense-username signup attempt and add it to a global report/appeal ban list. Extra bonus: if you run two instances, sync your global ban lists so that a bot that burns itself at one of them is also blocked at the other.
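If I were to sketch that "use their own signups against them" idea (hypothetical names, reusing the `looks_like_a_language` toy from above; none of this reflects real Mastodon internals), it might look like:

```python
import json
import urllib.request

shared_blocklist: set[str] = set()  # IPs harvested from nonsense-username signups

def handle_signup(username: str, ip: str) -> bool:
    """Reject screened usernames, remember the source IP, drop repeat offenders."""
    if ip in shared_blocklist:
        return False                         # known bad actor: ignore, appeals included
    if not looks_like_a_language(username):  # toy heuristic from the earlier sketch
        shared_blocklist.add(ip)             # feed the global report/appeal ban list
        return False
    return True                              # plausible name: let normal review handle it

def sync_blocklist(peer_url: str) -> None:
    """Push our ban list to a cooperating instance (hypothetical endpoint), so a bot
    that burns itself on one instance is blocked at the other too."""
    payload = json.dumps(sorted(shared_blocklist)).encode()
    req = urllib.request.Request(peer_url, data=payload,
                                 headers={"Content-Type": "application/json"})
    urllib.request.urlopen(req)
```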
Unfortunately, I think op just got false-positived.
To be sure, all of this is fan fiction at best. I don't know for sure if any of this is how Mastodon is running things. It's all just educated conjecture at this point.
If I were a bot farm owner, I would likely just generate more "realistic" person usernames. Generating a unique username which doesn't look like random letters is trivial, and I don't really think that creating that obstacle is a real hindrance to anyone.
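As a rough idea of what I mean by trivial (word list and format made up), two dictionary words and a number already pass a vowel/churn heuristic like the one sketched earlier:

```python
import random

WORDS = ["sun", "river", "pixel", "maple", "otter", "cloud", "ember", "willow"]

def plausible_username() -> str:
    """Glue two dictionary words and a small number into a human-looking handle."""
    return "".join(random.sample(WORDS, 2)) + str(random.randint(1, 99))

print([plausible_username() for _ in range(3)])
# e.g. ['otterpixel7', 'mapleriver42', 'cloudsun19'] -- none of these look auto-generated
```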
Yes, but when creating a new system you can't just defend against new attacks. You have to defend against all the old attack vectors too.
I just don't see how the username is an attack vector. The sign-up has email verification and CAPTCHA. Requiring the username to be something sensible seems excessive.
But honestly, I don't know. Maybe this stops a lot more bot farms than I'd expect.
Captchas and email verifications can be easily bypassed.
Emails, sure. Captchas require a fair bit of elbow grease. Generating a random username which looks fine is nothing in the landscape of bot protection.
Bot farmers could find an exploit in reCAPTCHA. Or they could train up a neural network to accurately defeat them (I saw someone demonstrating a GPT-4 prompt that could handle it quickly and flawlessly with just a little bit of prompt engineering). When (not if) they find a way to defeat CAPTCHA, those lower-level protections become way more important and relevant.
It's an ever-moving set of problems; fixing it today is no guarantee that it'll still be fixed tomorrow, so everything has to stay in place until it's proven to no longer be effective or to cause more problems than it fixes.
It just seems like the perspective is off. You'd implement some script that captures the image of the CAPTCHA on the page, sends it to some AI solution that succeeds some percentage of the time, and hooks that into something that can interact with the website (not sure if you'd need to act indirectly through something like Selenium or if you can make direct web calls), while also ensuring that the CAPTCHA doesn't receive other suspicious data.
If you go through that trouble, I would be amazed if combining 2 or 3 words from a dictionary into a username would be the kryptonite of your bot farm.
Again, I don't know, and it might be a much more preventative solution than I can understand, but it feels like a strange kind of security by obscurity.
You're not wrong, but it's also one of those things where you don't want to make things easier for the bad actor, especially since most people aren't going to be signing up with random strings.
That doesn't make suspicious random usernames not spam, though. They generally are spam accounts.
The most recent spam I received, just five days ago, was from @oyPhFrxPx0@mastodon.social.
It's part of the URL to a very specific YouTube video. I hate that I recognized it immediately.
Yep, maybe that's it. It has been my username on Reddit for ~12 years, and I carried it over to Lemmy when I joined here. Joining Mastodon, I'd like to keep it still. But if the large Mastodon servers are suspending accounts and ignoring appeals due to a suspicious username, I'm kinda unhappy with those instances.
this must be the least dangerous version of how we are living in Terry Gilliam's Brazil
An answer in very bad form
Some people need direct answers.
Back in the reddit days I used to frequent "banned from" subreddits. People there appreciate cold bluntness over fake politeness.
Error 400000001: Username is password
I'm a bit surprised by the lack of engagement with you after the appeal; that alone should signal that the person behind the account is a genuine human and not a bot.
Can't speak to .online but I know .Social is infamous for being a poorly modded instance (largely because they're inundated with members). This obviously doesn't make it okay and certainly doesn't look good on their end.
One thing I would say, and I know this isn't an issue users should be dealing with, but I'd highly recommend signing up to a smaller instance. Tighter community with (in theory) a more personal modding system in place.
Is this same issue happening when you create an account where the username isn't akin to someone rolling their forehead on the keyboard?
It's the video ID on YouTube for "Never gonna give you up"
I know the reference.
I'm more focused on if OP is doing proper testing vs just assuming.
The correct answer is it’s because you are on someone else’s server and subject to the whims of its admin
I completely understand that Mastodon is made up of individual servers, each with their own admins, and they can do what they want with their instance. But I'd expect the highest suggested instances to at least answer the appeals when suspending users. If I'd joined a random tiny instance run by someone who wants to keep it to themselves, I'd understand, but the instances I joined are huge, with a welcoming message etc.
But I’d expect the highest suggested instances to at least answer the appeals when suspending users.
No, quite the contrary: the fact that these instances are the most popular also makes them the biggest targets for automated sock puppet and bot account creation. These guys are even more paranoid than many smaller instances about user names that appear to be randomly generated. Your own user name, as others in this thread suggested, would easily trigger their auto-ban rules. And a human moderator would take one look at the name and think the same thing.
And it is possible these auto-ban rules are built into the Mastodon server reference implementation and enabled by default, meaning it is likely that any other Mastodon instance you might try to sign up for would also have these same auto-ban rules. I don't know for sure, but I am not willing to play around and find out. So it looks to me like your only choice is to choose a different username. Sorry.
These guys are even more paranoid than many smaller instances about user names that appear to be randomly generated. Your own user name, as others in this thread suggested, would easily trigger their auto-ban rules. And a human moderator would take one look at the name and think the same thing.
God damnit, it took me until now to actually read their username. It's the YouTube URL for the rickroll video (ending in XcQ). I think you're right though: it was obviously autogenerated originally and only took on its current meaning via YouTube's use of the string.
You quoted the appeal part of my comment. I would understand if a bot is implemented to suspend users whose usernames are just generated strings of high entropy, like my own. But rejecting an appeal should not be an automated process.
I can't imagine that the automated ban helps a lot either. Generating random usernames which look like real people's usernames is pretty much a trivial task. Using a high-entropy string is just a choice on the bot developer's side.
But rejecting an appeal should not be an automated process.
My point is that a human can't tell the difference between a name generated by a bot and your username either. So you're right that the appeal ought not to be automated, but regardless of whether it is or not, you are not likely to get anywhere with an appeal. It will just go straight to ban, and a human in the loop would take a look at the name, see high entropy, and wouldn't think twice about whether the automated ban was correct. Like I said, they are paranoid because they are the largest Mastodon instances, and they have had to deal with concerted bot attacks a few times already.
Sure, and I'd probably understand it better from the instance owner's perspective if I were in their shoes. And to be fair to them, my username was randomly generated by YouTube at some point, so if they just outright reject appeals from generated usernames, I definitely fall into that category. I just feel like that's a bad process and practice for instances which are among the top of the suggested list for new users.
Considering that some bots might also have automated appeals built in, it's more reasonable to expect automated rejections.
I'd also argue that regardless of the server size, it takes literal seconds to explain why an account is getting banned. It's absolutely within the realm of reason that, if your system allows automated signups, you as an admin have an obligation to not be a shitcunt and at least give a reason as to why you're banning accounts.
The problem is scaling. If it takes, say, 30 seconds per name, and you ban, say, 1,000 dudes a day, those 30,000 seconds become over 8 hours of mod explanations (collectively) per day. It's a scaling issue, in short.
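Spelled out with those (made-up) numbers:

```python
bans_per_day = 1_000           # hypothetical daily ban volume on a big instance
seconds_per_explanation = 30   # optimistic time to write a reason for each one
hours = bans_per_day * seconds_per_explanation / 3600
print(f"{hours:.1f} moderator-hours per day")  # 8.3
```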