@Melatonin Yeah, I usually run into people who either assume everything is a hallucination or don't understand that hallucinations happen and are unavoidable. Even fewer people understand why or how they happen (i.e. if you ask a question about anything not in the provided info, it'll most likely hallucinate as if the answer were there).
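A minimal sketch of that failure mode, with entirely made-up context and wording:

```python
# Sketch of the failure mode: a question whose answer is NOT in the
# provided context. The company and facts here are made up.
context = "Acme Corp was founded in 2009 and makes industrial valves."
question = "Who is Acme Corp's current CEO?"  # the context never says

prompt = (
    "Answer using only the context below.\n"
    f"Context: {context}\n"
    f"Question: {question}"
)
# A model will tend to produce a confident, plausible-sounding name
# here, exactly as if the answer had been in the context all along.
```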
@finkrat @Melatonin If you read its output and learn nothing, then you've only saved yourself time; if you don't understand it and learn nothing, then you shouldn't use it, because you can't vouch for it.
@r3df0x @SuddenDownpour That's not remotely what this is referring to, and it makes me wonder if you read the article at all.
They were comparing the public vs private actions of allistics vs autistics and basically determined that autistics are more likely to be charitable/kind without needing recognition or attention for it.
The real findings:
* We're less likely to alter our choices based on whether or not they're perceived
* We're more kind by default
What you're talking about is a separate but also common thing called fawning: a trauma response that many of us also have, in which we do whatever we think a person wants in order to avoid perceived threats and harm, even if that action itself causes us further harm.
This test did not examine fawning, and it did not examine charity at great personal cost. It was just whether or not someone would act charitably at personal expense or uncharitably at personal gain... and allistics basically were only good when people were watching, while autistics were consistent regardless.
@nichtsowichtig @teraflopsweat "Scripting" is a common tool for us, though in most cases it's just rehearsing lines and establishing conversation patterns and flows.
Groups like this are also great places to get together and workshop communication ideas: from figuring out something accessible to say to get your needs met, to other forms of communication, or even just validation when there isn't a reasonable way forward.
@mudeth I 110% agree, faeranne, especially in that this is much like the topic of encryption, where people (especially politicians) keep arguing that we just need to magically come up with a solution that allows governments to access all encrypted communication without impacting security, while somehow preventing people from using existing encryption to bypass it entirely. It's much like trying to legislate math into functioning differently.
The closest you can get to a federated moderation protocol is basically just a standard way to report posts/users to admins.
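ActivityPub already has a primitive roughly along these lines: Mastodon forwards reports between servers as a `Flag` activity. A rough sketch of the shape, with hypothetical URLs:

```python
# Rough shape of an ActivityPub "Flag" activity, the mechanism Mastodon
# uses to forward a report to a remote admin. All URLs are hypothetical.
flag_activity = {
    "@context": "https://www.w3.org/ns/activitystreams",
    "type": "Flag",
    "actor": "https://example.social/actor",          # reporting instance
    "content": "Spam/harassment, see attached post",  # reporter's comment
    "object": [
        "https://other.example/users/someuser",                 # account
        "https://other.example/users/someuser/statuses/12345",  # post
    ],
}
# Note what's *not* in it: no verdict and no required action. The
# receiving admin decides what, if anything, to do about it.
```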
You could absolutely build blocklists that are shared around, but that's already a thing and will never be universal.
Basically what you're describing is that someone should come up with a way to *force* me to apply moderation actions to my server that I disagree with; that such a system would somehow be immune to abuse (i.e. that because it's external to my server, it would magically avoid manipulation by hackers and trolls); and that I would have no choice in whether or not to allow that access, despite running a server based on open source software whose code I can edit myself if I wish (but which, somehow, I couldn't edit to prevent the external moderation from working).
You largely miss the point of my other arguments: email is a perfect reference point because, despite being private rather than public, it faces all the same technical, social, and legal challenges. It's just an older system with a slightly different purpose (which doesn't change its technical foundations, only how it's interacted with), but it's the closest relative to ActivityPub with much, much larger adoption. These issues and topics have already been discussed ad nauseam there.
And I didn't say users would moderate themselves; I said we decide what is worth taking action on. If you're not an admin, you choose whether or not something is worth reporting, and whether or not the server you're on is acceptable to your wants/needs. If you take issue with anti-vaxxers, climate change deniers, and nazis, and your server allows all of that (either on the server itself, or by having no issue with other servers that allow it)... then you move to a server that doesn't.
Finally, this doesn't end in centralization because of all the aforementioned gray areas. There are many things that I don't consider acceptable on my server but aren't grounds for defederation.
For example: I won't tolerate the ignoring of minority voices on topics of cultural appropriation and microaggressions... but I don't consider it a good idea to defederate other servers over it, because the admins themselves often barely understand it and I would be defederating 90% of the fediverse at that point. If I see it from my users I will talk to them and take action as appropriate, but from other servers I'll report it if the server looks remotely receptive to that.
@mudeth @pglpm The gray area is all down to personal choices and how "fascist" your admin is (which feeds into which instance is best for you).
Defederation is a double-edged sword, because if you defederate constantly for frivolous reasons all you do is isolate your node. This is also why it's the *final* step in moderation.
The reality is that it's a whole bunch of entirely separate environments, and we've walked this path well with email (the granddaddy of federated social networks). The only moderation we can perform outside of our own instance is to defederate; everything else is just typical blocking you can do yourself.
The process here on Mastodon is to decide for yourself what is worth taking action on. If it's not your instance, you report it to the admin of that instance, and they decide whether they want to take action and what action to take. And if they decide it's acceptable, you decide whether it's a personal problem (just block the user or domain in your own account but leave it federating) or a problem for your whole server (in which case you defederate to protect your users).
Automated action is bad because there's no automated identity verification here, and it's an open door to denial of service attacks (a harasser generates a bunch of different accounts and uses them all to report a user until that user is auto-suspended).
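A toy sketch of why that naive rule fails; the threshold and names are made up:

```python
# Toy model of a naive auto-suspend rule. Threshold and names made up.
REPORT_THRESHOLD = 5

reports: dict[str, set[str]] = {}  # target -> set of reporter ids

def report(reporter: str, target: str) -> bool:
    """Record a report; return True once the target is auto-suspended."""
    reports.setdefault(target, set()).add(reporter)
    return len(reports[target]) >= REPORT_THRESHOLD

# The attack: account creation is free and unverified, so a harasser
# just mints throwaway reporters until the threshold trips.
suspended = False
for i in range(REPORT_THRESHOLD):
    suspended = report(f"sock-puppet-{i}", "victim@example.social")

print(suspended)  # True -- five fake accounts silenced a real user
```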
The backlog problem, however, is intrinsic to moderation; every platform struggles with it. You can automate moderation, but then that gets abused and produces countless cases of action taken against harmless content; and you can farm out moderation, but then you get sloppiness.
The fediverse actually helps with moderation, because each admin is responsible for a group of users and the rest of the fediverse basically decides whether they're doing their job acceptably via federation and defederation (i.e. if you show that you have no issue with open Nazis on your platform, then most other instances aren't going to want to connect to you).
@shortwavesurfer The propulsion is absolutely linear; the perk of an ion drive is that it's mostly electrical, with minimal fuel consumption.
It's also something we're already using; the first one was actually launched back in 1964, though for some reason we never stopped hyping it.
An ion engine would absolutely make the trip take *longer*, as you'd have to wait for better transfer windows (9 months is the timeframe *after* we wait for a good transfer window); we'd have to wait even longer for one suited to an ion drive, and the window absolutely wouldn't be shorter.
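To put rough numbers on how weak the thrust is (back-of-the-envelope with approximate Dawn-class figures, so treat them as ballpark):

```python
# Back-of-the-envelope: time for a Dawn-class ion thruster to build up
# a transfer-sized delta-v. All figures are approximate.
thrust_n = 0.09    # ~90 mN, roughly the NSTAR thruster's maximum
mass_kg = 1200.0   # roughly Dawn's launch mass

accel = thrust_n / mass_kg  # ~7.5e-5 m/s^2
delta_v = 3000.0            # m/s, ballpark for a Mars transfer burn
days = delta_v / accel / 86400
print(f"{accel:.1e} m/s^2 -> {days:.0f} days of continuous thrust")
# ~7.5e-05 m/s^2 -> ~463 days, versus minutes for a chemical burn.
# Fantastic fuel economy, but nothing about it shortens the trip.
```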
@thezeesystem I feel you, these are the best I've gotten. They also have in-ear which I've used and are equally good.
The app isn't required, but it's highly recommended (firmware updates, calibrating the audio and noise cancelling to your hearing, managing multipoint, i.e. switching connections between devices).
They also have an incredible passthrough mode (allowing you to optionally hear people without taking them off; almost as good as not wearing them at all, imo).
Plus insane battery life, advertised as 40 hours... enough that by the time I get the low battery warning, I've completely forgotten when I last charged them.
us.soundcore.com/products/spac…