News and Discussions about Reddit
Welcome to !reddit. This is a community for all news and discussions about Reddit.
The rules for posting and commenting, besides the rules defined here for lemmy.world, are as follows:
Rules
Rule 1- No brigading.
**You may not encourage brigading any communities or subreddits in any way.**
Rule 2- No illegal or NSFW or gore content.
**No illegal or NSFW or gore content.**
Rule 3- Do not seek mental, medical, or professional help here.
Do not seek mental, medical, or professional help here. Breaking this rule will not get you or your post removed, but it will put you at risk, and possibly in danger.
Rule 4- No self promotion or upvote-farming of any kind.
That's it.
Rule 5- No baiting or sealioning or promoting an agenda.
Posts and comments which, instead of being of an innocuous nature, are specifically intended (based on reports and in the opinion of our crack moderation team) to bait users into ideological wars on charged political topics will be removed and the authors warned - or banned - depending on severity.
Rule 6- Regarding META posts.
Provided it is about the community itself, you may post non-Reddit posts using the [META] tag in your post title.
Rule 7- You can't harass or disturb other members.
If you vocally harass or discriminate against any individual member, you will be removed.
Likewise, if you are a member, sympathiser, or supporter of a movement that is known to largely hate, mock, discriminate against, and/or want to take the lives of a group of people, and you were provably vocal about your hate, then you will be banned on sight.
Rule 8- All comments should try to stay relevant to their parent content.
Rule 9- Reposts from other platforms are not allowed.
Let everyone have their own content.
Rule 10- The majority of bots aren't allowed to participate here. This includes using AI responses and summaries.
I sometimes wonder how prevalent bots are on Lemmy. On one hand, the barrier to entry might be lower and the effectiveness of bans harder to gauge. On the other, I'd think we're a smaller, less attractive target.
Either way, the potential to accuse dissenters of being bots or paid actors is a symptom of the general toxicity and slop spilling all over the internet these days. A (comparatively) few people can erode fundamental assumptions and trust. Ten years ago, I would've been repulsed by the idea of dehumanising conversational opponents that way (which may have been just me being more naive), but today I can't really fault anyone.
In terms of risk assessment (value÷effort), I'm inclined to think something with the reach of Ex-Twitter or reddit would be a more lucrative target, and most people here actually are people—people I disagree with, maybe, but still a human on the other side of the screen. Given the niche appeal, the audience here may overall be more eccentric and argumentative, so it's easy to mistake genuine users for propaganda bots instead of just people with strong convictions.
But I hate that the question is a relevant one in the first place.
We are the web. There is no web without the we.
It is ultimately humans who add value to the internet. We can make decisions, take action, and hold bank accounts; bots, for the most part, still can't. If we keep growing, there will come a time when swaying opinions, pushing advertisements, or driving dissent will reach that value/effort threshold, especially with the effort term shrinking every day.
I think that we are genuinely witnessing the end of the internet as we know it, and if we want meaningful online contact to persist after this death, then we should come up with ways that communities can weather the storm.
I don't know what the solution is, but I want to talk and think about it with others that care.
On the individual level we can maybe fortify against the reasons that might make someone want to extract that value.
I would love to hear what others think.
Hey, no judging my sleep ~~schedule~~ arbitrary times when biological necessity triumphs over all the fun things I could do while awake!
Serious reply:
On the collective level, we should do something about the mechanisms that incentivise that malicious extraction of value in the first place, but that's a whole different beast...
Agreed, though we should also stress that "less likely" or "unlikely" doesn't mean "never", and that we're not immune to being influenced by ads. That's a point I've seen people in my social circles overlook or blatantly ignore when it's pointed out, hence me emphasising it.
This is probably one of the most critical deficits in general. Even with the best intentions, people make mistakes, and it's crucial to be aware of that and able to compensate for it.
As with media literacy, I feel like this is a point that would apply even in a world where we're all humans arguing in good faith: others may have a different, perhaps limited or flawed perspective, or just make mistakes — just as you yourself may overlook things or genuinely have blind spots — so we should consider whose voice we give weight to in any given matter.
On the flipside, we may need to accept that our own voice might not be the ideal one to comment on something. And finally, we need to separate those issues of perspective and error from our worth as persons, so that admitting error isn't shameful, but a mark of wisdom.
That's the arms race we're currently running, isn't it? Developers of bots put effort into making them appear authentic—I overheard someone mention that their newest model included an extra filter to "screw up" some things people have come to consider indicators of machine-generated text, such as em-dashes, which are mostly used in particular kinds of formal writing and look out of place elsewhere.
If at all, people tend to just use a hyphen instead - it's usually more convenient to type (unless you've got a typographic compulsion to go that extra step because the hyphen just looks wrong). And so the dev in question made their model use fewer dashes and replace the rest with hyphens to make the text look more authentic.
I wanted to spew when I heard that, but that's beside the point.
So basically, we'd have to constantly be running away from the bots' writing style to set ourselves apart, even as they constantly chase our style to blend in. Our best weapon would be the creative intuition to find a way of phrasing things other humans will understand but bots won't (immediately) be able to imitate.
Being creative on demand isn't exactly a viable solution, at least not individually, and coordinating on the internet is like herding lolcats, but maybe we can work together to carve out some space for humanity.
Thanks for your comments. I agree with everything you said, especially that these traits are desirable for broader life IRL. In a way, web culture is a reflection of our own cultures, just more mixed, extreme, amplified, and with a good dose of parasociality. I desperately want people to break free of their cycles. Think, talk, discuss, empathize, and form communities — use your free will for good, dammit. These are the real antidotes that will enable the cultural shift that will allow us to reject the smothering of the human spirit in the current way of life.
Anyways, it is a terrible thing that there is an arms race to appear authentic. This really ought to be solved on the user-registration side. And also yes, saying something profound with hidden meaning through creative intuition is great — I write poems sometimes. But it's not the solution to authenticity online.