
What a good browser...

[-] ticoombs@reddthat.com 16 points 1 week ago

This is SSO support on the client side. So you could use any backend that supports OAuth (I assume, I haven't looked at it yet).

So you could use a Forgejo instance, immediately making your git hosting instance a social platform, if you wanted.
Or use something self-hostable like Hydra.

Or you can use the social platforms that already exist, such as Google or Microsoft, allowing faster onboarding into the fediverse while passing the issues that come with user creation onto a bigger player who already does verification. All of these options are up to your instance to decide on.
The best part: if you don't agree with what your instance decides on, you can migrate to one whose policy coincides with your values.
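If it helps picture what the client side has to do, here is a minimal sketch of the standard OAuth 2.0 authorization-code exchange. The endpoints, client ID/secret and redirect URI are placeholders for whatever provider (Forgejo, Hydra, Google, ...) an instance registers with, not Lemmy's actual implementation.

```python
import json
import secrets
import urllib.parse
import urllib.request

# Placeholder provider details: substitute the endpoints and client
# credentials of whichever OAuth/OIDC provider the instance registered with.
AUTHORIZE_URL = "https://git.example.com/login/oauth/authorize"
TOKEN_URL = "https://git.example.com/login/oauth/access_token"
CLIENT_ID = "lemmy-instance"
CLIENT_SECRET = "change-me"
REDIRECT_URI = "https://lemmy.example.com/oauth/callback"

# Step 1: send the user to the provider's authorize endpoint.
state = secrets.token_urlsafe(16)
login_url = AUTHORIZE_URL + "?" + urllib.parse.urlencode({
    "response_type": "code",
    "client_id": CLIENT_ID,
    "redirect_uri": REDIRECT_URI,
    "scope": "openid email profile",
    "state": state,
})
print("Send the user to:", login_url)

# Step 2: the provider redirects back with ?code=...&state=...;
# the server then exchanges that code for tokens and logs the user in.
def exchange_code(code: str) -> dict:
    body = urllib.parse.urlencode({
        "grant_type": "authorization_code",
        "code": code,
        "redirect_uri": REDIRECT_URI,
        "client_id": CLIENT_ID,
        "client_secret": CLIENT_SECRET,
    }).encode()
    req = urllib.request.Request(
        TOKEN_URL, data=body, headers={"Accept": "application/json"}
    )
    with urllib.request.urlopen(req, timeout=10) as resp:
        return json.load(resp)  # contains access_token (and id_token for OIDC)
```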

Hope that gives you an idea behind why this feature is warranted.

[-] ticoombs@reddthat.com 18 points 1 week ago

We enabled Cloudflare's AI bots and crawlers blocking mode around 0:00 UTC (20/Sept).

This was because we had a huge number of AI scrapers that were attempting to scan the whole lemmyverse.

It successfully blocked them... While also blocking federation 😴

I've disabled the block. Within the next hour we should see federation traffic come through.
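For anyone curious how you can tell federation requests are getting through again: one quick check is to request something the way another instance would. A minimal sketch (the actor URL is just an example):

```python
import urllib.error
import urllib.request

# Fetch an actor document the way a federating server would.
# If a bot-blocking rule is intercepting this traffic you typically get
# an HTML challenge page or a 403 instead of ActivityPub JSON.
req = urllib.request.Request(
    "https://reddthat.com/u/ticoombs",  # example actor URL
    headers={"Accept": "application/activity+json"},
)
try:
    with urllib.request.urlopen(req, timeout=10) as resp:
        ctype = resp.headers.get("Content-Type", "")
        ok = "json" in ctype
        print(resp.status, ctype, "- looks fine" if ok else "- possibly a challenge page")
except urllib.error.HTTPError as exc:
    print("blocked or erroring:", exc.code)
```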

Sorry for the unfortunate delay in new posts!

Tiff


to be paired with tang


Highly relevant to us (as admins)


Not so much a sploit but an easy way to do broadcasting!


We had a brief outage today due to the server running out of space.

I have been tracking our usage but attributed it to extra logging and the extra build caches etc. that we've been doing.

Turns out the problem was the frontend, Next-UI, which had been caching every image since the container was created! All 75GB of cached data!

Once diagnosed, it was simple to fix. I'm yet to notify the project of this error/oversight and I'll edit this once Issues/PRs are created.
I also haven't looked at turning the caching off yet as my priority was recovering the main Reddthat service.
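For anyone wanting to put a ceiling on a cache like that in the meantime, here's a rough sketch of size-bounded eviction. The cache path and the 5GB budget are made up for illustration; this isn't the frontend's actual cache layout or the fix that was applied.

```python
from pathlib import Path

def trim_cache(cache_dir: Path, max_bytes: int) -> int:
    """Delete the oldest cached files until the directory fits within
    max_bytes; returns how many bytes were freed."""
    files = sorted(
        (p for p in cache_dir.rglob("*") if p.is_file()),
        key=lambda p: p.stat().st_mtime,   # oldest first
    )
    total = sum(p.stat().st_size for p in files)
    freed = 0
    for p in files:
        if total - freed <= max_bytes:
            break
        freed += p.stat().st_size
        p.unlink()
    return freed

if __name__ == "__main__":
    # Hypothetical path and budget.
    freed = trim_cache(Path("/data/image-cache"), 5 * 1024**3)
    print(f"freed {freed / 1024**2:.0f} MiB")
```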

Thanks all for being here!

Tiff


Recently I've taken the docker compose example from SChernykh and have started a p2pool for Reddthat!

https://github.com/SChernykh/p2pool/tree/master/docker-compose (many thanks here!). After some minor changes I removed the IP listing from the statistics and increased the visibility to 100 "supporters". It's viewable at donate.reddthat.com. (If @admin@monero.town wants the code change I can provide a diff.)

The idea was to also allow people to donate to instances via CPU instead of actual $.

My question for the community is whether I am creating a centralised pool, or am I still participating in a decentralised fashion?


here's a graph showing when we did the deploy!

[-] ticoombs@reddthat.com 20 points 6 months ago* (last edited 6 months ago)

That's a big decision I won't make without community input as it would affect all of us.

If we purely treated it as just another instance with no history then I believe our stance on it would be to allow them, as we are an allow-first type of instance. While there are plenty of people we might not want to interact with, that doesn't mean we should immediately hit that defederate button.

When taking history into account it becomes a whole different story. One may lean towards just saying no without thought.

All of our content (Lemmy/Fediverse) is public by default (at the present time) and searchable by anyone, and even if I were to block all of the robots and crawlers it wouldn't stop anyone from crawling one of the many other sites where all of that content is shared.

A recent feature being worked on is private/local-only communities. If a new Lemmy instance was created and they only used their local-only communities, would we enact the same open-first policy when their communities are closed for us to use? Or would we still allow them because they can still interact, view comments, vote and generate content for our communities etc.?

What if someone created instances purely for profit? They create an instance that becomes a cornerstone of the "market" and then run ads? Or make their instance subscription-only, where you have to pay per month for access?

What if there are instances right now federating with us and will use the comments and posts you make to create a shit-posting-post or to enhance their classification AI? (Obviously I would be personally annoyed, but we can't stop them)

An analogy for what Threads is: a local-only fediverse instance like Mastodon, with a block on replies. It restricts federation to their users in the USA, Canada and Japan; users cannot see when you comment/reply to their posts and will only see votes. They cannot see your posts either, and Threads only allows other fediverse users to follow Threads users.

With all of that in mind, if we were to continue with our open policy, you would be able to follow Threads users and get information from them, but any comments would stay local to the instance that comments on the post (and wouldn't make it back to Threads).

While writing up to this point I was going to stay impartial... but I think the lack of two-way communication is what tips the scales towards our next instance block. It might be worthwhile for keeping up to date with people who are on Threads, who don't understand what the fediverse is but still enabled the feature because it gives their content a "wider reach", so to speak. But in the context of Reddthat and people expressing views and opinions, having one-sided communication doesn't match what we are trying to achieve here.

Tiff

Source(s): https://help.instagram.com/169559812696339/about-threads-and-the-fediverse/

PS: As we have started the discussion, I'll leave what I've said up for the next week to allow everyone to reply and see what the rest of the community thinks before acting/blocking them.

Edit1:(30/Mar) PPS: we are currently not federated with them, as no one has bothered to initiate following a threads account

[-] ticoombs@reddthat.com 13 points 6 months ago* (last edited 6 months ago)

I managed to streamline the exports and syncs so we performed them concurrently, allowing us to finish in just under 40 minutes! Enjoy the new hardware!
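The general idea, in a rough sketch: run the independent export and sync steps at the same time instead of one after another. The commands below are placeholders, not the exact steps used for this migration.

```python
import subprocess
from concurrent.futures import ThreadPoolExecutor

# Placeholder jobs: the real migration had its own export/sync steps.
jobs = [
    ["pg_dump", "--format=custom", "--file=/backup/lemmy.dump", "lemmy"],
    ["rsync", "-a", "/srv/pictrs/", "newhost:/srv/pictrs/"],
]

def run(cmd: list[str]) -> None:
    print("starting:", " ".join(cmd))
    subprocess.run(cmd, check=True)   # raise if a step fails

# Independent jobs run concurrently; total time is roughly the slowest job.
with ThreadPoolExecutor(max_workers=len(jobs)) as pool:
    list(pool.map(run, jobs))
```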

So it begins: (Federation "Queue")
[Graph: federation queue showing an upwards trend, then down, then slightly back up again]

[-] ticoombs@reddthat.com 13 points 6 months ago

Successfully migrated from Postgres 15 to Postgres 16 without issues.

[-] ticoombs@reddthat.com 21 points 10 months ago

It's a sad day when something like this happens. Unfortunately, with how Lemmy's All feed works, it's possible a huge amount of the initial downvotes are regular people not wanting to see the content, as downvotes are federated. This was part of my original reasoning for disabling them when I started my instance. We had the gripes people are displaying here, and it probably contributed to a lack in Reddthat's growth potential.

There needs to be work done not only on flairs, which I like the idea of, but on a curated All/Frontpage (per-instance). Too many times I see people unable to find communities or new content that piques their interest. Having to "wade through" All-New to find content might contribute to the current detriment: instead of the general niche they might want to enjoy, they are bombarded with things they dislike.

Tough problem to solve in a federated space. Hell... we can't even get every instance to update to 0.18.5 so that federated moderation actions happen. If we can't all decide on a common Lemmy version, I doubt we can ask our users to give up the tools at their disposal (up/down/report).

Keep on Keeping on!

Tiff - A fellow admin.

[-] ticoombs@reddthat.com 23 points 1 year ago

Don't forget & in community names and sidebars.

Constantly getting trolled by &

[-] ticoombs@reddthat.com 13 points 1 year ago

No worries & Welcome!

That is correct, we have downvotes disabled across this instance. There was a big community post on it earlier over here: https://reddthat.com/post/110533
Basically, it boils down to: If we are trying to create a positive community, why would we have a way to be negative?

While that is also a very limited view on the matter, it's one I want to instill into our communities. Sure, downvotes do help with "offtopic" posts & possible spam, but at the time of that post no application, third party (a mobile app) or first party (lemmy-ui), had any features for hiding negatively voted content (i.e. if it was -4, don't show it to me).

By default (which is the same now as it was then), "Hot" only takes votes into account as one of many measures of how "hot" a post is for the ranking. Up & down votes are only really good for sorting by "Top".
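To give a rough feel for that, here's an illustrative time-decay ranking in Python. The constants and exact shape are made up for the example, not Lemmy's actual hot-rank formula: votes help logarithmically, while age steadily pushes a post down.

```python
import math
from datetime import datetime, timedelta, timezone

def hot_rank(score: int, published: datetime) -> float:
    """Illustrative only: score contributes logarithmically, age decays
    the rank polynomially. Not Lemmy's real constants."""
    age_hours = (datetime.now(timezone.utc) - published).total_seconds() / 3600
    return 10000 * math.log10(max(1, score + 3)) / (max(age_hours, 0) + 2) ** 1.8

now = datetime.now(timezone.utc)
print(hot_rank(50, now - timedelta(hours=24)))  # day-old, well-voted post
print(hot_rank(5, now - timedelta(hours=1)))    # fresh post with a few votes
```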

My biggest concern, then and now, is that because we now federate with over 1000 different instances, and by design Lemmy accepts all votes from any instance that you are federating with, vote manipulation (tens of thousands of accounts) could downvote every post on our instance into oblivion. Or, even more subtle and nefarious, downvote every post until it constantly sits at 0/1. You might assume that the posts are just not doing well, and never realise anything is happening.

As Reddthat is basically run by a single person at this point in time and for the foreseeable future (3-6 months), adding downvotes would have added extra effort on my part in monitoring and ensuring nothing nefarious is happening. Moderation is still a joke in Lemmy, reports are a crapshoot, and people can still spam any Lemmy server.

Until we get bigger, have more mods in our communities, and I can find others who are equally invested in Reddthat as myself to become Admins, I won't be enabling downvotes (unless the community completely usurps me on the matter of course).

I hope that answers your question.

Cheers

Tiff

[-] ticoombs@reddthat.com 20 points 1 year ago

Updates hiding in the comments again!

We are now using v0.18.3!

There was extended downtime because docker wouldn't cooperate AT ALL.

The nginx proxy container would not resolve the DNS. So after rebuilding the containers twice and investigating the docker network settings, a "simple" reboot of the server fixed it!

  1. Our database on the filesystem went from 33GB to 5GB! They were not kidding about the 80% reduction!
  2. The compressed database backups went from 4GB to ~0.7GB! Even bigger space savings.
  3. The changes to the backend/frontend have resulted in less downtime so far when performing big queries on the database.
  4. The "proxy" container is nginx, and its configuration uses upstream lemmy-ui & upstream lemmy. Those are DNS names that nginx caches for a period of time, so when a new container comes online the proxy doesn't actually find it, because it cached the IPs that lemmy-ui resolved to at startup. (In this example it would have been only 1, so when we add more containers the proxy would never find them.) There's a small sketch of the idea after this list. 4.1 You can read more here: http://forum.nginx.org/read.php?2,215830,215832#msg-215832
  5. The good news is that https://serverfault.com/a/593003 is the answer to the question. I'll look at implementing this over the next day(s).
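The sketch below only illustrates the resolve-once-versus-resolve-every-time difference in Python; the actual fix (per the serverfault answer) happens inside nginx itself, and the service name here only resolves from inside the compose network.

```python
import socket

UPSTREAM = "lemmy-ui"   # docker-compose service name (placeholder)

def resolve(host: str) -> set[str]:
    """Return the set of IPs the name currently points at."""
    return {info[4][0] for info in socket.getaddrinfo(host, 80)}

# What the proxy effectively did: resolve once when the config loaded,
# then keep using those addresses forever.
cached_at_startup = resolve(UPSTREAM)

# What it needs to do: re-resolve, so containers started later
# (new IPs behind the same service name) are actually picked up.
print("cached at startup:", cached_at_startup)
print("resolved just now:", resolve(UPSTREAM))
```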

I get notified whenever Reddthat goes down; most of the time it coincided with me banning users and removing content, so I didn't look into it much. But honestly, the uptime isn't great. (Red is <95% uptime, which means we were down for 1 hour!)

Actually, it is terrible.

With the changes we've made I'll be monitoring it over the next 48 hours to confirm that we no longer have any real issues. Then I'll make a real announcement.

Thanks all for joining our little adventure!
Tiff

[-] ticoombs@reddthat.com 28 points 1 year ago

These were because of recent spam bots.

I made some changes today. We now have 4 containers for the UI (we only had 1 before) and 4 for the backend (we only had 2).

It seems that when you delete a user, and you tell lemmy to also remove the content (the spam) it tells the database to mark all of the content as deleted.

Kbin.social had about 30 users who posted 20-30 posts each, which I told Lemmy to delete.
This only marks them as deleted for Reddthat users, until the mods mark the posts as deleted and that federates out.

The problem

The UPDATE in the database (marking the spam content as deleted) takes a while and the backend waits(?) for the database to finish.

Even though the backend has 20 different connections to the database it uses 1 connection for the UPDATE, and then waits/gets stuck.

This is what is causing the outages unfortunately and it's really pissing me off to be honest. I can't remove content / action reports without someone seeing an error.

I don't see any solutions on the 0.18.3 release notes that would solve this.

Temp Solution

So to combat this a little I've increased our backend processes from 2 to 4 and our front-end from 1 to 4.

My idea is that if 1 of the backend processes gets "locked" up while performing tasks, the other 3 processes should take care of it.

This unfortunately is an assumption, because if the "removal" performs an UPDATE on the database and the /other/ backend processes are aware of it and wait as well... that would count as "locking up" the database, and then it won't matter how many processes I scale out to; the applications will lock up and cause us downtime.
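One way around a single long-running UPDATE (independent of how many backend containers there are) would be to mark the content in small batches so each transaction stays short. This is only a sketch of that idea, not Lemmy's code; the table and column names are guesses at a Lemmy-like schema.

```python
import psycopg2  # third-party Postgres driver (pip install psycopg2-binary)

def remove_user_comments(dsn: str, creator_id: int, batch: int = 1000) -> None:
    """Mark one user's comments as removed in small batches so each
    transaction (and the row locks it holds) stays short."""
    conn = psycopg2.connect(dsn)
    try:
        while True:
            with conn.cursor() as cur:
                cur.execute(
                    """
                    UPDATE comment SET removed = true
                    WHERE id IN (
                        SELECT id FROM comment
                        WHERE creator_id = %s AND removed = false
                        LIMIT %s
                    )
                    """,
                    (creator_id, batch),
                )
                done = cur.rowcount == 0
            conn.commit()   # release locks between batches
            if done:
                break
    finally:
        conn.close()
```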

Next Steps

  • Upgrade to 0.18.3 as it apparently has some database fixes.
  • Look at the Lemmy API and see if there is a way I can push certain API commands (user removal) off to their own container.
  • Fix up/figure out how to make the nginx proxy container know when a "backend container" is down, and try the other ones instead.

Note: we are kinda doing point #3 already; it does a round-robin (tries each sequentially). But from what I've seen in the logs it can't differentiate between one that is down and one that is up. (From the nginx documentation, that feature is a paid one.)

Cheers, Tiff

[-] ticoombs@reddthat.com 13 points 1 year ago* (last edited 1 year ago)

Inflated. There are lots of instances with 20-80k bots.

Edit: see https://the-federation.info/platform/73 - order by total users and then look at the total active users. Totally different.
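If you want to check a single instance yourself, the NodeInfo document most Fediverse servers publish carries both numbers. A small sketch (it assumes the common /nodeinfo/2.0.json path; some servers only advertise it via /.well-known/nodeinfo):

```python
import json
import urllib.request

def user_counts(domain: str) -> tuple[int, int]:
    """Return (total users, monthly active users) from an instance's
    NodeInfo 2.0 document."""
    url = f"https://{domain}/nodeinfo/2.0.json"
    with urllib.request.urlopen(url, timeout=10) as resp:
        info = json.load(resp)
    users = info["usage"]["users"]
    return users.get("total", 0), users.get("activeMonth", 0)

total, active = user_counts("reddthat.com")
print(f"total={total} active_month={active}")
```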
