[-] gabe565@lemmy.cook.gg 4 points 1 year ago

Thank you! I have to admit, it's really satisfying seeing sponsored segments get skipped. Would definitely recommend!

Hi everyone! I've been using sponsorblockcast for a while (which is a great project), but I always wished it were written in Go. The go-chromecast library it uses is written in Go, so a native Go app can connect to all devices within a single process instead of spawning a child process for every device. I finally decided to spend some time writing my own, called CastSponsorSkip. It re-implements all of sponsorblockcast's features in Go, plus some additional privacy features. I wrote a comparison if anybody is curious!
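
Not the actual CastSponsorSkip code, but a minimal sketch of the single-process idea: something like one lightweight goroutine per device instead of a child process per device (the addresses and watchDevice are placeholders):

package main

import (
    "fmt"
    "sync"
)

// watchDevice stands in for the real work: connecting to a device with
// go-chromecast, polling its media status, and skipping sponsored segments.
func watchDevice(addr string) {
    fmt.Println("watching", addr)
}

func main() {
    // Placeholder addresses; the real app would discover devices on the network.
    devices := []string{"192.168.1.10", "192.168.1.11", "192.168.1.12"}

    var wg sync.WaitGroup
    for _, addr := range devices {
        wg.Add(1)
        go func(addr string) { // one goroutine per device, all in one process
            defer wg.Done()
            watchDevice(addr)
        }(addr)
    }
    wg.Wait()
}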

[-] gabe565@lemmy.cook.gg 14 points 1 year ago

That's just the default, and I assume it's mainly to make it easier for new users to start using Lemmy. It lets you change to any other instance during login.

[-] gabe565@lemmy.cook.gg 2 points 1 year ago

Definitely! I'm hosting in Kubernetes, so I won't post the full thing, but here's the actual command that I run hourly. Make sure to replace the values for database, username, and password.

PGPASSWORD=password psql --dbname=database --username=username --command="DELETE FROM activity WHERE published < NOW() - INTERVAL '3 days';"
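
For anyone running outside Kubernetes, the equivalent hourly crontab entry would look something like this (same placeholder credentials):

0 * * * * PGPASSWORD=password psql --dbname=database --username=username --command="DELETE FROM activity WHERE published < NOW() - INTERVAL '3 days';"
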
[-] gabe565@lemmy.cook.gg 6 points 1 year ago

The activity table is also used to deduplicate incoming federation data, so instead of truncating it, I'd suggest deleting rows after a certain amount of time.

For my personal instance, I set up a cron to delete entries older than 3 days, and my db is only ~500MB with a few weeks of content! I also haven't seen any duplicated posts or comments. Even with Lemmy's federation retries, 3 days seems to be a long enough window to keep rows in that table.
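
If you want to keep an eye on the size, Postgres can report it directly (again with placeholder credentials):

PGPASSWORD=password psql --dbname=database --username=username --command="SELECT pg_size_pretty(pg_total_relation_size('activity'));"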

[-] gabe565@lemmy.cook.gg 2 points 1 year ago

Awesome! A separate nginx container is fine, so if it's working I'd probably leave it. I'll look back through my comment, though, and see if there's anything I left out for brevity.

[-] gabe565@lemmy.cook.gg 1 points 1 year ago

That whole room is amazing. I used that pic as a Zoom background for a while lol

[-] gabe565@lemmy.cook.gg 3 points 1 year ago* (last edited 1 year ago)

That's awesome! I love his Helm chart. It's the most impressive Helm library I've ever seen. I maintain a bunch of charts and I exclusively use his library chart :)

I just mentioned this in a response to @seang96@exploding-heads.com, but I feel like deploying a separate nginx is probably cleaner; I just didn't want another SPOF that I could break at some point in the future.

[-] gabe565@lemmy.cook.gg 2 points 1 year ago

Hmm, I'm not sure! That code snippet should only change the routing when the Accept header or request method matches. When you added the configuration snippet, did your ingress logs show the requests to / going to the frontend or the backend?
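
If it helps, ingress-nginx's default access log format includes the chosen upstream name in square brackets, so something like this should show where / is landing (the namespace and label here assume a typical ingress-nginx install):

kubectl logs -n ingress-nginx -l app.kubernetes.io/name=ingress-nginx --tail=50 | grep '"GET / HTTP'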

An nginx container behind ingress seems cleaner; I just didn't want to add another point that I could possibly break lol

[-] gabe565@lemmy.cook.gg 4 points 1 year ago

I use Plex with Plexamp and love it other than the forced online account, which is minor enough in my opinion that it's been hard to justify looking for an alternative. What did you move to?

[-] gabe565@lemmy.cook.gg 2 points 1 year ago

Up to 400MB after two days here. I took a look at the code, and it looks like Lemmy keeps all ActivityPub JSON for 6 months. It would be nice if it were possible to shorten that.
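
If anybody wants to see how far back their instance's copy goes, the activity table from the delete command above has a published timestamp you can check (placeholder credentials as before):

PGPASSWORD=password psql --dbname=database --username=username --command="SELECT min(published) FROM activity;"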

I'm still happy that I'm hosting my own instance, but I hope this thing doesn't get too big!

[-] gabe565@lemmy.cook.gg 4 points 1 year ago

+1 for Borg! I use Borgmatic to back up files and databases to BorgBase. It costs me $80/yr for 1TB of backups, which I think is reasonable. I also self-host an instance of Healthchecks.io for monitoring.
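
For anyone curious how the pieces fit together, a borgmatic config along these lines ties the file backup, database dump, and Healthchecks ping into one run (the paths, repo, and ping URL are placeholders, and the section names are from borgmatic's older sectioned YAML format, so check your version's docs):

location:
    source_directories:
        - /srv
    repositories:
        - ssh://xyz@xyz.repo.borgbase.com/./repo

retention:
    keep_daily: 7
    keep_weekly: 4
    keep_monthly: 6

hooks:
    postgresql_databases:
        - name: all
    healthchecks: https://healthchecks.example.com/ping/your-uuid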

[-] gabe565@lemmy.cook.gg 2 points 1 year ago* (last edited 1 year ago)

Yep, I'm still working on a Helm chart. Currently, each service is deployed with the bjw-s app-template Helm chart, but I'd like to combine it all into a single chart.
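
For reference, pulling app-template in as a dependency looks roughly like this in each chart's Chart.yaml (the pinned version is just an example):

apiVersion: v2
name: lemmy
version: 0.1.0
dependencies:
  - name: app-template
    version: 1.5.1
    repository: https://bjw-s.github.io/helm-charts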

The hardest part was getting ingress-nginx to pass ActivityPub requests to the backend, but we settled on a hack that seems to work well. We had to add the following configuration snippet to the frontend's ingress annotations:

nginx.ingress.kubernetes.io/configuration-snippet: |
  # Override ingress-nginx's chosen upstream so ActivityPub and POST
  # requests go to the Lemmy backend instead of the frontend.
  if ($http_accept = "application/activity+json") {
    set $proxy_upstream_name "lemmy-lemmy-8536";
  }
  if ($http_accept = "application/ld+json; profile=\"https://www.w3.org/ns/activitystreams\"") {
    set $proxy_upstream_name "lemmy-lemmy-8536";
  }
  if ($request_method = POST) {
    set $proxy_upstream_name "lemmy-lemmy-8536";
  }

The value of $proxy_upstream_name follows the format $NAMESPACE-$SERVICE-$PORT.
I tested this pretty thoroughly and haven't been able to break it so far, but please let me know if anybody has a better solution!
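
If anybody wants to check the behavior themselves, comparing a browser-style request against an ActivityPub one should show the two different responses (hostname is a placeholder):

curl -i https://lemmy.example.com/
curl -i -H 'Accept: application/activity+json' https://lemmy.example.com/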
