snowe

joined 2 years ago
[–] snowe@programming.dev 5 points 1 week ago

AI bots will sometimes get stuck requesting the same URL over and over again for no reason. Make sure you check the user agent of the requests.
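As a rough sketch of what checking for this might look like (my own illustration, not any specific tool; the log path, format, and threshold are all assumptions), you can tally repeated hits per user agent from a standard combined-format access log:

```python
import re
from collections import Counter

# Assumes nginx/Apache "combined" log format; adjust the regex for yours.
LINE = re.compile(r'"(?:GET|POST|HEAD) (?P<url>\S+) [^"]*" \d+ \S+ "[^"]*" "(?P<ua>[^"]*)"')

hits = Counter()
with open("access.log") as f:  # path is a placeholder
    for line in f:
        m = LINE.search(line)
        if m:
            hits[(m["ua"], m["url"])] += 1

# Print user agents hammering a single URL; the cutoff is arbitrary.
for (ua, url), count in hits.most_common(20):
    if count > 500:
        print(f"{count:>7}  {url}  {ua}")
```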

[–] snowe@programming.dev 1 point 3 months ago

I did. Didn’t function at all. Had a ton of issues with the sound card, the Nvidia graphics driver also shit the bed multiple times, and many games didn’t work. Same for Mint, though.

[–] snowe@programming.dev 1 point 3 months ago

I mean sure, but you can say the same about any possible resolution. At some microscopic distance, even on a 42k display, you will be able to see the difference. Your scenario here is pretty much “if you use a display in a manner it wasn’t intended for, then you’ll be able to see the difference in resolution when you compare it to a display used in the manner intended.”
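For a rough sense of why that's true (standard visual-acuity arithmetic, my own illustration rather than anything from the thread): assuming the eye resolves about 1 arcminute, a pixel of pitch p stops being individually visible beyond roughly

```latex
d \approx \frac{p}{\tan(1/60^\circ)} \approx 3438\,p
% e.g. a 42" 4K panel: p ~ 0.24 mm, so d ~ 0.8 m.
% Sit closer than that and you can resolve pixels on any such display.
```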

[–] snowe@programming.dev 3 points 3 months ago (2 children)

But that could easily be due to the quality of the projector rather than the resolution. Everyone in here saying they notice differences is completely missing the point. You’d need to compare against the exact same panel type, manufacturer, model year, etc with the exact same manufacturing processes in order to come to this conclusion yourself.

[–] snowe@programming.dev 1 point 3 months ago

I mean that part makes sense. This is essentially the exact same community. Linux users will spread to any “general” Linux community on the web.

[–] snowe@programming.dev 3 points 3 months ago (1 children)

It’s pretty hard to fit all of what you just said into a meme. For example, your meme does not say that windows users are “dismissing any alternative out of hand”. It says windows users that refuse to switch. Maybe they hate windows but they literally must use Fusion 360 or AutoDesk or Meshmixer or RealityCapture or one of the numerous other software options that just do not work on Linux.

Anyway, if what your meme is actually about is people that only use the browser and then refuse to switch but still constantly complain, then yeah, you’re dead on.

[–] snowe@programming.dev 1 point 3 months ago

No, I’m talking about complaining about the problems you’re having with Linux, not asking for help with anything. It’s literally happened in this exact community, where people ask what issues people encounter with Linux, and if I (or others) say I have any issues, I get downvoted to hell.

[–] snowe@programming.dev 33 points 3 months ago (10 children)

Yeah, it’s super weird. If you go to a specific forum for help, like cachyos or bazzite, the community is bearable, and sometimes very helpful without being rude. But if you go to a general forum and state you’re having any issues with Linux, you’ll be downvoted to hell and told Linux is still better than windows.

[–] snowe@programming.dev 3 points 4 months ago

Sorry for the silence. My years on the internet have made me hesitant to publicly claim I'll do something until I'm already almost done. Otherwise it's unlikely to get done, and then I'm not keeping my word.

[–] snowe@programming.dev 6 points 4 months ago

and thank you for being a great contributor to the community! this site would be nothing without all of you!

[–] snowe@programming.dev 8 points 4 months ago

it is helping, thank you for the sponsorship. I should have migrated a long time ago because the costs really were adding up. I'll update my sponsor page after I have a fresh month of data for the bucket costs (which are still on Vultr) and the new server costs (which hopefully should be static). Thanks for the suggestion!

[–] snowe@programming.dev 11 points 4 months ago (2 children)

programming.dev is the 9th largest lemmy server. https://join-lemmy.org/instances

That stat was probably that low due to the server being down for around 90% of the last two weeks. If you look now it's at 220 and it will continue to go up.

On top of that, every action on every federated server is relayed to every instance. So all of lemmy.world's activity is still relayed to us and we have to handle it. Same for the other servers.
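As a toy illustration of that scaling (my own sketch, not Lemmy's actual code; the activity numbers are made up): an instance's inbound federation load tracks everyone else's activity, not its own user count.

```python
# Toy model of ActivityPub fan-out: every activity on a federated
# instance is delivered to every other instance that subscribes to it.
instances = {  # activities/day, made-up numbers
    "lemmy.world": 50_000,
    "lemmy.ml": 20_000,
    "programming.dev": 2_000,
}

for name, local in instances.items():
    # Each instance must also process everyone else's activity.
    inbound = sum(a for other, a in instances.items() if other != name)
    print(f"{name:>16}: local={local:>6}, inbound={inbound:>6}")

# programming.dev generates 2k activities but still processes 70k.
```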

On top of that we also operate many other services:

  • bytes.programming.dev
  • git.programming.dev
  • blocks.programming.dev
  • etc (there's a lot)

But really it was mostly just postgres thrashing on all the requests. Here's a look at our Cloudflare dashboard for number of requests:

Yes, this should be handleable by a server that small (think actor paradigm), but I was unable to tune postgres to get it to that point, as I'm not great at database stuff. I'm sure a DBA would have done a better job. I will note that some of the queries in the lemmy code are very badly optimized and were taking 20+ seconds to run each time, locking up the instance. With that on top of some other badly optimized selects for things like reading comments (which took about 7 seconds on average), there wasn't much I could do.
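For anyone curious how you'd find those offenders, the usual approach looks something like this (a sketch, not what I actually ran; it assumes the pg_stat_statements extension is enabled, the connection string is a placeholder, and the column is mean_exec_time on Postgres 13+ / mean_time on older versions):

```python
import psycopg2  # assumes psycopg2 is installed

conn = psycopg2.connect("dbname=lemmy user=lemmy")  # placeholder DSN
with conn, conn.cursor() as cur:
    # Surface the queries with the worst average runtime.
    cur.execute("""
        SELECT mean_exec_time, calls, left(query, 120)
        FROM pg_stat_statements
        ORDER BY mean_exec_time DESC
        LIMIT 10
    """)
    for mean_ms, calls, query in cur.fetchall():
        print(f"{mean_ms:10.1f} ms  x{calls:<8}  {query}")
```

From there, running EXPLAIN (ANALYZE, BUFFERS) on the worst queries usually shows whether an index or a query rewrite is needed.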

With the cost difference it was well worth it to just upgrade to a cheaper better server all around.

 

Hi all,

First off, I want to apologize for all the server instability. We long ago outgrew our instance size, but I was unable to afford a larger node on our provider, Vultr. We were maxing out every part of the server whenever even a slightly significant number of users was on the fediverse.

I've finally found the time to migrate us to a new provider, which allows us to step up to a much more powerful configuration. That migration has now been completed. I actually intended to post about the downtime on this community this morning before beginning, but when I went to do so, the server was already down and struggling to come back up. So I went ahead with the migration.

Server before: 4 CPU / 16 GB RAM / 400 GB NVMe
Server after: 8 CPU / 64 GB RAM / 1 TB NVMe

Please update this thread if you are seeing any issues around any part of the site. This means duplicate threads, things that aren't federating, inability to load profiles, etc.

There is still database tuning that needs to occur, so you should expect some downtime here and there, but otherwise the instance should be much more stable from now on.

During this process I also improved several other aspects of operating the server, so any 'actual' downtime should now be accompanied by proper maintenance pages (which hopefully don't get wiped by ansible anymore); those pages will also be a good indicator of legitimate maintenance.

Once again, I really apologize for all of the downtime. I understand it's very frustrating to use a server that operates like this.

snowe

 

We are going to be upgrading to 0.19.5 at 3:00 UTC on Thursday Oct 3 (10 minutes from now).

Downtime is expected to be about an hour. Hopefully it is not more.

 

I will no longer be able to assist with development or with debugging actual issues with the software... Quite juvenile behavior from the devs. It stemmed from this issue, where the devs continuously argued in public by opening and closing an issue. Anyway, I thought I would keep y'all apprised of the situation, since these are the people maintaining the software you are currently using.

 

Over the weekend we had a large intermittent outage, followed by unplanned maintenance that I had put off for way too long.

Lemmy runs with several different services.

  • lemmy-ui (the reactesque frontend)
  • lemmy (the rust backend)
  • postgres (the data store for operations, comments, posts, etc)
  • pictrs (the image data store)

The outage concerns the last one. We always knew we'd eventually need to migrate to an object-based store, but Lemmy defaults to file-based picture storage, and that's what we stuck with until now. This eventually caused the VPS that programming.dev runs on to seize up, resulting in the outage over the weekend.

Saturday night I spent several hours testing out the object migration on the beta.programming.dev site in order to validate that it worked. During this time I struggled with some very obtuse ansible errors that I hadn't encountered before and so I was not able to start the migration that night. I delayed until the next morning (thank goodness).

I began work Sunday morning at 10:00 America/Denver time. Initially the migration started off quite well, but it was moving incredibly slowly. Looking back on it now, the migration would have taken over 144 hours if I had left it to do its thing. I let this run for about an hour before messaging the pictrs dev to understand why logs weren't showing up for the migration (even though objects were showing up in the store). Apparently lemmy-ansible is set to use pictrs 0.4.0, which is not only quite old but also can't run migrations concurrently. There was the issue. I asked the dev if it was possible to stop a migration in the middle of running, upgrade, and continue. They told me what changes I'd need to make, I made them, did the upgrade, and restarted the migration. It immediately failed. This was the start of my issues.
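To see why concurrency makes such a difference for a migration like this, here's an illustrative sketch (not pict-rs's actual migration code; the bucket, endpoint, and paths are placeholders) of pushing a file tree to S3-compatible object storage with a worker pool:

```python
from concurrent.futures import ThreadPoolExecutor
from pathlib import Path

import boto3  # works with any S3-compatible object store

s3 = boto3.client("s3", endpoint_url="https://objects.example.com")  # placeholder
ROOT = Path("/var/pictrs")  # placeholder media root

def upload(path: Path) -> None:
    # Key layout is made up for the example; pict-rs has its own scheme.
    s3.upload_file(str(path), "pictrs-media", str(path.relative_to(ROOT)))

files = [p for p in ROOT.rglob("*") if p.is_file()]

# Sequential uploads are latency-bound: one small file per round trip.
# A pool of workers overlaps those round trips, which is roughly the
# difference between a 144-hour migration and a few hours.
with ThreadPoolExecutor(max_workers=32) as pool:
    list(pool.map(upload, files))
```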

The server was now too full of data to do anything, including running apt update or apt install to install tools to assist me. I was able to attach more block storage, but I'm not enough of a linux guru to figure out how to mount it where the current pictrs filesystem would be able to take advantage of it. I had to resort to copying the entire pictrs filesystem to a fresh ~500GB mount, fixing permissions, and then rerunning the migration from there. By the time I got to this point, it was about 12:30 PM. The migration from then on took several hours.

After the migration completed, I needed to deploy the new stack with the correct settings. The ansible script needed to run apt, though, and, well, that wouldn't work while the server was still full. At this point I was not confident in the migration, and I also hadn't realized that you could do the migration while the site was running (big oversight on my part). I therefore wanted to keep the entire pictrs file store until I had proved the object store was working. I created another block storage volume, copied the entire pictrs directory over to it again (another 20 minutes or so), and then deleted the original directory. I was now able to run the ansible script and deploy the new settings for pictrs, confident that I had a backup available in case something went wrong (this was not the main backup method; the server is backed up externally as well, but I didn't want to have to resort to those backups during the migration).

That completed the migration, some 5 hours after it originally started.

There were several things that exacerbated the issue and made it take several hours longer than I wanted.

  1. I let it go so long before doing the migration to object storage that the server was too full to even perform an apt update. This meant I couldn't install the tools I needed, along with causing a host of other issues as mentioned above.
  2. pict-rs was at a very suboptimal version. If it had just been two minor versions newer, it would have migrated perfectly fine in a few hours.
  3. My limited knowledge of ansible led me on wild goose chases several times.

Things I would change if I had to do it again:

  1. Dig a bit deeper into the concurrency flag in the pictrs docs. It was not present in the original guide I followed (from a lemmy post on another instance), and thus I didn't realize that the migration wouldn't run concurrently at all.
  2. Don't wait so long that the server fills up.
  3. Migrate while the server is running. That would have been dumb in this case, since the server wouldn't stay up anyway, and it could have caused other issues. But if the server had been stable, there was no reason to take it down, and other instances have migrated live with no problems.
 
 

It seems like the password limit is set to 60 characters, so I’m unable to log in to my instance. There probably should be no limit in the app, because each server could have different limits set.

 
 

There are gods for everything, but of course computers didn't exist in ancient Roman and Greek times. What God or Goddess, in your opinion, would personify Testing?

And yes these answers matter. 😬

 

Start by reading these two articles:

Ok, now that you've done that (hopefully in the order I posted them), I can begin.

I have always been a strong supporter of Open Source Software (OSS), so much so that all of my projects (yes all) are OSS and fully open for anyone to use. And with that, I knew that things could be used for good... and bad. I took that risk. But I also made sure to build stuff that wasn't, in itself, inherently bad. I didn't build anything unethical to my eyes (I understand the nuance here).

But I've seen what unethical devs can do.

Just take a look at those implementing the ModFascismBot for Reddit (that's not its name, but that's what it is). That is an incredibly unethical thing to build. Not because it's a private company controlling what they want their site to do; no, that's fine by me. Reddit can do whatever they want. But because it's an attempt to lie about reality, to force users to do something through manipulation rather than honesty. Even subreddits that voted overwhelmingly to shut down still got messaged by the bot telling them that the users (who voted for it) didn't want it and that they had to open back up or be removed from their mod positions. This is not ethical. This is not right. This is not what the internet is about.

Or the unethical devs at Twitter, who:

It's one thing for an organization to have a political lean... that is just a part of life, and that will never end. It's another to actually sow disinformation in order to accomplish nefarious things that further their profits. It is what has caused massive addiction to tobacco, the continuation of climate change, death and disfiguration from forever chemicals, ovarian cancer and mesothelioma from undisclosed exposure to asbestos, and the sale of 'health products' that claim to cure everything under the sun but can "interfere with clinical lab tests, such as those used to diagnose heart attacks".

Please do not confuse this with saying that companies shouldn't be able to sell things and make a profit. If you want to sell someone something that kills them if they misuse it, and you market it as such, go for it. That's literally how every product in the cleaning aisle of your grocery store works. That's how guns work, that's how fertilizers work, that's why we have labels. But manipulation for profit is unethical, and that's why companies hide it. It hurts their bottom line. They know that their products will not be used if they reveal the truth. Instead of doing something good for humanity, they choose subterfuge. Profits over people. Profits over Earth, honestly. Profits over continuing the human race. Absolutely nothing matters to companies like this. And unethical developers enable this.


Facebook (ok, fine, Meta, still going to refer to them as FB though) is trying to join the Fediverse. We as a community, but honestly each of you as individuals, have a decision to make. Do they stay or do they go? Let's put some information on the table.

Facebook...

  • lies about the amount of misinformation it removes ^1
  • increased censorship of 'anti-state' posts ^1 ^2 ^3
  • lied to Congress about social networks polarizing people, while FB's own researchers found that they do ^2
  • attempted to attract preteens to the platform (huh, wonder where all that "you must be 13" stuff went) ^4
  • rewards outrage and discord ^3

Facebook also...

  • Allows for checking on friends and family in disasters ^6
  • Created and maintained some of the most popular open source software on the planet (including the software that runs the interface you're looking at right now) [^7][^8]

From my perspective... There's not much good about FB. It has single-handedly caused the deaths of tens of thousands of people across the planet, if not hundreds of thousands. It continually makes people angrier and angrier. It's a launching pad for scammers, thieves, malevolent actors, manipulators, and dictators to push their conquests onto the world through manipulation, lies, tricks, and deceit. Its algorithms foster an echo chamber effect, exacerbating division and animosity, making civil discourse and mutual understanding all but impossible. Instead of being a platform for connection, it often serves as a catalyst for discord and misinformation. FB's propensity for prioritizing user engagement over factual accuracy has resulted in a global maelstrom of confusion and mistrust. Innocent minds are drawn into this vortex, manipulated by fear and falsehoods, consequently promoting harmful actions and beliefs. Despite its potential to be a tool for good, it is more frequently wielded as a weapon, sharpened by unscrupulous entities exploiting its vast reach and influence. The promise of a globally connected community seems to be overshadowed by its darker realities.


As a person, I believe that we need to choose things as a community. I do not believe in the 'BDFL'... the Benevolent Dictator For Life. Graydon Hoare, creator of Rust, recently wrote an article about how things would have been different had he stayed BDFL of Rust. From my position, the BDFLs we currently have on this planet really suck. Not just politically, but even in tech. I don't think that path is good for society. It might work in specific circumstances, but it usually fails, and when it does, people get hurt. Badly.

So, with that in mind, I've been working on a polling feature for Lemmy. I seriously doubt I'll be done with it soon, but hopefully FB takes a while longer to implement federation. I understand there's a desire for me, or the other admins, to just make a decision, but I really don't like doing that. If it comes down to it, I will implement defederation to start with, but I will still hold a vote as soon as I can get this damn feature done.


[^8]: the website actually uses Inferno, but from what I can tell it was forked directly from React, judging by the actual documentation and references in the repo.

41
submitted 2 years ago* (last edited 2 years ago) by snowe@programming.dev to c/meta@programming.dev
 

I will be updating the instance to v18 at ~~20:00~~ 22:00 UTC.

See https://programming.dev/post/181191 for the changes

edit: Lemmy.ml updated and seems to have gone down. We're going to wait and see what caused the outage and then proceed from there.

edit 2: lemmy.ml was down due to a DDoS attack. We will upgrade at 22:00 UTC

edit 3: we had issues with the email setup getting overridden again. If you tried to sign up in the past 8 hours, please try to just log in. If you can't, please message me (discord, matrix, or mastodon)

2
submitted 2 years ago* (last edited 2 years ago) by snowe@programming.dev to c/meta@programming.dev
 

I'm going to be working on getting the instance upgraded to 17.4 today. I had tried in the past, but ran into some issues with Cloudflare and DNS resolution.

First attempt will be at 17:00 UTC.

I'll update this post if that doesn't work and provide a second time.
