Software engineers and game designers should be allowed 4 GB of RAM.
*4 kilobytes is all they will ever need.
But how can I get anything done with these meager 128 GB computers?
It's not running out. It's being hoarded for the entropy machine.
Edit: anyone know if entropy machine ram can be salvaged for human use? If they use the same sticks?
Yes, but you'll need special hardware. Enterprise systems use registered "RDIMM" modules that won't work in consumer systems. Even if your system supports ECC, that's just UDIMM, aka consumer-grade memory with error correction.
All that being said, I would bet you could find some cheap Epyc or Xeon chips + an appropriate board if/when the crash comes.
Okay so I'd just need an enterprise board.
Yah. And a CPU to match. Either Epyc or Xeon.
Engineering and qualification samples float around sometimes, which makes the prices more reasonable too. They sometimes have minor defects, but I've never had an issue that mattered.
Server memory is probably reusable, though it's likely to be soldered and/or ECC modules. But a soldering iron and someone sufficiently smart can probably manage it (if it isn't directly usable).
So it's salvageable if they don't burn it out running everything at 500°C
500°C would be way above the safe operating temps, but most likely yes.
You think the slop cultists care?
Yes, actually. Data centers are designed to cool down components pretty efficiently. They aren't cooking the RAM at 500°C.
500 might be hyperbole, but they do burn the things pretty hard. Maybe not at a real data center, but the slop cultists do.
Yeah, gonna be interesting. Software companies working on consumer software often don't need to care, because:
- They don't need to buy the RAM that they're filling up.
- They're not the only culprit on your PC.
- Consumers don't understand how RAM works nearly as well as they understand fuel.
- And even when consumers understand that an application is using too much, they may not be able to switch to an alternative anyway; see, for example, the many chat applications written in Electron, none of which are interoperable.
I can see somewhat of a shift happening for software that companies develop for themselves, though. At $DAYJOB, we have an application written in Rust and you can practically see the dollar signs lighting up in the eyes of management when you tell them "just get the cheapest device to run it on" and "it's hardly going to incur cloud hosting costs".
Obviously this alone rarely leads to management deciding to rewrite an application/service in a more efficient language, but it certainly makes them more open to devs wanting to use these languages. Well, and who knows what happens if the prices for Raspberry Pis, cloud hosting and such end up skyrocketing similarly.
I feel like it's as much the number of libraries as the language. There are many bloated C/C++ applications.
One of the main culprits in all of this is the Windows users themselves. How can people still degrade themselves like that? Stop installing Windows on your clients or servers...for the sake of humanity. You will do just fine without it; you just have to re-learn some controls at most. Linux has been client-friendly for YEARS now.
As a programmer myself I don't care about RAM usage, just startup time. If it takes 10 s to load 150 MB into memory, that's a good case for putting in the work to reduce the RAM bloat.
I mean, don't get me wrong, I also find startup time important, particularly with CLIs. But high memory usage slows down your application in other ways, too (not just other applications on the system). You will have more L1, L2 etc. cache misses. And the OS is more likely to page/swap out more of your memory onto the hard drive.
Of course, I can't sit in front of an application and tell that it was a non-local NUMA memory access that caused a particular slowness either, so I can understand not really being able to care about iterative improvements. But yeah, that is also why I quite like using an efficient stack outright. It just makes computers feel as fast as they should be, without me having to worry about it.
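To make the cache point a bit more concrete, here's a minimal sketch in plain Rust (array size and stride are arbitrary assumptions): the strided walk does the same number of additions as the sequential one, but touches memory in a cache-hostile pattern and is typically several times slower in a release build.

```rust
use std::time::Instant;

fn main() {
    // 64 MiB of u64s: big enough that strided access defeats the CPU caches.
    let n: usize = 8 * 1024 * 1024;
    let data: Vec<u64> = (0..n as u64).collect();

    // Sequential walk: cache lines and the hardware prefetcher do most of the work.
    let t = Instant::now();
    let seq: u64 = data.iter().sum();
    println!("sequential: sum={} in {:?}", seq, t.elapsed());

    // Strided walk, one u64 every 4 KiB: same number of additions,
    // far more cache (and TLB) misses.
    let stride = 512; // 512 * 8 bytes = 4 KiB
    let t = Instant::now();
    let mut strided: u64 = 0;
    for start in 0..stride {
        let mut i = start;
        while i < n {
            strided += data[i];
            i += stride;
        }
    }
    println!("strided:    sum={} in {:?}", strided, t.elapsed());
}
```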
Side-note
I heavily considered ending this comment with this dumbass meme:

Then I realized, I'm responding to someone called "Caveman". Might've been subconscious influence there. 😅
I should learn C because of Unga Bunga reasons. I fully agree that lower RAM usage is better and cache misses are absolute performance killers, but at the company I'm at there's just no time, people, or scale to do anything remotely close to that. We just lazy load and allow things to slowly cost more RAM while keeping the experience nice.
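For what it's worth, the lazy-load idea itself is cheap to express; here's a minimal Rust sketch (the lookup table and its size are made up for illustration), where the data only occupies memory once something actually asks for it:

```rust
use std::sync::OnceLock;

// The table is only built, and only occupies memory, the first time it's needed.
static LOOKUP: OnceLock<Vec<u64>> = OnceLock::new();

fn lookup_table() -> &'static Vec<u64> {
    LOOKUP.get_or_init(|| {
        // Stand-in for an expensive load from disk or a big precomputation.
        (0..1_000_000u64).map(|x| x * x).collect()
    })
}

fn main() {
    // Startup stays fast because nothing has been loaded yet.
    println!("started");
    // The first access pays the cost; later accesses reuse the cached value.
    println!("entry 42 = {}", lookup_table()[42]);
}
```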
I mean, for me, it's also mostly a matter of us doing embedded(-adjacent) software dev. So far, my company would hardly ever choose one stack over another for performance/efficiency reasons. But yeah, maybe that is going to change in the future.
As a programmer myself I don’t care about RAM usage,
But you repeat yourself.
Not sure why you said that. In programming I lean DRY unless it's a separate use case. The repetition comes from the hundreds of left-pad implementations in node_modules.
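For reference, the left-pad everyone keeps re-shipping is a one-liner with most standard formatting facilities; a Rust sketch, with the widths chosen arbitrarily:

```rust
fn main() {
    // Right-align (i.e. left-pad with spaces) to a width of 10 characters.
    let padded = format!("{:>10}", "42");
    assert_eq!(padded, "        42");

    // Left-pad a number with zeros to a width of 5.
    let zero_padded = format!("{:05}", 42);
    assert_eq!(zero_padded, "00042");

    println!("{padded} / {zero_padded}");
}
```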
Add to the list: doing native development most often means doing it twice. Native apps are better in pretty much every metric, but rarely are they so much better that management decides it's worth doing the same work multiple times.
If you do native, you usually need a web version, Android, iOS, and if you are lucky you can develop Windows/Linux/Mac only once and only have to take the variation between them into account.
Do the same in Electron and a single reactive web version works for everything. It's hard to justify multiple app development teams if a single one suffices.
"works" is a strong word
Works good enough for all that a manager cares about.
iOS
At my last job we had a stretch where we were maintaining four different iOS versions of our software: different versions for iPhones and iPads, and for each of those one version in Objective-C and one in Swift. If anyone thinks "wow, that was totally unnecessary", that should have been the name of my company.
At this rate I suspect the best solution is to cram everything but the UI into a cross-platform library (written in, say, Rust), keep the UI code platform-specific, and have it call the cross-platform library over FFI. If you're big enough to do that, at least.
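A minimal sketch of what the core-library side of that could look like, assuming a Rust crate built as a C-compatible library (the function names here are invented for illustration); each platform UI (Swift, Kotlin, C++, ...) would then hand-write or generate bindings to these entry points:

```rust
// lib.rs of a hypothetical shared "core" crate, built with
// crate-type = ["cdylib", "staticlib"] in Cargo.toml.

use std::ffi::CString;
use std::os::raw::c_char;

/// Example of shared logic: format a notification string for whatever UI sits on top.
/// The caller must hand the returned string back via `core_free_string`.
#[no_mangle]
pub extern "C" fn core_greeting(count: u32) -> *mut c_char {
    let s = format!("You have {count} unread messages");
    CString::new(s).unwrap().into_raw()
}

/// Return ownership of the string to Rust so it can be freed safely.
#[no_mangle]
pub extern "C" fn core_free_string(ptr: *mut c_char) {
    if !ptr.is_null() {
        unsafe { drop(CString::from_raw(ptr)) };
    }
}
```

The usual design choice is that everything crossing the boundary is plain C types, with ownership rules spelled out explicitly (here: strings go back through `core_free_string`).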
but the UI into a cross-platform library (written in, say, Rust)
Many have tried, none have succeeded. You can go allllll the way back to Java's Swing, as well as Qt. This isn't something that "just do it in Rust" is going to succeed at.
many chat applications written in Electron, none of which are interoperable.
This is one of my pet peeves, and a noteworthy example because chat applications tend to be left running all day long in order to notify of new messages, reducing a system's available RAM at all times. Bloated ones end up pushing users into upgrading their hardware sooner than should be needed, which is expensive, wasteful, and harmful to the environment.
Open chat services that support third party clients have an advantage here, since someone can develop a lightweight one, or even a featherweight message notifier (so that no full-featured client has to run all day long).
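As a sketch of how small such a notifier can be on an open protocol, here's a bare-bones IRC example in Rust (server, nick, and channel are placeholders, there's no TLS, and a real client should parse the protocol more carefully):

```rust
use std::io::{BufRead, BufReader, Write};
use std::net::TcpStream;

fn main() -> std::io::Result<()> {
    // Placeholder server; real networks use TLS on port 6697.
    let mut stream = TcpStream::connect("irc.example.org:6667")?;
    stream.write_all(b"NICK notifier\r\nUSER notifier 0 * :featherweight notifier\r\n")?;

    let reader = BufReader::new(stream.try_clone()?);
    for line in reader.lines() {
        let line = line?;
        if let Some(token) = line.strip_prefix("PING ") {
            // Keep the connection alive.
            stream.write_all(format!("PONG {token}\r\n").as_bytes())?;
        } else if line.contains(" 001 ") {
            // Registration accepted; join the channel we want notifications from.
            stream.write_all(b"JOIN #example\r\n")?;
        } else if line.contains("PRIVMSG") {
            // "Notify" on new messages instead of rendering a whole chat UI.
            println!("new message: {line}");
        }
    }
    Ok(())
}
```

A few kilobytes of code and one TCP socket, versus an Electron client keeping a browser engine resident all day.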
Just glad I invested in 64 GB when it only cost $200. The same RAM today is nearly $700.
640k ought to be enough for anybody
TUI enthusiasts: "I've trained for this day."
P.S. Yes, I know a TUI program can still be bloated.
The tradeoff always was to use higher-level languages to increase development velocity, and then pay for it with larger and faster machines. Moore's law made it so that the software engineer's time and labor were the expensive thing.
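As a rough illustration of where part of that memory cost comes from, here's a Rust sketch in which the `Value` enum is a caricature of the tagged, heap-happy representation a dynamic runtime carries per value (exact sizes depend on the target; the numbers in the comments assume a typical 64-bit platform):

```rust
use std::mem::size_of;

// A caricature of a dynamically typed runtime value: a tag plus room for the
// largest variant, often followed by a heap allocation on top of that.
#[allow(dead_code)]
enum Value {
    Int(i64),
    Float(f64),
    Text(String),
}

fn main() {
    println!("plain u64:    {} bytes", size_of::<u64>());   // 8
    println!("tagged Value: {} bytes", size_of::<Value>()); // typically 32 on 64-bit
    println!("boxed Value:  {} bytes + a heap allocation", size_of::<Box<Value>>()); // 8, just the pointer
}
```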
Moore's law has been dying for a decade, if not more, and as a result I am definitely seeing people focus more on languages that are closer to the hardware. My concern is that management will, like always, not accept the tradeoffs that performance-oriented languages sometimes require and will still expect incredible levels of productivity from developers. Especially with all of the nonsense around using LLMs to "increase code writing speed".
I still use Python very heavily, but have been investigating Zig on the side (Rust didn't really scratch my itch), and I love the speed and performance, but you are absolutely making a tradeoff when it comes to productivity. Things take longer to develop, but once you finish developing them, the performance is incredible.
I just don't think the industry is prepared to take the productivity hit, and they're fooling themselves, thinking there isn't a tradeoff.
Hah, wishful thinking