Unless you are running at really large scales, or at really small scales and trying to fit stuff that doesn't quite fit, memory compression may not be a significant enough optimization to be worth a lot of experimenting. But I'm bored and currently on an 8 GB device, so here are my thoughts dumped out from my recent testing:
Zram vs zswap (can be done at the hypervisor or inside the guests):
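Rough sketch of what enabling each looks like on a systemd distro; the zram-generator config, zstd, and the size/percent values are just example assumptions, not recommendations:

```sh
# zram: a compressed, RAM-backed block device used as swap.
# Assumes the zram-generator package is installed.
cat >/etc/systemd/zram-generator.conf <<'EOF'
[zram0]
zram-size = ram / 2
compression-algorithm = zstd
EOF
systemctl daemon-reload
systemctl start systemd-zram-setup@zram0.service
zramctl            # shows device, algorithm, compressed vs. original size

# zswap: a compressed cache in front of an existing swap device
# (so you still need disk/partition swap behind it).
echo 1    > /sys/module/zswap/parameters/enabled
echo zstd > /sys/module/zswap/parameters/compressor
echo 20   > /sys/module/zswap/parameters/max_pool_percent
```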
Kernel same-page merging (KSM) (would be done at the hypervisor level; ESXi has an equivalent feature under a different name, Transparent Page Sharing):
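On a KVM host, turning KSM on is just poking sysfs; QEMU/libvirt mark guest RAM as mergeable by default, so nothing is needed inside the guests. The tuning numbers below are example values only:

```sh
# Enable KSM on the hypervisor.
echo 1 > /sys/kernel/mm/ksm/run

# Optional tuning - example values, trade CPU time for faster merging:
echo 1000 > /sys/kernel/mm/ksm/pages_to_scan   # pages scanned per wake-up
echo 200  > /sys/kernel/mm/ksm/sleep_millisecs # delay between scan passes

# How much is actually being merged:
grep -H . /sys/kernel/mm/ksm/pages_shared /sys/kernel/mm/ksm/pages_sharing
```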
In my opinion, the best approach is to enable zram or zswap at the virtual machine level and kernel same-page merging at the hypervisor level, assuming you accept the marginal security risk and slightly weaker isolation that comes with KSM. There isn't much point running zswap at both layers, because the hypervisor will just spend a lot of time trying to compress data that's already been compressed. Then KSM deduplicates memory across guests. Although you may actually see worse savings overall if zram/zswap compression is only semi-deterministic, which makes deduplication harder.
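A rough way to sanity-check what each layer is actually saving (assumes 4 KiB pages, debugfs mounted, and the standard zram/zswap/KSM sysfs paths):

```sh
# On the hypervisor: approximate memory KSM has collapsed into shared copies.
awk '{printf "KSM saved ~%.0f MiB\n", $1*4096/1048576}' /sys/kernel/mm/ksm/pages_sharing

# Inside a guest using zram: compressed vs. original data size per device.
zramctl --output NAME,ALGORITHM,DISKSIZE,DATA,COMPR,TOTAL

# Inside a guest using zswap instead: stored pages and pool size.
grep -H . /sys/kernel/debug/zswap/stored_pages /sys/kernel/debug/zswap/pool_total_size
```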
I agree with the other commenter about zram being weird with some workloads. I've heard of (I think it was) Blender interacting weirdly with zram, since zram is swap backed by RAM, so there's less regular RAM available, whereas zswap compresses pages on their way out to an existing swap device. If you really need to know, you've got to test.
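For testing, I'd just run the workload and watch swap traffic and memory pressure rather than trusting gut feel; something like this (assumes a kernel recent enough to have PSI):

```sh
vmstat 5                    # si/so columns: pages swapped in/out per second
cat /proc/pressure/memory   # PSI: how much time tasks stall waiting on memory
free -m                     # note: with zram, the "swap" line is really RAM
```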