-43 points (28.7% liked) · submitted 14 Nov 2023 by madasi@lemmy.dbzer0.com to c/linux@lemmy.ml

I'm not smart enough to verify the accuracy of this claim, nor exactly what the implications are, but it seems like it might improve performance if fixed.

top 6 comments
[-] Frederic@beehaw.org 22 points 10 months ago

The title is bullshit; we've had multicore machines for years. I can guarantee this had next to no impact, otherwise people running Xeons or Threadrippers would have seen it on the first try 15 years ago.

This looks like it has an impact on the scheduler, but not on how many cores are used.

[-] drwho@beehaw.org 3 points 10 months ago

I agree. Some of the Linux servers I used to run at work in the early 00s were 12-to-16-core monsters (for the time), and the kernel didn't even blink.

[-] AernaLingus@hexbear.net 18 points 10 months ago

There's a variable that holds the number of cores (called cpus) which is hardcoded to max out at 8, but that doesn't mean cores beyond the eighth aren't utilized--it just means the scheduling scaling factor stops changing, in both the linear and logarithmic cases, once you go above that number:


/*
 * Increase the granularity value when there are more CPUs,
 * because with more CPUs the 'effective latency' as visible
 * to users decreases. But the relationship is not linear,
 * so pick a second-best guess by going with the log2 of the
 * number of CPUs.
 *
 * This idea comes from the SD scheduler of Con Kolivas:
 */
static unsigned int get_update_sysctl_factor(void)
{
	unsigned int cpus = min_t(unsigned int, num_online_cpus(), 8);
	unsigned int factor;

	switch (sysctl_sched_tunable_scaling) {
	case SCHED_TUNABLESCALING_NONE:
		factor = 1;
		break;
	case SCHED_TUNABLESCALING_LINEAR:
		factor = cpus;
		break;
	case SCHED_TUNABLESCALING_LOG:
	default:
		factor = 1 + ilog2(cpus);
		break;
	}

	return factor;
}
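
To see what that cap does in practice, here's a minimal userspace sketch of the same computation (my ilog2u() just stands in for the kernel's ilog2()); the log-case factor plateaus at 4 once the CPU count passes 8:

#include <stdio.h>

/* Userspace stand-in for the kernel's ilog2() */
static unsigned int ilog2u(unsigned int x)
{
	unsigned int r = 0;

	while (x >>= 1)
		r++;
	return r;
}

int main(void)
{
	for (unsigned int n = 1; n <= 256; n *= 2) {
		unsigned int cpus = n < 8 ? n : 8;	/* the hardcoded cap */

		printf("%3u CPUs -> factor %u\n", n, 1 + ilog2u(cpus));
	}
	return 0;
}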

The core claim is this:

It’s problematic that the kernel was hardcoded to a maximum of 8 cores (scaling factor of 4). It can’t be good to reschedule hundreds of tasks every few milliseconds, maybe on a different core, maybe on a different die. It can’t be good for performance and cache locality.

On this point, I have no idea (hope someone more knowledgeable will weigh in). For what it's worth, the "scaling factor of 4" is just the log case evaluated at the cap: 1 + ilog2(8) = 4. But I'd say the headline is misleading at best.
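
To put rough numbers on it: that factor multiplies the scheduler's base tunables, so with the cap the effective values stop growing past 8 cores. Here's a sketch of the relationship, assuming the kernel's normalized CFS defaults of 6 ms sched_latency and 0.75 ms sched_min_granularity (treat the exact figures as illustrative):

#include <stdio.h>

/* Userspace stand-in for the kernel's ilog2() */
static unsigned int ilog2u(unsigned int x)
{
	unsigned int r = 0;

	while (x >>= 1)
		r++;
	return r;
}

int main(void)
{
	/* Assumed normalized defaults, in microseconds */
	const unsigned int norm_latency_us  = 6000;	/* 6 ms */
	const unsigned int norm_min_gran_us = 750;	/* 0.75 ms */

	for (unsigned int n = 1; n <= 128; n *= 2) {
		unsigned int cpus = n < 8 ? n : 8;	/* the hardcoded cap */
		unsigned int factor = 1 + ilog2u(cpus);	/* log case */

		printf("%3u CPUs: factor %u -> latency %2u ms, min granularity %4u us\n",
		       n, factor,
		       factor * norm_latency_us / 1000,
		       factor * norm_min_gran_us);
	}
	return 0;
}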

[-] madasi@lemmy.dbzer0.com 8 points 10 months ago

I took the title from the link, which doesn't exactly match the title on the page. That's why one says 20 years and the other says 15 years.

[-] Pantherina@feddit.de 6 points 10 months ago

Heard of that; you know, the Core Duo was a thing, and at some point in the past they didn't think anything bigger was possible. But AFAIK that's pretty old news.

[-] merthyr1831@lemmy.world 2 points 9 months ago

Would've been nice of them to compile the kernel with a fix applied to see how much of an impact it has (though even in the post they seem to suggest that it's not that impactful unless you run massive clusters)
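
For anyone curious, the change being discussed amounts to dropping the cap in get_update_sysctl_factor() shown above; untested, just to illustrate the shape of such a patch:

--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ ... @@ static unsigned int get_update_sysctl_factor(void)
-	unsigned int cpus = min_t(unsigned int, num_online_cpus(), 8);
+	unsigned int cpus = num_online_cpus();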
