The TL;DR of the article is that the headline isn't exactly true. Right now their PPU can potentially double a CPU's performance; the 100x claim comes with the caveat of "further software optimisation".
Tbh, I'm sceptical of the caveat. It feels like me telling someone I can only draw a stickman right now, but I could paint the Mona Lisa with some training.
Of course that could happen, but it's not very likely to - so I'll believe it when I see it.
Having said that, they're not wrong about CPU bottlenecks or the slowing rate of CPU performance improvements, so a doubling of performance would be huge in the current market.
Putting the claim instead of the reality in the headline is journalistic malpractice. 2x for free is still pretty great tho.
Just finished the article: it's not for free at all. Chips need to be designed to use it. I'm skeptical again. There's no point IMO. Nobody wants to put the R&D into massively parallel CPUs when they can put that effort into GPUs.
Not every problem is amenable to GPUs. If a workload has a lot of branching, or has to bounce back and forth to memory a lot, GPUs don't help.
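To make that concrete, here's a toy C++ sketch of the kind of pointer-chasing, branch-heavy loop I mean (made-up types, purely illustrative): every iteration stalls on a load, and the branch direction depends on the loaded data, so GPU-style wide lanes would just diverge and sit idle.

```cpp
#include <cstdint>

// Hypothetical node type for illustration: each step depends on the
// previous load, and the branch taken depends on the data itself.
struct Node {
    std::uint64_t value;
    Node* left;
    Node* right;
};

// Pointer-chasing with data-dependent branches: every iteration waits
// on a memory load, and the branch is unpredictable. Wide SIMD/GPU
// lanes diverge here, so their parallelism goes unused.
// (Assumes the structure eventually reaches a null pointer.)
std::uint64_t walk(Node* n) {
    std::uint64_t sum = 0;
    while (n != nullptr) {
        sum += n->value;
        n = (n->value & 1) ? n->left : n->right;  // data-dependent branch
    }
    return sum;
}
```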
Now, does this thing have exactly the same limitations? I'm guessing yes, but it's all too vague to know for sure. It sounds like they're doing what superscalar CPUs have done for a while: on x86 that starts with the original Pentium in 1993, and Seymour Cray's designs were doing it back in the '60s. What are they doing to supercharge this idea?
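For anyone unfamiliar, the superscalar trick is instruction-level parallelism: independent operations can issue in the same cycle. A toy C++ illustration (mine, not theirs); the second version gives the core four independent dependency chains to work on at once:

```cpp
#include <cstddef>

// Serial dependency chain: each add needs the previous result, so even
// a wide superscalar core retires roughly one add per cycle here.
double sum_serial(const double* a, std::size_t n) {
    double s = 0.0;
    for (std::size_t i = 0; i < n; ++i) s += a[i];
    return s;
}

// Four independent accumulators: the adds have no dependencies on each
// other, so a superscalar core can issue them in parallel.
double sum_unrolled(const double* a, std::size_t n) {
    double s0 = 0, s1 = 0, s2 = 0, s3 = 0;
    std::size_t i = 0;
    for (; i + 4 <= n; i += 4) {
        s0 += a[i];
        s1 += a[i + 1];
        s2 += a[i + 2];
        s3 += a[i + 3];
    }
    for (; i < n; ++i) s0 += a[i];  // leftover elements
    return (s0 + s1) + (s2 + s3);
}
```

Note the unrolled version changes the floating-point association order, which is why compilers won't do this transformation for you without something like -ffast-math.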
Does this avoid some of the security problems that have popped up with superscalar archs? For example, kernel code running at ring 0 executes alongside userspace code on the same core, and speculative state can leak across that privilege boundary as a result.
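If they're doing speculation, presumably they inherit at least the Spectre class of problems unless they designed around it. For reference, the textbook Spectre v1 gadget (straight from the Kocher et al. write-ups, nothing specific to this company's chip) looks like:

```cpp
#include <cstdint>
#include <cstddef>

// The classic Spectre v1 (bounds-check bypass) pattern, shown only to
// illustrate the class of bug. If the branch is mispredicted, the
// out-of-bounds load from array1 executes speculatively, and the
// second load leaves a cache footprint indexed by the secret byte.
std::size_t array1_size;
std::uint8_t array1[16];
std::uint8_t array2[256 * 4096];

std::uint8_t victim(std::size_t x) {
    if (x < array1_size) {                 // predicted-taken branch
        return array2[array1[x] * 4096];   // secret-dependent load
    }
    return 0;
}
```

Train the predictor with in-bounds values of x, then pass an out-of-bounds x: the secret-dependent load happens speculatively and can be recovered by timing which line of array2 is cached.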
I get that we have to impress shareholders, but why can't they just be honest and say it doubles CPU performance, with the chance of even further improvement with software optimization? Doubling performance of the same hardware is still HUGE.
They... they did?
Not in the title
They didn't write the title.
I don't know what "they" you're talking about, but I think it's clear I'm referring to whoever was responsible for writing the original title. Not OP, and not the article author if the publisher chose the title.
And I think it's pretty clear I'm not. And it seems pretty clear the OP wasn't either.
So... are you just stating random things for the fuck of it, or did you have an actual reason for bringing up a non sequitur?
Was it though?
I'm just glad there are companies trying to optimize current tech rather than just piling on new hardware every damn year with forced planned obsolescence.
Though the claim is absurd, I think double the performance is NEAT.
This is new hardware piling. What they claim to do requires reworking manufacturing, can't be retrofitted onto current designs, and demands more hardware components. It's basically a hardware thread scheduler (a rough software analogue is sketched below). Cool idea, but it won't save us from planned obsolescence; if anything, it's more incentive for more waste.
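By "hardware thread scheduler" I mean the thing software does today with locks and queues, just moved into silicon. A bare-bones software version in C++ (my sketch, not their design) shows the per-task overhead they'd presumably be trying to eliminate:

```cpp
#include <condition_variable>
#include <deque>
#include <functional>
#include <mutex>
#include <thread>
#include <vector>

// A minimal software task scheduler: every dispatch pays for a lock, a
// queue operation, and possibly a context switch. The pitch for doing
// scheduling in hardware is that these per-task costs shrink toward
// zero -- but only on silicon designed for it.
class TaskQueue {
public:
    explicit TaskQueue(unsigned workers) {
        for (unsigned i = 0; i < workers; ++i)
            threads_.emplace_back([this] { run(); });
    }
    ~TaskQueue() {
        {
            std::lock_guard<std::mutex> lk(m_);
            done_ = true;
        }
        cv_.notify_all();
        for (auto& t : threads_) t.join();
    }
    void submit(std::function<void()> task) {
        {
            std::lock_guard<std::mutex> lk(m_);
            tasks_.push_back(std::move(task));
        }
        cv_.notify_one();  // wake one worker
    }
private:
    void run() {
        for (;;) {
            std::function<void()> task;
            {
                std::unique_lock<std::mutex> lk(m_);
                cv_.wait(lk, [this] { return done_ || !tasks_.empty(); });
                if (done_ && tasks_.empty()) return;
                task = std::move(tasks_.front());
                tasks_.pop_front();
            }
            task();  // run outside the lock
        }
    }
    std::mutex m_;
    std::condition_variable cv_;
    std::deque<std::function<void()>> tasks_;
    std::vector<std::thread> threads_;
    bool done_ = false;
};
```

Every submit() pays for a mutex and a wakeup; making that nearly free is the pitch, but only on chips built for it.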
Ah, good ol’ magic wishful thinking…