submitted 1 month ago by alessandro@lemmy.ca to c/pcgaming@lemmy.ca
[-] schizo@forum.uncomfortable.business 53 points 1 month ago* (last edited 1 month ago)

Yes those are all lovely fancy numbers, but the only ones I really give a shit about come after the $, and the one that comes before the W on the power supply requirements.

[-] recursive_recursion@programming.dev 11 points 1 month ago* (last edited 1 month ago)

Coming soon to Costco: 10 packs of 5090s.

[-] neidu2@feddit.nl 23 points 1 month ago* (last edited 1 month ago)

Yeah, about clock speeds... remember when they were front and center 20 years ago in CPU marketing? Intel started marketing CPUs by their clock speeds in the '90s, highlighting that as a selling point over competitors whose chips usually ran at slightly lower clock speeds.

But Intel painted themselves into a corner: clock speed alone doesn't matter - instruction sets and floating-point ops per second do. In the mid-2000s they had to slowly phase out the clock speed marketing, as clock speeds had reached levels where further increases would be detrimental to performance, so they had to change their marketing and branding strategy.

As soon as clock speed marketing had been phased out, Intel CPUs actually ran at lower clock speeds than the previous generation while still outperforming it.

I'm curious to see whether Nvidia is about to do the same thing.

[-] deegeese@sopuli.xyz 17 points 1 month ago* (last edited 1 month ago)

GPU code is more amenable to high clock speeds because it doesn’t have the branch prediction and data prefetch problems of general purpose CPU code.

Intel stopped chasing clock speed because it required them to make their pipelines extremely long and extremely vulnerable to a cache miss.
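The pipeline-length point above can be sketched with a toy cost model: a mispredicted branch flushes the pipeline, so the penalty (in cycles) scales roughly with pipeline depth. All numbers here are illustrative, not measured figures for any specific chip.

```python
# Toy model: effective cycles-per-instruction (CPI) with branch mispredicts.
# The flush penalty is approximated as the full pipeline depth in cycles.

def effective_cpi(base_cpi, branch_freq, mispredict_rate, pipeline_depth):
    """Average CPI = base CPI + expected mispredict penalty per instruction."""
    return base_cpi + branch_freq * mispredict_rate * pipeline_depth

# Illustrative: ~14-stage pipeline vs a Pentium 4-style ~31-stage pipeline,
# assuming 20% branches and a 5% mispredict rate (both made-up inputs).
short = effective_cpi(1.0, 0.2, 0.05, 14)
deep = effective_cpi(1.0, 0.2, 0.05, 31)
print(round(short, 2), round(deep, 2))  # 1.14 1.31
```

Same workload, same predictor accuracy, but the deeper pipeline loses noticeably more throughput to each mispredict - which is the corner Intel had painted itself into.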

[-] Dudewitbow@lemmy.zip 10 points 1 month ago* (last edited 1 month ago)

also, to bring a rudimentary comparison:

a CPU is a few very complicated cores; a GPU is thousands of dumb cores.

it's easier to make something that handles a small instruction set (GPU) faster than something with a huge instruction set (CPU), due to, like you mention, branch prediction.

modern CPU performance gains focus more on parallelism and, in the case of efficiency cores, on scheduling to optimize for performance.

GPU-wise, it's really as simple as GPUs typically being memory bottlenecked. memory bandwidth (memory speed x bus width, with a few caveats where cache lowers requirements based on hit rate) is the major indicator of GPU performance. bus width is fixed by the chip's hardware design, so the simplest way to increase general performance is raising clocks.
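The bandwidth arithmetic mentioned above (memory speed x bus width) can be sketched like this; the example numbers are illustrative, roughly matching a 21 Gbps effective, 384-bit GDDR6X-style configuration, not official specs for any particular card.

```python
# Peak theoretical memory bandwidth: effective transfer rate x bus width.

def memory_bandwidth_gbs(mem_clock_mhz, transfers_per_clock, bus_width_bits):
    """Return peak bandwidth in GB/s from clock, transfers/clock, and bus width."""
    effective_mts = mem_clock_mhz * transfers_per_clock  # mega-transfers per second
    bytes_per_transfer = bus_width_bits / 8              # bus width in bytes
    return effective_mts * bytes_per_transfer / 1000     # MB/s -> GB/s

# Illustrative: 1313 MHz memory clock, 16 transfers/clock, 384-bit bus.
print(memory_bandwidth_gbs(1313, 16, 384))  # 1008.384 GB/s
```

This is why, with bus width fixed at design time, bumping clocks (or swapping to faster memory) is the straightforward lever for raising a GPU's bandwidth ceiling.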

[-] ramble81@lemm.ee 19 points 1 month ago

Cool cool…. What about the price? That’s all I care about at this point.

[-] ArtVandelay@lemmy.world 14 points 1 month ago

No no, 5090 is the price, not the model

[-] _sideffect@lemmy.world 9 points 1 month ago
[-] drasglaf@sh.itjust.works 4 points 1 month ago

And it will cost 3000€

[-] wreckedcarzz@lemmy.world 8 points 1 month ago
[-] TomAwsm@lemmy.world 2 points 1 month ago
[-] SuckMyWang@lemmy.world 4 points 1 month ago

It could be, yes of course

this post was submitted on 11 Jul 2024
44 points (86.7% liked)
