golli@lemm.ee 8 points 3 weeks ago

Trying to do 10nm without EUV was a forgivable error

How so? Literally no one uses EUV for 10nm, and that wasn't the problem. Isn't SMIC even pushing DUV to produce 5nm?

My limited understanding is that they were too ambitious, e.g. with using cobalt interconnects, and at the same time tied their chip designs to specific nodes. That meant when the process side slipped, they couldn't just take a design and move it to a different node without a lot of effort.

Also, I think they were always going to lose Apple at some point. With better products they might have delayed it further, but Apple fundamentally has an interest in vertical integration and control, and was already designing its own processors for phones and tablets.

Thrashy@lemmy.world 6 points 3 weeks ago

Keep in mind that when 10nm was in planning, EUV light sources looked very exotic relative to then-current tech, and even though we can see in hindsight that the tech works, it is still expensive to operate -- TSMC's wafer costs increased 2x-3x for EUV nodes. If I were running Intel and my engineers told me they thought they could extend the runway for DUV lithography for a node or two without sacrificing performance or yields, I'd take that bet in a heartbeat. Continuing to commit resources to 10nm DUV for years after it didn't pan out, while competitors moved on to smaller nodes, just reeks of sunk-cost fallacy, though.
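To make that bet concrete, here's a rough back-of-the-envelope sketch of cost per good die when EUV wafers cost ~2.5x more. All numbers are made-up illustrative assumptions, not actual foundry pricing or yield data:

```python
# Back-of-the-envelope cost-per-good-die comparison: a mature DUV node
# vs. a denser EUV node. Every number below is a hypothetical
# assumption for illustration, not real TSMC or Intel data.

def cost_per_good_die(wafer_cost, dies_per_wafer, yield_rate):
    """Cost of one functional die: wafer cost spread over good dies."""
    return wafer_cost / (dies_per_wafer * yield_rate)

# Hypothetical DUV node: cheaper wafers, larger dies, mature yields.
duv = cost_per_good_die(wafer_cost=6_000, dies_per_wafer=300, yield_rate=0.80)

# Hypothetical EUV node: ~2.5x wafer cost; the density shrink fits
# more dies per wafer, with yields assumed nearly as mature.
euv = cost_per_good_die(wafer_cost=15_000, dies_per_wafer=550, yield_rate=0.75)

print(f"DUV cost per good die: ${duv:.2f}")  # ~$25.00
print(f"EUV cost per good die: ${euv:.2f}")  # ~$36.36
```

On assumptions like these, staying on cheaper DUV wafers looks like the rational bet; it only turns into a disaster if the DUV node's yields never mature, which is roughly what happened to Intel's 10nm.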

had the issue that they tied their chip designs to specific nodes.

In fairness to Intel, every modern semi design house has that same issue: a chip is designed and laid out for a specific node, so this isn't really a failing so much as a how-it-works.

Of course, Intel was taking a very, very big risk in designing for a process that basically didn't exist yet, assuming that hey, they'd have it working by the time the design work was complete and they hit RTM.

couldnt just take the design and use it on a different node without a lot of effort

Which is what they had to do once they failed to ship newer nodes on schedule alongside the new CPU designs, and, well, we've seen how that cost them a whole hell of a lot, if not ultimately their entire business.
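To make "a lot of effort" concrete: a physical layout is drawn against one node's design-rule deck, and another node's rules will simply reject it. Here's a toy sketch of that idea; the check is hypothetical, and the minimum-metal-pitch values are only rough approximations of published figures:

```python
# Toy design-rule check illustrating why a physical layout is tied to
# one node: each node's rule deck sets hard geometric minimums, so a
# layout drawn to one node's rules violates another's.
# Pitch values are rough approximations, used purely for illustration.

NODE_MIN_METAL_PITCH_NM = {
    "intel_14nm": 52,   # approximate minimum metal pitch
    "intel_10nm": 36,
}

def pitch_violations(layout_pitches_nm, node):
    """Return the wire pitches in a layout that violate a node's minimum."""
    min_pitch = NODE_MIN_METAL_PITCH_NM[node]
    return [p for p in layout_pitches_nm if p < min_pitch]

# A layout drawn to 10nm rules, with wires down at the 10nm minimum:
layout = [36, 40, 48]

print(pitch_violations(layout, "intel_10nm"))  # [] -- clean on 10nm
print(pitch_violations(layout, "intel_14nm"))  # [36, 40, 48] -- all fail
```

A real port means redoing the physical layout, and usually re-closing timing, against the new node's full rule deck and cell libraries, which is why "backporting" designs cost Intel so much time.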

golli@lemm.ee 1 point 2 weeks ago

In fairness to Intel, every modern semi design house has that same issue: a chip is designed and laid out for a specific node, so this isn’t really a failing so much as a how-it-works.

I thought I read somewhere that either their design was particularly tightly tailored to a specific node, or that they afterwards made it a higher priority to be less bound to one. But I can't find a source for it, so I might be mistaken.
