this post was submitted on 04 Dec 2024
95 points (97.0% liked)
How so? Literally no one uses EUV for 10nm, and that wasn't the problem. Isn't SMIC even pushing DUV to produce 5nm?
My limited understanding is that they were too ambitious with e.g. cobalt interconnects, and at the same time had the issue that they tied their chip designs to specific nodes. Meaning that when the process side slipped, they couldn't just take a design and move it to a different node without a lot of effort.
Also, I think they were always going to lose Apple at some point. With better products they might have delayed it further, but Apple fundamentally has an interest in vertical integration and control, and it was already designing processors for its phones and tablets.
Keep in mind that when 10nm was in planning, EUV light sources looked very exotic relative to the tech of the day, and even though we can see in hindsight that the tech works, it is still expensive to operate -- TSMC's wafer costs increased 2x-3x for EUV nodes. If I were running Intel and my engineers told me they could extend the runway for DUV lithography for a node or two without sacrificing performance or yields, I'd take that bet in a heartbeat. Continuing to commit resources to 10nm DUV for years after it didn't pan out and competitors moved on to smaller nodes just reeks of sunk-cost fallacy, though.
In fairness to Intel, every modern semi design house has that same issue: a chip is designed and laid out for a specific node, so this isn't really a failing so much as just how it works.
Of course, Intel was taking a very, very big risk by designing for a process that basically didn't exist, assuming that, hey, they'd have it done by the time the design work was complete and they hit RTM.
Which is what they had to do once they failed to ship newer nodes on schedule alongside the new CPU designs, and we've seen how that ultimately cost them a whole hell of a lot, if not their entire business.
I thought I read somewhere that either their design was particularly tailored to a specific node, or that afterwards they made it a higher priority to be less bound to one. But I can't find a source for it, so I might be mistaken.