this post was submitted on 26 Jan 2026
Alphane_Moon@lemmy.world 8 points 6 days ago (last edited 6 days ago)

An interesting development, but it seems to be focused exclusively on parallel compute (enterprise dGPU use cases):

The Austin, Texas-based AI chip startup says it's developing an optical processing unit (OPU) that in theory is capable of delivering 470 petaFLOPS of FP4 / INT4 compute — about 10x that of Nvidia's newly unveiled Rubin GPUs — while using roughly the same amount of power.
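
To put the quoted figures in perspective, here is a quick back-of-the-envelope check. The 470 petaFLOPS and ~10x numbers come from the excerpt above; the power envelope is a placeholder, since the excerpt gives no wattage:

```python
# Rough arithmetic implied by the quoted claim (not official vendor figures).
opu_fp4_pflops = 470.0        # claimed OPU FP4/INT4 throughput, in petaFLOPS
claimed_speedup = 10.0        # "about 10x" Nvidia's Rubin, per the article

# If the OPU is ~10x Rubin, the implied Rubin FP4 figure is:
implied_rubin_fp4_pflops = opu_fp4_pflops / claimed_speedup
print(f"Implied Rubin FP4 throughput: ~{implied_rubin_fp4_pflops:.0f} petaFLOPS")

# "Roughly the same amount of power" means the perf-per-watt advantage tracks
# the speedup directly. The wattage below is hypothetical, for illustration only.
assumed_power_kw = 1.0
opu_perf_per_kw = opu_fp4_pflops / assumed_power_kw
rubin_perf_per_kw = implied_rubin_fp4_pflops / assumed_power_kw
print(f"Perf/W advantage at equal power: ~{opu_perf_per_kw / rubin_perf_per_kw:.0f}x")
```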

From my limited understanding, for CPUs (which are arguably far more complex and less "predictable" than parallel compute), Moore's Law is definitely dead.

If you look at single-thread CPU performance, the gains from, say, ~2013 (Haswell/Ivy Bridge) to a modern ~2025-era top-end CPU (9800X3D) are relatively modest. Compare a late 486, say an i486DX2 from 1994, to a P3/Tualatin from ~2001, and there is no comparison at all.
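
To make the slowdown concrete, here is a rough annualized-growth sketch. The year ranges come from the examples above, but the speedup ratios are hypothetical placeholders, not measured benchmark results:

```python
# Hypothetical single-thread gains per era; the ratios are placeholders.
def cagr(total_gain: float, years: float) -> float:
    """Annualized growth rate implied by an overall speedup over N years."""
    return total_gain ** (1.0 / years) - 1.0

# Late 486 to P3/Tualatin (~1994 -> ~2001): assume a ~30x single-thread gain
print(f"1994-2001 era: ~{cagr(30, 7) * 100:.0f}% per year")

# Haswell to 9800X3D (~2013 -> ~2025): assume a ~2.5x single-thread gain
print(f"2013-2025 era: ~{cagr(2.5, 12) * 100:.0f}% per year")
```

Even with generous assumptions for the modern era, the implied per-year growth is an order of magnitude lower than in the 1990s, which is the gap the comparison above is pointing at.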