Go ahead and destroy Nvidia and the USA AI evil empire, China. Make the people happy.
Probably yet another overblown headline.
Does anyone have access to the full text of the paper?
https://doi.org/10.1126/science.adv7434
Abstract
Large-scale generative artificial intelligence (AI) is facing a severe computing power shortage. Although photonic computing achieves excellence in decision tasks, its application in generative tasks remains formidable because of limited integration scale, time-consuming dimension conversions, and ground-truth-dependent training algorithms. We produced an all-optical chip for large-scale intelligent vision generation, named LightGen. By integrating millions of photonic neurons on a chip, varying network dimension through proposed optical latent space, and Bayes-based training algorithms, LightGen experimentally implemented high-resolution semantic image generation, denoising, style transfer, three-dimensional generation, and manipulation. Its measured end-to-end computing speed and energy efficiency were each more than two orders of magnitude greater than those of state-of-the-art electronic chips, paving the way for acceleration of large visual generative models.
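For context on why an all-optical chip can plausibly claim orders-of-magnitude gains: the expensive part of a neural-network layer is the matrix-vector product, and light passing through a programmed optical element computes that product passively in a single propagation, with photodetectors supplying a nonlinearity when they measure intensity. The snippet below is only a toy numerical model of that general idea; the sizes, the complex transmission matrix, and the amplitude encoding are illustrative assumptions, not LightGen's actual architecture, optical latent space, or Bayes-based training.

```python
# Toy sketch of the general photonic-computing idea (NOT LightGen's design):
# a linear layer y = Wx is performed "for free" by light propagating through
# a programmed optical element, instead of by n_in * n_out multiply-accumulate
# operations on an electronic chip.
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical layer sizes, chosen for illustration only.
n_in, n_out = 1024, 1024

# Model the optical element as a complex transmission matrix: the amplitude
# and phase the chip imprints on the input light field.
transmission = rng.normal(size=(n_out, n_in)) + 1j * rng.normal(size=(n_out, n_in))

# Input vector (e.g. an image patch or latent code) amplitude-encoded onto light.
x = rng.random(n_in)

# One "pass of light" yields the full matrix-vector product at once;
# photodetectors then read out intensity, |field|^2, which acts as a nonlinearity.
field_out = transmission @ x
intensity = np.abs(field_out) ** 2

print(intensity.shape)  # (1024,) -- one detector reading per output neuron
```

In electronics the same product costs roughly n_in * n_out multiply-accumulates of switching energy per pass, which is where the claimed speed and energy-efficiency headroom comes from; the hard part is doing this at scale with enough precision, which is what the paper claims to have demonstrated.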
I'm hopeful, but I'll get excited once it's in actual builds and has been benchmarked.
Yeah for sure, I do think it's only a matter of time before people figure out a new substrate. It's really just a matter of allocating time and resources to the task, and that's where state-level planning comes in.
100 times faster than a 5090? While also being more efficient?
This thing needs a deeper dive; it sounds too good to be true.
That would be the A100, not the 5090.
From what I can gather, the catch is that this is optical computing, which is up there with quantum computing as things that would be pretty great, but good luck making it feasible, let alone mass-producing it. You're not putting one in your home PC anytime soon, but hey, technology moves fast.
It's like saying silicon chips being orders of magnitude faster than vacuum tubes sounds too good to be true. A different substrate will have fundamentally different properties from silicon.
Optical computing is as old as the hills. Looking at the abstract, it appears to be the usual domain-specific architecture, which only does very well in a tailored case.
It's a pretty big case right now.