This post was submitted on 19 Dec 2025
32 points (94.4% liked)

top 17 comments
[–] chemicalwonka@discuss.tchncs.de 13 points 6 days ago

Go ahead and destroy Nvidia and the US AI evil empire, China. Make the people happy.

[–] rcbrk@lemmy.ml 7 points 6 days ago

Probably yet another overblown headline.

Does anyone have access to the full text of the paper?

https://doi.org/10.1126/science.adv7434

Abstract

Large-scale generative artificial intelligence (AI) is facing a severe computing power shortage. Although photonic computing achieves excellence in decision tasks, its application in generative tasks remains formidable because of limited integration scale, time-consuming dimension conversions, and ground-truth-dependent training algorithms. We produced an all-optical chip for large-scale intelligent vision generation, named LightGen. By integrating millions of photonic neurons on a chip, varying network dimension through proposed optical latent space, and Bayes-based training algorithms, LightGen experimentally implemented high-resolution semantic image generation, denoising, style transfer, three-dimensional generation, and manipulation. Its measured end-to-end computing speed and energy efficiency were each more than two orders of magnitude greater than those of state-of-the-art electronic chips, paving the way for acceleration of large visual generative models.
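For readers unfamiliar with the "latent space" idea the abstract leans on, here is a minimal toy sketch (Python/NumPy) of the conventional, electronic version of the technique: encode high-dimensional images into a much smaller latent vector, run the expensive generative iterations in that low-dimensional space, then decode back to pixels. This is an illustrative analogy only, not the paper's method; the "optical latent space" presumably performs the dimension change in photonic hardware, and every name and number below is made up for illustration.

```python
# Toy illustration of latent-space generation (conventional, electronic version).
# Not the paper's method; it just shows the general idea the abstract refers to:
# do the expensive generative iterations in a small latent space, then decode.
import numpy as np

rng = np.random.default_rng(0)

IMG_DIM = 64 * 64     # "pixel" dimension (hypothetical)
LATENT_DIM = 256      # much smaller latent dimension (hypothetical)

# Stand-in encoder/decoder: random linear maps. A real system would learn these.
encode = rng.normal(size=(LATENT_DIM, IMG_DIM)) / np.sqrt(IMG_DIM)
decode = rng.normal(size=(IMG_DIM, LATENT_DIM)) / np.sqrt(LATENT_DIM)

def generate(steps: int = 50) -> np.ndarray:
    """Start from latent noise and iteratively 'denoise' it (toy dynamics)."""
    z = rng.normal(size=LATENT_DIM)
    for _ in range(steps):
        # Each step touches LATENT_DIM numbers instead of IMG_DIM numbers,
        # which is where the computational savings of a latent space come from.
        z = 0.9 * z + 0.1 * np.tanh(z)
    return decode @ z  # map the finished latent back to pixel space

image = generate()
print(image.shape)     # (4096,), i.e. a 64x64 "image" flattened
```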

[–] PanArab@lemmy.ml 5 points 6 days ago (1 children)

I'm hopeful, but I'll get excited once it's in actual builds and has been benchmarked.

[–] yogthos@lemmy.ml 5 points 6 days ago

Yeah, for sure. I do think it's only a matter of time before people figure out a new substrate. It's really just a matter of allocating time and resources to the task, and that's where state-level planning comes in.

[–] tfowinder@sh.itjust.works -1 points 6 days ago (2 children)

100 times faster than a 5090? While also being more efficient?

This needs a deeper dive; it sounds too good to be true.

[–] ShinkanTrain@lemmy.ml 7 points 6 days ago* (last edited 6 days ago)

That would be the A100, not the 5090.

From what I can gather, the catch is that this is optical computing, which is up there with quantum computing as one of those things that would be pretty great, but good luck making it feasible, let alone mass-producing it. You're not putting one in your home PC anytime soon, but hey, technology moves fast.
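As a rough sense of scale for the "more than two orders of magnitude" claim, here is a minimal back-of-envelope sketch, assuming the baseline is the A100's public spec-sheet figures (roughly 312 TFLOPS dense BF16 and a 400 W SXM TDP) and taking the claim as a flat 100x. None of these numbers come from the paper itself.

```python
# Back-of-envelope check of the ">2 orders of magnitude" claim, assuming the
# baseline is an NVIDIA A100 (publicly listed: ~312 TFLOPS dense BF16, ~400 W).
# Rough public spec-sheet numbers only; nothing here is taken from the paper.
A100_TFLOPS = 312      # dense BF16 tensor throughput
A100_WATTS = 400       # SXM TDP

speedup = 100          # "more than two orders of magnitude", taken literally
efficiency_gain = 100

implied_tflops = A100_TFLOPS * speedup                          # ~31,200 TFLOPS-equivalent
implied_tflops_per_watt = (A100_TFLOPS / A100_WATTS) * efficiency_gain

print(f"Implied throughput: ~{implied_tflops:,} TFLOPS-equivalent")
print(f"Implied efficiency: ~{implied_tflops_per_watt:.0f} TFLOPS/W "
      f"(A100 is ~{A100_TFLOPS / A100_WATTS:.2f} TFLOPS/W)")
```

Even under those generous assumptions, the implied throughput and efficiency would only apply to the workloads the chip is specialized for, which is the usual caveat with domain-specific accelerators.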

[–] yogthos@lemmy.ml 9 points 6 days ago (1 children)

It's like saying that silicon chips being orders of magnitude faster than vacuum tubes sounded too good to be true. A different substrate will have fundamentally different properties from silicon.

[–] eleitl@lemmy.zip 2 points 6 days ago (1 children)

Optical computing is as old as the hills. Looking at the abstract, this appears to be the usual domain-specific architecture, which only does very well in a tailored case.

[–] yogthos@lemmy.ml 3 points 6 days ago

It's a pretty big case right now.