submitted 3 months ago by Hypx@fedia.io to c/technology@lemmy.world

AI’s voracious need for computing power is threatening to overwhelm energy sources, requiring the industry to change its approach to the technology, according to Arm Holdings Plc Chief Executive Officer Rene Haas.

[-] crispyflagstones@sh.itjust.works -5 points 3 months ago* (last edited 3 months ago)

The ENIAC drew 174 kilowatts and weighed 30 tons. It drew that 174 kilowatts to achieve a few hundred to a few thousand operations per second, while an iPhone 4 can handle 2 billion operations a second and draws maybe 1.5 W under heavy load.
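A rough back-of-the-envelope on the figures quoted above (taking the high end, a few thousand operations per second, for ENIAC) shows how large the per-operation energy gap is:

```python
# Energy per operation, using the figures quoted in the comment above.
eniac_watts = 174_000        # 174 kW
eniac_ops = 5_000            # high end of "a few thousand ops/sec"
phone_watts = 1.5            # iPhone 4 under heavy load
phone_ops = 2_000_000_000    # "2 billion operations a second"

eniac_j_per_op = eniac_watts / eniac_ops   # ~35 J per operation
phone_j_per_op = phone_watts / phone_ops   # ~7.5e-10 J per operation

ratio = eniac_j_per_op / phone_j_per_op
print(f"ENIAC: {eniac_j_per_op:.1f} J/op, phone: {phone_j_per_op:.2e} J/op")
print(f"Improvement: ~{ratio:.1e}x")
```

On these numbers the improvement in energy per operation is on the order of 10^10, which is the kind of gain the comment is gesturing at.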

Like, yeah, obviously the tech is inefficient right now; it's just getting off the ground.

[-] AlotOfReading@lemmy.world 11 points 3 months ago

ML is not an ENIAC situation. Computers got more efficient not by doing fewer operations, but by making what they were already doing much more efficient.

The basic operations underlying ML (e.g. matrix multiplication) are already some of the most heavily optimized things around. ML is inefficient because it needs to do a lot of that. The problem is very different.
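To give a sense of the scale the comment is pointing at: a dense matrix multiply takes on the order of n³ multiply-adds, so even a single hypothetical 4096×4096 layer applied to one input vector costs tens of millions of floating-point operations, and large models chain many such layers:

```python
def matmul_flops(m, n, k):
    """Multiply-accumulate count for an (m x n) @ (n x k) dense matrix product."""
    return 2 * m * n * k  # one multiply + one add per inner-product term

# Hypothetical example: one 4096x4096 layer applied to a single input vector.
per_layer = matmul_flops(1, 4096, 4096)
print(per_layer)  # 33554432, i.e. ~33.6 million FLOPs for one layer
```

The point being made above is that each of those operations is already about as cheap as we know how to make it; the cost comes from how many of them ML needs.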

[-] crispyflagstones@sh.itjust.works 1 points 3 months ago

There's an entire resurgence of research into alternative computing architectures right now, led by some of the biggest names in computing, because of the limits we've hit with the von Neumann architecture where ML is concerned. I don't see any reason to assume all of that research is guaranteed to fail.

[-] AlotOfReading@lemmy.world 2 points 3 months ago

I'm not assuming it's going to fail, I'm just saying that the exponential gains seen in early computing are going to be much harder to come by because we're not starting from the same grossly inefficient place.

As an FYI, most modern computers are modified Harvard architectures, not von Neumann machines. There are other architectures being explored that are even more exotic, but I'm not aware of any that are massively better on the power side (vs simply being faster). The more power-efficient acceleration approaches that I'm aware of (e.g. analog or optical accelerators) are also totally compatible with traditional Harvard/von Neumann architectures.

[-] crispyflagstones@sh.itjust.works 1 points 3 months ago

And by comparing it to ENIAC I didn't mean to suggest the exponential gains would be identical, but we are currently in a period of exponential gains in AI, and it's not exactly slowing down. It just seems unthoughtful and uncritical to measure the overall efficiency of a technology by its very earliest iterations, when the field it's based on is moving as fast as AI is.

this post was submitted on 06 May 2024
346 points (95.1% liked)
