They might be right, but I read some of the linked articles on this blog (?), and the authors just come off as not really knowing much about current AI technologies while also being very, very arrogant.
It's absurd that some of the larger LLMs now use hundreds of billions of parameters (e.g. llama3.1 with 405B).
This doesn't really seem like a smart use of resources if you need several of the largest GPUs available just to run one conversation.
I wonder how many GPUs my brain is
It's a lot. Like a lot a lot. GPUs have about 150 billion transistors, but each of those transistors makes only one connection, printed in essentially 2D space on silicon.
Each neuron makes thousands of connections, and there are on the order of 100 billion neurons in a blobby lump of fat and neurons that takes up 3D space. Then combine that with the fact that everything actually works through patterns of multiple neurons firing, and the potential for how powerful human brains are is absurdly high.
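For scale, a rough back-of-envelope comparison of synapse count vs. transistor count can be sketched like this. The neuroscience figures (86 billion neurons, ~7,000 synapses per neuron) are commonly cited order-of-magnitude estimates, not precise numbers:

```python
# Rough comparison of brain connectivity vs a single GPU die.
# All figures are order-of-magnitude estimates (assumption).
neurons = 86e9               # ~86 billion neurons in a human brain
synapses_per_neuron = 7000   # commonly cited average
transistors_per_gpu = 150e9  # flagship GPU die, per the comment above

total_synapses = neurons * synapses_per_neuron
ratio = total_synapses / transistors_per_gpu
print(f"~{total_synapses:.1e} synapses, about {ratio:.0f}x one GPU's transistor count")
```

By this crude count alone the brain has thousands of times more connections than a GPU has transistors, and that's before accounting for the fact that synapses aren't simple on/off switches.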
At this point, I'm not sure there's enough gpus in the world to mimic what a human brain can do.
42
The Answer to the Ultimate Question of Life, The Universe, and Everything
I don't think your brain can be reasonably compared with an LLM, just like it can't be compared with a calculator.
LLMs are based on neural networks, which are a massively simplified model of how our brain works. So you kind of can, as long as you keep in mind they are orders of magnitude simpler.
Seeing as how the full unquantized FP16 for Llama 3.1 405B requires around a terabyte of VRAM (16 bits per parameter + context), I'd say way more than several.
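The arithmetic behind that is straightforward: FP16 means 2 bytes per parameter, so just the weights of a 405B model come to roughly 810 GB before any context/KV cache. A minimal sketch, assuming 80 GB cards (e.g. an H100-class GPU) purely for illustration:

```python
import math

# Back-of-envelope VRAM estimate for serving Llama 3.1 405B at FP16.
# GPU size is an assumption for illustration, not a recommendation.
params = 405e9           # model parameters
bytes_per_param = 2      # FP16 = 16 bits = 2 bytes
weights_gb = params * bytes_per_param / 1e9

gpu_vram_gb = 80         # e.g. one 80 GB datacenter GPU
gpus_needed = math.ceil(weights_gb / gpu_vram_gb)
print(f"{weights_gb:.0f} GB of weights -> at least {gpus_needed} x 80 GB GPUs")
# Prints: 810 GB of weights -> at least 11 x 80 GB GPUs
# The KV cache for the context window adds even more on top of this.
```

So "several" GPUs is an understatement: it's double digits before you've stored a single token of context.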
I understand folks don't like AI but this "article" is like a reddit post with lots of links to subjects which are vague and need the link text to tell us what is important, instead of relying on the actual article.
I see a lot of links here and there to this domain but I haven’t really read anything from there. I’m literally just scrolling through these comments to see if anyone has a comment like yours.
My impression was that it’s just a blog but you calling it “a reddit post” is also interesting. What’s with this site? It looks like a decent amount of people think these takes are interesting. I have to deal with a lot of management people who love AI buzzwords, so a whole blog just ripping into it really speaks to me.
What the fuck, you aren't kidding. I have comment replies to trolls that are longer than that article. The over-the-top citations also make me think this was entirely written by an actual AI bot that was prompted to supply x amount of sources in their article. Lol
OpenAI, Google, Anthropic admit they can’t scale up their chatbots any further
Lol, no they didn't. The quotes this article is using are talking about LLMs, not chatbots. This is yet another stupid article from someone who doesn't understand the technology. There is a lot of legitimate criticism for the way this technology is being implemented, but FFS get the basics right at least.
Are you asserting that chatbots are so fundamentally different from LLMs that "oh shit we can't just throw more CPU and data at this anymore" doesn't apply to roughly the same degree?
I feel like people are using those terms pretty well interchangeably lately anyway
Claiming that David Gerard and Amy Castor "don't understand the technology" is uh.... Hoo boy... Well, it sure is a take.
The title of the article is literally a lie which is easily fact checked. Follow the links to quotes in the article to see what the quoted individuals actually said about the topic.
Please learn the difference between "lying" and "presenting a conclusion."
I know the difference. Neither OpenAI, Google, nor Anthropic has admitted they can't scale up their chatbots. That statement is not true.
A 4 paragraph "article" lol
Are you suggesting “pivot-to-ai.com” isn’t the pinnacle of journalism?
Though, I don't think that means they won't get any better. It just means they don't scale by feeding in more training data. But that's why OpenAI changed their approach and added some reasoning abilities, and we're researching things like multimodality, etc. There's still quite a bit of room for improvement.
Though, I don't think that means they won't get any better. It just means they don't scale by feeding in more training data.
Agreed. There's plenty of improvement to be had, but the gravy train of "more CPU or more data == better results" sounds like it's ending.
It's a known problem - though of course, because these companies are trying to push AI into everything and oversell it to build hype and please investors, they usually try to avoid recognizing its limitations.
Frankly I think that now they should focus on making these models smaller and more efficient instead of just throwing more compute at the wall, and actually train them to completion so they'll generalize properly and be more useful.
I smell a sentient AI trying to throw us off its plans for world domination..
Everyone ignore this comment please. I'm quite human. I have the normal 7 fingers (edit: on each of my three hands!) and everything.
Looks like the AI bubble is slowly coming to an end, just like what happened to the crypto and NFT bubbles.
Sure, except for the thousands of products working pretty well with current gen. And it's not like it's over, now we've hit the limit of "just throw more data at the thing".
Now there aren't gonna be as many breakthroughs that make it better every few months, instead there's gonna be thousand small improvements that make it more capable slowly and steadily. AI is here to stay.
The bubble popping doesn't have to do with its staying power, just that the days of, "Hey, I invented this brand new AI ~~that's totally not just a wrapper for ChatGPT~~. Want to invest a billion dollars‽" are over. AGI is not "just out of reach."
Technology
This is a most excellent place for technology news and articles.
Our Rules
- Follow the lemmy.world rules.
- Only tech related content.
- Be excellent to each other!
- Mod approved content bots can post up to 10 articles per day.
- Threads asking for personal tech support may be deleted.
- Politics threads may be removed.
- No memes allowed as posts, OK to post as comments.
- Only approved bots from the list below, to ask if your bot can be added please contact us.
- Check for duplicates before posting; duplicates may be removed.