this post was submitted on 23 Mar 2025
1226 points (98.3% liked)

Technology

[–] TheMightyCat@lemm.ee 10 points 2 days ago (3 children)

No?

Anyone can run an AI, even on the weakest hardware; there are plenty of small open models for this.
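For example, something like this runs on a CPU-only machine (just a sketch; the model id is one arbitrary example of a small open model):

```python
# Minimal sketch: run a small open-weights model locally with Hugging Face transformers.
# The model id is just one example; swap in any other small model.
from transformers import pipeline

generator = pipeline(
    "text-generation",
    model="Qwen/Qwen2.5-0.5B-Instruct",  # ~0.5B parameters, fits in a few GB of RAM
    device=-1,                           # -1 = CPU; no GPU required
)

print(generator("Explain what a language model is in one sentence.",
                max_new_tokens=60)[0]["generated_text"])
```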

Training an AI requires very strong hardware; however, this is not an impossible hurdle, as the models on Hugging Face show.

[–] CodeInvasion@sh.itjust.works 7 points 1 day ago (2 children)

Yah, I'm an AI researcher, and with the weights released for DeepSeek anybody can run an enterprise-level AI assistant. Running the full model natively does require around $100k in GPUs, but if one had that hardware it could easily be fine-tuned with something like LoRA for almost any application. That model can then be distilled and quantized to run on gaming GPUs.
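To give a rough idea of what that looks like in practice, here is a minimal sketch using the Hugging Face PEFT library (the model id, target modules, and hyperparameters are illustrative, not a recipe):

```python
# Rough sketch of LoRA fine-tuning with Hugging Face PEFT.
# Model id, target modules, and hyperparameters are illustrative only.
from transformers import AutoModelForCausalLM
from peft import LoraConfig, get_peft_model

base = "deepseek-ai/deepseek-llm-7b-base"  # stand-in model id for illustration
model = AutoModelForCausalLM.from_pretrained(base)

lora_cfg = LoraConfig(
    r=16,                                 # rank of the low-rank update matrices
    lora_alpha=32,                        # scaling factor for the updates
    target_modules=["q_proj", "v_proj"],  # attention projections to adapt
    lora_dropout=0.05,
    task_type="CAUSAL_LM",
)

model = get_peft_model(model, lora_cfg)
model.print_trainable_parameters()  # only a tiny fraction of weights are trainable

# From here a standard transformers Trainer run does the fine-tune; the adapted
# model can later be distilled and quantized to fit on gaming GPUs.
```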

It's really not that big of a barrier. Yes, $100k in hardware is a lot, but from the perspective of a non-profit entity that is peanuts.

Also, adding a vision encoder for images to DeepSeek would not be that difficult in theory, for the same reason. In fact, I'm working on research right now that finds GPT-4o and o1 have similar vision capabilities, implying they share the same first-layer vision encoder, with the textual chain-of-thought tokens read by subsequent layers. (This is a very recent insight, as of last week, from my team, so if anyone can disprove it, I would be very interested to know!)
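For anyone curious what "adding a vision encoder" means concretely, here's a rough sketch of the common LLaVA-style pattern: a frozen vision encoder whose patch embeddings are projected into the language model's embedding space. This is illustrative only, not DeepSeek's or OpenAI's actual architecture:

```python
# Rough sketch of bolting a vision encoder onto an LLM (LLaVA-style pattern).
# Illustrative only; not DeepSeek's or OpenAI's actual architecture.
import torch
import torch.nn as nn
from transformers import CLIPVisionModel

vision = CLIPVisionModel.from_pretrained("openai/clip-vit-large-patch14")
llm_hidden_size = 4096  # assumed hidden size of the target language model

# Simple linear projector from the vision hidden size to the LLM hidden size.
projector = nn.Linear(vision.config.hidden_size, llm_hidden_size)

pixel_values = torch.randn(1, 3, 224, 224)        # stand-in for a preprocessed image
patches = vision(pixel_values).last_hidden_state  # (1, num_patches + 1, vision_dim)
image_tokens = projector(patches)                 # now shaped like LLM token embeddings

# In a real model these image tokens are spliced into the text embedding
# sequence, so later transformer layers read them like ordinary tokens.
```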

[–] cyd@lemmy.world 2 points 1 day ago* (last edited 1 day ago)

It's possible to run the big DeepSeek model locally for around $15k, not $100k. People have done it with 2x M4 Ultras, or the equivalent.

Though I don't think it's a good use of money personally, because the requirements are dropping all the time. We're starting to see some very promising small models that use a fraction of those resources.

[–] riskable@programming.dev 2 points 1 day ago (1 children)

Would you say your research is evidence that the DeepSeek model was built using data/algorithms taken from OpenAI via industrial espionage (as Sam Altman is alleging without evidence)? Or is it just likely that they arrived at the same logical solution?

Not that it matters, of course! Just curious.

[–] CodeInvasion@sh.itjust.works 4 points 1 day ago* (last edited 1 day ago)

Well, OpenAI has clearly scraped everything that is scrapeable on the internet, copyrights be damned. I haven't actually used DeepSeek enough to make a strong analysis, but I suspect Sam is just mad they got beat at their own game.

The real innovation that isn't commonly talked about is the invention of Multi-head Latent Attention (MLA), which is what drives the dramatic efficiency gains in both memory (59x) and computation (6x). It's an absolute game changer, and I'm surprised OpenAI hasn't released their own MLA model yet.
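To sketch the core MLA idea: instead of caching full per-head keys and values, the model caches one small latent vector per token and up-projects it to K/V at attention time, which is where the memory savings come from. Below is a heavily simplified illustration (no RoPE decoupling, no causal masking, not DeepSeek's actual implementation):

```python
# Heavily simplified sketch of the Multi-head Latent Attention (MLA) idea:
# cache a small shared latent per token instead of full per-head K/V, then
# up-project to keys/values at attention time. Not DeepSeek's real code.
import torch
import torch.nn as nn
import torch.nn.functional as F

class SimpleMLA(nn.Module):
    def __init__(self, d_model=1024, n_heads=8, d_latent=128):
        super().__init__()
        self.n_heads, self.d_head = n_heads, d_model // n_heads
        self.q_proj = nn.Linear(d_model, d_model)
        self.kv_down = nn.Linear(d_model, d_latent)  # compress to latent (this is what gets cached)
        self.k_up = nn.Linear(d_latent, d_model)     # reconstruct keys from the latent
        self.v_up = nn.Linear(d_latent, d_model)     # reconstruct values from the latent
        self.out = nn.Linear(d_model, d_model)

    def forward(self, x, latent_cache=None):
        B, T, _ = x.shape
        latent = self.kv_down(x)                     # (B, T, d_latent) -- small cache entry
        if latent_cache is not None:
            latent = torch.cat([latent_cache, latent], dim=1)
        q = self.q_proj(x).view(B, T, self.n_heads, self.d_head).transpose(1, 2)
        k = self.k_up(latent).view(B, -1, self.n_heads, self.d_head).transpose(1, 2)
        v = self.v_up(latent).view(B, -1, self.n_heads, self.d_head).transpose(1, 2)
        y = F.scaled_dot_product_attention(q, k, v)  # ordinary attention over reconstructed K/V
        y = y.transpose(1, 2).reshape(B, T, -1)
        return self.out(y), latent                   # return the latent as the new, smaller cache
```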

While on the subject of stealing data, I have long been of the strong opinion that there is no such thing as copyright when it comes to training data. Humans learn by example, and all works are derivative of those that came before, at least to some degree. Thus, if humans can't be accused of using copyrighted text to learn how to write, then AI shouldn't be either. Just my hot take that I know is controversial outside of academic circles.

[–] nalinna@lemmy.world 5 points 1 day ago (3 children)

But the people with the money for the hardware are the ones training it to put more money in their pockets. That's mostly what it's being trained to do: make rich people richer.

[–] riskable@programming.dev 7 points 1 day ago (1 children)

This completely ignores all the endless (open) academic work going on in the AI space. Loads of universities have AI data centers now and are doing great research that is being published out in the open for anyone to use and duplicate.

I've downloaded several academic models myself, and all the commercial models and AI tools are built on that public research.

I run AI models locally on my PC and you can too.

[–] nalinna@lemmy.world 1 points 1 day ago

That is entirely true and one of my favorite things about it. I just wish there were a way to nurture more of that and less of the "Hi, I'm Alvin and my job is to make your Fortune 500 company even more profitable...the key is to pay people less!" type of AI.

[–] TheMightyCat@lemm.ee 5 points 1 day ago (1 children)

But you can make this argument for anything that is used to make rich people richer. Even something as basic as pen and paper is used every day to make rich people richer.

Why attack the technology if it's the rich people you're against and not the technology itself?

[–] nalinna@lemmy.world 1 points 1 day ago

It's not even the people; it's their actions. For the record, if we could figure out how to regulate its use so its profit-generating capacity doesn't compound exponentially at the expense of the fair treatment of others, and instead actively proliferate the models that help people, I'm all for it.

[–] Melvin_Ferd@lemmy.world -3 points 1 day ago

We shouldn't do anything ever because poors