submitted on 09 Jun 2023 by library_patron@lemmy.blahaj.zone to c/machinelearning@lemmy.ml

cross-posted from: https://lemmy.blahaj.zone/post/74156

From the latest commits:

We are happy to release our final 1T token version of OpenLLaMA 3B and 7B. We’ve updated the evaluation results. We are also happy to release a 600B token preview of the 13B model, trained in collaboration with Stability AI.

Haven't tried it yet, and the 13B model is still in the works, but hopefully this will be a better foundation than the leaked Meta AI model, not only because it makes research more reproducible, but also because non-academics will be completely in the clear from a legal standpoint to run this stuff locally.
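
For anyone who wants to try running it locally, here's a minimal sketch using the Hugging Face transformers library. The checkpoint name `openlm-research/open_llama_3b` is taken from the project's Hugging Face organization; swap in the 7B checkpoint if you have the memory for it.

```python
# Minimal sketch: load an OpenLLaMA checkpoint and generate a short completion.
# Assumes the transformers, torch, and accelerate packages are installed.
import torch
from transformers import LlamaTokenizer, LlamaForCausalLM

model_id = "openlm-research/open_llama_3b"  # assumed checkpoint name; adjust as needed

# The OpenLLaMA authors recommend the slow (sentencepiece) tokenizer,
# since the fast tokenizer can mishandle whitespace for this vocabulary.
tokenizer = LlamaTokenizer.from_pretrained(model_id)
model = LlamaForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.float16,  # half precision to cut memory use on GPU
    device_map="auto",          # let accelerate place the weights
)

prompt = "Q: What is the largest animal?\nA:"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=32)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```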
