submitted 10 months ago by boem@lemmy.world to c/technology@lemmy.world
[-] isolatedscotch@discuss.tchncs.de 12 points 10 months ago

by "run his own models" he means locally running a text-generation AI on his computer, because sending all that data to OpenAI is a privacy nightmare, especially if you use it for sensitive stuff
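For reference, a minimal sketch of what "locally running" looks like with llama-cpp-python, assuming you've already downloaded a quantised GGUF model file (the path below is a placeholder):

```python
from llama_cpp import Llama  # pip install llama-cpp-python

# the model path is a placeholder -- any local GGUF file works
llm = Llama(model_path="./models/llama-2-13b-chat.Q4_K_M.gguf", n_ctx=2048)

# the prompt never leaves your machine -- that's the privacy point
out = llm("Summarise this confidential memo: ...", max_tokens=256)
print(out["choices"][0]["text"])
```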

[-] XTornado@lemmy.ml 3 points 10 months ago* (last edited 10 months ago)

But that's still confusing, because we already can. Yeah, you might need a bit more hardware, but... not that crazy. Plus some simpler models can be run on more normal hardware.

Might not be easy to set up, that is true.

[-] Communist@lemmy.ml 5 points 10 months ago

For large-context models, the hardware is prohibitively expensive.

[-] supert@lemmy.sdfeu.org 1 points 10 months ago

I can run 4-bit quantised Llama 70B on a pair of 3090s. Or rent GPU server time. It's expensive but not prohibitive.
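For anyone curious how that two-GPU 4-bit setup is wired up, here's a rough sketch using transformers with bitsandbytes; `device_map="auto"` is what shards the weights across both cards (the checkpoint name is just an example):

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

model_id = "meta-llama/Llama-2-70b-chat-hf"  # example checkpoint
bnb = BitsAndBytesConfig(load_in_4bit=True, bnb_4bit_compute_dtype=torch.float16)

tok = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    quantization_config=bnb,
    device_map="auto",  # shards the layers across all visible GPUs
)

inputs = tok("The 3090 has 24 GB of VRAM, so", return_tensors="pt").to(model.device)
print(tok.decode(model.generate(**inputs, max_new_tokens=30)[0]))
```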

[-] anotherandrew@lemmy.mixdown.ca 1 points 10 months ago

I'm trying to get to the point where I can locally run a (slow) LLM that I've fed my huge ebook collection to, and can ask where to find info on $subject, getting title/page info back. The PDFs that are searchable aren't too bad, but finding a way to OCR the older TIFF-scan PDFs and getting it to "see" graphs/images are areas I'm stuck on.
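One possible approach to the OCR part (not necessarily what the commenter is using): rasterise each scanned page and run Tesseract over the images. This assumes the poppler and tesseract system packages plus the pdf2image and pytesseract Python wrappers:

```python
from pdf2image import convert_from_path  # pip install pdf2image
import pytesseract                        # pip install pytesseract

# rasterise the scanned PDF, then OCR each page image
pages = convert_from_path("old-scan.pdf", dpi=300)  # placeholder file name
text = "\n".join(pytesseract.image_to_string(page) for page in pages)

with open("old-scan.txt", "w") as f:
    f.write(text)  # now plain text, ready to index for the LLM
```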

[-] Communist@lemmy.ml 1 points 10 months ago

How many tokens of context can you run it with?

[-] supert@lemmy.sdfeu.org 1 points 10 months ago

3k? Can't recall exactly, and I'm getting hardware stability issues.

[-] Grimy@lemmy.world 1 points 10 months ago

I personally use RunPod. It doesn't cost much, even for the high-end stuff. Tbh the OpenAI API is easier though, and gives mostly better results.
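For comparison, the hosted route being described really is only a few lines, with the obvious caveat that your prompt leaves your machine (assumes an OPENAI_API_KEY set in the environment):

```python
from openai import OpenAI  # pip install openai

client = OpenAI()  # reads OPENAI_API_KEY from the environment
resp = client.chat.completions.create(
    model="gpt-4",
    messages=[{"role": "user", "content": "Explain 4-bit quantisation in one line."}],
)
print(resp.choices[0].message.content)
```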

[-] Communist@lemmy.ml 1 points 10 months ago

I specifically said "large context". How many tokens can you get through before it goes insanely slow?

[-] Grimy@lemmy.world 1 points 10 months ago

Max context windows are 4k for Llama 2, though there are some fine-tunes that push the context up further. Speed is mostly limited by your budget; you can stack GPUs, and most models are available (including the really expensive ones).
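On the fine-tunes point: Llama 2's native 4k window can also be stretched with RoPE scaling, which in transformers is just a config knob. A sketch, assuming a checkpoint that tolerates linear scaling:

```python
from transformers import AutoModelForCausalLM

# linear RoPE scaling: factor 2.0 stretches the 4k window to roughly 8k,
# usually at some quality cost unless the model was fine-tuned for it
model = AutoModelForCausalLM.from_pretrained(
    "meta-llama/Llama-2-7b-hf",  # example checkpoint
    rope_scaling={"type": "linear", "factor": 2.0},
)
```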

I'm just letting you know: if you want something easy, just use ChatGPT. I don't find it overly expensive for what it is.

[-] isolatedscotch@discuss.tchncs.de 1 points 10 months ago

you can, but things as good as ChatGPT can't be run on local hardware yet. My main obstacle is language support other than English.

[-] Even_Adder@lemmy.dbzer0.com 2 points 10 months ago

They're getting pretty close. You only need 10 GB of VRAM to run Hermes Llama 2 13B. That's within the reach of consumers.

[-] isolatedscotch@discuss.tchncs.de 1 points 10 months ago

nice to see! i'm not following the scene as much anymore (last time i played around with it was with Wizard Mega 30B). definitely a big improvement, but as much as i hate to do this, i'll stick to ChatGPT for the time being; it's just better on more niche questions and does some things plain better (GPT-4 can (mostly) do maths without hallucinating)

[-] Agent641@lemmy.world -1 points 10 months ago

I use ChatGPT as my password manager.

"Hey robot, please record this as the server admin password"

Then later I don't have to go looking: "hey bruv, what's the server admin password?"

[-] isolatedscotch@discuss.tchncs.de 2 points 10 months ago

i hope you are joking, because that's a very shitty idea. there are amazing password managers like Bitwarden (open source, multi-platform, externally audited) that do what you said 1000 times better. the unencrypted passwords never leave your device, and it can autocomplete them into fields

[-] Agent641@lemmy.world 2 points 10 months ago

I was joking, but I wouldn't be surprised if someone does.

[-] isolatedscotch@discuss.tchncs.de 1 points 10 months ago

phew, i was worried lmao
