Running llama-2-7b-chat at 8-bit quantization, and completions are essentially at GPT-3.5 levels on a single 4090 using 15 GB VRAM. I don't think most people realize just how small and efficient these models are going to become.
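[For context on the quoted claim: a back-of-the-envelope sketch of why an 8-bit 7B model lands around that VRAM figure. The numbers below are illustrative arithmetic, not measurements from the quoted setup.]

```python
# Rough VRAM estimate for a 7B-parameter model at 8-bit quantization.
# Illustrative arithmetic only; real usage varies with context length,
# KV cache size, and framework overhead.
PARAMS = 7e9          # ~7 billion parameters
BYTES_PER_PARAM = 1   # int8 quantization: one byte per weight

weights_gb = PARAMS * BYTES_PER_PARAM / 1e9
# KV cache, activations, and framework overhead add several more GB,
# which is roughly how "7 GB of weights" becomes ~15 GB on a 24 GB 4090.
print(f"weights alone: ~{weights_gb:.0f} GB")
```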

[cut out many, many paragraphs of LLM-generated output which prove… something?]

my chatbot is so small and efficient it only fully utilizes one $2000 graphics card per user! that’s only 450W for as long as it takes the thing to generate whatever bullshit it’s outputting, drawn by a graphics card that’s priced so high not even gamers are buying them!

you’d think my industry would have learned anything at all from being tricked into running loud, hot, incredibly power-hungry crypto mining rigs under their desks for no profit at all, but nah

not a single thought spared for how this can’t possibly be any more cost-effective for OpenAI either; just the assumption that their APIs will somehow always be cheaper than the hardware and energy required to run the model

[-] froztbyte@awful.systems 8 points 1 year ago

sidethought: I just thought up "promptfans" on the spot, but it doesn't look like it exists anywhere else? so I guess that's a word now

[-] self@awful.systems 8 points 1 year ago

I’m in love with promptfans and we’re going to make it a thing

[-] future_synthetic@awful.systems 5 points 1 year ago

The promptfans are here and so is my crippling need to shit

this post was submitted on 02 Aug 2023
14 points (100.0% liked)

TechTakes

1441 readers

Big brain tech dude got yet another clueless take over at HackerNews etc? Here's the place to vent. Orange site, VC foolishness, all welcome.

This is not debate club. Unless it’s amusing debate.

For actually-good tech, you want our NotAwfulTech community

founded 1 year ago