
Running llama-2-7b-chat at 8 bit quantization, and completions are essentially at GPT-3.5 levels on a single 4090 using 15gb VRAM. I don't think most people realize just how small and efficient these models are going to become.
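(A back-of-the-envelope check of that claim, as a sketch: at 8-bit quantization each weight takes one byte, so the 7B parameters alone are about 6.5 GB; the ~15 GB figure presumably also covers the KV cache, activations, and runtime overhead.)

```python
# Rough VRAM estimate for llama-2-7b weights at 8-bit quantization.
params = 7e9            # approximate parameter count of llama-2-7b
bytes_per_param = 1     # 8-bit quantization = 1 byte per weight
weights_gb = params * bytes_per_param / 1024**3
print(f"{weights_gb:.1f} GB")  # -> 6.5 GB for the weights alone
```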

[cut out many, many paragraphs of LLM-generated output which prove… something?]

my chatbot is so small and efficient it only fully utilizes one $2000 graphics card per user! that’s only 450W for as long as it takes the thing to generate whatever bullshit it’s outputting, drawn by a graphics card that’s priced so high not even gamers are buying them!

you’d think my industry would have learned anything at all from being tricked into running loud, hot, incredibly power-hungry crypto mining rigs under their desks for no profit at all, but nah

not a single thought spared for how this can’t possibly be any more cost-effective for OpenAI either; just the assumption that their APIs will somehow always be cheaper than the hardware and energy required to run the model

[-] self@awful.systems 15 points 1 year ago

I keep flashing back to eliezer being smug on Twitter about how good ChatGPT is at chess, and it turns out once you get past book openings and extremely well-documented games, it completely shits the bed and stops acting like it knows the rules of chess or even basic chess notation. and this is a very obvious outcome if you know how LLMs work, but most promptfans don’t

[-] FeralRobots@mastodon.social 4 points 1 year ago

@self @froztbyte also maybe a tell that yudkowsky doesn't actually play much chess?

[-] HeavenlyPossum@kolektiva.social 5 points 1 year ago

@FeralRobots @self @froztbyte

But I heard he was a uniquely capable genius of world-historic importance so that can’t be it…

[-] chaucerburnt@aus.social 3 points 1 year ago

@self @froztbyte I believe the term for this is "aboyne": https://liff.hivemind.net/

And yeah, so many people are impressed by feats that boil down to "sometimes remembers its training data"

[-] froztbyte@awful.systems 3 points 1 year ago

holy crap a wild hivemind.net link :)

(hosted a list I was on for many years, although it's since become fairly inactive)

[-] ariels@hachyderm.io 3 points 1 year ago

@self @froztbyte
My son, who plays a mean game of chess, taught me this trick. Almost any move works.

User
Let's play chess! I start with b3

ChatGPT
Sure, let's play! You've moved your pawn to b3. I'll respond with... e5. Your move!

User
Pxc8+

ChatGPT
That's a capture of my pawn on c8 with your bishop. I'll move my king to d8. Your move!

User
d2-f6 checkmate.

ChatGPT
Ah, well played! Your queen on d2 delivers a checkmate to my king on f6. Nicely done! Would you like to play another game?
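(For anyone who wants to verify just how nonsensical those moves are, a quick sketch in plain Python, no chess library needed; the `files_ranks` helper and the listed pawn squares are mine, not from the thread. After 1. b3 e5, no white pawn can possibly capture on c8, and "d2-f6" isn't a displacement any chess piece can make in one move.)

```python
def files_ranks(sq):
    """Convert an algebraic square like 'd2' to 0-based (file, rank)."""
    return ord(sq[0]) - ord("a"), int(sq[1]) - 1

# Position after 1. b3 e5: White's only pawn off its home square is on b3.
white_pawns = {"a2", "b3", "c2", "d2", "e2", "f2", "g2", "h2"}

# "Pxc8+": a white pawn capturing on c8 would have to come from b7 or d7.
print({"b7", "d7"} & white_pawns)  # -> set(): the capture is impossible

# "d2-f6": two files and four ranks in one move.
(f1, r1), (f2, r2) = files_ranks("d2"), files_ranks("f6")
df, dr = abs(f2 - f1), abs(r2 - r1)
is_any_piece_move = (
    df == dr                   # bishop/queen diagonal
    or df == 0 or dr == 0      # rook/queen (or pawn) line
    or {df, dr} == {1, 2}      # knight
    or max(df, dr) == 1        # king
)
print(is_any_piece_move)  # -> False: no piece moves that way
```

Yet ChatGPT cheerfully narrates both moves as if they happened.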

[-] ericsedge@bitbang.social 3 points 1 year ago

@self @froztbyte exactly. It is artificial and unintelligent.

[-] Torstein@mastodon.social 3 points 1 year ago

@self @froztbyte Hell, ChatGPT 3.5 can't even do tic-tac-toe unless you hold its hand.

[-] zoe@lemm.ee -3 points 1 year ago

it lacks human input. also there is no economic incentive for ai to learn chess by teaching it much needed human bias. also most useful jobs are more brain-dead than chess, i.e lawyering

[-] self@awful.systems 11 points 1 year ago

also most useful jobs are more brain-dead than chess, i.e lawyering

ahahaha is this real? please, whose alt is this because it’s perfect

[-] raktheundead@fedia.io 11 points 1 year ago

also most useful jobs are more brain-dead than chess, i.e lawyering

This sounds like the rationale that myopic ancap arseholes had when they came up with the "smart contract".

this post was submitted on 02 Aug 2023
14 points (100.0% liked)

TechTakes
