astrsk@fedia.io 13 points 1 month ago

Isn’t OpenAI saying that o1 has reasoning as a specific selling point?

froztbyte@awful.systems 14 points 1 month ago

they do say that, yes. it’s as bullshit as all the other claims they’ve been making

astrsk@fedia.io 8 points 1 month ago

Which is my point, and, forgive me, but I believe it's also the point of the research publication.

homesweethomeMrL@lemmy.world 12 points 1 month ago

They say a lot of stuff.

DarkThoughts@fedia.io 6 points 1 month ago

My best guess is it generates several possible replies and then does some sort of token match to determine which one may potentially be the most accurate. Not sure if I'd call that "reasoning", but I guess it could potentially improve results in some cases. With OpenAI not being so open, it's hard to tell though. They've been overpromising a lot already, so it may well just be complete bullshit.
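Something like the toy sketch below, maybe. This is entirely hypothetical: the candidate replies and the token_overlap_score heuristic are made up for illustration, and none of it says anything about what OpenAI actually does; it only shows the "generate several replies, then pick one by token matching" idea described above.

```python
# Hypothetical sketch of a "generate several replies, pick one by token match"
# selection step. Purely illustrative; not OpenAI's actual implementation.
from collections import Counter


def token_overlap_score(reply: str, context: str) -> float:
    """Crude "token match": fraction of the reply's tokens that also appear in the context."""
    reply_tokens = Counter(reply.lower().split())
    context_tokens = Counter(context.lower().split())
    shared = sum(min(count, context_tokens[tok]) for tok, count in reply_tokens.items())
    return shared / max(sum(reply_tokens.values()), 1)


def pick_best_reply(candidates: list[str], context: str) -> str:
    """Generate-then-select: score every candidate reply and return the best-scoring one."""
    return max(candidates, key=lambda reply: token_overlap_score(reply, context))


if __name__ == "__main__":
    context = "Why does the moon look larger near the horizon?"
    candidates = [  # stand-ins for several sampled model replies
        "Cheese production peaked in 1987.",
        "The moon looks larger near the horizon because of a perceptual illusion.",
        "Bananas are an excellent source of potassium.",
    ]
    print(pick_best_reply(candidates, context))
```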

lunarul@lemmy.world 4 points 1 month ago

> My best guess is it generates several possible replies and then does some sort of token match to determine which one may potentially be the most accurate.

Didn't the previous models already do this?

DarkThoughts@fedia.io 4 points 1 month ago

No idea. I'm not actually using any OpenAI products.
