submitted 5 months ago by kromem@lemmy.world to c/chatgpt@lemmy.world

I've been saying this for about a year, since seeing the Othello GPT research, but it's great to see more minds changing on the subject.

top 2 comments
beetus@lemmy.world 2 points 5 months ago (last edited 5 months ago)

Neat to see, but I don't buy it yet. I think their assumptions are reasonable, but without verifying the training data or working directly with these LLM firms on these studies, I don't see how they can claim it's producing novel, non-trained output.

Regardless, it's a step in the right direction toward better understanding the seeming black box we've created.

akrot@lemmy.world 1 point 5 months ago

Interesting read. Basically, they demonstrated that GPT-4 can understand causality, using random graphs. An interesting takeaway, though, is this excerpt:

"And indeed, as the math predicts, GPT-4’s performance far outshines that of its smaller predecessor, GPT-3.5 — to an extent that spooked Arora. “It’s probably not just me,” he said. “Many people found it a little bit eerie how much GPT-4 was better than GPT-3.5, and that happened within a year."

this post was submitted on 24 Jan 2024
