That's just an algebraic equation but with words.
This sounds like something ChatGPT could easily convert, although that would be computationally expensive.
It would also have the side effect of blocking all math-illiterate people from creating accounts.
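To give a sense of how little effort the bot side would need, here's a rough sketch of forwarding such a question to a model. It assumes the openai 1.x Python SDK and gpt-3.5-turbo; the puzzle text is just my reconstruction from the replies further down.

```python
# Hypothetical sketch of a bot forwarding a signup question to an LLM.
# Assumes the openai 1.x Python SDK and an OPENAI_API_KEY in the environment.
from openai import OpenAI

client = OpenAI()

def answer_signup_question(question: str) -> str:
    """Send the word puzzle to the model and return its short answer."""
    response = client.chat.completions.create(
        model="gpt-3.5-turbo",  # assumption; any chat model would work
        messages=[
            {"role": "system",
             "content": "Answer the logic question with a single short answer."},
            {"role": "user", "content": question},
        ],
    )
    return response.choices[0].message.content.strip()

if __name__ == "__main__":
    # Puzzle wording reconstructed from the ChatGPT replies quoted below.
    puzzle = (
        "The box is yellow and red. If the box is yellow, it is good. "
        "If the box is red, it is happy. If the box is good and happy, "
        "the box is awesome. Is the box awesome?"
    )
    print(answer_signup_question(puzzle))
```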
I gave this to ChatGPT with GPT-3.5:

Based on the given statements:
We can deduce the following:
The box is yellow, and according to statement 2, if the box is yellow, it is good.
Now, according to statement 4, for the box to be awesome, it needs to be both good and happy. However, we don't have information about the box being happy. Statement 5 only mentions that if the box is red, it is happy, but we know the box is yellow and red, not just red.
Therefore, based on the given information, we cannot determine whether the box is awesome or not.
A + B = C
A = X
D != Y
X + Y = Z
B = Y
Does C = Z?
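Spelled out with the actual box statements as booleans, it's a short chain. A quick Python sketch; the wording of the statements is taken from the ChatGPT replies in this thread:

```python
# The box puzzle as plain boolean logic.
yellow = True  # "the box is yellow and red"
red = True

# Each rule only needs to fire in the forward direction here.
good = yellow             # "if the box is yellow, it is good"
happy = red               # "if the box is red, it is happy"
awesome = good and happy  # "if the box is good and happy, the box is awesome"

print(awesome)  # True
```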
exactly
From ChatGPT 4:
Yes, the box is awesome.
According to the provided statements, the box is yellow and red.
"If the box is yellow, it is good." - So, since the box is yellow, it is good.
"If the box is red, it is happy." - And since the box is red, it is happy.
Finally, "If the box is good and happy, the box is awesome." - Therefore, because the box is both good (due to being yellow) and happy (due to being red), the box is indeed awesome.
That's terrifyingly good wtf
I was going to say you could give it a math problem that uses big numbers, but I tried one on GPT-4 and it succeeded. GPT-3, though, will absolutely fail at nontrivial math every time.
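If anyone wants to see what that kind of filter would even look like, here's a toy sketch of a big-number arithmetic challenge. It's entirely my own made-up example, not something any instance actually uses:

```python
# Toy sketch of a "big numbers" arithmetic challenge as a bot filter.
import random

def make_challenge(digits: int = 12) -> tuple[str, int]:
    """Return a question string and the expected integer answer."""
    lo, hi = 10 ** (digits - 1), 10 ** digits - 1
    a, b = random.randint(lo, hi), random.randint(lo, hi)
    return f"What is {a} * {b}?", a * b

def check_answer(expected: int, reply: str) -> bool:
    """Accept only an exact answer; close doesn't count in math problems."""
    try:
        return int(reply.strip().replace(",", "")) == expected
    except ValueError:
        return False

if __name__ == "__main__":
    question, expected = make_challenge()
    print(question)
    print(check_answer(expected, str(expected)))  # True
```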
Yeah, it still definitely does! The interesting thing is that it seems to be very good at estimating, and the final answer it gives is usually pretty close to correct in my experience. Of course, close doesn't really count in math problems.
Bard, however, does not seem to get the answer right:
Seems like it got kind of close with "The box is both yellow and red, so it is both good and happy"... but then it falls apart afterwards.
Edit: I tried to debate with it:
Me:
Bard:
Which is interesting, to say the least. It's almost like it's looking a bit too deeply into the question lol.
Bing had no trouble
Bing is GPT-4 based, though I don't think it's the same version as ChatGPT. Either way, GPT-4 can solve these types of problems all day.
Not surprised. I got access to Bard a while back, and it hallucinates quite a lot more than even GPT-3.5.
That said, I've seen GPT-4 double down on a wrong answer even when corrected in some cases too. It seems like once it says something, it usually sticks to it.
Darn, foiled again. You win this round, ChatGPT.
Um wtf, I'm starting to doubt if I'm a human.