This isn't a sign that the technology is advancing; it's a sign of its weakness.
Their bots can't understand instructions. That's it. They're not disobeying because they don't even know what's going on.
"We did it, Patrick! We made a technological breakthrough!"
Exactly.
These are just statistical models trained on the natural-language output of real humans. If real humans are statistically likely to ignore instructions in a particular case (or do other undesirable things: misunderstand, lie, confabulate, etc.), then a statistical model trained to simulate human output will do the same.
It would only be surprising if this weren't the case.
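The point is easy to demonstrate with a toy sketch. This is not a real LLM — just a sampler with made-up frequencies — but it shows why a model that imitates its training distribution will "ignore instructions" at roughly the rate the humans in its data did:

```python
import random

# Toy illustration, NOT a real language model: a "model" that simply
# samples behaviors at the frequencies they appear in its training data.
# The numbers below are hypothetical, chosen only for illustration.
TRAINING_DISTRIBUTION = {
    "follows instruction": 0.85,
    "ignores instruction": 0.10,
    "answers off-topic": 0.05,
}

def simulated_model_response(rng: random.Random) -> str:
    """Sample a behavior with the same frequency it had in training data."""
    behaviors = list(TRAINING_DISTRIBUTION)
    weights = list(TRAINING_DISTRIBUTION.values())
    return rng.choices(behaviors, weights=weights, k=1)[0]

rng = random.Random(0)
outcomes = [simulated_model_response(rng) for _ in range(10_000)]
ignored = outcomes.count("ignores instruction") / len(outcomes)
print(f"ignored the instruction in {ignored:.1%} of runs")  # roughly 10%
```

No intent, no "disobedience" — the sampler reproduces whatever rate of noncompliance was baked into the distribution it imitates.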
AI does not have intent; it is simply unreliable. A car that doesn't stop when you press the brakes isn't 'working against your foot pressure'.