Actually, let me expand on my statement about it being intentional.
There are things that AI applications can do that humans can't.
AI is all about analyzing large sets of variables and finding things. Take recent studies in pathology where AI can find the patterns of certain diseases in tissue specimens. This only works because the enormous dataset it was given had already been vetted by pathologists. I would argue this isn't counterfeiting human thought; it's enhancing an already established process, trained by doctors. Remember, a pathologist still has to put their license on the line if they agree with the AI findings.
There is NO accountability in LLMs. To many people it looks like the LLM is thinking, that it has understood what they said and has considered boundaries that exist in our minds but were never communicated to it.
That's why I call these AI programs unsuccessful and counterfeit. They're giving users answers built from possibly unverified and unreliable data, with no accountability.
Hmmm. Something still isn't clicking in my head.