"AI" is a misnomer, ChatGPT and other "AI" are actually LLMs
here's a decent video by 3Blue1Brown explaining how LLMs work:
https://www.youtube.com/watch?v=LPZh9BOjkQs
and here's a rather lucid explanation of how to think about LLMs:
https://www.newyorker.com/tech/annals-of-technology/chatgpt-is-a-blurry-jpeg-of-the-web
I think what's crucial to understand here is that we're talking about a computer program that generates text by guessing what the most likely next word is. It's like the word-suggestion tool on your smartphone's keyboard, and similar in spirit to translation tools like Google Translate.
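To make that concrete, here's a toy sketch (my own example in Python, not anything from the video or article above): a tiny bigram model that counts which word follows which in some training text, then suggests the most frequent follower. Real LLMs use huge neural networks trained on far more data, but the basic job - guess the next word - is the same idea.

```python
# Toy next-word predictor: count which word follows which, then suggest the
# most common follower. This is an illustration of the idea, not how ChatGPT
# actually works internally.
from collections import Counter, defaultdict

def train(corpus):
    """Count, for each word, how often each other word follows it."""
    words = corpus.lower().split()
    follow_counts = defaultdict(Counter)
    for current, nxt in zip(words, words[1:]):
        follow_counts[current][nxt] += 1
    return follow_counts

def suggest_next(model, word):
    """Return the word most often seen after `word` in the training text."""
    counts = model.get(word.lower())
    if not counts:
        return None
    return counts.most_common(1)[0][0]

model = train("the cat sat on the mat and the cat slept on the rug")
print(suggest_next(model, "the"))  # -> "cat" (seen twice after "the")
print(suggest_next(model, "on"))   # -> "the"
```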
This technology isn't new; what's new is the accumulation of much larger datasets, plus the hardware and techniques to train LLMs on that much data. That just makes the predicted text more typical of the training data.
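You can see that "more typical of the training data" effect even with the toy model above (again, just my own illustration): feed it more text and its suggestions simply track whatever continuation is most common in that text.

```python
# Same toy model as above, just with "more data": the suggestion tracks
# whichever continuation is most common in the training text.
small = train("the sky is blue")
big   = train("the sky is blue " + "the sky is grey " * 10)
print(suggest_next(small, "is"))  # -> "blue"
print(suggest_next(big, "is"))    # -> "grey" (more typical of the bigger corpus)
```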
So "AI" doesn't need to be shackled because "AI" isn't an intelligence and has no agency or control over anything.
LLMs generate text; that's all they do - they can't control robots, "think", or "do" anything else.

