Somebody managed to coax the Gab AI chatbot to reveal its prompt
(infosec.exchange)
So this might be the beginning of a conversation about how initial AI instructions need to start being legally visible, right? This is a prime example of how an AI can be coerced into certain beliefs without the person prompting it even knowing.
Based on the comments, it appears the prompt doesn't even fully work. It mainly seems to be something to laugh at while despairing over the writer's nonexistent command of logic.