Somebody managed to coax the Gab AI chatbot to reveal its prompt
(infosec.exchange)
I don't get it, what makes the output trustworthy? If it seems real, it's probably real? If it keeps hallucinating the same thing, it must have some truth to it? Those seem to be the two main mindsets: you can tell by the way it reads, and look, it keeps saying this.
With the prompt engineer comes the inevitable prompt reverse engineer 👍
I managed to get partial prompts out of it then... I think it's broken now: