Awesome. Love to see that we have our own local LLM researcher on the instance. Pretty cool paper, too!
It's impressive how much your group was able to conclude despite the limitations of working with a closed model. Really makes you wonder what might be possible if a supervisor AI agent could actively inspect GPT's internal state as it runs, eh?