The thing about LLMs is that they "store" information about the shape of their training texts, not about the facts contained in them. That information is lost.
An LLM will produce text that looks like the texts it was trained on, but it can only reproduce information contained in them if that information is common enough in the training data to statistically affect their shape, and even then it has a chance of getting it wrong, since it has no way to check its output for factual accuracy.
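A toy sketch of the idea (nothing like a real transformer, just the same principle at minimal scale): a character-level bigram model stores only the transition statistics, i.e. the "shape", of its training text. Frequent patterns come back out fluently; anything rare is not reliably recoverable. All names here are made up for illustration.

```python
# Toy character-level bigram "language model": it keeps only counts of
# which character follows which, then samples from those counts. It has
# no representation of facts, only of statistical shape.
from collections import Counter, defaultdict
import random

def train_bigrams(text):
    # counts[a][b] = how often character b followed character a
    counts = defaultdict(Counter)
    for a, b in zip(text, text[1:]):
        counts[a][b] += 1
    return counts

def generate(counts, start, n, rng):
    # Sample n characters, each conditioned only on the previous one
    out = [start]
    for _ in range(n):
        nxt = counts.get(out[-1])
        if not nxt:
            break
        chars, weights = zip(*nxt.items())
        out.append(rng.choices(chars, weights=weights)[0])
    return "".join(out)

# A corpus dominated by one repeated pattern, plus one rare "fact"
corpus = "the cat sat on the mat. " * 50 + "pi is 3.14159. "
model = train_bigrams(corpus)
print(generate(model, "t", 60, random.Random(0)))
```

Running this produces plausible-looking "the cat…"-shaped strings, while the lone "pi is 3.14159" sentence is far too rare to survive intact: the model can emit text shaped like its training data without ever holding the information in it.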
Add to this that most models are pre-prompted to sound confident, helpful, and subservient (the companies' main goal being not to provide information, but to get customers hooked on the product and coming back for more), and you get the perfect scammers and yes-men: auto-complete mentalists that will give you as much confident-sounding, information-shaped nonsense as you want, doing their best to agree with you and confirm any biases you might have, with complete disregard for accuracy, truth, or the effects your trust in their output might have. That makes them extremely dangerous and addictive for suggestible or intellectually or emotionally vulnerable users.
It seems to be a reference to the N1 rocket's track record.