LLM vendors are incredibly bad at responding to security issues
(pivot-to-ai.com)
If models are trained on data that it would be a security breach for them to reveal to their users, then the real breach occurred at training.
Now you know that and I know that,
The big LLMs everyone's talking about and using are just advanced forms of theft.