Nobody needs to use AI to bug our phones, or to build a sprawling nervous system to track our vitals, because our phones are already bugged. Everything we do on them is recorded a dozen times over: by our wireless carriers, by the websites we visit and the apps we use, by the vendors and ad networks those companies send their data to, and in the marketplaces that sell that data. We built the eyes of the Greco decades ago.
But that data has remained relatively secure—or maybe more precisely, its potential energy has remained relatively buried—largely because it’s tedious to work with. It’s messy; it’s scattered across different sources and formats; combining it is a pain; and most of us are simply not interesting enough to investigate. Data analysts who work at shadowy government agencies have lives too, and they do not want to write 595-line SQL queries either.
But AI doesn’t mind. And that’s the boring danger of what happens next: not AI becoming a superintelligent Sherlock Holmes finding impossible patterns in its enormous mind palace, but AI as a million monkeys at a million typewriters, doing the grunt work no person wanted to do. Because when prying questions are a prompt away—rather than 24 hours of work away—who wouldn’t be tempted to pry?
How can you argue that trivial data somehow implies private data?
The term you're looking for is "security through obscurity". The effort required to create a coherent picture from that scattered information is more than it's worth, so it doesn't get done. The argument here is that AI changes that calculation because it removes the effort.
Security through obscurity was discredited long ago, and AI sifting through your obscure data with ease is a great example of why the approach doesn't work.
Trivial-looking data can be very relevant and private. E.g., I saw a talk on how just scraping the names of authors and the times when they publish articles on a news site can be used to draw likely conclusions about who is taking vacations "together". That's just two parameters out of probably dozens or even hundreds of the data points we leave behind by using tech.
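The byline trick is easy to sketch. A toy version of the idea, with entirely made-up names and dates (the talk's actual method isn't specified here): scrape (author, publish date) pairs, treat days without a byline as absences, and score author pairs by how much their absence windows overlap.

```python
# Toy sketch: infer which authors are likely absent at the same time
# from nothing but bylines and publish dates. All data is invented.
from datetime import date, timedelta
from itertools import combinations

june = {date(2024, 6, d) for d in range(1, 31)}

# author -> days on which they published (alice and bob both go
# quiet June 10-17; carol only misses June 5)
published = {
    "alice": june - {date(2024, 6, d) for d in range(10, 18)},
    "bob":   june - {date(2024, 6, d) for d in range(10, 18)},
    "carol": june - {date(2024, 6, 5)},
}

def absence_days(days, start, end):
    """Days in [start, end] with no byline for this author."""
    window = {start + timedelta(n) for n in range((end - start).days + 1)}
    return window - days

gaps = {a: absence_days(d, date(2024, 6, 1), date(2024, 6, 30))
        for a, d in published.items()}

# score every pair by overlap of absence windows (Jaccard similarity)
for a, b in combinations(gaps, 2):
    union = gaps[a] | gaps[b]
    overlap = len(gaps[a] & gaps[b]) / len(union) if union else 0.0
    print(f"{a} & {b}: {overlap:.2f}")
```

On this toy data, alice and bob score 1.00 and every pair involving carol scores 0.00: exactly the "vacationing together" signal, recovered from two innocuous-looking fields.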