Talk by Meredith Whittaker (president of the Signal Foundation) & Udbhav Tiwari (VP for Strategy and Global Affairs at Signal)
Agentic AI is the catch-all term for AI-enabled systems that promise to complete tasks of varying complexity on their own, without stopping to ask permission or consent. What could go wrong? These systems are being integrated directly into operating systems and applications, like web browsers. This move represents a fundamental paradigm shift, transforming operating systems from relatively neutral resource managers into active, goal-oriented infrastructure ultimately controlled by the companies that develop these systems, not by users or application developers.

Systems like Microsoft's "Recall," which create a comprehensive "photographic memory" of all user activity, are marketed as productivity enhancers, but they function as OS-level surveillance and create significant privacy vulnerabilities. In the case of Recall, we're talking about a centralized, high-value target for attackers that poses an existential threat to the privacy guarantees of meticulously engineered applications like Signal. This shift also fundamentally undermines personal agency, replacing individual choice and discovery with automated, opaque recommendations that can obscure commercial interests and erode individual autonomy.
This talk will review the immediate and serious danger that the rush to shove agents into our devices and digital lives poses to our fundamental right to privacy and our capacity for genuine personal agency. Drawing on Signal's analysis, it moves beyond outlining the problem to present a "tourniquet" solution: what we need to do now to ensure that privacy at the application layer isn't eliminated, and what the hacker community can do to help. We will outline a path toward developer agency, granular user control, and radical transparency, and describe the role adversarial research can play.
The talk will provide a critical technical and political-economy analysis of the new privacy crisis emerging from OS- and application-level AI agents, aimed at the 39C3 "Ethics, Society & Politics" audience.