Surveillance By Default

In our hyperconnected world, does privacy even exist any longer? It’s hard to argue that it does as networked devices permeate every aspect of our lives. What I find most concerning is how readily we accept this condition. Many of the services we use record our digital lives en masse, yet the details of these arrangements are constantly shifting, as each new disclosure makes clear. User privacy expectations are routinely violated, and the cumulative implications of those violations are increasingly difficult to fathom.

Many respond by asking: what options do we have? Can we afford to disconnect? Is that even possible in modern life?

While I have no definitive answers yet, I find the growing power imbalance disconcerting. Users have no verifiable means of controlling the disclosure of their personal data, particularly on mobile phones, where location data alone reveals volumes. It seems we have to presume full compromise and live with that reality.

Or do we?

We could start by asking some questions about the people who are designing and building the global platforms we’ve come to rely on. How do they think about their relationship with their users? What particular values guide their design practices?

One particularly telling experience occurred at a dinner last year that brought together data scientists from across the Bay Area. Early in the evening, I shared an example of a personal data disclosure that I had no control over. One gentleman at our dinner table replied:

“Do you know that in Russian there is no word for privacy? As a scientist, I hope that concept goes away.”

He paused for a moment as I contemplated my response.

“My wife works on spyware. Did you know that you can classify the gender of a computer user based on the motions of the mouse? Isn’t that cool?”

“No, that’s not cool.”

My entreaties to respect the rights of individuals fell flat with him. I sensed no concern for the potential harms that come with constructing massive centralized repositories of user data. When I insisted we should be discussing these harms more within the community, he looked at me with no reaction. Nothing.

That exchange left a deep impression on me. While I would hope this gentleman’s perspective is rare, I suspect it is emblematic of a larger technocratic worldview in Silicon Valley that has led us to this point.

We find ourselves living through a profound moment in history, one in which it is possible to observe both online and offline social behavior at a global scale. From a scientific perspective, data collection at this scale is undeniably attractive. From an economic perspective, it is undeniably lucrative.

We’ve rushed in to explore and expand this vast technological landscape. In many respects, one could argue we remain at the very beginning of that exploration. Yet some patterns are starting to become clear. There are risks to the rights of individuals and groups within our global society that need to be addressed. It’s time for the technology sector to reflect more deeply on those risks and take steps to mitigate them going forward.

To help frame a meaningful conversation about these risks, I believe we first need some supporting language. In two future posts, I will lay out my initial contributions to that conversation. In the first, I’ll define several classes of risk that together constitute the digital vulnerability we are collectively exposed to online. In a follow-on post, I’ll highlight present knowledge gaps within the data science community that limit our design options and suggest a way forward to close them.

The scale of the challenge before us feels daunting. Yet as with any formidable challenge, progress is possible with persistent effort. A long series of small steps can add up to significant positive change. Let’s see where the path leads…