Have we lost our privacy to the internet?
Google unveiled its new policy on protecting personal information to
a chorus of disapproval last week. As alarm grows over how much data we
are handing over to large companies that mine it for profit, Joss Wright
and Tom Chatfield lock horns over how worried we should be.
Profound effects
The rise of the internet, from obscurity to ubiquity in the course of
a few decades, has had profound effects on our society and on us as
individuals. No one could have predicted how significant it would
become, and no one can predict with any certainty how it will develop.
This is true of the underlying technologies, but it is equally, and
arguably more importantly, true of the changes the internet has brought
to how we communicate and interact with each other. It has given us an
overwhelming range of benefits, providing a universal tool to
communicate, to organise our lives and to access entertainment.
What concerns me most is that this tool brings with it risks that we
simply haven't grasped. Social networks encourage us to share every
aspect of our lives with our friends, but, by providing that service,
those networks see everything that we share, and use that information to
categorise, profile and predict us. These services aren't “free” – we
pay for them with our personal data, and the profits are huge.
It could be argued, and often is by the heads of large technology
companies, that privacy is dead and that the benefits of social networks
and similar services easily outweigh the perceived loss of privacy. I
would argue that we do not yet fully understand the power of the data we
have shared, and that by unwittingly giving this information to Google,
Facebook and others we are shaping the future of our society in ways we
cannot predict.
For the entirety of human history, we have operated on small scales
and in relative anonymity. Our words are heard by the few people close
to us and most are quickly forgotten. We walk down the street without
passers-by knowing our names or history. The internet has started to
change that. Our words and actions can easily be shared with billions of
people around the globe and archived indefinitely. The details of our
lives can be found simply by typing our name into Google.
We need to understand the risks of this type of technology so that we
can fully gain its benefits. We need protections, both technical and
legal, so that a small mistake cannot devastate our lives. We also need
education to help us function in a world where privacy is no longer the
natural state of being.
I think you've hit the nail on the head when you write that, by
sharing data, we are shaping the future of our society in ways that
cannot be predicted. How, then, can we mitigate the worst
consequences of trends we can scarcely envisage?
If recent economic history suggests anything, it's that humans are
poor at risk assessment. Can we hope to comprehend these risks until at
least some of us have experienced them in their worst form? Quite
possibly not – not least because of the intuition-defying divide
between the daily delights of social networking and its potential
repercussions in the form of snooping, stalking, identity theft, state
surveillance and much else besides.
While it's difficult to disagree with your call for education and
effective protections, I find myself worrying about another set of
unintended consequences behind this warning: that we risk polarising a
vital debate into paralysis, rather than facing up to the knotty human
issues feeding it.
I'm not sure how helpful it is, for example, to lump the perspectives
of most large technology companies together under the banner “privacy is
dead”. Clearly, there's a lot of money being made today – and much of it
with breathtaking cynicism – by extracting as much personal information
as possible from the eager and the unwitting, as well as from the eager
and well-informed. Facebook's initial public offering is barely the
beginning and it will be both alarming and fascinating to see how its
heft continues to grow.
It's also increasingly clear, however, that granting and respecting
privacy of various kinds is becoming a vital part of most business
strategies reliant on users’ goodwill, attention and effort – for all
its faults, Google placed particular emphasis on this in creating
Google+ – and that many of the most worrying trends taking place across
the digital world are born more from corporate ignorance than malice.
This doesn't excuse ignorance or cynicism. But I do worry that much
of the “us” and “them” attitude directed towards tech companies may be
hitting the wrong targets, and that privacy itself risks becoming a
bugbear rather than a rallying call.
Those perpetrating both present and potential crimes and abuses –
whether they're members of a corporation, the criminal underworld, or a
national government – must be stopped, and the techniques they deploy
identified and disabled. But the “we” who are truly in this together
encompasses most of the companies it's tempting to vilify and the
elected officials it's easy to pillory as ignorant. Without them, there
will be no such thing as real progress.
The balance between privacy and innovation is crucial to this debate
and I wouldn't want to suggest we treat privacy as anything more than
one among several fundamental human rights.
Privacy risks
As you've noted, the pleasures of social networking are real and
tangible whereas the privacy risks tend to seem far away. From a
corporate perspective the profits are equally real, and the risks of
privacy violations remain as far away. I think that this is why we are
seeing, for example with Google's Street View as well as its recent
privacy policy changes, a tactic of asking forgiveness rather than
permission. I very much agree that treating companies, and indeed our
governments, as closed entities that we must fight every inch of the way
is not productive. It is certainly worth praising Google for the wide
promotion of its new privacy policy, even while we can criticise the
policy itself. Engagement – and Google is particularly active in
engaging researchers and activists – is vital.
The risks, however, aren't entirely unknown. The more familiar
stories concern people who have had sensitive information – about their
sexuality, for example – revealed. We should also be concerned
about our profiles becoming conflated with our “real” identity: Google
already presents us with search results that it believes we want and so
hides from us views or opinions with which we may disagree. Without
being overly alarmist, this raises fundamental issues concerning our
individuality, personal development and autonomy – not least
through our ability to present the face we wish to present in different
social contexts.
Can we mitigate these risks without seeing them? I think we can try.
If we drive a car with our eyes shut and foot flat on the accelerator
then we will not see the wall before we hit it; we need to open our
eyes, and I would also like a hand on the steering wheel – and
preferably not one that's invisible.
The distinction you draw between asking for forgiveness and seeking
permission seems a very useful one – and reminds me of that
all-too-familiar political rigmarole of the disgraced official's public
“apology”, where ostentatious contrition is somehow supposed to
substitute for facing consequences or making amends.
What, then, are the best mechanisms for making real consequences and
amends possible in the areas we're talking about?
I'm sure we agree that driving flat out with both eyes shut isn't a
recipe for progress. But I remain less sure whose hands should be
grasping the steering wheel and who gets to apply the brakes. This is
especially true when it comes to those overlapping issues you identify:
our ability to present ourselves to the world in ways we control and to
control how the world is presented to us in turn.
In each case I'm reminded of a phrase that the American legal scholar
Cass Sunstein used in his 2009 book On Rumours. He argued that the
dissemination of false or harmful materials via open technologies should
be counterbalanced by a legal “chilling effect”, making it easier to
issue takedown notices and hold service operators responsible for
content. Versions of this question have cropped up again and again, from
the Stop Online Piracy Act to the net neutrality debate. And what
interests me more than the rights or wrongs of such proposals (although
I think we'd largely agree about these) is the massive, intractable
scale of the impasse they seem to be circling.
Those with most to hide – from brave revolutionaries to sinister
government cronies – are, I think, already well aware that true privacy
means having no digital records at all. For the rest of us, though, the
question is whether preserving the unprecedented openness of a
decentralised global network is compatible with putting truly effective
mechanisms in place for chilling its excesses.
This may well be unanswerable, at least for now. But in a sense we
already face a miniature version of this question every day in the
services we choose to use, and what we entrust them with. Should we be
flocking towards the tidy, secure walled gardens of media ecosystems
like Apple's – or running as fast as we can in the opposite direction?
Open platform
I believe that the fewer enforced restrictions on individual
expression the better; that the true triumph of the internet is as an
open platform for endless types of creativity. My hope is for means to
limit the harm to individuals that comes from exploiting the inherent
inspectability that the internet brings.
I'm aware of the apparent paradox here: I'm asking for the internet
to be open and free for individuals, but restricted for corporations.
Let me instead argue that those who, by offering us services, place
themselves in a position of power over us should be required to exercise
that power responsibly. Chilling effects are, of course, a powerful
tool, the results of which we experience every day. Unfortunately, this
form of legal approach massively favours large corporations that have
the resources to wage endless legal battles and, in practice, often use
it to “chill” creativity and freedom of expression. (I would advise
anyone to look into chillingeffects.org, which comprehensively documents
the phenomenon.)
I don't have a simple answer, and I don't believe that one exists; we
need to find a subtle balance of technology, law, consumer pressure and
individual education; and we need it at a global level.
We need to act beforehand, because we can't take back data once it is
released. We will get things wrong, of course, but we need to take a
holistic and long-term look at how we can affect the technologies that,
in turn, affect us. We would do well to look at the development of the
environmental movement, which has laid the groundwork for this kind of
debate. We need to consider proactive means to protect the information
environment that we have created from long-term externalities that are
not immediately apparent.
As to the services that we choose to use, walled gardens are
initially convenient and comfortable; what terrifies me is the extent to
which they lead us to stop thinking beyond the walls, and to conform to
the safe options that they present.
I prefer the freedom and complexity of open services, without walls
or gates, that do not ask who I am and do not ask me to stay the same.
Corporate responsibility
Thank you for your considered response – and for your neat
articulation of the tension between individual freedom and corporate
responsibility.
Another way of putting this, perhaps, is that in the internet we
possess an unprecedentedly democratic set of tools which also gifts
unprecedented power to those controlling its infrastructure. For me,
there is thus far more to celebrate than to mourn in the changes
these tools have brought. But you're surely right to highlight the
difficulty of acting in advance across many interlocked fields to
safeguard against as-yet-indeterminate perils.
Your example of the environmental movement laying the groundwork for
this debate also seems almost too apt, in that it suggests how little
effective action humanity tends collectively to take when a
problem – however menacing – is nebulous and has yet fully to unfold.
As a writer, I'd like nothing better than for eloquent explanations
and expert insights to create an entirely persuasive movement
anatomising technology's opportunities and perils.
I can't help feeling, though, that words will never be enough; and
that the most necessary counterpoint to the resources corporations and
governments possess remains that one field where weight of numbers truly
counts: direct action.
Here, from Occupy and UK Uncut to the Arab spring, the present is
richly stocked with role models and warnings: with devolved,
ideologically charged movements embodying many of those virtues your
last message sets out.
There's surely hope in this notion of privacy defended by avowedly
public action – and safeguarded by those open, evolving digital services
you advocate. But only if those protesting are ultimately able to find
common cause with those wielding other forms of power.
The Guardian