For me, the most interesting questions to come out of Saturday’s open-space discussion of Privacy, By Design at CIS were those focused on how the notion of ‘privacy’ is constructed and negotiated. For those who attended, please excuse me if I get your ideas all jumbled up! At the moment this is just a rough sketch of some ideas that came out of the session, so feel free to clarify your ideas in the comments, or link to your own posts on the discussion.
At the most fundamental level, Nishant Shah was concerned about how we define privacy, and the extension of the concept to (potentially) unrelated areas. For example, debates that might previously have been framed within the discourse of rights are now being framed with reference to privacy, which might be seen as an example of the concept’s vagueness. I can see Nishant’s point as it applies to analytical clarity, and the difficulty that discussion participants had in coming up with a mutually-agreeable definition illustrated it admirably.
However, from an activist perspective I wonder if the issue is more complex; perhaps for activists the question is not so much how to come up with an analytically-sound definition, as how to come up with a politically-useful discourse? From this perspective, I wonder why it is that privacy is gaining so much ground as an activist discourse. It might be because it fits so neatly with the (neo)liberal emphasis on individual rights, and on negative freedoms. And if that’s the case, a focus on privacy in activism may have the unintended effect of strengthening the neoliberal discourse. (Or not! What do you think?)
An important part of how we define privacy is deciding who defines what is private. Some people argued for a notion of privacy as entirely defined by the individual: each person should decide what is private to them, and what constitutes a privacy violation. However, this becomes a problem when trying to build legal or technical systems that respect privacy: who decides what it’s reasonable to call ‘private’?
One potential solution is for flexibility and choice to be built into systems, as it (arguably) has been to differing extents in Facebook, Livejournal, Twitter and other Web 2.0 tools. While this is possible in systems based on voluntary participation, it’s not a likely solution for coercive government systems like the UID or census. (This part of the discussion also led off into a fascinating sidestreet about the inclusion of caste in the census. More on BBC and Frontline.)
The latter part of the discussion foundered a bit, I think. People wanted to know more about the design of technological systems, and to learn about the limitations and possibilities of trying to build privacy in from the beginning. Malavika Jayaram opened up some great questions around this, including asking whether in practice there’s a difference between a coder’s notion of privacy and the organisational notion of privacy when it comes to designing systems. She also asked whether there was a sense that technological design was replacing legislation (similar to Lessig’s argument around code becoming law), and what the appropriate response to this might be: should we work at the level of system design, or aim to create better regulatory systems?
While some of the attendees had some good experience to draw on, I think perhaps there were not quite enough people from a technical background there to get a good sense of how the process of system design works across different systems (but perhaps this was just because my own attention was lagging by the afternoon).
There was also an assumption that came up in a lot of the talk about technical design: one or two participants referred to the necessity of separating out ethical/moral issues from technological ones. I’m not sure whether this is actually possible. It seems more likely that a design process in which you believe ethical issues are not present is simply one where the ethical implications are hidden, assumed, and/or unquestioned.
As I’m sure this post makes clear, I think far more questions were raised than were answered. These are useful questions, for academics, designers, and activists. I’m looking forward to seeing other posts on the discussion, especially as I’ve missed whole chunks of the discussion here. It’ll also be good to see what comes out of CIS and Privacy International’s future work in the area.
—-
Note: I was going to use some kind of ridiculous pun in the title, like, “shining a light on privacy,” but I just couldn’t bring myself to do it. I worry this means I’m not cut out for academia, which breeds puns like rabbits.
Thank you to Flickr users Urban Combing and *cough* MrSexyPantsATX for their photos.
Loved your summing up! Thank you. Just a thought: Could society be moving towards something like Universal Privacy so far as system design, technology or coding is concerned (with reference to Universal Design)? And would that work in terms of setting the framework for a more transparent governance and access infrastructure?
This sounds interesting, but could you explain what you mean by “Universal Privacy”? It’s not a term I’ve come across before, as most of my research isn’t closely focused on either design or privacy.