Case study: Privacy

(Adapted with permission from An Introduction to Software Engineering Ethics by Shannon Vallor and Arvind Narayanan.)

Before starting this experience, please go through all of the “Foundations” readings and experiences.

Along with intellectual property and copyright concerns, privacy is one of the most commonly discussed issues of ethical and legal concern with software. There are many definitions of what constitutes privacy: control over one’s personal information; the ‘right to be forgotten,’ to be left alone, or to have a measure of obscurity; the integrity of the context in which your personal information is used; and the ability to form your own identity on your own terms. Each of these, along with many other potential definitions, captures something important about privacy. There is also increasing debate about the extent to which new technologies are changing our expectations of privacy, or even how much we value it.

Regardless, privacy is in many contexts a legally protected right and, in all contexts, among those interests that stakeholders may legitimately expect to be acknowledged and respected. The pressures that Web 2.0, ‘Big Data,’ cloud computing, and other technological advances are putting on privacy will continue to make headlines for the foreseeable future, and software engineers will continue to struggle to balance the legitimate desire for expanding software functionality with the ethical requirements of privacy protection. This is further complicated by the spread of ‘dual-use’ technologies with open-ended and adaptable functionalities, and of technologies that offer a scaffold upon which third-party apps can be built.

Among the most famous cases to reveal these challenges is Google Street View:

In 2007, Google launched its Street View feature, which displays searchable aerial and street-level photographs of neighborhoods, city blocks, stores, and even individual residences. From the very beginning, privacy concerns with Google’s technology were evident; it did not take long for people to realize that the feature displayed photographs of unwitting customers leaving adult bookstores and patients leaving abortion clinics, children playing naked, adults sunning themselves topless in their backyards, and employees playing hooky from work. Moreover, it emerged that these photos were being used by burglars and other criminals to identify ideal targets. Although Google did remove photos of some sensitive locations, such as domestic violence shelters, it was initially very difficult for users to request removal of photos that compromised their privacy. After an outpouring of complaints and media stories about its privacy problems, Google streamlined the process for requesting image removal. This still presupposed, however, that users were aware of the breach of their privacy.

In 2010, Street View became the center of a new privacy scandal: it was discovered that software used in Google vehicles doing drive-by photography had been collecting personal data from unencrypted Wi-Fi networks, including SSIDs, device identifiers, medical and financial records, passwords, and email content. Initially Google claimed that this data had not been collected; later it said that only ‘fragments’ of such data had been retained; eventually it conceded that complete sets of data had not only been collected but stored. At one point Google blamed the breaches on a single ‘rogue engineer,’ though it later emerged that he had communicated with his superiors about the Wi-Fi data collection. As of 2012, twelve countries had launched formal investigations of Google for breaches of privacy or wiretap laws, and at least nine had determined that their laws were violated. In the U.S., the Federal Communications Commission fined Google $25,000 for obstructing its investigation into the matter. More recently, Google settled a lawsuit brought by 38 states over the breaches for $7 million (a tiny fraction of its profits). As part of the settlement, Google acknowledged its culpability in the privacy breaches and promised to set up an annual “privacy week” for its employees, along with other forms of privacy training and education.

Questions for discussion

  1. What forms of harm did members of the public suffer as a result of Google Street View images? What forms of harm could they have suffered as a result of Google’s data-collection efforts?

  2. Did Google’s engineers violate either the ACM Code of Ethics or the Software Engineering Code of Ethics and Professional Practice? If so, how?

  3. What ethical strategies could Google engineers and managers have used to strike a better balance between functionality and ethical design?

  4. What can Google do to prevent similar privacy issues with its products in the future?