Google StreetView was released in Australia on 6 August 2008. It has a range of potential benefits, and is likely to be much-used for both social and business purposes.
But a number of risks arise from a service of this nature:
- to people’s safety (e.g. being found by people who don’t like you, disclosing female-only households, disclosing the presence of young children)
- to building security (e.g. facilitating break-in and escape)
- to public safety (e.g. providing information about places where people congregate)
- to personal privacy (even something as minor as ‘just being seen’ outside the home or in other locations; there is no simple definition of privacy, and different people have different sensitivities and different thresholds)
The conventional business processes that need to be applied in these circumstances are Risk Assessment and the specialised form of it called Privacy Impact Assessment (PIA).
A PIA is an open process, which commences with the provision of information to consumer and privacy advocacy organisations and representatives (e.g. through focus groups). This is followed by discussions and submissions, reflection of the feedback in the design and associated procedures, and publication of a report on the process and its outcomes. A PIA takes into account existing laws. But, especially with advanced services, privacy laws lag reality by decades, and hence a PIA’s primary focus is people’s needs and expectations.
It appears that no formal PIA has been conducted on either the Maps service or the StreetView aspect of it. A meeting was held between APF Board members and a Google Australia executive in May 2008. APF issued a preliminary Policy Statement at that time (which is superseded by this version).
APF expressed its concerns, the company provided some information about the design and business processes, APF gave its feedback, and the company noted it, reflecting at least some of APF’s submissions in the service as launched. This was a valuable exercise, from the perspectives of both Google and privacy. But it was not an adequate substitute for the comprehensive PIA that should have been undertaken much earlier in the development of the service.
This document summarises the Australian Privacy Foundation’s policy position. Much of it dates from the original meeting in May, but it has been considerably updated to reflect subsequent developments. The document commences by identifying concerns about operational aspects of the service, and then outlines key concerns that have not been addressed.
At the meeting in May 2008, it was apparent that Google was aware of many of the issues that the service would give rise to, and was able to provide positive responses to many of them. The service, when it was released, reflected the discussions. The features of the service in Australia are in several ways more privacy-protective than those of the earlier versions released in the USA and a couple of other countries.
The APF welcomes Google Australia’s efforts to address those concerns.
Specific concerns, and design-features and business processes that Google has adopted in order to address them, include the following:
- image-precision – addressed by publishing only relatively low-resolution images
- image-intensity – addressed by capturing images only approximately every 10 metres
- image-frequency – addressed by taking care in the scheduling of re-visits, and ensuring that excluded images do not re-emerge because of re-drives
- visibility of faces – addressed by the automated blurring of faces (with some ‘false positives’, such as a horse’s head being blurred, and some ‘false negatives’, i.e. people’s faces not being blurred), together with the discovery by individuals of images they consider inappropriate and their use of the take-down request mechanism
- visibility of vehicle number-plates – addressed by relatively low-resolution images (but with many cases of number-plates remaining visible), and discovery by individuals of images they consider inappropriate and their use of the take-down request mechanism
- private locations – addressed by only driving and capturing images on public roads (with some errors arising, e.g. from mistakes in existing map databases, and human error)
- sensitive locations – addressed by defining categories of sensitive location, identifying instances of each category, and avoiding them (with some errors arising)
- image escape – addressed by preventing image-saving (although the technological opportunity to capture screen-shots (‘screen-scraping’) remains, so some images are inevitably recorded outside Google StreetView)
- unforeseen problems – addressed by providing a prompt, ‘no questions asked’ take-down mechanism (although there have been reports of requests for blurring or take-down not being done, or not being done promptly; and whether it will be performed consistently over the long term remains to be seen)
Several of these design-features and operational processes have not, however, functioned sufficiently well to fully solve the problems they are meant to address. Moreover, because the service has not been subjected to a comprehensive PIA, it is not clear whether all operational risks and privacy-relevant design-features have been identified.
Key Concerns That Have Not Been Addressed
Based on the limited assessment undertaken, discussions with Google, and the experience gained in the first two days after launch, the following key concerns exist.
1. The Difficulty of Finding Where to Report a Problem
The primary way to navigate to the page where a request can be entered is from the StreetView window. If you click on the words ‘Street View Help’ and look two-thirds of the way down the contents of the pop-up window, you can see ‘Report inappropriate image’. If you click on that, you can fill in a form.
APF argued that this was too hard to find, and that the request should be at the same level as the image itself. Since the service was released, it has become apparent that many people are unable to find the link.
There are other ways, including the StreetView home-page which includes a link to ‘How to Report an Image’. But navigation throughout the Google site is extremely haphazard, and such pages as exist are very difficult to find.
APF reiterates its proposal that the reporting mechanism should be much easier to find.
2. The Lack of a Complaints Channel
Google goes to great lengths to avoid having direct contact with its users. The limited contact facilities sit beneath the ‘Help’ hotlink, which offers ‘Contact Options’, which in turn offers ‘support team – English (US)’. That directs the user to ‘Help Groups’ (which mainly depend on mutual support among users), and offers a short list of specific reporting capabilities, with the statement ‘We offer email support only for the following cases …’. In short, there is no contact-point for complaints.
Admittedly Google has millions of users, and many of them have no formal contract with it, and make no direct payments to it (although they may be an indirect source of revenue for the company). But the experiences of the days following the launch showed that there are inadequate means for users to communicate with the company.
People who were concerned about various aspects of the service had to resort to contacting the media and the APF (and perhaps, with little likelihood of any success, the Office of the Privacy Commissioner), or guessing the (non-standard) email-address formats used by Google staff.
APF urges that much better arrangements for handling complaints be put into place.
3. The Lack of Consolidated Privacy Information

A page exists, but it contains nothing of substance. One or more videos exist that include privacy-related material. There are also scattered comments in official and semi-official blogs, and comments have been made in media releases and in conversations with journalists. These are all very difficult to locate, and all are couched in language that is warming, but that makes no actual commitments.
There is no single location at which even the formal statements are consolidated, and the many and varied informal sources may or may not be discoverable when they are needed.
4. The Lack of Formal Undertakings by the Company
The most serious of the issues that have surfaced to date is the company’s failure to provide actionable assurances about the service.
The (good) design features mentioned above are not formal commitments by the company. They can be changed at will, at any time, without notice, and without consultation with anyone.
As one example among a great many, what the company says about the take-down facility is:
“Report Inappropriate Image:
“Google takes concerns about its services very seriously. Please use the link below to report concerns about an inappropriate street view.”
This provides no assurance whatsoever that reports will even be considered, let alone processed.
In general, users probably have no contract with the company, and hence cannot enforce compliance with the terms of any contract. Lawsuits by consumers against large, rich corporations are in any case a highly ineffective way to ensure fair play.
The appropriate mechanism is for the company to publish its undertakings, expressed in unequivocal form, such that they can readily give rise to actions under the Trade Practices Act, in particular s.52 and s.59. That enables users to formalise complaints to the company, and if necessary to regulators and the courts, if the company fails to comply with its undertakings.
The APF reiterates its submission to the company that it formalise and tighten its loose statements so that they represent actual protections under Australian law.
The discussions held in May 2008 were highly beneficial, to the company and the public alike. The conduct of a formal Privacy Impact Assessment process in relation to significant initiatives like this can deliver even greater benefits.