On October 22, 2012, the Federal Trade Commission released a report entitled “Facing Facts: Best Practices for Common Uses of Facial Recognition Technologies”. The FTC has had its eye on this technology for some time, at least since the workshop it held on the subject in December 2011, aware that it is being implemented by a wide variety of industries.
Among the privacy issues that concern the FTC most is “the prospect of identifying anonymous individuals in public.” This prospect became eerily real for me earlier this year aboard a cruise ship. It used to be that ship photographers had to post their photos in a massive onboard gallery that patrons spent hours browsing through, trying to pick out the pictures they appeared in. No more. This time, my digital folder was updated in near-real time with new photos every day, using software that had tagged my face or even the faces of others in my party.
Chances are that I signed something at some point allowing the ship to do this, although I’m not sure US privacy laws would hold much sway in international waters anyway. And one fundamental precept of those laws is that there are no “anonymous individuals in public [places]”; being publicly visible pretty well eliminates any expectation of privacy you might hold. But the FTC also worries about “data [being] collected [that] may be susceptible to security breaches and hacking.” And there’s no doubting where the report’s authors got their inspiration; the report opens with a quote from Minority Report.
Because the technology is “young,” the FTC sees this as the perfect time to publish its expectations, “to ensure that as this industry grows, it does so in a way that respects the privacy interests of consumers while preserving the beneficial uses the technology has to offer.” This report does not have the force of law, but you can bet that it will influence the decision-making processes of FTC administrative law judges and others evaluating novel allegations of “deceptive advertising practices” involving facial recognition.
Although the report characterizes its recommendations as “best practices,” it doesn’t do much to actually reduce its discussion to practice. Rather, the report loosely follows the theme of the following three “principles”:
1. Privacy by Design: Companies should build in privacy at every stage of product development.
2. Simplified Consumer Choice: For practices that are not consistent with the context of a transaction or a consumer’s relationship with a business, companies should provide consumers with choices at a relevant time and context.
3. Transparency: Companies should make information collection and use practices transparent.
Honestly, these “principles” strike me as so vague as to almost be counterproductive. They are intuitive to anyone making a modicum of effort to incorporate privacy concerns into a facial recognition application. And as a result, this recitation will encourage nothing more than a modicum of effort to protect privacy. And the technology itself is so “young” that efforts to guide it remain purely speculative at this point.
I’m not alone in being uncomfortable with this report. The Commission adopted it on a 4-1 vote. The dissenting commissioner, J. Thomas Rosch, wrote that “the Report goes too far, too soon.” He made three points. First, he thinks that the report fails to identify any “substantial injury” threatened by facial recognition technology. Second, he finds it premature, because there is no evidence that any abuses of the technology have yet occurred. Third, he believes the recommendation to provide consumers with “choices” anytime the technology doesn’t fit the “context” is unworkable, given the difficulty of assessing consumer expectations. As a result, he says, it amounts to an overly broad “opt-in” requirement.
This report is the first official word on facial recognition privacy, but it is sure not to be the last.