As my readers are well aware, augmented reality marketing campaigns are now on the radar of consumer advocacy groups. Last month, four such groups filed a complaint claiming that a Doritos campaign involving augmented “virtual concerts” was “too immersive” for teenagers to handle, and that it “deceptively” blurred the line between advertising and entertainment.
Marketers should take heed of these claims. Regardless of their merit in the Doritos case, some other aggrieved party is likely to make them again in the future, over some other marketing campaign, because “immersiveness” is an essential quality of AR.
But no two cases are exactly the same, and some plaintiffs are more creative than others. That raises two questions: in what other ways could an AR marketing campaign be alleged to be “deceptive”? And what causes of action might another plaintiff bring besides the Federal Trade Commission complaint lodged against Doritos?
One likely candidate is a lawsuit alleging “false advertising.” The federal Lanham Act (which is also the source of federal trademark law) defines false advertising as “any false designation of origin, false or misleading description of fact, or false or misleading representation of fact, which … in commercial advertising or promotion, misrepresents the nature, characteristics, qualities, or geographic origin of his or her or another person’s goods, services, or commercial activities.”
To prevail, a plaintiff must prove that the defendant made a false or misleading statement of fact about a product or service, and that this statement was likely to influence a customer’s purchasing decisions. In practice, though, defendants responding to such complaints end up shouldering an expensive burden to show that their statements (or implications) were true and not misleading. Quite a few of these cases have been brought over the years; Prof. Rebecca Tushnet’s 43(B)log, one of the leading resources on this area of law, is up to nearly 900 entries under the “false advertising” category.
How might AR be used to “misrepresent the nature, characteristics, [or] qualities” of goods or services? To answer that question, let’s phrase it another way: how might representations made via AR get the facts wrong?
One obvious answer is “mistakenly.” AR remains an emerging technology with a great deal of development still ahead of it, and there are currently far more ideas about how to apply the technology than there is hardware capable of implementing those ideas. To the general public, the camera capabilities of smartphones and tablets may seem to be maturing rapidly; to AR developers waiting for markerless object recognition, millimeter-precise GPS, and stereoscopic machine vision, progress moves at a snail’s pace.
Consequently, some over-ambitious AR apps may try to convey or recognize more data than they’re able to, resulting in blocky, choppy, imprecise output. (For example, the jerky floating boxes that characterize most location-based AR apps on Android devices.) Under the wrong set of circumstances, that output might end up conveying information that is false and has a material impact on a consumer.
Another answer is “by cutting corners,” or “by over-polishing.” Take, for example, the incident this summer in which British regulators banned L’Oreal from running ads containing two photos of Julia Roberts and Christy Turlington. L’Oreal’s marketers had digitally enhanced both photos to the point that the company could not prove, to the regulators’ satisfaction, that the advertised makeup products were able to produce results like the ones shown.
By definition, digitally enhancing physical reality is a fundamental element of what AR does. This type of situation, therefore, is one that AR marketers could very easily get themselves into if they’re not careful (and if they don’t run their content by trained lawyers first).
Of course, more than just marketers should be concerned about making false statements of fact that injure another person or company. The law of defamation (a.k.a. libel or slander) provides a cause of action against anyone who publishes a demonstrably false statement of fact that injures another’s reputation. We usually think of this cause of action in terms of a slander against an individual’s reputation, but businesses can also bring defamation claims against those whose false statements injure the reputation of their products or services.
Therefore, augmented representations made of a product could potentially defame that product’s manufacturer, regardless of whether the augmented content was in an advertisement or some other context.
How might this scenario play out? As one example, take this excerpt from a short story about AR law published in 2007.(*) David, the protagonist, is an attorney in the near future bringing a defamation claim against a company for misrepresenting his client’s product in augmented space:
Wysiwyg—among the few manufacturing businesses left in the area—was David’s client. Its sales had dipped when the defendant, a competitor, issued press releases questioning Wysiwyg’s quality standards and business practices. David sued for defamation, and now sought to add an additional count based on his recent discovery that the defendant’s comments had been published in [augmented form] as well. …
Predictably, [the competitor’s lawyer] stressed that the videos underlying the original complaint and their 3-D versions contained identical statements. He therefore argued that they collectively gave rise to only one cause of action under defamation law’s “single publication rule.”
“Concededly,” said David in response, “the virtual world is still a place where, from the law’s point of view, the streets have no name…. But publishing the statements in virtual form adds significant content that is also defamatory. For example, the speaker is seen holding a part allegedly from Wysiwyg. A virtual viewer can pause and examine that object in three dimensions, gaining a significantly poorer impression of my client’s workmanship. And someone wearing v-gloves . . . could even pick the thing up and examine it. Virtual actions, in this case, speak louder than words.”
Judge Darling stroked his chin and nodded. After a few additional questions, he granted David’s motion.
The story sets this scene in the year 2022, but I’m willing to bet that we’ll see something like it happen well before then.
How about you? What potential for deceptive or misleading speech do you see in the augmented digisphere?
* In the interest of full disclosure, this is from a story I wrote for a contest sponsored by the State Bar of Michigan.