You can’t have an informed discussion, especially in the legal context, without first defining the terms that you’re using. This blog is going to talk a lot about “augmented reality” (or “AR”), so it’s important to understand exactly what we mean by that phrase. I’ve already given one definition, “overlaying digital data on the physical world,” but let’s see if we can unpack that a little further.
The subject of the phrase is “reality.” That’s the thing being “augmented” by AR technology. So what do we mean by “reality” in this context? Obviously, we could take this in several directions. For example, when asked recently to give an example of “augmented reality” that the general public could easily understand, one leading AR commentator responded (perhaps jokingly): “drugs.”
That’s not what the emerging AR industry has in mind. It doesn’t encompass the dream worlds of such films as Inception or Sucker Punch, or a drug-enhanced vision quest. Poetic license aside, we’re not talking about mental, emotional, spiritual, or metaphysical “reality” when we discuss the latest AR app. Instead, we mean the actual, physical world we all inhabit.
What, then, does it mean to “augment” that reality? Starting again with what it doesn’t mean, it’s important to note the distinction between AR and virtual reality, or VR. This more-familiar term describes a completely self-contained, artificial environment. Think Tron or The Lawnmower Man, or the online worlds of Second Life and World of Warcraft. The actual, physical surroundings of the person experiencing the simulated environment don’t factor into the equation.
AR, then, is a blending of VR with plain old physical reality. The American Heritage Dictionary defines the verb “augment” as “to make (something already developed or well under way) greater, as in size, extent, or quantity.” That’s what AR does. It uses digital (or “virtual”) information to make our experience of actual, physical reality “greater.” It doesn’t create a brand-new, standalone plane of existence; it simply adds to the information we already process in the physical world. (This is an objective description, of course; whether AR makes our experience subjectively “greater” promises to be a fascinating and ongoing debate.)
Tying this understanding of “augmented” back into the word “reality” shows why it’s important to define our terms. How does this technology increase the “size, extent, or quantity” of our physical reality? To answer that question, we need to recall how it is that we experience the physical world. And the answer, of course, is through our five senses: sight, smell, touch, taste, and hearing. “Augmented reality,” therefore, is technology that gives us more to see, smell, touch, taste, or hear in the physical world than we would otherwise get through our non-augmented faculties.
This definition perfectly describes the examples of AR that already exist. Those include the yellow first down line in NFL broadcasts; “magic mirrors” that superimpose eyewear or clothing over our physical reflections; walking directions superimposed on the sidewalk in front of us; and data fields (such as home prices or sex offender registry profiles) that appear to float in the air next to a particular building.
But this definition also encompasses applications that have barely begun to be conceived, much less created. The vast majority of AR apps in existence or in the planning stages involve only one physical sense: sight. They overlay virtual imagery on top of what we already see with our naked eyes, essentially mooting the need for a computer monitor. The real world becomes our monitor when viewed through AR-enabled devices. This in itself will be monumentally useful–but there is more.
What about using AR to augment reality for those whose ability to perceive the world (through one or more of the five senses) is impaired? I heard this question posited at the AR Immersion 2010 Conference hosted by Total Immersion, one of the biggest players in the AR field, and it’s a noble question. Some such apps are on the way, such as this one that would translate shapes and colors into pitch and volume for the blind, and DanKam, which assists people with color blindness. Many more examples are surely just around the corner.
It seems inevitable that sound will play a bigger role in AR as sight-based apps become more mainstream. After all, sight and sound already go hand-in-hand in our everyday experiences. Perhaps devices that identify people by facial recognition or iris scans will be supplemented by identifying voiceprints as well.
Touch (or “haptic”) technology will also follow along. Minority Report-inspired gloves will inevitably replace the computer mouse as AR optics replace the computer monitor. As I’ve opined elsewhere, I don’t see AR technology becoming fully mainstream until using it to interact with data becomes as easy as looking at it and touching it.
What about the remaining (and related) senses: smell and taste? Here is where I’ve yet to see even the speculative AR literature tread. According to the definition above, devices that expand these senses are just as much “AR” as anything else. I doubt that the demand for these features, or the technology to offer them, is quite as advanced as some of the other examples. But I’ll be fascinated to see what a mature AR industry comes up with in order to expand into these realms.
What do you think? Does my definition capture the meaning of “augmented reality”? And are you excited by its potential?