Typically, head-up displays of the sort referred to in Introducing Augmented Reality Apparatus – From Victorian Stage Effects to Head-Up Displays present one or more layers of “dashboard” style information to a forward-facing viewer without requiring them to look down at an instrument panel. But augmented reality displays can go further: by registering or identifying items within the visual scene, they can overlay information that relates directly to those entities, or transform the scene itself, in real time. In this section we will introduce several examples of how augmented reality has been implemented, and the uses to which it has been put, over the last few years, and identify further ways of describing the various components that make up a mixed reality system.
In the examples of augmented reality that follow, try to relate the “problem” being solved to the sort of AR apparatus being used, as described in Taxonomies for Describing Mixed and Alternate Reality Systems. Ask yourself why that technique might have been chosen and whether it appears to be the most appropriate one. Would alternative implementations also work, and if so, how would they compare in terms of their relative advantages and disadvantages?
Projection based displays
The augmented reality church organ/equaliser we met earlier represents an example of what the researchers Ramesh Raskar, Greg Welch and Henry Fuchs referred to as Spatially Augmented Reality (SAR) (Raskar, R., Welch, G. and Fuchs, H., “Spatially augmented reality”, First IEEE Workshop on Augmented Reality (IWAR’98), pp. 11–20, 1998):
In Spatially Augmented Reality (SAR), the user’s physical environment is augmented with images that are integrated directly in the user’s environment, not simply in their visual field. For example, the images could be projected onto real objects using digital light projectors, or embedded directly in the environment with flat panel displays.
The Virtual Watershed Table / Augmented Reality Sandbox provides another example of SAR, in which the vertical relief of a table of sand, moulded in three dimensions by the user, is tracked in real time by a Microsoft Kinect device. A virtual model of the extracted sand surface is then used as the basis for a topographic map projected onto the surface of the sand, along with animated displays of water flowing across the sculpted model.
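The Kinect-to-projection loop at the heart of the sandbox can be sketched in outline. The code below is a toy illustration, not the sandbox’s actual implementation: it assumes a depth frame measured in millimetres, and simply quantises inverted depth into elevation bands that a renderer might colour as contour layers before projecting the result back onto the sand.

```python
import numpy as np

def depth_to_topography(depth_mm, min_mm=800, max_mm=1200, bands=8):
    """Map a Kinect-style depth frame (millimetres) to discrete elevation bands.

    Closer sand (smaller depth) is higher elevation. Returns an array of band
    indices that a renderer could map to contour colours. The depth range and
    band count are invented values for illustration.
    """
    clipped = np.clip(depth_mm, min_mm, max_mm)
    # Invert so that higher sand (nearer the sensor) gets a higher band index.
    elevation = (max_mm - clipped) / (max_mm - min_mm)
    return np.minimum((elevation * bands).astype(int), bands - 1)

# A toy 2x3 "depth frame": a mound of sand in the centre column.
frame = np.array([[1200, 900, 1200],
                  [1150, 850, 1150]])
print(depth_to_topography(frame))
```

The real system does considerably more, of course, including smoothing the noisy depth signal over time and simulating water flow over the reconstructed surface, but the core transformation from sensed shape to projected imagery follows this pattern.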
SAQ: What difficulties might be associated with projection based displays?
Answer: One obvious problem is that the viewer may occlude the projected imagery, casting a shadow over parts of it. Another is that a projection system is required, and it must be calibrated so that the digital imagery maps correctly onto the physical surface it is meant to augment.
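One common way to approach that calibration step is to fit a geometric transform from surface coordinates to projector pixels using a handful of measured point correspondences. The sketch below uses a simple least-squares affine fit (a full planar homography would also account for perspective distortion); all of the coordinate values are invented for illustration.

```python
import numpy as np

def fit_affine(src_pts, dst_pts):
    """Least-squares affine map from surface coordinates to projector pixels.

    Solves [x y 1] @ A = [u v] for the 3x2 matrix A, given matched points.
    """
    src = np.hstack([src_pts, np.ones((len(src_pts), 1))])
    A, *_ = np.linalg.lstsq(src, dst_pts, rcond=None)
    return A

def apply_affine(A, pts):
    """Map surface points through the fitted transform."""
    return np.hstack([pts, np.ones((len(pts), 1))]) @ A

# Hypothetical calibration data: the corners of a 60cm x 40cm surface,
# matched against the projector pixels observed to land on them.
surface = np.array([[0, 0], [60, 0], [60, 40], [0, 40]], dtype=float)
pixels  = np.array([[80, 60], [1200, 70], [1190, 820], [90, 810]], dtype=float)

A = fit_affine(surface, pixels)
centre = apply_affine(A, np.array([[30.0, 20.0]]))  # centre of the surface
```

In practice the correspondences would be gathered automatically, for instance by projecting known patterns and detecting them with a camera, but the principle of fitting a transform to matched points is the same.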
Augmented Reality Apps
Although the AR Sandbox provides a compelling demonstration of how augmented reality can be used to enrich a learning or discussion activity, augmented reality applications have yet to prove themselves in the consumer marketplace. Do users really want to stand holding up a camera as a see-through display, or would they be happier grabbing a photo and then looking at an augmentation or transformation of it?
A good example of this is the Word Lens augmented reality application, which was acquired by Google and is now part of Google Translate. It not only detects text in a visual scene in real time, but also identifies the language and translates the text as required, replacing the original text with the translated version.
If you’ve ever found yourself in a foreign city with a script you don’t recognise, such as Greek, or Russian, you might appreciate the value of this sort of application! But does this really need to be an augmented reality video application? Or would it work equally well if the user looked up to take a photo of the street sign that was causing them confusion and then looked down at their phone to inspect a translated version of it, much as they might preview a photo they had just taken?
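Whichever interaction style is chosen, the underlying pipeline is much the same: detect text regions, identify the language, translate, and re-render the result over the original. The sketch below stubs out the OCR and translation stages with toy lookups (the word list, region format and language test are all invented) simply to make the stage boundaries explicit.

```python
# A minimal sketch of a Word Lens-style pipeline. A real system would use an
# OCR engine and a translation service; here both are stubbed with toy data.

GREEK_WORDS = {"ΕΞΟΔΟΣ": "EXIT", "ΣΤΑΣΗ": "STOP"}  # toy "translation service"

def detect_text(region):
    # Stand-in for an OCR step: assume the region already carries its text.
    return region["text"]

def identify_language(text):
    # Toy heuristic: Greek script occupies the U+0370 to U+03FF block.
    return "el" if any("\u0370" <= ch <= "\u03ff" for ch in text) else "en"

def translate(text, lang):
    return GREEK_WORDS.get(text, text) if lang == "el" else text

def augment(frame):
    """Replace each detected text region's content with its translation."""
    for region in frame:
        text = detect_text(region)
        region["text"] = translate(text, identify_language(text))
    return frame

frame = [{"bbox": (10, 10, 120, 40), "text": "ΕΞΟΔΟΣ"}]
print(augment(frame))  # → [{'bbox': (10, 10, 120, 40), 'text': 'EXIT'}]
```

Note that nothing in this pipeline requires a live video feed: the same stages apply equally well to a single captured photo, which is exactly the design question posed above.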
SAQ: How would you categorise the previous examples of augmented reality in terms of the AR technology frameworks?
With a conceptual scheme (the technology framework) already in place for categorising the various approaches to implementing the optical components of an augmented reality system, we now need some way of talking about the visual components that make up the augmented reality scene.