When webcams first started to appear, many of them shipped with simple games that incorporated basic image processing tools – such as motion detection – that let players engage in augmented reality “webcam controlled gaming”. In contrast to the more elaborate forms of augmented reality, where digital objects are overlaid on tracked, registered images (as described in Introducing Augmented Reality – Blending Real and Digital Worlds), a typical webcam game might simply superimpose ‘balloons’ on top of a video image of the player and require the player to jump around and ‘pop’ the balloons.
The premise behind many of these games was that if a moving object (as captured by the webcam) moved to an area of the screen occupied by a digital object (such as a ‘balloon’), then the real world player was deemed to have hit that digital object. That is, if the moving player image is at the same part of the screen as a digital object, a ‘collision event’ is raised – just as if a player controlled game character had collided with the object in a ‘normal’ game – and some action is taken as a result (such as the balloon being ‘popped’).
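The idea can be sketched in a few lines of code. The following is a minimal, illustrative example – not any particular game's actual implementation – in which frames are modelled as small grids of grayscale pixel values, motion is detected by differencing two consecutive frames against an assumed brightness threshold, and a ‘collision event’ fires when any moving pixel falls inside a balloon's bounding box:

```python
# Minimal sketch of webcam-game motion detection via frame differencing.
# Frames are 2D grids (lists of lists) of grayscale values; the function
# names, the threshold value, and the region format are all illustrative
# assumptions, not a real game engine's API.

THRESHOLD = 30  # minimum brightness change that counts as "motion" (assumed)

def motion_mask(prev_frame, curr_frame, threshold=THRESHOLD):
    """Return a boolean grid marking pixels that changed between frames."""
    return [
        [abs(c - p) > threshold for p, c in zip(prev_row, curr_row)]
        for prev_row, curr_row in zip(prev_frame, curr_frame)
    ]

def collides(mask, balloon_region):
    """Raise a 'collision event' (return True) if any moving pixel lies
    inside the balloon's bounding box (top, left, bottom, right, inclusive)."""
    top, left, bottom, right = balloon_region
    return any(
        mask[y][x]
        for y in range(top, bottom + 1)
        for x in range(left, right + 1)
    )

# Two tiny 4x4 frames: the player "moves" into the top-left corner.
prev = [[0] * 4 for _ in range(4)]
curr = [[0] * 4 for _ in range(4)]
curr[0][0] = 200  # a bright moving blob appears at pixel (0, 0)

mask = motion_mask(prev, curr)
print(collides(mask, (0, 0, 1, 1)))  # balloon in top-left -> True (popped!)
print(collides(mask, (2, 2, 3, 3)))  # balloon in bottom-right -> False
```

A real game would do the same thing per video frame at camera resolution, usually with some blurring and noise suppression first – which is exactly why this style of processing was computationally expensive at the time.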
As many of the algorithms used to perform motion detection are computationally expensive, it was no surprise that one of the early webcam games was promoted by chipmaker Intel, who were keen to demonstrate how powerful their processors were at the time. As this quote from Justin Rattner, Intel’s chief technology officer, in a Business Week article from December 2007 suggests, the trend toward using increasing computer processing power to implement ever more powerful video based control systems may still hold true: “We imagine some future generation of [Nintendo’s] Wii won’t have hand controllers,” says Rattner. “You just set up the cameras around the room and wave your hand like you’re playing tennis” (Supercomputing for the Masses).
Like this, maybe? Camspace:
As far as the ‘user’ is concerned, what are the main similarities and differences between the simple motion detection used in basic webcam games and the more elaborate motion tracking techniques required for augmented reality and motion capture?
As with many other technologies that have left the controlled environment of the lab and made it into everyday use, it is worth watching how artists are making use of these technologies in their own artworks – and in the halfway house between the lab and the everyday world that is the public art gallery – to get an idea of how we might interact with these systems in the (near) future.
For example, the following video shows Animata, a “real-time animation software” toolkit that has been “designed to create interactive background projections for concerts, theatre and dance performances”.
Keep an eye out for installations in your local gallery, arts centre or media centre that make use of video based controllers – they are an excellent way of exploring some of the issues and ideas around how we might interact in the future…