
Digital Worlds in the Real World: Augmenting Reality with a 21st Century Take on Pepper’s Ghost

In Introducing Augmented Reality – Blending Real and Digital Worlds, I introduced the idea of augmented reality, in which digital graphical objects are overlaid on video images of real world scenes to give the appearance of digital objects inhabiting the real world. By overlaying the digital objects on top of ‘tracked’ real world objects, it is possible for a human puppeteer to enter the digital realm and both control and interact with digital animations. But what about the opposite case: digital characters entering the “real world” and joining human actors on a physical stage, rather than the actors having to move behind the video screen?

If you have ever been to a science discovery center, it’s quite likely that you’ll have seen an exhibit based around a piece of theatrical trickery known as Pepper’s Ghost.

The effect is used to make a ghostly apparition appear and disappear from a scene – see if you can find out how the effect works…

A recent twist to the illusion allows digitally projected 3D animations to come to life on stage:

The same trick can be used to create a feeling of telepresence, for example in the case of large business presentations:

Recalling the theatrical origins of the technique, this New York Times “presentation” describes a recent theatrical performance that uses the same effect: First Person Ghost Lighting

One company that is championing the ‘digital Pepper’s Ghost’ approach is the UK based Musion Systems Ltd (Musion Systems blog). The following video sequence shows how they create the illusion with their Musion Eyeliner system.

To what extent does this system represent something ‘new’ and to what extent is it just an extension of Victorian theatrical stagecraft?

For what sorts of game might this technique provide a compelling user interface? Are there any game genres where it is unlikely to be effective? Why?

Introducing Augmented Reality – Blending Real and Digital Worlds

The 1988 film “Who Framed Roger Rabbit” merged the worlds of human live action and classic Disney animation to present a world in which human actors and cartoon characters acted alongside each other (see trailer, or Amazon product listing).

The animations were painted on to the original “human action” film during a period of post production, but nevertheless, the result is still quite compelling.

Many film productions today also use post production techniques to add photo-realistic computer generated imagery (CGI) to a film, particularly in the area of special effects and ‘digital virtual set design’, but what if it were possible to actually interact with digital creations in real time?

Step in, augmented reality…

There are several augmented reality toolkits available on the web, many of which use the approach demonstrated in this BBC Radio 1 promotion:

A series of easily identified, high-contrast images are registered with the AR system (that is, the system is trained to recognise them) and then different movie clips are associated with those images. When the image is recognised, the video clip is overlaid on the image and starts to play. As well as videos, 3D computer graphics may also be superimposed on the detected image.
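The register-then-recognise loop described above can be sketched as a simple lookup from trained markers to overlay content. This is a toy illustration: the marker IDs, clip names and the list of “detected” markers are all invented, and a real AR toolkit (ARToolKit, ARTag and the like) does the genuinely hard part of actually spotting the markers in each camera frame.

```python
# Toy sketch of the "register, recognise, overlay" loop.
# Marker IDs and clip names are illustrative only; a real AR toolkit
# performs the computer-vision work of detecting markers per frame.

class ARRegistry:
    """Associates registered marker patterns with overlay content."""

    def __init__(self):
        self._overlays = {}

    def register(self, marker_id, overlay):
        """'Train' the system on a marker and attach content to it."""
        self._overlays[marker_id] = overlay

    def process_frame(self, detected_markers):
        """For each recognised marker in a frame, return the content
        that should be superimposed on it; unknown markers are ignored."""
        return {m: self._overlays[m]
                for m in detected_markers if m in self._overlays}


registry = ARRegistry()
registry.register("radio1_logo", "promo_clip.mp4")   # video overlay
registry.register("cube_marker", "3d_model.obj")     # 3D model overlay

# Suppose the detector found one registered marker and one unknown poster:
frame_overlays = registry.process_frame(["radio1_logo", "unknown_poster"])
# → only the registered marker gets an overlay
```

The same registry could just as easily map a marker to a 3D model as to a video clip, which is exactly the distinction the ARTag examples below illustrate.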

You can see more clearly how different patterns might be registered and associated with different 3D models in this page about the ARTag augmented reality system: ARTag. (See also the ARToolkit – warning: if you don’t know what a compiler is, this isn’t for you…)

One of the easiest ways of experiencing augmented reality is to try out the Fix8 animation tool that lets you animate your own appearance by registering key facial features and then animating on top of those: Fix8

(If you do have a go at creating a Fix8 movie, why not post a link back to it here as a comment to this post?! :-)

How many ways can you think of using augmented reality? Write down two or three ideas as a comment to this post.

To get you started, here’s how you might use augmented reality to support car maintenance:

…or maybe Lego car maintenance!

(Lego have also started experimenting with augmented reality kiosks that register a tag on a Lego box and then display a 3D animation of the model that can be constructed from that Lego set sitting on top of the box.)

Finally, here are a few ideas for augmented reality games: Top 10 augmented reality demos that will revolutionize video games. (Note that this list may be a little dated by now – if you manage to find any more recent examples, please post a link back to them in a comment to this post.)

So how does AR actually work? To explain that, I’ll need another post…

Friday Fun #15 Spore

A month away from the Digital Worlds blog, but I’m going to try to get back in the flow for a week or two, or at least stack up a few posts that can trickle out over the next few weeks… So to ease my way in, here’s a (late) Friday Fun post about the Spore Creature Creator.

If you haven’t heard about Spore, it’s an ‘ecosystem’ game (still in development) whose release has been hyped – and eagerly anticipated – for well over a year. Created by the same team that produced SimCity, a simulation game for growing and managing your own city (and which you can play in its original form online), Spore is the next step in simulation games, providing the opportunity to reach beyond the simulation of a city or civilisation, and “play with Creation”.

Did you spot the hype in the above paragraph?! ;-) The ethos of the game – which I take to be creating ecosystems that evolve over time – is reminiscent of an early ‘Artificial Life’ game from the UK, called Creatures. In the Creatures universe, players created creatures (‘Norns’) that developed and learned throughout their lifetime, and that could ‘breed’ with other Norns. Creatures is still available from Gameware Development (who also created the popular – and Creatures inspired – CBBC Bamzooki game), along with a free net-enabled version of the ‘game’: Docking Station Central.

The Spore Creature Creator looks like a lot of fun, if the user generated creations uploaded to the Sporepedia Gallery are anything to go by (I haven’t had a chance to play with it yet :-( ).

Notwithstanding the complexity of the game, the Creature Creator tool also represents a huge achievement in design terms, as this interview with Will Wright, the game’s creator, suggests: Will Wright talks Spore and defensive cows (Joystiq interview)

Friday Fun #14 Play the News, or For Another Purpose

Okay – first up an apology for there not being any posts over the last week. It’s half term, and there have been other more pressing things to do, unfortunately…

Anyway, I can’t not do Friday Fun, so here are a couple of serious games (I guess?) that you might like to try out.

First up is the Play the News game, in which you are presented with information relating to a story or situation that is in the news, and you choose one of several actions that people in the story might take. As well as comparing your predictions to other people’s, you also get scored according to how well your predictions turn out (that is, whether the course of action you selected is the one that actually happens).

To what extent is this approach likely to engage you in a news story – and potentially help you learn a little more about it – compared to something like the New York Times news quiz (which is also available as a social network (Facebook) application)?

The second game – or rather, set of games – that I’d like to mention appear on the beautifully named gwap.com site. That is, Games with a Purpose.

Several years ago, a game appeared on the web called the ESP game, which required two players who didn’t know each other, and who just happened to be online at the same time, to try to find matching words that described a particular picture. The intention was to help index images so that they could be found by search engines, and the approach represents a form of “human assisted computing”.
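The matching mechanic at the heart of the ESP game can be sketched in a few lines: two players who cannot communicate each type tags for the same image, and any tag they both enter independently is treated as a trustworthy label for the image index. This is a deliberately simplified illustration – the function and variable names are mine, and the real game adds timers, scoring and “taboo words” to keep labels varied.

```python
# Toy sketch of the ESP game's agreement mechanic: a tag entered
# independently by both players is assumed to be a good image label.
# Names are illustrative; the real game adds timers, scores and taboo words.

def agreed_tags(player_a_tags, player_b_tags):
    """Return the tags both players entered (case-insensitive match)."""
    return {t.lower() for t in player_a_tags} & {t.lower() for t in player_b_tags}


image_index = {}  # image id -> set of agreed labels, for the search engine

def index_image(image_id, player_a_tags, player_b_tags):
    """Store any agreed tags against the image and return them."""
    matches = agreed_tags(player_a_tags, player_b_tags)
    if matches:
        image_index.setdefault(image_id, set()).update(matches)
    return matches


# Two strangers label the same picture of a dog on a beach:
matches = index_image("img_042",
                      ["dog", "Beach", "ball"],
                      ["beach", "sea", "dog"])
# → the agreed tags "dog" and "beach" go into the index
```

Because agreement between two strangers is unlikely to happen by chance, the matched tags are a cheap, reliable signal – which is exactly what makes this “human assisted computing” rather than mere data entry.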

Anyway, Luis von Ahn, the creator of the ESP game, has just released several more games on the GWAP website:

  • Tag a Tune: both players hear a tune and have to describe it to each other; they then have to decide whether they are listening to the same tune. The purpose? Help a search engine learn more ways of finding songs (for example, whether they are happy or sad).
  • Verbosity: players take it in turns – one person describes a secret word, the other has to guess it.
  • Squigl: both players see the same image, and are presented with a word; they each trace round the object described by the word as it appears in the picture.
  • Matchin: two players are shown the same pair of images; each player picks the one they like the most.

Now who was it working for whom again?


The Machine is Us/ing Us, M. Wesch

Ah yes, we all work for the machine… ;-)

Friday Fun #13 Putty Puzzle

Putty Puzzle is another of those addictive puzzle games that you have to try just one more time. The idea of the game is to move blocks of putty around, a square at a time, in order to reach a goal square. The first time I tried it, I lost an hour…!

It’s available as a download, or online via a Java applet (I played with the applet)…

To what extent does the game itself teach you how to play the game during the lower levels?

The Uncanny Valley

Looking back at screenshots of some of the original video arcade games, and comparing them to the increasingly realistic imagery of games on the latest generation consoles, it is difficult not to be amazed at how much the visual appearance of the games has evolved. The advances in both computer hardware design and software development mean that today’s games hold the promise of photorealistic views in the not too distant future. But is this desirable? (see for example: Videogame Aesthetics: The Future).

Even animated movements themselves are becoming more realistic, through the use of motion capture techniques (as described in Realistic Movement with Motion Capture). However, when the motion capture to animation technique is not quite right, then the resulting animation can feel very off-putting.

For example, in the CGI movie Polar Express, audiences were left feeling uncomfortable by much of the animation, as this post by animator Ward Jenkins describes: The Polar Express: A Virtual Train Wreck (conclusion).

This effect has come to be known as the uncanny valley.


(Uncanny valley graph taken from: http://www.androidscience.com/theuncannyvalley/proceedings2005/uncannyvalley.html)

The term for this still unproven effect was coined by Japanese roboticist Masahiro Mori, based on his observations about people’s emotional responses to robotic or animated representations of living things. The claim goes that we are likely to have an increasingly positive emotional response to a representation as it becomes increasingly lifelike, until something ‘not quite right’ (i.e. unnatural, or ‘uncanny’) comes to our attention, at which point we become negatively disposed to, or even repulsed by, the object in question.
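The shape of Mori’s claimed response curve – rising affinity, a sharp dip just short of full realism, then recovery – can be illustrated with a toy function. To be clear, the numbers below are entirely invented to show the shape of the hypothesis; they are not Mori’s data (he offered none), just a rising trend with a Gaussian “valley” subtracted near the human end of the scale.

```python
# A toy, purely illustrative "uncanny valley" curve: affinity rises with
# human-likeness, dips sharply just before full realism, then recovers.
# The constants are invented to show the hypothesised shape, not measured data.

import math

def affinity(likeness):
    """likeness in [0, 1] -> toy 'emotional response' score."""
    rising = likeness  # broadly: more lifelike, more appealing...
    # ...except for a sharp dip centred at ~85% human-likeness:
    valley = 0.9 * math.exp(-((likeness - 0.85) ** 2) / 0.005)
    return rising - valley


# A character at ~85% human-likeness scores worse than either a clearly
# stylised one (0.5) or a (hypothetical) perfectly lifelike one (1.0).
stylised, almost_human, lifelike = affinity(0.5), affinity(0.85), affinity(1.0)
```

On this toy curve, the “almost human” character sits at the bottom of the valley – which is precisely the region games and CGI films risk wandering into.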

Read this article on The Uncanny Valley by Masahiro Mori (1970) Energy, 7(4), pp. 33-35 [Translated by Karl F. MacDorman and Takashi Minato].

Have you ever experienced the uncanny valley effect, for example, when watching a photorealistic computer generated animation?

(See more “motion portrait” animations here: Motion Portrait; or a full screen version of the above animation that follows your mouse cursor…)

As gamemakers pursue photorealism, there is the danger that their game characters will put off potential players if they stray into the uncanny valley, as Clive Thompson warns in his 2005 Wired magazine commentary “Monsters of Photorealism”.

For example, in Looking at Movies: The Uncanny Valley, an essay critiquing Polar Express, as well as other CGI movies, from the perspective of the uncanny valley, we get the following observation:

When applied to special effects in movies, the implications of the uncanny valley are clear: if a filmmaker strives for a very high level of verisimilitude in computer-generated characters, they may risk taking the humanlike resemblance too far, causing viewers to notice every detail of the characters’ appearance or movement that doesn’t conform to the way real human beings actually look or move. Our emotional response to these “almost human” characters will therefore be unease and discomfort, not pleasure or empathy.

If the filmmaker decides instead to render characters in a more stylized manner, clearly signaling that they are not supposed to appear “almost human,” we will notice, paradoxically enough, all the aspects of their appearance and behavior that resemble human beings, and we will be more likely to perceive these characters as more complex and more “human” characters than the characters that are designed to look nearly human.

We can extend the concept even further to acknowledge that, when an animated object or a creature that is clearly not human is shown onscreen exhibiting certain human traits or emotions, we may actually feel more sympathetic to that creature than we do to overly detailed “human” animated characters.

James Portnow takes a similar viewpoint with respect to games in this article: GAME DESIGN: The Uncanny Valley.

As computer animations – and robots – get ever more realistic, we naturally get more opportunities to test out the validity of the Uncanny Valley Hypothesis…

To what extent do you think that the uncanny valley is a plausible theory? In what ways do you think that computer games may be susceptible to the uncanny valley effect? If computer game characters can wander into the uncanny valley, so what?

See also: In Search of the Uncanny Valley, F.E.Pollick, published in USER CENTRIC MEDIA, Lecture Notes of the Institute for Computer Sciences, Social Informatics and Telecommunications Engineering, 2010, Volume 40, Part 4, 69-78, DOI: 10.1007/978-3-642-12630-7_8

Realistic Movement with Motion Capture

In Making Pictures Move, we saw how classical animation techniques could be used to bring a series of fixed images to life, and as a result give an impression of animated movement.

We also saw how real movement could be captured using photography, as Eadweard Muybridge’s freeze frame photographs of a galloping horse demonstrated.

Making animated characters move in a realistic way is still a major challenge to animators today. Whilst many animators do still work in a traditional way – watching themselves in a mirror, for example, to see how to piece together a series of movements – it is now increasingly likely that big-budget productions will use motion capture (“mo-cap”) techniques to film the motion of actors performing a particular movement or series of movements, and then use the captured movements to bring alive a digital character, much like a puppeteer might bring alive a wooden puppet.

How so? Watch this movie clip showing how motion capture can be used to put the human inside a digital puppet!

You may have noticed the unusual suit that the actor was wearing. The marks on the suit are high contrast areas that a computer can detect easily. By mapping different points on the filmed image of the suit with corresponding points on the digital avatar that we wish to animate, we can use filmed movements to drive the movements of the animated character.

Many motion capture techniques actually use pointwise markers on the motion capture suit. That is, individual markers are placed at different points on the suit, often the joints. The motion of the person’s joints can then be matched to the joints of the digital avatar.
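The marker-to-joint matching just described can be sketched as a simple per-frame mapping: each named marker on the suit is bound to a joint on the avatar, and each frame of tracked marker positions drives the corresponding joints. The marker and joint names and coordinates here are invented for illustration; real pipelines also solve for joint rotations, filter sensor noise, and cope with markers that disappear when occluded.

```python
# Toy sketch of pointwise mocap retargeting: each suit marker is mapped to
# an avatar joint, and per frame the tracked position drives that joint.
# Names and coordinates are illustrative; real systems also recover joint
# rotations and handle occlusion and noise.

MARKER_TO_JOINT = {
    "suit_left_elbow": "avatar_elbow_L",
    "suit_right_knee": "avatar_knee_R",
}

def retarget_frame(tracked_markers):
    """Map one frame of tracked marker positions onto avatar joints.
    Markers with no mapping (e.g. stray reflections) are ignored."""
    pose = {}
    for marker, position in tracked_markers.items():
        joint = MARKER_TO_JOINT.get(marker)
        if joint is not None:
            pose[joint] = position
    return pose


frame = {
    "suit_left_elbow": (0.31, 1.20, 0.05),
    "suit_right_knee": (0.10, 0.48, 0.22),
    "stray_reflection": (9.9, 9.9, 9.9),  # tracker noise, no mapping
}
avatar_pose = retarget_frame(frame)
```

Run over a whole sequence of frames, this mapping is what lets the filmed actor “drive” the digital puppet frame by frame.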

Another approach to motion capture is described in this research video on Practical Motion Capture in Everyday Surroundings (described in this paper from SIGGRAPH 2007).

Motion capture is widely used in films that make heavy use of CGI – computer generated imagery – as well as games. However, whereas Hollywood may use motion capture to create animated movement that is as realistic as possible, in games the actor may actually be required to perform ‘stylised’ actions that fit in with the genre of the game.

For example, here is an interview with one of the motion capture artists who worked on the Conan the Barbarian game: Age of Conan Site Tour Pt. 1 – Motion Capture.

If you watched the Conan movie clip, you may have picked up on the reference to motion capture being used with horses. The techniques used are certainly several steps on from anything Eadweard Muybridge attempted to do, as this brief post on the MTV Multiplayer blog suggests: A Horse Covered in Ping Pong Balls — The ‘Age of Conan’ Mo-Cap Shoot.

Friday Fun #12 Video Storytelling

Earlier this week, I posted about machinima, the creation of short movies using 3D game engines as a stage. Creating an effective piece of machinima requires skills in video storytelling, as well as directing (insofar as you can) and filming the action in the game environment.

So in this week’s Friday Fun, why not have a go at some video storytelling?

One of the easiest ways to get started is to use something like the Dr Who Trailermaker, which allows you to cut together various scenes from Doctor Who, as well as adding music and sound effects.

You can also share your creations, so if you do make a trailer, why not post a link back here? ;-)

If you’re not into Dr Who, then why not have a go at a Star Wars movie mashup instead?

If filmmaking is not your thing, and you’d rather just play a game, then how about trying out one of the ‘adventure game’-like interactive stories on the Penguin “We Tell Stories” interactive fiction site?

Fairy Tales
The (Former) General In His Labyrinth

Have a good weekend :-)

Virtual World Films – machinima

As well as providing environments within which games can be played, game worlds (and other virtual worlds) are increasingly offering the ability to record action from within a game. As the game worlds themselves support richer and richer interactions and behaviours, the game world becomes a virtual movie set that can be used for the production of game related “fan fiction”, as well as ‘standalone’ animated movies.

Machinima is the name given to computer generated films that are rendered in real time using a game engine. That is, machinima represents a form of emergent gameplay in which the game characters are treated as “digital puppets” and used to act out a story that can be recorded using screen capture, or using a screen recorder built into the game specifically to encourage the recording of game demos, or more general machinima short films.

Read the Futurelab article: Machinima and education (September, 2007). Bear in mind the following questions as you do so.

  • What are the four (4) most common machinima production techniques identified in the article and what do they involve?
  • Give two or three examples of how the genre of a game can influence the sort of machinima it can be best used to create.
  • In what ways does machinima ‘democratise’ the film-making process (that is, how does it lower barriers to entry for people wishing to get started with film-making)?

If you are interested in the evolution of machinima, this article from the August 2007 issue of EDGE magazine (issue 178) is a good place to learn more: Screen Play: The Future of Machinima.

The best produced machinima films are scripted in a similar way to any animated short film, and then acted out using game characters. As well as one-off short films, machinima has spawned several of its own series, such as The Strangerhood, a sitcom(?!) created using The Sims, which even attracted a review from the BBC when it was launched several years ago: Review: The Strangerhood (via BBC News).

See if you can find out what other ‘cult’ machinima series Rooster Teeth, the producers of “The Strangerhood”, created using the Halo game engine.

Creating Machinima

If you are interested in how to get started producing your own machinima, the following presentation gives an excellent overview – “Making Machinima” by Jeremy Kemp:

The two videos recommended in the presentation can be found here: What is machinima? and Inside the Machinima (both on Youtube).

Using footage from one game in another

One early use of machinima was as a production technique for creating cutscenes in one game, using the game engine of another. This approach has quite a long history, and is described in this Gamasutra article Machinima Cutscene Creation, Part One dating back to September, 2000, and followed up in Machinima Cutscene Creation, Part Two.

If you are interested in creating short, cutscene films, read the above two articles. They provide a good introduction to the storytelling techniques that go towards making an effective cutscene.

In more recent times, the growth of online multiplayer games has enabled full ‘cast and crew’ machinima productions, in which one character may take on the role of cameraman, filming the action as it is ‘played out’ by characters controlled by other game players.

How does machinima in general differ from “speed run” or walkthrough recordings of how to complete a game, or the production of game demos or game trailers from within the game itself? (See also Post hoc Game Documentation – Walkthroughs and Speedruns)

If you would like to view some more machinima, there is plenty on social video websites such as Youtube, as well as on dedicated machinima video sharing sites such as machinima.com. The GameSetWatch article World of Warcraft Exposed: A Moviemaking Culture describes the rise of machinima in the massively multiplayer online role playing game (MMORPG) World of Warcraft (WoW), and provides several links to directories of machinima created in that virtual world.

ARGs, Serious Games and the Magic Circle

In “Alternate Reality Games: What Makes or Breaks Them?”, a blog post reviewing the rise of alternate reality games (ARGs) (see ARGs Uncovered for an intro), Muhammad Saleem suggests several characteristics that a successful ARG should embrace:

– Storytelling or narrative
– Discovery/deciphering and documentation elements
– Cross-medium interactivity
– Blurring the lines between reality and fiction

To what extent do you agree with this view? If you are familiar with an ARG, write down how the game conforms to Saleem’s list. If you aren’t particularly familiar with an ARG, see if you can identify features of the ARG Perplex City that correspond to the categories listed above. To what extent do you think these “essential characteristics” apply to any digital game, ARG or otherwise?

The post also describes some ‘features’ that the ARG should avoid if it is to be successful:

– Lack of interactivity, too linear
– Lack of a reward
– No instant gratification
– Too difficult
– Same old game, different name
– Too scripted, too commercial

To what extent are these ‘negative features’ likely to detract from the success of any digital game, ARG or otherwise?

One popular refrain of the actors/characters in an ARG is that “this is not a game”. This reflects the fact that the game is being played out like a piece of invisible theatre in the real world. At the same time, the actors act out the game narrative in a way that encourages audience participation, providing interaction with the game as far as the audience member is concerned, even if the actual direction of the game is largely scripted and tightly plotted ‘on-the-inside’.

How do you think the ‘this is not a game’ view relates to the idea of the Magic Circle, described by Salen and Zimmerman as “the boundary that defines the game in time and space” (see Getting Philosophical About Games)?

In the section “Community Formation and the Magic Circle” from the Game Studies article The Playful and the Serious: An approximation to Huizinga’s Homo Ludens, Hector Rodriguez comments thus:

Game designers aiming to highlight trust and suspicion sometimes take the radical step of rendering the boundaries of the magic circle deliberately ambiguous. Phone calls or text messages received in the middle of the night may be real calls for help from a friend or part of the game’s conspiracy. Well-known examples include the Electronic Arts game Majestic and the plot of David Fincher’s 1997 film The Game. This uncertainty can generate experiences that resemble philosophical scepticism about reality. The designer becomes the equivalent of a Cartesian evil genius capable of controlling, and potentially deceiving, our sense of the distinction between reality and make-believe. From the designer’s standpoint, the players become toys to be played with; the game designer is the only player who for sure knows where the boundaries of the magic circle are.

A footnote in the same article elaborates further:

[6] The fuzziness of the magic circle is not restricted to children’s play. Recent scholarship on “expanded” or “pervasive” games has highlighted three techniques that subvert the magic circle (Montola, 2005). First of all, the location of the game can be ambiguous, uncertain or unlimited, so that participants may not be sure about the place where the game is played. Secondly, the temporal boundaries of play need not always be sharply demarcated from the rest of daily life. A game may, for instance, lack a clear-cut beginning or end; or its duration may extend until it coincides with a player’s entire life, even span several generations, so that its temporal boundaries become effectively irrelevant. Thirdly, games can blur the boundary between players and non-players by bringing “outsiders” into its sphere.

Serious Games and the Magic Circle

Just as ARGs make use of the ‘real world’ to roll out the game, we have also seen how real world situations can be ‘folded back’ into digital space, opening up the possibility of playing ‘real world’ games in virtual worlds (for example, The World of Serious Games).
To what extent do serious games require the player to adopt the view that whilst they are playing a game (and so insulating themselves from the real world by entering the magic circle) they are also not playing a game, in the sense that their performance in the game world could actually be replayed ‘for real’ in the real world, maybe as part of their job?

