Archive Page 3

Introducing Augmented Reality Apparatus – From Victorian Stage Effects to Head-Up Displays

When viewing things through a screen, can you distinguish what’s “real” from what isn’t? How much of the background in the last blockbuster movie you saw was footage of real buildings, a set put together by scenic carpenters, a scale model, or computer generated imagery? In the last fantasy film you saw, were the mythical creatures puppets, “pure animations”, or animations based on human performance capture? Are the adverts that you see in televised sporting events really displayed on the pitch or hoardings? And are the dashboard instruments displayed in your car “real” or “virtual”?

At the time of its release in 1997, Jamiroquai’s Virtual Insanity video raised the sort of confusion one might imagine Victorian audiences experiencing on seeing Pepper’s ghost – an illusion we will return to – for the first time. Watching the video today, you might imagine it was created using digital trickery, but in fact the illusion was a purely physical one.

I’ve been unable to find any behind-the-scenes footage from the making of that video, but the technique, or something akin to it, was reused to make an advert for the Spanish beer, Estrella Galicia:

There is little, if anything, in this form of production that would have prevented a similar sequence being filmed over a hundred years ago, well before the advent of digital technologies.

But some Victorian theatrical effects can benefit from a dash of the digital…

Pepper’s ghost

If you’ve ever seen a floating “holographic head” as part of a Ghost Train or Haunted House fairground ride, you’ve most likely been presented with a version of Pepper’s Ghost. Taken into the theatre by the popular scientist John Henry Pepper, building on a technique developed a few years earlier by Henry Dircks – an engineer, inventor, and debunker of Victorian spiritualists and pseudo-scientists – Pepper’s ghost appeared to place a ghostly apparition alongside a “real” actor on stage.


The effect is an optical one, relying on a piece of glass placed at an angle between the audience and the stage, through which the audience sees the “real” on-stage characters. The glass reflects an otherwise hidden area, “projecting” its ghostly image onto the stage. What the audience sees is the reflection of the ghost in the glass, and the main stage actors through the glass.

[Image: “Explaining the Pepper’s Ghost Illusion with Ray Optics”, https://www.comsol.com/blogs/explaining-the-peppers-ghost-illusion-with-ray-optics/]
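To put a little geometry on this (a back-of-the-envelope sketch, assuming the glass is angled at 45 degrees to the audience’s line of sight): light obeys the law of reflection,

$$\theta_{\text{incidence}} = \theta_{\text{reflection}}$$

so the pane deflects light arriving from a brightly lit chamber, hidden at right angles to the viewing axis, through $2 \times 45^{\circ} = 90^{\circ}$ – straight towards the audience. And because a flat pane acts as a plane mirror, the ghost’s virtual image forms as far behind the glass as the hidden actor stands in front of it,

$$d_{\text{image}} = d_{\text{actor}}$$

which is why, by matching that distance to the depth of the stage, the apparition can be made to appear to stand level with the live actors.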

As well as its theatrical use, the effect can be deployed on a large scale in amusement park rides.

If you are happy with an illuminated scene, the effect can be used to float static objects, and even actors, within the audience’s field of view. However, the same technique can be employed with a projector casting an image onto the glass plate – which means you can also use a digital projector. This has the advantage that you can now float (animated) digital creations, as well as filmed ones, onto the stage.

If you’ve ever seen pop statistician Hans Rosling’s OU co-produced BBC Two statistics lectures, you’ll have seen this effect being used to cast huge “holographic” data visualisations onto the stage via the Musion 3D projection system.

The effectiveness of the technique is not limited to the large theatrical scale, either. Indeed, you can create your own floating three dimensional “holographic” display using just a few pieces of acetate, or CD/DVD cases, and a mobile phone…

Creating your own 3D display:

The 3D effect is created by having four separate points of view, each with its own animation, one projected onto each face of the four-sided pyramid.

A wide range of “how to” videos showing how to make these viewers are available online. You can also find a range of “4 sided” pre-made videos to use with them by searching social video websites for: pyramid hologram screen up.
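As an aside, you could even mock up a simple four-view animation of your own. For flat content, most of these pre-made videos simply show the same clip four times, rotated through 0, 90, 180 and 270 degrees around the centre of a black screen (a genuinely 3D object would need four separately rendered viewpoints). Here is a purely hypothetical sketch of that layout in Game Maker’s GML, the scripting language used in the tutorial posts elsewhere on this blog – spr_dancer is a placeholder for whatever animated sprite you want to float, and 120 is an arbitrary offset from the screen centre:

// Draw event of a controller object in an otherwise black room:
// draw the same animated sprite four times, once for each face
// of the pyramid, rotated through 90 degrees each time
{
var i;
for (i=0; i<4; i+=1)
{
draw_sprite_ext(spr_dancer, -1, room_width/2 + lengthdir_x(120, 90*i), room_height/2 + lengthdir_y(120, 90*i), 1, 1, 90*i, c_white, 1);
}
}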

Making Wider Use of Pepper’s Ghost

The Pepper’s Ghost illusion provides one way of casting the digital into the physical world. Head-up displays (HUDs) provide another opportunity for overlaying projected digital imagery onto our view of the world. Head-up displays represent a simple form of augmented reality in which the “real” visual scene is overlaid, or augmented, with additional visual information.

Head-up displays have been a feature of military aircraft for many years, and more recently have been appearing in civilian aircraft. They use a transparent screen, mounted in the cockpit within the pilot’s field of view, onto which aircraft-related information is projected.

(More recently, HUD technology has started migrating inside the pilot’s helmet, as with the Rockwell Collins F35 Helmet Display System.)

Head-up displays are now also starting to appear in top-end production cars, with the display projected onto the windscreen. HUD-style attachments are also starting to appear as freestanding peripherals, either using a built-in “screen” or, as in the case of the Garmin Head-Up Display, projecting the display onto a transparent film attached to the windscreen.

SAQ: The HUDWAY Glass demo shows vehicle lanes projected onto the display. How do you think those lanes are generated?

Answer: As the device appears to be freestanding, and is not apparently attached to a camera, I suspect that the lanes are generated using GPS and map data. The fact that the lane appears to predict, rather than track, the actual view of the road would appear to confirm this.

Head-up display units can also be built into “smart helmets”, such as the BMW System 6 Evo helmet.

The BMW helmet projects the display onto a transparent screen within the field of view of one of the wearer’s eyes. This approach is rather more elegant than other “in-sight” displays such as the original version of Google Glass or the Garmin Varia Vision cycling glasses. Indeed, we would probably not class these as true head-up displays, because the intention is not to overlay a transparent display layer onto the visual scene. Instead, a small screen is inserted into the field of view and occludes the scene over the visual angle it intrudes into.

Given the requirement for a physical layer onto which the head-up display layer or layers can be projected, more “natural” forms of eyewear, in the shape of glasses, may be required for mass adoption of everyday head-up displays or augmented reality wear. However, as with the bulky and not particularly becoming 3D glasses used for watching 3D cinema films, frames such as Sony’s SmartEyeglass look as if they still have some way to go in the fashion design stakes, and miniaturisation of the projection technology seems to be an issue for other display innovators such as Magic Leap.

In the workplace, however, where protective equipment may be the norm, there may be more freedom to develop augmented reality displays mounted within protective headwear. Once again, smart helmets may provide the answer, such as the DAQRI Smart Helmet.

SAQ: How can the Pepper’s Ghost illusion be used to render augmented reality layers in the field of view of a viewer?

SAQ: What practical problems does the use of Pepper’s Ghost style projection introduce into the design of a head-up augmented reality display?

SAQ: What other uses can you think of – or discover – for head-up displays?

Extension SAQ: What other ways might there be of projecting visual imagery into a physical space?

Extension answer:

  • a glass or plastic screen can be replaced by a fog screen:

If you have a physical structure that can sensibly act as a screen, you may be able to create a site-specific visual augmentation simply by using a well-directed projector. For example, treating the pipes of a church organ as sound level indicators, one for each pipe:

The video also suggests how audio processing may be used to dynamically alter the perception of the sound, to give us “augmented reality audio”.

Blurred Edges – Dual Reality

The launch of several virtual reality headsets into the consumer market in the first half of 2016 saw a flurry of new hype around what is, at least in terms of computer technology, a rather old idea: a technical report from the Institute of Computer Graphics and Algorithms at the Vienna University of Technology saw fit to report on Virtual Reality – History, Applications, Technology and Future over twenty years ago, in 1996. However, one of the disadvantages of immersive virtual reality, other than the oft-reported visually induced “VR sickness”, is that the apparatus required to enter it covers your eyes, and completely occludes your direct view of the physical world with a computer generated one. In an augmented reality system, on the other hand, you still directly perceive physical world elements, even if they are overlaid, or annotated, with additional digital information.

One of the seminal papers in augmented reality research (Milgram, Paul, and Fumio Kishino, “A taxonomy of mixed reality visual displays”, IEICE Transactions on Information and Systems 77, no. 12 (1994): 1321-1329) describes a Mixed Reality environment as “one in which real world and virtual world objects are presented together within a single display, that is, anywhere between the extrema of the virtuality continuum”.


The paper also describes an operational definition of Augmented Reality (AR) as “any case in which an otherwise real environment is ‘augmented’ by means of virtual (computer graphic) objects… not for lack of a better name, but simply out of conviction that the term Augmented Reality is quite appropriate for describing the essence of computer graphic enhancement of video images of real scenes”, and we shall find it convenient to adopt a similar definition, although other definitions exist.

For example, as Azuma et al. (Azuma, Ronald, Yohan Baillot, Reinhold Behringer, Steven Feiner, Simon Julier, and Blair MacIntyre, “Recent advances in augmented reality”, IEEE Computer Graphics and Applications 21, no. 6 (2001): 34-47) define it:

An AR system supplements the real world with virtual (computer-generated) objects that appear to coexist in the same space as the real world. … [A]n AR system [is defined] to have the following properties:

  • combines real and virtual objects in a real environment;
  • runs interactively, and in real time; and
  • registers (aligns) real and virtual objects with each other.

Note that this definition is not limited to any particular display technology or sensory modality.

Augmented reality is itself a form of mediated reality, or computer-mediated reality. Mediated realities may themselves be thought of in terms of the extent to which information is added to an environment (augmented reality) or subtracted from an environment (which we might term a diminished reality). In addition, the notion of hyper-reality describes a system in which no externally derived information is added at all.

Implementing a mixed reality system requires the presence of some sort of physical system, or apparatus, that can typically capture a visual scene, often from the viewer’s perspective, and render it back to the viewer, replete with augmentations. In a visually based system, we also need a computational system that is capable not only of registering and tracking, in real time, objects or locations within the scene, but also of transforming them in some way in order to generate the augmented view of the physical reality.

In this series of posts, created as part of a scoping activity for a short unit in a new Open University introductory computing course, we’ll be stopping short of discussing fully immersive virtual environments, but we will be looking at augmented reality, exploring how digital technologies are blurring the boundaries of the physical reality of what we see whenever we look at – or through – a screen, as well as capturing physical depictions of form and movement so that they can be rendered within mixed reality spaces. We will also consider non-visual mixed realities, such as mixed realities that we can listen to rather than see.

When taken to extremes, such technologies may present us with a nightmarish, rather than compelling, vision of the future, as imagined by Keiichi Matsuda in his video short, “Hyper-Reality” [review]. Fortunately, perhaps, the physical technology required to implement such a system is still several years away!

See also: Infinity AR Augmented Reality Concept Video.

That isn’t to say, however, that frivolities such as Pokemon Go, released to global audiences at the start of July 2016, won’t have their five minutes of global appeal!

Across the posts, we will be focusing primarily on the notion of virtual overlays, and of real or virtual transformations of real objects, looking at how we can overlay virtual scenes and information onto views of the real world, as well as how to get representations of physical objects into the virtual world so that they can be virtually transformed. This will include a consideration of how to capture real objects so that they can be represented as faithful virtual objects – providing the basis for virtual transformations in which the real and the virtual are combined in a composite view of the world – as well as a consideration of the apparatus required to implement such techniques.

As the posts are produced (and they may well be subject to change after posting!), I’ll add them to the list here:

 

Accessible Gaming

One of the things that a great many games have in common is that they are visually rich, and require a keen visual sense in order to play them. In this post, I’ll briefly review the idea of accessible gaming in the sense of accessible video games, hopefully as a springboard for a series of posts that explore some of the design principles around accessible games, and maybe even a short accessible game tutorial.

So what do I mean by an accessible game? A quick survey of websites that claim to cover accessible gaming shows a focus on the notion of visual accessibility, or the extent to which an unsighted person, or a person with poor vision, will be able to engage with a game. However, creating accessible games also extends to games that are appropriate for gamers who are hard of hearing (audio cues are okay, but they should not be the sole way of communicating something important to the player); gamers who have a physical disability that makes it hard to use a particular input device (whether that’s a keyboard and mouse, gamepad, Wiimote controller, or whatever); and gamers who have a learning disability, or an age- or trauma-related cognitive impairment.

The Game Accessibility website provides the following breakdown of accessible games and the broad strategies for making them accessible:

Gaming with a visual disability: “In the early days of video gaming visually disabled gamers hardly encountered any accessibility problems. Games consisted primarily of text and therefore very accessible for assistive technologies. When the graphical capabilities in games grew, the use of text was reduced and ‘computer games’ transformed into ‘video games’, eventually making the majority of mainstream computer games completely inaccessible. The games played nowadays by gamers with a visual disability can be categorized by 1) games not specifically designed to be accessible (text-based games and video games) and 2) games specifically designed to be accessible (audio games, video games that are accessible by original design and video games made accessible by modification).” Accessible games in this category include text based games and audio games, “that consists of sound and have only auditory (so no visual) output. Audio games are not specifically “games for the blind”. But since one does not need vision to be able to play audio games, most audio games are developed by and for the blind community.”.
Gaming with a hearing disability: “In the early days of video gaming, auditory disabled gamers hardly encountered any accessibility problems. Games consisted primarily of text and graphics and had very limited audio capabilities. While the audio capabilities in games grew, the use of text was reduced. … The easiest way to provide accessibility is to add so-called “closed-captions” for all auditory information. This allows deaf gamers to obtain the information and meaning of, for instance, dialog and sound effects.”
Gaming with a physical disability: “There are several games that can be played by people with a physical disability. … For gamers with a severe physical disability the number of controls might be limited to just one or two buttons. There are games specifically designed to be played with just one button. These games are often referred to as “one-switch”-games or “single-switch”-games.”
Gaming with a learning disability: “In order to get a good understanding of the needs of gamers with a learning disability, it is important to identify the many different types of learning disabilities [and] know that learning disabilities come in many degrees of severeness. … Learning disabilities include (but are not limited to): literacy difficulty (Dyslexia), Developmental Co-ordination Disorder (DCD) or Dyspraxia, handwriting difficulty (sometimes known as Dysgraphia), specific difficulty with mathematics (sometimes known as Dyscalculia), speech language and communication difficulty (Specific Language Impairment), Central Auditory Processing Disorder(CAPD), Autism or Aspergers syndrome, Attention Deficit (Hyperactivity) Disorder (ADD or ADHD) and memory difficulties. … The majority of mainstream video games are playable by gamers with learning disabilities. … Due to the limited controls one switch games are not only very accessible for gamers with limited physical abilities, but often very easy to understand and play for gamers with a learning disability.”

Generally, then, accessible games may either rely on modifications or extensions to a particular game that offer players alternative ways of engaging with it (for example, closed captions to provide an alternative to spoken word instructions), or they may have been designed with a particular constituency or modality in mind (for example, an audio game, or a game that responds well to a one-click control). It might also be that accessible games can be designed to suit a range of accessibility requirements (for example, an audio, text-based game with a simple or one-click control).
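To make that “doubling up” idea concrete, here is a minimal, entirely hypothetical sketch in Game Maker’s GML (the scripting language used in the tutorial posts on this blog): a single “switch” – any key press – drives the action, and each cue is delivered as both sound and text, so no single sense is the only route to the information. The sound index and the caption text are placeholders:

// Create event of a controller object: no caption to show yet
{
caption = "";
}

// Key Press <Any Key> event – the single "switch"
{
sound_play(0); // audio cue (the first sound in the resource tree)
caption = "Jump!"; // matching text cue, for players who cannot hear the sound
// ...trigger the actual in-game action here...
}

// Draw event: render the caption as a closed-caption style overlay
{
draw_text(10, 10, caption);
}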

In the next post, I’ll focus on one class of games in particular – audio games.

Scripting With the Game Maker Language

Although Game Maker’s visual environment provides a friendly interface for many users, some people (particularly experienced programmers) may find it more natural to construct programmes using the text-based Game Maker Language (GML).

In this short series of posts, I’ll repeat some of the other Game Maker tutorials using GML. So if you’ve ever fancied trying your hand at writing actual programme code, why not give this a try…?:-) Note that this series assumes some familiarity with writing games the visual way in Game Maker.

To start with, I’m going to replicate the “Catch a Clown” game described in the introductory Game Maker tutorial.

The approach I’ll take is to write a series of GML scripts that can be attached to events associated with game objects. Setting up the game objects and events needs to be done using the visual editor (at least until I can figure out how, or if, we can write scripts to set up objects and rooms directly!).

To use Game Maker Language scripts, you’ll need to run Game Maker in its Advanced mode (set it from the File menu). Scripts can then be created from the toolbar, or from the contextual menu raised by right-clicking on the scripts folder in the left hand sidebar palette.

To begin with, we’re going to recreate a single room game, with wall elements round the edges, and a clown that bounces around inside the room. The aim of the game is to splat the clown by clicking on it.

As I’m teaching myself GML too, I’ll start by learning how to attach scripted events to the game objects. To start with, you’ll need to:

– create a room;
– create a wall object with a solid wall sprite;
– create a clown object with a transparent clown sprite;
– put wall objects round the edge of the room;
– pop a clown object somewhere in the middle of the room.

To get things moving, we’re going to create a script that will set an instance of the clown object moving, when it is created, in a random direction with speed 4.

Create a new script from the toolbar or scripts resource folder; double click on the new script to raise the script editor. In the Name textbox in the status bar at the top of the script editor, give your script a name. For convenience, you might prefix it with scr_. I’m going to call my first script scr_moveAny.

When writing a script, there are certain syntactic conventions we need to be aware of. Firstly, programme code needs to be contained within curly braces: { }

Secondly, separate lines of code need to end with a semicolon – ;
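So a minimal script obeying both conventions – two statements, each ending in a semicolon, wrapped in a pair of curly braces – might do nothing more than reset Game Maker’s built-in score and lives variables:

{
score=0;
lives=3;
}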

So how do we go about writing a GML programme? GML has a wealth of predefined functions, which are all described in the Game Maker Help menu. But as novices, we don’t know what any of those functions are, or what they do. So let’s start to build up our knowledge somehow.

Before we get into the code, remember this: writing programming code is not what programming is about. Programming is working out the things you need to tell the computer to do so that the programme does what you want. Writing the code is just that – taking the programme, and coding it in a particular way. You can write your programme – the list of things you want the computer to do – anywhere. I often doodle outlines for the programmes I want to write on scraps of paper. Having worked out the programme, we can then write the code.

So what programme are we going to write for starters? Well, when the clown object is created, we need it to start moving in a random direction with a specified speed.

Looking through the GML help file manual, in the Moving Around section, I noticed a function called motion_set(dir,speed), which “sets the motion with the given speed in direction dir”.

Hmm… okay, so my programme code might look something like:

{
motion_set(dir,4);
}

So if I attach this script to the clown’s Create event, it should start to move in direction dir with speed 4. But what’s dir? Digging around a little bit more, it’s a direction given in degrees. So to move randomly, I need to set dir to a random number between 0 and 359. I know a bit about other programming languages, so I guess there’s probably a “random” function… and there is: random(N), which returns a random real number (i.e. it might have lots of bits after the decimal point) between 0 and N.

To get a whole number (known as an integer) we can put the random() function inside another function, called floor(M), which rounds the number, M, contained in the brackets down to the nearest whole number. That is, if we write:

floor(random(360))

the random(360) function will return a random number between 0 and 360 (such as 124.9734), and the floor function will round it down to the nearest whole number (in this example, 124).

Great – so we can get a direction angle for our motion_set() command. We could just replace the dir term in the motion_set function with the floor(random(360)) expression, or we could use a variable. A variable is like a container we can use to represent a (possibly changing) value. In GML, we need to declare a variable before we use it, as follows:

var dir;

var is a special reserved word that tells Game Maker the next word is the name of a variable, in the above case, dir. We can then set the dir variable to a numerical value:
var dir;
dir=floor(random(360));

The whole script looks like this:

{
var dir;
dir=floor(random(360));
motion_set(dir,4);
}

You can check that the code is “correct” in a syntactical sense (i.e. you can check you’ve got the brackets and punctuation right, if not the logic) by clicking on the 1010 button (“check the script for syntax errors”).

If you save the script, we’re now in a position to attach it to the clown object. Select the clown object and add a Create event. It would be nice if we could configure Game Maker to allow us to attach a script to this event more directly but, in Game Maker 7 at least, we need to do this from the control tab on the right hand sidebar palette in the object editor window. The element we’ll be needing is the Execute Script action:

Game Maker Execute Script element

Select your script from the list of named scripts available in the drop-down listbox, and attach the desired script to the object itself:

Now play the game. Hopefully, you should see the clown start to move in a random direction at the required speed when the game is started.

So what’s next? If the clown bumps into a wall, we need it to bounce off. We might also want to play a sound as the clown does so. Create a new script, and call it something like scr_wallBounceSound. The function move_bounce_solid(adv) (“Bounces against solid instances, like the corresponding action. adv indicates whether to use advance bounce, that also takes slanted walls into account.”) looks handy. I’m not sure what adv is, but I’m guessing true or false…

Let’s try this:

{
move_bounce_solid(false);
}

Save the script, add a collision event to the clown object that detects a collision with a wall object, and attach the scr_wallBounceSound script to it. Run the game – hopefully your clown will start to move in a random direction and then bounce off the walls…

Now let’s add a sound. Searching for sound in the help file turns up dozens of sound related GML functions, but sound_play(index) looks relevant (“sound_play(index) Plays the indicated sound once. If the sound is background music the current background music is stopped.”). index is the number of the sound file, as ordered in the sounds folder in the resource sidebar, and starting with the index number zero. I have two sounds in my game, one for wall bounces, one for when the clown is clicked on, so I choose the appropriate one. (I’ve also taken the chance to set the adv flag to true this time.) My script now looks like this:

{
move_bounce_solid(true);
sound_play(0);
}

And finally… In the simplest version of the original game, the idea was to click on the clown to catch it. Catching it has the effect of increasing the score, playing a sound, repositioning the clown to a different location, and setting it moving in a random direction again.

We know how to play the sound and get the character moving, so all we need to figure out is how to increase the score, and move the clown to a new location. In the GML help pages, the section on “Score” tells us the name of the variable that is defined by the game to hold the current score: score.

To increase the score by 10, we can write one of two things. Either:
score=score+10;
That is, set the new value of the score to equal the current value of the score, plus 10.

Or we can use the shorthand form: score+=10;

To reposition the clown, the help file comes to our rescue again. In the Moving Around section, we find the function move_random(hsnap,vsnap), which “moves the instance to a free random, snapped position, like the corresponding action”. I think we can just set the hsnap and vsnap values to 0.

So here’s the script that we want to attach to the left-click mouse event on the clown object:

{
score+=10;
sound_play(1);
move_random(0,0);
motion_set(floor(random(360)),4);
}

Okay, I think that’s enough for now… except to look at how we might save and load scripts. In the scripts menu is an option to Export a selected script. The export looks something like this:

#define scr_clickedClown
{
score+=10;
sound_play(1);
move_random(0,0);
motion_set(floor(random(360)),4);
}

It might therefore seem reasonable to suppose we could edit a whole range of scripts in a text editor outside of Game Maker and save them in a single text file. Something like this, maybe?

#define scr_moveAny
{
var dir;
dir=floor(random(360));
motion_set(dir,4);
}

#define scr_wallBounceSound
{
move_bounce_solid(true);
sound_play(0);
}

#define scr_clickedClown
{
score+=10;
sound_play(1);
move_random(0,0);
motion_set(floor(random(360)),4);
}

And it does indeed seem to work… as long as you save the file as a text file with the suffix .gml

Finally, finally, it’s just worth saying that if you want to leave notes to yourself in a GML programme that are ignored by Game Maker, you can do. They’re called “comments” and you prefix them like this:

// a comment in a programme that is
// ignored by Game Maker;

That is, use double slash… And how are comments typically used? A bit like this:

//script to handle the left-mouseclick event when a clown is clicked on
// this script should be attached to the clown object
#define scr_clickedClown
{
score+=10; // add 10 to the score
sound_play(1); //play the squished sound
move_random(0,0); //move the clown to a new location
motion_set(floor(random(360)),4); //move in a random direction at speed 4
}

That is, we can use the comments as programme code documentation…

Digital Worlds – The Blogged Uncourse

Digital Worlds – Interactive Media and Game Design was originally developed as a free learning resource on computer game design, development and culture, authored as part of an experimental approach to the production of online distance learning materials. Many of the resources presented on this blog also found their way into a for-credit, formal education course from the UK’s Open University.

This blog was rebooted at the start of summer 2016 to act as a repository for short pieces relating to mixed and augmented reality, and related areas of media/reality distortion, as preparation for a unit on the subject in a forthcoming first level Open University course.

Friday Fun #20 Net Safety

For games that are sold on the UK High Street, the PEGI classification scheme allows purchasers to check that the game is appropriate for a particular age range, and also be forewarned about any ‘questionable’ content contained within the game, such as violence, sex or drugs references, and so on (e.g. Classifying Games).

At the time of writing, there is no mandated requirement for online games to display PEGI ratings, even if the games are made specifically for the UK market, although PEGI does have an online scheme – PEGI Online:

The licence to display the PEGI Online Logo is granted by the PEGI Online Administrator to any online gameplay service provider that meets the requirements set out in the PEGI Online Safety Code (POSC). These requirements include the obligation to keep the website free from illegal and offensive content created by users and any undesirable links, as well as measures for the protection of young people and their privacy when engaging in online gameplay.

So how do you decide whether an online game is likely to be appropriate for a younger age range? One way is to ‘trust’ a branded publisher. For example, games appearing on the BBC CBeebies games site are likely to be fine for the youngest of players. And the games on CBBC hit the spot for slightly older children. If you’re not too bothered about product placement and marketing, other trusted brands are likely to include corporates such as Disney, although if you’re a parent, you may prefer games hosted on museum websites, such as Tate Kids or the Science Museum.

But what about a game like the following, which is produced by Channel 4 and is intended to act as a ‘public service information’ game about privacy in online social networks?

What sort of cues are there about the intended age range of the players of this game? Are there any barriers or warnings in place to make it difficult to gain access to this game on grounds of age? Should there be? Or is it enough to trust that the design and branding of the site is only likely to appeal to the ‘appropriate’ demographic?

Look through the Smokescreen game website and missions. To what extent is the game: a simulation? a serious game?

How does the visual design of the game compare with the designs for games on the ‘kids’ games sites listed above?

PS if you get a chance to play some of the kids’ games, well, it is Friday… :-) I have to admit I do like quite a few of the games on the Science Museum website ;-)

Friday Fun #19 Let’s Make a Movie

A recent post reporting on the 2008 Machinima Filmfest on the Game Set Watch blog (The State Of Machinima, Part 2: The Machinima Filmfest Report) mentions, in passing, how in certain respects machinima – films made using game engines – can “be best described as digital puppetry”.

So for the budding digital puppeteers out there, why not wind down this Friday afternoon by having a go at putting together your own digital puppetry performance using xtranormal?

This online application allows you to select a “film set” and then place one or two characters within it. The characters actions can be defined from a palette of predefined actions:

and facial expressions:

Dialogue can also be scripted – simply type in what you want the characters to say, and it will be rendered to speech when the scene is “shot”.

You also have control over the camera position:

To get you started, here’s a quick tutorial:

If you don’t want to start from scratch, you can remix pre-existing films… Here’s one I made earlier, a video to the opening lyrics of a New Model Army song: White Coats.

The following clip shows a brief demo of the application, along with a sales pitch and a quick review of the business model.

Based on the demo pitch and some of the ideas raised in Ad Supported Gaming, how do you think xtranormal might be used as part of an online, interactive or user-engaged advertising campaign?

PS For a large collection of machinima created using the Halo game engine, see Halomovies.org.

