Archive for the 'Friday Fun' Category

Interlude – Ginger Facial Rigging Model

Applications such as Faceshift, as mentioned in The Photorealistic Effect…, demonstrate how face meshes can be captured from human actors and used to animate digital heads.

Ginger is a browser-based facial rigging demo, originally from 2011 but since updated, that allows you to control the movements of a digital head.

If you turn the Follow feature on, the eyes and head will follow the motion of your mouse cursor around the screen. The demo is listed on the Google Chrome Experiments website and can be found here: (The code, which builds on the three.js 3D JavaScript library, is available on Github: StickmanVentures/ginger.)

Interlude – AR Apps Lite – Faceswapping

In the post From Magic Lenses to Magic Mirrors and Back Again we reviewed several consumer-facing alternate reality phone applications, such as virtual make-up apps. In this post, we’ll review some simple face-based reality-distorting effects with an alternative reality twist.

In the world of social networks, Snapchat provides a network for sharing “disposable” photographs and video clips: social objects that persist on the phone for a short period before disappearing. One popular feature of Snapchat is its camera and video filters, also referred to as Snapchat Lenses, which can be used to transform or overlay pictures of faces in all sorts of unbecoming ways.

As the video shows, the lenses allow digital imagery to be overlaid on top of the image, although the origin of the designs is sometimes open to debate, with the intellectual property associated with face-painting designs becoming contested (for example, Swiped – Is Snapchat stealing filters from makeup artists?).

Behind the scenes, facial features are captured using a crude form of markerless facial motion capture to create a mesh that acts as a basis for the transformations or overlays as described in From Motion Capture to Performance Capture and 3D Models from Imagery.

Another class of effect supported by “faceswap” style applications is an actual faceswap, in which one person’s face is swapped with another’s – or even your own.

Indeed, New York songwriter Anthony D’Amato went one step further, using the app to swap his face with those of various celebrities to make a faceswapped video of him singing one of his own songs (via Digital Trends: World’s first FaceSwap music video is equal parts creepy, impressive).

As well as swapping two human faces, faceswapping can be used to swap a human face with the face of a computer game character. For computer gamers wanting to appear in the games they play, features such as EA Sports GameFace allow users to upload two photos of their face – a front view and a side view – and then use their face on one of the game’s character models.

The GameFace interface requires the user to manually mark various facial features on the uploaded photographs so that these can then be used to map the facial mesh onto an animated character mesh. The following article shows how facial features registered as a simple mesh on two photographs can be used to achieve a faceswap effect “from scratch” using open source programming tools.

DO: read through the article Switching Eds: Face swapping with Python, dlib, and OpenCV by Matthew Earl to see how a faceswap style effect can be achieved from scratch using some openly available programming libraries. What process is used to capture the facial features used to map from one face to the other? How is the transformation of swapping one face with another actually achieved? What role does colour manipulation play in creating a realistic faceswap effect?

If you would like to try to replicate Earl’s approach, his code is available on Github at matthewearl/faceswap. (A quick search of Github also turns up some other approaches, such as zed41/faceSwapPython and MarekKowalski/FaceSwap.)
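The key alignment step in this style of faceswap – finding the rotation, scale and translation that best maps one set of facial landmarks onto another – is a standard orthogonal Procrustes problem, and can be sketched in a few lines of NumPy. (This is a minimal illustration rather than Earl’s actual code; the random points below stand in for the 68 landmark coordinates a detector such as dlib would return.)

```python
import numpy as np

def similarity_transform(dst, src):
    """Return (scale, rot, trans) so that scale * src @ rot.T + trans ~= dst.

    Classic orthogonal Procrustes solution: the uniform scale, rotation
    and translation minimising the summed squared distance between two
    (N, 2) landmark sets.
    """
    dst, src = np.asarray(dst, float), np.asarray(src, float)

    # Factor out each point set's centroid and overall spread.
    c_dst, c_src = dst.mean(axis=0), src.mean(axis=0)
    d, s = dst - c_dst, src - c_src
    sd, ss = d.std(), s.std()
    d, s = d / sd, s / ss

    # The SVD of the cross-covariance matrix gives the optimal rotation.
    u, _, vt = np.linalg.svd(s.T @ d)
    rot = (u @ vt).T

    scale = sd / ss
    trans = c_dst - scale * c_src @ rot.T
    return scale, rot, trans

# Check that a known transform is recovered (68 made-up "landmarks").
rng = np.random.default_rng(1)
src = rng.random((68, 2))
theta = 0.3
true_rot = np.array([[np.cos(theta), -np.sin(theta)],
                     [np.sin(theta),  np.cos(theta)]])
dst = 1.5 * src @ true_rot.T + np.array([10.0, -4.0])
scale, rot, trans = similarity_transform(dst, src)
mapped = scale * src @ rot.T + trans
```

With the transform in hand, one face image can be warped into the other’s coordinate frame before blending; colour correction then hides the difference in lighting and skin tone between the two source photographs.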

Developing algorithms and approaches for face tracking is an active area of research, both in academia and commercially. The outputs of academic research are often written up in academic publications. Sometimes the implementation code is made available by researchers; at other times it is not. Academic reports should also provide enough detail about the algorithms described for independent third parties to be able to implement them, as is the case with Audun Mathias Øygard’s clmtrackr.

DO: What academic paper provided the inspiration for clmtrackr? Try running examples listed on auduno/clmtrackr and read about the techniques used in the posts Fitting faces – an explanation of clmtrackr and Twisting faces: some use cases of clmtrackr. How does the style of writing and explanation in those posts compare to the style of writing used in the academic paper? What are the pros and cons of each style of writing? Who might the intended audience be in each case?

UPDATE: it seems as if Snapchat may be doing a line of camera-enabled sunglasses – Snapchat launches sunglasses with camera. How hard is it to imagine the same company doing a line in novelty AR specs that morph those around you in a humorous and amusing way whenever you look at them…?! Think: X-Ray Spex ads from the back of old comics…

Interlude – Animated Colouring Books as An AR Jumping Off Point

Demonstrations such as the Augmented Reality Browser Demo show how browser-based technologies can implement simple augmented reality effects. By building on a browser’s ability to access connected camera feeds, we can reuse third-party libraries to detect and track registration images contained within the video feed, and 3D plotting libraries to render and overlay 3D objects on the tracked image in real time.
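Under the hood, the “detect and track, then overlay” step in a marker-based demo amounts to estimating a planar homography: a 3×3 matrix mapping the marker’s known corner coordinates to wherever its corners appear in the current video frame. A minimal sketch of the estimation (the Direct Linear Transform), with invented corner coordinates standing in for a real detector’s output:

```python
import numpy as np

def homography(src, dst):
    """3x3 homography H mapping src[i] -> dst[i] (Direct Linear Transform).

    src and dst are (N, 2) arrays of corresponding points, N >= 4 --
    e.g. a marker's corners as printed, and as detected in a frame.
    """
    rows = []
    for (x, y), (u, v) in zip(src, dst):
        rows.append([-x, -y, -1, 0, 0, 0, u * x, u * y, u])
        rows.append([0, 0, 0, -x, -y, -1, v * x, v * y, v])
    # H (up to scale) is the null vector of this system: the right
    # singular vector with the smallest singular value.
    _, _, vt = np.linalg.svd(np.asarray(rows, float))
    h = vt[-1].reshape(3, 3)
    return h / h[2, 2]

def apply_h(h, pts):
    """Apply homography h to (N, 2) points."""
    p = np.hstack([pts, np.ones((len(pts), 1))])
    q = p @ h.T
    return q[:, :2] / q[:, 2:]

# Unit-square marker, and the quadrilateral "detected" in a frame.
marker = np.array([[0, 0], [1, 0], [1, 1], [0, 1]], float)
seen = np.array([[120, 80], [260, 95], [250, 240], [110, 220]], float)
h = homography(marker, seen)
centre = apply_h(h, np.array([[0.5, 0.5]]))  # where to anchor the overlay
```

A library such as threex.webar does the equivalent in JavaScript on every frame, handing the recovered pose to three.js so the 3D model is drawn with the right position and perspective.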

But what if we could also capture information from a modified registration image and use that as part of the rendered 3D model?

A research paper from Disney Research – Live Texturing of Augmented Reality Characters from Colored Drawings [PDF] – presented at the International Symposium on Mixed and Augmented Reality (ISMAR 2015), describes “an augmented reality coloring book App in which children color characters in a printed coloring book and inspect their work using a mobile device”; it has since been released as the Disney Color and Play app.
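The core trick – treating the photographed drawing as a texture for the 3D model – boils down to looking up a colour for each mesh vertex’s UV coordinate in the captured image. (The Disney paper also handles regions of the character that aren’t visible in the flat drawing; this sketch covers only the basic lookup, with a toy image in place of a real capture.)

```python
import numpy as np

def sample_texture(texture, uv):
    """Nearest-neighbour texture lookup for (N, 2) UV coordinates in [0, 1]^2.

    texture is an (H, W, 3) image -- here, a stand-in for the rectified
    photo of the coloured-in drawing; uv holds the texture coordinate
    assigned to each vertex of the character mesh.
    """
    h, w = texture.shape[:2]
    cols = np.clip(np.round(uv[:, 0] * (w - 1)).astype(int), 0, w - 1)
    # Image row 0 is the top of the picture, but v = 0 is conventionally
    # the bottom of the texture, so flip the vertical coordinate.
    rows = np.clip(np.round((1 - uv[:, 1]) * (h - 1)).astype(int), 0, h - 1)
    return texture[rows, cols]

# Toy "drawing": red increases left to right across a 4x4 image.
tex = np.zeros((4, 4, 3))
tex[..., 0] = np.linspace(0, 1, 4)
uv = np.array([[0.0, 1.0], [1.0, 1.0]])  # top-left and top-right vertices
colours = sample_texture(tex, uv)
```

A production renderer would interpolate (and mipmap) rather than snap to the nearest texel, but the principle – drawing colours flow to the model via per-vertex UVs – is the same.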

But Disney is not the only company exploring augmented reality colouring books…

Another app in a similar vein is produced by QuiverVision (coloring packs) and is available for iOS and Android devices.

And as you might expect, crayon companies are also keen on finding new ways to sell more crayons and have also been looking at augmented reality colouring books, as in the case of Crayola and their ColorALive app.

DO: grab a coffee and some coloured pens or pencils, install an augmented reality colouring book app, print off an AR colouring pack, then colour in your own 3D model. Magic! :-)

Now compare and contrast two kinds of augmented reality application: those in which a registration image, once captured, triggers a video effect or free-running augmented reality animation, and those in which a registration image or environmental feature must be detected and tracked continually. Consider the technology required to implement them, the extent to which they transform a visual scene, and the uses to which each approach might be put. Try to think of one or two examples where one technique might be appropriate but the other would not when trying to achieve some sort of effect or meet some particular purpose.

Interlude Activity – Augmented Reality Browser Demo

Such is the power of today’s web browsers, on smartphones as well as laptops, that it’s possible to run a simple augmented reality demo in your phone or laptop browser using just the code contained in a small JavaScript library.

DO: visit the online Github code repository jeromeetienne/threex.webar. You can run the demo in several ways:

  • if you have a laptop computer with a camera, make a copy of the registration marker image, either by printing it or by grabbing a photograph of it with a smartphone, and then show the marker to the demo page;
  • load the demo page on your smartphone, allow the page to make use of the phone camera, and then use it to view the marker image displayed on a computer screen or the screen of someone else’s smartphone.

What enabling technologies made the threex.webar demonstration possible?

Interlude – Cleaning Audio Tracks With Audacity

Noise cancelling headphones remove background noise by comparing a desired signal to a perceived signal and removing the unwanted components. So for noisy situations where we don’t have access to the clean signal, are we stuck with just the noisy signal?

Not necessarily.

Audio editing tools like Audacity can also be used to remove constant background noise from an audio track by building a simple model of the noise component and then removing it from the audio track.

The following tutorial shows how a low level of background noise may be attenuated by generating a model of the baseline noise on a supposedly quiet part of an audio track and then removing it from the whole of the track. (The effect referred to as Noise Removal in the following video has been renamed Noise Reduction in more recent versions of Audacity.)
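The idea behind this kind of noise reduction can be sketched as a crude spectral subtraction: average the magnitude spectrum over a noise-only stretch of the recording, then subtract that profile from every frame of the full track. (Audacity’s actual Noise Reduction effect is considerably more sophisticated – windowing, attack/decay smoothing, a sensitivity threshold – so treat this purely as an illustration of the principle.)

```python
import numpy as np

def spectral_subtract(signal, noise_sample, frame=256):
    """Very rough spectral subtraction over fixed-size frames.

    Estimate an average noise magnitude spectrum from a 'quiet'
    (noise-only) stretch of the track, then subtract that profile from
    the magnitude spectrum of every frame of the full signal, keeping
    the original phase.
    """
    # Average magnitude spectrum of the noise-only segment.
    chunk = noise_sample[:len(noise_sample) // frame * frame]
    noise_mag = np.abs(np.fft.rfft(chunk.reshape(-1, frame), axis=1)).mean(axis=0)

    out = np.zeros(len(signal) // frame * frame)
    for i in range(0, len(out), frame):
        spec = np.fft.rfft(signal[i:i + frame])
        # Subtract the noise profile from the magnitude, floored at zero.
        mag = np.maximum(np.abs(spec) - noise_mag, 0.0)
        out[i:i + frame] = np.fft.irfft(mag * np.exp(1j * np.angle(spec)), n=frame)
    return out

# Demo: a 440 Hz tone buried in white noise, with the noise itself
# serving as the "quiet" reference segment.
rng = np.random.default_rng(0)
t = np.arange(4096) / 8000.0
tone = np.sin(2 * np.pi * 440 * t)
noise = 0.1 * rng.standard_normal(t.size)
cleaned = spectral_subtract(tone + noise, noise)
```

The cleaned track should sit noticeably closer to the original tone than the noisy version does, which is exactly the before/after difference the tutorial makes audible.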

SAQ: As the speaker records his test audio track, we see Audacity visualising the waveform in real time. To what extent might we consider this a form of augmented reality?

Other filters can be used to remove noise components with a different frequency profile, such as the “pops” and “clicks” you might hear on a recording made from a vinyl record.

In each of the above examples, Audacity provides a visual representation of the audio waveform, creating a visual reality from an audio one. This reinforces through visualisation what the original problems were with the audio signals and the consequences of applying the particular audio effect when trying to clean them.

DO: if you have a noisy audio file to hand and fancy trying to clean it up, why not try out the techniques shown in the videos above – or see if you can find any more related tutorials.

Friday Fun #20 Net Safety

For games that are sold on the UK High Street, the PEGI classification scheme allows purchasers to check that the game is appropriate for a particular age range, and also be forewarned about any ‘questionable’ content contained within the game, such as violence, sex or drugs references, and so on (e.g. Classifying Games).

At the time of writing, there is no mandated requirement for online games to display PEGI ratings, even if the games are made specifically for the UK market, although PEGI does have an online scheme – PEGI Online:

The licence to display the PEGI Online Logo is granted by the PEGI Online Administrator to any online gameplay service provider that meets the requirements set out in the PEGI Online Safety Code (POSC). These requirements include the obligation to keep the website free from illegal and offensive content created by users and any undesirable links, as well as measures for the protection of young people and their privacy when engaging in online gameplay.

So how do you decide whether an online game is likely to be appropriate for a younger age range? One way is to ‘trust’ a branded publisher. For example, games appearing on the BBC CBeebies games site are likely to be fine for the youngest of players. And the games on CBBC hit the spot for slightly older children. If you’re not too bothered about product placement and marketing, other trusted brands are likely to include corporates such as Disney, although if you’re a parent, you may prefer games hosted on museum websites, such as Tate Kids or the Science Museum.

But what about a game like the following, which is produced by Channel 4 and is intended to act as a ‘public service information’ game about privacy in online social networks?

What sort of cues are there about the intended age range of the players of this game? Are there any barriers or warnings in place to make it difficult to gain access to this game on grounds of age? Should there be? Or is it enough to trust that the design and branding of the site is only likely to appeal to the ‘appropriate’ demographic?

Look through the Smokescreen game website and missions. To what extent is the game: a simulation? a serious game?

How does the visual design of the game compare with the designs for games on the ‘kids’ games sites listed above?

PS if you get a chance to play some of the kids’ games, well, it is Friday… :-) I have to admit I do like quite a few of the games on the Science Museum website ;-)

Friday Fun #19 Let’s Make a Movie

A recent post reporting on the 2008 Machinima Filmfest on the Game Set Watch blog (The State Of Machinima, Part 2: The Machinima Filmfest Report) mentions, in passing, how in certain respects machinima – films made using game engines – can “be best described as digital puppetry”.

So for the budding digital puppeteers out there, why not wind down this Friday afternoon by having a go at putting together your own digital puppetry performance using xtranormal?

This online application allows you to select a “film set” and then place one or two characters within it. The characters’ actions can be defined from a palette of predefined actions:

and facial expressions:

Dialogue can also be scripted – simply type in what you want the characters to say, and it will be rendered to speech when the scene is “shot”.

You also have control over the camera position:

To get you started, here’s a quick tutorial:

If you don’t want to start from scratch, you can remix pre-existing films… Here’s one I made earlier, a video to the opening lyrics of a New Model Army song: White Coats.

The following clip shows a brief demo of the application, along with a sales pitch and a quick review of the business model.

Based on the demo pitch and some of the ideas raised in Ad Supported Gaming, how do you think xtranormal might be used as part of an online, interactive or user-engaged advertising campaign?

PS For a large collection of machinima created using the Halo game engine, see