
The Photorealistic Effect…

In Even if the Camera Never Lies, the Retouched Photo Might… we saw how photographic images may be manipulated using digital tools to create “hyperreal” imagery in which perceived “imperfections” in the real world artefact are removed. In this post, we’ll explore how digital tools can be used to create imagery that looks like a photograph but was created entirely from the mind of the artist.

As an artistic style, photorealism refers to artworks in which the artist uses a medium other than photography to try to create a representation of a scene that looks as if it was captured as a photograph using a camera. By extension, photorealism aims to (re)create something that looks like a photograph and, in so doing, capture a lifelike representation of the scene, whether the scene is imagined or a depiction of an actual physical reality.

DO: Look through the blog posts Portraits Of The 21st Century: The Most Photorealistic 3D Renderings Of Human Beings (originally posted as an imgur collection shared by Reddit user republicrats) and 15 CGI Artworks That Look Like Photographs. How many of the images included in those posts might you mistake for a real photograph?

According to digital artist and self-proclaimed “BlenderGuru” Andrew Price in his hour-long video tutorial Photorealism Explained, which describes some of the principles and tools that can be used in making photorealistic CGI (computer generated imagery), there are four pillars to creating a photorealistic image – modelling, materials, lighting, post-processing:

  • photorealistic modelling – “matching the proportions and form of the real world object”;
  • photorealistic materials – “matching the shading and textures of real world materials”;
  • photorealistic lighting – “matching the color, direction and intensity of light seen in real life”;
  • photorealistic post-processing – “recreating imperfections from real life cameras”.

Photorealistic modelling refers to the creation of a digital model that is then textured and lit to create the digital image. Using techniques that will be familiar to 3D game developers, 3D mesh models may be constructed from scratch using open-source tools such as Blender or professional commercial tools.

[Video: Blender – Modeling a Human Head Basemesh (YouTube)]
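Blender can also be scripted through its built-in Python API (bpy), so a base mesh of the kind shown in the video can be blocked out programmatically as well as by hand. The following is a minimal sketch, assuming Blender 2.8 or later and run from its scripting workspace; the object name and subdivision levels are illustrative values rather than anything taken from the video.

```python
# A minimal sketch using Blender's Python API (bpy): block out a base mesh
# and smooth it with a subdivision surface modifier. Intended to be run
# inside Blender's scripting workspace; names and values are illustrative.
import bpy

# Start from a simple primitive as the base mesh
bpy.ops.mesh.primitive_cube_add(size=2, location=(0, 0, 0))
head = bpy.context.active_object
head.name = "BaseMeshHead"

# Add a subdivision surface modifier so the low-poly cage renders smoothly
subsurf = head.modifiers.new(name="Subdivision", type='SUBSURF')
subsurf.levels = 2          # viewport subdivision level
subsurf.render_levels = 3   # render-time subdivision level

# Shade smooth so facets are interpolated rather than flat
bpy.ops.object.shade_smooth()
```

From a starting point like this, the artist pushes and pulls vertices of the cage to rough out the proportions of the object being modelled.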

The mesh-based models can also be transformed in a similar way to the manipulation of 2D photos mapped onto the nodes of a 2D mesh.

Underpinning the model may be a mesh containing many thousands of nodes encompassing thousands of polygons. Manipulating the nodes allows the model to be fully animated in a realistic way.

Once the model has been created, the next step is to apply textures to it. The textures may be created from scratch by the artist, or based on captures from the real world.

In fact, captures provide another way of creating digital models by seeding them with data points captured from a high resolution scan of a real world model. In the following clip about the development of the digital actor “Digital Emily” (2008), we see how 3D scanning can be used to capture a face pulling multiple expressions, and from these construct a mesh with overlaid textures grabbed from the real world photographs as the basis of the model.

Watch the full video – ReForm | Hollywood’s Digital Clones – for a more detailed discussion about “digital actors”. Among other things, the video describes the Lightstage X technology used to digitise human faces. Along with “Digital Emily”, the video introduces “Digital Ira”, from 2012. Whereas Emily took 30 minutes to render each frame, Ira could be rendered at 30fps (30 renders per second).

Price’s third pillar refers to lighting. Lighting effects are typically based on computationally expensive algorithms, incorporated into the digital artist’s toolchain using professional tools such as Keyshot as well as forming part of more general tool suites such as Blender. The development of GPUs – graphics processing units – capable of performing the required mathematical calculations in parallel, and ever more quickly, is one of the reasons why Digital Ira is a far more responsive actor than Digital Emily could be.
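To give a flavour of the sort of calculation involved, the following sketch works out simple diffuse (Lambertian) and specular (Phong) shading for a single surface point using NumPy. A renderer evaluates expressions like this for millions of points per frame, which is where the GPU’s parallelism pays off; all the vectors and coefficients below are made-up example values, not anything from a particular renderer.

```python
# A toy illustration of per-point lighting: Lambertian diffuse plus Phong
# specular shading for one surface point. All vectors and coefficients are
# made-up example values; a renderer evaluates this for millions of points.
import numpy as np

def normalise(v):
    return v / np.linalg.norm(v)

def shade(normal, light_dir, view_dir, light_colour,
          diffuse_colour, specular_colour, shininess=32):
    n = normalise(normal)
    l = normalise(light_dir)
    v = normalise(view_dir)

    # Diffuse term: proportional to the cosine of the angle between
    # the surface normal and the direction to the light
    diffuse = max(np.dot(n, l), 0.0) * diffuse_colour

    # Specular term: a bright highlight where the reflected light
    # direction lines up with the view direction
    r = 2 * np.dot(n, l) * n - l          # reflection of l about n
    specular = (max(np.dot(r, v), 0.0) ** shininess) * specular_colour

    return light_colour * (diffuse + specular)

# Example: white light striking a reddish surface at a slight angle
print(shade(normal=np.array([0.0, 0.0, 1.0]),
            light_dir=np.array([0.3, 0.2, 1.0]),
            view_dir=np.array([0.0, 0.0, 1.0]),
            light_colour=np.array([1.0, 1.0, 1.0]),
            diffuse_colour=np.array([0.8, 0.3, 0.3]),
            specular_colour=np.array([0.5, 0.5, 0.5])))
```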

The following video reviews some of the techniques used to render photorealistic computer generated imagery.

Finally, we come to Price’s fourth pillar – post-processing – things like motion blur, glare/lens flare and depth of field effects, where the camera can only focus on items a particular distance away and everything else is out of focus. In other words, all the bits that are “wrong” with a photographic image. (A good example of this can be found in the blog post This Image Shows How Camera Lenses Beautify or Uglify Your Pretty Face, which shows the same portrait photograph taken using various different lenses; /via @CharlesArthur.)

In professional photography, the photographer may use tools such as Photoshop to create images that are impossible to capture using a camera because of its physical properties. Photo-manipulation is then used to create hyper-real images, closely based on reality but representing a fine-tuning of it. According to Price, to create photorealistic images using tools that produce perfect depictions of a well-textured and well-lit model in a modelled environment, we need to add back in the imperfections that the camera, at least, introduces into the captured scene. To imitate reality, it seems we need to model not just the (imagined) reality of the scene we want to depict, but also the reality of the device we claim to be capturing the depiction with.
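By way of illustration, the following sketch adds two of those “imperfections” – a crude, mask-driven depth-of-field blur and a vignette – to a rendered image using the Pillow and NumPy libraries. The filenames and parameter values are placeholders, not a recipe from Price’s video.

```python
# A rough sketch of camera-style post-processing with Pillow/NumPy:
# a mask-driven depth-of-field blur plus a vignette. Filenames and
# parameter values are placeholders chosen for illustration.
import numpy as np
from PIL import Image, ImageFilter

render = Image.open("render.png").convert("RGB")          # placeholder file
focus_mask = Image.open("focus_mask.png").convert("L")    # white = in focus

# Depth of field: blend a blurred copy back in wherever the mask says
# the pixel lies away from the focal plane
blurred = render.filter(ImageFilter.GaussianBlur(radius=6))
dof = Image.composite(render, blurred, focus_mask)

# Vignette: darken pixels progressively towards the corners of the frame
w, h = dof.size
y, x = np.ogrid[:h, :w]
dist = np.sqrt((x - w / 2) ** 2 + (y - h / 2) ** 2)
falloff = 1.0 - 0.45 * (dist / dist.max()) ** 2           # 0.45 = strength
out = (np.asarray(dof, dtype=np.float32) * falloff[..., None]).clip(0, 255)

Image.fromarray(out.astype(np.uint8)).save("render_postprocessed.png")
```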

VideoRealistic Motion

In addition to the four pillars of photorealism described by Andrew Price when considering photorealistic still imagery, we might add another pillar for photorealistic moving pictures (maybe we should call this videorealistic motion!):

  • photorealistic motion – matching the way things move and react in real life.

When used as the basis of an animated (video) scene, a question arises as to how to actually animate the head in a realistic way. Where the aim is to recreate human-like expressions or movements, the answer may simply be to use a person as a puppeteer, with motion capture recording an actor’s facial expressions and using them to actuate the digital model. Such puppetry is now a commodity application, as the Faceshift markerless motion capture facial animation software demonstrates. (See From Motion Capture to Performance Capture – Sampling Movement in the Real World into the Digital Space for more discussion about motion capture.)
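Faceshift itself is a commercial product, but as a rough indication of what markerless capture involves, the following sketch uses the open-source MediaPipe Face Mesh model to pull facial landmark positions out of webcam frames – positions that could then be used to drive a rigged digital face. This is only a sketch under the assumption that the mediapipe and OpenCV packages are installed; exact options may vary between versions, and it is not how Faceshift works internally.

```python
# A rough sketch of markerless facial capture (not Faceshift itself):
# use MediaPipe Face Mesh to extract facial landmarks from webcam frames.
# The normalised landmark coordinates could then drive a rigged face model.
import cv2
import mediapipe as mp

face_mesh = mp.solutions.face_mesh.FaceMesh(
    static_image_mode=False, max_num_faces=1)

cap = cv2.VideoCapture(0)            # default webcam
while cap.isOpened():
    ok, frame = cap.read()
    if not ok:
        break
    # MediaPipe expects RGB input; OpenCV captures BGR
    results = face_mesh.process(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))
    if results.multi_face_landmarks:
        landmarks = results.multi_face_landmarks[0].landmark
        # Each landmark is a normalised (x, y, z) position on the face;
        # print one of them as a simple sanity check
        lm = landmarks[0]
        print(f"landmark 0: x={lm.x:.3f} y={lm.y:.3f} z={lm.z:.3f}")
    if cv2.waitKey(1) & 0xFF == 27:  # Esc to quit
        break

cap.release()
cv2.destroyAllWindows()
```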

With Hollywood film-makers regularly using virtual actors in their films, the next question to ask is whether such renderings will be possible in a “live” augmented reality context: will it be possible to sit a virtual Emily in your postulated Ikea sitting room and have her talk through the design options with you?

The following clip, which combines many of the techniques we have already seen, uses a 3D registration image within a physical environment as the location point for a digital actor animated using motion capture from a human actor.

In the same way that digital backlots now provide compelling visual recreations of background – as well as foreground – scenery as we saw in Mediating the Background and the Foreground, it seems that now even the reality of the human actors may be subject to debate. By the end of the clip, I am left with the impression that I have no idea what’s real and what isn’t any more! But does this matter at all? If we can create photorealistic digital actors and digital backlots, does it change our relationship to the real world in any meaningful way? Or does it start to threaten our relationship with reality?

Hyper-reality Offline – Creating Videos from Photos

In Mediating the Background and the Foreground – From Green Screen and Chroma-Key Effects to Virtual Sets we saw how green screen/chroma key effects could be used to mask out part of one image so that it could be composited with another. In this post, you’ll see how we can also generate animation effects from a single image.

Many of you will recognise the following effect from television documentaries, as well as screen savers or photo-stories:

Known as the Ken Burns effect, named after the documentary maker who made extensive use of the technique, it allows a moving image to be generated from a still photograph by panning and zooming across the image.
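To make the mechanics concrete, here is a minimal sketch of a Ken Burns style pan-and-zoom implemented with Pillow: a crop window slowly shrinks (zoom) and drifts (pan) across a still photo, and each crop is resized to a fixed output frame. The filename, frame count and output size are placeholder values.

```python
# A minimal Ken Burns style pan-and-zoom sketch with Pillow: crop a window
# that slowly shrinks (zoom) and drifts (pan) across a still photo, resizing
# each crop to a fixed output frame. Filename and parameters are placeholders.
from PIL import Image

photo = Image.open("still_photo.jpg")           # placeholder input image
W, H = photo.size
out_size = (1280, 720)
n_frames = 90

frames = []
for i in range(n_frames):
    t = i / (n_frames - 1)                      # 0.0 -> 1.0 over the clip
    zoom = 1.0 + 0.5 * t                        # zoom in by up to 50%
    crop_w, crop_h = int(W / zoom), int(H / zoom)
    # Pan the crop window from the top-left towards the bottom-right
    left = int((W - crop_w) * t)
    top = int((H - crop_h) * t)
    crop = photo.crop((left, top, left + crop_w, top + crop_h))
    frames.append(crop.resize(out_size, Image.LANCZOS))

# Save as an animated GIF for a quick preview (a real pipeline would
# hand the frames to a video encoder instead)
frames[0].save("ken_burns.gif", save_all=True,
               append_images=frames[1:], duration=40, loop=0)
```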

But what happens if you take a flat, static image, separate out the foreground and background elements, and then apply the effect, panning and zooming foreground and background elements differentially to create a “2.5D” parallax effect?

These views can be created from a single, flat image by cutting the foreground component out into its own layer, and then inpainting the background layer; when the foreground component moves relative to the background, the inpainted area hides the fact that that part of the original image was taken up by the foreground component.
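Given a cut-out foreground layer (with transparency) and an inpainted background, the 2.5D parallax pass then amounts to moving the two layers at different rates and compositing them frame by frame, something like the following Pillow sketch; the layer filenames and pixel offsets are placeholders.

```python
# A sketch of the 2.5D parallax pass with Pillow: shift an inpainted
# background slowly and a transparent foreground cut-out more quickly,
# compositing them for each frame. Filenames and offsets are placeholders.
from PIL import Image

background = Image.open("background_inpainted.png").convert("RGBA")
foreground = Image.open("foreground_cutout.png").convert("RGBA")  # has alpha
W, H = background.size
n_frames = 60

frames = []
for i in range(n_frames):
    t = i / (n_frames - 1)
    bg_shift = int(10 * t)      # the background drifts a little...
    fg_shift = int(40 * t)      # ...while the foreground drifts further
    frame = Image.new("RGBA", (W, H))
    frame.paste(background, (-bg_shift, 0))
    frame.paste(foreground, (-fg_shift, 0), mask=foreground)
    frames.append(frame.convert("RGB"))

frames[0].save("parallax.gif", save_all=True,
               append_images=frames[1:], duration=40, loop=0)
```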

The inpainting effect can be achieved by applying an image processing technique that works from the edge of a cropped area inwards, trying to predict what value each missing neighbouring pixel should take based on the actual values of the surrounding pixels. More elaborate techniques allow for “content aware” fills, in which patterns generated from the surrounding texture are used to fill in the missing area. The following video shows how to apply such a content-aware effect in a popular photo-editing tool.
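The simpler, edge-inwards kind of inpainting is available off the shelf: OpenCV ships with two such algorithms (cv2.INPAINT_TELEA and cv2.INPAINT_NS). The following is a minimal sketch rather than the content-aware fill of a commercial editor; the filenames and inpaint radius are placeholder values.

```python
# A minimal inpainting sketch with OpenCV: fill the masked (white) region
# of an image by propagating values inwards from its boundary.
# Filenames and the inpaint radius are placeholder values.
import cv2

image = cv2.imread("photo.png")                             # original image
mask = cv2.imread("hole_mask.png", cv2.IMREAD_GRAYSCALE)    # white = fill me

# INPAINT_TELEA marches inwards from the region boundary; INPAINT_NS is an
# alternative that propagates image structure in a fluid-dynamics style
filled = cv2.inpaint(image, mask, inpaintRadius=3, flags=cv2.INPAINT_TELEA)

cv2.imwrite("photo_inpainted.png", filled)
```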

An extension of the technique – content aware crop – automatically inpaints the whitespace around the edge of an image when its aspect ratio is changed, such as following a straightening of the horizon.

Developing algorithms for improved content aware fills is an active area of academic, as well as commercial, research (e.g. Pathak, Deepak, Philipp Krahenbuhl, Jeff Donahue, Trevor Darrell, and Alexei A. Efros, “Context Encoders: Feature Learning by Inpainting”, arXiv preprint arXiv:1604.07379 (2016)).

Related techniques can be used to improve the quality of images, as demonstrated by the Magic Pony Technology company (MIT Technology Review – Artificial Intelligence Can Now Design Realistic Video and Game Imagery) or by deep learning neural networks more generally. For example, a project by David Garcia shows how deep learning can “upscale 16×16 images by a 4x factor. The resulting 64×64 images display sharp features that are plausible based on the dataset that was used to train the neural net.” Here’s an example of what the network can do (“the first column is the 16×16 input image, the second one is what you would get from a standard bicubic interpolation, the third is the output generated by the neural net, and on the right is the ground truth”):

[Image: srez sample output – 16×16 input / bicubic interpolation / neural net output / ground truth]
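For comparison, the bicubic baseline in that second column is the sort of thing a single Pillow call produces: no learning involved, just interpolation between the known pixel values. A minimal sketch, with a placeholder filename:

```python
# The bicubic "baseline" column: a plain interpolated 4x upscale of a
# 16x16 image with Pillow. No learning involved - it only smooths between
# the known pixel values. The filename is a placeholder.
from PIL import Image

small = Image.open("face_16x16.png")                     # 16x16 input image
upscaled = small.resize((64, 64), resample=Image.BICUBIC)
upscaled.save("face_64x64_bicubic.png")
```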

Additional 2.5D effects can be created by animating both the foreground and background elements. Alternatively, by associating a mesh with particular points in a photo, translating those points appropriately results in the animation of the meshed element.

These effects are all based on the manipulation of pixels within a static image. But as you’ll see in another post, flat images can also be used as the basis for generating three dimensional models.

Tuning the colour palette of an image is another technique that can be used to make it feel hyper-real, or somehow sharper than the captured reality. Similar techniques can also be applied to video to create a stylised hyper-real video effect.
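As a rough indication of the kind of adjustment involved, the following Pillow sketch pushes saturation, contrast and sharpness up to give a still image that slightly “hyper-real” look; the filename and the enhancement factors are placeholder values to taste.

```python
# A rough sketch of "hyper-real" colour grading with Pillow: boost the
# saturation, contrast and sharpness of a photo. The filename and the
# enhancement factors are placeholder values.
from PIL import Image, ImageEnhance

photo = Image.open("photo.jpg").convert("RGB")

graded = ImageEnhance.Color(photo).enhance(1.4)       # saturation boost
graded = ImageEnhance.Contrast(graded).enhance(1.2)   # stronger contrast
graded = ImageEnhance.Sharpness(graded).enhance(1.5)  # crisper edges

graded.save("photo_hyperreal.jpg")
```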

As you are perhaps beginning to realise, many mediated reality effects rely on a whole stack of technologies, techniques or other effects being available first. But this in turn means that many of today’s yet-to-be invented techniques are likely to be built from a novel combination of techniques that already exist, or that can be built on; and once those new techniques are identified, and tools built to implement them efficiently, they in turn will provide the basis for yet more techniques.

