
The Photorealistic Effect…

In Even if the Camera Never Lies, the Retouched Photo Might… we saw how photographic images may be manipulated using digital tools to create “hyperreal” imagery in which perceived “imperfections” in the real world artefact are removed. In this post, we’ll explore how digital tools can be used to create imagery that looks like a photograph but was created entirely from the mind of the artist.

As an artistic style, photorealism refers to artworks in which the artist uses a medium other than photography to try to create a representation of a scene that looks as if it had been captured as a photograph using a camera. By extension, photorealism aims to (re)create something that looks like a photograph and, in so doing, capture a lifelike representation of the scene, whether the scene is imagined or a depiction of an actual physical reality.

DO: Look through the blog posts Portraits Of The 21st Century: The Most Photorealistic 3D Renderings Of Human Beings (originally posted as an imgur collection shared by Reddit user republicrats) and 15 CGI Artworks That Look Like Photographs. How many of the images included in those posts might you mistake for a real photograph?

According to digital artist and self-proclaimed “BlenderGuru” Andrew Price, in his hour-long video tutorial Photorealism Explained, which describes some of the principles and tools that can be used in making photorealistic CGI (computer generated imagery), there are four pillars to creating a photorealistic image – modelling, materials, lighting and post-processing:

  • photorealistic modelling – “matching the proportions and form of the real world object”;
  • photorealistic materials – “matching the shading and textures of real world materials”;
  • photorealistic lighting – “matching the color, direction and intensity of light seen in real life”;
  • photorealistic post-processing – “recreating imperfections from real life cameras”.

Photorealistic modelling refers to the creation of a digital model that is then textured and lit to create the digital image. Using techniques that will be familiar to 3D game developers, 3D mesh models may be constructed from scratch using open-source tools such as Blender or professional commercial tools.

[Video: Blender – Modeling a Human Head Basemesh (YouTube)]

The mesh-based models can also be transformed in a similar way to the manipulation of 2D photos mapped onto the nodes of a 2D mesh.

Underpinning the model may be a mesh containing many thousands of nodes encompassing thousands of polygons. Manipulating the nodes allows the model to be fully animated in a realistic way.
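
To give a flavour of how such a mesh might be put together programmatically, here is a minimal sketch using Blender’s Python API (bpy): it creates a simple low-polygon base mesh and refines it with a subdivision surface modifier. The object name and parameter values are illustrative assumptions rather than anything taken from Price’s tutorial.

```python
# Minimal sketch using Blender's Python API (bpy): build a simple base mesh
# and refine it with a subdivision surface modifier. Run inside Blender's
# scripting workspace; names and parameter values are illustrative only.
import bpy

# Start from a low-polygon primitive as a base mesh.
bpy.ops.mesh.primitive_uv_sphere_add(segments=16, ring_count=8, radius=1.0,
                                     location=(0.0, 0.0, 0.0))
base = bpy.context.active_object
base.name = "HeadBasemesh"  # hypothetical name for the model

# A subdivision surface modifier multiplies the polygon count, smoothing the
# form so that finer detail can be sculpted on top of it.
subsurf = base.modifiers.new(name="Subdivision", type='SUBSURF')
subsurf.levels = 2          # subdivision level shown in the viewport
subsurf.render_levels = 3   # higher level used when rendering

# Smooth shading so the facets of the underlying mesh are not visible.
bpy.ops.object.shade_smooth()
```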

Once the model has been created, the next step is to apply textures to it. The textures may be created from scratch by the artist, or based on captures from the real world.
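
As a rough illustration of what “applying a texture” means in practice, the following sketch (again assuming Blender’s Python API, and using a hypothetical image file path) loads a photographic texture and connects it to the base colour of a material applied to the model.

```python
# Sketch: apply an image texture to the active object's material using
# Blender's node-based materials. The file path is a hypothetical placeholder.
import bpy

obj = bpy.context.active_object

# Create a new material and enable its node tree.
mat = bpy.data.materials.new(name="SkinMaterial")
mat.use_nodes = True
nodes = mat.node_tree.nodes
links = mat.node_tree.links

# The default node tree already contains a Principled BSDF shader.
principled = nodes.get("Principled BSDF")

# Load a photographic texture (captured or hand-painted) from disk...
tex = nodes.new(type='ShaderNodeTexImage')
tex.image = bpy.data.images.load("/path/to/skin_colour.png")  # placeholder path

# ...and feed its colour output into the shader's base colour.
links.new(tex.outputs["Color"], principled.inputs["Base Color"])

# Attach the material to the model.
obj.data.materials.append(mat)
```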

In fact, captures provide another way of creating digital models by seeding them with data points captured from a high resolution scan of a real world model. In the following clip about the development of the digital actor “Digital Emily” (2008), we see how 3D scanning can be used to capture a face pulling multiple expressions, and from these scans construct a mesh with overlaid textures taken from the real world photographs as the basis of the model.

Watch the full video – ReForm | Hollywood’s Digital Clones – for a more detailed discussion about “digital actors”. Among other things, the video describes the Lightstage X technology used to digitise human faces. Along with “Digital Emily”, the video introduces “Digital Ira”, from 2012. Whereas Emily took 30 minutes to render each frame, Ira could be rendered at 30fps (30 renders per second).

Price’s third pillar refers to lighting. Lighting effects are typically based on computationally expensive algorithms, incorporated into the digital artist’s toolchain via professional tools such as Keyshot, as well as forming part of more general toolsuites such as Blender. The development of GPUs – graphics processing units – capable of performing the required mathematical calculations in parallel, and ever more quickly, is one of the reasons why Digital Ira is a far more responsive actor than Digital Emily could be.
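
To hint at the kind of per-pixel arithmetic involved – the calculations that GPUs now perform millions of times per frame, in parallel – the following standalone Python sketch evaluates a classic Lambertian diffuse plus Blinn-Phong specular term for a single surface point. Production renderers use far more sophisticated, physically based models, but the basic idea is the same.

```python
# Standalone sketch of a classic local lighting calculation (Lambert diffuse +
# Blinn-Phong specular) for a single surface point. Real renderers use far
# more elaborate, physically based models, but the principle is the same:
# the colour of every pixel is the result of arithmetic like this.
import numpy as np

def normalise(v):
    return v / np.linalg.norm(v)

def shade(normal, light_dir, view_dir, light_colour,
          albedo, specular_strength=0.5, shininess=32):
    n = normalise(normal)
    l = normalise(light_dir)
    v = normalise(view_dir)

    # Diffuse term: surfaces facing the light receive more of it.
    diffuse = max(np.dot(n, l), 0.0) * albedo * light_colour

    # Specular term: a bright highlight where the half-vector aligns with the normal.
    h = normalise(l + v)
    specular = specular_strength * (max(np.dot(n, h), 0.0) ** shininess) * light_colour

    return diffuse + specular

# Example: a surface tilted towards a white light, viewed head-on.
colour = shade(normal=np.array([0.0, 0.0, 1.0]),
               light_dir=np.array([0.3, 0.3, 1.0]),
               view_dir=np.array([0.0, 0.0, 1.0]),
               light_colour=np.array([1.0, 1.0, 1.0]),
               albedo=np.array([0.8, 0.6, 0.5]))
print(colour)
```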

The following video reviews some of the techniques used to render photorealistic computer generated imagery.

Finally, we come to Price’s fourth pillar – post-processing – things like motion blur, glare/lens flare and depth of field effects, where the camera can only focus on items at a particular distance and everything else is out of focus. In other words, all the bits that are “wrong” with a photographic image. (A good example of this can be found in the blog post This Image Shows How Camera Lenses Beautify or Uglify Your Pretty Face, which shows the same portrait photograph taken using various different lenses; /via @CharlesArthur.)

In professional photography, the photographer may use tools such as Photoshop to create images that are physically impossible to capture using a camera because of the physical properties of the camera. Photo-manipulation is then used to create hyper-real images, closely based on reality but representing a fine tuning of it.

According to Price, when our tools produce perfect depictions of a well-textured, well-lit and accurate model in a modelled environment, making the result photorealistic means adding back in the imperfections that the camera, at least, introduces into the captured scene. To imitate reality, it seems we need to model not just the (imagined) reality of the scene we want to depict, but also the reality of the device we claim to be capturing the depiction with.
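
By way of illustration, the sketch below adds a couple of camera-style imperfections of this kind – a vignette (the darkening towards the corners produced by real lenses) and a little sensor noise – to a rendered image held as a numpy array. The input file name is a placeholder, and the effects are deliberately crude compared with what a dedicated compositor provides.

```python
# Sketch: add camera-style imperfections (vignette and sensor noise) to a
# rendered image. The input file name is a placeholder; a real pipeline would
# use a compositor or dedicated post-processing tools.
import numpy as np
from PIL import Image

render = np.asarray(Image.open("render.png").convert("RGB"), dtype=np.float32) / 255.0
height, width, _ = render.shape

# Vignette: darken pixels according to their distance from the image centre.
ys, xs = np.mgrid[0:height, 0:width]
cy, cx = (height - 1) / 2.0, (width - 1) / 2.0
dist = np.sqrt(((ys - cy) / cy) ** 2 + ((xs - cx) / cx) ** 2)
vignette = np.clip(1.0 - 0.4 * dist ** 2, 0.0, 1.0)
result = render * vignette[..., np.newaxis]

# Sensor noise: a small amount of random grain, as a real camera would add.
rng = np.random.default_rng(0)
result += rng.normal(scale=0.02, size=result.shape)
result = np.clip(result, 0.0, 1.0)

Image.fromarray((result * 255).astype(np.uint8)).save("render_postprocessed.png")
```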

VideoRealistic Motion

In addition to the four pillars of photorealism described by Andrew Price when considering photorealistic still imagery, we might add another pillar for photorealistic moving pictures (maybe we should call this videorealistic motion!):

  • photorealistic motion – matching the way things move and react in real life.

When used as the basis of an animated (video) scene, a question arises as to how to animate the head in a realistic way. Where the aim is to recreate human-like expressions or movements, the answer may simply be to use a person as a puppeteer, using motion capture to record an actor’s facial expressions and then using them to actuate the digital model. Such puppetry is now a commodity application, as the Faceshift markerless motion capture facial animation software demonstrates. (See From Motion Capture to Performance Capture – Sampling Movement in the Real World into the Digital Space for more discussion about motion capture.)
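
The bridge between captured expressions and the digital model is often a set of blendshapes (shape keys, in Blender’s terminology), with the capture software providing a weight for each expression on every frame. The sketch below – again assuming Blender’s Python API, with entirely hypothetical shape key names and capture data – shows how such weights might be keyframed onto a face model to drive it as a digital puppet.

```python
# Sketch: drive a face model's shape keys ("blendshapes") from captured
# expression weights, one set of weights per frame. The shape key names and
# the capture data below are hypothetical placeholders.
import bpy

face = bpy.data.objects["HeadBasemesh"]          # hypothetical model name
key_blocks = face.data.shape_keys.key_blocks     # assumes shape keys exist

# Hypothetical per-frame weights as they might arrive from capture software:
# {shape key name: weight between 0.0 and 1.0}.
captured_frames = [
    {"Smile": 0.0, "BrowRaise": 0.1},
    {"Smile": 0.4, "BrowRaise": 0.3},
    {"Smile": 0.9, "BrowRaise": 0.2},
]

for frame_number, weights in enumerate(captured_frames, start=1):
    for name, weight in weights.items():
        key = key_blocks[name]
        key.value = weight
        # Record the weight as an animation keyframe on this frame.
        key.keyframe_insert(data_path="value", frame=frame_number)
```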

With Hollywood film-makers regularly using virtual actors in their films, the next question to ask is whether such renderings will be possible in a “live” augmented reality context: will it be possible to sit a virtual Emily in your Ikea-postulated sitting room and have her talk through the design options with you?

The following clip, which combines many of the techniques we have already seen, uses a 3D registration image within a physical environment as the location point for a digital actor animated using motion capture from a human actor.

In the same way that digital backlots now provide compelling visual recreations of background – as well as foreground – scenery, as we saw in Mediating the Background and the Foreground, it seems that now even the reality of the human actors may be subject to debate. By the end of the clip, I am left with the impression that I have no idea what’s real and what isn’t any more! But does this matter at all? If we can create photorealistic digital actors and digital backlots, does it change our relationship to the real world in any meaningful way? Or does it start to threaten our relationship with reality?

