One of the recurring themes in this series of posts has been the extent to which particular augmented or mixed reality effects are impossible to achieve without the prior development of one or more enabling technologies.
The following video clip from CineFix, “The Top 10 VFX Innovations in the 21st Century”, demonstrates how visual effects in blockbuster movies have evolved over the past two decades as new techniques are invented, developed and then combined in new ways.
Here’s a quick breakdown of the top 10.
- digital color grading: digitally adjusting a film’s color palette to influence its mood;
- fluid modelling/water effects: bulk volume (mesh) models versus droplet (particle-by-particle) models, combined into hybrid simulations;
- AI-powered crowd animation: each individual is an agent with its own character and actions, which are then played out;
- motion capture as a basis for photo-realistic animation;
- universal capture/markerless performance capture;
- facial performance capture using painted markers;
- digital backlot;
- iMoCap – in-camera motion capture – motion-capture data recorded alongside principal photography;
- intermeshing of a 3D digital backlot, live capture and live rendering via a virtual-reality camera;
- lightbox cage rig: compositing a human actor into a digital world.
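To make the fluid-modelling item above more concrete, here is a minimal sketch of the particle (droplet) side of such a simulation: each droplet is advanced independently under gravity with a crude floor bounce. This is purely illustrative — production hybrid systems couple millions of such particles to a bulk volume (mesh) solver, and all names and constants here are my own assumptions, not any studio's pipeline.

```python
GRAVITY = -9.81  # m/s^2, acting on the vertical axis
DT = 1 / 24      # timestep of one film frame

def step(particles):
    """Advance each particle, stored as (y position, y velocity), by one frame."""
    out = []
    for y, vy in particles:
        vy += GRAVITY * DT   # semi-implicit Euler: update velocity first
        y += vy * DT         # then position
        if y < 0.0:          # crude floor collision: clamp, reflect and damp
            y, vy = 0.0, -vy * 0.5
        out.append((y, vy))
    return out

# Two droplets released from rest at different heights, simulated for one second.
drops = [(1.0, 0.0), (2.0, 0.0)]
for _ in range(24):
    drops = step(drops)
print(drops)
```

A real hybrid scheme would hand regions of bulk water to a grid or mesh solver and reserve per-particle treatment like this for spray and droplets near the surface.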
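The crowd-animation item can also be sketched in code: each agent carries its own "character" (here, just a goal and a walking speed) and acts on it every frame, a toy version of the rule-driven agent systems the video describes. All class names and numbers are hypothetical, chosen only for illustration.

```python
import random

class Agent:
    """A crowd member with its own goal position and movement speed."""
    def __init__(self, x, goal, speed):
        self.x, self.goal, self.speed = x, goal, speed

    def step(self):
        # Move toward the goal, clamped so the agent never overshoots it.
        delta = self.goal - self.x
        move = max(-self.speed, min(self.speed, delta))
        self.x += move

random.seed(1)  # deterministic "casting" of the crowd
crowd = [Agent(x=0.0,
               goal=random.uniform(5, 10),
               speed=random.uniform(0.5, 1.5))
         for _ in range(5)]

for _ in range(20):          # play out 20 frames
    for agent in crowd:
        agent.step()

print([round(a.x, 2) for a in crowd])
```

Because each agent decides its own motion from local state, the same loop scales to thousands of individually varied extras — the key idea behind agent-based crowd systems.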
DO: watch the video clip, noting which technologies were developed in order to achieve each effect, or how pre-existing technologies were combined in novel ways to achieve it. To what extent might such technologies be used in a real-time mixed or augmented reality setting, and for what purpose? What technical challenges would need to be overcome in order to use the techniques in such a way?