Why Don’t Things Fall Up? Making Use of Physics Engines

When film director Steven Spielberg first announced he would be collaborating on the development of a game for the Wii console, many people were intrigued as to what sort of storytelling extravaganza they might be treated to… At the time, I’m not sure many of them would have anticipated anything like the physics-based puzzle game Boom Blox, in which the aim is to knock stacks of blocks (“blox”?!) down as efficiently as possible!

(If you’re creatively minded, there’s also a “create” editor in Boom Blox that allows you to create your own levels.)

So why does everything fall the way it does in Boom Blox? Physics…

In the post Gravity Waves, I mention three games based around simulated physics – Newtoon, Launchball (a browser-based physics puzzle game, also with its own level creation tool) and Phun. In each case, the idea is to manipulate or create objects in the game world and then let physics – gravity, elasticity, and so on – have its way…

Crayon Physics is another construction-led physics game, in many respects reminiscent of Phun, in which the idea is to… well, watch the following video clip, and you decide…

How would you describe Crayon Physics?

Physics games are games that involve the player directly and purposefully engaging with the physics of the game world. But physics plays a part in many other games too, whether they are 2D Pacman-style arcade games or 3D games set in realistically simulated real-world settings. Why can’t Pacman walk through a wall, for example? And why do cars crash just the way they do in many a 3D racing game? Physics, that’s why…

According to the presentation shown above (or otherwise), what, in the context of game physics, are:

  • “rigid bodies”;
  • “soft bodies”;
  • “ragdolls”.

In many games, a physics engine is used to manage the behaviour of both small and large objects alike according to a set of mathematical equations that model the physics – that is, the physical behaviour – of the objects. This behaviour extends from describing how objects move, or fall, to how they swerve round each other, and what happens when they collide: people generally can’t walk through walls, for example, but neither do they tend to break, or shatter, or crumple…
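To make the “can’t walk through walls” example concrete, here’s a minimal sketch of the sort of test a physics engine performs many times a second: an axis-aligned bounding box (AABB) overlap check, followed by the simplest possible collision response. This is illustrative Python – the names and numbers are invented, not taken from any particular engine.

```python
# A minimal sketch of axis-aligned bounding box (AABB) collision testing,
# the kind of check a physics engine runs many times per frame.
# All names here are illustrative, not taken from any particular engine.

from dataclasses import dataclass

@dataclass
class Box:
    x: float      # position of the box's left edge
    y: float      # position of the box's bottom edge
    w: float      # width
    h: float      # height

def overlaps(a: Box, b: Box) -> bool:
    """Two AABBs collide if they overlap on both axes."""
    return (a.x < b.x + b.w and b.x < a.x + a.w and
            a.y < b.y + b.h and b.y < a.y + a.h)

# A 'player' walking right into a 'wall'...
player = Box(x=9.5, y=0.0, w=1.0, h=2.0)
wall   = Box(x=10.0, y=0.0, w=1.0, h=5.0)

if overlaps(player, wall):
    # Simplest possible response: push the player back out of the wall
    # so they cannot pass through it.
    player.x = wall.x - player.w
    print("Collision! Player pushed back to x =", player.x)
```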

The mathematics involved in calculating game physics can be very computationally expensive and difficult to program, particularly as simulations become more realistic and require the behaviour of ever-increasing numbers of particles to be modelled. Many games therefore make use of licensed physics engines that have been developed by specialist companies or the larger game developers, which means that they can benefit from complex simulations without the need to program that behaviour in from scratch.

Read the Gamasutra article Outsourcing Reality: Integrating a Commercial Physics Engine. If physics isn’t (yet) your thing, the following pointers may help you in your reading:

  • Integration Basics: Geometry Export: what basic shapes is the physics engine likely to understand?
  • Time Management: what are ‘game time’, ‘frame time’ and ‘simulation time’, and how do they relate to each other? (For a feel of how these can be decoupled, see the sketch after this list.)
  • Applying Forces: in what three ways can an object be compelled to move in a physical model?
  • Spatial Queries: in what ways might the physics engine allow ‘logical statements’ to be made about the interaction of different characters in the game world?
  • Integrating Keyframed Motion: although many game objects move in the way they do because of physics, and the forces applied to them, some objects may have been animated as keyframes – that is, they have been drawn to move in a single particular way and as such will not respond to any forces applied to them, although they may exert forces on objects they are in contact with…
  • Player Control Strategies: to simplify collision detection, a simple, regularly shaped ‘bounding box’ or ‘capsule’ is often drawn around game and player characters, and used as the basis for detecting collisions.

    What are the ‘three fundamental approaches’ to controlling how the player character actually moves? How does the choice of approach affect the ‘usability’ of the game in terms of how easy the character is to control?
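As a companion to the Time Management question above, here is a hedged Python sketch of the classic ‘fixed timestep’ game loop, in which variable frame time is consumed in fixed-size simulation steps so the physics behaves consistently however fast or slow the rendering runs. The pattern is a common one, but the function names and timestep below are invented for illustration.

```python
import time

DT = 1.0 / 120.0   # fixed simulation timestep: physics always advances in 1/120 s steps

def update_physics(dt):
    """Advance the physics simulation by exactly dt seconds (illustrative stub)."""
    pass

def render():
    """Draw the current game state (illustrative stub)."""
    pass

def game_loop(run_for_seconds=1.0):
    previous = time.perf_counter()
    accumulator = 0.0      # simulation time 'owed' to the physics engine
    start = previous
    while time.perf_counter() - start < run_for_seconds:
        now = time.perf_counter()
        frame_time = now - previous   # real time since the last frame (varies!)
        previous = now
        accumulator += frame_time
        # Consume the frame time in fixed-size simulation steps, so the
        # physics behaves identically whatever the frame rate.
        while accumulator >= DT:
            update_physics(DT)
            accumulator -= DT
        render()

game_loop()
```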

Plausible – and well modelled – game physics can make a significant difference to the feel of a game, both from a usability (ease of control) point of view and from a faithfulness point of view (for example, some racing games pride themselves on how realistic the physical behaviour of the cars is).

Getting to grips with game physics can also be a great way of learning about real physics – and the maths used to describe it – because it provides a real context for using the equations. If you would like to learn more about game physics, a good place to start is with these OpenLearn videos on differential equations.
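For example, the ‘falling’ seen in all of these games is just the standard constant-gravity model – one of the simplest differential equations going (this is the textbook model, rather than any particular engine’s implementation):

\ddot{y} = -g \quad\Longrightarrow\quad y(t) = y_0 + v_0 t - \tfrac{1}{2} g t^2

A physics engine typically approximates this numerically, stepping the velocity and position forward a small timestep \Delta t at a time:

v_{n+1} = v_n - g\,\Delta t, \qquad y_{n+1} = y_n + v_{n+1}\,\Delta t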

To learn more about physics games, visit the Fun-Motion physics games blog. As well as a comprehensive listing of physics-related ‘games’, it also includes a wide range of posts and video clips relating to many issues of game physics.

Friday Fun #16 Sharkrunners

In Play Along With the Real World…, I describe the hypothetical notion of playing games in the context of real world data being fed from live sporting events.

But live data games have actually been around for some time…

One such example is Sharkrunners, from the Discovery Channel.

Sharkrunners first appeared at the start of last summer (Live Data Gaming – Sharkrunners), as a television series tie-in, and then returned for a second season based around the Great Barrier Reef.

The aim of the game is to equip a research boat and go in search of real sharks, either as a research scientist, as a documentary maker, or with an ecological mission in mind. Finding a shark brings financial rewards for the captain of the boat, which can be used to hire extra crew, purchase additional scientific instruments, and improve the boat. The shark location data is based on telemetry from real sharks, although ‘live’ weather conditions don’t feature as part of the game (yet?!).

The game is played in real time, so the player must set waypoints for the ship to travel between that will hopefully lead to a shark encounter. When a shark is detected within range of the boat, an email or SMS message is sent to the player so they can log in to the game and take an appropriate action.
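(As an aside, the ‘shark within range’ trigger presumably comes down to a simple distance test on the telemetry coordinates. Here’s a hedged Python sketch using the standard haversine great-circle formula – the detection range and the coordinates are invented for illustration.)

```python
import math

EARTH_RADIUS_KM = 6371.0

def distance_km(lat1, lon1, lat2, lon2):
    """Great-circle distance between two lat/lon points (haversine formula)."""
    phi1, phi2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlam = math.radians(lon2 - lon1)
    a = math.sin(dphi / 2) ** 2 + math.cos(phi1) * math.cos(phi2) * math.sin(dlam / 2) ** 2
    return 2 * EARTH_RADIUS_KM * math.asin(math.sqrt(a))

DETECTION_RANGE_KM = 5.0   # invented threshold, purely for illustration

boat  = (-16.92, 145.77)   # somewhere near the Great Barrier Reef
shark = (-16.95, 145.80)   # position reported by the shark's telemetry tag

if distance_km(*boat, *shark) <= DETECTION_RANGE_KM:
    print("Shark within range - notify the player by email/SMS!")
```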

I played Sharkrunners over a two week period last summer, and have just signed up for another tour of duty now ;-)

Play Along With the Real World…

In Ad hoc Game Controllers – Use Whatever Comes to Hand, I described how simple webcam/video-controlled games used basic motion detection to generate on-screen collision events when player movements intersected with the location of digital objects at some point on the screen. This contrasted with the more sophisticated motion tracking algorithms used in motion capture software.

If you watch television sport at all – golf, cricket, tennis or snooker in particular – you are likely to have seen computer graphics that ‘replay’ particular shots (tennis, cricket), or offer player’s-eye-view perspectives of the game setting (snooker).

The Hawkeye and PointTracker systems are capable of tracking a ball’s trajectory and then replaying it, as this video clip describes:

(To see an example of PointTracker in action, visit the 2006 US Open PointTracker website.)

A rather less exact approach appears to be used as part of the Cricinfo 3d visualisation, as described in this post by Martin Belam: Will virtual representations of sporting events become part of the online rights economy?: “Rather than just describing the action in near real-time, they show you, using a game engine to simulate the match being played in their Cricinfo 3D feature. As each ball in the over is bowled, a Shockwave plug-in on the web-page illustrates the action.”

In this case, it seems that a canned repertoire of deliveries and shots is used as a palette from which a shot “replay” can be illustrated (rather than a faithful visualisation of the actual ball trajectory and the stroke played?)

Some time ago, I wondered aloud in a blog post about whether or not the time was approaching when the TV sports viewer might be able to ‘play along’ with TV sports action (Re:Play – The Future of Sports Gaming? “I’ll Take it From Here…”):

For example:

- in a cricket sim, rather than watch a replay of a particular delivery, you could take the bat and see if you could do better. The fielding positions and the actual flight of the ball (captured using something like HawkEye) would be faithful – at least at the start of the shot!

- in a snooker sim, you could pick up a (real) frame from a 147 break after the reds have been cleared.

- in an F1 race, you could take over the drive from a real driver. The AI controlling the other drivers could directly simulate the actual race for at least as long as your actions have no influence on any other car.

- in a round of golf, you could matchplay an actual game against someone else – or pick up the hole at any point in a championship winning round.

So what? you may say… Sounds a bit dull… just replaying some old game…

Ah yes – but what about if you ‘take it from here’ during the actual event and play along, maybe split screen style?

Or maybe during the TV replay, your digital ents box offers you a re:play? That is, you get to try the shot, etc. (maybe even ‘for real’, Wii style ;-)

It’d be one way of filling time while the adverts are on!

What this boils down to is interactive sports viewing; or in other words: “I could have made that one – here, I’ll show you…”.

So is the era of “interactive television sports” a real possibility? It would seem so…

Read this BBC Technology news post on Real racing in the virtual world. What information needs to be collected to create a “play along with the race” Formula One motor racing game? What problems can you foresee in this sort of playalong game, where you are competing against several real cars and drivers in real time, compared to what is essentially a turn-based, single-player game such as golf?

You can read more about iOpener’s approach to “real time racing” here: Real Time Racing.

Ad hoc Game Controllers – Use Whatever Comes to Hand

When webcams first started to appear, many of them shipped with simple games that incorporated basic image processing tools – such as motion detection – that let players engage in augmented reality “webcam controlled gaming”. In contrast to the more elaborate forms of augmented reality where digital objects are overlaid on tracked, registered images (as described in Introducing Augmented Reality – Blending Real and Digital Worlds), a typical webcam game might simply superimpose ‘balloons’ on top of a video image of the player and require the player to jump around and ‘pop’ the balloons.

The premise behind many of these games was that if a moving object (as captured by the webcam) moved to an area of the screen occupied by a digital object (such as a ‘balloon’), then the moving real-world player would be deemed to have hit the digital object. That is, if the moving player image is at the same part of the screen as a digital object, a ‘collision event’ is raised, just as if a player-controlled game character had collided with the object in a ‘normal’ game, and some action is taken as a result (such as the balloon being ‘popped’).
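To make that premise concrete, here’s a minimal Python sketch of frame-differencing motion detection and the resulting ‘collision event’ test. It assumes webcam frames arrive as greyscale numpy arrays; the thresholds, names and synthetic frames are all invented for illustration.

```python
import numpy as np

MOTION_THRESHOLD = 30   # how different a pixel must be to count as 'movement'
HIT_FRACTION = 0.2      # fraction of the balloon's area that must be 'moving'

def motion_mask(prev_frame: np.ndarray, curr_frame: np.ndarray) -> np.ndarray:
    """Frame differencing: True wherever the image changed noticeably."""
    diff = np.abs(curr_frame.astype(int) - prev_frame.astype(int))
    return diff > MOTION_THRESHOLD

def balloon_popped(mask: np.ndarray, x: int, y: int, w: int, h: int) -> bool:
    """Raise a 'collision event' if enough motion falls inside the balloon's box."""
    region = mask[y:y + h, x:x + w]
    return region.mean() > HIT_FRACTION

# Two synthetic 240x320 greyscale 'webcam frames': the player waves an arm
# through the region occupied by a balloon at (100, 50), 40x40 pixels.
prev = np.zeros((240, 320), dtype=np.uint8)
curr = prev.copy()
curr[50:90, 100:140] = 200   # bright 'moving arm' pixels

mask = motion_mask(prev, curr)
if balloon_popped(mask, x=100, y=50, w=40, h=40):
    print("Pop!")
```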

As many of the algorithms used to perform motion detection are computationally expensive, it was no surprise that one of the early webcam games was promoted by chipmaker Intel, who were keen to demonstrate how powerful their processors were at the time. As this quote from Justin Rattner, Intel’s chief technology officer, in a Business Week article from December 2007 suggests, the trend toward using increasing computer processing power to implement ever more powerful video-based control systems may still hold true: “We imagine some future generation of [Nintendo’s] Wii won’t have hand controllers. You just set up the cameras around the room and wave your hand like you’re playing tennis” (Supercomputing for the Masses).

Like this, maybe? Camspace:

As far as the ‘user’ is concerned, what are the main similarities and differences between the simple motion detection used in basic webcam games, and the techniques used in the more elaborate motion tracking techniques required for augmented reality and motion capture?

As with many other technologies that have left the controlled environment of the lab and made it into everyday use, it is worth watching how artists are making use of these technologies in their own artworks – the public art gallery being a halfway house between the lab and the everyday world – to get an idea of how we might interact with these systems in the (near) future.

For example, the following video shows Animata in action – a “real-time animation software” toolkit that has been “designed to create interactive background projections for concerts, theatre and dance performances”.

Keep an eye out for installations in your local gallery, arts center or media center that make use of video-based controllers – they are an excellent way of exploring some of the issues and ideas around how we might interact in the future…

Digital Worlds in the Real World: Augmenting Reality with a 21st Century Take on Pepper’s Ghost

In Introducing Augmented Reality – Blending Real and Digital Worlds, I introduced the idea of augmented reality in which digital graphical objects are overlaid on video images of real world scenes, to give the appearance of digital objects inhabiting the real world. By overlaying the digital objects on top of ‘tracked’ real world objects, it is possible for a human puppeteer to enter the digital realm and both control and interact with digital animations. But what about the opposite case: digital characters entering the “real world” and joining human actors on a physical stage, rather than the actors having to move behind the video screen?

If you have ever been to a science discovery center, it’s quite likely that you’ll have seen an exhibit based around a piece of theatrical trickery known as Pepper’s Ghost.

The effect is used to make a ghostly apparition appear and disappear from a scene – see if you can find out how the effect works…

A recent twist to the illusion allows digitally projected 3D animations to come to life on stage:

The same trick can be used to create a feeling of telepresence, for example in the case of large business presentations:

Recalling the theatrical origins of the technique, this New York Times “presentation” describes a recent theatrical performance that uses the same effect: First Person Ghost Lighting

One company that is championing the “digital Pepper’s Ghost” approach is the UK-based Musion Systems Ltd (Musion Systems blog). The following video sequence shows how they create the illusion with their Musion Eyeliner system.

To what extent does this system represent something ‘new’ and to what extent is it just an extension of Victorian theatrical stagecraft?

For what sorts of game might this technique provide a compelling user interface? Are there any game genres where it is unlikely to be effective? Why?

Introducing Augmented Reality – Blending Real and Digital Worlds

The 1988 film “Who Framed Roger Rabbit” merged the worlds of human live action and classic Disney animation to present a world in which human actors and cartoon characters acted alongside each other (see trailer, or Amazon product listing).

The animations were painted on to the original “human action” film during a period of post production, but nevertheless, the result is still quite compelling.

Many film productions today also use post production techniques to add photo-realistic computer generated imagery (CGI) to a film, particularly in the area of special effects and ‘digital virtual set design’, but what if it were possible to actually interact with digital creations in real time?

Step in, augmented reality…

There are several augmented reality toolkits available on the web, many of which use the approach demonstrated in this BBC Radio 1 promotion:

A series of easily identified, high-contrast images is registered with the AR system (that is, the system is trained to recognise them), and different movie clips are then associated with those images. When an image is recognised, the associated video clip is overlaid on it and starts to play. As well as videos, 3D computer graphics may also be superimposed on the detected image.
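None of these commercial systems publish their internals, but you can get a feel for the “register a high-contrast pattern, then overlay something on it” pipeline using the ArUco markers that ship with the opencv-contrib library. The sketch below only detects and outlines markers – a real AR system would go on to estimate each marker’s pose and render the associated clip or model on top of it – and the marker-to-clip mapping is invented.

```python
# A sketch of marker detection with OpenCV's aruco module (opencv-contrib-python).
# Note: in OpenCV >= 4.7 the call is cv2.aruco.ArucoDetector(dictionary).detectMarkers(gray).
import cv2

dictionary = cv2.aruco.getPredefinedDictionary(cv2.aruco.DICT_4X4_50)
clips = {0: "promo_clip.mp4", 1: "robot_model"}   # invented marker-id -> media mapping

cap = cv2.VideoCapture(0)                         # default webcam
while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    corners, ids, _rejected = cv2.aruco.detectMarkers(gray, dictionary)
    if ids is not None:
        # Outline what was recognised; a real AR system would estimate each
        # marker's pose here and render the associated video/3D model on it.
        cv2.aruco.drawDetectedMarkers(frame, corners, ids)
        for marker_id in ids.flatten():
            if marker_id in clips:
                print("Overlay", clips[marker_id], "on marker", marker_id)
    cv2.imshow("AR marker sketch", frame)
    if cv2.waitKey(1) & 0xFF == ord("q"):         # press q to quit
        break
cap.release()
cv2.destroyAllWindows()
```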

You can see more clearly how different patterns might be registered and associated with different 3D models on this page about the ARTag augmented reality system: ARTag. (See also the ARToolkit – warning: if you don’t know what a compiler is, this isn’t for you…)

One of the easiest ways of experiencing augmented reality is to try out the Fix8 animation tool that lets you animate your own appearance by registering key facial features and then animating on top of those: Fix8

(If you do have a go at creating a Fix8 movie, why not post a link back to it here as a comment to this post?! :-)

How many ways can you think of using augmented reality? Write down two or three ideas as a comment to this post.

To get you started, here’s how you might use augmented reality to support car maintenance:

…or maybe Lego car maintenance!

(Lego have also started experimenting with augmented reality kiosks that register a tag on a Lego box and then display a 3D animation of the model that can be constructed from that Lego set sitting on top of the box.)

Finally, here are a few ideas for augmented reality games: Top 10 augmented reality demos that will revolutionize video games. (Note that this list may be a little dated by now – if you manage to find any more recent examples, please post a link back to them in a comment to this post.)

So how does AR actually work? To explain that, I’ll need another post…

Friday Fun #15 Spore

A month away from the Digital Worlds blog, but I’m going to try to get back in the flow for a week or two, or at least stack up a few posts that can trickle out over the next few weeks… So to ease my way in, here’s a (late) Friday Fun post about the Spore Creature Creator.

If you haven’t heard about Spore, it’s an ‘ecosystem’ game (still in development) whose release has been hyped – and eagerly anticipated – for well over a year. Created by the same team that produced SimCity, a simulation game for growing and managing your own city (and which you can play in its original form online), Spore is the next step in simulation games, providing the opportunity to reach beyond the simulation of a city or civilisation, and “play with Creation”.

Did you spot the hype in the above paragraph?! ;-) The ethos of the game – which I take to be creating ecosystems that evolve over time – is reminiscent of an early ‘Artificial Life’ game from the UK, called Creatures. In the Creatures universe, players created creatures (‘Norns’) that developed and learned throughout their lifetime, and that could ‘breed’ with other Norns. Creatures is still available from Gameware Development (who also created the popular – and Creatures-inspired – CBBC Bamzooki game), along with a free net-enabled version of the ‘game’ (Docking Station Central).

The Spore Creature Creator looks like a lot of fun (I haven’t had a chance to play with it yet :-( ), if the user-generated creations uploaded to the Sporepedia Gallery are anything to go by!

Notwithstanding the complexity of the game, the Creature Creator tool also represents a huge achievement in design terms, as this interview with Will Wright, the game’s creator, suggests: Will Wright talks Spore and defensive cows (Joystiq interview)

