Archive for the 'Aside' Category

Perplex City Exposed

Moving on from ARGs Uncovered, which reviewed the IGDA white paper on Alternate Reality Games (ARGs), this post provides you with an opportunity to find out for yourself a little more about the design of the first Perplex City ARG.

First up is a presentation by Adrian Hon (who you may remember we’ve come across before…), of the game development company Mind Candy, which created the original Perplex City ARG as well as its successor….

To set the scene, you may like to read a little background about the game in this review of Perplex City… The Wikipedia entry for Perplex City also provides a brief summary of the game.

So now you sort of know what it is, let’s hear about the game from the inside: “Alternate Reality Games and Perplex City Season 2”, by Adrian Hon (Google Tech Talks)

Whilst you are listening to the presentation – or maybe afterwards? ;-) – you may like to visit the Perplex City Season 1 retrospective website.

One of the great features of this site is an archive of some of the design notes used when creating the game (Perplex City Season One Story Planning).

You may notice that the game was storyboarded using a series of flowcharts to describe the order of events that were planned for the game. Flowcharts can be used to provide a very concise summary of the key actions and decision points that must be negotiated in a game in order for the story to progress. They can also reveal the complexity of a game’s design at a glance!

The site also contains a brief history of the evolution of the Perplex City map that provided a solid foundation for the game.

You can still explore an interactive version of the Perplex City map at http://www.perplexcitymap.com/.

Representing Analogue Sound Files in a Digital Way

In the post Finishing the Maze – Adding Background Music, I mentioned there were two sorts of sound file that Game Maker could play: sound files (like WAV files, or compressed MP3 files) or MIDI files.

In this aside post, I just want to briefly review the principle of how analogue (continuously varying) sound recordings can be stored as digital files using material sourced in part from the OpenLearn units “Crossing the boundary – analogue universe, digital worlds” (in particular the section Crossing the boundary – Sound and music) and “Representing and manipulating data in computers” (in particular the section Representing sound).

Sound and music

Second only to vision, we rely on sound. Music delights us, noises warn us of impending danger, and communication through speech is at the centre of our human lives. We have countless reasons for wanting computers to reach out and take sounds across the boundary.

Sound is another analogue feature of the world. If you cry out, hit a piano key or drop a plate, then you set particles of air shaking – and any ears in the vicinity will interpret this tremor as sound. At first glance, the problem of capturing something as intangible as a vibration and taking it across the boundary seems even more intractable than capturing images. But we all know it can be done – so how is it done?

The best way into the problem is to consider in a little more detail what sound is. Probably the purest sound you can make is by vibrating a tuning fork. As the prongs of the fork vibrate backwards and forwards, particles of air move in sympathy with them. One way to visualise this movement is to draw a graph of how far an air particle moves backwards and forwards (we call this its displacement) as time passes. The graph (showing a typical waveform) will look like this:

An image showing the pattern of peaks and troughs in air particle displacement over time by vibrating a tuning fork
Displacement of air particles over time by vibrating a tuning fork

Our particle of air moves backwards and forwards in the direction the sound is travelling. As shown in the previous figure, a cycle is the time between adjacent peaks (or troughs), and the number of cycles completed in a fixed time (usually a second) is known as the frequency. The amplitude of the wave (i.e. the maximum displacement of the line in the graph) determines how loud the sound is, while the frequency determines how low or high pitched the note sounds to us. Note, though, that the diagram is theoretical; in reality, the amplitude will decrease as the sound fades away.

A sound of high frequency is one that people hear as a high-pitched sound; a sound of low frequency is one that people hear as a low-pitched sound. Sound consists of air vibrations, and it is the rate at which the air vibrates that determines the frequency: a higher vibration rate means a higher frequency. So if the air vibrates at, say, 100 cycles per second, then the frequency of the sound is said to be 100 cycles per second. The unit of 1 cycle per second is given the name ‘hertz’, abbreviated to ‘Hz’. Hence a frequency of 100 cycles per second is normally referred to as a frequency of 100 Hz.
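
If it helps to make amplitude and frequency concrete, here is a minimal sketch (my own illustration, not part of the OpenLearn material) modelling the tuning fork’s pure tone as a sine wave; the amplitude and frequency values are arbitrary:

```python
import math

# A pure tone like the tuning fork's can be modelled as a sine wave:
# displacement d(t) = A * sin(2 * pi * f * t). The values of A and f below
# are made-up illustrative numbers.
A = 10.0   # amplitude: the maximum displacement, which sets the loudness
f = 440.0  # frequency in Hz (cycles per second), which sets the pitch

def displacement(t):
    """Displacement of an air particle at time t seconds."""
    return A * math.sin(2 * math.pi * f * t)

# One cycle lasts 1/f seconds; a quarter of a cycle after the start the
# particle is at its maximum displacement A.
print(displacement(0.0))          # 0.0
print(displacement(1 / (4 * f)))  # 10.0 (the peak)
```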

Of course, a tuning fork is a very simple instrument, and so makes a very pure sound. Real instruments and real noises are much more complicated than this. An instrument like a clarinet would have a complex waveform, perhaps like the left-hand graph (a) below, and the dropped plate would be a formless nightmare like the right-hand one (b).

Typical waveforms from a clarinet playing a note and a plate being dropped
Typical waveforms

Exercise

Write down a few ideas about how we might go about transforming a waveform into numbers. This is a difficult question, so as a clue, why not look at how numbers may be used to encode images: Subsection 4.3 of the OpenLearn Unit Crossing the boundary – analogue universe, digital worlds.

Discussion

In a way the answer is similar to that for transforming a picture into numbers (see Subsection 4.3 of the OpenLearn Unit Crossing the boundary – analogue universe, digital worlds). We have to find some way to split up the waveform. We split up images by dividing them into very small areas (pixels). We can split a sound wave up by dividing it into very small time intervals.

What we can do is record what the sound wave is doing at small time intervals. Taking readings like this at time intervals is called sampling. The number of times per second we take a sample is called the sampling rate.

I’ll take the tuning fork example, set an interval of, say, 0.5 seconds, and look at the state of the wave every 0.5 seconds, as shown below.

An image showing the sampling rate for the tuning fork at an interval of 0.5 seconds
Sampling a sound wave

Reading off the amplitude of the wave at every sampling point (marked with dots), gives the following set of numbers:

+9.3, −3.1, −4.1, +8.2, −10.0, +4.0, +4.5

as far as I can judge. Now, if we plot a new graph of the waveform, using just these figures, we get the graph below.

The sample from the previous image shown as a graph
Waveform reconstructed from the sample values

The plateaux at each sample point represent the intervals between samples, where we have no information, and so assume that nothing happens. It looks pretty hopeless, but we’re on the right track.
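
Here is a sketch of the sampling step itself (my illustration, using a made-up pure tone rather than the exact waveform in the figure):

```python
import math

# Sampling: read off the displacement of the wave every `interval` seconds.
# The amplitude and frequency are arbitrary values for the illustration.
A, f = 10.0, 0.9   # made-up amplitude and frequency
interval = 0.5     # sampling interval in seconds (sampling rate = 2 samples/second)

sample_times = [i * interval for i in range(7)]  # 7 sample points, as in the figure
samples = [A * math.sin(2 * math.pi * f * t) for t in sample_times]
print([round(s, 1) for s in samples])

# Re-drawing the wave from these few numbers alone gives the blocky,
# plateaued shape shown above.
```

(You can re-run the sketch with a different `interval` to experiment.)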

Self-Assessment Question (SAQ)

How can we improve on the blocky figure shown directly above?

The problem here is similar to one that may be encountered with a digitised bitmapped (pixelated) image. In that case we made our spatial division of the image finer by making the pixel size smaller. In this case we can make our temporal division of the waveform finer by making the sampling interval smaller.

So, let’s decrease the sampling interval by taking a reading of the amplitude every 0.1 second.

Image showing the amplitude every 0.1 second
Sampling every 0.1 second

Once again, I’ll read the amplitude at each sampling point and plot them to a new graph, which is already starting to look a little bit more like the original waveform.

Graph of the amplitude from the previous image which looks more like the original waveform because it is more detailed
Waveform using higher sampling rate

So how often must the sound be sampled? There is a rule called the sampling theorem which says that if the frequencies in the sound range from 0 to B Hz then, for a faithful representation, the sound must be sampled at a rate greater than 2B samples per second.

Example

The human ear can detect frequencies in music up to around 20 kHz (that is, 20 000 Hz). What sampling rate is needed for a faithful digital representation of music? What is the time interval between successive samples?

Answer
20 kHz is 20 000 Hz, and so the B in the text above the question is 20 000. The sampling theorem therefore says that the music must be sampled at more than 2 × 20 000 samples per second, which is more than 40 000 samples per second.

If 40 000 samples are being taken each second, they must be 1/40 000 seconds apart. This is 0.000025 seconds, which is 0.025 milliseconds (thousandths of a second) or 25 microseconds (millionths of a second).

The answer shows the demands made on a computer if music is to be faithfully represented. Samples of the music must be taken at intervals of less than 25 microseconds. And each of those samples must be stored by the computer.
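
The arithmetic in the answer is easy to check; here it is as a couple of lines of Python, used purely as a calculator:

```python
B = 20_000          # highest frequency present in the music, in Hz
min_rate = 2 * B    # the sampling theorem requires MORE than 2B samples/second

print(min_rate)     # 40000 samples per second
print(1 / min_rate) # 2.5e-05 seconds = 25 microseconds between samples
```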

If speech is to be represented then the demands can be less stringent, first because the frequency range of the human voice is smaller than that of music (up to only about 12 kHz) and second because speech is recognisable even when its frequency range is quite severely restricted. (For example, some digital telephone systems sample at only 8000 samples per second, thereby cutting out most of the higher-frequency components of the human voice, yet we can make sense of what the speaker on the other end of the phone says, and even recognise their voice.)

SAQ

  • Five minutes of music is sampled at 40 000 samples per second, and each sample is encoded into 16 bits (2 bytes). How big will the resulting music file be?
  • Five minutes of speech is sampled at 8000 samples per second, and each sample is encoded into 16 bits (2 bytes). How big will the resulting speech file be?

5 minutes = 300 seconds. So there are 300 × 40 000 samples. Each sample occupies 2 bytes, making a file size of 300 × 40 000 × 2 bytes, which is 24 000 000 bytes – some 24 megabytes!

A sampling rate of 8000 per second will generate a fifth as many samples as a rate of 40 000 per second. So the speech file will ‘only’ be 4 800 000 bytes.
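
The same calculation, written as a small Python function (my sketch, not from the OpenLearn unit) so you can try other durations and rates:

```python
def audio_size_bytes(seconds, samples_per_second, bytes_per_sample):
    """Size of uncompressed audio: one reading per sample, headers ignored."""
    return seconds * samples_per_second * bytes_per_sample

print(audio_size_bytes(300, 40_000, 2))  # 24000000 bytes: the 24 MB music file
print(audio_size_bytes(300, 8_000, 2))   #  4800000 bytes: the speech file
```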

This process of sampling the waveform is very similar to the breaking up of a picture into pixels, except that, whereas we split the picture into tiny units of area, we are now breaking the waveform into units of time. In the case of the picture, making our pixels smaller increased the quality of the result; likewise, making the time intervals at which we sample the waveform smaller will bring our encoding closer to the original sound. And just as it is impossible to make a perfect digital coding of an analogue picture, because we will always lose information between the pixels, so we will always lose information between the times we sample a waveform. We can never make a perfect digital representation of an analogue quantity.

SAQ

Now we’ve sampled the waveform, what do we need to do next to encode the audio signal?

Answer

Remember that after we had divided an image into pixels, we then mapped each pixel to a number. We need to carry out the same process in the case of the waveform.

This mapping of samples (or pixels) to numbers is known as quantisation. Again, the faithfulness of the digital copy to the analogue original will depend on how large a range of numbers we make available. [If “8-bit” sampling is used, 256 different amplitude levels can be measured; that is, 2 × 2 × 2 × 2 × 2 × 2 × 2 × 2 = 2⁸ = 256 different levels.]

The human eye is an immensely discriminating instrument; the ear is less so. We are not generally able to detect pitch differences of less than a few hertz. A relatively modest range of numbers is therefore enough to satisfy the ear: sound wave samples are generally mapped to 16-bit numbers.

Copyright OpenLearn/The Open University, licensed under a Creative Commons Attribution-NonCommercial-ShareAlike 2.0 Licence.
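
To make the quantisation step concrete, here is a small sketch of my own (not part of the OpenLearn material) that maps a sampled amplitude onto one of 2^bits equally spaced levels; the amplitude range is an assumption:

```python
# Quantisation sketch: map a sample in the range -A..+A onto one of
# 2**bits equally spaced integer levels.
A = 10.0   # assumed maximum amplitude of the waveform

def quantise(sample, bits=8):
    levels = 2 ** bits                 # 8 bits -> 256 levels; 16 bits -> 65536
    step = 2 * A / (levels - 1)        # spacing between adjacent levels
    return round((sample + A) / step)  # integer level number, 0 .. levels - 1

print(quantise(9.3))           # one of 256 coarse levels with 8-bit sampling
print(quantise(9.3, bits=16))  # a much finer-grained level number with 16 bits
```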

The WAV encoding that Game Maker can play back is based on the above principles. WAV files can be recorded at 8-bit, 16-bit or 24-bit resolution, using a sampling rate set between 8,000 Hz and 48,000 Hz. If you calculate some file sizes for different length audio clips at a variety of sampling rates and quantisation levels, you will see that WAV audio files can be quite big, even for short audio clips.
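
For example, a quick sweep over some common settings (a sketch assuming a 30-second mono clip, and ignoring the small file header) shows how quickly the sizes grow:

```python
# Rough sizes of the raw audio data for a 30-second mono clip across a
# range of sampling rates and bit depths that WAV supports.
for rate in (8_000, 22_050, 44_100, 48_000):  # samples per second
    for bits in (8, 16, 24):                  # bits per sample
        size = 30 * rate * bits // 8          # seconds x rate x bytes/sample
        print(f"{rate:>6} Hz, {bits:>2}-bit: {size / 1_000_000:5.2f} MB")
```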

The MP3 format uses a similar approach to digitise the sound file in the first instance, but then reduces the size of the digital file by using a compression technique to encode the file again in another way. Compression effectively squashes the file size down so that it becomes smaller, but we shall not consider that here.

Growing the Platform Game – You’re On Your Own, Now (If You Want to Be!)

Having introduced the idea of a platform game, your mission, should you care to accept it, is to build a platform game of your own, on your own, using the Game Maker Platform game tutorial, as well as any other resources you happen to find, to guide you…

If you don’t fancy the idea of that, I’ll carry on developing the platform game here in the Digital Worlds uncourse blog, at a slightly gentler pace. I’ll also show how to use some of the new techniques in the context of the maze game world.

So – if the DIY platform game adventure is for you, read on… feel free to blog your progress and link your posts back here, or even set up a page for your game in the Digital Worlds wiki. If not, get yourself a cup of tea, write down some requirements about how you’d like some monsters to behave in your platform game world, and stay posted…

For those of you who’ve opted for the solo mission, visit the YoYo games website, and download the Platform Game Tutorial (if you haven’t already done so) and use it to guide your exploration of how to develop a simple, arcade style 2D platform game.

We’ve already looked at how to create a simple platform world for your player character to explore, but the Game Maker tutorial goes into more detail. If you work through it, you will learn how to:

  • introduce monsters into the game: you already know the basics, but here you’ll find out how to ‘squish’ monsters by jumping on them, and how to use invisible markers to limit the territory the monsters patrol (see the sketch after this list);
  • make the platform look pretty: the tutorial includes a tileset in the Resources folder that is ideal for creating a stylish looking platform game;
  • construct – and explore – huge rooms: our games to date have shown the whole extent of a room within the screen view. It is possible, however, to spread a room over several screens through the use of views. At any particular time, the player character is kept in focus and a view of a small part of the room around the character is presented. This technique requires the player character to explore several screens’ worth of room – not all of which can be seen at once – in order to negotiate the level;
  • introduce ramps and ladders: as well as jumping to get between levels, it’s sometimes nicer to walk – or climb. The tutorial describes how to configure the player character to walk up a ramp, though you’ll have to do a bit of thinking yourself (or peek at the tutorial program code!) to work out how to create a ladder with the correct properties!

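Here, as promised above, is a rough sketch of the monster patrol idea: an engine-neutral illustration in Python, not the tutorial’s actual Game Maker code, with all the names and numbers made up by me. The monster is a simple two-state machine (moving left or moving right) that turns around whenever it reaches an invisible marker:

```python
class Monster:
    def __init__(self, x, left_marker, right_marker, speed=2):
        self.x = x
        self.left, self.right = left_marker, right_marker
        self.speed = speed
        self.direction = 1  # +1 = patrolling right, -1 = patrolling left

    def step(self):
        """Called once per game frame."""
        self.x += self.direction * self.speed
        # Hitting an invisible marker flips the state, so the monster
        # never wanders outside its own patch of the level.
        if self.x >= self.right or self.x <= self.left:
            self.direction = -self.direction

monster = Monster(x=100, left_marker=64, right_marker=192)
for frame in range(100):
    monster.step()
print(monster.x)  # still somewhere between the two markers
```
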
Feel free to work through the above tutorial as quickly, and in as much depth, as you like.

If you would like to explore the construction of platform games in a slightly more theoretical, formal academic sense, try working through the Game Maker platform game lecture notes from the UCSC Foundations of Interactive Game Design course: Creating a Platformer using Game Maker (collision detection, undesirable collision detection cases, creating simple state machines, jumping mechanic) [(PDF) (iPaper)], Creating a Platformer using Game Maker, Part 2 (advanced collision detection, all-in-one collision handler for platformers, jumping onto a moving platform) [ (PDF) (iPaper)]. Audio versions of the lectures are also available from the actual course site.

Remember, if you don’t fancy the idea of working through the YoYo Games Game Maker tutorial at your own hectic rate, I’ll carry on at a gentler pace, over the next week or two, here in the Digital Worlds uncourse blog…

Springboard, And A Short Aside – The Persistence of Vision

Many sources explain the psychophysical basis of frame based animation in terms of the ‘persistence of vision’ effect, whereby it is claimed that the retina of the eye retains an afterimage of one frame and somehow blends it with the next, thus providing an illusion of continuous motion.

Unfortunately, while this explanation is the one that is typically offered as the mechanism by which we experience the illusion of motion from frame-by-frame animations, it is not the explanation accepted by cognitive psychologists. (The actual explanation is beyond the scope of this post; you need a proper Cognitive Psychology/Psychology of Perception course for that! Maybe something like this? Signals and perception: the science of the senses)

In a short essay entitled “Persistence of Vision”, Stephen Herbert provides a brief history from cinema of how this popular misunderstanding came to be. You can read the article here: Persistence of Vision.

A more comprehensive refutation of the persistence of vision explanation, along with some simple experiments (using animated GIFs ;-) that demonstrate both the actual persistence of vision effect and how it fails to account for the illusion of motion, is given by Rod Munday in this ‘lecture’ on The Moving Image. Links to several other academic papers on the subject are also provided.

PS I’ve placed this post in the Springboard category, as well as a couple of other categories. Springboard posts will be light on content (‘incomplete’ would be another word for it!) but will always link out to one or more hopefully reputable sources, from where you can go on to find out more about a particular topic.

I’ve also categorised it as an Aside, so it’s slightly off the main topic of the uncourse…

Please feel free to comment back with anything you find out from following the links that is relevant to the springboard topic. For example, in this case, it might be a summary of how the ‘persistence of vision’ argument came to be proposed and commonly accepted; a review of an experiment that attempts to refute the persistence of vision hypothesis; or an explanation of what is thought to explain our perception of motion from watching a sequence of fixed images presented at a rate of several images per second.

Friday Fun #1 Linerider

As well as trying to post a ‘game making tutorial’ each Friday, I’ll also try to find some idling curiosity for you to play with over the weekend.

First up is Line Rider. Line Rider is, well, Line Rider…

Draw a line (in your browser, on the Line Rider site) and set the sledger off… brilliantly compelling – but why is it such fun?

If you find yourself hooked, here are some Line Rider video tutorials.

I think you can make movies of your own line rides? If you can and do, why not post a link as a comment? ;-)

Just by the by, what principles from physics does the Line Rider “game” depend on? (and indeed, is Line Rider a game?!).

How are these “physical laws” represented in the Line Rider, and how do they affect the behaviour of the character within it?
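
If you want a starting point for thinking about it, here is a deliberately speculative sketch of the sort of model such a toy might use: this is my guess at the kind of physics involved, with made-up parameters, not Line Rider’s actual engine:

```python
import math

# Speculative physics sketch: gravity accelerates the sledder along the
# slope of the drawn line, opposed by a simple friction/drag term.
g = 9.8  # gravitational acceleration, m/s^2, pulling straight down

def step_speed(v, slope_angle, dt, friction=0.02):
    """Advance the sledder's speed along a line inclined at slope_angle radians."""
    a = g * math.sin(slope_angle) - friction * v  # downhill pull minus drag
    return v + a * dt

v = 0.0
for _ in range(100):  # simulate 2 seconds on a 30-degree slope
    v = step_speed(v, slope_angle=math.radians(30), dt=0.02)
print(round(v, 2))    # the steeper the line, the faster the sledder ends up
```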

An Aside – Checking Book References Online

One of the things I meant to mention in the previous post, when I referenced a famous quote from Johan Huizinga’s Homo Ludens (“Play is a voluntary activity…”), was how to view this quote in its original context using services such as the online Google Books service. (I’ve actually done this “in context” in one of the comments to that post, with a link to Live book search.)

[Screenshot: the Huizinga quote viewed in context in Google Book Search]

This passage is quite widely cited in a range of other books on the subject of game theory. If you click on the Popular Passages link, you can gain access to references to some of those books that have cited (that is, quoted and referenced) Huizinga’s view:

[Screenshot: the Popular Passages listing for the quote in Google Book Search]

Microsoft’s rival service – Live book search – http://books.live.com – also allows you to search within books for that famous quote, although you do need to log in to see the quote in context, for copyright licensing reasons…

I’ll have a bit more to say about copyright, and its enforcement using digital rights management in the computer games industry, in the next week or two…

There are two three (!) main reasons I wanted to bring these services to your attention (and I guess I should add a third service, Amazon’s Search Inside this Book, to the list, too).

Firstly, they demonstrate how the interface provides an interactive way of searching the full text of a book whilst online. (Note that not all books have been digitised yet, and copyright reasons mean that you can’t necessarily see all the content of those that have been, but the trends would seem to indicate that the content of books can increasingly be googled!)

Secondly, they are something you can play with, which leads back to the question of just what a game is, and in particular what its relationship with “play” is…

(If nothing else, I’d like to try to get you think about how having a playful attitude can often help drive a positive learning attitude to a particular problem, such as learning how to make the most of new technology…)

Thirdly, they provide a way in to finding out more about what people have had to say about the notion of games through the book literature.

If any of the books take your fancy, be warned – many of the academic publications can be quite expensive. However, you might be able to find them in a library near you using a service such as WorldCat. For example, here’s a WorldCat library book search around Milton Keynes for a copy of Huizinga’s book:

[Screenshot: WorldCat search results for libraries near Milton Keynes holding Huizinga’s book]

Okay, that’s enough of that aside… (unless you can think of a WorldCat game we could play, such as a booksearch-cum-treasure hunt involving a hunt across UK libraries for a set of books that all cross reference each other, perhaps?! ;-)

If you haven’t already done so, watch the first part of the interactive tutorial on “Understanding Games”.

As you do so, ask yourself: what does the character identify as the key features a game must have? And how is the publishing medium itself (i.e. the animation) being used to communicate the message it contains?

I’ll post my answers to those questions tomorrow, along with the first Game Maker tutorial…
