Whatever Happened to Virtual Reality?

Sherman Young

Virtual Reality (VR) sometimes appears to be 'so 1990s'. The promise of immersive utopias as described in Wired magazine stories on Jaron Lanier (http://www.wired.com/wired/archive/1.02/jaron.html) hasn't been realised and, for many, VR dreams are little more than murky memories of Virtuality headsets and Michael Douglas in Disclosure.

Despite the perception of its demise, VR is very much alive and well, albeit in forms somewhat different from its early promises. Computer games are where VR technologies are now most obvious. From driving sims to first-person shooters, the headset has been replaced by a screen, the glove by a gamepad. The immersive world is now defined by physics engines and gameplay. Thus, as well as creating virtual environments, VR researchers have become increasingly interested in what's happening within those environments.

The second international conference on Virtual Storytelling was a showcase for VR researchers to demonstrate their engagement with ideas of character and narrative, and an opportunity for computer scientists and engineers to grapple with the rather less stable world of the media arts. From the outset, it was apparent that C.P. Snow-style divides between art and science do exist.

The conference was anchored by three keynotes that represented a range of expertise and understanding.

It began with a paper from Ken Perlin (http://mrl.nyu.edu/~perlin/), an NYU computer scientist whose work has crossed over into the media arts. He is credited with developing 'procedural noise', a method for creating textures for CGI animations (which won him a technical Academy Award in 1997), and he has numerous Hollywood credits, including the seminal Disney VR film Tron. Perlin's paper argued that, to date, gaming characters have been bad actors. The challenge was to fill the space between a character 'that is someone else' (as in film) and a character 'that is me' (as in a game). Perlin suggested the need to create 'a character that is me, but not me', akin to the daemons in Philip Pullman's fantasy novels (http://dir.salon.com/books/feature/2000/10/18/pullman/index.html).

Applying the concept of 'noise' to character animation, Perlin demonstrated the use of 'noisy' gestures to create more evocative emotional characterisations in animated characters (http://mrl.nyu.edu/~perlin/gdc/), suggesting that such realism would lead to more convincing acting and better characterisation.

His paper then touched on a number of intriguing concepts, including what he called the 'uncanny valley': the place where representation is too close to reality to be believed. For example, the Final Fantasy movie, with its almost perfect rendering of humans, made viewers a little uneasy. It was too close to the real thing to be dismissed as animation, yet not close enough to be believable. By way of contrast, Pixar-style animation (Finding Nemo, Toy Story etc.) was seen to be more accessible and engaging for audiences.

Perlin concluded with a rather tangential call to recognise the literacies involved in computer programming, suggesting that all children should be taught programming skills, or so-called 'procedural literacy', to empower their involvement in the new creative possibilities.

The second keynote was given by Kevin Bjorke from nVidia (http://www.nvidia.com), a manufacturer of GPUs (graphics processing units). Bjorke demonstrated examples of real-time graphics rendering that wowed the geeks. He reeled off impressive polygon-count statistics and showed control over light and shade that added extraordinary realism to real-time applications such as multi-player games. More interesting was his understanding that this technology is subservient to the creativity of its users. The computer-generated look, he argued, is now part of the media zeitgeist, and audiences and users readily accept its usage. Movies such as O Brother, Where Art Thou? and Amélie, for example, used computer graphics rendering seamlessly to remove unwanted background scenery or create otherwise impossible lighting effects.

In between these two computer-scientist keynotes, a flurry of engineers presented papers on topics ranging from real-time lighting design to programming techniques for creating non-linear background music. To a humanities academic, whose natural instinct is to ask 'why', the overwhelming impression was of papers that asked (and occasionally answered) the 'how' question. One long and intriguing afternoon session consisted of four papers addressing the problem of balancing narrative and interactivity in self-generating virtual worlds. Whilst the technical focus of these papers (covering expert systems, heuristic engines, partial order plans and hierarchical actor call rules) appealed to my inner geek, I kept wanting someone to actually define what they meant by narrative or interactivity.

The telos of many conference-goers was the desire to create virtual worlds: computer-generated spaces that existed visibly, sonically and autonomously. Autonomy was defined in terms of a built-in narrative generator, capable of simulating the ebbs and flows of its characters' virtual lives and balancing users' ability to interact with their characters against the need for the world to be an interesting and compelling place in which to spend time. The research presented was largely focused on developing programming rules for rearranging snippets of plot, dialogue or character development.

A between-sessions discussion with a fellow attendee clarified what I was already thinking: that this type of technical VR research had to be accepted on its own terms; the nature of such research is to solve problems, not to ask why they should be solved.

Thankfully, the third keynote performed the necessary dissection of assumptions. Sally Jane Norman, from the École Supérieure de l'Image (http://www.eesati.fr/), challenged the dominant thinking. What narrative-generating program, she asked, could generate Waiting for Godot, a narrative in which "nothing happens, twice"? Whilst theories of narratology were all well and good, Norman reminded attendees that humanity is often required for creating engaging outcomes.

Moreover, she encouraged a broader context for virtual reality, arguing that it was limiting to understand VR environments as isolated worlds, hermetically sealed off from the rest of the world. Instead, Norman pondered whether VR technologies were overlooking the reality of hybridity: the mixing of natural and virtual realities. She argued that simulations of convincing human behaviour (as well as realistic environments) couldn't replace the imagination and its symbolic, rather than utilitarian, creations.

To support her stance, she demonstrated a number of what she called hybrid realities, firmly inserted in today's 'remix culture'. These included Anonymous Muttering (http://www.khm.de/people/krcf/AM_Rotterdam/long.html), Vectorial Elevation (http://www.alzado.net/eintro.html) and Can You See Me Now? (http://www.blasttheory.co.uk/cysmn).

Anonymous Muttering allowed multi-user creation of sound and light events in a Rotterdam nightclub; Vectorial Elevation empowered Net users globally to create lighting patterns from floodlights positioned above Mexico City's Zócalo plaza; and Can You See Me Now? gave Net participants the opportunity to engage with real-life hunters intent on tracking down their virtual, GPS-defined location. Each of these projects combined computer-based interactivity with reality-based activity and demonstrated an engagement with virtual reality technologies that went beyond the closed-in space of the screen.

Norman?s paper presented a refreshing counterpoint to the technically driven approaches that preceded her and provided the conference with the interdisciplinary balance it required.

For new media artists and critics, the Virtual Storytelling Conference emphasised the need for humanity to be a component of technological research. The 'why' question and the other 'how' questions need to be asked: not the technical how, but the social, cultural and political 'hows' that interrogate the effects and implications of particular technological pathways, and question the specific assumptions integrated into the thinking of computer scientists and engineers.