Scan | Journal of Media Arts Culture
Volume 10 Number 2 2013

Through the Screen: Deconstructing Spatial Dualism in Augmented Reality Games

Kyle Moore

Abstract

Over-emphasis on the visual capabilities of augmented reality creates a spatial dualism, exemplified by descriptions of the augmented mobile screen as “see through” (Caudell & Mizell 1992; Milgram & Kishino 1994; Milgram & Colquhoun 1999). This paper re-conceptualises augmented reality games, critiquing the often-used spatial metaphor of the screen as window. Such conceptualisations situate augmented reality games within the binary of real and virtual, placing an emphasis on the visual and technical qualities of the device rather than on the performance of mediated play enacted by the player. This paper explores the discursive constructions surrounding augmented reality, shifting away from techno-centric discussions concerned with viewing displays, and rejecting the reading of augmented reality as a visual medium. Instead, this paper focuses on the construction of space as multi-layered, exploring the metaphor of the layer in relation to the role of the player. Lev Manovich’s (2006) concept of augmented space is used to extend scholarship on augmented reality games beyond viewing regimes - focusing instead on the player’s role in interacting with multiple forms of layered code, both visible and invisible, during moments of augmented game play. This paper argues that the screen should no longer be thought of as the primary point of augmentation. Instead, the player is given the role of encoding and decoding a series of aural, visual, and information-based cues, negotiating a series of spatial layers in order to play.

Introduction

The technological process that occurs when playing mobile augmented reality games (ARGs) involves the layering of visual and aural computer-generated input over the player’s field of vision. Mobile augmented reality games utilise the in-built camera and Global Positioning System (GPS) functions of a mobile device, creating a dynamic, and often context-aware, layer of information. With the aid of Hirokazu Kato and Mark Billinghurst’s (1999) ARToolKit, mobile augmented reality game development has advanced from institutionalised and experimental games, such as ARQuake (2000) and Human PacMan (2003), towards an application, or ‘app’, driven market. Despite this increase in application development, the majority of mobile augmented reality games remain in their infancy - often relying on fiducial markers to generate dynamic computer-generated images that appear to be situated within material space. However, the inclusion of augmented reality (AR) capabilities in the latest commercial portable gaming devices, the Nintendo 3DS (2011) and PlayStation Vita (2011), suggests the potential for innovative design, and a bright future for mobile ARGs. Thus far, the conceptualisation of augmented reality has been predominantly scientific, focusing on technical aspects (Caudell & Mizell 1992; Milgram & Kishino 1994; Milgram & Colquhoun 1999). An emphasis on the development of hardware and software has led to AR being theorised primarily as a viewing display: a screen-based medium, and a screen-based user experience.

This paper aims to re-conceptualise mobile augmented reality games, focusing on the player’s experience of multi-layered, data-dense spaces. Shifting away from augmented reality as a viewing display, this paper explores the interaction between visible and invisible layers of code, and the process of encoding and decoding that the player undertakes when playing. In doing so, this paper deconstructs the spatial dualisms that AR has inadvertently been subjected to. An over-emphasis on the technological and software-based capabilities of AR has resulted in the discursive construction of a binary - situating the real and virtual as inherently distinct. This paper breaks down such dualisms, rejecting the oft-used metaphor of the screen-as-window (Manovich 2001; Friedberg 2006) and techno-centric discussions of AR as a viewing display. Instead, this paper explores the metaphor of the layer (Verhoeff 2012a; 2012b; 2012c) - placing an emphasis on the role of the player and the act of encoding multi-layered spaces.

In 1968, Ivan Sutherland developed what is arguably the very first augmented reality display system, successfully layering a dynamic computer-generated image over a user’s field of vision. However, in documenting its production, he alludes to a problematic conceptualisation of augmented reality as a display system - the window. Sutherland’s (1968) ‘head-mounted three-dimensional display’ presents the user with a perspective image that changes when he or she moves. In his highly technical discussion detailing the development of this display, Sutherland describes the process of framing the user’s field of view as “windowing”. The term refers specifically to a “clipping divider”, used to focus the user’s field of vision and to create a seamless layering of computer-generated imagery. Although relevant to the design process, the term establishes a seemingly useful metaphor with which to conceptualise AR displays. While Sutherland’s application of the screen (or head-mounted display) as window does not intentionally allude to spatial dualisms, it established a way of thinking about augmented reality’s viewing regimes that situates the image as distinctly separate from the material space it appears in, thus constructing a problematic binary between the real and virtual.

Despite Sutherland’s (1968) early construction of an augmented reality head-mounted display, the term “augmented reality” was not coined until 1992, when Boeing employee Thomas Caudell described a head-mounted digital display as a technology “used to ‘augment’ the visual field of the user with information necessary in the performance of the current task, and therefore we refer to the technology as augmented reality (AR)” (Caudell & Mizell 1992: 65). Emphasis was placed on the visual aspects of the display system, particularly on the apparatus’s “see thru” [sic] capabilities. This emphasis on the illusion of transparency pervaded early scientific discourse, becoming integral to a series of definitions that situated augmented reality within the binary of ‘real’ and ‘virtual’ (Azuma 1997; Milgram & Kishino 1994; Milgram & Colquhoun 1999). Augmented reality has been discussed predominantly in opposition to virtual reality systems - as a ‘see-through’ view of the real environment, as opposed to the closed view of virtual reality. Such distinctions prompted early definitions of AR, placing the technological practice on a continuum opposite virtual reality (Milgram & Kishino 1994). Category-based definitions of AR as a real-time or ‘live-view’ of the real environment further emphasise the transparent nature of augmented reality (Azuma 1997; Azuma et al. 2001). Although the aforementioned discussions do not explicitly equate display systems to a metaphorical window, the emphasis on head-mounted displays as primarily see-through points towards a discursive construction of screen-as-window and the establishment of a real-versus-virtual binary that fails to consider the complex layering process that occurs when playing a mobile ARG.

Looking through the screen

The aforementioned discussions of augmented reality focus predominantly on the hardware of head-mounted displays - displays that create a seemingly closed viewing environment by closely aligning images with the user’s field of vision. Mobile augmented reality applications and games pose a problem for developers: namely, constructing a viewing display using a small, personalised screen whose culturally constructed viewing regime is one of distraction (Hjorth & Richardson 2009). Because mobile AR maintains the same see-through qualities as the head-mounted display, while featuring the contemporary interface of the mobile screen as a framing device, the screen-as-window metaphor seems readily applicable to such AR practices. So much so that, in a series of online articles, Robert Manson (2011a; 2011b) utilises the metaphor of a ‘keyhole’ as a means of capturing both the size of the screen and the embodied viewing practices of the user. As a transparent, real-time, framed view of material space, conceptualising augmented reality as a window seems highly appropriate. However, tracing the genealogy of screens, and the media-specific viewing regimes associated with particular types of screen, reveals the window metaphor to be problematic for the analysis of mobile augmented reality games.

Prominent media theorist Lev Manovich (2001) constructs a genealogy of the screen, tracing the history of viewing regimes associated with a number of screens. Categorically situating screens within periods of technological development, Manovich begins his archaeology of the screen with what he terms the classical screen. Here, Manovich associates the framed, static image of the classical screen with Alberti’s metaphorical window - an approach also adopted by film scholar Anne Friedberg (2006). With the introduction of moving images, in the form of cinema and television, the screen shifts from ‘classical’ to ‘dynamic’ - creating a sense of virtual mobility for the audience, further emphasised by their stillness and fixed position relative to the screen. It is here that the metaphorical window begins to crack, constructing a binary of the real and the virtual, situating the viewer within a material space while virtually navigating a second space. The window becomes further complicated within Manovich’s genealogy: multiplied via personal computer interfaces, and shifting in temporality from the viewing of images of the past to a real-time screen focused on images of the present. While the essential qualities of dynamic and real-time screens are evident in mobile augmented reality game interfaces, both the screen (as a framing agent) and the player are highly mobile.

Moving beyond generalisations of the screen towards a more medium-specific analysis of mobile augmented reality games, the viewing regimes associated with the software of digital games and the platform of mobile devices become integral to conceptualising AR as more than a display system. Extending established archaeologies of the screen, Chris Chesher (2007) introduces the term “glaze” as a means of conceptualising the screen of console games. He derives the term glaze from three distinct characteristics: spectacular immersion, the glazing of the player’s eyes when absorbed in the game; interactive agency, the sense of “stickiness” created by narrative and game mechanics that hold the player to the game; and mimetic simulation, the ability of games to reflect the everyday, allowing players to recognise themselves in a familiar world. This glazing process occurs both through viewing regimes and through the player’s haptic engagement with the controller.

While Chesher’s (2007) concept of the glaze was constructed specifically for console games, it is still applicable to the viewing regimes of mobile ARG players. The mobile screen constructs a frame as a point of reference, encouraging players to focus on the spatial representation of material space as framed by the screen. Mobile ARGs often use mechanics and narratives that encourage a turning towards the screen, augmenting the role of the gaming device so that it becomes an in-game object, rather than merely the point in space where the game takes place. Lastly, via the representation and integration of everyday environments, mobile ARGs reflect, to a degree, a sense of familiarity. Often, narratives engage the player by positioning them as the saviour of material space, constructing opponents in the form of spatial invaders. The device then takes on the role of a weapon, and becomes the means of bridging the gap between the narrative of the game and the reality of the everyday.

Taking a phenomenological approach to the platform of mobile devices, Ingrid Richardson (2005; 2007; 2010) questions the application of the screen-as-window metaphor to the analysis of mobile devices. While Manovich claims that though a screen may be “dynamic, real-time, [or] interactive, a screen is still a screen” (Manovich 2001: 115), Richardson (2005) questions the front-to-front relation that is associated with the screen-as-window metaphor. The metaphorical window presumes an embodied relationship between screen and player that reduces the player to a set of eyes. Chesher’s (2007) concept of the glaze works well to extend this screen-body relationship through the haptic involvement of the player. Furthermore, the front-to-front relationship of screen and body that may be applicable to the analysis of cinema fails to consider the socially constructed viewing practices of mobile devices - namely, that mobile devices have inherently distracted viewing regimes (Hjorth & Richardson 2009).

The result of such viewing regimes is a paradoxical relationship between the player and the screen of augmented reality devices. The ARG developer aims to glaze the player to the screen, ensuring adhesion via a touchscreen interface. However, the challenge for developers is establishing this level of engagement on what has been culturally framed as a ‘casual’ platform, where the player (or user) views the device with a distracted glance. Furthermore, in the technological shift from head-mounted displays to personal mobile devices, developers are faced with the challenge of constructing a personalised frame in which to view overlays of computer-generated input. Emphasis on the screen fails to consider the importance of this visual overlay - an interface element that complicates the framing that takes place within the mobile device’s screen.

What occurs during moments of augmented game play is not an explicit separation of ‘virtual’ and ‘real’ spaces, but rather a complex layering of visual representations, a live camera feed, informational overlays, and a paradoxical viewing regime that fosters both a glazed adhesion to the point of reference and a distracted glance that focuses on the material space beyond the frame. Rather than situate the player at the boundary between digital and material, looking through the window into what can be deemed an augmented space, this paper adopts a metaphor used extensively by Nanna Verhoeff (2012a; 2012b; 2012c) - the layer. Conceptualising augmented reality as a ‘layer’ allows us to look beyond the screen, and beyond visual representations, towards the underlying invisible code of mobile augmented reality: the code utilised by developers to construct the game; the social codes that dictate movements within space; and the encoding process by which players co-create a unique experience, making sense of the game within these frameworks to create an augmented play experience. Focusing on a range of layers allows for a holistic analysis of the human-technological assemblage that is an ARG - de-emphasising but not entirely abandoning the importance of the screen. In doing so, the remainder of this paper moves beyond the screen, focusing on the interaction of code and material space in moments of augmented game play.

Beyond the screen

The theorisation of augmented reality as primarily a software- or hardware-based technological practice fails to acknowledge the broad technological shift whereby the boundary between informational and material spaces is becoming increasingly blurred. The metaphorical window presumes a distinct separation between what is commonly referred to as virtual space and a physical or material space. The continued application of this theoretical binary falls within the category of what Nathan Jurgenson (2009; 2011; 2012) terms “digital dualism”. The term was developed and expanded by Jurgenson across a number of online features in an attempt to deconstruct the fallacy that the digital (virtual) and the material (real) are inherently separate. Rather, ‘reality’ needs to be framed as a combination of organic and technological (Jurgenson 2011). Notable in Jurgenson’s (2012) argument is the shift away from thinking of augmented reality as a specific technological (software/hardware-based) process, and towards a conceptual framework arguing against digital dualist notions of a spatial and experiential binary. There have been similar arguments for a conceptual coupling of digital and material: O’Reilly and Pahlka’s (2009) extension of web 2.0 to ‘web squared’, Gordon and de Souza e Silva’s (2011) concept of ‘net locality’, and Manovich’s (2006) theorisation of space as augmented.

Manovich (2006: 220) considers the augmentation of space to be a cultural and aesthetic practice, rather than a purely technological one. Historically, informational and material spaces have always overlapped: images, painted shop signs, public displays, and so forth. With the introduction of technologies capable of overlaying dynamic forms of information, there is the possibility of extending the fallacy of digital dualism by suggesting an expansion of virtual spaces into the real. However, such an approach proves unhelpful for understanding the context in which mobile augmented reality games are played. Turning away from the space of the screen, and from notions of the real versus the virtual, Manovich’s (2006) term “augmented space” becomes helpful for understanding the spatial experience augmented reality game players may have.

Derived from the term augmented reality, augmented space is, in essence, the physical or material space overlaid with dynamically changing information (Manovich 2006: 220). Technologies such as mobile phones or gaming devices, surveillance technology such as security cameras, and architecturally-embedded electronic displays such as public screens transform physical space into a dataspace. Dataspace is the term used to describe the possibility that any point within material space may potentially contain information that is being either delivered or extracted from elsewhere. Manovich argues that various monitoring or augmentation technologies add new dimensions to the three-dimensional physical space - making it multi-dimensional. He calls this multi-dimensional, data-dense space, “augmented space” (2006: 226). He stresses, however, that this issue is not just technological but also conceptual. The layering of a dynamic dataspace over physical space is a type of general aesthetic paradigm - the combining of two spaces. This aesthetic paradigm is not unique to augmented reality. Architects and fresco painters, for instance, need to consider not only the practical means of extending beyond physical space, but also the conceptual means, taking into consideration how spectators or inhabitants may interact with multiple layers of space simultaneously.

Moreover, this space is highly cultural. Data-dense spaces are constructed via a number of data-driven technologies that not only send data to multiple points within material space, but also extract data from them. As Manovich (2006) rightly argues, augmented space is more than just material space overlaid with dynamic data; it is also a monitored space. Surveillance technologies - cameras, personal tracking devices, GPS, and so forth - not only send data to spaces, but extract data too. Such processes shape the way augmented reality game players may engage with space, adding social layers to the complex mesh of material and informational. The embodied relationship between player and screen becomes a social cue within public spaces, surveilled by both technologies and the public. Utilising a camera-like device (the smartphone), with a posture and gestures similar to those of mobile phone use, may lead to problematic, unintentional invasions of privacy. While space may be considered inherently ‘data-dense’, the process by which that data is collected and distributed is subject to multiple social codes, often invisible, or at least opaque, to the player.

The augmentation of space is not just about connecting data to architectural structures; it is also about the process of receiving and extracting that data. Information remains attached to places until it is gathered by a user and processed. While it is possible to argue that media devices retrieve information, Patrick Allen (2008) believes it is the user’s body that frames the augmentation of space. For Allen, the framing of information becomes a question of the body and its location in space (2008: 36). Within augmented spaces, the body acts as an interface: on a sensory level, in terms of receiving information through the senses, and on an environmental level, in terms of receiving information from technological artifacts such as mobile phones or personal stereos. The body plays an important role in conceptualising the navigation of augmented space. Allen argues that the notion of navigation is “predicated on the assumption that the body exists, or is always located, within real space” (2008: 37). The body inhabits a certain space within the environment; it is an element within that environment and is able to navigate through spaces. Mobile games, particularly those with embedded pervasive elements, require the player to fully utilise their body. Moreover, the use of location-sensitive technology takes the physical body of the player and transforms their position in space into data. Not only do spaces become augmented; so too does the body of the player. This means that players simultaneously navigate through physical and informational space during gameplay.

In this sense, space can no longer be considered a binary of virtual and real. Mobile augmented reality games are played in the real environment, within physical space - a space that, as Manovich (2006) argues, is layered with dynamic data. The player engages with their AR device - a spatial node situated within both information networks and physical space - extracting and projecting data. Together, the device and the player act as navigational agents - a type of mediated flâneur - subverting the urban environment via ludic activities, while remaining bound to a series of ‘codes’ that dictate performance.

Via a distinct layering process, mobile augmented reality games combine problematic notions of ‘reality’ - constructing binaries between digital and material - with the ‘reality’ of a game world. What emerges, then, is perhaps best theorised not as an augmentation of reality, but as multiple subjective realities that are decoded by the player during the performance of play. Mark Graham et al. (2013: 4) use the term augmented realities to refer to the constant enactment and remaking of subjective realities; realities that are subject to a four-part typology of power relations. The first two of these - distributed power, the distribution of user-generated content; and communication power, the power to create, interpret, recirculate and repackage content - reference the social production of augmented realities. Here, the process of creating maps and location-sensitive information is emphasised, not only in terms of original cartographic practices, but also in terms of the ability designers, users and players may have to remake or recontextualise geospatial information. The final two dimensions - code power, the opaque sorting of information; and timeless power, the flattening of time - emphasise the unseen workings of code. For mobile augmented reality games, this process involves acknowledging the developer’s construction of a temporal compression, syncing in-game temporality to an in-the-moment experience of everyday reality.

Examining the power relations of code leads to the question of authorship in mobile ARGs. The games are coded by programmers; are subject to social codes within urban spaces; utilise encoded geospatial forms of data; and are decoded via the player’s constant performance of game play and, moreover, their performance of space. Here, Henry Jenkins’ (2006) discussion of game designers as narrative architects is particularly useful. Rather than considering game space as a series of geometrical or mathematical inputs - that is, the result of a coded environment - Jenkins sees designers as narrative architects, and game space as a constructed environment to be enacted upon by the player. Re-examining the spatial possibilities of mobile augmented reality games reveals a multi-layered space that encompasses physical space as an augmented space, as well as the spatial possibilities of the screen.

Playing with layers

To understand the player, and his or her role in encoding material spaces to become augmented game spaces, it is necessary to consider the holistic game space that is constructed during play. Michael Nitsche’s (2008) study of the spatial qualities of videogames provides a good framework for conceptualising augmented reality games. He believes that treating videogames within the broad conceptual framework of ‘virtual spaces’ can be a misleading simplification. There are fundamental differences between the space of written text, a cinema space and interactive worlds - all of which can be blanketed under virtual space. It is for this reason that he develops a method for distinguishing between various spaces - not just digital, but also material - through five conceptual planes for the analysis of videogame space (Nitsche 2008: 15). These are: rule-based space, mediated space, fictional space, play space and social space.

These planes are not separate, but overlap and interconnect. Rule-based space is defined by the mathematical set of rules; it is defined by the code, the data, or hardware restrictions. Rule-based space is the basis for mediated space, which is defined by the presentation of the game world. The player, confronted with this audio-visual world, then constructs a world from the provided information - the fictional space. Based on the player’s comprehension of the images and their engagement with the fictional world, players decide on actions that affect the game space. Play space is the physical space inhabited by the player; this space includes not only the player but also videogame consoles and, at times, other people. The way the game can affect other people is part of the game’s social space. These spaces are emergent; they are created through the playing of games. Rules come into practice only when engaged with, and the images have no meaning without the process of imagination. Although Nitsche’s five conceptual planes were developed to analyse 3D videogame spaces, they can be applied to the conceptualisation of mobile augmented reality games. AR blends these planes together: the rules of the game can be challenged by the rules of the everyday, and the architecture of physical space may or may not align with the architecture and mathematical design structures of the game.

The relationship between screen and device means that rule-based space and mediated space constantly overlap; the see-through capabilities of augmented reality mean that play space becomes mediated and rule-based, as well as fictional, due to the designer’s integration of a layer of narrative elements over the real environment, and the player’s acceptance and performance of these narratives. Moreover, the screen/device assemblage becomes social, via data networks that connect the player to others not physically present, and via the body-technology relationship that signifies to non-players that an act of mediated play, or at least a technological mediation of some sort, is taking place. Nitsche (2008) maintains that these spaces, while overlapping, remain analytically distinct. However, it is difficult to apply the same categorical clarity to mobile augmented reality games. What is evident is that these spatial planes work together to create a holistic understanding of how players experience videogame space as a multi-dimensional space, rather than as something mathematically constructed by the designer.

However, as Verhoeff (2012a: 163) rightly puts forth, augmented reality is not a sum of layers, but a whole new dimension to the experience of space. Exploring the notion of spatial planes further, Verhoeff (2012b; 2012c) dissects the layered interface of smartphone devices. Informational spaces and material spaces intersect in what Verhoeff (2012b: 119) terms a “hybrid screenspace” - a new form of mapping in which images are produced while the user simultaneously navigates. This process, argues Verhoeff, transforms the viewer into a user (or player) - they become performers. Verhoeff (2012b; 2012c) further dissects the layered nature of navigational interfaces in augmented reality browsers, constructing three conceptual, non-hierarchical levels. The first layer of the interface, the internal interfacing of applications, concerns the back-end system and software - for example, the in-built GPS and mapping functions (Google Maps or Apple Maps) utilised by AR developers as sources of geospatial information. The second layer concerns the positioning and connectivity of the device in relation to the data space in which it is being utilised; Verhoeff (2012c) terms this an inertial navigational system, used to calculate the player’s position within physical space. The third layer is the level of user interaction. Here the first two layers are enacted: the player engages with the internal operations of the device, and extracts and decodes location-aware data from the surrounding augmented space.

Verhoeff (2012b) thus expands ‘screenspace’ - conceptualised not as a window, nor merely as a layer within a multi-dimensional game space, but as a multi-layered space in itself. Similarities arise between Verhoeff’s levels of interface and the aforementioned power relations of code (Graham et al. 2013). What emerges is a conceptualisation of mobile augmented reality games as a process of co-creation (Verhoeff 2012c), whereby the active practice of navigating - of encoding data extracted from augmented spaces - becomes an integral part of creating the game play experience. There are explicit power relations at play in augmented reality games; code is constructed by those with the means to program, and by those with the means to create archival databases (Google and Apple Maps, for instance). The game designer, too, engages with such information, while maintaining an open role in the process - more a narrative architect, or perhaps a facilitator of code. Meanwhile, the player is ascribed the role of decoder: the player performs, navigates, and co-creates.

Conclusion

Given the active role the player has in co-creating mobile augmented reality games, it becomes difficult to apply the screen-as-window metaphor, and the spatial dualisms inherent in this trope. The window metaphor situates the player at the threshold that separates the real from the virtual - a passive onlooker, virtually exploring new spaces, reduced to a set of eyes glazed to representations of informational space. The complex layers of mobile augmented reality games reveal the player to be an active agent in the performance of play. The player decodes forms of information - whether visual representations of space, coded social protocols, or invisible currents of data that are constantly extracted and received across multiple points within physical space. Examining augmented reality game space as multi-layered helps us to understand the human-technological assemblage that is the augmented reality game. By de-emphasising the role of the screen, situating it within a broader framework of layers, and further examining the screen/interface itself as a series of layers, this paper argues for an understanding of augmented reality not as a bleeding of the virtual into the real, but as a constant engagement between informational and material spaces.

Works Cited

Allen, P. (2008) “Framing, Locality and the Body in Augmented Public Space” in Augmented Urban Spaces: Articulating the Physical and Electronic City (Aurigi, A. & Cindio, F.D., eds), Aldershot: Ashgate.

Azuma, R. (1997) “A Survey of Augmented Reality” in Presence: Teleoperators and Virtual Environments 6: 4 pp. 355-385.

Azuma, R. et al. (2001) “Recent Advances in Augmented Reality” in IEEE Computer Graphics and Applications 21: 6 pp. 34-47.

Caudell, T.P. & Mizell, D.W. (1992) “Augmented Reality: An Application of Heads-up Display Technology to Manual Manufacturing Processes” in Proceedings of the Twenty-Fifth Hawaii International Conference on System Sciences, Vol. 2. pp. 659-669.

Chesher, C. (2007) “Neither Gaze nor Glance, but Glaze: Relating to Console Game Screens” in Scan Journal 4: 2 /scan/journal/print.php?j_id=11&journal_id=19

Friedberg, A. (2006) The Virtual Window: from Alberti to Microsoft, Cambridge: MIT Press.

Gordon, E. & de Souza e Silva, A. (2011) Net Locality: Why Location Matters in a Networked World, London: John Wiley & Sons.

Graham, M., Zook, M. & Boulton, A. (2013) “Augmented Reality in Urban Places: Contested Content and the Duplicity of Code” in Transactions of the Institute of British Geographers 38: 3 pp. 464-479.

Hjorth, L. & Richardson, I. (2009) “The Waiting Game: Complicating Notions of (Tele)Presence and Gendered Distraction in Casual Mobile Gaming” in Australian Journal of Communication 36: 1 pp. 23-35.

Jenkins, H. (2006) “Game Design as Narrative Architecture” in The Game Design Reader: A Rules of Play Anthology (Salen, K. & Zimmerman, E., eds), Cambridge: MIT Press.

Jurgenson, N. (2009) “Towards Theorizing an Augmented Reality” Sociology Lens, viewed 4 June 2013, http://thesocietypages.org/sociologylens/2009/10/05/towards-theorizing-an-augmented-reality/

Jurgenson, N. (2011) “Digital Dualism versus Augmented Reality”, Cyborgology, viewed 4 June 2013, http://thesocietypages.org/cyborgology/2011/02/24/digital-dualism-versus-augmented-reality/

Jurgenson, N. (2012) “When Atoms Meet Bits: Social Media, the Mobile Web and Augmented Revolution” in Future Internet 4: 4 pp. 83-91.

Kato, H. & Billinghurst, M. (1999) “Marker Tracking and HMD Calibration for a Video-Based Augmented Reality Conferencing System” in 2nd IEEE and ACM International Workshop on Augmented Reality. pp. 85-94.

Manovich, L. (2001) The Language of New Media, Cambridge: MIT Press.

Manovich, L. (2006) “The Poetics of Augmented Space” in Visual Communication 5: 2 pp. 219-240.

Manson, R. (2011a) “An Exploration of User Experience for Augmented Reality - AR UX”, viewed 7 October 2012, http://ar-ux.com/an-exploration-of-user-experience-for-augment

Manson, R. (2011b) “The 4 Key User Experience Modes of Augmented Reality - AR UX”, viewed 7 October 2012, http://ar-ux.com/the-4-key-user-experience-modes-of-augmented

Milgram, P. & Colquhoun, H.J. (1999) “A Taxonomy of Real and Virtual World Display Integration” in International Symposium on Mixed Reality.

Milgram, P. & Kishino, F. (1994) “A Taxonomy of Mixed Reality Visual Displays” in IEICE Transactions on Information and Systems, E77-D(12).

Nitsche, M. (2008) Video Game Spaces: Image, Play, and Structure in 3D Game Worlds, Cambridge: MIT Press.

O’Reilly, T. & Pahlka, J. (2009) “The ‘Web Squared’ Era” in Forbes, viewed 4 June 2013, http://www.forbes.com/2009/09/23/web-squared-oreilly-technology-breakthroughs-web2point0.html

Richardson, I. (2005) “Mobile Technosoma: Some Phenomenological Reflections on Itinerant Media Devices” in The Fibreculture Journal 6: http://six.fibreculturejournal.org/fcj-032-mobile-technosoma-some-phenomenological-reflections-on-itinerant-media-devices/

Richardson, I. (2007) “Pocket Technospaces: the Bodily Incorporation of Mobile Media” in Continuum: Journal of Media & Cultural Studies 21: 2.

Richardson, I. (2010) “Faces, Interfaces, Screens: Relational Ontologies of Framing, Attention and Distraction” in Transformations 18: http://www.transformationsjournal.org/journal/issue_18/article_05.shtml

Sutherland, I.E. (1968) “A Head-Mounted Three Dimensional Display” in Proceedings of the December 9-11, 1968, Fall Joint Computer Conference, Part I, New York: ACM. pp. 757-764.

Verhoeff, N. (2012a) Mobile Screens: The Visual Regime of Navigation, Amsterdam: Amsterdam University Press.

Verhoeff, N. (2012b) “A Logic of Layers: Indexicality of iPhone Navigation in Augmented Reality” in Studying Mobile Media: Cultural Technologies, Mobile Communication, and the iPhone (Hjorth, L. & Burgess, J., eds), New York: Routledge.

Verhoeff, N. (2012c) “The Medium is the Method” in (Dis)Orienting Media and Narrative Mazes (Eckel, J. et al., eds), Bielefeld: Transcript.

Biographical Note

Researching the spatial experiences of mobile augmented reality game players, Kyle Moore has recently completed his Masters by Research thesis at the University of New South Wales. Currently, Kyle is working as a sessional tutor at Macquarie University and the University of New South Wales.