Learn Piano through AR

I like this:

The Projected Instrument Augmentation system (PIANO) was developed by pianists Katja Rogers and Amrei Röhlig and their colleagues at the University of Ulm in Germany. A screen attached to an electric piano has colourful blocks projected onto it that represent the notes. As the blocks of colour stream down the screen they meet the correct keyboard key at the exact moment that each one should be played.

Florian Schaub, who presented the system last month at the UbiComp conference in Zurich, Switzerland, said that users were impressed by how quickly they could play relatively well, which is hardly surprising given how easily we adapt to most screen interfaces these days.

But while there is real potential for PIANO as a self-guided teaching aid, in my view it is the possibility of a really tight feedback loop that makes this most interesting, and potentially more widely applicable.

When a piano teacher corrects a student’s mistake, they might single out one or two things to improve. This approach, by contrast, would sense every incorrect note and could provide an immediate visual response, a red flash for instance, conditioning the student towards success more quickly.

via New Scientist.
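
The note-by-note feedback loop described above is easy to picture in code. The sketch below is purely illustrative and not the PIANO system itself: it assumes note events arrive as MIDI pitch numbers, and the score, the `flash_green`/`flash_red` display calls and the sample input are all my own hypothetical placeholders.

```python
# Minimal sketch of a note-by-note feedback loop (illustrative only).
# Assumes played notes arrive as MIDI pitch numbers; flash_green/flash_red
# stand in for whatever visual cue the display would actually show.

EXPECTED_SCORE = [60, 62, 64, 65, 67]  # MIDI pitches for C, D, E, F, G

def flash_green(pitch):
    print(f"correct: {pitch}")

def flash_red(pitch, expected):
    print(f"wrong: played {pitch}, expected {expected}")

def feedback_loop(played_notes):
    """Compare each played note with the score and give instant visual feedback."""
    for expected, played in zip(EXPECTED_SCORE, played_notes):
        if played == expected:
            flash_green(played)
        else:
            flash_red(played, expected)

feedback_loop([60, 62, 63, 65, 67])  # one wrong note (63 instead of 64)
```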

Bibliography

So that’s it, my series is over. All that’s left to do now is credit the academic sources that influenced and aided in the construction of my argument. Thanks to everyone below, and thanks to you, dear reader, for coming along for the ride.

References:

Baudrillard, Jean (1983). Simulations. New York: Semiotext(e).

Baudrillard, Jean (1988). Selected Writings, ed. Mark Poster. Cambridge: Polity Press.

Baumann, Jim (date unknown). ‘Military applications of virtual reality’ on the World Wide Web. Accessed 20th March 2007. Available at http://www.hitl.washington.edu/scivw/EVE/II.G.Military.html

Benjamin, Walter (1968). ‘The Work of Art in the Age of Mechanical Reproduction’, in Walter Benjamin Illuminations (trans. Harry Zohn), pp. 217–51. New York: Schocken Books.

Bolter, J. D., MacIntyre, B., Gandy, M. & Schweitzer, P. (2006). ‘New Media and the Permanent Crisis of Aura’ in Convergence: The International Journal of Research into New Media Technologies, Vol. 12 (1): 21–39.

Botella, C. M., Juan, M. C., Baños, R. M., Alcañiz, M., Guillén, V. & Rey, B. (2005). ‘Mixing Realities? An Application of Augmented Reality for the Treatment of Cockroach Phobia’ in CyberPsychology & Behavior, Vol. 8 (2): 162–171.

Clark, Nigel (1995). ‘The Recursive Generation of the Cyberbody’ in Featherstone, M. & Burrows, R. (eds.) Cyberspace/Cyberbodies/Cyberpunk: Cultures of Technological Embodiment. London: Sage.

Featherstone, Mike. & Burrows, Roger eds. (1995). Cyberspace/ Cyberbodies/ Cyberpunk: Cultures of Technological Embodiment. London: Sage.

Future Image (author unknown) (2006). ‘The 6Sight® Mobile Imaging Report’ on the World Wide Web. Accessed 22nd March 2007. Available at http://www.wirelessimaging.info/

Genosko, Gary (1999). McLuhan and Baudrillard: The Masters of Implosion. London: Routledge.

Kline, Stephen, Dyer-Witheford, Nick & de Peuter, Greig (2003). Digital Play: The Interaction of Technology, Culture, and Marketing. Montreal & Kingston: McGill-Queen’s University Press.

Levinson, Paul (1999). Digital McLuhan: A Guide to the Information Millennium. London: Routledge.

Liarokapis, Fotis (2006). ‘An Exploration from Virtual to Augmented Reality Gaming’ in Simulation & Gaming, Vol. 37 (4): 507–533.

Manovich, Lev (2006). ‘The Poetics of Augmented Space’ in Visual Communication, Vol. 5 (2): 219-240.

McLuhan, Marshall (1962). The Gutenberg Galaxy: The Making of Typographic Man. Toronto: University of Toronto Press.

McLuhan, Marshall (1964). Understanding Media: The Extensions of Man. New York: McGraw-Hill.

McLuhan, Marshall & Powers, Bruce R. (1989). The Global Village: Transformations in World Life and Media in the 21st Century. New York: Oxford University Press.

Milgram, Paul & Kishino, Fumio (1994). ‘A Taxonomy of Mixed Reality Visual Displays’ in IEICE Transactions on Information Systems, Vol. E77-D, No. 12, December 1994.

Reitmayr, Gerhard & Schmalstieg, Dieter (2001). Mobile Collaborative Augmented Reality. Proceedings of the IEEE 2001 International Symposium on Augmented Reality, 114–123.

Roberts, G., A. Evans, A. Dodson, B. Denby, S. Cooper, R. Hollands (2002) ‘Application Challenge: Look Beneath the Surface with Augmented Reality’ in GPS World, (UK, Feb. 2002): 14-20.

Stokes, Jon (2003). ‘Understanding Moore’s Law’ on the World Wide Web. Accessed 21st March 2007. Available at http://arstechnica.com/articles/paedia/cpu/moore.ars

Straubhaar, Joseph D. & LaRose, Robert (2005). Media Now: Understanding Media, Culture, and Technology. Belmont, CA: Wadsworth.

Thomas, B., Close, B., Donoghue, J., Squires, J., De Bondi, P., Morris, M. & Piekarski, W. (2000). ‘ARQuake: An Outdoor/Indoor Augmented Reality First-Person Application’ in Proceedings of the Fourth International Symposium on Wearable Computers (Atlanta, GA, Oct. 2000): 139–141.

Wagner, D., Pintaric, T., Ledermann, F., & Schmalstieg, D. (2005). ‘Towards massively multi-user augmented reality on handheld devices’. In Proc. 3rd Int’l Conference on Pervasive Computing, Munich, Germany.

Weiser, M. (1991) ‘The Computer for the Twenty-First Century’ in Scientific American 265(3), September: 94–104.

Williams, Raymond (1992). Television: Technology and Cultural Form. Hanover and London: University Press of New England and Wesleyan University Press.

Further Reading:

Bolter, Jay D. & Grusin, Richard (1999). Remediation: Understanding New Media. Cambridge, MA: MIT Press.

Cavell, Richard (2002). McLuhan in Space: a Cultural Geography. Toronto: University of Toronto Press.

Galloway, Alexander R. (2006). Gaming: Essays on Algorithmic Culture. Minneapolis: University of Minnesota Press.

Horrocks, Christopher (2000). Marshall McLuhan & Virtuality. Cambridge: Icon Books.

Jennings, Pamela (2001). ‘The Poetics of Engagement’ in Convergence: The International Journal of Research into New Media Technologies, Vol. 7 (2): 103-111.

Lauria, Rita (2001). ‘In Love with our Technology: Virtual Reality: A Brief Intellectual History of the Idea of Virtuality and the Emergence of a Media Environment’ in Convergence: The International Journal of Research into New Media Technologies, Vol. 7 (4): 30–51.

Lonsway, Brian (2002). ‘Testing the Space of the Virtual’ in Convergence: The International Journal of Research into New Media Technologies, Vol. 8 (3): 61-77.

Moos, Michel A. (1997). Marshall McLuhan Essays: Media Research: Technology, Art, Communication. London: Overseas Publishers Association.

Pacey, Arnold (1983). The Culture of Technology. Oxford: Basil Blackwell.

Salen, Katie & Zimmerman, Eric (2004). Rules of Play: Game Design Fundamentals. Cambridge, MA: MIT Press.

Sassower, Raphael (1995). Cultural Collisions: Postmodern Technoscience. London: Routledge.

Wood, John ed. (1998). The Virtual Embodied: Presence/Practice/Technology. London: Routledge.

Applying Baudrillard

For Jean Baudrillard (1983), “at any moment in the course of our modernity, a particular arrangement of signifying objects and images conditions the way we see the world” (Clark, 1995). “Each major transformation is accompanied by a feeling of disorientation and discomfort over the loss of the previous ‘reality’. This effects a recourse into the imagined certainties of the receding order to ground or stabilise that which is new. In this way, ‘reality loops around itself’, as ‘each phase of value integrates into its own apparatus the anterior apparatus as a phantom reference, a puppet or simulation reference’” (Baudrillard, 1988: 145, 121; cited in Clark, 1995). In these words we see that Baudrillard’s perspective applies neatly to my analysis of Mobile AR. Taking up where McLuhan left us, with a view of the Magic Lens constrained by its deterministic overtones, Baudrillard injects the much-needed element of an actively social construction of Mixed Reality, whilst grounding my work in his Postmodern thought on Virtuality.

I am interested in the view that iterations of reality, whilst overlapping and viewable through the Magic Lens, support and influence each other’s existence within a wider structure. I could live wholly in The Virtual, and bring to it conceptions of the reality from whence I came. We see similar behaviour in virtual worlds and simulation games such as Second Life (Linden Lab, 2003) or The Sims (Maxis, 2000), whereby developers program known physical-world causalities, behaviours and actions despite the near-limitless formal opportunities offered by the medium. Users, when given freedom, will likely bring their own conceits and personal experiences to these alternate realities, thereby forgoing what else might be possible in favour of their own culturally-inherited drives and ambitions. The Magic Lens presents a wholly new canvas for the social construction of reality. The collaborative and democratic Mobile 2.0 ethos that Nokia hope to breathe into Mobile AR could falter if users bring too much of our present iteration of reality to it. The Magic Lens offers an opportunity to reshape The Real, not solely through tagging buildings or leaving messages floating in mid-air, but through the lessons we might learn through engaging with each other in a new way.

Baudrillard focused his work on how we interface with information, and how we build it into our view of reality. He posited that The Media had hijacked reality, becoming a powerful force in the construction of hyper-reality: a social reality that has grown beyond our control. Through the Magic Lens, we might give form to some aspects of hyper-reality. The medium allows virtual elements to co-exist with real objects, occupying space in the user’s own hyper-reality. In this way, each user can choose which hyper-reality they want to exist in, whether it is one in which 3D AR avatars walk the streets and go about their virtual lives, or one where arrows and directions graphically point out where to go to fulfil a shopping list’s requirements. The Magic Lens marks a shift from mass-media control to a personalised, user-focused, context-based reality: Reality 2.0, if you will.

Assuming AR does present a new layer to reality, there are certain Baudrillardian imperatives that we will bring to this landscape. One such imperative links the physical properties of real-world space (gravity, mass, optics) to our new environment. To make sense of virtual elements in their context we will employ what we already know about the environment we are in. This means that the most prized virtual objects will exhibit expected behaviour and intuitive interactivity, and will be visually suited to their surroundings. Similarly, an object’s location in space alters its perceived importance. I would argue that should a common Mixed Reality exist, governing bodies would write entire protocols for the positioning and size of virtual objects, so that one contributor could not take up more than their share. It is important to consider that even in writing hypothetically I am bringing Baudrillardian imperatives to bear, applying democracy to a non-existent world! Baudrillard’s “reality loops around itself” has a troublesome effect on my analysis. Let me instead take a fresh perspective; my next section is written from the perspective of Walter Benjamin…

Applying McLuhan

I begin with McLuhan, whose Laws of Media, or tetrad, offers rich insights for Mobile AR. It sustains and develops the arguments made in my assessment of the interlinking technologies that meet in Mobile AR, whilst also providing a basis from which to address some of McLuhan’s deeper thought.

The tetrad can be considered an observational lens to turn upon one’s subject technology. It assumes that four processes take place during each iteration of a given medium. These processes are revealed as answers to the following questions, taken from Levinson (1999):

“What aspect of society or human life does it enhance or amplify? What aspect, in favour or high prominence before the arrival of the medium in question, does it eclipse or obsolesce? What does the medium retrieve or pull back into centre stage from the shadows of obsolescence? And what does the medium reverse or flip into when it has run its course or been developed to its fullest potential?”

(Digital McLuhan, 1999: 189).

To ask each of these questions, it is useful to transfigure our concept of Mobile AR into a more workable and fluid term: the Magic Lens, a common expression in Mixed Reality research. Making this change allows the exploration of the more theoretical aspects of the technology free of its machinic nature, whilst integrating a necessary element of metaphor that will serve to illustrate my points.

To begin, what does the Magic Lens amplify? AR requires the recognition of a pre-programmed real-world image in order to augment the environment correctly. It is important to note that it is the user who locates this target. It could be said that the Magic Lens magnifies rather than amplifies an aspect of the user’s environment, because, like other optical tools, the user must point the device towards a target and look through it. The difference with this Magic Lens is that one aspect of its target, one potential meaning, is privileged over all others. An arbitrary black and white marker holds the potential to mean many things to many people, but viewed through an amplifying Magic Lens it means only what the program recognises and consequently superimposes.
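
To make that recognition step concrete: marker-based AR of this kind detects a known black-and-white fiducial in the camera frame and then superimposes graphics at its estimated position. The snippet below is a rough sketch only, using OpenCV’s ArUco module as a stand-in (an assumption on my part; the systems discussed here use their own trackers), and "frame.jpg" is a placeholder for a camera frame.

```python
# Rough sketch of fiducial-marker recognition, the step that lets a Magic Lens
# privilege one meaning of a black-and-white target. Uses OpenCV's ArUco
# module (OpenCV >= 4.7); real Mobile AR systems use their own trackers.
import cv2

frame = cv2.imread("frame.jpg")            # placeholder camera frame
gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)

dictionary = cv2.aruco.getPredefinedDictionary(cv2.aruco.DICT_4X4_50)
detector = cv2.aruco.ArucoDetector(dictionary, cv2.aruco.DetectorParameters())

corners, ids, _ = detector.detectMarkers(gray)
if ids is not None:
    # The marker id is the single "amplified" meaning: the program now knows
    # exactly what to superimpose and where the target's corners lie.
    for marker_id, quad in zip(ids.flatten(), corners):
        print(f"marker {marker_id} found at {quad.reshape(-1, 2).tolist()}")
else:
    print("no marker recognised; nothing to augment")
```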

This superimposition necessarily obscures what lies beneath. McLuhan might recognise this as an example of obsolescence. The Magic Lens privileges virtual over real imagery, and the act of augmentation leaves physical space somewhat redundant: augmenting one’s space makes it more virtual than real. The AR target undergoes amplification, becoming the necessary foundation of the augmented reality. What is obsolesced by the Magic Lens, then, is not the target which it obscures, but everything except the target.

I am reminded of McLuhan’s Understanding Media: The Extensions of Man (1964: 13), which offers the view that in extending ourselves through our tools, we auto-amputate the aspect we seek to extend. There is a striking parallel to be drawn with amplification and obsolescence, which becomes clear when we consider that in amplifying an aspect of physical reality through a tool, we are extending sight, sound and voice through the Magic Lens to communicate in wholly new ways, using The Virtual as a conduit. This act obsolesces physical reality, the nullification effectively auto-amputating the user from their footing in The Real. So where have they ‘travelled’? The Magic Lens is a window into another reality, a mixed reality where real and virtual share space. In this age of Mixed Realities, the tetrad can reveal more than previously intended: new dimensions of human interaction.

The third question in the tetrad asks what the Magic Lens retrieves that was once lost. So much new ground is gained by this technology that it would be difficult to make a claim. However, I would not believe in Mobile AR’s success if I did not recognise the exhumed, as well as the novel, benefits that it offers. The Magic Lens retrieves the everyday tactility and physicality of information engagement, that which was obsolesced by other screen media such as television, the desktop PC and the games console. The Magic Lens encourages users to interact in physicality, not virtuality. The act of actually walking somewhere to find something out, or going to see someone in order to play with them, is retrieved. Moreover, we retrieve the sense of control over our media input that was lost to these same technologies. Information is freed into the physical world, transfiguring its meaning and offering a greater degree of manipulative power. Mixed Reality can be seen only through the one-way glass of the Magic Lens; The Virtual cannot spill through unless we allow it to. We have seen that certain mainstream media can wholly fold themselves into reality and become an annoyance (think Internet pop-ups and mobile ringtones); through the Magic Lens we retrieve the personal agency to navigate our own experience. I earlier noted that “the closer we can bring artefacts from The Virtual to The Real, the more applicable these can be in our everyday lives”, a position that resonates with my growing argument that engaging with digital information through the Magic Lens is an appropriate way to integrate, and indeed exploit, The Virtual as a platform for the provision of communication, leisure and information applications.

It is hard to approximate what the Magic Lens might flip into, since at this point AR is a wave that has not yet crested. I might suggest that since the medium’s success is tied to its mobile-device form, its trajectory is likely entwined with that medium: the Magic Lens flips into whatever the mobile multimedia computer flips into. Another possibility is that the Magic Lens inspires such commercial success and industrial investment that a surge in demand for Wearable Computers shifts AR into a new form. In that case, the user could not dip in and out of Mixed Reality as they see fit; they would be immersed in it whenever they wore their visor. This has connotations all of its own, but I will not expound my own views given that much cultural change must first occur to implement such a drastic shift in consumer fashions and demands. A third way for the Magic Lens to ‘flip’ might be its wider application in other media. Developments in digital ink technologies, printable folding screens, ‘cloud’ computing, interactive projector displays, multi-input touch screen devices, automotive glass and electronic product packaging could all take advantage of the AR treatment. We could end up living far more closely with The Virtual than previously possible.

In their work The Global Village, McLuhan and Powers (1989) state that:

“The tetrad performs the function of myth in that it compresses past, present, and future into one through the power of simultaneity. The tetrad illuminates the borderline between acoustic and visual space as an arena of the spiralling repetition and replay, both of input and feedback, interlace and interface in the area of imploded circle of rebirth and metamorphosis”

(The Global Village 1989: 9)

I would be interested to hear their views on the unique “simultaneity” offered by the Magic Lens, or indeed the “metamorphosis” it would inspire, but I would argue that when applied from a Mixed Reality, inter-media perspective, their outlook seems constrained to the stringent and self-involved rules of their own epistemology. Though he would be loath to admit it, Baudrillard took on McLuhan’s work as the basis of his own (Genosko, 1999; Kellner, date unknown), and made it relevant to the postmodern era. His work is cited by many academics seeking to forge a relationship to Virtual Reality in their research…

Mobile Telephone

The Internet and the mobile phone are two mighty forces that have bent contemporary culture and remade it in their form. They offer immediacy, connectivity, and social interaction of a wholly different kind. These are technologies that have brought profound changes to the ways academia considers technoscience and digital communication. Their relationship was of interest to academics in the early 1990s, who declared that their inevitable fusion would be the beginning of the age of Ubiquitous Computing: “the shift away from computing which centered on desktop machines towards smaller multiple devices distributed throughout the space” (Weiser, 1991 in Manovich, 2006). In truth, it was the microprocessor and Moore’s Law, “the number of transistors that can be fit onto a square inch of silicon doubles every 12 months” (Stokes, 2003), that led to many of the technologies that fall under this term: laptops, PDAs, digital cameras, flash memory sticks and MP3 players. Only recently have we seen mobile telephony take on the true properties of the Internet.
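
Taken at face value, the doubling rule Stokes quotes compounds very quickly. The toy calculation below is an illustration only, using an arbitrary starting density, to show the growth factor after a few years.

```python
# Toy illustration of the doubling rule quoted above: if transistor density
# doubles every 12 months, it has grown by a factor of 2**n after n years.
# The starting density of 1.0 is arbitrary and purely illustrative.
base_density = 1.0
for years in (1, 5, 10):
    factor = base_density * 2 ** years
    print(f"after {years} year(s): {factor:g}x the original density")
```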

The HARVEE project is partially backed by Nokia Corp., which recognises its potential as a Mobile 2.0 technology: user-generated content for mobile telephony that exploits web connectivity. Mobile 2.0 is an emerging technology thematically aligned with the better-established Web 2.0. Nokia already refer to their higher-end devices as multimedia computers rather than as mobile phones. Their next-generation Smartphones will make heavy use of camera-handling systems, a strategy predicated on the importance of user-generated content as a means of promoting social interaction. This strategic move is likely to realign Nokia Corp.’s position in the mobile telephony and entertainment markets.

Last year, more camera phones were sold than digital cameras (Future Image, 2006). Nokia have a 12-megapixel camera phone ready for release in 2009, and it will be packaged with a processing unit equal in power to a Sony PSP (Nokia Finland: non-public product specification document). MP3 and movie players are now standard on many handsets, with content stored on plug-in memory cards and viewed through increasingly high-resolution colour screens. There is a growing mobile gaming market, the fastest-growing sector of the Games Industry (Entertainment & Leisure Software Publishers Association (ELSPA) sales chart). The modern mobile phone receives its information over wide-band GPRS networks, allowing greater network coverage and faster data transfer. Phone calls remain the primary function, but users are exploiting the multimedia capabilities of their devices in ways not previously considered. It is these factors, technological, economic and infrastructural, that provide the perfect arena for Mobile AR’s entry into play.

Mobile Internet is the natural convergence of mobile telephony and the World Wide Web, and is already a common feature of new mobile devices. Mobile Internet, I would argue, is another path leading to Mobile AR, driven by mobile users demanding more from their handsets. Mobile 2.0 is the logical development of this technology, placing the power of location-based, user-generated content into a new real-world context. Google Maps Mobile is one such application: it uses network triangulation and Google’s own mapping technologies to offer information, directions, restaurant reviews or even satellite images of your current location, anywhere in the world. Mobile AR could achieve this same omniscience (omnipresence?) given the recent precedent for massively multi-user collaborative projects such as Wikipedia, Flickr and Google Maps itself. These are essentially commercially built infrastructures designed to be filled with everybody’s tags, comments or other content. Mobile AR could attract the same devotion if it offered such an infrastructure and real-world appeal.

There is a growing emphasis on Ubiquitous Computing devices in our time-precious world, signified by increased sales of Smartphones and WiFi-enabled laptops. Perhaps not surprisingly, Mobile Internet use has increased as users’ devices become capable of greater connectivity. Indeed, the mobile connected device is becoming the ubiquitous medium of modernity, as yet more media converge in it. It is the mobile platform’s suitability for performing certain tasks that Mobile AR can take advantage of, locating itself in the niche currently occupied by Mobile Internet. Returning to my Mixed Reality Scale, Mobile AR serves the user better than Mobile Internet currently can: by providing just enough reality to exploit virtuality, Mobile AR keeps the user necessarily grounded in their physical environment as they manipulate digital elements useful to their daily lives.

The Internet

The Internet, or specifically the World Wide Web, requires a limited virtuality in order to do its job. The shallow immersion offered by our computer screens actually serves our needs very well, since the Internet’s role in our lives is to connect, store and present information in an accessible, searchable, scannable and consistent form for millions of users to access simultaneously, to be dipped in and out of quickly or to surround ourselves with the information we want. The naturally immersive VR takes us partway towards Mobile AR, but its influence stops at the (admittedly profound) concept of real-time interaction with 3D digital images. The Internet brings information to us, but VR forces us to go to it.

This is a function of the Mixed Reality Scale, and of each medium’s distance from The Real. The closer we can bring artefacts from The Virtual to The Real, the more applicable these can be in our everyday lives. The self-sufficient realm of The Virtual does not require grounding in physical reality in order to exist, whereas the Internet and other MR media depend on The Real to operate. AR is the furthest that a virtual object can be ‘stitched into’ our reality; in doing so we exploit our power in this realm to manipulate and interact with these digital elements to suit our own ends, as we currently do with the World Wide Web.

The wide-ranging entertainment resources offered by the Internet are having a profound effect on real-world businesses, a state of flux that Mobile AR could potentially exploit. A recent shift in the needs of consumers is forcing a change in the ways many blue-chip organisations handle their businesses: mobile data carriers (operators), portals, publishers, content owners and broadcasters are all seeking new content types to face up to the threat of VoIP (Voice over Internet Protocol), which is reducing voice traffic, and of Web TV and the Internet, which are reducing TV audiences, particularly in the youth market.

T-Mobile, for example, seeks to improve revenues by offering unique licensed mobile games, themes, ringtones and video clips on its T-Zones Mobile Internet Portal. NBC’s hit series ‘Heroes’ is the most downloaded show on the Internet, forcing NBC to offer exclusive online comics on its webpage, seeking to recoup advertising revenue losses by lacing the pages of these comics with advertising. Mobile AR represents a fresh landscape for these businesses to mine. It is no surprise, then, that some forward-thinking AR developers are already writing software specifically for the display of virtual advertisement billboards in built-up city areas (T-Immersion).

The Internet has changed the way we receive information about the world around us. This hyper-medium has swallowed the world’s information and media content, whilst continuing to enable the development of new and exciting offerings exclusive to the desktop user. The computing capacity required to use the Internet has in the past constrained the medium to the desktop computer, but in the ‘Information Age’ the World Wide Web is just that: World Wide.

Virtual Reality

AR is considered by some to be a logical progression of VR technologies (Liarokapis, 2006; Botella, 2005; Reitmayr & Schmalstieg, 2001), a more appropriate way of interacting with information in real time that has been made possible only by recent innovations. Thus, one could consider that a full historical appraisal would pertain to VR’s own history, plus the last few years of AR developments. Though this method would certainly work for much of Wearable AR, which uses a similar device array, the same could not be said for Mobile AR, since by its nature it offers a set of properties from a wholly different paradigm: portability, connectivity and many years of mobile development exclusive of AR research come together to enhance Mobile AR’s formal capabilities. Despite the obvious mass-market potential of this technology, most AR research continues to explore the Wearable AR paradigm. Where Mobile AR is cousin to VR, Wearable AR is sister. Most published works favour the Wearable AR approach, so if my assessment of Mobile AR is to be fair I cannot ignore its grounding in VR research.

As aforementioned, VR is the realm at the far right of my Mixed Reality Scale. To explore a Virtual Reality, users must wear a screen array on their heads that cloaks their vision with a wholly virtual world. These head-mounted displays (HMDs) serve to transpose the user into this virtual space whilst cutting them off from their physical environment:

A Virtual Reality HMD, two LCD screens occupy the wearer's field of vision

The HMDs must be connected to a wearable computer, a Ghostbusters-style device attached to the wearer’s back or waist that holds a CPU and graphics renderer. To interact with virtual objects, users must hold a joypad. Aside from being a lot to carry, this equipment restricts the senses and is often expensive:

A Wearable Computer array, this particular array uses a CPU, GPS, HMD, graphics renderer, and human-interface-device

It is useful at this point to reference some thinkers in VR research, with a view to better understanding The Virtual realm and its implications for Mobile AR’s Mixed Reality approach. Writing on the different selves offered by various media, Lonsway (2002) states that:

“With the special case of the immersive VR experience, the user is (in actual fact) located in physical space within the apparatus of the technology. The computer-mediated environment suggests (in effect) a trans-location outside of this domain, but only through the construction of a subject centred on the self (I), controlling an abstract position in a graphic database of spatial coordinates. The individual, of which this newly positioned subject is but one component, is participant in a virtuality: a spatio-temporal moment of immersion, virtualised travel, physical fixity, and perhaps, depending on the technologies employed, electro-magnetic frequency exposure, lag-induced nausea, etc.”

Lonsway (2002: 65)

Despite VR’s flaws, media representations of the technology throughout the eighties and early nineties, such as Tron (Lisberger, 1982), The Lawnmower Man (Leonard, 1992) and Johnny Mnemonic (Longo, 1995), generated plenty of audience interest and consequent industrial investment. VR hardware was produced in bulk for much of the early nineties, but it failed to become a mainstream technology, largely due to a lack of capital investment in VR content, itself a function of the stagnant demand for expensive VR hardware (Mike Dicks of Bomb Productions: personal communication). The market for VR content collapsed, but the field remains an active contributor in certain key areas, with notable success as a commonplace training aid for military pilots (Baumann, date unknown) and as an academic tool for the study of player immersion and virtual identity (Lonsway, 2002).

Most AR development uses the same array of devices as VR: a wearable computer, an input device and an HMD. The HMD is slightly different in these cases; it is transparent and contains an internal half-silvered mirror, which combines images from an LCD display with the user’s vision of the world:

An AR HMD, this model has a half-mirrored screen at 45 degrees. Above are two LCDs that reflect into the wearer's eyes whilst they can see what lies in front of them

What Wearable AR looks like, notice the very bright figure ahead. If he was darker he would not be visible

There are still many limitations placed on the experience, however: first, the digital graphics must be very bright in order to stand out against natural light; second, the user must carry a cumbersome wearable computer array; third, this array sits at a price point too high for mainstream use. Much of the hardware used in Wearable AR research is bought wholesale from liquidated VR companies (Dave Mee of Gameware: personal communication), a fact representative of the backward thinking of much AR research.

In their work New Media and the Permanent Crisis of Aura, Bolter et al. (2006) apply Benjamin’s work on the Aura to Mixed Reality technologies, and attempt to forge a link between VR and the Internet. This passage offers a perspective on the virtuality of the desktop computer and the World Wide Web:

“What we might call the paradigm of mixed reality is now competing successfully with what we might call ‘pure virtuality’ – the earlier paradigm that dominated interface design for decades.
In purely virtual applications, the computer defines the entire informational or perceptual environment for the user … The goal of VR is to immerse the user in a world of computer generated images and (often) computer-controlled sound. Although practical applications for VR are relatively limited, this technology still represents the next (and final?) logical step in the quest for pure virtuality. If VR were perfected and could replace the desktop GUI as the interface to an expanded World Wide Web, the result would be cyberspace.”

Bolter et al. (2006: 22)

This account offers a new platform for discussion, useful for analysing the Internet as a component of Mobile AR: the idea that the Internet could exploit the spatial capabilities of a Virtual Reality to enhance its message. Bolter posits that this could be the logical end of a supposed “quest for pure virtuality”. I would argue that the reason VR did not succeed is the same reason that there is no “quest” to join: VR technologies lack the real-world applicability that we can easily find in reality-grounded media such as the Internet or the mobile telephone.