Anyone for Tonsil Tennis?

This is pretty cool I guess. The idea is that your partner “helps you” to play a video game by letting you snog them in different ways (while you’re looking at a computer screen and therefore not really paying attention).

It’s a bit gross, but it’s still a novel idea, so have a look:

What’s the mechanic here?

The Kiss Controller interface has two components: a customized headset that functions as a sensor receiver and a magnet that provides sensor input. The user affixes a magnet to his/her tongue with Fixodent. Magnetic field sensors are attached to the end of the headset and positioned in front of the mouth. As the user moves his/her tongue, this creates varying magnetic fields that are used to control games.

We demonstrate the Kiss Controller bowling game. One person has a magnet on his/her tongue and the other person wears the headset. While they kiss, the person who has the magnet on his/her tongue controls the direction and speed of the bowling ball for 20 seconds. The goals of this game are to guide the ball so that it maintains an average position in the center of the alley and to increase the speed of the ball by moving the tongue faster while kissing.
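As described, the game reduces to two signals: lateral field strength for direction, and how fast the field changes for speed. Here is a minimal sketch of how such a mapping might look, assuming a simple two-axis magnetometer; the function names, axes, scaling and sample data are my own illustration, not the Kiss Controller's actual code:

```python
# Hypothetical sketch: mapping magnetometer readings to a bowling ball's
# direction and speed. The sensor axes, sample rate and scaling are
# assumptions, not the Kiss Controller's real implementation.

def ball_control(samples, dt=0.05):
    """Convert a series of (x, y) magnetic-field readings into
    (direction, speed) pairs for the bowling game.

    direction: lateral field strength (tongue left/right)
    speed: rate of change of the field (how fast the tongue moves)
    """
    controls = []
    prev = samples[0]
    for x, y in samples[1:]:
        direction = x  # left/right field strength steers the ball
        # The faster the tongue moves, the faster the field changes
        speed = (abs(x - prev[0]) + abs(y - prev[1])) / dt
        controls.append((direction, speed))
        prev = (x, y)
    return controls

readings = [(0.0, 1.0), (0.2, 1.1), (0.5, 0.9), (0.1, 1.0)]
print(ball_control(readings))
```

The interesting design point is that no absolute calibration is needed for speed: differencing successive samples means only relative tongue motion matters.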

And what’s the point?

I literally do not know. If I were the developers I'd have focused on highlighting their innovative use of the tongue as an input device: it's the most dexterous muscle in the body, and its use is often one of the few faculties remaining to paralytics.

Can’t this be a remote control for wheelchairs or similar, rather than a Wii Sports ripoff? Come on guys…

More details here: Kiss Controller.

Applying McLuhan

I begin with McLuhan, whose Laws of Media, or tetrad, offers greater insight into Mobile AR, sustaining and building upon the arguments developed in my assessment of the interlinking technologies that meet in Mobile AR, whilst also providing a basis from which to address some of his deeper thoughts.

The tetrad can be considered an observational lens to turn upon one's subject technology. It assumes that four processes take place during each iteration of a given medium. These processes are revealed as answers to the following questions, taken from Levinson (1999):

“What aspect of society or human life does it enhance or amplify? What aspect, in favour or high prominence before the arrival of the medium in question, does it eclipse or obsolesce? What does the medium retrieve or pull back into centre stage from the shadows of obsolescence? And what does the medium reverse or flip into when it has run its course or been developed to its fullest potential?”

(Digital McLuhan, 1999: 189).

To ask each of these it is useful to transfigure our concept of Mobile AR into a more workable and fluid term: the Magic Lens, a common expression in mixed reality research. Making this change allows the exploration of the more theoretical aspects of the technology free of its machinic nature, whilst integrating a necessary element of metaphor that will serve to illustrate my points.

To begin, what does the Magic Lens amplify? AR requires the recognition of a pre-programmed real-world image in order to augment the environment correctly; it is important to mention that it is the user who locates this target. It could be said that the Magic Lens magnifies rather than amplifies an aspect of the user's environment, because, like other optical tools, the user must point the device towards a target and look through it. The difference with the Magic Lens is that one aspect of its target, one potential meaning, is privileged over all others. An arbitrary black and white marker holds the potential to mean many things to many people, but viewed through an amplifying Magic Lens it means only what the program recognises and consequently superimposes.
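This privileging of a single meaning can be put concretely. In marker-based AR, the pipeline ends in what is essentially a lookup: whatever the detection stage reports, only a pre-programmed overlay is superimposed, and everything else stays mute. A toy sketch, where the marker IDs and overlays are invented purely for illustration:

```python
# Illustrative sketch of the "one privileged meaning" point: a marker-based
# Magic Lens maps each recognised marker to exactly one programmed overlay.
# Marker IDs and overlay names are invented for illustration.

OVERLAYS = {
    "marker_042": "3D model: bowling alley",
    "marker_107": "3D model: restaurant review card",
}

def augment(detected_marker):
    """Return the one meaning the program recognises for a marker.

    An unprogrammed target returns None: to the Magic Lens it has
    no meaning at all, however much it might mean to a human viewer.
    """
    return OVERLAYS.get(detected_marker)

print(augment("marker_042"))  # the privileged, superimposed meaning
print(augment("graffiti"))    # None: not recognised, not augmented
```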

This superimposition necessarily obscures what lies beneath. McLuhan might recognise this as an example of obsolescence. The Magic Lens privileges virtual over real imagery, and the act of augmentation leaves physical space somewhat redundant: augmenting one’s space makes it more virtual than real. The AR target undergoes amplification, becoming the necessary foundation of the augmented reality. What is obsolesced by the Magic Lens, then, is not the target which it obscures, but everything except the target.

I am reminded of McLuhan’s Extensions of Man (1962: 13), which offers the view that in extending ourselves through our tools, we auto-amputate the aspect we seek to extend. There is a striking parallel to be drawn with amplification and obsolescence, which becomes clear when we consider that in amplifying an aspect of physical reality through a tool, we are extending sight, sound and voice through the Magic Lens to communicate in wholly new ways using The Virtual as a conduit. This act obsolesces physical reality, the nullification effectively auto-amputating the user from their footing in The Real. So where have they ‘travelled’? The Magic Lens is a window into another reality, a mixed reality where real and virtual share space. In this age of Mixed Realities, the tetrad can reveal more than previously intended: new dimensions of human interaction.

The third question in the tetrad asks what the Magic Lens retrieves that was once lost. So much new ground is gained by this technology that it would be difficult to make a claim. However, I would not believe in Mobile AR's prospects if I did not recognise the exhumed benefits it offers as well as the novel ones. The Magic Lens retrieves the everyday tactility and physicality of information engagement, which was obsolesced by other screen media such as television, the desktop PC and the games console. The Magic Lens encourages users to interact in physicality, not virtuality. The act of actually walking somewhere to find something out, or going to see someone in order to play with them, is retrieved.

Moreover, we retrieve the sense of control over our media input that was lost to these same technologies. Information is freed into the physical world, transfiguring its meaning and offering a greater degree of manipulative power. Mixed Reality can be seen only through the one-way glass of the Magic Lens; The Virtual cannot spill through unless we allow it to. We have seen that certain mainstream media can wholly fold themselves into reality and become an annoyance - think Internet pop-ups and mobile ringtones - but through the Magic Lens we retrieve the personal agency to navigate our own experience. I earlier noted that "the closer we can bring artefacts from The Virtual to The Real, the more applicable these can be in our everyday lives"; a position that resonates with my growing argument that engaging with digital information through the Magic Lens is an appropriate way to integrate, and indeed exploit, The Virtual as a platform for the provision of communication, leisure and information applications.

It is hard to approximate what the Magic Lens might flip into, since at this point AR is a wave that has not yet crested. I might suggest that since the medium owes its success to its mobile device form, its trajectory is likely entwined with that of the mobile platform: the Magic Lens flips into whatever the mobile multimedia computer flips into. Another possibility is that the Magic Lens inspires such commercial success and industrial investment that a surge in demand for wearable computers shifts AR into a new form. In this case, the user cannot dip in and out of Mixed Reality as they see fit; they are immersed in it whenever they wear their visor. This has connotations all of its own, but I will not expound my own views, given that much cultural change must first occur to implement such a drastic shift in consumer fashions and demands. A third way for the Magic Lens to 'flip' might be its wider application in other media. Developments in digital ink technologies, printable folding screens, 'cloud' computing, interactive projector displays, multi-input touch screen devices, automotive glassware and electronic product packaging could all take advantage of the AR treatment. We could end up living far more closely with The Virtual than previously possible.

In their work The Global Village, McLuhan and Powers (1989) state that:

“The tetrad performs the function of myth in that it compresses past, present, and future into one through the power of simultaneity. The tetrad illuminates the borderline between acoustic and visual space as an arena of the spiralling repetition and replay, both of input and feedback, interlace and interface in the area of imploded circle of rebirth and metamorphosis”

(The Global Village 1989: 9)

I would be interested to hear their view on the unique “simultaneity” offered by the Magic Lens, or indeed the “metamorphosis” it would inspire, but I would argue that when applied from a Mixed Reality inter-media perspective, their outlook seems constrained to the stringent and self-involved rules of their own epistemology. Though he would be loath to admit it, Baudrillard took on McLuhan’s work as the basis of his own (Genosko, 1999; Kellner, date unknown), and made it relevant to the postmodern era. His work is cited by many academics seeking to forge a relationship to Virtual Reality in their research…

Mobile Telephone

The Internet and the mobile phone are two mighty forces that have bent contemporary culture and remade it in their form. They offer immediacy, connectivity and social interaction of a wholly different kind. These technologies have brought profound changes to the way academia considers technoscience and digital communication. Their relationship was of interest to academics in the early 1990s, who declared that their inevitable fusion would mark the beginning of the age of Ubiquitous Computing: "the shift away from computing which centered on desktop machines towards smaller multiple devices distributed throughout the space" (Weiser, 1991 in Manovich, 2006). In truth, it was the microprocessor and Moore's Law - "the number of transistors that can be fit onto a square inch of silicon doubles every 12 months" (Stokes, 2003) - that led to many of the technologies that fall under this term: laptops, PDAs, digital cameras, flash memory sticks and MP3 players. Only recently have we seen mobile telephony take on the true properties of the Internet.
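For a sense of scale, the 12-month formulation quoted above compounds very quickly. A back-of-the-envelope calculation (the starting figure of ~2,300 transistors, the oft-cited count for the Intel 4004, is used purely for illustration):

```python
# A quick check of the quoted formulation of Moore's Law: a doubling
# every 12 months. The starting count is illustrative only.

def transistors(start, years, doubling_period=1.0):
    """Project a transistor count forward under periodic doubling."""
    return start * 2 ** (years / doubling_period)

# Under 12-month doubling, a chip grows 1024-fold in a decade:
print(transistors(2300, 10))  # → 2355200.0
```

Whatever the exact doubling period, it is this exponential curve that packed laptop-class processing into pocketable devices within two decades.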

The HARVEE project is partially backed by Nokia Corp., which recognises its potential as a Mobile 2.0 technology: user-generated content for mobile telephony that exploits web connectivity. Mobile 2.0 is an emerging field thematically aligned with the better-established Web 2.0. Nokia already refer to their higher-end devices as multimedia computers rather than as mobile phones. Their next generation of Smartphones will make heavy use of camera-handling systems, a choice predicated on the importance of user-generated content as a means to promote social interaction. This strategic move is likely to realign Nokia Corp.'s position in the mobile telephony and entertainment markets.

Last year, more camera phones were sold than digital cameras (Future Image, 2006). Nokia have a 12-megapixel camera phone ready for release in 2009, and it will be packaged with a processing unit equal in power to a Sony PSP (Nokia Finland: non-public product specification document). MP3 and movie players are now standard on many handsets, with content stored on plug-in memory cards and viewed on increasingly high-resolution colour screens. Mobile gaming is the fastest-growing sector of the games industry (Entertainment & Leisure Software Publishers Association (ELSPA) sales chart). The modern mobile phone receives its information over wide-band GPRS networks, allowing greater network coverage and faster data transfer. Phone calls remain the primary function, but users are exploiting the multimedia capabilities of their devices in ways not previously considered. It is these factors - technological, economic and infrastructural - that provide the perfect arena for Mobile AR's entry into play.

Mobile Internet is the natural convergence of mobile telephony and the World Wide Web, and is already a common feature of new mobile devices. Mobile Internet, I would argue, is another path leading to Mobile AR, driven by mobile users demanding more from their handsets. Mobile 2.0 is the logical development of this technology, placing the power of location-based, user-generated content into a new real-world context. Google Maps Mobile is one such application: it uses network triangulation and Google's own Maps technologies to offer information, directions, restaurant reviews or even satellite images of your current location, anywhere in the world. Mobile AR could achieve this same omniscience (omnipresence?) given the recent precedent for massively multi-user collaborative projects such as Wikipedia, Flickr and Google Maps itself. These are essentially commercially built infrastructures designed to be filled with everybody's tags, comments and other content. Mobile AR could attract the same devotion if it offered such an infrastructure and real-world appeal.
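The network triangulation mentioned above can be sketched crudely: the handset estimates its position from the cell towers it can hear, weighted by signal strength. The weighted-centroid toy below is my own illustration with invented coordinates and weights; production systems are far more sophisticated:

```python
# Toy sketch of coarse network positioning: a weighted centroid of
# nearby cell towers. Coordinates and signal weights are invented;
# this is not Google Maps Mobile's actual algorithm.

def weighted_centroid(towers):
    """Estimate a handset position from (x, y, signal_strength) tuples.

    Stronger signals pull the estimate towards their tower, on the
    rough assumption that signal strength falls with distance.
    """
    total = sum(w for _, _, w in towers)
    x = sum(x * w for x, _, w in towers) / total
    y = sum(y * w for _, y, w in towers) / total
    return (x, y)

towers = [(0.0, 0.0, 0.5), (10.0, 0.0, 0.25), (0.0, 10.0, 0.25)]
print(weighted_centroid(towers))  # → (2.5, 2.5)
```

Crude as it is, an estimate of this kind is enough to anchor location-based content without any GPS hardware, which is what made such services viable on ordinary handsets.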

There is a growing emphasis on Ubiquitous Computing devices in our time-precious world, signified by the increased sales in Smartphones and WiFi enabled laptops. Perhaps not surprisingly, Mobile Internet use has increased as users’ devices become capable of greater connectivity. Indeed, the mobile connected device is becoming the ubiquitous medium of modernity, as yet more media converge in it. It is the mobile platform’s suitability to perform certain tasks that Mobile AR can take advantage of, locating itself in the niche currently occupied by Mobile Internet. Returning to my Mixed Reality Scale, Mobile AR serves the user better than Mobile Internet currently can: providing just enough reality to exploit virtuality, Mobile AR keeps the user necessarily grounded in their physical environment as they manipulate digital elements useful to their daily lives.

Virtual Reality

AR is considered by some to be a logical progression of VR technologies (Liarokapis, 2006; Botella, 2005; Reitmayr & Schmalstieg, 2001): a more appropriate way to interact with information in real time, granted only by recent innovations. Thus, one could consider that a full historical appraisal would comprise VR's own history plus the last few years of AR developments. Though this method would certainly work for much of Wearable AR - which uses a similar device array - the same could not be said for Mobile AR, since by its nature it offers a set of properties from a wholly different paradigm: portability, connectivity and many years of mobile development exclusive of AR research come together to enhance Mobile AR's formal capabilities. Despite the obvious mass-market potential of this technology, most AR research continues to explore the Wearable AR paradigm. Where Mobile AR is cousin to VR, Wearable AR is sister. Most published works favour the Wearable AR approach, so if my assessment of Mobile AR is to be fair I cannot ignore its grounding in VR research.

As aforementioned, VR is the realm at the far right of my Mixed Reality Scale. To explore a Virtual Reality, users must wear a screen array on their heads that cloaks their vision with a wholly virtual world. These head-mounted displays (HMDs) serve to transpose the user into this virtual space whilst cutting them off from their physical environment:

A Virtual Reality HMD, two LCD screens occupy the wearer's field of vision

The HMD must be connected to a wearable computer, a Ghostbusters-style device attached to the wearer’s back or waist that holds a CPU and graphics renderer. To interact with virtual objects, users must hold a joypad. Aside from being a lot to carry, this equipment is restrictive on the senses and is often expensive:

A Wearable Computer array, this particular array uses a CPU, GPS, HMD, graphics renderer, and human-interface-device

It is useful at this point to reference some thinkers in VR research, with the view to better understanding The Virtual realm and its implications for Mobile AR’s Mixed Reality approach. Writing on the different selves offered by various media, Lonsway (2002) states that:

“With the special case of the immersive VR experience, the user is (in actual fact) located in physical space within the apparatus of the technology. The computer-mediated environment suggests (in effect) a trans-location outside of this domain, but only through the construction of a subject centred on the self (I), controlling an abstract position in a graphic database of spatial coordinates. The individual, of which this newly positioned subject is but one component, is participant in a virtuality: a spatio-temporal moment of immersion, virtualised travel, physical fixity, and perhaps, depending on the technologies employed, electro-magnetic frequency exposure, lag-induced nausea, etc.”

Lonsway (2002: 65)

Despite the technology’s flaws, media representations of VR throughout the eighties and early nineties - such as Tron (Lisberger, 1982), The Lawnmower Man (Leonard, 1992) and Johnny Mnemonic (Longo, 1995) - generated plenty of audience interest and consequent industrial investment. VR hardware was produced in bulk for much of the early nineties, but it failed to become a mainstream technology, largely due to a lack of capital investment in VR content - itself a function of stagnant demand for expensive VR hardware (Mike Dicks of Bomb Productions: personal communication). The market for VR content collapsed, but the field remains an active contributor in certain key areas, with notable success as a commonplace training aid for military pilots (Baumann, date unknown) and as an academic tool for the study of player immersion and virtual identity (Lonsway, 2002).

Most AR development uses VR’s same array of devices: a wearable computer, an input device and an HMD. The HMD is slightly different in these cases: it is transparent and contains an internal half-silvered mirror, which combines images from an LCD display with the user’s view of the world:

An AR HMD, this model has a half-mirrored screen at 45 degrees. Above are two LCDs that reflect into the wearer's eyes whilst they can see what lies in front of them

What Wearable AR looks like, notice the very bright figure ahead. If he was darker he would not be visible

There are still many limitations placed on the experience, however: first, the digital graphics must be very bright in order to stand out against natural light; second, the experience requires a cumbersome wearable computer array; third, this array sits at a price point too high for mainstream use. Much of the hardware used in Wearable AR research is bought wholesale from liquidated VR companies (Dave Mee of Gameware: personal communication), a fact representative of the backward thinking of much AR research.

In their work New Media and the Permanent Crisis of Aura Bolter et al. (2006) apply Benjamin’s work on the Aura to Mixed Reality technologies, and attempt to forge a link between VR and the Internet. This passage offers a perspective on the virtuality of the desktop computer and the World Wide Web:

“What we might call the paradigm of mixed reality is now competing successfully with what we might call ‘pure virtuality’ – the earlier paradigm that dominated interface design for decades.
In purely virtual applications, the computer defines the entire informational or perceptual environment for the user … The goal of VR is to immerse the user in a world of computer generated images and (often) computer-controlled sound. Although practical applications for VR are relatively limited, this technology still represents the next (and final?) logical step in the quest for pure virtuality. If VR were perfected and could replace the desktop GUI as the interface to an expanded World Wide Web, the result would be cyberspace.”

Bolter et al. (2006: 22)

This account offers a new platform for discussion useful for the analysis of the Internet as a component in Mobile AR: the idea that the Internet could exploit the spatial capabilities of a Virtual Reality to enhance its message. Bolter posits that this could be the logical end of a supposed “quest for pure virtuality”. I would argue that the reason VR did not succeed is the same reason that there is no “quest” to join: VR technologies lack the real-world applicability that we can easily find in reality-grounded media such as the Internet or mobile telephone.

Reverse-Engineering AR

This section seeks to locate AR’s position within a wider context.

Three media converge in Mobile AR: Virtual Reality, the Internet and the mobile telephone, with other, subsidiary technologies acting as enablers to this end. By assessing each of these in turn, we can glean knowledge of these highly influential media forms and their impact, and build the findings into a model for the commercial diffusion and societal impact that Mobile AR might enjoy.

Virtual Reality is first up. Check out my next post in the series!