I read about Foldit in Wired US yesterday: a game that builds on the foundations laid by SETI@home (which uses thousands of computers’ idle time to decode radio frequencies from space) and crowdsources solutions to the protein-folding problems that are currently baffling the smartest machines in the world.
The difference with Foldit is that it’s not PC idle time that is tapped into here, but players’ idle time. No algorithm can yet match humans’ depth perception, natural ability to recognise patterns, or knack for spotting the causal links in their own actions. These traits make us humans the ideal CPU to solve these ‘protein puzzles’:
Foldit provides a series of tutorials in which the player manipulates simple protein-like structures, and a periodically updated set of puzzles based on real proteins. The application displays a graphical representation of the protein’s structure which the user is able to manipulate with the aid of a set of tools.
As the structure is modified, a “score” is calculated based on how well-folded the protein is, based on a set of rules. A list of high scores for each puzzle is maintained. Foldit users may create and join groups, and share puzzle solutions with each other; a separate list of group high scores is maintained.
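Foldit’s real scoring is built on a full biochemical energy function, the rules of which are not described in the article. The sketch below is purely illustrative Python, with invented terms and weights, to show the shape of a rule-based fold score: compact structures earn a bonus, while physically impossible atom overlaps are penalised.

```python
# Illustrative only: Foldit's actual scoring uses the Rosetta energy
# function; these terms and weights are invented for clarity.
import math

def fold_score(atoms):
    """Score a fold given a list of (x, y, z) atom coordinates.

    Higher is better: compactness is rewarded, clashes (atoms closer
    than 1.0 unit) are heavily penalised.
    """
    clash_penalty = 0.0
    spread = 0.0
    for i, a in enumerate(atoms):
        for b in atoms[i + 1:]:
            d = math.dist(a, b)
            if d < 1.0:  # overlapping atoms: physically impossible
                clash_penalty += (1.0 - d) * 1000
            spread += d
    compactness_bonus = 1000.0 / (1.0 + spread / len(atoms))
    return compactness_bonus - clash_penalty
```

Folding a straight three-atom chain into a triangle, for instance, reduces the total spread and so raises the score, which is the kind of incremental improvement players chase on the leaderboard.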
Indeed, the creators report that groups working together have made breakthroughs matched by neither individuals nor heavy-duty computing power. It is the power of the engaged masses that the Baker Lab, the research team behind the game, hopes will bring forth potential cures for HIV/AIDS, cancer and Alzheimer’s.
You’ve probably read about Google Latitude, and maybe even used it yourself. I’ve been using it mostly without meaning to, because I activated the service on my N95’s Google Maps and the bloody thing never turns off. Here’s where I am right now:
Locative technologies are a growing area of interest for me. I believe that GPS, cell-tower triangulation and even good old Bluetooth will play a large part in making cloud-computing extra-relevant to consumers.
I know that people get a bit funny with the blend of real locations and virtual space (see Google Street View debacle) but once we’re all using our next-gen pieces of UI, your networked device could begin to act as a portal to new layers of information useful to you about the city, street, or shop you are in.
I am talking about location-based advertising. An implementational nightmare, but it is foreseeable that semantic technologies could serve geographically relevant messages, charging advertisers on a cost-per-impact basis. Google kind of do this with their local search results. It’s a bit shit at the moment though.
The nearest we have to the kind of next-gen solution I’m thinking of is lastminute.com’s free service NRU, available on the Android OS. It lets you scan around your environment with your phone acting as a viewfinder, where cinemas, restaurants and theatres are overlaid in a sonar-like interface. These services pay a small amount to lastminute.com on an affiliate basis, or are paid inclusions:
There’s one locative service I’m disappointed never took off in the UK, despite being around for a while. BrightKite is a kind of location-based Twitter, and it had real promise until Google came stomping all over it with the release of Latitude.
If I were to ‘check in’ at The Queens Larder on Russell Square, BrightKite users would see my marker and message on a map of the area, as well as other people checked in nearby. The potential for social interaction is high, because through using the service one feels proximity with other users.
With all this in mind, I’d like my readers to ‘feel closer’ to me, so as well as in this post I’ll be placing my Latitude Location Badge on my Contact Page. If you’re in the vicinity, go ahead and either serve me an advert or say hello. I won’t mind which.
AR is considered by some to be a logical progression of VR technologies (Liarokapis, 2006; Botella, 2005; Reitmayr & Schmalstieg, 2001): a more appropriate way to interact with information in real time, made possible only by recent innovations. Thus, one could consider that a full historical appraisal would pertain to VR’s own history, plus the last few years of AR developments. Though this method would certainly work for much of Wearable AR, which uses a similar device array, the same could not be said for Mobile AR, since by its nature it offers a set of properties from a wholly different paradigm: portability, connectivity and many years of mobile development exclusive of AR research come together to enhance Mobile AR’s formal capabilities. Despite the obvious mass-market potential of this technology, most AR research continues to explore the Wearable AR paradigm. Where Mobile AR is cousin to VR, Wearable AR is sister. Most published works favour the Wearable AR approach, so if my assessment of Mobile AR is to be fair I cannot ignore its grounding in VR research.
As aforementioned, VR is the realm at the far right of my Mixed Reality Scale. To explore a Virtual Reality, users must wear a screen array on their heads that cloaks their vision with a wholly virtual world. These head-mounted displays (HMDs) serve to transpose the user into this virtual space whilst cutting them off from their physical environment:
The HMD must be connected to a wearable computer, a Ghostbusters-style device attached to the wearer’s back or waist that holds a CPU and graphics renderer. To interact with virtual objects, users must hold a joypad. Aside from being a lot to carry, this equipment restricts the senses and is often expensive:
It is useful at this point to reference some thinkers in VR research, with the view to better understanding The Virtual realm and its implications for Mobile AR’s Mixed Reality approach. Writing on the different selves offered by various media, Lonsway (2002) states that:
“With the special case of the immersive VR experience, the user is (in actual fact) located in physical space within the apparatus of the technology. The computer-mediated environment suggests (in effect) a trans-location outside of this domain, but only through the construction of a subject centred on the self (I), controlling an abstract position in a graphic database of spatial coordinates. The individual, of which this newly positioned subject is but one component, is participant in a virtuality: a spatio-temporal moment of immersion, virtualised travel, physical fixity, and perhaps, depending on the technologies employed, electro-magnetic frequency exposure, lag-induced nausea, etc.”
Lonsway (2002: 65)
Despite the technology’s flaws, media representations of VR throughout the eighties and early nineties, such as Tron (Lisberger, 1982), Lawnmower Man (Leonard, 1992) and Johnny Mnemonic (Longo, 1995), generated plenty of audience interest and consequent industrial investment. VR hardware was produced in bulk for much of the early nineties, but it failed to become a mainstream technology, largely owing to a lack of capital investment in VR content, itself a function of the stagnant demand for expensive VR hardware (Mike Dicks of Bomb Productions: personal communication). The market for VR content collapsed, but the field remains an active contributor in certain key areas, with notable success as a commonplace training aid for military pilots (Baumann, date unknown) and as an academic tool for the study of player immersion and virtual identity (Lonsway, 2002).
Most AR development uses VR’s same array of devices: a wearable computer, input device and an HMD. The HMD is slightly different in these cases; it is transparent and contains an internal half-silvered mirror, which combines images from an LCD display with the user’s vision of the world:
There are still many limitations placed on the experience, however: first, the digital graphics must be very bright in order to stand out against natural light; second, the user must carry a cumbersome wearable computer array; third, this array sits at a price point too high for it to reach mainstream use. Much of the hardware used in Wearable AR research is bought wholesale from liquidated VR companies (Dave Mee of Gameware: personal communication), a fact representative of the backward thinking of much AR research.
In their work New Media and the Permanent Crisis of Aura Bolter et al. (2006) apply Benjamin’s work on the Aura to Mixed Reality technologies, and attempt to forge a link between VR and the Internet. This passage offers a perspective on the virtuality of the desktop computer and the World Wide Web:
“What we might call the paradigm of mixed reality is now competing successfully with what we might call ‘pure virtuality’ – the earlier paradigm that dominated interface design for decades.
In purely virtual applications, the computer defines the entire informational or perceptual environment for the user … The goal of VR is to immerse the user in a world of computer generated images and (often) computer-controlled sound. Although practical applications for VR are relatively limited, this technology still represents the next (and final?) logical step in the quest for pure virtuality. If VR were perfected and could replace the desktop GUI as the interface to an expanded World Wide Web, the result would be cyberspace.”
Bolter et al. (2006: 22)
This account offers a new platform for discussion useful for the analysis of the Internet as a component in Mobile AR: the idea that the Internet could exploit the spatial capabilities of a Virtual Reality to enhance its message. Bolter posits that this could be the logical end of a supposed “quest for pure virtuality”. I would argue that the reason VR did not succeed is the same reason that there is no “quest” to join: VR technologies lack the real-world applicability that we can easily find in reality-grounded media such as the Internet or mobile telephone.
I have been aided in this series by a connection with Gameware Development Limited, a Cambridge-based commercial enterprise working in the entertainment industry. Gameware was formed in May 2003 from Creature Labs Ltd, a developer for the PC games market that produced Creatures, the market-leading game in Artificial Intelligence (AI). When Gameware was formed, a strategic decision was made to move away from retail products and into the provision of technical services. They now work within the broadcasting and mobile telephony space in addition to the traditional PC market. I use this business as a platform to launch into a discussion of the developments, current and past, that could see AR become a part of contemporary life, and of just why AR is such a promising technology.
Gameware’s first explorations into AR came when they were commissioned by the BBC to develop an AR engine and software toolkit for a television show to be aired on the CBBC channel. The toolkit lets children build virtual creatures or zooks at home on their PCs which are uploaded back to the BBC and assessed:
The children with the best designs are then invited to the BAMZOOKi studio to have their virtual creatures compete against each other in a purpose-built arena comprised of real and digital elements. The zooks themselves are not real, of course, but the children can see silhouettes of the digital action projected onto the arena in front of them. Each studio camera has an auxiliary camera pointed at AR markers on the studio ceiling, so each camera’s exact location in relation to the simulated events can be processed in real time. The digital creatures are stitched into the footage, and are navigable and zoomable as if they were real studio elements. No post-production is necessary. BAMZOOKi is currently in its fourth series, with repeats aired daily:
BAMZOOKi has earned Children’s BBC some of its highest viewing figures (up to 1.2 million for the Monday shows on BBC1 and around 100,000 for each of the 20 episodes shown on digital Children’s BBC), which represents a massive milestone for AR and its emergence as a mainstream media technology. The evidence shows that there is a willing audience already receptive to contemporary AR applications. Further to the viewing figures, the commercial arm of the BBC, BBC Worldwide, is in talks to distribute the BAMZOOKi format across the world, with its AR engine as its biggest USP. Gameware hold the rights required to further develop their BAMZOOKi intellectual property (IP), and are currently working on a stripped-down version of their complex AR engine for the mobile telephony market.
I argue, however, that Broadcast AR is not the central application of AR technologies, merely an enabler for its wider applicability in other, more potent forms of media. Mobile AR offers a new channel of distribution for a variety of media forms, and it is its flexibility as a platform that could see it become a mainstream medium. Its successful deployment and reception rely on a number of cooperating factors, the innovation of its developers and the quality of the actual product being just part of the overall success of the imminent release.
As well as their AR research, Gameware creates innovative digital games based on their Creatures AI engine. They recently produced Creebies, a digital game for Nokia Corp and one of the first 3D games for mobile phones to incorporate AI. Gameware’s relationship with Nokia was strengthened when Nokia named them Pro-Developers, a title that grants Gameware a certain advantage: access to prototype mobile devices, hardware specifications, programming tools and the Symbian operating system (Symbian OS) for mobile platforms. It was this development, in combination with their experiences with BAMZOOKi and a long-standing collaboration with Cambridge University, that led to the idea for their HARVEE project. HARVEE stands for Handheld Augmented Reality Virtual Entertainment Engine.
Their product allows full 3D virtual objects, animated, interactive and navigable, to co-exist with real objects in physical space when viewed through the AR device; the software can make changes to the objects as required, providing much space for interesting digital content. The applications of such a tool range from simple toy products, advertising outlets, tourist information and multiplayer game applications, to complex visualisations of weather movements, collaboration on engineering or architectural problems, or even massive city-wide databases of knowledge where users might ‘tag’ buildings with graphical labels useful to other AR users. There is rich potential here.
In HARVEE, Gameware attempt to surmount the limitations of current AR hardware in order to deliver the latest in interactive reality imaging to a new and potentially huge user base. Indeed, Nokia’s own market research suggests that AR-capable Smartphones will be owned by 25% of all consumers by 2009 (Nokia Research Centre Cambridge, non-public document). Mobile AR of the type HARVEE hopes to achieve represents not only a significant technical challenge, but also a potentially revolutionary step in mobile telephony technologies and the entertainment industry.
Gameware’s HARVEE project is essentially the creation of an SDK (Software Development Kit) which will allow developers to create content deliverable via their own Mobile AR applications. The SDK is written with the developer in mind, and does the difficult work of augmenting images and information related to the content. This simple yet flexible approach opens up a space for various types of AR content created at low cost for developers and end-users. I see Mobile AR’s visibility on the open market as the only impediment to its success, and I believe that its simplicity of concept could see it become a participatory mass medium of user-generated and mainstream commercial content.
Presently, most AR research is concerned with live video imagery and its processing, which allows the addition of live-rendered 3D digital images. This new augmented reality is viewable through a suitably equipped device, which incorporates a camera, a screen and a CPU capable of running specially developed software. This software is written by specialist programmers with knowledge of optics, 3D image rendering, screen design and human interfaces. The work is time-consuming and difficult, and since there is little competition in this field, the rare breakthroughs that do occur come as a result of capital investment: something not willingly given to developers of such a nascent technology.
What is exciting about AR research is that once the work is done, its potential is immediately seen, since in essence it is a very simple concept. All that is required from the user is their AR device and a real world target. The target is an object in the real world environment that the software is trained to identify. Typically, these are specially designed black and white cards known as markers:
These assist the recognition software in judging viewing altitude, distance and angle. Upon identification of a marker, the software will project or superimpose a virtual object or graphical overlay above the target, which becomes viewable on the screen of the AR device. As the device moves, the digital object orients in relation to the target in real-time:
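The geometry the marker gives away can be sketched in a few lines. The toy Python below (my own illustration, not any real AR engine’s code) shows how the four corner pixels of a detected square marker yield an anchor point, an apparent distance via the pinhole camera model, and an in-plane rotation; real systems such as ARToolKit solve a full six-degree-of-freedom pose, but the intuition is the same. The marker size and focal length are assumed values.

```python
# Toy 2D sketch of what a detected marker gives an AR engine.
# Assumed constants: an 80 mm printed marker, 800 px focal length.
import math

MARKER_SIZE_MM = 80.0
FOCAL_LENGTH_PX = 800.0

def marker_pose(corners):
    """corners: four (x, y) pixel points, in order around the square."""
    # Anchor point: the marker's centre in the image.
    cx = sum(x for x, _ in corners) / 4
    cy = sum(y for _, y in corners) / 4
    # Apparent side length in pixels (average of the four edges).
    side = sum(
        math.dist(corners[i], corners[(i + 1) % 4]) for i in range(4)
    ) / 4
    # Pinhole model: the marker looks smaller the further away it is.
    distance_mm = FOCAL_LENGTH_PX * MARKER_SIZE_MM / side
    # In-plane rotation, from the direction of the top edge.
    (x0, y0), (x1, y1) = corners[0], corners[1]
    angle_deg = math.degrees(math.atan2(y1 - y0, x1 - x0))
    return (cx, cy), distance_mm, angle_deg
```

The virtual object is then drawn at the centre point, scaled by the inverse of the distance and rotated by the recovered angle, so it appears glued to the marker as the device moves.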
The goal of some AR research is to free devices from markers, to teach AR devices to make judgements about spatial movements without fixed reference points. This is the cutting edge of AR research: markerless tracking. Most contemporary research, however, uses either marker-based or GPS information to process an environment.
Marker-based tracking is suited to local AR on a small scale, such as the Invisible Train Project (Wagner et al., 2005) in which players collaboratively keep virtual trains from colliding on a real world toy train track, making changes using their touch-screen handheld computers:
GPS tracking is best applied to large-scale AR projects, such as ARQuake (Thomas et al., 2000), which exploits a scale virtual model of the University of Adelaide and a modified Quake engine to place on-campus players into a ‘first-person shooter’. This application employs a headset, wearable computer and digital compass, which together create the effect that enemies appear to walk the corridors and ‘hide’ around corners. Players shoot with a motion-sensing arcade gun, but the overall effect is quite crude:
More data inputs would make the game run more smoothly and provide a more immersive player experience. The best applications of AR will exploit multiple data inputs, so that large-scale applications might have the precision of marker-based applications whilst remaining location-aware.
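The principle of combining inputs can be shown with a deliberately simple sketch. Production systems typically use a Kalman filter; the confidence-weighted blend below is my own illustration (invented function names), mixing a coarse absolute fix such as GPS with a precise local fix from a visible marker.

```python
# Illustrative only: a confidence-weighted blend of two position
# estimates. Real sensor fusion would use a Kalman or particle filter.

def fuse_position(gps_xy, marker_xy, marker_confidence):
    """Blend a GPS fix with a marker-based fix.

    marker_confidence runs from 0.0 (no marker in view: trust GPS)
    to 1.0 (marker locked on: trust the marker). When no marker is
    visible the caller passes its last-known marker estimate.
    """
    w = max(0.0, min(1.0, marker_confidence))
    return (
        (1 - w) * gps_xy[0] + w * marker_xy[0],
        (1 - w) * gps_xy[1] + w * marker_xy[1],
    )
```

An ARQuake-style game built this way could stay location-aware across a whole campus while snapping to marker precision wherever a marker comes into view.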
Readers of this blog will be aware that AR’s flexibility as a platform lends applicability to a huge range of fields:
Current academic work uses AR to treat neurological conditions: AR-enabled projections have successfully cured cockroach phobia in some patients (Botella et al., 2005);
There are a wide range of civic and architectural uses: Roberts et al. (2002) have developed AR software that enables engineers to observe the locations of underground pipes and wires in situ, without the need for schematics;
AR offers a potentially rich resource to the tourism industry: the Virtuoso project (Wagner et al., 2005) is a handheld computer program that guides visitors around an AR enabled gallery, providing additional aural and visual information suited to each artefact;
The first commercial work in the AR space was far more playful, however: AR development in media presentations for television has led to such primetime projects as Time Commanders (Lion TV for BBC2, 2003-2005), in which contestants oversee an AR-enabled battlefield and strategise to defeat the opposing army, and FightBox (Bomb Productions for BBC2, 2003), in which players build avatars to compete in an AR ‘beat-em-up’ filmed in front of a live audience; T-Immersion (2003- ) produce interactive visual installations for theme parks and trade expositions; other work is much simpler: in one case the BBC commissioned an AR remote-control virtual Dalek for mobile phones, due for free download from BBC Online:
The next entry in this series is a case study in AR development. If you haven’t already done so, please follow me on Twitter or grab an RSS feed to be alerted when my series continues.
Augmented Reality (AR) is a field of computer research which deals with the combination of real-world and computer-generated data. AR is just one form of Mixed Reality (MR) technology, in which digital and real elements are mixed to create meaning. In essence, AR is any live image overlaid with information that augments the image’s meaning.
Digital graphics are commonly put to work in the entertainment industry, and ‘mixing realities’ is a common motif for many of today’s media forms. There are varying degrees to which The Real and The Virtual can be combined. This is illustrated in my Mixed Reality Scale:
This is a simplified version of Milgram and Kishino’s (1994) Virtuality Continuum; simplified, because their research is purely scientific, without an explicit interest in media theory or effects, and therefore not wholly applicable to my analysis. At the far left of my Mixed Reality Scale lies The Real, or physical, everyday experiential reality. For the longest time we lived solely in this realm. Then, technological innovation gave rise to the cinema, and then television. These media are located one step removed from The Real, a step closer to The Virtual, and can be considered a window on another world. This world is visually similar to our own, a fact exploited by its author to narrate believable, somewhat immersive stories. If willing, the viewer is ‘removed’ from their grounding here in physical reality, allowing them to participate in the construction of a sculpted, yet static existence. The viewer can only observe this contained reality, and cannot interact with it, a function of the viewing apparatus.
Later advancements in screen media technologies allowed the superimposition of graphical information over moving images. These were the beginnings of AR, whereby most of what is seen is real, with some digital elements supplementing the image. Indeed, this simple form of AR is still in wide use today, notably in cases where extra information is required to make sense of a subject. In the case of certain televised sports, for example, a clock and a scoreboard overlay a live football match, providing additional information that is useful to the viewer. Television viewers are already accustomed to using information that is displayed in this way:
More recently, computing and graphical power gave designers the tools to build wholly virtual environments. The Virtual is a graphical representation of raw data, and the furthest removed from physical reality on my Mixed Reality Scale. Here lies the domain of Virtual Reality (VR), a technology that uses no real elements except for the user’s human senses. The user is immersed in a seemingly separate reality, where visual, acoustic and sometimes haptic feedback serve to transpose them into this artificial, yet highly immersive space. Notice the shift from viewer to user: this is a function of the interactivity offered by digital space. VR was the forerunner to current AR research, and remains an active realm of academic study.
Computer graphics also enhanced the possibilities offered by television and cinema, forging a new point on the Mixed Reality Scale. I refer to the Augmented Virtuality (AV) approach, which uses mainly digital graphics with some real elements superimposed. For example, a newsreader reporting from a virtual studio environment is one common application. I position AV one step closer towards The Virtual to reflect the ratio of real to virtual elements:
There is an expansive realm between AV and VR technologies: media which offer the user wholly virtual constructions that hold potential for immersion and interactivity. I refer to the media of video games and desktop computers. Here the user manipulates visually depicted information for a purpose. These media are diametrically opposed to their counterparts on my scale, the cinema and television, because they are windows into a virtual world, actively encouraging (rather than denying) user interactivity to perform their function. Though operating in virtuality, the user remains grounded in The Real due to the constraints of the apparatus.
Now, further technological advancements allow the fusion of real and virtual elements in ways not previously possible. Having traversed our way from The Real to The Virtual, we have now begun to make our way back. We are making a return to Augmented Reality, taking with us the knowledge to manipulate wholly virtual 3D objects and the computing power to integrate digital information into live, real world imagery. AR is deservedly close to The Real on my scale, because it requires physicality to function. This exciting new medium has the potential to change the way we perceive our world, forging a closer integration between our two binary worlds. It is this potential as an exciting and entirely new medium that has driven me to carry out the following work.
To begin, I address the science behind AR and its current applications. Next, I exploit an industry connection to inform a discussion of AR’s development as an entertainment medium. Then, I construct a methodology for analysis from previous academic thought on emergent technologies, whilst addressing the problems of doing so. I use this methodology to locate AR in its wider technologic, academic, social and economic context. This discussion opens ground for a deeper analysis of AR’s potential socio-cultural impact, which makes use of theories of media and communication and spatial enquiry. I conclude with a final critique that holds implications for the further analysis of Mixed Reality technology.