At work, where social features & discovery apps help me find new stuff
On my mobile, where offline playlists provide the backdrop to my travel
And since I no longer play physical CDs, nor use iTunes or any other media player (barring web apps such as SoundCloud, Hype Machine, Mixcloud etc.), Spotify has become the main hub and jumping-off point for whatever type of music I’m after.
Spotify leaves it to its users to build, subscribe to and share playlists, their primary organisational schema, however they see fit. But with millions of tracks and carte blanche to curate a personal library of preferences comes a unique challenge: how should one filter, organise and archive their preferences with access to the world’s biggest music collection?
There is no self-populating iTunes-esque ‘smart playlist’ feature, no editorialised ‘recommended playlists’ feature, and until recently there was no way to search playlists without third-party involvement. Users have to come up with their own organisational approach, and I use my patented Star System™. Here’s how it works:
Play whatever music you want
Star the tracks you particularly love
These self-populate a ‘Starred Tracks’ playlist
Set this playlist to ‘Available Offline’ and they’ll download automatically
Carry on jamming, removing stars from any tracks if they get boring
After a period of time, move all starred tracks into a playlist of their own
Release this playlist to the public to critical acclaim!
Repeat steps 1-7 with a blank slate
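For the terminally curious, the archiving steps above can even be sketched as code. This is purely illustrative: `client` stands in for a hypothetical wrapper around Spotify’s Web API (the method names are my own invention), though the batch sizes reflect the Web API’s real per-call limits for playlist additions and library removals.

```python
# Purely illustrative: `client` is a hypothetical Spotify Web API wrapper;
# the method names below are invented for the sketch.

def batched(ids, size):
    """Split a list of track IDs into API-sized batches."""
    return [ids[i:i + size] for i in range(0, len(ids), size)]

def archive_starred(client, playlist_name):
    ids = client.saved_track_ids()          # steps 2-3: your starred tracks
    playlist_id = client.create_playlist(playlist_name)
    for batch in batched(ids, 100):         # step 6: playlist adds, max 100 per call
        client.add_to_playlist(playlist_id, batch)
    for batch in batched(ids, 50):          # step 8: library removals, max 50 per call
        client.unsave_tracks(batch)         # ...and you have a blank slate
    return playlist_id
```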
So without further ado, here are my Star Mix Playlists for your listening pleasure, along with some tasting notes.
AR is considered by some to be a logical progression of VR technologies (Liarokapis, 2006; Botella, 2005; Reitmayr & Schmalstieg, 2001): a more appropriate way to interact with information in real time, made possible only by recent innovations. One could therefore assume that a full historical appraisal would pertain to VR’s own history plus the last few years of AR developments. Though this method would certainly work for much of Wearable AR, which uses a similar device array, the same cannot be said for Mobile AR, since by its nature it offers a set of properties from a wholly different paradigm: portability, connectivity and many years of mobile development independent of AR research come together in enhancing Mobile AR’s formal capabilities. Despite the obvious mass-market potential of this technology, most AR research continues to explore the Wearable AR paradigm. Where Mobile AR is cousin to VR, Wearable AR is sister. Most published works favour the Wearable AR approach, so if my assessment of Mobile AR is to be fair I cannot ignore its grounding in VR research.
As mentioned above, VR is the realm at the far right of my Mixed Reality Scale. To explore a virtual reality, users must wear a screen array on their heads that cloaks their vision with a wholly virtual world. These head-mounted displays (HMDs) serve to transpose the user into this virtual space whilst cutting them off from their physical environment:
The HMD must be connected to a wearable computer, a Ghostbusters-style device attached to the wearer’s back or waist that houses a CPU and graphics renderer. To interact with virtual objects, users must hold a joypad. Aside from being a lot to carry, this equipment restricts the senses and is often expensive:
It is useful at this point to reference some thinkers in VR research, with the view to better understanding The Virtual realm and its implications for Mobile AR’s Mixed Reality approach. Writing on the different selves offered by various media, Lonsway (2002) states that:
“With the special case of the immersive VR experience, the user is (in actual fact) located in physical space within the apparatus of the technology. The computer-mediated environment suggests (in effect) a trans-location outside of this domain, but only through the construction of a subject centred on the self (I), controlling an abstract position in a graphic database of spatial coordinates. The individual, of which this newly positioned subject is but one component, is participant in a virtuality: a spatio-temporal moment of immersion, virtualised travel, physical fixity, and perhaps, depending on the technologies employed, electro-magnetic frequency exposure, lag-induced nausea, etc.”
Lonsway (2002: 65)
Despite the technology’s flaws, media representations of VR throughout the eighties and early nineties such as Tron (Lisberger, 1982), Lawnmower Man (Leonard, 1992) and Johnny Mnemonic (Longo, 1995) generated plenty of audience interest and consequent industrial investment. VR hardware was produced in bulk for much of the early nineties, but it failed to become a mainstream technology, largely due to a lack of capital investment in VR content, itself a function of the stagnant demand for expensive VR hardware (Mike Dicks of Bomb Productions: personal communication). The market for VR content collapsed, but the field remains an active contributor in certain key areas, with notable success as a commonplace training aid for military pilots (Baumann, date unknown) and as an academic tool for the study of player immersion and virtual identity (Lonsway, 2002).
Most AR development uses the same array of devices as VR: a wearable computer, an input device and an HMD. The HMD is slightly different in these cases: it is transparent and contains an internal half-silvered mirror, which combines images from an LCD display with the user’s view of the world:
There are still many limitations placed on the experience, however: first, the digital graphics must be very bright in order to stand out against natural light; second, they require the use of a cumbersome wearable computer array; third, this array is at a price-point too high for it to reach mainstream use. Much of the hardware used in Wearable AR research is bought wholesale from liquidated VR companies (Dave Mee of Gameware: personal communication), a fact representative of the backward thinking of much AR research.
In their work New Media and the Permanent Crisis of Aura, Bolter et al. (2006) apply Benjamin’s work on the aura to Mixed Reality technologies and attempt to forge a link between VR and the Internet. This passage offers a perspective on the virtuality of the desktop computer and the World Wide Web:
“What we might call the paradigm of mixed reality is now competing successfully with what we might call ‘pure virtuality’ – the earlier paradigm that dominated interface design for decades.
In purely virtual applications, the computer defines the entire informational or perceptual environment for the user … The goal of VR is to immerse the user in a world of computer generated images and (often) computer-controlled sound. Although practical applications for VR are relatively limited, this technology still represents the next (and final?) logical step in the quest for pure virtuality. If VR were perfected and could replace the desktop GUI as the interface to an expanded World Wide Web, the result would be cyberspace.”
Bolter et al. (2006: 22)
This account offers a new platform for discussion useful for the analysis of the Internet as a component in Mobile AR: the idea that the Internet could exploit the spatial capabilities of a Virtual Reality to enhance its message. Bolter posits that this could be the logical end of a supposed “quest for pure virtuality”. I would argue that the reason VR did not succeed is the same reason that there is no “quest” to join: VR technologies lack the real-world applicability that we can easily find in reality-grounded media such as the Internet or mobile telephone.
Presently, most AR research is concerned with live video imagery and its processing, which allows the addition of live-rendered 3D digital images. This new augmented reality is viewable through a suitably equipped device, which incorporates a camera, a screen and a CPU capable of running specially developed software. This software is written by specialist programmers with knowledge of optics, 3D image rendering, screen design and human interfaces. The work is time-consuming and difficult, and since there is little competition in this field, the rare breakthroughs that do occur come as a result of capital investment: something not willingly given to developers of such a nascent technology.
What is exciting about AR research is that once the work is done, its potential is immediately seen, since in essence it is a very simple concept. All that is required from the user is their AR device and a real world target. The target is an object in the real world environment that the software is trained to identify. Typically, these are specially designed black and white cards known as markers:
These assist the recognition software in judging viewing altitude, distance and angle. Upon identification of a marker, the software will project or superimpose a virtual object or graphical overlay above the target, which becomes viewable on the screen of the AR device. As the device moves, the digital object orients in relation to the target in real-time:
The goal of some AR research is to free devices from markers, to teach AR devices to make judgements about spatial movements without fixed reference points. This is the cutting edge of AR research: markerless tracking. Most contemporary research, however, uses either marker-based or GPS information to process an environment.
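To give a flavour of the maths a marker-based tracker performs: the four corners of a detected marker pin down a planar homography, from which the software can derive the camera’s distance and viewing angle. Below is a minimal sketch of the standard Direct Linear Transform using numpy; it is illustrative only, not the code of any particular AR toolkit.

```python
import numpy as np

def homography(src, dst):
    """Fit the 3x3 homography H mapping src -> dst from four 2-D point
    correspondences (e.g. the corners of a detected marker), via the
    standard Direct Linear Transform (DLT)."""
    rows = []
    for (x, y), (u, v) in zip(src, dst):
        rows.append([-x, -y, -1, 0, 0, 0, u * x, u * y, u])
        rows.append([0, 0, 0, -x, -y, -1, v * x, v * y, v])
    # The null space of this 8x9 system (via SVD) gives H up to scale.
    _, _, vt = np.linalg.svd(np.array(rows, dtype=float))
    H = vt[-1].reshape(3, 3)
    return H / H[2, 2]

def project(H, point):
    """Map one 2-D point through the homography (homogeneous divide)."""
    x, y, w = H @ np.array([point[0], point[1], 1.0])
    return x / w, y / w
```

With the homography in hand, a renderer can decompose it (given the camera’s intrinsics) into the rotation and translation used to draw the virtual object in register with the marker.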
Marker-based tracking is suited to local AR on a small scale, such as the Invisible Train Project (Wagner et al., 2005) in which players collaboratively keep virtual trains from colliding on a real world toy train track, making changes using their touch-screen handheld computers:
GPS tracking is best applied to large-scale AR projects such as ARQuake (Thomas et al., 2000), which exploits a scale virtual model of the University of Adelaide and a modified Quake engine to place on-campus players into a ‘first-person shooter’. This application employs a headset, a wearable computer and a digital compass, which together create the effect that enemies walk the corridors and ‘hide’ around corners. Players shoot with a motion-sensing arcade gun, but the overall effect is quite crude:
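The GPS side of such a system is conceptually simple: raw latitude/longitude fixes are projected into the metre-based coordinates of the local virtual model. A toy flat-Earth approximation follows (my own sketch, not ARQuake’s actual code; the approximation is perfectly adequate at campus scale):

```python
import math

EARTH_RADIUS_M = 6_371_000  # mean Earth radius in metres

def gps_to_local(lat, lon, origin_lat, origin_lon):
    """Approximate a GPS fix as (east, north) metres from a chosen origin,
    using an equirectangular (flat-Earth) projection."""
    d_lat = math.radians(lat - origin_lat)
    d_lon = math.radians(lon - origin_lon)
    north = d_lat * EARTH_RADIUS_M
    east = d_lon * EARTH_RADIUS_M * math.cos(math.radians(origin_lat))
    return east, north
```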
More data inputs would make the game run more smoothly and would provide a more immersive player experience. The best applications of AR will exploit multiple data inputs, so that large-scale applications might achieve the precision of marker-based applications whilst remaining location-aware.
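One simple way to combine two such inputs is a weighted blend: trust the precise marker-derived position when a marker is in view, and fall back to coarse GPS otherwise. The weighting scheme below is my own illustration of the idea, not any published system’s method:

```python
def fuse(gps_pos, marker_pos, marker_visible, alpha=0.9):
    """Blend a coarse GPS position estimate with a precise marker-derived one.
    When a marker is in view, weight it heavily; otherwise use GPS alone.
    Positions are (x, y) tuples in a shared local coordinate frame."""
    if not marker_visible:
        return gps_pos
    return tuple(alpha * m + (1 - alpha) * g
                 for m, g in zip(marker_pos, gps_pos))
```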
Readers of this blog will be aware that AR’s flexibility as a platform lends applicability to a huge range of fields:
Current academic work uses AR to treat neurological conditions: AR-enabled projections have successfully cured cockroach phobia in some patients (Botella et al., 2005);
There is a wide range of civic and architectural uses: Roberts et al. (2002) have developed AR software that enables engineers to observe the locations of underground pipes and wires in situ, without the need for schematics;
AR offers a potentially rich resource to the tourism industry: the Virtuoso project (Wagner et al., 2005) is a handheld computer program that guides visitors around an AR enabled gallery, providing additional aural and visual information suited to each artefact;
The first commercial work in the AR space was far more playful, however. AR development in media presentations for television has led to such primetime projects as Time Commanders (Lion TV for BBC2, 2003-2005), in which contestants oversee an AR-enabled battlefield and strategise to defeat the opposing army, and FightBox (Bomb Productions for BBC2, 2003), in which players build avatars to compete in an AR ‘beat-em-up’ filmed in front of a live audience. T-Immersion (2003- ) produce interactive visual installations for theme parks and trade expositions. Other work is much simpler: in one case the BBC commissioned an AR remote-control virtual Dalek for mobile phones, due for free download from BBC Online:
The next entry in this series is a case study in AR development. If you haven’t already done so, please follow me on Twitter or grab an RSS feed to be alerted when my series continues.
Mobile multimedia capabilities are increasing in uptake and potential, but the small form factor we so desire in our handsets is beginning to inhibit a rich user experience.
The typical mobile screen size is 320×240.
If your mobile has a pico-projector, it will be able to project high-res imagery onto any suitable surface, up to 50″ in width.
This unlocks the full immersive power of your mobile web browser, 3D games engine, DivX movie player or video conferencing.
Pico-projectors are already on sale as stand-alone units, though they have yet to be integrated into mobiles, PMPs or laptops.
The first of these hardware mashups will be on sale in the East by the end of this year, but it’ll likely be another 18 months before they reach Western shores.
Aside from the new opportunities for deeper engagement with content and software on the mobile platform, the largest socio-cultural change will occur once people begin to share their mobile experience.
Picture regular consumers using the real world as a medium for virtual interaction.
Location-aware video advertising anyone?