Bibliography

So that’s it, my series is over. All that’s left to do now is credit the academic sources that influenced and aided in the construction of my argument. Thanks to everyone below, and thanks to you, dear reader, for coming along for the ride.

References:

Baudrillard, Jean (1983). Simulations. New York: Semiotext(e).

Baudrillard, Jean (1988). Selected Writings, ed. Mark Poster. Cambridge: Polity Press.

Baumann, Jim (date unknown). ‘Military applications of virtual reality’ on the World Wide Web. Accessed 20th March 2007. Available at http://www.hitl.washington.edu/scivw/EVE/II.G.Military.html

Benjamin, Walter (1968). ‘The Work of Art in the Age of Mechanical Reproduction’, in Walter Benjamin Illuminations (trans. Harry Zohn), pp. 217–51. New York: Schocken Books.

Bolter, J. D., MacIntyre, B., Gandy, M. & Schweitzer, P. (2006). ‘New Media and the Permanent Crisis of Aura’ in Convergence: The International Journal of Research into New Media Technologies, Vol. 12 (1): 21-39.

Botella, C., Juan, M. C., Baños, R. M., Alcañiz, M., Guillén, V. & Rey, B. (2005). ‘Mixing Realities? An Application of Augmented Reality for the Treatment of Cockroach Phobia’ in CyberPsychology & Behavior, Vol. 8 (2): 162-171.

Clark, Nigel (1995). ‘The Recursive Generation of the Cyberbody’ in Featherstone, M. & Burrows, R. (eds), Cyberspace/Cyberbodies/Cyberpunk: Cultures of Technological Embodiment. London: Sage.

Featherstone, Mike. & Burrows, Roger eds. (1995). Cyberspace/ Cyberbodies/ Cyberpunk: Cultures of Technological Embodiment. London: Sage.

Future Image (author unknown) (2006). ‘The 6Sight® Mobile Imaging Report’ on the World Wide Web. Accessed 22nd March 2007. Available at http://www.wirelessimaging.info/

Genosko, Gary (1999). McLuhan and Baudrillard: The Masters of Implosion. London: Routledge.

Kline, Stephen, Dyer-Witheford, Nick & De Peuter, Greig (2003). Digital Play: The Interaction of Technology, Culture, and Marketing. Montreal & Kingston: McGill-Queen’s University Press.

Levinson, Paul (1999). Digital McLuhan: a guide to the information millennium. London: Routledge.

Liarokapis, Fotis (2006). ‘An Exploration from Virtual to Augmented Reality Gaming’ in Simulation Gaming, Vol. 37 (4): 507-533.

Manovich, Lev (2006). ‘The Poetics of Augmented Space’ in Visual Communication, Vol. 5 (2): 219-240.

McLuhan, Marshall (1962). The Gutenberg galaxy: The Making of Typographic Man. Toronto, Canada: University of Toronto Press.

McLuhan, Marshall (1964). Understanding Media: The Extensions of Man. New York: McGraw-Hill.

McLuhan, Marshall & Powers, Bruce R. (1989). The Global Village: Transformations in World Life and Media in the 21st Century. New York: Oxford University Press.

Milgram, Paul & Kishino, Fumio (1994). ‘A Taxonomy of Mixed Reality Visual Displays’ in IEICE Transactions on Information Systems, Vol. E77-D, No.12 December 1994.

Reitmayr, Gerhard & Schmalstieg, Dieter (2001). Mobile Collaborative Augmented Reality. Proceedings of the IEEE 2001 International Symposium on Augmented Reality, 114–123.

Roberts, G., A. Evans, A. Dodson, B. Denby, S. Cooper, R. Hollands (2002) ‘Application Challenge: Look Beneath the Surface with Augmented Reality’ in GPS World, (UK, Feb. 2002): 14-20.

Stokes, Jon (2003). ‘Understanding Moore’s Law’ on the World Wide Web. Accessed 21st March 2007. Available at http://arstechnica.com/articles/paedia/cpu/moore.ars

Straubhaar, Joseph D. & LaRose, Robert (2005). Media Now: Understanding Media, Culture, and Technology. Belmont, CA: Wadsworth.

Thomas, B., Close, B., Donoghue, J., Squires, J., De Bondi, P., Morris, M. & Piekarski, W. (2000). ‘ARQuake: An outdoor/indoor augmented reality first-person application’ in Proceedings of the Fourth International Symposium on Wearable Computers (Atlanta, GA, Oct. 2000), 139-141.

Wagner, D., Pintaric, T., Ledermann, F., & Schmalstieg, D. (2005). ‘Towards massively multi-user augmented reality on handheld devices’. In Proc. 3rd Int’l Conference on Pervasive Computing, Munich, Germany.

Weiser, M. (1991) ‘The Computer for the Twenty-First Century’ in Scientific American 265(3), September: 94–104.

Williams, Raymond (1992). Television: Technology and Cultural Form. Hanover and London: University Press of New England and Wesleyan University Press.

Further Reading:

Bolter, Jay D. & Grusin, Richard (1999). Remediation: Understanding New Media. Cambridge, MA: MIT Press.

Cavell, Richard (2002). McLuhan in Space: a Cultural Geography. Toronto: University of Toronto Press.

Galloway, Alexander R. (2006). Gaming: Essays on Algorithmic Culture. Minneapolis: University of Minnesota Press.

Horrocks, Christopher (2000). Marshall McLuhan & Virtuality. Cambridge: Icon Books.

Jennings, Pamela (2001). ‘The Poetics of Engagement’ in Convergence: The International Journal of Research into New Media Technologies, Vol. 7 (2): 103-111.

Lauria, Rita (2001). ‘In Love with our Technology: Virtual Reality: A Brief Intellectual History of the Idea of Virtuality and the Emergence of a Media Environment’ in Convergence: The International Journal of Research into New Media Technologies, Vol. 7 (4): 30-51.

Lonsway, Brian (2002). ‘Testing the Space of the Virtual’ in Convergence: The International Journal of Research into New Media Technologies, Vol. 8 (3): 61-77.

Moos, Michel A. ed. (1997). Marshall McLuhan Essays: Media Research: Technology, Art, Communication. London: Overseas Publishers Association.

Pacey, Arnold (1983). The Culture of Technology. Oxford: Basil Blackwell.

Salen, Katie & Zimmerman, Eric. (2004) Rules of Play: Game Design Fundamentals. Cambridge, MA: MIT.

Sassower, Raphael (1995). Cultural Collisions: Postmodern Technoscience. London: Routledge.

Wood, John ed. (1998). The Virtual Embodied: Presence/Practice/Technology. London: Routledge.

Applying McLuhan

I begin with McLuhan, whose Laws of Media, or tetrad, offer great insight into Mobile AR, sustaining and developing the arguments made in my assessment of the interlinking technologies that meet in Mobile AR, whilst also providing a basis from which to address some of McLuhan’s deeper thoughts.

The tetrad can be considered an observational lens to turn upon one’s subject technology. It assumes that four processes take place during each iteration of a given medium, and these processes are revealed as answers to the following four questions, taken from Levinson (1999):

“What aspect of society or human life does it enhance or amplify? What aspect, in favour or high prominence before the arrival of the medium in question, does it eclipse or obsolesce? What does the medium retrieve or pull back into centre stage from the shadows of obsolescence? And what does the medium reverse or flip into when it has run its course or been developed to its fullest potential?”

(Digital McLuhan, 1999: 189).

To ask each of these, it is useful to transfigure our concept of Mobile AR into a more workable and fluid term: the Magic Lens, a common expression in mixed reality research. This change allows exploration of the more theoretical aspects of the technology, free of its machinic nature, whilst integrating a necessary element of metaphor that will serve to illustrate my points.
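The four tetradic questions lend themselves to a simple data structure. The sketch below is purely illustrative (the class and field names are my own, not Levinson’s), filled in with the answers this section goes on to argue for:

```python
from dataclasses import dataclass

@dataclass
class Tetrad:
    """The four tetradic questions applied to a single medium."""
    medium: str
    enhances: str       # what does the medium amplify?
    obsolesces: str     # what does it eclipse?
    retrieves: str      # what does it pull back from obsolescence?
    reverses_into: str  # what does it flip into at its fullest extent?

# One reading of the tetrad for the Magic Lens, as argued in this section.
magic_lens = Tetrad(
    medium="Magic Lens (Mobile AR)",
    enhances="one privileged meaning of a real-world target",
    obsolesces="everything in physical space except the target",
    retrieves="tactile, physical engagement with information",
    reverses_into="whatever the mobile multimedia computer flips into",
)
```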

To begin, what does the Magic Lens amplify? AR requires the recognition of a pre-programmed real-world image in order to augment the environment correctly, and it is important to note that it is the user who locates this target. It could be said that the Magic Lens magnifies rather than amplifies an aspect of the user’s environment: as with other optical tools, the user must point the device towards a subject and look through it. The difference with the Magic Lens is that one aspect of its target, one potential meaning, is privileged over all others. An arbitrary black and white marker holds the potential to mean many things to many people, but viewed through an amplifying Magic Lens it means only what the program recognises and consequently superimposes.

This superimposition necessarily obscures what lies beneath. McLuhan might recognise this as an example of obsolescence. The Magic Lens privileges virtual over real imagery, and the act of augmentation leaves physical space somewhat redundant: augmenting one’s space makes it more virtual than real. The AR target undergoes amplification, becoming the necessary foundation of the augmented reality. What is obsolesced by the Magic Lens, then, is not the target which it obscures, but everything except the target.

I am reminded of McLuhan’s Understanding Media: The Extensions of Man (1964), which offers the view that in extending ourselves through our tools, we auto-amputate the aspect we seek to extend. There is a striking parallel to be drawn with amplification and obsolescence, which becomes clear when we consider that in amplifying an aspect of physical reality through a tool, we extend sight, sound and voice through the Magic Lens to communicate in wholly new ways, using The Virtual as a conduit. This act obsolesces physical reality, and the nullification effectively auto-amputates the user from their footing in The Real. So where have they ‘travelled’? The Magic Lens is a window into another reality: a mixed reality where real and virtual share space. In this age of Mixed Realities, the tetrad can reveal more than previously intended: new dimensions of human interaction.

The third question in the tetrad asks what the Magic Lens retrieves that was once lost. So much new ground is gained by this technology that it would be difficult to make a claim. However, I would not believe in Mobile AR’s prospects if I did not recognise the exhumed benefits it offers, as well as the novel ones. The Magic Lens retrieves the everyday tactility and physicality of information engagement, which was obsolesced by other screen media such as television, the desktop PC and the games console. The Magic Lens encourages users to interact in physicality, not virtuality: the act of actually walking somewhere to find something out, or of going to see someone in order to play with them, is retrieved. Moreover, we retrieve the sense of control over our media input that was lost to these same technologies. Information is freed into the physical world, transfiguring its meaning and offering a greater degree of manipulative power. Mixed Reality can be seen only through the one-way glass of the Magic Lens; The Virtual cannot spill through unless we allow it to. We have seen that certain mainstream media can fold themselves wholly into reality and become an annoyance (think of Internet pop-ups and mobile ringtones); through the Magic Lens we retrieve the personal agency to navigate our own experience. I earlier noted that “the closer we can bring artefacts from The Virtual to The Real, the more applicable these can be in our everyday lives”, a position that resonates with my growing argument that engaging with digital information through the Magic Lens is an appropriate way to integrate, and indeed exploit, The Virtual as a platform for communication, leisure and information applications.

It is hard to anticipate what the Magic Lens might flip into, since at this point AR is a wave that has not yet crested. I might suggest that since the medium is tied to the success of its mobile device form, its trajectory is likely entwined with that medium: the Magic Lens flips into whatever the mobile multimedia computer flips into. Another possibility is that the Magic Lens inspires such commercial success and industrial investment that a surge in demand for wearable computers shifts AR into a new form. In that case the user could not dip in and out of Mixed Reality as they saw fit; they would be immersed in it whenever they wore their visor. This has connotations all of its own, but I will not expound my views here, given that much cultural change must first occur to implement such a drastic shift in consumer fashions and demands. A third way for the Magic Lens to ‘flip’ might be its wider application in other media. Developments in digital ink technologies, printable folding screens, ‘cloud’ computing, interactive projector displays, multi-input touch screens, automotive glass and electronic product packaging could all take advantage of the AR treatment. We could end up living far more closely with The Virtual than previously possible.

In their work The Global Village, McLuhan and Powers (1989) state that:

“The tetrad performs the function of myth in that it compresses past, present, and future into one through the power of simultaneity. The tetrad illuminates the borderline between acoustic and visual space as an arena of the spiralling repetition and replay, both of input and feedback, interlace and interface in the area of imploded circle of rebirth and metamorphosis”

(The Global Village 1989: 9)

I would be interested to hear their view on the unique “simultaneity” offered by the Magic Lens, or indeed the “metamorphosis” it might inspire, but I would argue that, when applied from a Mixed Reality inter-media perspective, their outlook seems constrained by the stringent and self-involved rules of their own epistemology. Though he would have been loath to admit it, Baudrillard took McLuhan’s work as the basis of his own (Genosko, 1999; Kellner, date unknown) and made it relevant to the postmodern era. His work is cited by many academics seeking to forge a relationship to Virtual Reality in their research…

Summary So Far

In summary, Mobile AR has many paths leading to it, and it is this convergence of paths that makes a true historical appraisal of the technology difficult to achieve. However, I have highlighted facets of its contributing technologies that assist in the developing picture of the implications Mobile AR has in store. A hybridisation of several different technologies, Mobile AR embodies the most gainful properties of its three core technologies. I see Mobile AR as a logical progression from VR, but recognise that its founding is ideological rather than technological. The hardware basis of Mobile AR stems from current mobile telephony trends that exploit the growing capabilities of smartphone devices. The VR philosophy and the mobile technology are fused through the Internet, which enables context-based, live-updating content, houses databases of developer-built and user-generated digital objects and elements, and connects users across the world.

I have shown that where interest in VR technologies dwindled due to their limited real-world applicability, the Mobile Internet also lacks in comparison to Mobile AR, with its massive scope for intuitive, immersive and realistic interpretations of digital information. Wearable AR computing shares VR’s weaknesses, despite keeping the user firmly grounded in physical reality. Mobile AR offers a solution that places the power of these complex systems into a mobile telephone: the ubiquitous technology of our generation. This new platform solves several problems at once, most importantly, for AR developers and interested blue-chip parties, market readiness. Developing for Mobile AR is simply the commercially sensible thing to do, since the related industries are already making the changes required for its mass distribution.

Like most nascent technologies, AR’s success depends on commercial viability and financial investment, so most sensible commercial developers of AR technologies are working on projects for the entertainment and advertising industries, where their efforts can be rewarded quickly. These small-scale projects are often simple in concept: easily grasped and thus not easily forgotten. I claim here that the first Mobile AR releases will generate early interest in the technology and entertainment markets, with press reportage and word-of-mouth behaviour assisting Mobile AR’s uptake. I must be careful with my claims here, however, since there is no empirical evidence to suggest that this will occur for Mobile AR. Looking at the emergence of previous technologies, though, the Internet and mobile telephony grew rapidly and to massive commercial success thanks to strong business models and advancements in their own supporting technologies. Developers like Gameware and T-Immersion strongly hope that Mobile AR can enjoy this same rapid lift-off. Both technologies gained prominence once visible in the markets thanks to a market segment known as early adopters. This important group gathers its information from specialist magazine sources and word of mouth. Mobile AR developers would do well to recognise the power of this group, perhaps by offering shareware versions of their AR software that encourage a form of viral transmission exploiting text messaging.

Gameware has an interesting technique for the dissemination of its HARVEE software. The company shares a business interest with a Bluetooth technology firm, which has donated a prototype product, the Bluetooth Push Box, which scans for local mobile devices and automatically sends files to users who accept them. Gameware’s Push Box sends the latest demo to all visitors to its Cambridge office. The same technology could be placed in public places or commercial spaces to offer localised AR advertising, interactive tourist information, or perhaps 3D restaurant menus.
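The Push Box behaviour reduces to a simple scan-and-send loop. The sketch below is a simulation only: the scanner and sender are stand-ins for a real Bluetooth stack (device discovery plus object push), and every name here is hypothetical rather than taken from Gameware’s hardware.

```python
def push_loop(scan, send, payload="demo_file", rounds=3):
    """Repeatedly scan for nearby devices and push the payload to each
    device once, remembering which devices have already received it."""
    already_sent = set()
    for _ in range(rounds):
        for device in scan():
            if device not in already_sent:
                send(device, payload)
                already_sent.add(device)
    return already_sent
```

In use, `scan` would wrap Bluetooth device discovery and `send` a file-push call; the deduplication set is what stops a visitor’s phone being spammed on every scan cycle.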

Gameware, through its Nokia projects and HARVEE development programme, is well placed to gain exposure on the back of a market set to explode as mobile offerings become commercially viable, ‘social’, powerful, multipurpose and newsworthy. Projects like HARVEE are especially interesting in terms of their wide applicability and mass-market appeal. It is Mobile AR’s potential as a revolutionary new medium that inspires this very series.

Virtual Reality

AR is considered by some to be a logical progression of VR technologies (Liarokapis, 2006; Botella et al., 2005; Reitmayr & Schmalstieg, 2001): a more appropriate way to interact with information in real-time, granted only by recent innovations. Thus, one could consider that a full historical appraisal would pertain to VR’s own history, plus the last few years of AR developments. Though this method would certainly work for much of Wearable AR (which uses a similar device array), the same could not be said for Mobile AR, since by its nature it offers a set of properties from a wholly different paradigm: portability, connectivity and many years of mobile development exclusive of AR research come together to enhance Mobile AR’s formal capabilities. Despite the obvious mass-market potential of this technology, most AR research continues to explore the Wearable AR paradigm. Where Mobile AR is cousin to VR, Wearable AR is sister. Most published works favour the Wearable AR approach, so if my assessment of Mobile AR is to be fair I cannot ignore its grounding in VR research.

As mentioned earlier, VR is the realm at the far right of my Mixed Reality Scale. To explore a Virtual Reality, users must wear a screen array that cloaks their vision in a wholly virtual world. These head-mounted displays (HMDs) serve to transpose the user into this virtual space whilst cutting them off from their physical environment:

A Virtual Reality HMD, two LCD screens occupy the wearer's field of vision

The HMD must be connected to a wearable computer: a Ghostbusters-style device attached to the wearer’s back or waist that holds a CPU and graphics renderer. To interact with virtual objects, users must hold a joypad. Aside from being a lot to carry, this equipment is restrictive on the senses and often expensive:

A Wearable Computer array, this particular array uses a CPU, GPS, HMD, graphics renderer, and human-interface-device

It is useful at this point to reference some thinkers in VR research, with a view to better understanding The Virtual realm and its implications for Mobile AR’s Mixed Reality approach. Writing on the different selves offered by various media, Lonsway (2002) states that:

“With the special case of the immersive VR experience, the user is (in actual fact) located in physical space within the apparatus of the technology. The computer-mediated environment suggests (in effect) a trans-location outside of this domain, but only through the construction of a subject centred on the self (I), controlling an abstract position in a graphic database of spatial coordinates. The individual, of which this newly positioned subject is but one component, is participant in a virtuality: a spatio-temporal moment of immersion, virtualised travel, physical fixity, and perhaps, depending on the technologies employed, electro-magnetic frequency exposure, lag-induced nausea, etc.”

Lonsway (2002: 65)

Despite the technology’s flaws, media representations of VR throughout the eighties and early nineties, such as Tron (Lisberger, 1982), Lawnmower Man (Leonard, 1992) and Johnny Mnemonic (Longo, 1995), generated plenty of audience interest and consequent industrial investment. VR hardware was produced in bulk for much of the early nineties, but it failed to become a mainstream technology, largely due to a lack of capital investment in VR content, itself a function of stagnant demand for expensive VR hardware (Mike Dicks of Bomb Productions: personal communication). The market for VR content collapsed, but the field remains an active contributor in certain key areas, with notable success as a commonplace training aid for military pilots (Baumann, date unknown) and as an academic tool for the study of player immersion and virtual identity (Lonsway, 2002).

Most AR development uses the same array of devices as VR: a wearable computer, an input device and an HMD. The HMD is slightly different in these cases: it is transparent, containing an internal half-silvered mirror which combines images from an LCD display with the user’s view of the world:

An AR HMD, this model has a half-mirrored screen at 45 degrees. Above are two LCDs that reflect into the wearer's eyes whilst they can see what lies in front of them


What Wearable AR looks like, notice the very bright figure ahead. If he was darker he would not be visible

There are still many limitations placed on the experience, however: first, the digital graphics must be very bright in order to stand out against natural light; second, they require the use of a cumbersome wearable computer array; third, this array is at a price-point too high to reach mainstream use. Much of the hardware used in Wearable AR research is bought wholesale from liquidated VR companies (Dave Mee of Gameware: personal communication), a fact representative of the backward thinking of much AR research.

In their work ‘New Media and the Permanent Crisis of Aura’, Bolter et al. (2006) apply Benjamin’s work on the aura to Mixed Reality technologies and attempt to forge a link between VR and the Internet. This passage offers a perspective on the virtuality of the desktop computer and the World Wide Web:

“What we might call the paradigm of mixed reality is now competing successfully with what we might call ‘pure virtuality’ – the earlier paradigm that dominated interface design for decades.
In purely virtual applications, the computer defines the entire informational or perceptual environment for the user … The goal of VR is to immerse the user in a world of computer generated images and (often) computer-controlled sound. Although practical applications for VR are relatively limited, this technology still represents the next (and final?) logical step in the quest for pure virtuality. If VR were perfected and could replace the desktop GUI as the interface to an expanded World Wide Web, the result would be cyberspace.”

Bolter et al. (2006: 22)

This account offers a new platform for discussion, useful for analysing the Internet as a component of Mobile AR: the idea that the Internet could exploit the spatial capabilities of a Virtual Reality to enhance its message. Bolter et al. posit that this could be the logical end of a supposed “quest for pure virtuality”. I would argue that the reason VR did not succeed is the same reason there is no “quest” to join: VR technologies lack the real-world applicability that we easily find in reality-grounded media such as the Internet or the mobile telephone.

What is AR and What is it Capable Of?

Presently, most AR research is concerned with live video imagery and its processing, which allows the addition of live-rendered 3D digital images. This augmented reality is viewable through a suitably equipped device incorporating a camera, a screen and a CPU capable of running specially developed software. This software is written by specialist programmers with knowledge of optics, 3D image rendering, screen design and human interfaces. The work is time-consuming and difficult, and since there is little competition in this field, the rare breakthroughs that do occur come as a result of capital investment: something not willingly given to developers of such a nascent technology.

What is exciting about AR research is that once the work is done, its potential is immediately apparent, since in essence it is a very simple concept. All that is required of the user is an AR device and a real-world target. The target is an object in the real-world environment that the software is trained to identify. Typically, these are specially designed black and white cards known as markers:

An AR marker, this one relates to a 3D model of Doctor Who's Tardis in Gameware's HARVEE kit

These assist the recognition software in judging viewing altitude, distance and angle. Upon identifying a marker, the software superimposes a virtual object or graphical overlay above the target, which becomes viewable on the screen of the AR device. As the device moves, the digital object re-orients in relation to the target in real-time:

Augmented Reality in action, multiple markers in use on the HARVEE system on a Nokia N73
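The per-frame placement step described above can be sketched in Python. This is a toy approximation of pose estimation, not the HARVEE implementation: the corner points stand in for the output of a real marker detector, and all names are my own.

```python
import math

def place_overlay(corners, marker_size_mm=80.0):
    """Estimate where to draw a virtual object from a marker's four
    detected corner points (image coordinates, clockwise from top-left).

    The marker's apparent size stands in for distance (bigger = closer),
    its centroid gives screen position, and its top edge gives rotation.
    Real pose estimation recovers a full 3D transform; this is a sketch.
    """
    (x0, y0), (x1, y1), (x2, y2), (x3, y3) = corners
    centre = ((x0 + x1 + x2 + x3) / 4.0, (y0 + y1 + y2 + y3) / 4.0)
    top_edge_px = math.hypot(x1 - x0, y1 - y0)
    return {
        "centre": centre,                        # where to draw the object
        "scale": top_edge_px / marker_size_mm,   # pixels per millimetre
        "rotation_deg": math.degrees(math.atan2(y1 - y0, x1 - x0)),
    }

# Re-run every video frame so the overlay tracks the marker in real-time.
pose = place_overlay([(100, 100), (180, 100), (180, 180), (100, 180)])
```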

The goal of some AR research is to free devices from markers: to teach AR devices to make judgements about spatial movement without fixed reference points. This is the cutting edge of AR research: markerless tracking. Most contemporary research, however, uses either marker-based or GPS information to process an environment.

Marker-based tracking is suited to local AR on a small scale, such as the Invisible Train Project (Wagner et al., 2005), in which players collaboratively keep virtual trains from colliding on a real-world toy train track, making changes using their touch-screen handheld computers:

The Invisible Train Project (Wagner et al., 2005)

GPS tracking is best applied to large-scale AR projects, such as ARQuake (Thomas et al., 2000), which exploits a scale virtual model of the University of Adelaide and a modified Quake engine to place on-campus players inside a ‘first-person shooter’. The application employs a headset, a wearable computer and a digital compass, offering the effect that enemies appear to walk the corridors and ‘hide’ around corners. Players shoot with a motion-sensing arcade gun, but the overall effect is quite crude:

ARQuake (Thomas et al., 2000)

More data input would make the game run more smoothly and would provide a more immersive player experience. The best applications of AR will exploit multiple data inputs, so that large-scale applications might have the precision of marker-based applications whilst remaining location-aware.
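The point about multiple data inputs can be illustrated with a crude complementary filter: trust a precise marker-based fix heavily when a marker is in view, and fall back to coarse GPS otherwise. This is my own toy sketch, not any shipped system (real systems would fuse many sensors, typically with a Kalman filter), and all names are hypothetical.

```python
def fuse_position(gps_xy, marker_xy, marker_visible, alpha=0.9):
    """Blend a coarse GPS fix with a precise marker-based fix.

    alpha is the weight given to the marker estimate when a marker
    is visible; with no marker in view, GPS is all we have.
    """
    if not marker_visible or marker_xy is None:
        return gps_xy
    (gx, gy), (mx, my) = gps_xy, marker_xy
    return (alpha * mx + (1 - alpha) * gx,
            alpha * my + (1 - alpha) * gy)
```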

Readers of this blog will be aware that AR’s flexibility as a platform lends applicability to a huge range of fields:

  • Current academic work uses AR to treat neurological conditions: AR-enabled projections have successfully cured cockroach phobia in some patients (Botella et al., 2005);
  • There is a wide range of civic and architectural uses: Roberts et al. (2002) have developed AR software that enables engineers to observe the locations of underground pipes and wires in situ, without the need for schematics;
  • AR offers a potentially rich resource to the tourism industry: the Virtuoso project (Wagner et al., 2005) is a handheld computer program that guides visitors around an AR-enabled gallery, providing additional aural and visual information suited to each artefact.

The first commercial work in the AR space was far more playful, however. AR development in media presentations for television has led to primetime projects such as Time Commanders (Lion TV for BBC2, 2003-2005), in which contestants oversee an AR-enabled battlefield and strategise to defeat the opposing army, and FightBox (Bomb Productions for BBC2, 2003), in which players build avatars to compete in an AR ‘beat-em-up’ filmed in front of a live audience. T-Immersion (2003- ) produces interactive visual installations for theme parks and trade expositions. Other work is much simpler: in one case, the BBC commissioned an AR remote-control virtual Dalek for mobile phones, due for free download from BBC Online:

A Dalek, screenshot taken from HARVEE's development platform (work in progress)

The next entry in this series is a case study in AR development. If you haven’t already done so, please follow me on Twitter or grab an RSS feed to be alerted when my series continues.