Philips Hue ‘16 Million Moments’

My mate Lucy Tcherniak has just mastered her most recent piece of work, for consumer tech giant Philips and its Wi-Fi-enabled lighting range Hue – remote-control light bulbs that can change the mood of a room via your mobile phone:

The blurb:

Discover just some of the millions of ways to use light with Philips Hue: from helping you relax or concentrate, to reminding you of that perfect sunset, or bringing a bedtime story to life. It can even tell you if it’ll rain later.

Earlier this year, Ars Technica ran a piece on the Hue’s free-to-use API and SDK, which have expanded the usefulness of these genius devices through third-party apps such as IFTTT. The article describes the full spectrum of 16 million colours, indicated below:

Philips Hue Full Spectrum
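That 16 million figure is simply 24-bit colour (256³ = 16,777,216 combinations). To give a flavour of how third-party apps drive the bulbs, here’s a minimal Python sketch that converts an RGB colour into the hue/saturation/brightness values the bridge’s REST API expects – the bridge address and username in the comment are placeholders you’d obtain by registering with your own bridge, not real values:

```python
import colorsys

def rgb_to_hue_state(r, g, b):
    """Convert an 8-bit RGB colour to a Philips Hue light-state payload.

    The bridge API expects hue on a 0-65535 colour wheel and
    saturation/brightness on a 0-254 scale.
    """
    h, s, v = colorsys.rgb_to_hsv(r / 255.0, g / 255.0, b / 255.0)
    return {
        "on": True,
        "hue": int(h * 65535),
        "sat": int(s * 254),
        "bri": int(v * 254),
    }

# 24-bit colour gives the advertised 16 million shades:
assert 256 ** 3 == 16_777_216

# A warm sunset orange for a quiet night on the sofa:
payload = rgb_to_hue_state(255, 140, 0)
# You would then PUT this JSON to
# http://<bridge-ip>/api/<username>/lights/<id>/state
```

Apps like IFTTT are essentially doing this on your behalf: translating a trigger (a weather forecast, a tweet) into one of these little state payloads.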

Now, of the 16 million colours available, Lucy chose to feature just 16 in her film, highlighting a few cool use-cases for the Hue range: adjusting from yellow to white light to improve concentration while studying (or the reverse when settling in for a quiet night on the sofa); sampling the colours of a vase of flowers to suit the room they’ll live in; reminding you to take an umbrella in the morning; or making home media more immersive for the viewer.

I can think of a few more, such as adapting the lights to music streaming from my Sonos, using them as a gradual wake-up alarm in the morning, or flashing blue when I get a Twitter mention during a TV show. Cool system, cool advert. Not sure when it will appear on screen, but I think it might make it onto a few people’s Xmas lists. I’ll certainly be asking for one!

Philips LivingColors Gen 3 Aura Black 70998/30/PU Colour Changing Mood Lamp with Remote Control is £49.99 on Amazon.

Learn Piano through AR

I like this:

The Projected Instrument Augmentation system (PIANO) was developed by pianists Katja Rogers and Amrei Röhlig and their colleagues at the University of Ulm in Germany. A screen attached to an electric piano has colourful blocks projected onto it that represent the notes. As the blocks of colour stream down the screen they meet the correct keyboard key at the exact moment that each one should be played.

Florian Schaub, who presented the system last month at the UbiComp conference in Zurich, Switzerland, said that users were impressed by how quickly they could play relatively well, which is hardly surprising given how easily we adapt to most screen interfaces these days.

But while there is real potential for PIANO as a self-guided teaching aid, in my view it’s the potential for a really tight feedback loop that makes this most interesting, and potentially more widely applicable.

When a piano teacher corrects a student’s mistake, they will perhaps specify one or two things that need improving; this approach, by contrast, would sense each incorrect note and could provide an immediate visual response – flashing red, for instance – conditioning the student towards success more quickly.
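That tight feedback loop is easy to picture in code. The sketch below is purely my own toy illustration (not the PIANO team’s implementation): compare each key press against the expected note and colour the response immediately, rather than waiting for the end of the piece:

```python
def feedback(expected_note, played_note):
    """Return an immediate visual cue for a single key press.

    Green reinforces a correct note; red flags a mistake the
    moment it happens.
    """
    return "green" if played_note == expected_note else "red"

def score_passage(expected, played):
    """Score a passage note by note (MIDI numbers: 60 = middle C)."""
    cues = [feedback(e, p) for e, p in zip(expected, played)]
    accuracy = cues.count("green") / len(cues)
    return cues, accuracy

cues, accuracy = score_passage([60, 62, 64, 65], [60, 62, 63, 65])
# One wrong note out of four -> 75% accuracy, with the error
# flagged at the exact moment it was played.
```

A human teacher gives richer feedback, of course, but they can’t give it on every single note at performance speed – which is exactly where a system like this could shine.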

via New Scientist.

Programmed To Love

Two robots, Vincent & Emily, are connected to each other as if deeply in love: as at the heights of romance, every motion, utterance, or external influence is met with an acutely empathic, highly attuned ‘emotional’ response:

The creation of German artists Nikolas Schmid-Pfähler and Carolin Liebl, the robots take in sound and motion data – from each other and from spectators – via sensors, causing them to react – via gears and motors – with certain expressions. Shown in a gallery and open to the interaction of visitors, the project aims to explore the ideal of the human couple by distilling it into a more basic form. Simple lines represent bodies. Reacting to inputs replaces complicated decision-making.

Like in any relationship, miscommunication is a factor – so an intimate moment can lead to conflict, and eventual resolution. This gives a certain texture to their ‘dance of love’ that makes it hard not to anthropomorphise, or indeed relate to!
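Purely as a thought experiment (nothing to do with the artists’ actual electronics), “reacting to inputs replaces complicated decision-making” can be as simple as a lookup from stimulus to gesture – with unrecognised input standing in for the miscommunication that sparks a ‘conflict’:

```python
# A toy reactive agent: no planning, no memory -- just a direct
# mapping from the current stimulus to an 'expression'.
RESPONSES = {
    "soft_sound": "lean_in",
    "loud_sound": "recoil",
    "motion_toward": "mirror",
    "silence": "droop",
}

def react(stimulus):
    # Unrecognised input is the 'miscommunication' case: the robot
    # freezes, which its partner might well read as conflict.
    return RESPONSES.get(stimulus, "freeze")
```

The anthropomorphism lives entirely in the observer; the machine itself is doing nothing cleverer than a dictionary lookup.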

Take a look:

Via Co.Exist.


[box]This post originally appeared on the planning blog.[/box]

What does the word ‘digital’ mean to you?

dig·it·al /ˈdɪdʒɪtl/ adj

  1. of or pertaining to a digit or finger
  2. resembling a digit or finger
  3. manipulated with a finger or the fingertips: a digital switch
  4. displaying a readout in digital form: a digital speedometer
  5. having digits or digitlike parts
  6. of, pertaining to, or using data in the form of numerical digits
  7. Computers. involving or using numerical digits expressed in a scale of notation to represent discretely all variables occurring in a problem
  8. of, pertaining to, or using numerical calculations
  9. available in electronic form; readable and manipulable by computer

So digital means either ‘relating to fingers’ or ‘relating to computers’, right?

My argument: fingers are, by definition, the most digital part of our body. We touch, type, gesture and manipulate our environment (real or virtual) through the interfaces that surround us: a shiny black screen, a keyboard, or even through thin air.

And that’s what digital means to me: the ability to effect a change in the world through the lightest of touches – powered by technology, thought, and action. More on these themes later in the week, but for now I’ll leave you with an illustration – unleash your fingers:

Aurasma vs. Blippar

I’ve written about Augmented Reality extensively in the past, but since the days of immersing myself in the purely theoretical potential of the medium, a few key players have rooted themselves in a very commercial reality that is now powering the fledgling industry.

And while B2B-focused vendors such as ViewAR remain behind the scenes, the likes of Aurasma and Blippar have shot to prominence thanks to some quite excellent packaging and an impressive sales proposition. They are the standard bearers, at least in the eyes of the public.

I like Aurasma. But I also like Blippar. So which is better? Well, let’s find out… Here are some provocations I’ve been toying with. See if they help you decide, and let me know which side you fall on in the comments.

[twocol_one][dropcap]A[/dropcap]urasma has more technological power behind it. They have (supposedly) incorporated academic research into their proprietary tech and have a heritage in pattern-recognition systems – remember their core business, though: integrating with business-critical processes and then slowly ramping up prices. They do this across all other Autonomy products! Also consider that they are an HP property, whose business is hardware, not software. I believe Aurasma are only using this period of their lifespan to learn what does and doesn’t work, get better at it, gain status, equip users to enjoy AR, and then develop a mobile chipset (literally, hardware optimised for AR) that can be embedded in mobile devices, making HP buckets of royalties. They are chasing install base, but not because they want advertising bucks: they want to whitelabel their tech (e.g. Tesco, Heat & GQ) and then disappear into the background.[/twocol_one]

[twocol_one_last][dropcap]B[/dropcap]lippar have a proprietary AR engine, but are listed as using Qualcomm’s Vuforia engine – which is free to use. They seem focused on innovations in the augmented layer. Reading their interviews, they speak of AR not as a tech, platform or medium, but as a kind of magic campaign juice: language that reveals they are extremely focused on delivering a good consumer experience paid for by advertisers, with themselves as the connective tissue. To this end, they too are chasing install base, but ultimately with a different goal in mind. Being Qualcomm-backed, their future is in flexing their creative muscles and helping make AR a mass-market medium by normalising the behaviour. Big rivals: Aurasma in the short term, but I imagine that one day Aurasma will revert to being a tech platform, and companies like Blippar will provide the surface experience: where good content, not tech, will be what sells.[/twocol_one_last]

So what do you reckon – A or B?

Rock, Paper, Cyborgs

This robot hand makes a mockery of its human opponent in a game of rock, paper, scissors with a 100% win rate – one more step towards the total obsolescence of the human mind:

Rather than operate within the parameters of its programming and AI, the robot uses available sensory info and its rapid processing power to effectively ‘cheat’, making AI redundant in this instance:

human-machine cooperation system

The effect (at human speed, anyway) is that it wins without a scrap of intelligence, so what does that say about us?!
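The ‘cheat’ really is that dumb once the hand shape has been recognised: no strategy, no learning, just a lookup of the winning counter-move. A sketch of the idea (my own illustration, not the researchers’ code):

```python
# Once vision has classified the human's gesture, winning is a
# one-line lookup -- no game theory, no 'AI' required.
BEATS = {"rock": "paper", "paper": "scissors", "scissors": "rock"}

def winning_move(detected_gesture):
    """Return the move that beats the gesture the camera just saw."""
    return BEATS[detected_gesture]
```

The robot only appears clever because it senses and reacts faster than a human can perceive the delay – speed masquerading as intelligence.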

Matthias Müller’s Particle Art

There’s this guy called Matthias Müller, and he makes beautiful abstractions out of virtual dust on his supercomputer. He’s some kind of motion-art superhero, probably sent to us from the exploding Planet 3DS Max by his scientist parents.

In this post I’ve picked out a few examples of his work, because as well as being simply gorgeous viewing material, they’re great examples of what’s possible with a few gigs of RAM, a graphics card and some imagination.

Probably my favourite due to its relative simplicity, this tech demo plays with texture in surprising ways:

This next one is so epic! Like an underwater fireworks show of electric choreographed jellyfish, or something…

Watch as millions of particles merge and blend with infinite complexity in this piece of seemingly generative fluidity:

This final clip is almost a love story. Watch as two swirling masses collide, explode and dance in time with the music:

An undoubtedly talented guy, Matthias has done commercial work for Honda and Vodafone (as featured last year). His YouTube channel is certainly worth a look, as are his lovely image renders on CGPortfolio.

I can barely get the most out of MSPaint, however…