Category Archives: Kinect

Alstom Island : Kinect Application with Large Blue

In October 2012, Large Blue, an agency in Covent Garden, London, commissioned me to collaborate on a Kinect-driven interactive for their client, Alstom. The finished piece was exhibited at Alstom's annual company dinner at the Science Museum. It was a very fun but challenging project, and we hope to get further funding from Alstom to develop it into a fully fledged piece of educational software.

Kinect for Windows SDK Programming Guide


This book, produced by Packt Publishing, purports to be a complete guide to the Kinect for Windows SDK. It is not to be confused with broader Kinect books that cover subjects such as OpenNI, Omek Beckon, libfreenect and other related topics; this book concentrates very much on the official Microsoft Kinect SDK.

It begins with an excellent, well-researched discussion of the Kinect hardware, which is well illustrated and presents all of the relevant information in an easily digestible form. It can be picked up quickly if you need to quote specs to a client, like, for example, the field of view of the depth-sensing camera. It’s 43 degrees.

We are then walked through the basic SDK setup, which is simple enough, followed by a discussion of the sensor’s capabilities. Seasoned professionals may want to skip through this, as they will likely already know much of it, but there may still be a few head-scratching or eureka moments where you fill in a gap in your Kinect knowledge, so I’d recommend a read through of this section.
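For C++ developers, the equivalent of that setup looks something like the snippet below. This is my own rough sketch using the native NuiApi calls from the SDK, not an example from the book, so treat the flags and stream options as an approximation:

```cpp
#include <Windows.h>
#include <NuiApi.h>

// A minimal sketch (assuming the Kinect SDK v1 native C++ API) of the kind of
// setup the book walks through: initialise the runtime and open a depth
// stream that also carries player index data.
HANDLE depthStream = nullptr;

bool initKinect()
{
    // Ask the runtime for depth + player index data and skeleton tracking.
    HRESULT hr = NuiInitialize(NUI_INITIALIZE_FLAG_USES_DEPTH_AND_PLAYER_INDEX |
                               NUI_INITIALIZE_FLAG_USES_SKELETON);
    if (FAILED(hr))
        return false;

    // Open the depth stream at 320x240, the resolution that supports player index.
    hr = NuiImageStreamOpen(NUI_IMAGE_TYPE_DEPTH_AND_PLAYER_INDEX,
                            NUI_IMAGE_RESOLUTION_320x240,
                            0,          // frame flags
                            2,          // buffered frame limit
                            nullptr,    // optional next-frame event
                            &depthStream);
    return SUCCEEDED(hr);
}
```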

After an overview of the various tools and components of the SDK, we start to see some code examples.

Sadly, this is where the book and I begin to part company. The example code is almost entirely written in C#, which is not a language I generally use. Although I’m perfectly comfortable using Visual Studio, I generally use it to code in C++ (which the SDK extensively supports), so the lack of C++ examples feels like a bit of an oversight. I’m sure this wouldn’t be a problem for anyone starting out who is already familiar with C# and WPF programming, but as I work with Cinder and a number of other C++ libraries, C# isn’t really an option.

The book continues on through the various capabilities of the device and the SDK, and as long as you don’t mind being tied into C#, it’s a pretty comprehensive read that holds your hand all the way through. It also covers tricky stuff like the encoding of player IDs into the 16-bit depth image stream, something which can cause a lot of confusion starting out, but is vital to get a handle on.
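For reference, here is roughly what unpacking that stream involves. With player index tracking enabled, each 16-bit depth pixel carries the depth in millimetres in its upper 13 bits and a player index (0 for no player, 1–7 for tracked players) in its lower 3 bits. The SDK provides equivalent helpers and constants; the bit arithmetic below is my own sketch of what is actually going on:

```cpp
#include <cstdint>

// Unpack the Kinect SDK v1 packed depth format: depth in millimetres in the
// upper 13 bits, player index in the lower 3 bits. Constant names are mine,
// but the shift and mask match that packed layout.
constexpr uint16_t kPlayerIndexBits = 3;
constexpr uint16_t kPlayerIndexMask = 0x0007;

inline uint16_t depthInMillimetres(uint16_t packedPixel)
{
    return packedPixel >> kPlayerIndexBits;
}

inline uint16_t playerIndex(uint16_t packedPixel)
{
    return packedPixel & kPlayerIndexMask;
}
```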

It also covers less well understood topics like speech recognition and beamforming, and does a good job of introducing the reader to simple gesture recognition.
Be aware, however, that gesture recognition is not actually provided by the Kinect SDK as such, and you will have to come up with your own solution for this. This is a pretty common gotcha with Kinect applications, and it can take a long time to get to grips with, so be warned that although the material in the book is a good starting point, you may want to look into more sophisticated gesture recognition solutions if your application needs to do anything complicated.
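To give an idea of the sort of thing you end up writing yourself, here is a very rough sketch of a swipe detector that watches how far a tracked hand joint travels over a short window of frames. It is illustrative only, not code from the book or from a shipped project; the struct, thresholds and window size are all made up:

```cpp
#include <cstddef>
#include <deque>
#include <cmath>

// A hand-rolled swipe detector: collect recent hand positions and fire when
// the hand has travelled mostly horizontally within the window. A real
// solution would also check velocity, direction and skeleton confidence.
struct JointSample { float x; float y; };

class SwipeDetector
{
public:
    // Returns true when a right-to-left... sorry, left-to-right swipe is seen.
    bool update(const JointSample& hand)
    {
        history.push_back(hand);
        if (history.size() > windowSize)
            history.pop_front();
        if (history.size() < windowSize)
            return false;

        float dx = history.back().x - history.front().x;
        float dy = std::fabs(history.back().y - history.front().y);

        // A swipe is mostly horizontal travel with little vertical drift.
        if (dx > minTravel && dy < maxDrift) {
            history.clear();   // avoid firing repeatedly for one swipe
            return true;
        }
        return false;
    }

private:
    std::deque<JointSample> history;
    static constexpr std::size_t windowSize = 15;    // roughly half a second at 30fps
    static constexpr float       minTravel  = 0.35f; // metres, in skeleton space
    static constexpr float       maxDrift   = 0.15f;
};
```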

As I’ve mentioned, I was a bit disappointed by the heavy reliance on C#, but the rest of the book is so useful that I’d say it’s a welcome addition to your library even if you don’t use C#, for the hardware information alone.

Kinect for Windows SDK Programming Guide is available now from Packt Publishing.

EE Interactive Displays with Kinect for Windows

In September 2012, I developed a Kinect-based touchless interactive display for EE, the UK’s first 4G/LTE network. The display features user outline tracking and various particle systems which display fonts, icons and user interaction feedback. I was a lead developer on the build, along with the awesome Michael Lawrence and Jonathon Curtis. Publicis Chemistry agency creatives were David Prideaux (ECD), Paul Westmooreland (AD) and Neame Ingram (CW).

The skeleton and outline of passers-by are tracked by a Kinect sensor, mapped to particle repulsors and then used to influence the movement of a sprung particle grid. There is also some gesture recognition going on, which we developed from scratch. We used the Kinect for Windows SDK CinderBlock in conjunction with OpenCV, made heavy use of Cinder’s Timeline classes, and built a few custom animation classes to make our particles easier to manage and control.
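For the curious, here is a simplified sketch of the idea behind the sprung grid. Each particle is tethered to its home position on the grid by a spring, and the tracked joints act as repulsors that push nearby particles away. The real build did this inside Cinder with our custom animation classes, so the types and constants below are illustrative only:

```cpp
#include <algorithm>
#include <cmath>
#include <vector>

// Sprung particle grid: spring force back towards a home position, damping,
// and inverse-square repulsion from each tracked joint.
struct Vec2 { float x = 0.0f, y = 0.0f; };

struct Particle
{
    Vec2 home; // rest position on the grid
    Vec2 pos;  // current position
    Vec2 vel;  // current velocity
};

void updateParticle(Particle& p, const std::vector<Vec2>& repulsors, float dt)
{
    const float springK  = 8.0f;   // pull back towards home
    const float damping  = 0.92f;  // bleed off energy each step
    const float repelStr = 60.0f;  // strength of the repulsors
    const float minDist2 = 1.0f;   // avoid division by zero

    // Spring force back towards the particle's home position.
    Vec2 force { (p.home.x - p.pos.x) * springK,
                 (p.home.y - p.pos.y) * springK };

    // Inverse-square repulsion from each tracked joint.
    for (const Vec2& r : repulsors) {
        float dx = p.pos.x - r.x;
        float dy = p.pos.y - r.y;
        float d2 = std::max(dx * dx + dy * dy, minDist2);
        float d  = std::sqrt(d2);
        force.x += (dx / d) * (repelStr / d2);
        force.y += (dy / d) * (repelStr / d2);
    }

    // Simple Euler integration with damping.
    p.vel.x = (p.vel.x + force.x * dt) * damping;
    p.vel.y = (p.vel.y + force.y * dt) * damping;
    p.pos.x += p.vel.x * dt;
    p.pos.y += p.vel.y * dt;
}
```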

Our main challenges were:
– Aggressive timescales (approx. 10 weeks from concept to deployment)
– Complex branding guidelines, set out by Wolff Olins, determining the behaviour of the particles
– Simultaneous deployment in 20 different locations, all of which needed custom installations
– Gesture recognition as a central part of the brief

I’d like to express my gratitude to Stephen Schieberl for publishing his work on the Kinect SDK CinderBlock, which this project relied upon heavily. Thanks dude! You rule!

Also thanks to Andrew Bell for his tireless work on Cinder and his very helpful responses to my various questions.

The project was commissioned by EE and produced by London-based agency Publicis Chemistry.

Secret Cinema : Prometheus Installation

In June 2012, I was asked to join a large group of installation artists working on a top secret project in a massive warehouse near Euston, in London.

Secret Cinema’s team of artists, performers, choreographers and production designers created a fully immersive live cinema experience, which I was happy to be a part of.

In response to the brief, I created an emotional programming lab for the David 8 android, which featured various sculptures, live interactions, and a sophisticated Kinect/OpenNI-driven threat recognition system.

Didn’t get to meet Ridley Scott though.