All posts by Alias

Alias Cummins.

SCA Augmented Reality demo

As part of the Publicis Brand Loyalty Innovation Lab, we created an augmented reality demo for SCA’s popular brand of kitchen tissue.

Because SCA are Europe’s largest private owner of sustainable forests, we wanted to create a point of sale experience that would communicate this. We created a simple augmented reality demo to show a virtual tree growing from one of their products. The technology used does not require any special markers to be printed on the packaging, as the packaging design itself can be recognized by the software, using machine learning techniques.

This was a fun project to work on, and it was a relatively short turnaround – the application was developed and the tree modeled in just under two working days.

Mind Candy’s PopJam on Android

In 2014 I worked on an awesome social media app for Mind Candy called PopJam. It’s like a version of Instagram, but safe for kids to use (there’s a lot of technology behind the scenes to make sure of that).

I was mostly working on getting the Android version up to feature parity with the iOS version, so I was fixing bugs, adding features, and generally working as part of a team to improve things.

Mind Candy is an awesome place to work and I had an excellent time there! Check out the app on the Google Play Store.

Augmented Reality with Kinect


Rui Wang’s book on Augmented Reality with Kinect is a lovely introduction to using the Kinect SDK to create a simple application, and it takes the reader through a large amount of information on Kinect in a remarkably short space of time. It holds your hand and walks you, step by step, through the process of creating a “fruit ninja” style game (much beloved of UK Prime Minister David Cameron) and provides detailed, well-annotated code samples for you to use.

This short tutorial book is an absolute must for anyone finding their feet with Kinect development, and I would strongly recommend it for any intermediate C++ coder who wants to dip a toe into the world of Natural User Interaction.

Augmented Reality means a lot of different things to a lot of different people, but this book is very much in the vein of taking a live video feed and adding CG elements on top, so if that’s the sort of thing you’re interested in, it delivers. Be warned, however, that it does not cover topics such as AR markers, which are used a lot in marketing and mobile applications; it is purely focused on the Windows Kinect side of things, at which it does a very good job.


Alstom Island : Kinect Application with Large Blue

In October 2012, Large Blue, an agency in Covent Garden, London, commissioned me to collaborate on a Kinect-driven interactive for their client, Alstom. The finished piece was exhibited at Alstom’s annual company dinner at the Science Museum. It was a very fun but challenging project, and we hope to get further funding from Alstom to develop it into a fully fledged piece of educational software.

Speed of Light Salford : Long Exposure Video Effects

In February I was approached by the nice chaps at BBC R&D in Manchester
to work on their custom light painting system.

The system takes an SDI feed from a live camera and applies GLSL shaders to
the video in real time. I produced an application to run on the BBC’s bespoke
broadcast systems, and developed a long exposure GLSL fragment shader for
live video.
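The heart of a long-exposure effect like this is a simple accumulation blend: each output pixel keeps the brightest value ever seen at that position, so moving lights leave trails. Here is a minimal sketch of that idea in Python (the function name and frame format are illustrative only; the production version was a GLSL fragment shader running on live SDI video):

```python
def long_exposure(frames):
    """Accumulate a long-exposure image by keeping the brightest value
    seen at each pixel across all frames (the classic light-painting
    'lighten' blend). Frames are equal-sized 2D lists of grey values."""
    acc = [row[:] for row in frames[0]]
    for frame in frames[1:]:
        for y, row in enumerate(frame):
            for x, value in enumerate(row):
                if value > acc[y][x]:
                    acc[y][x] = value
    return acc

# A bright spot moving along the bottom row leaves a trail:
frames = [
    [[0, 0, 0], [255, 0, 0]],
    [[0, 0, 0], [0, 255, 0]],
    [[0, 0, 0], [0, 0, 255]],
]
print(long_exposure(frames))  # [[0, 0, 0], [255, 255, 255]]
```

On the GPU the same blend is one `max()` per fragment between the incoming frame and an accumulation texture, which is what makes it cheap enough to run on live video.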

The system was first used to produce moving long exposure shots for the
fantastic NVA : Speed of Light Salford event. I think it’s fair to say that
the event was a big success and everyone was generally quite happy.

The whole process was documented on Blue Peter – check it out!

Kinect for Windows SDK Programming Guide


This book, produced by Packt Publishing, purports to be a complete guide to the Windows Kinect SDK. It is not to be confused with broader-themed Kinect books, which may cover subjects such as OpenNI, Omek Beckon, libfreenect or various other related topics; this book concentrates very much on the official Microsoft Kinect SDK.

It begins with an excellent, well-researched discussion of the Kinect hardware, which is well illustrated and presents all of the relevant information in an easily digestible form. It’s also easy to pick up if you need to quote a client the specs, such as the field of view of the depth-sensing camera. (It’s 43 degrees.)
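Specs like that are directly useful on site surveys: the angular field of view tells you how much of a scene the sensor covers at a given distance, via w = 2·d·tan(fov/2). A back-of-envelope sketch (my own, not from the book):

```python
import math

def coverage(fov_degrees, distance_m):
    """Width (or height) of the area a camera sees at a given distance,
    derived from its angular field of view: w = 2 * d * tan(fov / 2)."""
    return 2.0 * distance_m * math.tan(math.radians(fov_degrees) / 2.0)

# Vertical coverage of a 43-degree field of view at 2 metres:
print(round(coverage(43.0, 2.0), 2))  # 1.58
```

So a user standing two metres from the sensor has roughly 1.6 m of vertical coverage to play with, which matters when you’re deciding how high to mount the thing.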

We are then walked through the basic SDK setup, which is simple enough, followed by a discussion of the sensor’s capabilities. Seasoned professionals may want to skip through this, as they will likely already know much of it, but there may nevertheless be a few head-scratching or eureka moments where you fill in a gap in your Kinect knowledge, so I’d recommend a read-through of this section.

After an overview of the various tools and components of the SDK, we start to see some code examples.

Sadly, this is where the book and I begin to part company. The example code is almost entirely written in C#, which is not a language I generally use. Although I’m perfectly comfortable using Visual Studio, I generally use it to code in C++ (which the SDK extensively supports) so I feel that this was a bit of an oversight. I’m sure for anyone starting out who is already familiar with C# and WPF programming, this wouldn’t be a problem, but as I work with Cinder and a number of other C++ libraries, C# isn’t really an option.

The book continues on through the various capabilities of the device and the SDK, and as long as you don’t mind being tied into C#, it’s a pretty comprehensive read and holds your hand all the way through. It also covers tricky material like the encoding of player IDs into the 16-bit depth image stream, something which can cause a lot of confusion when starting out, but is vital to get a handle on.
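That packing is straightforward once you know it’s there: in the Kinect v1 SDK, each 16-bit depth pixel carries the player index in its low three bits, with the depth in millimetres in the thirteen bits above. A minimal sketch (the helper name is mine):

```python
def unpack_depth_pixel(value):
    """Split a 16-bit Kinect v1 depth pixel into (depth_mm, player_index).
    The SDK packs the player index into the low three bits and the
    depth in millimetres into the high thirteen."""
    player_index = value & 0x7   # 0 = no tracked player, 1-6 = player id
    depth_mm = value >> 3
    return depth_mm, player_index

# A pixel 1200 mm away belonging to player 2:
packed = (1200 << 3) | 2
print(unpack_depth_pixel(packed))  # (1200, 2)
```

Forgetting the shift is the classic mistake: raw values look like plausible depths but are eight times too large, with the low bits flickering wherever a player stands.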

It also covers less well understood topics like speech recognition and beamforming, and does a good job of introducing the reader to simple gesture recognition.
Be aware, however, that gesture recognition is not actually provided by the Kinect SDK as such, and you will have to come up with your own solution for this. This is a pretty common gotcha with Kinect applications, and it can take a long time to get to grips with, so be warned that although the material in the book is a good starting point, you may want to look into more sophisticated gesture recognition solutions if your application needs to do anything complicated.
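For a flavour of what “your own solution” looks like at its very simplest, here is a hypothetical one-axis swipe check, nothing like production-grade: track a hand joint’s horizontal position over a few frames and fire when the net travel crosses a threshold (all names and values here are illustrative):

```python
def detect_swipe(hand_x_history, threshold=0.4):
    """Very simple one-axis gesture check: report a left or right swipe
    when the hand's net horizontal travel over the buffered frames
    exceeds a threshold (coordinates in metres, skeleton-space style).
    Real systems add timing windows, smoothing and per-axis constraints."""
    if len(hand_x_history) < 2:
        return None
    travel = hand_x_history[-1] - hand_x_history[0]
    if travel > threshold:
        return "swipe_right"
    if travel < -threshold:
        return "swipe_left"
    return None

print(detect_swipe([0.1, 0.2, 0.4, 0.6]))  # swipe_right
print(detect_swipe([0.6, 0.3, 0.1]))       # swipe_left
print(detect_swipe([0.1, 0.15]))           # None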

As I’ve mentioned, I was a bit disappointed by the heavy reliance on C#, but the rest of the book is so useful that I’d say it’s a welcome addition to your library even if you don’t use C#, for the hardware information alone.

Kinect for Windows SDK Programming Guide is available now from Packt Publishing.

EE Interactive Displays with Kinect for Windows

In September 2012, I developed a Kinect based touchless interactive display for EE, the UK’s first 4G/LTE network. The display features user outline tracking and various particle systems which display fonts, icons and user interaction feedback. I was a lead developer on the build, along with the awesome Michael Lawrence and Jonathon Curtis. Publicis Chemistry agency creatives were David Prideaux (ECD), Paul Westmooreland (AD) and Neame Ingram (CW).

The passing user’s skeleton and outline are tracked by a Kinect sensor, mapped to particle repulsors and then used to influence the movement of a sprung particle grid. There is also some gesture recognition going on, which we developed from scratch. We used the Kinect for Windows SDK from Cinder in conjunction with OpenCV, made heavy use of Cinder’s Timeline classes, and built a few custom animation classes to make our particles easier to manage and control.
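The spring-and-repulsor idea can be sketched in a few lines. This is an illustrative Python toy, not the production Cinder/C++ code, and every constant in it is made up: each particle is pulled back toward its home cell by a spring while tracked points on the user push it away.

```python
import math

def step_particle(pos, vel, home, repulsor, dt=1.0 / 60.0,
                  k=40.0, damping=0.9, repel=0.5):
    """One Euler step for a grid particle: a spring pulls it back to its
    home cell while a repulsor (e.g. a point on a tracked user outline)
    pushes it away with inverse-square falloff. Constants are illustrative."""
    ax = k * (home[0] - pos[0])
    ay = k * (home[1] - pos[1])
    dx, dy = pos[0] - repulsor[0], pos[1] - repulsor[1]
    d2 = dx * dx + dy * dy
    if d2 > 1e-6:
        d = math.sqrt(d2)
        ax += repel * dx / (d * d2)   # direction scaled by 1/distance^2
        ay += repel * dy / (d * d2)
    vel = ((vel[0] + ax * dt) * damping, (vel[1] + ay * dt) * damping)
    return (pos[0] + vel[0] * dt, pos[1] + vel[1] * dt), vel

# A particle sitting at home gets pushed away when a repulsor comes close:
pos, vel = (0.0, 0.0), (0.0, 0.0)
pos, vel = step_particle(pos, vel, home=(0.0, 0.0), repulsor=(-0.1, 0.0))
print(pos[0] > 0.0)  # True: nudged to the right, away from the repulsor
```

Run per particle per frame, the spring term makes the grid snap back into place as soon as the user moves on, which is what gives these installations their pleasing rubbery feel.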

Our main challenges were:
– Aggressive timescales (approx. 10 weeks from concept to deployment)
– Complex branding guidelines, set out by Wolff Olins, determining the behaviour of the particles
– Simultaneous deployment in 20 different locations, all of which needed custom installations
– Gesture recognition as a central part of the brief

I’d like to mention my gratitude to Stephen Schieberl for publishing his work on the Kinect SDK cinderblock, which this project relied upon heavily. Thanks dude! You rule!

Also thanks to Andrew Bell for his tireless work on Cinder and his very helpful responses to my various questions.

The project was commissioned by EE, and produced by London based agency Publicis Chemistry.

Secret Cinema : Prometheus Installation

In June, 2012, I was asked to join a large group of installation artists working on a top secret project in a massive warehouse near Euston, in London.

Secret Cinema’s team of artists, performers, choreographers and production designers created a fully immersive live cinema experience, which I was happy to be a part of.

In response to the brief, I created an emotional programming lab for the David 8 android, which featured various sculptures, live interactions, and a sophisticated Kinect/OpenNI-driven threat recognition system.

Didn’t get to meet Ridley Scott though.

YouView

Early on in its conception, I was involved with the YouView project. My role involved investigating ActionScript performance on mobile, memory management strategies, automated test planning, and development of RDF metadata parsing. Pretty exciting stuff, I think you’ll agree.
