The underlying idea is a realtime, VJ-style performance tool. This is a work in progress. Using openFrameworks, Kinect point-cloud data is collected as a sequence of frames that can be saved to disk and played back on demand. Points beyond a specified depth are filtered out, and a bounding box is calculated over the remaining points to form the basis of some simple dynamic interactions.
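The depth filter and bounding box described above can be sketched roughly like this. This is a minimal, dependency-free illustration, not the project's actual code: the `Point3` struct stands in for openFrameworks' point types, and `filterAndBound` is a hypothetical name.

```cpp
#include <cfloat>
#include <vector>

// Hypothetical minimal point type standing in for the Kinect's
// per-sample 3D points (the real code uses openFrameworks types).
struct Point3 { float x, y, z; };

// Axis-aligned bounding box accumulated over the surviving points.
struct BoundingBox {
    float minX, minY, minZ;
    float maxX, maxY, maxZ;
};

// Drop points beyond maxDepth (the depth cutoff described above)
// and compute a bounding box over whatever survives, in one pass.
inline std::vector<Point3> filterAndBound(const std::vector<Point3>& in,
                                          float maxDepth,
                                          BoundingBox& box) {
    box = { FLT_MAX, FLT_MAX, FLT_MAX, -FLT_MAX, -FLT_MAX, -FLT_MAX };
    std::vector<Point3> out;
    out.reserve(in.size());
    for (const Point3& p : in) {
        if (p.z > maxDepth) continue;   // beyond the cutoff: filtered out
        out.push_back(p);
        if (p.x < box.minX) box.minX = p.x;
        if (p.y < box.minY) box.minY = p.y;
        if (p.z < box.minZ) box.minZ = p.z;
        if (p.x > box.maxX) box.maxX = p.x;
        if (p.y > box.maxY) box.maxY = p.y;
        if (p.z > box.maxZ) box.maxZ = p.z;
    }
    return out;
}
```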
Various effects with variations can be triggered and combined in real time on both saved and live video:
An ostentatious particle emitter effect:
Explosions which can be triggered on demand or based on sudden changes to the bounding box:
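One plausible way to detect the "sudden change to the bounding box" that triggers an explosion is to compare the box's volume between consecutive frames and fire when the relative jump exceeds a tuned threshold. This is a guess at the mechanism, not the project's actual trigger logic; the threshold value and all names are illustrative.

```cpp
#include <cmath>

// Hypothetical axis-aligned box; in practice this would come from
// the depth-filtered point cloud of each frame.
struct Box {
    float minX, minY, minZ, maxX, maxY, maxZ;
};

inline float boxVolume(const Box& b) {
    return (b.maxX - b.minX) * (b.maxY - b.minY) * (b.maxZ - b.minZ);
}

// Fire when the box volume jumps by more than `threshold` relative
// to the previous frame. The 0.5 default is purely illustrative.
inline bool shouldExplode(const Box& prev, const Box& cur,
                          float threshold = 0.5f) {
    float dv  = std::fabs(boxVolume(cur) - boxVolume(prev));
    float ref = std::fmax(boxVolume(prev), 1e-6f); // avoid divide-by-zero
    return dv / ref > threshold;
}
```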
“Alpha trails” (with more hands)…
Strobe, with a quick time-shifting effect:
The priority here has been to deepen my until-now shallow knowledge of C++, focusing on code design and performance rather than on exploring novel uses of the Kinect as an input device (or so I tell myself).
Given more time, I’d like to do more with the visual effects, look into shaders, write a polygon-based renderer (rather than rendering simple points), and find excuses to do more with multi-threading (currently the loading routine runs on a separate thread, and the ‘video capture’ routine runs on three). And find a clever use for OpenCV to do more than the simple bounding-box work I’m doing now. And play with MIDI to trigger video clips and map various parameters DAW-style. And then maybe work on a real user interface, possibly broken out into a separate window, using Flash or what have you.
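The separate loading thread mentioned above typically boils down to a worker thread filling a lock-guarded queue that the render loop drains each tick. Here's a minimal sketch of that pattern; the class, names, and the fake "decode" step are all illustrative, not the project's actual implementation.

```cpp
#include <deque>
#include <mutex>
#include <thread>
#include <vector>

// Hypothetical frame payload standing in for one decoded point-cloud frame.
using Frame = std::vector<float>;

// A worker thread pushes decoded frames into a mutex-guarded queue;
// the main/render thread polls them off without blocking.
class FrameLoader {
public:
    void start(int frameCount) {
        worker = std::thread([this, frameCount] {
            for (int i = 0; i < frameCount; ++i) {
                Frame f(3, static_cast<float>(i)); // pretend "decode"
                std::lock_guard<std::mutex> lock(mtx);
                queue.push_back(std::move(f));
            }
        });
    }

    // Called from the render loop each tick; returns false when empty.
    bool poll(Frame& out) {
        std::lock_guard<std::mutex> lock(mtx);
        if (queue.empty()) return false;
        out = std::move(queue.front());
        queue.pop_front();
        return true;
    }

    void join() { if (worker.joinable()) worker.join(); }

private:
    std::thread worker;
    std::mutex mtx;
    std::deque<Frame> queue;
};
```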
I believe the core recording and playback system and my implementation of a “Point Cloud Video” file format are logical candidates for a public post on GitHub, but publishing them will entail some rationalization and massaging of the code.