July 12th, 2011
This is my “Hello World” for iOS.
OK, that’s a lie. First I did some trace-to-the-console stuff, then the obligatory “paint with your finger” type deal, and then I spent a relatively short but intensive period on this as a more finished piece. It was meant as an exercise in learning some iOS APIs, coding patterns, and idioms. I also had to learn how to use a Mac seriously for the first time, which apparently I’m not ashamed to admit on the record.
I recycled the maze algorithm posted previously, and kept that part in C++. Then I combined it with live camera input, which I guess is my habit. I got the sense early on that the end result wouldn’t necessarily be spectacular but forged ahead anyway for the reasons mentioned above, and tried to make it as finished as possible.
I haven’t submitted this little toy to the App Store™ for various reasons, some of which are arguably irrational, and it’s possible I should reconsider. Should that come to pass, I will be sure to post an update.
A few more details…
June 14th, 2011
The underlying idea here is something along the lines of a realtime VJ-like performance tool. This is a work in progress. Using openFrameworks, Kinect point-cloud data is collected in a sequence of frames that can be saved to disk and played back on demand. Points beyond a specified depth are filtered out, and a bounding box is calculated to form the basis of some simple dynamic interactions.
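The depth-filtering and bounding-box step might look something like the following in plain C++. This is a minimal sketch with made-up types (`Point3`, `filterAndBound`), not the project's actual code:

```cpp
#include <algorithm>
#include <limits>
#include <vector>

// Hypothetical stand-ins for a Kinect depth sample and its bounds —
// not the actual types from the project.
struct Point3 { float x, y, z; };
struct BoundingBox { Point3 min, max; };

// Drop points beyond maxDepth, then compute the axis-aligned bounding
// box of whatever remains.
BoundingBox filterAndBound(std::vector<Point3>& cloud, float maxDepth) {
    cloud.erase(std::remove_if(cloud.begin(), cloud.end(),
                    [maxDepth](const Point3& p) { return p.z > maxDepth; }),
                cloud.end());
    const float inf = std::numeric_limits<float>::max();
    BoundingBox box = { {  inf,  inf,  inf },
                        { -inf, -inf, -inf } };
    for (const Point3& p : cloud) {
        box.min.x = std::min(box.min.x, p.x);
        box.min.y = std::min(box.min.y, p.y);
        box.min.z = std::min(box.min.z, p.z);
        box.max.x = std::max(box.max.x, p.x);
        box.max.y = std::max(box.max.y, p.y);
        box.max.z = std::max(box.max.z, p.z);
    }
    return box;
}
```

The bounding box of the filtered cloud is what drives the simple dynamic interactions mentioned above.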
Various effects with variations can be triggered and combined in real time on both saved and live video:
An ostentatious particle emitter effect:
Explosions which can be triggered on demand or based on sudden changes to the bounding box:
“Alpha trails” (with more hands)…
Strobe, with quick time shifting effect:
The priority here has been to develop my until-now shallow knowledge of C++, focusing on code design and performance, rather than to explore novel uses of the Kinect as an input device (or so I tell myself).
Given more time, I’d like to:
- do more with the visual effects, look into shaders, and write a polygon-based renderer (rather than simple points);
- find excuses to do more with multi-threading (currently the loading routine runs on a separate thread, and the ‘video capture’ routine runs on three);
- find a clever use for OpenCV to do more than what I’m doing now with just a bounding box;
- play with MIDI to trigger video clips and map various parameters DAW-style;
- and then maybe work on a real user interface, possibly broken out in a separate window, using Flash or what have you.
I believe the core recording and playback system and my implementation of a “Point Cloud Video” file format are logical candidates for a public post on GitHub, but that will entail some rationalization and massaging of the code.
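For illustration only — the post doesn't describe the actual “Point Cloud Video” layout — a naive frame encoding for such a format might be nothing more than a point count followed by raw xyz floats:

```cpp
#include <cstdint>
#include <cstring>
#include <vector>

struct Point3 { float x, y, z; };

// Purely illustrative frame layout, NOT the actual format from the
// post: a 32-bit point count followed by packed xyz floats.
std::vector<uint8_t> encodeFrame(const std::vector<Point3>& pts) {
    std::vector<uint8_t> out(4 + pts.size() * sizeof(Point3));
    uint32_t n = (uint32_t)pts.size();
    std::memcpy(out.data(), &n, 4);
    std::memcpy(out.data() + 4, pts.data(), pts.size() * sizeof(Point3));
    return out;
}

std::vector<Point3> decodeFrame(const std::vector<uint8_t>& buf) {
    uint32_t n;
    std::memcpy(&n, buf.data(), 4);
    std::vector<Point3> pts(n);
    std::memcpy(pts.data(), buf.data() + 4, n * sizeof(Point3));
    return pts;
}
```

A real format would presumably add a file header, timestamps, and some compression, but the per-frame roundtrip is the core of it.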
March 30th, 2011
This is an update to FLV Encoder which adds an optional Alchemy routine that’s about 3.5x faster, as well as FileStream support for writing directly to a local file using AIR. The library has been architected in such a way that you can use the package while targeting either the browser (no FileStream support) or AIR, and either Flash 9 (no Alchemy support) or Flash 10 – without getting class dependency compiler errors.
I’ve put the package on GitHub, along with a couple of examples (including the one from the last post). The API has changed a little, so be sure to also see the example code below. A method updateDurationMetadata() has been added so the video duration does not have to be declared before the video has been created. Also, a bug where the top-most line of pixels was not being written has been fixed.
Realtime encoding demo:
Because of the increased speed of the Alchemy version, it is now viable to encode FLVs in realtime as the audio and video are being captured, at least within certain limits. Click the thumbnail above for an online demo that encodes webcam video and audio to a 320×240 file in real time. If your system is fast enough, you can keep the framerate set to 15 FPS with minimal hiccups. The browser-based version must hold the entire FLV in memory before saving to disk, but the equivalent AIR version can write directly to a file, so the only limiting factor is disk space. I’m using a dynamic timing and queuing system to keep video and audio in sync, which could be the topic of another post.
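One plausible shape for that timing-and-queuing idea — sketched here in C++ rather than the ActionScript the library is written in, and not the actual implementation — is to buffer audio and video packets separately and merge them in timestamp order:

```cpp
#include <queue>
#include <vector>

// Hypothetical packet: just a timestamp and a type flag.
struct Packet { int timestampMs; bool isAudio; };

// Merge separately captured audio and video queues into a single
// stream ordered by timestamp, so neither track drifts ahead in the
// output — one simple way to keep the two in sync while muxing.
std::vector<Packet> interleave(std::queue<Packet> audio,
                               std::queue<Packet> video) {
    std::vector<Packet> out;
    while (!audio.empty() || !video.empty()) {
        bool takeAudio = !audio.empty() &&
            (video.empty() ||
             audio.front().timestampMs <= video.front().timestampMs);
        if (takeAudio) { out.push_back(audio.front()); audio.pop(); }
        else           { out.push_back(video.front()); video.pop(); }
    }
    return out;
}
```

The “dynamic” part of a real system would also drop or duplicate frames when capture falls behind, which this sketch leaves out.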
Updated usage examples:
March 12th, 2011
FlvEncoder demo implementation: Records webcam video and audio to a local file
! Update: See updated FLV Encoder post + code here
A prospective client recently approached me about the possibility of adding audio support to SimpleFlvEncoder. They ultimately decided to go a different direction and so didn’t commission me for that work, but not before my curiosity had already gotten the better of me…
This class makes it possible to create FLVs in AS3 that contain both video and audio, all on the client side using the regular browser-based Flash Player. As with the old version from 2007 (!), video is uncompressed (or rather, just zlib-compressed on a per-frame basis). Audio must be supplied in uncompressed PCM format with either 8 or 16 bits per sample, mono or stereo, at 11, 22, or 44 kHz. The class can also create FLVs that contain only audio or only video.
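The class hides the muxing details, but the FLV container itself is pleasantly simple. For the curious, here is a rough sketch of the header and tag framing per the FLV file format spec — written in C++ for illustration, not the ActionScript from the library:

```cpp
#include <cstdint>
#include <vector>

// Append big-endian 24- and 32-bit integers.
static void putUI24(std::vector<uint8_t>& out, uint32_t v) {
    out.push_back((v >> 16) & 0xFF);
    out.push_back((v >> 8) & 0xFF);
    out.push_back(v & 0xFF);
}
static void putUI32(std::vector<uint8_t>& out, uint32_t v) {
    out.push_back((v >> 24) & 0xFF);
    out.push_back((v >> 16) & 0xFF);
    out.push_back((v >> 8) & 0xFF);
    out.push_back(v & 0xFF);
}

// FLV file header: "FLV", version 1, type flags (0x04 audio, 0x01
// video), a 4-byte data offset of 9, then PreviousTagSize0 = 0.
std::vector<uint8_t> flvHeader(bool hasAudio, bool hasVideo) {
    std::vector<uint8_t> out = { 'F', 'L', 'V', 0x01 };
    out.push_back((hasAudio ? 0x04 : 0x00) | (hasVideo ? 0x01 : 0x00));
    putUI32(out, 9);  // DataOffset
    putUI32(out, 0);  // PreviousTagSize0
    return out;
}

// Wrap one payload in an FLV tag: type (8 = audio, 9 = video), 24-bit
// size, 24+8-bit timestamp, 24-bit stream ID (always 0), payload, and
// a trailing PreviousTagSize of 11 + payload size.
std::vector<uint8_t> flvTag(uint8_t type, uint32_t timestampMs,
                            const std::vector<uint8_t>& payload) {
    std::vector<uint8_t> out;
    out.push_back(type);
    putUI24(out, (uint32_t)payload.size());
    putUI24(out, timestampMs & 0xFFFFFF);
    out.push_back((timestampMs >> 24) & 0xFF);  // TimestampExtended
    putUI24(out, 0);                            // StreamID
    out.insert(out.end(), payload.begin(), payload.end());
    putUI32(out, 11 + (uint32_t)payload.size());
    return out;
}
```

An FLV file is just the header followed by a sequence of such tags; the encoding work is in producing the tag payloads.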
Here is a quick and hand-wavy example of using the class (please see the comments in the source for more detail).
February 12th, 2011
A couple of months ago, while I was still a full-time student (that didn’t last very long, but that’s another story), I created a maze generator as a way to learn some C++, now posted here just for the fun of it.
I started with a description of the algorithm found on Wikipedia, worked it out in Processing (with 2D, 3D, and isometric views added as a bonus), and then ported it to C++ using openFrameworks for the basic display action.
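The post doesn't say which algorithm the Wikipedia article described, but a common choice for this kind of generator is the depth-first “recursive backtracker”. A compact C++ sketch, with assumed names throughout:

```cpp
#include <random>
#include <stack>
#include <vector>

// Each cell tracks whether it has been visited and whether its right
// and bottom walls are still standing (left/top walls belong to the
// neighbouring cells).
struct Cell { bool visited = false; bool wallRight = true; bool wallDown = true; };

// Depth-first backtracking: walk to a random unvisited neighbour,
// knocking down the wall between, and backtrack when stuck. The
// result is a "perfect" maze (a spanning tree of the grid).
std::vector<Cell> generateMaze(int w, int h, unsigned seed) {
    std::vector<Cell> grid(w * h);
    std::mt19937 rng(seed);
    std::stack<int> stack;
    grid[0].visited = true;
    stack.push(0);
    while (!stack.empty()) {
        int current = stack.top();
        int cx = current % w, cy = current / w;
        // Collect unvisited neighbours: 0=left, 1=right, 2=up, 3=down.
        std::vector<int> dirs;
        if (cx > 0     && !grid[current - 1].visited) dirs.push_back(0);
        if (cx < w - 1 && !grid[current + 1].visited) dirs.push_back(1);
        if (cy > 0     && !grid[current - w].visited) dirs.push_back(2);
        if (cy < h - 1 && !grid[current + w].visited) dirs.push_back(3);
        if (dirs.empty()) { stack.pop(); continue; }  // dead end: backtrack
        int d = dirs[rng() % dirs.size()];
        int next = current + (d == 0 ? -1 : d == 1 ? 1 : d == 2 ? -w : w);
        // Knock down the wall between current and next.
        if (d == 0) grid[next].wallRight   = false;
        if (d == 1) grid[current].wallRight = false;
        if (d == 2) grid[next].wallDown    = false;
        if (d == 3) grid[current].wallDown  = false;
        grid[next].visited = true;
        stack.push(next);
    }
    return grid;
}
```

Rendering is then just a matter of drawing the walls that remain, in 2D, 3D, or isometric projection.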
Presumably it could be used as the start of a game-like concept, if so inclined.
- Processing (source + Win/Mac executables)
- C++/oF (source + Windows executable)
February 3rd, 2011
Easily the most interesting and innovative commercial project I’ve been involved with recently has been maqet.com, which is created and run by artist Keith Cottingham.
I’ve been responsible for the site’s configurator component, which allows the user to create customized maquettes by changing their poses and skin. The “maqet” is then created in physical form through a 3D-printing process and delivered to the user.
Most recently, I’ve been working on the drawing tools that allow the user to paint on a two-dimensional canvas that updates the texture of the 3D model in real-time. Soon to be added is a freehand drawing-to-bezier curve tool.
This is a quick-and-dirty video capture of the configurator in action. But mostly, please feel encouraged to visit the site and try it for yourself.
August 21st, 2010
I just wanted to mention that at the end of the month I’ll be taking a happily anticipated break from the commercial grind and going back to school, to the Parsons MFA Design & Technology program.
I’m looking forward to posting new and hopefully interesting things here as I progress through the program, during which I aim to broaden the scope of my personal sandbox and make experimentation a full-time vocation. But hopefully without losing any of the snark.
July 31st, 2010
Video demonstrating my first “experiment” with the Android Market. An answer to a question nobody asked: “What if you made an Android program launcher in 3D?”
Video updated with most recent version running on Honeycomb, 6/2011
It was, to be frank, a gigantic pain in the ass coding the motion for this, since there seems to be no Tweener-like class written in Java (Java, not Processing…). Anyone know of one?
Built using min3D.