
Drive Wiki Maker

June 12th, 2014

I’ve been keeping a publicly viewable wiki of developer notes for a while now, but when I switched to taking notes on Google Drive, I still wanted a way to make them publicly accessible, which is what led to Drive Wiki Maker.

Drive Wiki Maker is a node.js service which periodically scrapes documents from a Google Drive directory to make them browsable through a public front-end.

And here’s my revamped dev wiki which uses it.

Virtual Trackpad for Android

March 31st, 2014

What started out as a quick test with sockets inevitably grew into a fully baked app. I published this over the winter break, but have just pushed an update, making it free with all features unlocked.

This is a quick demo:

I chose to create the desktop client in Java with the hope of being able to share the socket communications code between the desktop and Android clients. This actually worked out, without even a need for any platform-specific conditionals. A pleasant surprise to say the least.

Additionally, since both projects are Java based, I was able to stay in the same IDE (IntelliJ), which was nice. The desktop project uses SWT (a long-time Java UI library that uses OS-specific dependencies to allow for native OS-based UI widgets) and builds four different JARs: Windows 32-bit, Windows 64-bit, OSX 32-bit, and OSX 64-bit (yikes). In the last step of the build process, a script packages the OSX versions into app bundles.

Cursor movement is done using the Java Robot class. You’ll see that the mouse cursor can have “momentum” on release, which in practice can be super-useful. And mouse acceleration is configurable using this one-liner, which I need to remember to find uses for more often:

double logBase(double value, double base) { return Math.log(value) / Math.log(base); }

The network logic is hand-rolled, using UDP sockets. The encoded messages are kept as small as humanly possible. For example, a mouse-move command uses only 20 bytes: 4 bytes for a start-of-message token, 4 bytes for the command type (ie, move the mouse), two floats for the cursor position, and then 4 bytes for an end-of-message token. As a result, at least in my testing, under decent network conditions mouse updates stay pegged at a steady 60 Hz.
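For illustration, that 20-byte mouse-move layout could be packed along these lines (the desktop client is Java; the token and command values below are made up, since the post doesn’t publish the actual constants):

```java
import java.nio.ByteBuffer;

public class MouseMoveCodec {
    // Hypothetical values for illustration only; the app's real constants
    // aren't published in the post.
    static final int START_TOKEN = 0xCAFEBABE;
    static final int END_TOKEN   = 0xDEADBEEF;
    static final int CMD_MOUSE_MOVE = 1;

    // Packs a mouse-move command into exactly 20 bytes:
    // 4 (start token) + 4 (command type) + 4 + 4 (x, y floats) + 4 (end token).
    static byte[] encodeMouseMove(float x, float y) {
        ByteBuffer buf = ByteBuffer.allocate(20);
        buf.putInt(START_TOKEN);
        buf.putInt(CMD_MOUSE_MOVE);
        buf.putFloat(x);
        buf.putFloat(y);
        buf.putInt(END_TOKEN);
        return buf.array();
    }
}
```

The payload would then go out as-is in a `DatagramPacket`; with UDP there is no framing overhead beyond the tokens themselves, which is what keeps the per-update cost low enough for a 60 Hz stream.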

> Google Play

Anime Color Palette Experiment

March 27th, 2014

This was a novel attempt at ‘extracting’ color palettes from illustration-styled artwork (and specifically, from anime).

Starting with the assumption/requirement that shapes in the source image should have fairly well-defined boundaries, the image is partitioned into a collection of two-dimensional polygons (OpenCV’s findContours plus approxPolyDP).

Each chunk is then sampled for its surface area and its average color (based on a second assumption that each chunk has a more or less uniform color). The resulting chunks’ colors are then grouped together based on “proximity” by treating HSV values as three-dimensional positions, and each color group is ranked by surface area, resulting in a final color palette.
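A rough sketch of that grouping step (in Java for illustration, not the actual C++/OpenCV implementation; the greedy matching strategy, the hue scaling, and the distance threshold are all assumptions):

```java
import java.util.ArrayList;
import java.util.List;

public class PaletteGrouper {
    // One sampled polygon "chunk": its average color (HSV) and surface area.
    static class Chunk {
        final float h, s, v;   // h in [0, 360), s and v in [0, 1]
        final double area;
        Chunk(float h, float s, float v, double area) {
            this.h = h; this.s = s; this.v = v; this.area = area;
        }
    }

    // A group of near-identical colors with its accumulated surface area.
    static class Group {
        float h, s, v;
        double area;
        Group(Chunk c) { h = c.h; s = c.s; v = c.v; area = c.area; }
    }

    // Greedy grouping: each chunk joins the first existing group within
    // `threshold` (treating HSV as a 3D position), otherwise starts a new
    // group. Hue is scaled down to roughly match the 0..1 range of S and V.
    static List<Group> group(List<Chunk> chunks, double threshold) {
        List<Group> groups = new ArrayList<>();
        for (Chunk c : chunks) {
            Group match = null;
            for (Group g : groups) {
                double dh = (c.h - g.h) / 360.0, ds = c.s - g.s, dv = c.v - g.v;
                if (Math.sqrt(dh * dh + ds * ds + dv * dv) < threshold) {
                    match = g;
                    break;
                }
            }
            if (match != null) match.area += c.area;
            else groups.add(new Group(c));
        }
        // Rank groups by accumulated surface area, largest first: the top
        // few groups become the palette.
        groups.sort((a, b) -> Double.compare(b.area, a.area));
        return groups;
    }
}
```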

In the end, the results were mixed, but predictably better with artwork that has well-defined outlines and flat colors.

As a bonus, though, since each chunk is represented internally as a polygon, they can be treated as independent objects, and can therefore be subjected to potentially interesting visual treatments.

This is demonstrated in the video by (predictably enough…) doing some tween-ey things on the z-axis, as well as a quick “pencil-drawn” effect.

FWIW, the still images are from Kill La Kill, and the video is the OP of UN-GO.

From 3/2012. C++, cinder, OpenCV.

Spherical Racing Game Prototype – Level Editor

March 20th, 2014

Here’s a video walkthrough of the level editor. The UI needs work, but most of the major features are in place…

There are five main sections:

- Track: The racetrack is drawn directly on the sphere by pressing and dragging. A maximum turning angle can be used to ensure that turns in the track appear smooth.

- Shape: The vertices of the planet’s mesh can be extruded away from or towards the sphere center to create variations in its topology.

- Texture: Images can be burned into the planet’s texture, essentially by mapping image pixels to texels using raycasting. The algorithm runs on multiple threads. The texture can also be color-filled using three independent colors to create a continuous gradient around the sphere.

- Structures: 3D model obstacles can be placed on or around the planet. They’re treated as oriented bounding boxes for the collision detection.

- Preview: The user can jump in and out of preview mode to test out the level at any point.
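As an illustration of the maximum-turning-angle idea from the Track section (a sketch only, in Java rather than the prototype’s C++; the real editor works with points constrained to the sphere, but the bend test between consecutive track segments is the same idea):

```java
public class TrackAngleGuard {
    // Minimal 3D vector helpers.
    static float[] sub(float[] a, float[] b) {
        return new float[]{a[0] - b[0], a[1] - b[1], a[2] - b[2]};
    }
    static float[] norm(float[] v) {
        float len = (float) Math.sqrt(v[0] * v[0] + v[1] * v[1] + v[2] * v[2]);
        return new float[]{v[0] / len, v[1] / len, v[2] / len};
    }
    static float dot(float[] a, float[] b) {
        return a[0] * b[0] + a[1] * b[1] + a[2] * b[2];
    }

    // Returns true if appending `next` after points `a` then `b` keeps the
    // bend between the two consecutive segments within maxTurnDegrees.
    // A point that fails the test would be rejected (or the stroke resampled)
    // so the drawn track stays smooth.
    static boolean turnIsAcceptable(float[] a, float[] b, float[] next,
                                    double maxTurnDegrees) {
        float[] d1 = norm(sub(b, a));
        float[] d2 = norm(sub(next, b));
        double cos = Math.max(-1.0, Math.min(1.0, dot(d1, d2)));
        return Math.toDegrees(Math.acos(cos)) <= maxTurnDegrees;
    }
}
```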

Note on the video capture: To work around the choppiness that can result from doing video capture via AirPlay, I actually halved the framerate of the application and doubled the playback rate in the video encode. :)

Spherical Racing Game Prototype – Gameplay

February 27th, 2014

I’ve been working on this for quite a while. The main conceit is, as you can see, a game whose “game world” is literally a “world”, ie, a globe-shaped object, with a racing track that wraps around it. It’s 90% C++, but 10% Objective-C, which means it lives on iOS, and the iPad specifically, using openFrameworks.

A user-facing level editor is integrated into the game. Though it’s probably more accurate to say that the application _is_ the level editor, which has a playable game component as a byproduct of its execution and design (haha). I’ll demo the level editor’s features next, and then post about certain aspects of the game from a high level, maybe including spherical versus Euclidean geometry, AI logic, and ray-casting on a polygon to create textures.

It’s been a (seriously) long time since I’ve posted here so I’m hoping to show some video + code from other (mini-er) experiments soon as well.

FPO Image Generator

November 3rd, 2011

Is it not the norm for the agency developer, of whatever flavor, to be asked to make more progress than is reasonable given the current status of a project at any given point in time?

Thus, we continuously look for ways to make more progress with less information, even while increasing the risk of wasted work and unseemly displays of developer angst. At the very least, such experiences make a person receptive to finding a few tricks that might shave off a few extra minutes in one’s pursuit to meet the latest unreasonable deadline.

That having been said, the potential utility of this little tool should require no further explanation.

It has its origins in this capricious tweet from some time last winter.

- Download (uses Adobe AIR)

- Source (Flex project)

MAQET 3D Gallery App for Android

October 18th, 2011

I just completed an Android app which is a simple 3D gallery of user-created designs for MAQET.

It was fun to be able to apply previous work – the MAQET configurator component done in Flash – to a completely separate development environment (native Android).

Happily, I was able to use min3D for the OpenGL action, which allows me the opportunity to mention that min3D has been nominated for Packt Publishing’s 2011 Open Source Awards.

- Android Market link

Maze + Camera Visualizer Toy (iPad)

July 12th, 2011

This is my “Hello World” for iOS.

OK, that’s a lie. First I did some trace-to-the-console stuff, then I did the obligatory “paint with your finger” type deal, and then spent a relatively short but intensive period on this as a more finished piece. It was meant as an exercise in learning some iOS API, coding patterns and idioms. I also had to learn how to use a Mac seriously for the first time (!).

I recycled the maze algorithm posted previously, and kept that part in C++. Then I combined it with live camera input, which I guess is my habit. I got the sense early on that the end result wouldn’t necessarily be spectacular but forged ahead anyway for the reasons mentioned above, and tried to make it as finished as possible.

A few more details…

Kinect Point Cloud Visualizer

June 14th, 2011

The underlying idea here is something along the lines of a realtime VJ-like performance tool. This is a work in progress. Using openFrameworks, Kinect point-cloud data is collected in a sequence of frames that can be saved to disk and played back on demand. Points beyond a specified depth are filtered out, and a bounding box is calculated to form the basis of some simple dynamic interactions.
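The depth filtering and bounding-box step could be sketched roughly like this (Java for consistency with the other snippets here; the actual project is C++/openFrameworks, and the point layout is an assumption):

```java
public class PointCloudFilter {
    // Result of one frame's filtering pass: the number of surviving points
    // plus their axis-aligned bounding box.
    static class Result {
        int count;
        float minX = Float.MAX_VALUE, minY = Float.MAX_VALUE, minZ = Float.MAX_VALUE;
        float maxX = -Float.MAX_VALUE, maxY = -Float.MAX_VALUE, maxZ = -Float.MAX_VALUE;
    }

    // Drops points beyond maxDepth (z) and computes the bounding box of the
    // rest. `points` is assumed to be a flat x,y,z array, one triple per
    // Kinect sample.
    static Result filterAndBound(float[] points, float maxDepth) {
        Result r = new Result();
        for (int i = 0; i + 2 < points.length; i += 3) {
            float x = points[i], y = points[i + 1], z = points[i + 2];
            if (z > maxDepth) continue;   // beyond the depth cutoff: filtered out
            r.count++;
            r.minX = Math.min(r.minX, x); r.maxX = Math.max(r.maxX, x);
            r.minY = Math.min(r.minY, y); r.maxY = Math.max(r.maxY, y);
            r.minZ = Math.min(r.minZ, z); r.maxZ = Math.max(r.maxZ, z);
        }
        return r;
    }
}
```

Per-frame box extents (and sudden changes to them between frames) are then cheap to compare, which is what makes them a workable trigger for the explosion effects described below.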

Various effects with variations can be triggered and combined in real time on both saved and live video:

An ostentatious particle emitter effect:

Explosions which can be triggered on demand or based on sudden changes to the bounding box:

“Alpha trails” (with more hands)…

Strobe, with quick time shifting effect:

The priority for this has been to develop my thus-far shallow knowledge of C++, focusing on code design and performance, rather than to explore novel uses of the Kinect as an input device (or so I tell myself).

Given more time, I’d like to do more with the visual effects, look into shaders, write a polygon-based renderer (rather than simple points), and find excuses to do more with multi-threading (currently the loading routine runs on a separate thread, and the ‘video capture’ routine runs on three). And find a clever use for OpenCV to do more than what I’m doing now with just a bounding box. And play with MIDI to trigger video clips, and map various parameters DAW-style. And then maybe work on a real user interface, possibly broken out in a separate window, using Flash or what have you.

I believe the core recording and playback system and my implementation of a “Point Cloud Video” file format are logical candidates for a public post on GitHub, but will entail some rationalization and massaging of code.

Updated FLV Encoder, 3.5x faster with Alchemy

March 30th, 2011
Thumbnail - Click me
Demo: Record and encode FLVs from webcam in realtime, using Alchemy

Or, download AIR desktop version

This is an update to FLV Encoder which adds an optional Alchemy routine that’s about 3.5x faster, as well as FileStream support for writing directly to a local file using AIR. The library has been architected in such a way that you can use the package while targeting either the browser (no FileStream support) or AIR, and either Flash 9 (no Alchemy support) or Flash 10 – without getting class dependency compiler errors.

I’ve put the package on Github, along with a couple examples (including the one from the last post). The API has changed a little so be sure to also see the example code below. A method updateDurationMetadata() has been added so the video duration does not have to be declared at the start. Also, a bug where the top-most line of pixels was not being written has been fixed.

Realtime encoding demo:

Because of the increased speed of the Alchemy version, it is now viable to encode FLVs in realtime as the audio and video are being captured, at least within certain limits. Click on the thumbnail above for an online demo that encodes webcam video and audio to a file at 320×240 in real time. If your system is fast enough, you can keep the framerate set to 15FPS with minimal hiccups. The browser-based version must store the entire FLV in memory before saving to disk, but the equivalent AIR version can save its contents directly to a file so that the only limiting factor is disk space. I’m using a dynamic timing and queuing system to keep video and audio in sync, which could be the topic of another post.
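For a rough idea of what keeping two capture streams in timestamp order looks like (a sketch only; the actual system is ActionScript and does considerably more, so this Java example only illustrates the basic earliest-timestamp interleaving idea):

```java
import java.util.ArrayDeque;
import java.util.Deque;

public class AvInterleaver {
    // A captured chunk waiting to be written to the FLV: its media
    // timestamp (milliseconds) and whether it is audio or video.
    static class Tag {
        final long timestampMs;
        final boolean isAudio;
        Tag(long timestampMs, boolean isAudio) {
            this.timestampMs = timestampMs;
            this.isAudio = isAudio;
        }
    }

    private final Deque<Tag> audio = new ArrayDeque<>();
    private final Deque<Tag> video = new ArrayDeque<>();

    void pushAudio(long ts) { audio.add(new Tag(ts, true)); }
    void pushVideo(long ts) { video.add(new Tag(ts, false)); }

    // Pops whichever queued tag is earliest, so the output stream stays in
    // timestamp order even when the audio and video capture callbacks
    // arrive out of step with each other. Returns null when both are empty.
    Tag next() {
        if (audio.isEmpty()) return video.poll();
        if (video.isEmpty()) return audio.poll();
        return audio.peek().timestampMs <= video.peek().timestampMs
                ? audio.poll() : video.poll();
    }
}
```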

Updated usage examples:
