This is my “Hello World” for iOS.
OK, that’s a lie. First I did some trace-to-the-console stuff, then I did the obligatory “paint with your finger” type deal, and then spent a relatively short but intensive period on this as a more finished piece. It was meant as an exercise in learning some iOS API, coding patterns and idioms. I also had to learn how to use a Mac seriously for the first time (!).
I recycled the maze algorithm posted previously, and kept that part in C++. Then I combined it with live camera input, which I guess is my habit. I got the sense early on that the end result wouldn’t necessarily be spectacular but forged ahead anyway for the reasons mentioned above, and tried to make it as finished as possible.
A few more details…
User-selectable parameters determine which pixels get erased from the image. This can be based on brightness, or on the apparent red-ness, green-ness, or blue-ness of the image. The calculation for the latter looks like this, and I think it worked out quite well. If we’re testing for the “redness” of a pixel…
int redness = (redChannel - greenChannel) + (redChannel - blueChannel);
if (redness < someThresholdValue) eraseMyPixel();
Redness can be a good criterion for filtering for exposed skin, depending on skin tone. Greenness can be a good filter for typical indoor settings, because green tends to show up less often than red or blue (unless the walls are painted green, which would in any case suggest another interesting use case: green screen...). You can see in the video, where I'm waving around a green folder, how well the green gets isolated.
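The same dominance test generalizes to any channel, which is roughly how I'd sketch it (the function and variable names here are my own invention, not the app's actual code):

```cpp
// Hypothetical generalization of the per-channel test above: how strongly
// one channel dominates the other two. For "redness", dominant = R and the
// others are G and B; for "greenness", dominant = G, and so on.
int channelDominance(int dominant, int other1, int other2) {
    return (dominant - other1) + (dominant - other2);
}

// A pixel survives only if its dominance clears the threshold;
// otherwise it gets erased.
bool keepPixel(int dominance, int threshold) {
    return dominance >= threshold;
}
```

A gray pixel (equal channels) scores zero on every test, which is why the filter isolates saturated colors so cleanly.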
So anyway, the image is scaled to the same dimensions as the maze so a 1-to-1 comparison can be made between its pixels and the squares of the maze. Blocks of the maze are removed wherever their corresponding pixels have been erased, and the adjacent vertical or horizontal segment(s) are filled in for aesthetic effect. At that point the maze is, unfortunately, no longer solvable.
Capturing camera data in iOS happens on a separate thread, so concurrency issues turned out to be a big deal, particularly since the program runs in a continuous loop. I tried to do as much of the image processing as possible on that secondary thread to take advantage of the dual-core-ness of the iPad 2, but could never quite make it work out. In the end, I had to hand the image data off to the UI thread to do the rest of the work, but at least learned the value of the NSObject method
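Stripped of any iOS specifics, the handoff boils down to the capture thread publishing the latest frame under a lock and the UI thread picking it up when it's ready to draw. A generic C++ sketch of that pattern (not the actual Objective-C code, and all names invented):

```cpp
#include <mutex>
#include <vector>

// Generic capture-thread / UI-thread handoff. The capture thread calls
// publish() with each new frame; the UI thread calls takeLatest() when it
// is ready to draw. The mutex keeps the two from touching the buffer at
// the same time.
class FrameHandoff {
public:
    void publish(const std::vector<unsigned char>& frame) {
        std::lock_guard<std::mutex> lock(mutex_);
        latest_ = frame;
        hasFrame_ = true;
    }

    // Copies out the newest frame if one is pending; returns false otherwise.
    bool takeLatest(std::vector<unsigned char>& out) {
        std::lock_guard<std::mutex> lock(mutex_);
        if (!hasFrame_) return false;
        out = latest_;
        hasFrame_ = false;  // consume it so a stale frame isn't redrawn
        return true;
    }

private:
    std::mutex mutex_;
    std::vector<unsigned char> latest_;
    bool hasFrame_ = false;
};
```

Dropping intermediate frames rather than queueing them is the usual choice for a live preview: the UI only ever cares about the newest image.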