\n", "excerpt"=>"
(Expand all)
\n

\n
[+] Large-scale art installation idea
\n
\nA giant 'AR square' (not sure of the right terminology here...) is painted on the side of a building. People come by to view it thru the web application - an AIR app, possibly. Because the 'geometry' of the landscape around the painted square is known in advance (eg, city streets, adjacent buildings, etc), and can be assumed to remain 'constant', that information could be 'hardcoded' into the 3d scene for the dynamic 3d elements to interact with in a visually convincing manner. The 3d parts could also be properly occluded behind other buildings or certain street obstacles. The dynamic elements wouldn't even have to specifically interact with the location of the square itself; the square simply situates the camera in relation to the entire 3d space... A feature to save footage locally to be uploaded to a central repository later. Also, it wouldn't necessarily have to be a large-scale context. The same treatment could be done within a room-sized scene.\n
\n
[+] An interactive setup-phase to define scene geometry
\n
\nWith the square remaining in a fixed position, the user uses a few simple tools to draw 3d planes and rectangles on top of the video to describe the physical space around them. Eg, user draws 4 connecting line segments to describe a plane which represents a room's wall. (And then defines a few more planes that describe the other walls, floor, and ceiling). Actually, you might just be able to pinpoint the 8 corners of the inside box which makes up the room... Maybe the user can also 'overlay' a 3d cube over a coffee table or desk. Etc. The program could even draw from a library of furniture-like 3d objects for the user to translate, rotate, and scale to overlay onto the scene. These 3d elements then constitute solid objects which the dynamic 3d elements can interact with. The camera is free to move around within the scene as long as the square remains fixed.

\n

(a) 3d rain falling and hitting indoor furniture in interesting, 'convincing' ways. Or snowflakes falling, and collecting on various surfaces. Or a room getting slowly flooded with water, starting from the floor up to the ceiling... The supplied room geometry forming the basis for a platform game...

\n

(b) Actually, if the user-supplied data and the positioning information from the AR Toolkit are accurate enough, there's no reason why you couldn't take snapshots from the video imagery, dynamically snip out the various quadrilaterals corresponding to the scene geometry, correct for perspective, and then skin the 3d geometry with that video texture information... !

\n

(c) A 3d box is positioned over a real-world rectangle-shaped table or something. The top of that box is then used as a playing surface for something like pong pong or air hockey, which uses normal 3d game mechanics and assets but which is of course overlayed on top of live video. Imagine a replay feature where the winning shot is replayed in slow motion, but the user views it from different angles by moving the webcam. Virtual indoor handball by using AR in combination with motion detection.

\n

(d) A square is stuck on a person's chest. If the person's height is supplied, we have enough information for a gross bounding box. This is enough information to go on to do a number of potentially interesting things. If we make the assumption that the subject remains generally upright and standing, we know the general position of the floor as well.

\n

Update: Of course all these questions have already been dreamed up, and solved. This page from chronotext.org looks useful...\n

\n
[+] Use of 'physics'; use of a physics engine (eg, Jiglib)
\n
\nAs the AR square describes a plane, it of course lends itself to being acted upon as a solid surface. If we introduce extra scene geometry as described in points 1 or 2 above, even more could be done. Cubes or spheres falling from the ceiling to fill up a room (of course). Apply that sweet-ass Jiglib rally car example to a scene where the AR square is placed on the floor...

\n

Update: Cloth demo by Saqoosha. (I guess I should do more 'research' before clicking 'Publish' ;)

\n
\n
[+] A specific visual piece: Tentacles
\n
\nThe AR square, placed on the body. Multiple tentacles coming out of the square in anime/sci-fi style. A fun exercise to play with for... inverse kinematics; 3d bezier curve animation; animating bezier patches to generate mesh geometry (Away3D 2.3); tree-like branching of tentacles; 'generative art' generally; crazy, lurid motions. Assuming a fixed camera, various 'motion behaviors' based on the movement of the square on the body; tentacles reacting fluidly to translation and rotation of the square.\n
\n
[+] Interaction of dynamic 3d elements between two or more squares
\n
\nParticles coming out of one square and going into another; gravity-based motion between squares; arcs of electricity going from one to the other; tentacles (from point 4) coming out of one body and 'attacking' another body to which another square is attached, for some reason.\n
\n
[+] Interaction of the video bitmap information with the 3d elements
\n
\n(a) The video image used as an environment map applied to the dynamic 3d elements, to make it look reflective and vaguely chrome-like (with a little cleverness and finesse).

\n

(b) Pixelbender-like effects applied to the areas of the video that are 'underneath' or adjacent to the 3d elements. A 3d 'fire' coming out of the square, and the video imagery around the fire shimmering from the 'heat' of it. Wind-like motion-blur effect emanating from a virtual fan or something, and taking account of perspective. Pixel-dissolve-y action?

\n

(c) Real-time chroma-keying to mask out background video imagery. Dynamic 3d elements can then be made to appear to be circling around the subject by appearing both in front of or 'behind' the video.

\n

(d) The idea of treating the entire video with various video filters momentarily/sporadically to put the artificial content in bolder relief appeals to me...\n

\n
[+] Intelligent video color sampling
\n
\n(a) Application polls the color information of the incoming video to try to mimic, vaguely, the scene's lighting as applied to the 3d elements. Again, with some clever hand-wavery and finesse.

\n

(b) The dynamic 3d elements attempt to 'mimic' the colors of the video pixels around it. An animated lizard character or something. Maybe the colors of the faces of a mesh are assigned by averaging the colors of the area of the video image that the face normals are pointing at.

\n

(c) Random 'remixing' of nearby patches of video imagery applied to a 3d mesh to create its texture. Another Pixel Bender possibility. The invisible suit in the movie 'A Scanner Darkly'...\n

\n
[+] Save video to disk from within application
\n
\nAdd built-in feature to easily save composited output to disk (eg, using SimpleFlvWriter).
\n
[+] Science museum-style interactive art installation
\n
\nAs many AR ideas might require specific rules, setups, or optimal conditions, set them up in an expressedly controlled setting...\n
\n
[+] Use of augmented reality + head-mounted display/webcam + geotagging + wireless internet = William Gibson's Spook Country
\n
\nThe composited output fed into a head-mounted display/'VR goggles', with a lightweight webcam mounted on it pointing outward. When combined with GPS tagging, \"locative art\" a la Spook Country. (Actually, the GPS tagging wouldn't even be necessary, just nice to have). Ie: Users view 3d sculpures and whatnot (pushed via wireless) associated with AR squares (made to various scales) that are 'tagged' around the (real-world) landscape by other users.\n
\n

\n\n\n"}
Close Show/hide page


A few ideas for augmented reality


[+] Large-scale art installation idea
A giant 'AR square' (not sure of the right terminology here...) is painted on the side of a building. People come by to view it through the web application - an AIR app, possibly. Because the 'geometry' of the landscape around the painted square is known in advance (e.g., city streets, adjacent buildings, etc.), and can be assumed to remain 'constant', that information could be 'hardcoded' into the 3d scene for the dynamic 3d elements to interact with in a visually convincing manner. The 3d parts could also be properly occluded behind other buildings or certain street obstacles. The dynamic elements wouldn't even have to specifically interact with the location of the square itself; the square simply situates the camera in relation to the entire 3d space... There could also be a feature to save footage locally to be uploaded to a central repository later. And it wouldn't necessarily have to be a large-scale context; the same treatment could be done within a room-sized scene.
[+] An interactive setup-phase to define scene geometry
With the square remaining in a fixed position, the user uses a few simple tools to draw 3d planes and rectangles on top of the video to describe the physical space around them. E.g., the user draws four connecting line segments to describe a plane which represents a room's wall, and then defines a few more planes for the other walls, the floor, and the ceiling. Actually, you might just be able to pinpoint the 8 corners of the inside box which makes up the room... Maybe the user can also 'overlay' a 3d cube over a coffee table or desk, etc. The program could even draw from a library of furniture-like 3d objects for the user to translate, rotate, and scale to overlay onto the scene. These 3d elements then constitute solid objects which the dynamic 3d elements can interact with. The camera is free to move around within the scene as long as the square remains fixed. (A sketch of the basic click-to-3d step follows below.)
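
Since this is essentially an unprojection problem, here's a minimal sketch (in AS3, using only flash.geom) of that click-to-3d step: cast a ray from the 2d click through a pinhole camera and intersect it with the marker's plane. The focal-length parameter and the convention that the marker defines the z = 0 plane are my assumptions, not anything from a specific toolkit; sign conventions will vary.

    import flash.geom.Matrix3D;
    import flash.geom.Vector3D;

    // Intersect a ray with the plane z = 0 (the marker plane), in marker space.
    // Returns null if the ray is parallel to the plane or points away from it.
    function intersectMarkerPlane(rayOrigin:Vector3D, rayDir:Vector3D):Vector3D {
        if (Math.abs(rayDir.z) < 1e-6) return null;  // parallel to plane
        var t:Number = -rayOrigin.z / rayDir.z;      // solve origin.z + t*dir.z = 0
        if (t < 0) return null;                      // plane is behind the camera
        return new Vector3D(rayOrigin.x + t * rayDir.x,
                            rayOrigin.y + t * rayDir.y,
                            0);
    }

    // Build a marker-space ray from a screen click. Assumes a simple pinhole
    // camera: 'focal' in pixels, principal point at the viewport center, and
    // 'cameraToMarker' = inverse of the marker's model-view matrix.
    function rayFromClick(clickX:Number, clickY:Number, viewW:Number, viewH:Number,
                          focal:Number, cameraToMarker:Matrix3D):Object {
        var dir:Vector3D = new Vector3D(clickX - viewW / 2, clickY - viewH / 2, focal);
        var origin:Vector3D = cameraToMarker.transformVector(new Vector3D(0, 0, 0));
        var dirMarker:Vector3D = cameraToMarker.deltaTransformVector(dir);
        dirMarker.normalize();
        return { origin: origin, dir: dirMarker };
    }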

(a) 3d rain falling and hitting indoor furniture in interesting, 'convincing' ways. Or snowflakes falling and collecting on various surfaces. Or a room slowly flooding with water, from the floor up to the ceiling... The supplied room geometry could even form the basis for a platform game...

(b) Actually, if the user-supplied data and the positioning information from the AR Toolkit are accurate enough, there's no reason why you couldn't take snapshots from the video imagery, dynamically snip out the various quadrilaterals corresponding to the scene geometry, correct for perspective, and then skin the 3d geometry with that video texture information...!
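
A rough sketch of the 'snip and un-warp' step, assuming the wall quad's projected screen corners are already known from the marker transform. This uses a plain affine approximation via flash.geom.Matrix; true perspective correction would need a full homography (or at least a two-triangle split), so treat this as the cheap version.

    import flash.display.BitmapData;
    import flash.geom.Matrix;
    import flash.geom.Point;
    import flash.geom.Rectangle;

    // Extract a texW x texH texture for a wall quad from the live video frame.
    // p0, p1, p2 are the projected screen positions of the quad's top-left,
    // top-right, and bottom-left corners. Affine only -- the fourth corner is
    // implied, so strongly foreshortened quads will show some error.
    function grabQuadTexture(video:BitmapData, p0:Point, p1:Point, p2:Point,
                             texW:int, texH:int):BitmapData {
        // Matrix mapping texture space (0,0)-(texW,texH) onto the screen quad...
        var m:Matrix = new Matrix(
            (p1.x - p0.x) / texW, (p1.y - p0.y) / texW,
            (p2.x - p0.x) / texH, (p2.y - p0.y) / texH,
            p0.x, p0.y);
        m.invert();  // ...inverted, so it pulls screen pixels into texture space
        var tex:BitmapData = new BitmapData(texW, texH, false, 0);
        tex.draw(video, m, null, null, new Rectangle(0, 0, texW, texH), true);
        return tex;
    }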

(c) A 3d box is positioned over a real-world rectangular table or something. The top of that box is then used as a playing surface for something like ping pong or air hockey, which uses normal 3d game mechanics and assets but which is of course overlaid on top of live video. Imagine a replay feature where the winning shot is replayed in slow motion, but the user views it from different angles by moving the webcam. Or virtual indoor handball, using AR in combination with motion detection.
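
The replay idea mostly comes down to a recorded-state buffer. A minimal sketch, with illustrative names: record the puck's marker-space position every tick, then during replay step through the history at half speed while the camera/marker pose stays live, so moving the webcam changes the viewing angle of the replay.

    import flash.geom.Vector3D;

    var history:Vector.<Vector3D> = new Vector.<Vector3D>();
    var replayIndex:Number = 0;
    var replaying:Boolean = false;

    // Returns the puck position to render this tick, in marker space.
    function onTick(livePuckPos:Vector3D):Vector3D {
        if (!replaying) {
            history.push(livePuckPos.clone());               // record live play
            if (history.length > 90 * 30) history.shift();   // keep ~90s at 30fps
            return livePuckPos;
        }
        // Slow motion: advance half a recorded frame per rendered frame.
        var pos:Vector3D = history[int(replayIndex)];
        replayIndex = Math.min(replayIndex + 0.5, history.length - 1);
        return pos;  // rendered with the *current* camera pose, so the user
                     // can walk around the replay
    }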

(d) A square is stuck on a person's chest. If the person's height is supplied, we have enough information for a rough bounding box, which is enough to go on for a number of potentially interesting things. If we assume the subject remains generally upright and standing, we know the general position of the floor as well.
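
A sketch of that chest-marker bounding box; all the proportions here are guessed placeholders, just to show the shape of the calculation.

    import flash.geom.Vector3D;

    // Rough body bounding box in marker space, marker y-axis pointing up.
    // Assumes the marker sits roughly mid-torso and the subject stays upright;
    // the anthropometric ratios are crude guesses for illustration.
    function bodyBounds(heightCm:Number):Object {
        var shoulderW:Number = heightCm * 0.25;
        var chestFromTop:Number = heightCm * 0.3;  // marker ~30% down from head
        return {
            min: new Vector3D(-shoulderW / 2, -(heightCm - chestFromTop), -shoulderW * 0.3),
            max: new Vector3D( shoulderW / 2,  chestFromTop,               shoulderW * 0.3),
            floorY: -(heightCm - chestFromTop)     // floor plane, marker-relative
        };
    }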

Update: Of course all these questions have already been dreamed up and solved. This page from chronotext.org looks useful...

[+] Use of 'physics'; use of a physics engine (e.g., Jiglib)
As the AR square describes a plane, it naturally lends itself to being treated as a solid surface. If we introduce extra scene geometry as described in points 1 or 2 above, even more could be done. Cubes or spheres falling from the ceiling to fill up a room (of course). Or apply that sweet-ass Jiglib rally car example to a scene where the AR square is placed on the floor...
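
Independent of any particular engine, the basic 'marker as solid floor' behavior is just gravity integration plus a plane test in marker space. A minimal sketch, assuming the marker lies flat and defines z = 0 with z pointing up (all constants are placeholders):

    import flash.geom.Vector3D;

    // One ball under gravity, bouncing on the marker plane (z = 0).
    // Euler integration; RESTITUTION < 1 damps each bounce.
    var pos:Vector3D = new Vector3D(0, 0, 200);   // start 200 units above marker
    var vel:Vector3D = new Vector3D(20, 10, 0);
    const GRAVITY:Number = -980;                  // cm/s^2, z up
    const RESTITUTION:Number = 0.6;

    function step(dt:Number):void {
        vel.z += GRAVITY * dt;
        pos.x += vel.x * dt;
        pos.y += vel.y * dt;
        pos.z += vel.z * dt;
        if (pos.z < 0) {          // hit the marker plane
            pos.z = 0;
            vel.z = -vel.z * RESTITUTION;
        }
    }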

Update: Cloth demo by Saqoosha. (I guess I should do more 'research' before clicking 'Publish' ;)

[+] A specific visual piece: Tentacles
The AR square, placed on the body. Multiple tentacles coming out of the square in anime/sci-fi style. A fun exercise for playing with inverse kinematics; 3d Bézier curve animation; animating Bézier patches to generate mesh geometry (Away3D 2.3); tree-like branching of tentacles; 'generative art' generally; crazy, lurid motions. Assuming a fixed camera, various 'motion behaviors' based on the movement of the square on the body; tentacles reacting fluidly to translation and rotation of the square.
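
For the spine of one tentacle, a cubic Bézier with animated control points already gets you surprisingly far. A sketch, rooted at the marker origin; the sine-based wiggle constants are arbitrary:

    import flash.geom.Vector3D;

    // Evaluate a cubic Bezier at parameter t in [0, 1].
    function bezier(p0:Vector3D, p1:Vector3D, p2:Vector3D, p3:Vector3D, t:Number):Vector3D {
        var u:Number = 1 - t;
        return new Vector3D(
            u*u*u*p0.x + 3*u*u*t*p1.x + 3*u*t*t*p2.x + t*t*t*p3.x,
            u*u*u*p0.y + 3*u*u*t*p1.y + 3*u*t*t*p2.y + t*t*t*p3.y,
            u*u*u*p0.z + 3*u*u*t*p1.z + 3*u*t*t*p2.z + t*t*t*p3.z);
    }

    // Per frame: wiggle the control points, then sample the spine, which can
    // then drive cylinder segments or a skinned mesh.
    function tentacleSpine(time:Number, segments:int):Vector.<Vector3D> {
        var p0:Vector3D = new Vector3D(0, 0, 0);  // rooted at the marker
        var p1:Vector3D = new Vector3D(30 * Math.sin(time),        40,  20);
        var p2:Vector3D = new Vector3D(60 * Math.sin(time * 1.7),  90,  40);
        var p3:Vector3D = new Vector3D(20 * Math.sin(time * 0.6), 140,  30);
        var pts:Vector.<Vector3D> = new Vector.<Vector3D>();
        for (var i:int = 0; i <= segments; i++)
            pts.push(bezier(p0, p1, p2, p3, i / segments));
        return pts;
    }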
[+] Interaction of dynamic 3d elements between two or more squares
Particles coming out of one square and going into another; gravity-based motion between squares; arcs of electricity going from one to the other; tentacles (from point 4) coming out of one body and 'attacking' another body to which another square is attached, for some reason.
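
A sketch of the simplest case: one particle traveling from marker A's origin to marker B's origin in camera space, with a parabolic arc thrown in. Since both transforms are re-read from the tracker every frame, the stream follows the markers as they move. The arc height and the y-up assumption are mine.

    import flash.geom.Matrix3D;
    import flash.geom.Vector3D;

    // Position of one particle at trip progress t in [0, 1]. markerA/markerB
    // are the current marker transforms in camera space.
    function particlePos(markerA:Matrix3D, markerB:Matrix3D, t:Number):Vector3D {
        var a:Vector3D = markerA.position;  // marker origins in camera space
        var b:Vector3D = markerB.position;
        var p:Vector3D = new Vector3D(
            a.x + (b.x - a.x) * t,
            a.y + (b.y - a.y) * t,
            a.z + (b.z - a.z) * t);
        p.y -= 120 * 4 * t * (1 - t);       // arc offset, peaking at t = 0.5;
                                            // flip the sign if y points down
        return p;
    }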
[+] Interaction of the video bitmap information with the 3d elements
(a) The video image used as an environment map applied to the dynamic 3d elements, to make them look reflective and vaguely chrome-like (with a little cleverness and finesse).

(b) Pixel Bender-like effects applied to the areas of the video that are 'underneath' or adjacent to the 3d elements. A 3d 'fire' coming out of the square, with the video imagery around the fire shimmering from the 'heat' of it. A wind-like motion-blur effect emanating from a virtual fan or something, taking perspective into account. Pixel-dissolve-y action?
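
The heat shimmer can actually be faked without Pixel Bender at all: scrolling Perlin noise driving a DisplacementMapFilter on the video layer. A sketch; in practice you'd mask the filtered region to the area around the fire rather than displacing the whole layer, and the constants here are just starting points.

    import flash.display.BitmapData;
    import flash.display.BitmapDataChannel;
    import flash.display.DisplayObject;
    import flash.filters.DisplacementMapFilter;
    import flash.filters.DisplacementMapFilterMode;
    import flash.geom.Point;

    var noise:BitmapData = new BitmapData(320, 240, false, 0);
    var offsets:Array = [new Point(0, 0)];

    // Call once per frame on the video layer (or a masked sub-region of it).
    function updateShimmer(videoLayer:DisplayObject):void {
        offsets[0].y -= 3;  // scroll the noise upward, like rising heat
        noise.perlinNoise(32, 32, 2, 1, false, true,
                          BitmapDataChannel.RED, true, offsets);
        videoLayer.filters = [new DisplacementMapFilter(
            noise, new Point(0, 0),
            BitmapDataChannel.RED, BitmapDataChannel.RED,
            4, 4, DisplacementMapFilterMode.CLAMP)];
    }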

(c) Real-time chroma-keying to mask out background video imagery. Dynamic 3d elements can then be made to appear to circle around the subject by appearing both in front of and 'behind' the video.
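
A naive per-pixel chroma key, just to show the idea; at video resolution you'd want BitmapData.threshold() or a Pixel Bender kernel instead, since an AS3 getPixel loop per frame will be slow.

    import flash.display.BitmapData;

    // Pixels within 'tolerance' of keyColor become transparent; everything
    // else is copied through fully opaque.
    function chromaKey(frame:BitmapData, keyColor:uint, tolerance:int):BitmapData {
        var kr:int = (keyColor >> 16) & 0xFF;
        var kg:int = (keyColor >> 8) & 0xFF;
        var kb:int =  keyColor        & 0xFF;
        var out:BitmapData = new BitmapData(frame.width, frame.height, true, 0);
        for (var y:int = 0; y < frame.height; y++) {
            for (var x:int = 0; x < frame.width; x++) {
                var c:uint = frame.getPixel(x, y);
                var dr:int = ((c >> 16) & 0xFF) - kr;
                var dg:int = ((c >> 8) & 0xFF) - kg;
                var db:int = ( c        & 0xFF) - kb;
                if (dr * dr + dg * dg + db * db > tolerance * tolerance)
                    out.setPixel32(x, y, 0xFF000000 | c);  // keep this pixel
            }
        }
        return out;
    }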

(d) The idea of treating the entire video with various video filters momentarily/sporadically to put the artificial content in bolder relief appeals to me...

[+] Intelligent video color sampling
(a) The application polls the color information of the incoming video to try to mimic, vaguely, the scene's lighting as applied to the 3d elements. Again, with some clever hand-wavery and finesse.
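
For the overall scene color there's a classic cheap trick: draw the whole frame into a 1x1 BitmapData with smoothing on and read back the single pixel. It's not a true average (Flash's bilinear smoothing only samples a few source pixels at extreme downscales; stepping down in two or three passes helps), but it's probably close enough for mood lighting. A sketch:

    import flash.display.BitmapData;
    import flash.geom.Matrix;

    // Approximate average color of a video frame; feed the result into the
    // 3d scene's ambient light color each frame.
    function averageColor(frame:BitmapData):uint {
        var tiny:BitmapData = new BitmapData(1, 1, false, 0);
        var m:Matrix = new Matrix();
        m.scale(1 / frame.width, 1 / frame.height);
        tiny.draw(frame, m, null, null, null, true);  // smoothing on
        var c:uint = tiny.getPixel(0, 0);
        tiny.dispose();
        return c;
    }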

(b) The dynamic 3d elements attempt to 'mimic' the colors of the video pixels around them. An animated lizard character or something. Maybe the colors of a mesh's faces are assigned by averaging the colors of the area of the video image that the face normals point at.
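
A simplified sketch of that face-color idea: instead of sampling along the normal, project each face center (in camera space) back onto the video frame with a pinhole model and grab the pixel 'behind' it. The focal length is an assumed parameter, and averaging a small neighborhood rather than one pixel would smooth the result.

    import flash.display.BitmapData;
    import flash.geom.Vector3D;

    // Sample the video color behind a mesh face. faceCenter is in camera
    // space (z forward); 'focal' is the assumed focal length in pixels.
    function sampleFaceColor(frame:BitmapData, faceCenter:Vector3D, focal:Number):uint {
        if (faceCenter.z <= 0) return 0x808080;  // behind camera: fallback gray
        var sx:int = int(frame.width  / 2 + focal * faceCenter.x / faceCenter.z);
        var sy:int = int(frame.height / 2 + focal * faceCenter.y / faceCenter.z);
        if (sx < 0 || sy < 0 || sx >= frame.width || sy >= frame.height)
            return 0x808080;                     // off-screen: fallback gray
        return frame.getPixel(sx, sy);
    }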

(c) Random 'remixing' of nearby patches of video imagery applied to a 3d mesh to create its texture. Another Pixel Bender possibility. The 'scramble suit' in the movie 'A Scanner Darkly'...

[+] Save video to disk from within application
Add a built-in feature to easily save composited output to disk (e.g., using SimpleFlvWriter).
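
A rough usage sketch for an AIR build. Caveat: the method names below are from memory and may not match SimpleFlvWriter's actual API; treat them as placeholders.

    import flash.display.BitmapData;
    import flash.filesystem.File;

    var writer:SimpleFlvWriter = SimpleFlvWriter.getInstance();
    writer.createFile(File.desktopDirectory.resolvePath("ar-capture.flv"),
                      640, 480, 30);              // path, size, fps (assumed signature)

    // Feed each composited frame (video + 3d overlay drawn into a BitmapData).
    function onFrame(composited:BitmapData):void {
        writer.saveFrame(composited);
    }

    function onStop():void {
        writer.closeFile();
    }
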
[+] Science museum-style interactive art installation
As many AR ideas might require specific rules, setups, or optimal conditions, set them up in an expressly controlled setting...
[+] Use of augmented reality + head-mounted display/webcam + geotagging + wireless internet = William Gibson's Spook Country
The composited output is fed into a head-mounted display/'VR goggles', with a lightweight webcam mounted on it pointing outward. When combined with GPS tagging, you get "locative art" a la Spook Country. (Actually, the GPS tagging wouldn't even be necessary, just nice to have.) I.e., users view 3d sculptures and whatnot (pushed via wireless) associated with AR squares (made to various scales) that are 'tagged' around the (real-world) landscape by other users.
