Augmented Reality: Kimchi and Chips, "Assembly"

*Another swell "structured light" installation by the bright and industrious Kimchi & Chips.

http://www.creativeapplications.net/openframeworks/assembly-by-kimchi-and-chips-5500-physical-pixels-illuminated-by-digital-light/

(((Sweating the many details:)))

Overview

‘Assembly’ by Kimchi and Chips, 2012
Permanent installation in Nakdong river cultural centre gallery in Busan, Republic of Korea

Description

A hemisphere of 5,500 white blocks occupies the air, each hanging from above in a pattern which repeats in order and disorder. Pixels play over the physical blocks as an emulsion of digital light within the physical space, producing a habitat for digital forms to exist in our world.

A group of external projectors penetrate the volume of cubes with pixel-rays, until every single one of the cubes becomes coated with pixels. By scanning with structured light, each pixel receives a set of known information, such as its absolute 3d position within the volume, and the identity of the block that it lives on.

The spectator is invited to study a boundary line between the digital and natural worlds, to see the limitations of how the 2 spaces co-exist. The aesthetic engine spans these digital and physical realms. Volumetric imagery is generated digitally as a cloud of discontinuous surfaces, which are then applied through the video projectors onto the polymer blocks. By rendering figurations of imaginary digital forms into the limiting error-driven physical system, the system acts as an agency of abstraction by redefining and grading the intentions of imaginary forms through its own vocabulary.

The flow of light in the installation creates visual mass. The spectator’s balance is shifted by this visceral movement, causing a kinaesthetic reaction. For the digital to exist in the real world, it must suffer its rules, and gain its possibilities. The sparse physical nature of the installation allows the digital form to create a continuous manifold within the space across the discrete blocks, whilst also passing through each block as a continuous pocket of physical space.

The polymer blocks are engineered both for diffusive/translucent properties and for a reflective/projectable response to the pixel-rays. This way a block can act as a site for illumination or for imagery.

The incomplete form of the hemisphere becomes extinct at its base, but extends through a reflection below, and therein becomes complete. It takes inspiration from nature, whilst becoming an artefact of technology.

Media

Polymer blocks
Steel wire
Aluminium and steel structure
Glass
Video projectors
Machine vision cameras
Computer
Software

(((But wait! There's more!)))

Technical principle

We think of a projector as ~1 million static spotlights, each aimed along a unique direction away from the projector's lens. Each spotlight hitting the structure creates a pool of light with a defined physical location in space. By scanning the 3D location of these tiny pools of light, we can understand how to construct a macroscopic volumetric scene out of them, and, most simply, imagine them as a cloud of individually addressable LEDs.

A set of 5 projectors is connected to a computer, from which we render to all projectors simultaneously in real time. Every part of every block is seen by at least 1 projector.

Using ofxGraycode and a set of 5 high resolution cameras (each respectively positioned alongside a projector) we make a structured light scan to find correspondences between projector pixels and camera pixels.
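The idea behind a Gray-code structured light scan can be sketched in a few lines: each projector column (and, in a second pass, each row) is encoded as a binary-reflected Gray code across successive frames, so the camera can decode, per camera pixel, which projector pixel is illuminating it. A minimal sketch of the encoding (function names are illustrative, not ofxGraycode's API):

```python
def to_gray(n):
    # Binary-reflected Gray code: adjacent values differ in exactly
    # one bit, so a mis-thresholded frame at a stripe boundary costs
    # at most one step of error in the decoded position.
    return n ^ (n >> 1)

def from_gray(g):
    # Invert by cumulative XOR of successive right-shifts.
    n = 0
    while g:
        n ^= g
        g >>= 1
    return n

def pixel_patterns(x, bits):
    # The on/off value projector column x shows in each of the
    # `bits` structured-light frames (most significant bit first).
    g = to_gray(x)
    return [(g >> (bits - 1 - i)) & 1 for i in range(bits)]
```

Decoding the per-frame on/off observations at a camera pixel with `from_gray` recovers the projector column (then row) for that pixel, giving the projector-to-camera correspondence.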

We solve the intrinsic and extrinsic properties of the cameras and projectors using these correspondences, and then use this information to triangulate the 3d position of every pixel (thereby creating voxels). (((You don't say. That's really quite interesting.)))
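Given a correspondence, the pixel's 3D position is the near-intersection of the camera's ray and the projector's ray through that pixel. A minimal midpoint triangulation, assuming both rays are already expressed in a common world frame (a simplification of the full solve):

```python
def closest_midpoint(p1, d1, p2, d2):
    # Rays: p1 + t*d1 (camera) and p2 + s*d2 (projector), both in a
    # shared world frame. Minimise the distance between the rays and
    # return the midpoint of the closest points as the triangulated
    # 3D pixel position.
    def dot(a, b): return sum(x * y for x, y in zip(a, b))
    r = [x - y for x, y in zip(p1, p2)]
    a, b, c = dot(d1, d1), dot(d1, d2), dot(d2, d2)
    d, e = dot(d1, r), dot(d2, r)
    denom = a * c - b * b          # zero only for parallel rays
    t = (b * e - c * d) / denom
    s = (a * e - b * d) / denom
    q1 = [p + t * v for p, v in zip(p1, d1)]
    q2 = [p + s * v for p, v in zip(p2, d2)]
    return [(x + y) / 2 for x, y in zip(q1, q2)]
```

In practice the rays never meet exactly, and the residual distance between the two closest points is itself a useful per-voxel error measure.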

Using Point Cloud Library we cluster this data to discover the locations of the blocks, and then fit cuboids to these clusters. Using this information we can now know for each of the ~3.5 million active pixels:

3d position of pixel
3d position of cube centroid
Cube index
Face index on cube
2d position of pixel on cube face
Wire index which the cube belongs to
Pixel normal
Cube rotation
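As a rough illustration, one entry of such a per-pixel map might be structured like this (the field names mirror the list above, but are our invention, not the project's actual on-disk format):

```python
from dataclasses import dataclass

@dataclass
class PixelRecord:
    # One record per active projector pixel (~3.5 million total).
    position: tuple        # 3d position of the pixel in world space
    cube_centroid: tuple   # 3d position of the host cube's centroid
    cube_index: int        # which of the 5,500 blocks it lands on
    face_index: int        # which face of that cube
    face_uv: tuple         # 2d position of the pixel on the cube face
    wire_index: int        # which hanging wire the cube belongs to
    normal: tuple          # pixel normal
    cube_rotation: tuple   # rotation of the cube
```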

Final system

A camera calibration app written in openFrameworks allows us to determine the intrinsics and extrinsics of the cameras following a chessboard calibration routine. This data can then be used to calibrate the projectors using structured light. We define the calibration tree as a travelling salesman problem, (((now see, kids, this is what that unbearable drudgery called "mathematics" is for; the "travelling salesman" is this legendary guy who roams from city to city to city, trying to find business models for augmented reality))) with each calibration route (e.g. camera A to camera C) assigned a cost based on the accuracy of the available calibration data. We then evaluate the best calibration tree for each camera and projector, and integrate through the calibration.
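The route-cost idea can be sketched as a search over a graph of devices, where each edge is an available pairwise calibration weighted by its error; a cheapest-path pass then gives every device its best chain of calibrations back to a reference camera. This is a simplified stand-in for the tree evaluation described above, with invented costs:

```python
import heapq

def calibration_tree(cost, root):
    # cost: {device: {neighbour: route_cost}} -- invented per-route
    # accuracy costs for each available pairwise calibration.
    # Returns each device's parent in its cheapest chain of
    # calibrations back to the reference device `root`, plus the
    # accumulated cost of that chain (Dijkstra's algorithm).
    dist = {root: 0.0}
    parent = {root: None}
    heap = [(0.0, root)]
    while heap:
        d, node = heapq.heappop(heap)
        if d > dist.get(node, float("inf")):
            continue  # stale heap entry
        for nb, c in cost.get(node, {}).items():
            nd = d + c
            if nd < dist.get(nb, float("inf")):
                dist[nb] = nd
                parent[nb] = node
                heapq.heappush(heap, (nd, nb))
    return parent, dist
```

With hypothetical costs, a projector may calibrate more accurately via an intermediate camera than via its direct route, and the tree captures exactly that.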

Each day a startup script first performs the scan in openFrameworks and then starts the runtime in VVVV.

An application first performs the simultaneous capture of structured light on the 5 cameras whilst stepping sequentially through each projector. Following this we triangulate the projector pixels to create a dense mapping between the 2D pixels of the projector and the physical locations of those pixels in 3D space. This map is then stored to disk. (((I hope that disk is as "permanent" as the steel wire.)))

The startup script then loads VVVV which transfers the datamaps to the GPU. We define ‘brushes’ using HLSL shaders which act on the dataset. Different brushes generate different visual effects, for example, some generate density fields which are interpreted as either gradients or isosurfaces. The VVVV graph plays through a script of generative animations and performs systems management.
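The density-field idea can be sketched outside HLSL: a 'brush' assigns a density to each scanned voxel position, which is then rendered either directly as a gradient or thresholded into a thin isosurface shell. A minimal Python stand-in (the spherical brush and threshold are invented for illustration):

```python
import math

def sphere_brush(voxel, centre, radius):
    # Invented spherical density field: 1 at the centre, falling
    # linearly to 0 at the radius.
    return max(0.0, 1.0 - math.dist(voxel, centre) / radius)

def shade(voxel, centre, radius, iso=None):
    # iso=None  -> gradient: use the density as brightness directly.
    # iso=value -> isosurface: light only voxels whose density sits
    #              within a thin shell around that level.
    rho = sphere_brush(voxel, centre, radius)
    if iso is None:
        return rho
    return 1.0 if abs(rho - iso) < 0.05 else 0.0
```

Evaluated per pixel against its triangulated 3D position, the same field yields either a solid glowing volume or a hollow shell sweeping through the blocks.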

Software platform choices

We started by identifying VVVV, openFrameworks and Cinema 4D as valuable platforms for developing the project. The intention was to play to the strengths of the available platforms in terms of quality of output and immediacy of creative process, and to experiment with developing new workflows. ((("Quality, Immediacy & Experimentation")))

VVVV was used throughout the pre-visualisation, simulation and prototyping stages, and later for runtime and systems management.
openFrameworks was used for more advanced vision tasks, where minimalism, timing, threading control and memory management were favoured.

Cinema 4D offers a tuned environment for designing and animating 3D content, but is generally limited to producing renders as 2D images/video or exporting meshes. Using Python, we ‘hacked’ Cinema 4D’s cameras to capture volumetric data from scenes, with each volume stored as a multitude of image files.

Much of the software research became focused on developing effective interoperability between these platforms, e.g.:

Developing an OpenGL render pipeline in VVVV, thereby allowing us to embed openFrameworks rendering within VVVV (experimental, fledgling)

Creating a threaded image processing platform within VVVV so that we could rapidly prototype advanced vision tasks within the VVVV graph (released and currently deployed in projects by other studios)

Developing the python scripted ‘volume capture rigs’ inside Cinema 4D to export volumetric fields to be reloaded into either a standalone openFrameworks simulation app, or VVVV for runtime (project specific)

Code

Throughout the development process, all code used for the project has been available on GitHub.

Project files
VVVV projects
openFrameworks projects
Applications
OpenNI-Measure : take measurements on a building site using a Kinect
Kinect intervalometer : apps for taking timelapse point-cloud recordings using a Kinect
VVVV.External.StartupControl : Manage installation startups on Windows computers
Algorithms
ProCamSolver : Experimental system for solving the intrinsics and extrinsics of projectors and cameras in a structured light scanning system
PCL projects
openFrameworks addons
ofxGrabCam : camera for effectively browsing 3d scenes
ofxRay : ray casting, projection maths and triangulation
ofxGraycode : structured light scanning
ofxCvGui2 : GUI for computer vision tasks
ofxTSP : solve Travelling Salesman Problem
ofxUeye : interface with IDS Imaging cameras
VVVV plugins
VVVV.Nodes.ProjectorSimulation : simulate projectors
VVVV.Nodes.Image : threaded image processing, OpenCV routines and structured light

Credits
Kimchi and Chips
Mimi Son
Elliot Woods

Production staff
Minjae Kim
Minjae Park

Mathematicians
Daniel Tang
Chris Coleman-Smith

Videography
MONOCROM
Mimi Son
Elliot Woods

Music by Johnny Ripper

Manufacturing
Star Acrylic, Seoul
Dongik Profile, Bucheon