From Disney to Artechouse

Digital artists seize on light-science technology to create immersive works of art
01 November 2021
By Bob Whitby
From an exhibition titled “Universe of Water Particles on a Rock Where People Gather.” Credit: Exhibition view of MORI Building Digital Art Museum teamLab Borderless, 2018, Odaiba, Tokyo ©teamLab. teamLab is represented by Pace Gallery.

Barefoot, you walk up a flowing river toward a waterfall of light. The elements blend until it's difficult to separate water from light. But the water is real; your feet are getting wet.

Continuing, you wade ankle-deep through a koi pond inhabited by neon-green, light-blue, and fluorescent-orange fish. They come close enough to brush your shins, but reach down to scoop one up and it dissolves into a flower blossom as the water trickles through your fingers. The fish seem to be aware of and attracted to your presence. They circle faster, morphing into effervescent star trails.

This experience is one of eight that collectively make up teamLab Planets, an immersive art space in Tokyo, Japan, that's setting attendance records and helping to redefine how people interact with art.

Immersive art is a catchphrase for work that blends digital projection technology, projection mapping, optical sensors, computer programming, and artistic vision to create vast visual fields on walls, ceilings, floors, objects, or entire buildings. The sheer scope of the pieces, and the powerful technology behind them, allows artists to create individual experiences for viewers, says Daniel Sauter, an artist and professor of data visualization at the Parsons School of Design.

"Definitely, making it unique to each viewer is one of the main motivators," says Sauter of the intention behind immersive art works. "Having the viewer close that loop, and be an inherent part of it, or required part of it...to really have something at stake.... If nothing's at stake, it's less engaging, less immersive."

TeamLab is a pioneer in the immersive art field. They describe themselves as an international collective of artists, programmers, architects, animators, and mathematicians founded in 2001. Members of the collective—they decline to say how many are in their ranks—consider optics technologies tools, like paintbrushes. They're most interested in what these technologies enable artists to create.

"Digital technology allows artistic expression to be released from the material world, gaining the ability to change form freely," says former teamLab member Michaela Kane. "The environments where viewers and artworks are placed together allow us to decide how to express those changes."

It took time for technology to catch up to teamLab's vision. Their first exhibit, in 2001, consisted of a musician playing an instrument while listeners simultaneously posted comments on the Internet that were in turn projected on a wall in the performance space.

Today, the collective operates permanent exhibition spaces in Tokyo and Shanghai, China, as well as traveling exhibits in cities throughout the world. TeamLab claims that its Tokyo location had 2.3 million visitors in 2019, its first full year of operation, setting a record for a single-artist installation.

The mix of technology and art teamLab helped create is now popping up seemingly everywhere. Exhibits projecting giant renderings of Van Gogh masterpieces such as The Starry Night toured dozens of US cities last summer, while standalone immersive art installations have appeared in scores of galleries, hotels, and businesses. Dedicated spaces for immersive art have recently opened in New York, Washington DC, Miami, Indianapolis, Los Angeles, Las Vegas, and other US cities.

Immersive art, it seems, has made a place for itself in the cultural landscape.


From "Machine Hallucination," an exhibition by Refik Anadol Studio©, produced and presented by Artechouse.

Projecting a moving image onto a flat or curved surface is not a new idea, of course. But immersive art is driven by the idea that almost any physical object can be used as a screen: a waterfall, a pond, a building's interior or exterior, the Space Shuttle Endeavour, and so on.

An early immersive experience that foreshadowed the technology behind immersive art comes from a familiar popular-culture source: The Haunted Mansion, which opened in Disneyland Park in 1969. Among the mansion's spectral occupants were Madame Leota, a fortune teller in the form of a disembodied head in a crystal ball, and five singing busts known as the Grim Grinning Ghosts. To create these illusions, Disneyland "Imagineers" projected film of an actor's face onto neutrally colored, featureless busts, and the inanimate objects came to life. It was Disney's first use of what academic researchers later called "spatially augmented reality," known today as projection mapping, says Frank Reifsnyder, a Walt Disney Company spokesman.

Almost 30 years later, researchers from the University of North Carolina (UNC) Chapel Hill created a 3D space designed so remote workers would feel as though they were in a shared room. They published their results in a 1998 paper titled "The Office of the Future." Ramesh Raskar and colleagues replaced standard office lighting with projectors, and used computer vision to capture real-time depth and reflectance information from irregular surfaces including furniture, objects, and people. With that information they were able to project images on the surfaces, render images of the surfaces that could be transmitted over a network and displayed remotely, and interpret changes in the surfaces to enable tracking, interaction, and augmented-reality applications.
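
The capture side of that system rests on standard camera geometry: given a calibrated camera and a per-pixel depth measurement, every pixel can be lifted back to the 3D point on whatever surface it sees. A minimal sketch of that step follows, in Python with NumPy; the intrinsics and depth value are invented for illustration and are not from the UNC paper.

```python
import numpy as np

# Hypothetical camera intrinsics: focal lengths and principal point, in pixels.
fx, fy, cx, cy = 800.0, 800.0, 320.0, 240.0

def backproject(u, v, depth):
    """Lift pixel (u, v) with measured depth (meters) to a 3D point
    in the camera's coordinate frame, using the pinhole model."""
    x = (u - cx) / fx * depth
    y = (v - cy) / fy * depth
    return np.array([x, y, depth])

# Example: the depth sensor reports 2.1 m at pixel (400, 260).
point = backproject(400, 260, 2.1)
print(point)  # 3D location of that patch of desk, wall, or person
```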

Raskar and colleagues next developed what they called "shader" lamps—projectors used to "place" texture, color, or graphics on 3D tabletop-size models. An architect could, for example, add virtual color and texture details to a scale model of a building, and the details would be visible from any viewing angle without the use of a virtual-reality headset.
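
The core trick behind shader lamps is to treat the projector as a camera run in reverse: once the projector's pose and optics are calibrated, any point on the physical model can be mapped to the projector pixel that illuminates it, so color and texture land exactly where they belong, whatever the viewer's position. Below is a rough sketch of that forward projection; the calibration numbers are placeholders, not values from the Raskar papers.

```python
import numpy as np

# Placeholder projector calibration: intrinsics K and pose [R | t]
# mapping world coordinates (meters) into the projector's frame.
K = np.array([[1400.0,    0.0, 960.0],
              [   0.0, 1400.0, 540.0],
              [   0.0,    0.0,   1.0]])
R = np.eye(3)                   # projector aimed straight at the model
t = np.array([0.0, 0.0, 1.5])   # model sits 1.5 m in front of the lens

def project_to_projector(world_point):
    """Return the projector pixel that lights a given 3D point on the model."""
    p = R @ world_point + t      # world -> projector coordinates
    u, v, w = K @ p              # apply the pinhole intrinsics
    return u / w, v / w          # perspective divide -> pixel coordinates

# A corner of the scale model at (0.1, -0.05, 0.0) in world coordinates:
print(project_to_projector(np.array([0.1, -0.05, 0.0])))
```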

The UNC work provided solutions to problems inherent in projecting details on something other than a flat screen, such as how to align images from separate projectors and how to simulate depth of field. "We present new algorithms to make the process of illumination practical," Raskar and colleagues wrote in a 2001 paper.
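
One of those practical problems is what happens where two projectors overlap: uncorrected, the shared band is twice as bright. A common remedy, and one ingredient of the blending the UNC work addresses, is to feather each projector's contribution across the overlap so the combined intensity stays constant. The toy version below shows only that feathering step.

```python
import numpy as np

def overlap_ramps(n):
    """Intensity weights across an n-pixel overlap band between two projectors:
    the left unit fades out while the right fades in, so their sum is constant."""
    x = np.linspace(0.0, 1.0, n)
    return 1.0 - x, x

left, right = overlap_ramps(8)
print(np.round(left + right, 3))  # 1.0 everywhere: no visible bright seam
```

Real installations also shape these ramps to account for projector gamma and geometric warp, but the principle is the same.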

Deepak Bandyopadhyay and colleagues, also from UNC, built on the idea of shader lamps in 2001. To the Raskar team's approach they added an optical tracker that kept the virtual details in correct register and proportion as the objects moved. The Bandyopadhyay team also incorporated a virtual "paintbrush" to apply colors or textures to movable 3D objects.

"Our 3D painting system, along with a new interaction style... induces a new ‘artistic style' for augmented reality to beautify the surfaces of everyday objects or make them appear to be made of some other material," the team wrote.

Although the foundational problems of immersive light displays were solved more than 20 years ago, more recent advances in technologies such as projectors have vastly improved the quality of the experience for the viewer. Projectors are now the optical muscle behind many large immersive art installations. They have become brighter, more colorful, longer lasting, and more energy efficient, but fundamentally they work as they always have: a light source, an image source, and optics to focus and project the results.

Of those three elements, the light source is where new technology most often comes into play, says Walter Burgess, vice president of operations for Power Technology, Inc., a laser manufacturer in Arkansas. "The projectors that we were using 10 years ago were modern, but the light source inside of it was based on 40- or 50-year-old technology, and that was a xenon light bulb."

Projectors capable of creating today's immersive art experiences or showing 3D movies at cinemas often use lasers as their light source. Lasers are brighter than bulbs for the same energy input, Burgess says, and they run cooler and are more efficient because the light is directional rather than radiating 360 degrees from the source. Perhaps most importantly, lasers produce a wider variety of colors than lightbulbs.

Indeed, according to Burgess, "The secret here is the color gamut. And I'm referring to the CIE 1931 chart [an international color standard] when I'm talking about how the human eye perceives color. And what we're able to do with lasers is choose colors of light, wavelengths of light, that the human eye really responds well to. As a result of choosing, say, three or six primary colors, we can produce the most vivid lifelike images that are available."
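
A rough way to see what Burgess means is to compare the triangles that different sets of primaries span on the CIE 1931 xy diagram. The sketch below uses the Rec. 709 primaries typical of conventional displays and the monochromatic, laser-reachable primaries defined by Rec. 2020; these are published standard values, not the wavelengths of Power Technology's own lasers.

```python
# Chromaticity (x, y) coordinates of two standard primary sets on the
# CIE 1931 diagram. Rec. 2020's primaries are monochromatic, i.e. the kind
# of single-wavelength light a laser produces.
REC_709  = [(0.640, 0.330), (0.300, 0.600), (0.150, 0.060)]   # R, G, B
REC_2020 = [(0.708, 0.292), (0.170, 0.797), (0.131, 0.046)]   # R, G, B

def gamut_area(primaries):
    """Area of the triangle the three primaries span (shoelace formula).
    A larger area means more visible chromaticities can be reproduced."""
    (x1, y1), (x2, y2), (x3, y3) = primaries
    return 0.5 * abs(x1 * (y2 - y3) + x2 * (y3 - y1) + x3 * (y1 - y2))

ratio = gamut_area(REC_2020) / gamut_area(REC_709)
print(f"Laser-style primaries cover ~{ratio:.1f}x the area")  # ~1.9x
```

Systems with more than three primaries, like the six-primary designs Burgess alludes to, span a polygon rather than a triangle and cover still more of the diagram.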


"Infinite Space," an exhibition by Refik Anadol Studio©, produced and presented by Artechouse.

If light and technology are, for some artists, tools like brushes and canvas, for digital artist Refik Anadol the essential input is data.

Anadol was eight years old when he first saw the sci-fi movie Blade Runner. The film's depiction of Los Angeles as a perpetually rainy metropolis dense with people, machines, and building-sized billboards inspired him to think about how machines might one day process data.

Anadol uses data as his raw material; vast screens, rooms, or entire buildings as his canvas; and light as his paint. "My work lies at the intersection of art, architecture, science, and technology," he said. "My work speculates on the question, ‘Can a machine learn, can it become conscious, and then even dream?'"

Anadol finds huge caches of data everywhere: wind speed and direction readings collected at an airport, brain waves captured by neuroscientists from people remembering their childhood, publicly available images of New York City, photos of space from the Hubble Space Telescope and from telescopes at New Mexico's Magdalena Ridge Observatory, the entire archives of the Los Angeles Philharmonic Orchestra, and so on. He runs the data through supercomputers applying machine-learning and artificial-intelligence techniques, teaching the machines to group and interpret the information, which he then outputs as a visual representation of cognition. The result is a phantasmagorical collage of undulating, seemingly three-dimensional shapes, forms, and colors.
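
Anadol's production pipeline is far more elaborate than anything that fits in a few lines, but the "grouping" step he describes is recognizably the territory of unsupervised learning. As a loose, hypothetical illustration only, here is how a batch of image feature vectors might be clustered and flattened to 2D coordinates for a visual layout; random numbers stand in for real embeddings.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.decomposition import PCA

# Stand-in for feature vectors extracted from a large image archive
# (in a real pipeline these would come from a neural network, not noise).
rng = np.random.default_rng(0)
features = rng.normal(size=(1000, 128))

# Group the images into visual "themes" without any labels.
clusters = KMeans(n_clusters=8, n_init=10, random_state=0).fit_predict(features)

# Flatten the 128-dimensional features to 2D coordinates that a renderer
# could use to place and animate the images on a wall-sized canvas.
layout = PCA(n_components=2).fit_transform(features)

print(layout.shape, np.bincount(clusters))
```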

Projected at scale, the work is mesmerizing. Anadol's 2019 "Machine Hallucination" exhibit, created in partnership with Artechouse for the opening of its New York City location, consisted of three million historic and contemporary images of the city, depicting buildings and structures from dozens of angles and dates throughout their history. Iconic structures such as the Statue of Liberty blossomed, shifted, and aged on the walls and floor of the Artechouse in ways no human could mediate.

For the 100th anniversary of the Los Angeles Philharmonic Orchestra in 2018, Anadol fed 45 terabytes of data, including the equivalent of 40,000 hours of audio from 16,471 performances, into his AI algorithms. For 10 nights, he projected the result onto the organic, flowing, metallic skin of the Frank Gehry-designed Walt Disney Concert Hall, using 42 projectors producing 50K visual resolution. In videos of the installation, the building appears to transform into a living neural network, the conscious sum of all that has taken place within.

It's a stretch to say Anadol's immersive art proves machines can dream, but not by much.

"I believe that machines can dream," Anadol said, "and show us the invisible narratives of data and the possibilities of alternative realities."

Bob Whitby is a freelance science writer based in Fayetteville, Arkansas.
