From Cosmology to Biology

The story of adaptive optics applies lessons learned at the macro scale to the micro

01 April 2019
Nick Stockton
Credit: ESO/M. Kornmesser

In 2014, Eric Betzig won a Nobel Prize for breaking the law. Specifically, Abbe's law, which set a limit on how finely optical microscopes could resolve their subjects. Betzig's workaround used fluorescent molecules to light up living cells from the inside. But breaking one closely held assumption wasn't enough for Betzig. In his acceptance lecture at Stockholm University, he told the audience that optical microscopy must free itself from petri dishes and cover slips. "That's not where cells evolved; we need to look at them inside the whole organism," he said. That is, by peering into dense, living tissue. This was no idle call to action. Using technology borrowed from astronomy, his lab had already witnessed neurons firing in the brain of a live mouse.

That technology is called adaptive optics, and it's one of the marquee stories in the annals of tech transfer. First theorized in the 1950s, adaptive optics is, at its most basic, a two-part system: a sensor reads the distortion on incoming light, and a deformable mirror changes shape to match that distortion, unscrambling the photons as it reflects them toward the observer. Simple as that sounds, even the most rudimentary models took decades to develop. Many of the essential technological innovations took place in classified military programs, notably the oft-maligned "Star Wars" missile defense initiative. When that work was declassified in the early 1990s, it reinvigorated civilian astronomy.

In his book The Adaptive Optics Revolution, historian Robert Duffner calls adaptive optics "the most revolutionary technological breakthrough in astronomy since Galileo pointed his telescope skyward to explore the heavens 400 years ago." And adaptive optics is still revolutionizing science, at both macroscopic and microscopic scales. Biologists use adaptive optics to see cellular interplay in live tissue; vision scientists use it to map the aberrations of individual eyeballs; lithographers can now etch transistor circuitry inside deeply refractive crystal; and, as recently as January of this year, astronomers used it to observe what they believe is the birth of either a neutron star or a black hole.

It all started with the twinkle of the stars. After being flung from burning globs of gas, stellar photons cross the universe unmolested. That is, until they reach Earth's atmosphere. The mix of warm and cool air up there creates a turbulent mess for light, which bends each time it passes between pockets of air of differing density.

In 1953, astronomer Horace Babcock came up with an idea to untwinkle the stars. In the journal Publications of the Astronomical Society of the Pacific, he described a "seeing compensator" that compared distorted stellar light against light generated by a bright "control star." His original concept called for bouncing the incoming light off an Eidophor, a mirror covered in a thin layer of electrified oil. Inner gadgetry would produce a schlieren image, showing distortion in the incoming light waves. This schlieren image would then pass through a type of television tube called an orthicon, which would transmit the perceived distortion levels back to the Eidophor mirror. The electric impulses controlling the surface tension of the oily mirror would distort the Eidophor so it matched the incoming distortion. And this would be a closed feedback loop, constantly reading and morphing to match the distorted photons. Given how quickly atmospheric turbulence shifts, this last bit was essential. If it all worked perfectly, the astronomer would get a clear view of their desired celestial object.

Babcock's seeing compensator. Credit: https://doi.org/10.1086/126606 © The Astronomical Society of the Pacific.
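For readers who think in code, here is a minimal sketch of the closed-loop principle behind Babcock's compensator and every adaptive optics system since: measure the residual distortion, command the corrector to cancel a fraction of it, and repeat faster than the atmosphere changes. Everything here is a toy assumption (the actuator count, loop gain, and random-walk turbulence model), not a description of any real instrument:

```python
import numpy as np

rng = np.random.default_rng(42)

N_ACTUATORS = 37      # actuators on a hypothetical deformable mirror
LOOP_GAIN = 0.4       # fraction of the measured error corrected per cycle
N_STEPS = 500         # loop iterations; real systems run at ~kHz rates

mirror = np.zeros(N_ACTUATORS)                  # current mirror shape
turbulence = rng.normal(0.0, 0.5, N_ACTUATORS)  # initial atmospheric phase error

for _ in range(N_STEPS):
    # The atmosphere drifts a little each cycle (toy random-walk model).
    turbulence += rng.normal(0.0, 0.02, N_ACTUATORS)

    # The wavefront sensor sees the residual: atmosphere minus correction.
    residual = turbulence - mirror

    # Integrator controller: nudge the mirror toward the measured error.
    mirror += LOOP_GAIN * residual

print(f"RMS error without correction: {np.std(turbulence):.3f}")
print(f"RMS error with correction:    {np.std(turbulence - mirror):.3f}")
```

The essential design choice is the loop gain: too low and the mirror lags behind the atmosphere, too high and the loop starts amplifying sensor noise.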

Alas, contemporary technology couldn't meet Babcock's specs. It was another decade before anyone began designing rudimentary adaptive optics systems. But military and aerospace researchers, rather than astronomers, picked up the thread. Sputnik, launched in 1957, sparked a reconnaissance race between the US and USSR. As each nation covertly tried to keep tabs on its rival's armaments, both sides launched hundreds of satellites. "This was a time when the US was very interested to know what the Soviets were getting up to in space," says Robert Fugate, a retired senior scientist with the Air Force Research Laboratory in Kirtland, New Mexico, who spearheaded many of the military's later classified efforts with adaptive optics. By the mid-60s, some US defense thinkers thought Babcock's ideas might help them get a better look at those Soviet satellites. The same techniques might also reveal other threats, like incoming missiles. Some even thought they might someday translate into laser energy weapons capable of shooting those enemy missiles down.

The first military research took root in upstate New York, at the Rome Air Development Center. Funded by the Advanced Research Projects Agency (the Pentagon research agency that later became DARPA), these Air Force researchers teamed up with the civilian corporation Itek to tackle the basic systems behind adaptive optics: the wavefront sensor, the deformable mirror, and a processor capable of relaying the constant signals between the two. The groups also gathered data on light passing through the atmosphere. Some of those experiments involved flying a B-57 carrying a 1,200-watt tungsten light bulb over the base. Unsurprisingly, civilians living in the area concocted all sorts of stories to explain those lights, even though the research group was forthcoming about the nature of its work. By the early 70s, Rome and Itek had a prototype system capable of measuring atmospheric distortion: the real-time atmospheric compensation (RTAC) system.

Building and testing the first AO system

Impressed with Itek's role, in 1973 DARPA awarded the company a contract to further the RTAC concept. Once again, the company partnered with Rome's Air Force scientists. Between 1976 and 1981, they developed the compensated imaging system and installed it on the 1.6 meter telescope atop Haleakala volcano in Hawaii. In March of 1982, they tested it for the first time. In The Adaptive Optics Revolution, Robert Duffner describes this maiden run:

"Scientists aimed the telescope at a star. The first image danced around and looked washed out and blurry. But when Don Hanson pushed the button to activate the adaptive optics on the telescope ... a dramatic change occurred: the image became much brighter, clearer, and more detailed."

Exciting as it was, the experiment was classified, and the astronomers at that Maui telescope were sworn to secrecy, unable to tell their colleagues about the breakthrough. On top of that, the DOD didn't seem particularly impressed with the results, at least not enough to bring adaptive optics into production. See, adaptive optics systems need a lot of light to work. Some of it goes to the wavefront sensor, so the device can measure the distortion. Many stars, and most satellites, aren't bright enough on their own to feed the wavefront sensor with enough light left over for the telescope to form an image. Astronomers would work around this by imaging a second, nearby star as a "guide." Satellites were trickier. The best method was imaging the satellite just after sunset, while objects in near-Earth orbit were still illuminated by the sun. The adaptive optics system would use the reflected sunlight to measure the atmospheric distortion.

"However, making reflected sunlight from the satellite itself the guide star for the system just is not practical," says Fugate. For one, this limited the time a telescope with adaptive optics had to collect satellite imagery to just a few minutes each day. Also, sometimes the reflected sunlight wasn't bright enough to correct the imaging. These drawbacks made the system too unreliable for counter-espionage purposes. But the DOD did not give up on adaptive optics. Not even close.

Using lasers as guide stars

In 2011, the European Southern Observatory tested the Wendelstein laser guide star while a thunderstorm approached. Credit: ESO/M. Kornmesser

"In the late spring of 1982, we were called to go brief the "Jasons" (a group of hard-nosed scientists who evaluated proposals for the DOD, cofounded by none other than Charles Townes) on using a laser to generate a guide star in the sky to make measurements of atmospheric distortion to be used in an adaptive optics system," says Fugate. The Jasons approved this proposal.

The 1980s were heady times for Fugate. He returned to work at Starfire Optical Range in New Mexico. While refining laser guide star concepts, he continued to develop core adaptive optics systems and even managed to finagle a combined $1 million from the Air Force and Strategic Defense Initiative to get the Starfire Optical Range its own 1.5 meter telescope. And then, on 13 February 1989, he completed the first successful test of adaptive optics combined with a laser guide star, correcting atmospheric distortions in real time.

This was another major breakthrough for adaptive optics. The laser guide star allowed corrections that provided unprecedented views of stars and satellites alike. And, thanks to national security classification, it was a breakthrough that mainstream astronomy knew nothing about.

However, the veil would soon lift. The Strategic Defense Initiative—which had funded a lot of Fugate's work—was winding down. He and others set to work lobbying the military to declassify its work on adaptive optics and laser guide stars. Astronomers around the world were also working on adaptive optics and laser guide stars, but were wasting brain power trying to solve problems the US military had figured out years before.

By 1990, the Air Force had decided that declassification wouldn't hand America's enemies any real strategic advantage. It planned several avenues for releasing the information, the most important (and dramatic) of which was Fugate's presentation at the American Astronomical Society meeting in Seattle in 1991. His presentation featured a slide of side-by-side pictures: on the left, a blurry fuzz; on the right, two tightly focused, shining balls of light. The pictures showed the same star system, before and after adaptive optics with laser guide stars.

Fugate's presentation was a smash, and afterward he and his colleagues worked hard to share their work with other astronomers. His influence percolated into other disciplines as well.

Adaptive optics for vision science

Eyes are tempestuous little organs, and in the early 1990s vision scientist David Williams heard some colleagues from the University of Rochester's optics labs discussing a new technology that might help him peer through ocular distortion. "I cold-called Fugate out of the blue and told him what I wanted to do," says Williams. Fugate invited Williams to come on down to Albuquerque, but told him to show up around midnight. "Astronomers are nocturnal creatures," says Williams. After the visit, Williams returned to Rochester, bought a deformable mirror, and hired a grad student who knew how to build wavefront sensors. They spent the next few years building adaptive optics systems for eye science.

During the 90s, civilian astronomers were pushing adaptive optics further in their own field. One of the most proactive was SPIE Fellow Claire Max, an astrophysicist and Jasons member who, at the time, was based at Lawrence Livermore National Laboratory. She had spearheaded efforts to build the first astronomical laser guide star at Lick Observatory, and wanted to keep innovating systems for larger-aperture telescopes. However, she was having trouble finding funding to match the scale of her ambitions. That is, until she and her colleague Jerry Nelson at UC Santa Cruz attended a workshop given by the head of an NSF Science and Technology Center, where they heard about 10-year NSF grants worth a few million dollars each year. "This gives you time and money to do ambitious projects," says Max. "We knew these NSF centers liked doing something that involved more than one discipline," she says. So, Max and Nelson contacted Williams and included his vision work in their proposal.

They won the grant in 1999 and established the Center for Adaptive Optics at UC Santa Cruz, where for a decade astronomers and vision scientists collaboratively advanced adaptive optics in their respective fields while also operating an educational hub for the technology. After the government money ran dry, they secured other funding, and the center continues its mission today. The astronomers built bigger and better adaptive optics systems, such that even the largest-aperture telescopes could benefit from the technology. Meanwhile, the vision research led to significant breakthroughs in anatomy and clinical care. High-resolution 3D views of the retina helped researchers understand more about how eyes work, and became an important diagnostic tool.

Adaptive optics played a role in surgery, too. Williams helped Bausch + Lomb develop technology to map a person's cornea for laser vision correction. He is currently using adaptive optics in pursuit of cures for blindness. "We can look into the eye and see on a cellular spatial scale whether our treatments are making a difference or not," says Williams.

From the edge of space to the limits of biology

Credit: Reprinted with permission from Liu et al., Science 360:6386 (2018).

Adaptive optics got its start helping astronomers see deep into space. Over the past two decades, it's also allowed microscopists to peer into the nuances of cellular biology. "My PhD research project 20 years ago involved some of the first work in the application of adaptive optics to microscopes," says Martin Booth, who at the time was working under Tony Wilson and Mark Neil at Oxford University. "The major part of this work was the first demonstration of adaptive aberration correction in confocal laser scanning microscopes, which are commonly used in biomedical imaging."

Since then, SPIE Senior Member Booth has established his own Oxford lab, where he continues to focus on developing the adaptive optics systems themselves. This has resulted in a blossoming array of applications and discoveries. For instance, traditional wavefront sensors don't always work at microscopic scale, so he has developed an image-based aberration measurement scheme, which infers the aberration from the images themselves rather than from a dedicated sensor (a sketch of the idea appears below). And he believes this approach could cross back over to the macro scale, helping maintain the active alignment of the mirrors used in the cameras of Earth-facing satellites.
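To illustrate the flavor of image-based ("sensorless") correction, here is a simplified Python sketch of a modal scheme: perturb one aberration mode at a time, score each trial image with a sharpness metric, and fit a parabola through the three scores to estimate the optimal coefficient. The metric, the Gaussian stand-in for the microscope, and all the numbers are assumptions for illustration, not Booth's actual algorithm:

```python
import numpy as np

rng = np.random.default_rng(0)
TRUE_ABERRATION = rng.normal(0, 0.3, 10)   # hidden modal coefficients (toy model)

def sharpness(correction):
    """Stand-in for an image-quality metric (e.g., total image intensity in
    two-photon imaging): highest when the correction cancels the aberration."""
    residual = TRUE_ABERRATION - correction
    return np.exp(-np.sum(residual ** 2))

def optimize_mode(correction, mode, bias=0.5):
    """Score the metric at -bias, 0, +bias on one mode, then jump to the
    vertex of the parabola through the three samples."""
    scores = []
    for delta in (-bias, 0.0, +bias):
        trial = correction.copy()
        trial[mode] += delta
        scores.append(sharpness(trial))
    lo, mid, hi = scores
    curvature = hi - 2 * mid + lo
    if curvature < 0:  # a genuine peak lies between the samples
        correction[mode] += 0.5 * bias * (lo - hi) / curvature
    return correction

correction = np.zeros(10)
for _ in range(3):                  # a few sweeps over all modes
    for mode in range(10):
        correction = optimize_mode(correction, mode)

print(f"Residual RMS: {np.std(TRUE_ABERRATION - correction):.4f}")
```

In a real microscope, the metric would be computed from a camera or photodetector frame after each trial shape is applied to the deformable mirror; the parabolic step works because most sharpness metrics are roughly quadratic near their peak.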

Eric Betzig was introduced to adaptive optics in 2006 when he was a new hire at Janelia Research Campus. Though he'd been working on fluorescent schemes to light up single cells for superresolution (the same work that earned him a Nobel Prize in 2014), most of his new colleagues were focused on the brain. So, he got on board and hired his first postdoctoral researcher. Na Ji was part neuroscientist, part physicist, and was already using adaptive optics by the time she came to Betzig's lab.

However, as Booth had also learned, adaptive optics doesn't translate directly from astronomy to microscopy. "The challenge in astronomy is the rapid fluctuations," says Ji. "You have to make a feedback loop between the wavefront sensor and the deformable mirror thousands of times a second." In biology, the distortion doesn't fluctuate; it's just very dense. "I don't know if you've ever seen a brain, but they look like a blob of tofu," she says. Ji and Betzig came up with several highly technical alternative methods for peering through tissue. One involves a homebrewed wavefront sensor that works like an astronomical wavefront sensor in reverse (a simplified sketch appears below). They also used fluorescent molecules inside the brain as internal guide stars, and near-infrared light to penetrate deeper into the tissue.
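As a rough illustration of that "reverse" idea, consider pupil segmentation, one published Ji-Betzig approach: light is sent through one subregion of the objective's pupil at a time, and the resulting image shift reveals the local wavefront tilt, much as a Shack-Hartmann lenslet measures tilt on detection. The sketch below fakes the optics with np.roll and recovers shifts by cross-correlation; the subregion names and numbers are invented for illustration:

```python
import numpy as np

N = 64
y, x = np.mgrid[0:N, 0:N]
# Reference image: a bright bead (or guide star) seen through the full pupil.
reference = np.exp(-((x - N / 2) ** 2 + (y - N / 2) ** 2) / 20.0)

def measure_shift(image):
    """Cross-correlate against the reference; the peak offset is the image
    displacement in pixels, i.e., the local wavefront tilt."""
    corr = np.abs(np.fft.ifft2(np.fft.fft2(image) * np.conj(np.fft.fft2(reference))))
    dy, dx = np.unravel_index(np.argmax(corr), corr.shape)
    wrap = lambda v: v - N if v >= N // 2 else v
    return wrap(dy), wrap(dx)

# Simulate imaging through each pupil subregion: aberrated tissue tilts
# the beam, displacing the image (faked here with np.roll).
true_tilts = {"subregion A": (3, -2), "subregion B": (-1, 4), "subregion C": (0, 0)}
for name, tilt in true_tilts.items():
    image = np.roll(reference, tilt, axis=(0, 1))
    dy, dx = measure_shift(image)
    print(f"{name}: measured shift ({dy}, {dx}) -> corrective tilt ({-dy}, {-dx})")
```

Applying the opposite tilt to the matching segment of a corrective element flattens the wavefront segment by segment, with no need for the kilohertz closed loop that astronomy demands.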

Betzig says he's close to retiring from microscopy. His final project is building a microscope he calls an "adaptive optics Swiss Army knife." This machine will pair every type of modern optical microscopy (confocal, two-photon, structured illumination, superresolution localization, expansion, lattice light sheet) with an optimized adaptive optics system. "It's still the early days of adaptive optics microscopy, and most people aren't aware of what it can do," he says. He predicts that within the next 10 to 20 years every commercial microscope will come standard with adaptive optics. "Just like with telescopes, it will make no sense to use one without it."

Nick Stockton is a freelance writer based in Pittsburgh, Pennsylvania. He contributes to WIRED and Popular Science.

 
