Advances make optimizing everything easier

28 April 2022
Hank Hogan
A slide from Jongwook Kye's (Samsung) keynote talk on design technology co-optimization at SPIE Advanced Lithography and Patterning.

Design technology co-optimization, or DTCO, took center stage in two keynote presentations on Tuesday at the SPIE Advanced Lithography and Patterning conference. The speakers laid out a case for why DTCO is increasingly important, pointed to advances that make it more powerful, and indicated where it may be headed.

In his presentation, Jongwook Kye, executive vice president of design enablement at Samsung Electronics, noted that the pandemic had prevented people from meeting but didn’t keep them apart. “We have virtually connected together,” he said.

Semiconductors make that connectivity possible, and such capability is why their numbers are increasing. A car, Kye noted, at one time might have had 200 chips in it. That number is now headed to 2,000, resulting in more sensing, computing, and communication capabilities.

That chip growth is part of a larger trend. Analysts predict there will be 76 billion connected devices by 2030, triple the number today, Kye noted.

Computing and connectivity are driving growth in data and pushing semiconductor requirements when it comes to speed and power. At the same time, transistor scaling is slowing due to lithographic hurdles, Kye noted.

“We need to have smarter design,” he said in describing a solution.

He noted that this optimization method also leverages technology such as new transistor approaches that shrink chip size. In this way, DTCO increases semiconductor performance while cutting power and lowering cost.

Along those lines, one trend Kye sees coming is the heterogeneous 3D chip package. This approach stacks different chip types atop one another. An example might be a logic chip to do processing, a memory chip for data storage, and an image sensor to collect information. This arrangement shortens distances between chips and increases overall processing speed. Keeping the chips separate also allows each to be optimized for its particular function.

But today’s chip design tools struggle with this setup, Kye pointed out, because it requires determining the best connection pathways and layout using chips that may be at different process nodes. Simulating such a combination is another part of the challenge.

On the horizon are even more complexities, such as in-memory computing. This blending of what in today’s technology are separate logic and memory functions is currently found only in research projects. However, it could someday move out of the lab because it offers a way to do things like simple addition and subtraction quickly and at low power, Kye said.

In the other DTCO keynote, Vivek Singh, vice president of the advanced technology group at NVIDIA, discussed computational lithography. This technology uses software algorithms to improve photolithography performance.


A slide from Vivek Singh's (NVIDIA) keynote talk on design technology co-optimization.

Singh noted that at one time, printing a cross using lithography was simple because there was plenty of resolution margin. But when the industry pushed up against resolution limits, manufacturers found that adding small chevrons to the pattern improved patterning. Eventually these additions became more complex, and the advent of computational lithography produced the best possible results, along with a final mask layout that to the eye looked nothing at all like a simple cross.

Computational lithography software calculates how photons interact with a mask and how that light then interacts with the resist. It accounts for photolithographic stepper characteristics and other factors. Extensions such as inverse lithography can help produce curvilinear mask shapes that can double the depth of focus of a stepper. That improves process margin and yield.
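
As a rough illustration of that simulation step, the sketch below models an idealized, fully coherent imaging system in Python: the lens acts as a circular low-pass filter in spatial frequency, and a simple dose threshold stands in for the resist. This is a toy model under stated assumptions, not how production software works; real OPC and inverse lithography tools account for partial coherence, 3D mask effects, and detailed resist chemistry.

import numpy as np

def aerial_image(mask, na_cutoff=0.25):
    # Propagate the mask through an idealized, perfectly coherent lens.
    # The pupil is a circular low-pass filter in spatial frequency: detail
    # finer than the cutoff never reaches the wafer, so corners print rounded.
    spectrum = np.fft.fftshift(np.fft.fft2(mask))
    fy, fx = np.meshgrid(np.fft.fftshift(np.fft.fftfreq(mask.shape[0])),
                         np.fft.fftshift(np.fft.fftfreq(mask.shape[1])),
                         indexing="ij")
    pupil = (fx**2 + fy**2) <= na_cutoff**2
    field = np.fft.ifft2(np.fft.ifftshift(spectrum * pupil))
    return np.abs(field)**2  # intensity that exposes the resist

def resist_print(intensity, threshold=0.3):
    # Crude threshold-resist model: the resist clears wherever the local
    # dose exceeds the threshold.
    return intensity > threshold

# A simple cross-shaped target, echoing Singh's example.
mask = np.zeros((256, 256))
mask[118:138, 64:192] = 1.0  # horizontal bar
mask[64:192, 118:138] = 1.0  # vertical bar

printed = resist_print(aerial_image(mask))
print("target pixels:", int(mask.sum()), "| printed pixels:", int(printed.sum()))

Comparing the target and printed areas hints at the corner rounding that optical proximity correction and inverse lithography are designed to counteract.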

Computational lithography requires detailed modeling and could perhaps benefit from different types of modeling, Singh noted. For instance, when researchers extended the technique to EUV, they had to include shadowing effects.

However, available processing power constrains computational lithography. After some debate, the industry has settled on a new computing solution.

“There is a broad consensus we need GPUs,” Singh said.

Using graphics processing units produces a substantial speedup in computational lithography because GPUs are optimized for the calculations involved. At the system level, running on a GPU instead of a general-purpose CPU translates into at least a seven-fold increase in computational lithography speed, Singh reported. A GPU is a more expensive chip, but the savings in time and the resulting freedom to explore design options more than make up for that extra cost.
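
To make that trade-off concrete, here is a back-of-the-envelope throughput-per-dollar comparison in Python. The 3x relative GPU cost below is an assumed illustrative figure, not a number from the talk; the point is that a 7x speedup clears the break-even bar whenever the price premium stays under 7x.

def throughput_per_dollar(speedup, relative_cost):
    # Normalized jobs per unit cost: a platform wins when its speedup
    # outpaces its price premium.
    return speedup / relative_cost

cpu = throughput_per_dollar(speedup=1.0, relative_cost=1.0)
gpu = throughput_per_dollar(speedup=7.0, relative_cost=3.0)  # 3x cost is an assumption

print(f"normalized throughput per dollar -- CPU: {cpu:.2f}, GPU: {gpu:.2f}")
print("switching pays off" if gpu > cpu else "stay on CPU")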

“The threshold to switch [to a GPU] is well below the 7x improvement we’ve demonstrated,” Singh said. He added that the solution investigated consisted of a CPU/GPU combo, with the two different computing chips each handling part of the processing.

Hank Hogan is a science writer based in Reno, Nevada.
