
SPIE Press Book

Design Technology Co-Optimization in the Era of Sub-Resolution IC Scaling
Author(s): Lars W. Liebmann; Kaushik Vaidyanathan; Lawrence Pileggi

Book Description

The challenges facing the most-advanced technology nodes in the microelectronics industry can be overcome with the help of design technology co-optimization (DTCO). This mediation process aims to ensure competitive technology architecture definition while avoiding schedule or yield risks caused by unrealistically aggressive process assumptions. This Tutorial Text reviews the fundamental design objectives as well as the resulting topological constraints of a standard cell logic design flow. Cell design, placement, and routing are examined against the backdrop of ever-increasing design constraints in advanced-technology nodes.

Book Details

Date Published: 8 January 2016
Pages: 178
ISBN: 9781628419054
Volume: TT104

Table of Contents
Preface

1 The Escalating Design Complexity of Sub-Resolution Scaling
1.1 k1 > 0.6: The Good Old Days
1.2 0.6 > k1 > 0.5: Optical Proximity Correction
1.3 0.5 > k1 > 0.35: Off-Axis Illumination
1.4 0.35 > k1 > 0.25: Asymmetric Off-Axis Illumination
1.5 0.25 > k1 > 0.125: Double Patterning
1.6 k1 < 0.125: Higher-Order Frequency Multiplication

2 Multiple-Exposure-Patterning-Enhanced Digital Logic Design
2.1 Introduction to Physical Design
2.2 The Evolution of Standard Cell Layouts
     2.2.1 Attributes of standard logic cells
     2.2.2 Impact of patterning-driven design constraints
2.3 Standard Cell Layout in the Era of Double Patterning
2.4 Two Generations of Double-Patterning-Enhanced P&R
     2.4.1 Color-aware placement
     2.4.2 Color-aware routing
2.5 Beyond P&R: The Impact of Double Patterning on Floor Planning

3 Design for Manufacturability
3.1 Critical Area Optimization
3.2 Recommended Design Rules
3.3 Chemical Mechanical Polishing
3.4 Lithography-Friendly Design
3.5 Prescriptive Design Rules
3.6 Case Study: Design Implications of Restrictive Patterning
     3.6.1 Background
     3.6.2 Impact of extremely restrictive lithography on SoC design

4 Design Technology Co-Optimization
4.1 The Four Phases of DTCO
     4.1.1 Phase 1: establish scaling targets
     4.1.2 Phase 2: first architecture definition
     4.1.3 Phase 3: cell-level refinement
     4.1.4 Phase 4: block-level refinement
4.2 Case Study 1: Leaf-Cell DTCO at N14
     4.2.1 Technology definition
     4.2.2 Embedded memory
     4.2.3 Standard cell logic
     4.2.4 Analog design blocks
     4.2.5 Leaf-cell DTCO effective at N14: holistic DTCO provides further improvements
4.3 Case Study 2: Holistic DTCO at N14
     4.3.1 Holistic DTCO for embedded memories
     4.3.2 Holistic standard-cell DTCO
     4.3.3 Holistic DTCO for analog components
     4.3.4 Test chip and experimental results
4.4 Case Study 3: Using DTCO Techniques to Quantify the Scaling of N7 with EUV and 193i
     4.4.1 Introduction
     4.4.2 Scaling targets
     4.4.3 Comparison of RET implications
     4.4.4 Objectives for power, performance, and area scaling
     4.4.5 Cell architecture comparison
     4.4.6 Macro-level scaling assessment
     4.4.7 Cell-area-limited scaling
     4.4.8 Router-limited scaling
4.5 Conclusion

References

Preface

Design technology co-optimization (DTCO), at its core, is not a specific solution or even a rigorous engineering approach; it is fundamentally a mediation process between designers and process engineers that aims to ensure a competitive technology architecture definition while avoiding schedule or yield risks caused by unrealistically aggressive process assumptions. The authors of this book represent the two parties that come together in these discussions:

Lars Liebmann joined IBM in 1991, when enhancements in physical lithography resolution through a reduction of wavelength and an increase in numerical aperture were still keeping up with the semiconductor industry's relentless pace of transistor-density scaling. However, even back then, advances in exposure hardware lagged behind the need for higher resolution in support of early device and process development. Liebmann started his career developing design solutions for layout-intensive resolution-enhancement techniques (RETs). One such RET, alternating phase-shifted mask (altPSM) lithography, had just become lithographically viable, and one of Liebmann's first jobs involved drawing phase shapes onto transistors in an early exploratory device test chip at IBM's Advanced Technology Lab. Naturally, this tedious work led him to explore means of automating the layout manipulations necessary to implement such RETs and introduced him to the engineering discipline of electronic design automation (EDA). He joined his colleagues Mark Lavin and Bill Leipold, who had just begun work on a piece of code that could very well be the original ancestor of all optical proximity correction (OPC) solutions on the market today. This simple piece of EDA code, which they called ShrinkLonelyGates and which located and biased isolated transistors in a chip design, laid the foundation in 1992 for what many years later would become known as computational lithography. Access to these early (and by today's standards extremely limited) shape-manipulation functions in IBM's internal EDA engine, Niagara, not only opened the door for Liebmann to explore automatic altPSM design2 and more-complex OPC solutions but also led to spin-offs such as code to generate sub-resolution assist features (SRAFs).3 Although these automatic layout-manipulation routines were extremely useful in driving the adoption of these RETs for increasingly complex chip designs, equally important was the observation that it was quite easy for designers to draw shapes that were perfectly legal by that technology node's design rules but would cause the automatic generation routines to fail. Successful and efficient implementation of strong RETs required negotiations with the designers and forced the conversations that many years later grew into DTCO. Once advancements in fundamental exposure-tool resolution slowed and, after the introduction of 193-nm immersion lithography, stopped entirely, scaling through increasingly complex and design-restrictive RETs became the semiconductor industry's only path forward.

Even though altPSM was never adopted as a semiconductor manufacturing solution by IBM or the majority of the semiconductor industry, much of what Liebmann learned in those early years of computational lithography held true for many technology nodes to follow:

  • The design space is enormously complicated, and designers operate under crushing time pressure. Maintaining design efficiency must be paramount in any restriction the process engineers intend to impose on designers.
  • Very few designers actually draw transistors and wires; the design space consists entirely of a complex set of automated design solutions. Any process-driven constraints or required design manipulations have to seamlessly integrate into established design flows.
  • Any substantial design constraints must be negotiated early in the technology node and implemented far upstream in the design flow to avoid design re-spins that put a product's time-to-market schedule at risk.

Liebmann's interest in exploring the extent of semiconductor scaling by taking DTCO to its extreme limit caused him to cross paths with a research team at Carnegie Mellon University.

Kaushik Vaidyanathan started his career as an application-specific integrated circuit (ASIC) designer at IBM in 2007. It was a time when designers across the industry were becoming increasingly reliant on electronic design automation (EDA) tools. At IBM specifically, the in-house EDA tools had matured to a point where someone straight out of undergraduate study could be trained to design a multi-million-gate 17 mm × 17 mm N90 ASIC. Physical-design tools and methodologies were complex, and maneuvering them to accomplish design goals was challenging. After a year, however, Vaidyanathan found himself asking many questions about the inner workings of these tools and methodologies, only to realize that he did not have the background to seek or understand the answers. So, he decided to attend graduate school in 2009.

Thanks to Prof. Pileggi (his Ph.D. advisor), Vaidyanathan had the opportunity to build a background and work alongside industry veterans such as Liebmann, seeking answers and solutions to a daunting problem facing the IC industry, i.e., affordable and efficient scaling of systems-on-chip (SoCs) past N20. In their quest for answers, Vaidyanathan and his collaborators started with rigorous DTCO at the N14 technology node for different components of a SoC. This work lasted a couple of years and yielded several insights; the two most important ones for Vaidyanathan were

  • There is no substitute for experience: without input from experts such as Liebmann, a sensible exploration of the vast design and manufacturability tradeoff space quickly becomes unmanageable; and
  • Opportunities are hidden amidst challenges, a notion Vaidyanathan learned from his Ph.D. advisor that enabled the team to exploit technology challenges to develop frameworks for affordable design beyond N20, such as construct-based design and smart memory synthesis.

Much of the collaborative work between Carnegie Mellon and IBM is presented in this book as case studies in Sections 3.6, 4.2, and 4.3.

Lawrence Pileggi began his career at Westinghouse Research and Development as an IC designer in 1984. His first chip project was an ASIC elevator controller in 2-μm CMOS operating at a blazing clock frequency of 1 MHz. Intrigued by the challenges of performing very-large-scale design with somewhat unreliable CAD tools, he entered the Ph.D. program at Carnegie Mellon University in 1986 to participate in electronic design automation (EDA) research, with a specific focus on simulation algorithms for his thesis work. After six years as a faculty member at the University of Texas at Austin, he returned to Carnegie Mellon with the objective of working on research at the boundary between circuit design and design methodologies.

In 1997, the Focus Center Research Program (FCRP) was launched by the Semiconductor Research Corporation (SRC), a consortium of US semiconductor companies, to create the funding and collaboration needed to perform long-range research. Pileggi became a member of one of the first FCRP programs, the Gigascale Silicon Research Center (GSRC), led by Richard Newton at Berkeley. Although the semiconductor community faced many challenges at that time, Pileggi chose to focus his GSRC research on a problem foreseen by two of his Carnegie Mellon colleagues, Wojtek Maly and Andrzej Strojwas: the impending manufacturability challenges due to subwavelength lithography. His colleagues had developed tools and methods at Carnegie Mellon to evaluate difficult-to-print patterns and to count the number of unique patterns as a function of the lithography radius of influence.

Captivated by the impact of these patterns on design methods and circuit topologies, Pileggi and his group created a regular-fabrics-based design methodology that followed a simple philosophy: rather than asking lithographers to print arbitrary circuit patterns, as designers had done for decades, ask them what patterns they can print well and then develop circuits and methodologies that best utilize those patterns. Pileggi and his group worked with researchers from IBM, Intel, and other sponsoring members of the GSRC to explore the benefits and possibilities of regular-fabric design. Through some of those interactions, Pileggi met Lars Liebmann at IBM.

While Pileggi and his students worked with various companies, both through Carnegie Mellon and later via a small start-up company, Fabbrix (which was acquired by PDF Solutions in 2007), the deepest collaborations occurred with IBM, and with Liebmann in particular. In 2010, a partnership with Liebmann and IBM to work on the DARPA GRATE program produced much of the work that comprises the later sections of this book. Now, as the industry deploys the 14-nm FinFET technology node, the regular-fabrics approach, pattern templates, and construct-based design methods that were proposed are clearly evident.

Because DTCO has evolved from lithography-friendly design (LFD) and design for manufacturability (DFM), this book starts the DTCO discussion by first reviewing the impact that increasingly invasive RETs and multiple-exposure patterning (MEP) techniques have had on design. It then covers the major DFM techniques and highlights the competing optimization goals of LFD and DFM. However, DTCO differs from LFD and DFM in that its goal is not just to communicate process-driven constraints to the designers but to negotiate a more optimal tradeoff between designers' needs and process developers' concerns. To facilitate this co-optimization, it is important for the process engineers to understand the high-level goals of the design community. To that end, this book reviews the fundamental SoC design objectives as well as the resulting topological constraints on the different building blocks of a SoC, such as standard cells, embedded memories, analog components, and place-and-route flows. Finally, the mechanics of the DTCO process are explained as a series of steps that incrementally refine the technology architecture using concepts such as design-driven rules definition, design-rule-arc analysis, and construct-based technology definition. The efficacy of DTCO is illustrated with detailed case studies at N14 that contrast leaf-cell optimization against a more-comprehensive holistic DTCO approach. The final case study illustrates how DTCO can be applied to quantify the achievable scaling to N7 under different lithography assumptions. While it is impossible to present a simple "how to" manual for DTCO, the goal of this book is to break down the abstract concept of DTCO into specific, actionable components that collectively play an increasingly important role in maintaining the industry's aggressive pace of semiconductor scaling.

Lars W. Liebmann
Kaushik C. Vaidyanathan
Lawrence Pileggi
December 2015

