Proceedings Volume 6521

Design for Manufacturability through Design-Process Integration

Volume Details

Date Published: 15 March 2007
Contents: 8 Sessions, 64 Papers, 0 Presentations
Conference: SPIE Advanced Lithography 2007
Volume Number: 6521

Table of Contents

  • Front Matter: Volume 6521
  • Computational Lithography (Joint Session with 6520)
  • Keynote Presentation
  • Layout Verification
  • Layout Optimization
  • Process-Aware Timing and Power Analysis
  • DFM Efficiency
  • Poster Session
Front Matter: Volume 6521
Front Matter: Volume 6521
This PDF file contains the front matter associated with SPIE Proceedings Volume 6521, including the Title Page, Copyright information, Table of Contents, an AL07 Plenary Paper, and the Conference Committee listing.
Computational Lithography (Joint Session with 6520)
Model-based assist feature generation
We optimize a continuous-tone photomask to meet a set of edge-placement tolerances and 2-D image fidelity requirements for a set of dose and defocus values. The resulting continuous-tone mask, although not realizable, indicates where to place assist features and what their polarity should be. This algorithm derives assist features from first principles: when the mask is optimized for best focus only, the optimal continuous-tone photomask does not have any features that resemble assist features; when the mask is optimized for best focus and a defocus condition, the optimal continuous-tone photomask spontaneously grows assist features. The continuous-tone photomask also has features that can be identified as phase windows. Polygonal, quantized assist features are extracted from the optimal continuous-tone photomask.
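The optimization can be illustrated in miniature. The sketch below is a toy construction, not the authors' algorithm: a 1-D pixelated mask is optimized by gradient descent against a target intensity under a simple blur-based imaging model, with defocus mimicked by a wider kernel; side lobes resembling assist features, including negative (phase-window-like) transmission, emerge only when the defocus condition is included.

```python
import numpy as np

n = 128
x = np.arange(n)
target = ((x > 56) & (x < 72)).astype(float)        # one isolated bright line

def kernel(sigma):
    k = np.exp(-0.5 * (np.arange(-15, 16) / sigma) ** 2)
    return k / k.sum()

def grad(mask, sigmas):
    g = np.zeros_like(mask)
    for s in sigmas:
        field = np.convolve(mask, kernel(s), mode="same")
        err = field ** 2 - target                   # intensity error
        # symmetric kernel: the adjoint of the convolution is itself
        g += np.convolve(4.0 * err * field, kernel(s), mode="same")
    return g

mask = target.copy()                                # start from the design
for _ in range(400):                                # best focus + "defocus"
    mask = np.clip(mask - 0.05 * grad(mask, (2.0, 4.0)), -1.0, 1.0)

# Side lobes away from the main feature play the role of assist features;
# negative values correspond to phase-window-like regions.
print("extremes outside the main feature:", mask[:48].min(), mask[:48].max())
```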
Three-dimensional mask effect approximate modeling for sub-50-nm node device OPC
Sungsoo Suh, SukJoo Lee, Kyoung-yoon Back, et al.
In order to perform optical proximity correction for memory device nodes below 50nm half-pitch, so-called 3D mask effects need to be included in model-based OPC. As the mask pitch approaches the wavelength of the optical system and the angle of off-axis illumination becomes increasingly greater than that of a normal-incidence beam, the combined effects of transmission loss and mask-induced polarization cause deviations from the Kirchhoff thin-mask approximation. Presently, only a handful of methods are being developed for commercial use in full-chip optical proximity correction: the edge domain decomposition method (DDM), the rim-type boundary layer, and more recently the M3D model [1-6]. However, these methods currently require extensive modeling and proximity-correction runtime, although they are being continuously improved for accuracy and speed. In this work, results on an alternative approach to 3D mask modeling that is suitable for OPC are presented. Using experimental data from modeling test patterns and rigorous FDTD simulation results, the thin-mask approximation and the alternative approximate 3D mask approach are compared. The results indicate a 30% improvement in model accuracy in terms of root mean square error for cross-pole and dipole illumination conditions, while the OPC runtime remained similar. Furthermore, flash memory gate-poly OPC results using the approximate 3D mask model show better correlation to experimental results than a thin-mask model in minimum-resolution dense-feature and narrow-space regions. Thin-mask and proposed approximate 3D mask models were calibrated for three different illumination conditions: two X-dipole illuminations with Y-linear polarization and a cross-pole quasar illumination with X- and Y-linear polarization states. For each of these extreme off-axis illumination conditions, the approximate 3D mask model developed for OPC showed improved calibration against both test-pattern wafer images and rigorous simulation results. In addition, OPC layout image contours from the approximate 3D mask model correlated better with wafer images than the thin-mask approximation at nominal and defocus conditions.
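As a rough illustration of one family of approximate approaches, the snippet below applies the boundary-layer idea to a 1-D thin-mask transmission: narrow complex-valued strips (width and value are assumed here, not the authors' calibration) are painted over feature edges to mimic 3-D edge scattering and transmission loss, and the low diffraction orders are compared against the Kirchhoff model.

```python
import numpy as np

n = 512                                    # 1-D mask, 1 nm per pixel
mask_thin = np.zeros(n, dtype=complex)
mask_thin[200:280] = 1.0                   # 80 nm clear feature (Kirchhoff)

BL_WIDTH = 8                               # boundary-layer width, px (assumed)
BL_VALUE = 0.3 * np.exp(1j * np.deg2rad(-35.0))   # assumed complex value

mask_3d = mask_thin.copy()
edges = np.flatnonzero(np.abs(np.diff(mask_thin.real)) > 0.5)
for e in edges:                            # paint a strip over every edge
    mask_3d[max(e - BL_WIDTH // 2, 0): e + BL_WIDTH // 2] = BL_VALUE

# Compare the low diffraction orders the projection lens actually collects.
spec_thin = np.fft.fft(mask_thin) / n
spec_3d = np.fft.fft(mask_3d) / n
for order in range(4):
    print(order, abs(spec_thin[order]), abs(spec_3d[order]))
```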
Keynote Presentation
Collaborative platform, tool-kit, and physical models for DfM
Exploratory prototype DfM tools, methodologies, and emerging physical process models are described. The examples include new platforms for collaboration on process/device/circuits, visualization and quantification of manufacturing effects at the mask layout level, and advances toward fast-CAD models for lithography, CMP, etch, and photomasks. The examples have evolved from research supported over the last several years by DARPA, SRC, industry, and the State of California U.C. Discovery Program. DfM tools must enable complexity management with very fast, first-cut-accurate models across process, device, and circuit performance, together with new modes of collaboration. Collaboration can be promoted by supporting simultaneous views in naturally intuitive parameters for each contributor. An important theme is to shift the viewpoint on statistical variation in timing and power upstream, from gate-level CD distributions to a more deterministic set of sources of variation in characterized processes. Many of these nonidealities of manufacturing can be expressed at the mask plane in terms of lateral impact functions, to capture effects not included in design rules. Pattern Matching and Perturbation Formulations are shown to be well suited for quantifying these sources of variation.
Layout Verification
Lithography simulation in DfM: achievable accuracy versus requirements
Lithography simulation remains one of the primary aspects of most DfM flows, along with critical area analysis and chem-mech-polish (CMP) modeling. Often, the accuracy of the DfM flow is judged solely on the accuracy of the lithographic simulation. In this paper we attempt to refute that viewpoint and highlight the many sources of error in a DfM flow. We examine the factors that impact accuracy and attempt to quantify their effect. Differences between rigorous simulation, which includes full mask data preparation along with lithography simulation, and the use of compact models are explored. Required and achievable DfM accuracy over time and across multiple fabs is examined and the use of a "closed-loop" DfM flow is proposed.
Structural failure prediction using simplified lithography simulation models
Existing approaches to predicting the locations in a full-chip layout that carry a high risk of structural failure, i.e., bridging or pinching, rely either on lithography simulation using empirical resist models or on a more abstract empirical analysis of aerial image characteristics. Both approaches bear the risk of extrapolating an empirical model well beyond the regime within which it was calibrated and where it can be considered reliable. In this paper, we present as an alternative a systematic method (a) to build a simple, sturdy "constant threshold" (CTR) model that is valid over the required process window and (b) to determine empirical criteria for structural failure detection based on simulations with this CTR model. Even though such a model is not capable of accurately predicting the dimensions of structures, it captures trends of the printing behavior very well, even into the failure regime. From standard wafer data, such as that used for optical proximity correction (OPC) model building, it is straightforward to find out which test structures are not resolved well by a given process. Combined with the CTR model simulation results, this can be used to determine threshold values for the space and width of simulated structures that indicate structural failure, separately for bridging and pinching. The predictive power of this approach has been verified on hardware, and the approach is used in production.
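A minimal sketch of the detection step, with a toy aerial-image model and invented failure limits standing in for the calibrated CTR model and the wafer-derived thresholds:

```python
import numpy as np

x = np.linspace(0.0, 400.0, 801)                     # nm, 0.5 nm grid
def dip(center, width):                              # toy image of a dark line
    return np.exp(-0.5 * ((x - center) / (0.45 * width)) ** 2)

image = 1.0 - dip(170, 50) - dip(245, 50)            # two closely spaced lines
THRESHOLD = 0.50                                     # the calibrated CTR value
MIN_WIDTH, MIN_SPACE = 30.0, 25.0                    # failure limits (invented)

resist = image < THRESHOLD                           # True where resist remains

def run_lengths(flags):
    runs, count = [], 0
    for v in flags:
        if v:
            count += 1
        elif count:
            runs.append(count)
            count = 0
    if count:
        runs.append(count)
    return [r * 0.5 for r in runs]                   # grid step 0.5 nm

widths = run_lengths(resist)                         # printed line widths
spaces = run_lengths(~resist)[1:-1]                  # interior spaces only

print("pinching risk:", [w for w in widths if w < MIN_WIDTH])
print("bridging risk:", [s for s in spaces if s < MIN_SPACE])
```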
Unified process-aware system for circuit layout verification
One of the challenges in establishing quantitative manufacturability metrics has been defining a single design quality metric able to describe how a given region in the layout would perform under a specific manufacturing process. Historically, critical area analysis has been sufficient to evaluate the possible yield of a design, but as the relative importance of systematic mechanisms increases, this purely statistical approach needs to be enhanced by incorporating additional process information. In this paper we describe a consolidated metric and a system that can analyze multiple process conditions and different configurations to arrive at an optimal solution. The solution is based on a cost function that depends on the characteristics of the manufacturing process. A general form of the cost function and the parameters defining individual process impacts are discussed and, to demonstrate the system, different layout configurations are analyzed taking into account lithography process variations, random defect distributions, and recommended design rules. Since all layout configurations represent the same electrical devices, it is possible to dynamically determine the most robust layout implementation according to the cost function, which incorporates the current relative importance of each yield-loss contributor.
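The consolidated metric can be pictured as a weighted cost over yield-loss contributors; the sketch below uses invented component scores and weights purely to illustrate selecting the most robust of several electrically equivalent layouts.

```python
def layout_cost(litho_pv_band, critical_area, rec_rule_violations,
                w_litho=1.0, w_ca=0.5, w_rules=0.1):
    """Lower is better; the weights encode the current relative importance
    of each yield-loss contributor (all values here are illustrative)."""
    return (w_litho * litho_pv_band
            + w_ca * critical_area
            + w_rules * rec_rule_violations)

variants = {
    "dense":   layout_cost(520.0, 3.1, 7),   # smaller, but litho-sensitive
    "relaxed": layout_cost(180.0, 3.6, 1),   # more critical area, safer litho
}
print("most robust implementation:", min(variants, key=variants.get))
```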
Double patterning design split implementation and validation for the 32nm node
Single-exposure-capable systems for the 32nm half-pitch (HP) node may not be ready in time for production. At the likely NA of 1.35, still using water-immersion lithography, one option for generating the required dense pitches is double patterning, in which a design is printed with two separate exposure and etch steps so that each exposure sees a relaxed pitch. If a 2x increase in pitch can be achieved through the design split, double patterning could theoretically allow exposure systems conceived for the 65nm node to print 32nm node designs. In this paper we focus on the design-splitting and lithography aspects of double patterning the poly layer of 32nm logic cells, using the Synopsys full-chip physical verification and OPC conversion platforms. All 32nm node cells have been split in an automated fashion, targeting different degrees of aggressiveness in pitch reduction and polygon cutting. Every design split has gone through lithography optimization, Optical Proximity Correction (OPC), and Lithography Rule Checking (LRC) at NA values of 0.93, 1.20, and 1.35. Final comparisons are based on simulations across the process window. In addition, we have experimentally verified selected single-patterning problem areas on a 1.20 NA exposure tool (ASML XT:1700Fi at IMEC). With this information, we establish guidelines for double patterning conversions and present a new design rule for double patterning compliance checking applicable at full-chip scale.
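The split step itself is essentially a two-coloring problem. The sketch below (my construction, not the Synopsys implementation) builds a conflict graph between features spaced below the single-exposure limit and 2-colors it by BFS; an odd cycle signals that a polygon must be cut.

```python
from collections import deque

def split_two_masks(features, conflicts):
    """2-color the conflict graph; raise on odd cycles (polygon cut needed)."""
    color = {}
    for seed in features:
        if seed in color:
            continue
        color[seed] = 0
        queue = deque([seed])
        while queue:
            u = queue.popleft()
            for v in conflicts.get(u, ()):
                if v not in color:
                    color[v] = 1 - color[u]
                    queue.append(v)
                elif color[v] == color[u]:
                    raise ValueError(f"odd cycle at {u}-{v}: cut required")
    return color                     # 0 -> exposure A, 1 -> exposure B

feats = ["p1", "p2", "p3", "p4"]
confl = {"p1": ["p2"], "p2": ["p1", "p3"], "p3": ["p2", "p4"], "p4": ["p3"]}
print(split_two_masks(feats, confl))
```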
DRC Plus: augmenting standard DRC with pattern matching on 2D geometries
Vito Dai, Jie Yang, Norma Rodriguez, et al.
Design rule constraints (DRC) are the industry workhorse for constraining design to ensure both physical and electrical manufacturability. However, as process technologies continue to shrink and aggressive resolution enhancement technologies (RET) and optical proximity correction (OPC) are applied, standard DRC sometimes fails to fully capture the concept of design manufacturability. Consequently, some DRC-clean layout designs are found to be difficult to manufacture. Attempts have been made to "patch up" standard DRC with additional rules to identify these specific problematic cases, but due to the lack of specificity in DRC, these efforts often meet with mixed success. Quite often, a patch resolves the issue at hand, yet the enforcement of that DRC rule causes other problematic geometries to be generated as designers attempt to meet all the constraints given to them. In effect, designers meet the letter of the law, as defined by the DRC implementation code, without understanding the "spirit of the rule". This leads to more exceptional cases being added to the DRC manual, further increasing its complexity. DRC Plus adopts a different approach: it augments standard DRC by applying fast 2D pattern matching to the design layout to identify problematic 2D configurations which are difficult to manufacture, and then returns specific feedback to designers on how to resolve these issues. This basic approach offers several advantages over other DFM techniques: it is enforceable, it offers a simple pass/no-pass criterion, it is simple to document as part of the design manual, it does not require compute-intensive simulations, and it does not require highly accurate lithographic models that may not be available during design. These advantages allow DRC Plus to be inserted early in the design flow and enforced in conjunction with standard DRC.
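A toy version of the matching step, with an invented rasterized layout and a 2x2 problematic snippet standing in for the real pattern library:

```python
import numpy as np

layout = np.zeros((8, 8), dtype=int)
layout[2:6, 2] = 1                  # vertical wire
layout[4, 2:6] = 1                  # T-junction off the wire

bad = np.array([[1, 0],             # 2x2 snippet of a known-difficult geometry
                [1, 1]])

h, w = bad.shape
hits = [(r, c)
        for r in range(layout.shape[0] - h + 1)
        for c in range(layout.shape[1] - w + 1)
        if np.array_equal(layout[r:r + h, c:c + w], bad)]
print("problematic configurations at:", hits)   # specific designer feedback
```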
Layout Optimization
Process window aware layout optimization using hot spot fixing system
The feasibility of a Hot Spot Fixing (HSF) system in a DfM flow is studied and reported. Hot spot fixing using process simulation is indispensable under low-k1 lithography for logic devices with advanced design rules (DR). Hot spots such as pinching, bridging, and line-end shortening will occur, depending mainly on local pattern context. Proper calibration of DR, mask data preparation (MDP), resolution enhancement technique (RET), and optical proximity effect correction (OPC) reduces potential hot spots. However, the variety of pattern layouts is so enormous that, even with the most careful calibration of every process, unexpected potential hot spots are occasionally left in the design layout [1-2]. OPC optimization is useful for maximizing the common process margin, but it cannot expand an individual pattern's process margin without modification of the design layout. So, at an early design stage, hot spot extraction using a lithography compliance check (LCC) and manual modification of the design at hot spots is a simple and useful method. The problem is that it is difficult to determine how to modify the layout consistently with the DR and MDP/OPC rules. Proper layout modification requires intimate knowledge of the entire process; moreover, the modification work tends to be iterative and thus time-consuming. Therefore, using our automated HSF system at both the cell design stage and the chip design stage helps fix the design layout while avoiding fatal hot spots, with sufficient process margin and short turnaround time (TAT) [3-4]. The basic flow of the developed system is as follows: LCC extracts potential hot spots, and the hot spots are categorized by lithography error mode, grade, and surrounding context. A hot spot modification instructor, taking the surroundings into consideration, then generates a modification guide for every hot spot. The design data is automatically modified according to the instruction at every hot spot, complying with the design rules. The modification is verified with a design-rule checker (DRC) and process simulation to confirm hot spot elimination without side effects. In this work, HSF is implemented in the design flow for various 65 nm node logic devices. We extend the modification target to multiple critical layers, including active area, poly, local metal wire, and intermediate metal wire. The feasibility of the HSF system has been studied by applying it to around one hundred designs of various sizes with respect to pattern fixing rate and turnaround time (TAT). Moreover, process margin expansion, including depth of focus (DOF) and exposure latitude (EL), in small layouts was verified using process simulation and also experimentally, namely with scanning electron microscope (SEM) images of a focus-exposure matrix. Detailed results are shown in the paper.
Automated full-chip hotspot detection and removal flow for interconnect layers of cell-based designs
Ed Roseboom, Mark Rossman, Fang-Cheng Chang, et al.
An automated flow has been implemented to detect printability hotspots using a model-based solution and to automatically fix these hotspots during final routing optimization. A widening manufacturing gap has led to a dramatic increase in design rules that are either too restrictive or do not guarantee a litho/etch hotspot-free design. Since the semiconductor industry is currently limited to 193nm scanners, no relief is expected from the equipment side; it must come from the design side. Rule-driven routers fail to capture hotspots because they are based on ideal polygons that do not represent the real silicon image. Model-based hotspot detection can validate design manufacturability and accounts for the complex two-dimensional effects that stem from aggressive scaling of 193nm lithography. To enable this solution, manufacturing teams have started to release model-based lithography checks: initially as a service using the manufacturing flow to check small cells, and now by releasing process information to designers for full-chip lithography hotspot detection. However, while manual fixing is manageable at the cell level, hotspot removal in large placed-and-routed blocks or even full chips is more challenging. Not only must full-chip litho/etch simulation have a reasonable runtime, but the fixing solution needs to be connectivity-aware and incremental with a very fine step size. This is required for a timing-aware solution that mitigates hotspots without adversely affecting timing closure. The automated flow links a hotspot detection solution and a chip routing optimization tool: the hotspot detection solution passes the hotspot locations and associated fixing guidelines to the router, which removes the hotspots in an incremental fashion so as to have no significant impact on timing but a significant impact on printability. This process of checking for hotspots and incrementally fixing them is iterated until a hotspot-free design is achieved. This paper describes how fabless designers have integrated this hotspot detection solution into their design flow and how the hotspot removal flow efficiently removed most hotspots in real designs, thereby providing DFM closure.
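The detect-and-fix iteration can be sketched as follows; here the "design" is just a list of wire spacings and the "fix" nudges one spacing, standing in for the litho/etch simulator and the incremental, timing-aware router.

```python
# Toy rendition of the iterative flow (all functions are placeholders).
def detect_hotspots(spacings_nm, min_space=70.0):
    return [i for i, s in enumerate(spacings_nm) if s < min_space]

def incremental_fix(spacings_nm, index, step=5.0):
    fixed = list(spacings_nm)
    fixed[index] += step          # small, timing-friendly perturbation
    return fixed

design = [68.0, 90.0, 62.0, 75.0]
for iteration in range(20):
    hotspots = detect_hotspots(design)
    if not hotspots:
        print(f"DFM closure after {iteration} iterations:", design)
        break
    for i in hotspots:
        design = incremental_fix(design, i)
else:
    print("hotspots remain:", detect_hotspots(design))
```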
Model-assisted routing for improved lithography robustness
Tim Kong, Hardy Leung, Vivek Raghavan, et al.
This paper presents a model-assisted routing technique for improving the lithography robustness of synthesized layouts. Presupposing an accurate lithography model and a model-based layout weak-spot identification procedure, the method produces a routed layout in situ with acceptable turnaround time. The approach starts with a conventionally routed layout that, although conforming to design rules, may contain undesirable layout configurations that the router should reconcile. Since weak-spot identification is computation-intensive, rule-based filtering is first applied to the incoming layout to select regions for further model-based analysis. The router then performs a non-discriminating correction to reduce the number of potential weak spots. This reduced set subsequently undergoes model-based weak-spot analysis, which identifies the actual weak spots. The router finally optimizes the layout to remove the identified weak spots. This technique has been implemented in an industrial detail router and tested with 65-nm technology. Experimental results show that the method can effectively remove actual weak spots with reasonable runtime.
Model-based approach for design verification and co-optimization of catastrophic and parametric-related defects due to systematic manufacturing variations
Dan Perry, Mark Nakamoto, Nishath Verghese, et al.
Model-based hotspot detection and silicon-aware parametric analysis help designers optimize their chips for yield, area, and performance without the high cost of applying foundries' recommended design rules. This set of DFM/recommended rules is primarily litho-driven, but cannot guarantee a manufacturable design without imposing overly restrictive design requirements. This rule-based methodology of making design decisions based on idealized polygons that no longer represent what is on silicon needs to be replaced. Using model-based simulation of the lithography, OPC, RET, and etch effects, followed by electrical evaluation of the resulting shapes, leads to a more realistic and accurate analysis. This analysis can be used to evaluate intelligent design trade-offs and identify potential failures due to systematic manufacturing defects during the design phase. The successful DFM design methodology consists of three parts: 1. Achieve a more aggressive layout through limited usage of litho-related recommended design rules; a 10% to 15% area reduction is achieved by using more aggressive design rules, and DFM/recommended design rules are used only if there is no impact on cell size. 2. Identify and fix hotspots using a model-based layout printability checker; model-based litho and etch simulation are done at the cell level to identify hotspots, and violations of recommended rules may cause additional hotspots, which are then fixed. 3. Improve timing accuracy with a process-aware parametric analysis tool for transistors and interconnect, using contours of the diffusion, poly, and metal layers. In this paper, we show the results of this physical and electrical DFM methodology at Qualcomm. We describe how Qualcomm was able to develop more aggressive cell designs that yielded a 10% to 15% area reduction using this methodology. Model-based shape simulation was employed during library development to validate architecture choices and to optimize cell layout. At the physical verification stage, the shape simulator was run at full-chip level to identify and fix residual hotspots on interconnect layers, on poly or metal 1 due to interaction between adjacent cells, or on metal 1 due to interaction between routing (via and via cover) and cell geometry. To determine an appropriate electrical DFM solution, Qualcomm developed an experiment to examine various electrical effects. After reporting the silicon results of this experiment, which showed sizeable delay variations due to lithography-related systematic effects, we explain how contours of diffusion, poly, and metal can be used for silicon-aware parametric analysis of transistors and interconnect at the cell, block, and chip level.
Process-Aware Timing and Power Analysis
Context-specific leakage and delay analysis of a 65nm standard cell library for lithography-induced variability
Darsun Tsien, Chien Kuo Wang, Yajun Ran, et al.
A methodology to predict the impact of systematic manufacturing variations on the parametric behavior of standard cells in an integrated circuit is described. Such a methodology can be applied to the analysis of a full chip composed of standard cell components, and reports layout context-dependent changes in chip timing and power. For lithography and etch-induced variability, a study of a 65nm standard cell library has been done to examine the influence of cell context when looking at cell delay and leakage at different focus and exposure conditions. Cell context, or proximity effects from neighboring cells, can have a significant impact on cell performance across a process window, especially through focus, which needs to be considered for silicon-aware circuit analysis. The traditional lookup table approach used in static timing analysis or leakage power analysis needs to be augmented with an instance-specific offset for each cell in a design. Contours need to be generated for each transistor in each cell at different process points and the corresponding delay and leakage offsets should be calculated based on these contours. Electrical characterization also enables the use of other context-specific process models, such as strain and dopant fluctuations, without altering the final output. This allows subsequent tools to use the information for circuit analysis. Such a methodology is thereby useful for process-aware static timing and power analysis.
Patterning effect and correlated electrical model of post-OPC MOSFET devices
Y. C. Cheng, T. H. Ou, M. H. Wu, et al.
Accurate simulation of today's devices needs to account for the real geometric complexity of devices after the lithography and etching processes, especially as the channel length shrinks to 65 nm and below. Device performance can be quite different from what designers expect in the conventional IC design flow, because traditional design lacks consideration of photolithography effects and pattern geometry operations on the manufacturing side. To obtain more accurate predictions for circuits, an efficient approach to estimating nonrectangular MOSFET devices is proposed. In addition, an electrical hotspot criterion is proposed to investigate and verify the manufacturability of devices through the patterning processes. This electrical rule check is performed after the regular Design Rule Check (DRC) or Design for Manufacturing (DFM) rule check. Photolithography simulation and an industrial-strength SPICE model are used to further correlate the process variation. As a result, the correlation between process windows and the drive-current variation of devices is discussed explicitly in this paper.
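The spirit of the estimation can be shown with a back-of-envelope slicing model: the printed channel is cut into rectangles along its width and per-slice currents are summed. The I ~ W/L expression and all numbers below are illustrative, not the proposed SPICE-correlated model.

```python
def drive_current(slice_widths_nm, slice_lengths_nm, k=1.0):
    # toy long-channel expression per slice: I ~ k * W / L
    return sum(k * w / l for w, l in zip(slice_widths_nm, slice_lengths_nm))

nominal = drive_current([10] * 12, [65] * 12)     # drawn 65 nm gate
printed = drive_current([10] * 12,                # post-litho contour with
                        [68, 66, 65, 64, 63, 63,  # necking along the width
                         63, 63, 64, 65, 66, 68])
print(f"Ion shift vs drawn: {100 * (printed / nominal - 1):+.1f}%")
```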
Coupling-aware mixed dummy metal insertion for lithography
Liang Deng, Martin D. F. Wong, Kai-Yuan Chao, et al.
As integrated circuit manufacturing technology advances into the 65nm and 45nm nodes, extensive resolution enhancement techniques (RET) are needed to correctly manufacture a chip design. The widely used RET called off-axis illumination (OAI) introduces forbidden pitches, which lead to very complex design rules. It has been observed that imposing uniformity on layout designs can substantially improve printability under OAI. In this paper, two types of assist features for the metal layer are proposed to improve uniformity: printable assist features and segmented printable assist features. They incur different performance and manufacturing costs. Coupling and lithography costs of these assist features are discussed, and an optimal insertion algorithm is proposed that uses both types of dummy metal, considering the trade-offs between coupling and lithography costs.
Prediction of interconnect delay variations using pattern matching
An exploratory Process Variation Net Scanning (PVNS) approach to estimate interconnect delay variations is presented. It is shown that the geometrical response of lithographic nonidealities can be quickly predicted to first order with Pattern Matching. This concept can be extended to other process nonidealities by developing Maximum Lateral Impact Functions to capture the effects of variations in conductor sidewall angle and thickness from etch and CMP processes. The geometrical response for each variation can then be used to model the effective change in resistance and capacitance and perturb the corresponding values in the extracted netlist. The impact of PVNS is demonstrated using a 90nm digital design, and the runtime analysis indicates that this approach may potentially be twice as fast as traditional extraction. This allows for fast electrical analysis of independent process variations on different interconnect layers instead of traditional best and worst case corner analyses.
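A minimal sketch of the perturbation step, with invented per-segment geometry responses standing in for the Pattern Matching output: each extracted R and C is scaled, and an Elmore delay is re-evaluated instead of re-running full extraction.

```python
def elmore_delay(rc_chain):
    """rc_chain: list of (R_ohm, C_farad) segments from driver to load."""
    delay, downstream_c = 0.0, sum(c for _, c in rc_chain)
    for r, c in rc_chain:
        delay += r * downstream_c
        downstream_c -= c
    return delay

nominal = [(100.0, 2e-15), (150.0, 3e-15), (120.0, 2e-15)]
dR = [1.08, 1.02, 1.12]            # per-segment geometry response (invented)
dC = [0.97, 1.00, 0.94]
perturbed = [(r * a, c * b) for (r, c), a, b in zip(nominal, dR, dC)]

print("nominal delay  :", elmore_delay(nominal))
print("perturbed delay:", elmore_delay(perturbed))
```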
OPC to reduce variability of transistor properties
Scaling toward 65 nm and beyond, process variations increase and influence both functional yield and parametric yield. The process variations consist of systematic and random components. Systematic variations are caused by predictable design and process procedures; therefore, they should be removed from the process corner model used for LSI design. With scaling, printed images on a wafer show complicated distortion. A method of calculating distorted transistor properties without slicing transistors into individual rectangles has previously been proposed. Using this calculation method, transistor properties with distortion can be calculated, and a reduction of transistor property variations is expected. Transistor property variations caused by layout dependence could be reduced by using OPC with SPICE for each transistor; however, the calculation time of gate-length retargeting with SPICE is not realistic. Therefore, we have investigated approximating transistor properties using statistics of the gate-length distribution and layout parameters, and found that parameter fitting by the average and σ of the gate-length distribution of each transistor is useful. Applying this to standard cell libraries using OPC with transistor property estimation, our new OPC greatly reduces threshold voltage and drive current variations without increasing OPC runtime. It is difficult to suppress variation in all properties without area penalty; however, the property priority required for each transistor differs. Therefore, performance improvement of the whole circuit and chip is possible through discussion of priorities between manufacturing engineers and circuit designers, or by using design intent.
DFM Efficiency
Improving the power-performance of multicore processors through optimization of lithography and thermal processing
It is generally assumed that achieving a narrow distribution of physical gate length (Lpoly) for the poly conductor layer helps improve the power-performance metrics of modern integrated circuits. However, in advanced 90 nm technologies, there are other drivers of chip performance. In this paper we show that a global optimization of all variables is necessary to achieve optimum performance at the lowest leakage. We also describe how systematic physical gate-length variation can improve core matching in multicore designs.
Cost-performance tradeoff between design and manufacturing: DfM or MfD?
Design, CAD, and manufacturing are focused on optimizing the translation from electrical design to physical layout, and finally to mask data. The general goal is to improve integrated circuit (IC) functionality, reliability, manufacturability, testability, etc., using Design-for-X-ability (DfX) rules. Among these, the key role is played by DfM, which is most directly related to yield and, therefore, profit. A lot of pressure is being put on design teams to improve their understanding of all technology implementation issues, such that the mask pattern generated from the design layout would be "correct by construction" and comply with all of them. One can expect that such a DfM-compliant layout would require significant effort to create, and its salient features would include: Manhattan geometries and a restricted grid for critical geometries such as poly gates; large enclosures of the active area at the corners of implant layers; complete symmetry and proximity of matched devices on all masking levels; a minimal number of jogs even for complex features; neat alignment of source and drain contacts and of the line ends of gates and interconnects; doubled contacts and vias; etc. The question is whether the cost of following all these practices at design time is higher than that of other design improvement options. One alternative approach is to automatically adjust the "draft" layout using CAD post-processing such that all geometries are optimized to conform to the DfM rules. Another approach to DfM methodology is to improve manufacturing capabilities such that the process tools can achieve high yield for a layout that conforms only to some basic set of rules. This approach becomes even more relevant when the product line tries to address only selected DfM issues to improve die performance where it is most needed. We discuss layout flow charts to determine the best approach, depending on the direct cost of the solution, the wafer volume, product time to market, and the risks involved.
Hardware verification of litho-friendly design (LfD) methodologies
Reinhard März, Kai Peter, Sonja Gröndahl, et al.
With the upcoming technology generations, it will become more and more challenging to provide good yield and a fast yield ramp. The contribution of Resolution Enhancement Technologies (RET) to Design for Manufacturability (DfM) targets is to provide good printability over the whole process window, verified by print-image simulation (PW-ORC), and to identify and remove yield issues imprinted in the drawn layout in early phases of the design flow. Such a lithography-aware design data flow, which we call LfD (Litho-friendly Design), is a very important step towards a fully developed DfM environment. In this paper we report the application of an LfD design flow to library cells of MAPLE, an Infineon 65 nm design prototype fabricated by Chartered. The results of the process variability analysis are verified by experimental results (dose-focus exposure matrices).
Lithography and yield sensitivity analysis of SRAM scaling for the 32nm node
In this paper the impact of overlay and CD uniformity specifications on device and SRAM cell functional and parametric yield is analyzed. The variation of channel strain due to partial etching of the stress layer is determined, and we find that including this effect in the device parametric yield leads to severe CDU and overlay requirements. The method is applied to SRAM cells and memories, and it is shown that only the co-optimization of SRAM cell layout, CDU, and overlay can increase the number of good dies per wafer.
Poster Session
Litho aware method for circuit timing/power analysis through process
R. S. Fathy, M. Al-Imam, H. Diab, et al.
Device extraction and its quality are of increasing concern in the integrated circuit design flow. As circuits become more complicated, with concomitant reductions in geometry, the design engineer faces an ever-growing demand for accurate device extraction. For technology nodes of 65nm and below, approximating device geometry by the polygons drawn in the design layout might not be sufficient to describe the actual electrical behavior of these devices; therefore, contours from lithographic simulations need to be considered for more accurate results. Process-window variations have a considerable effect on the shape of the device wafer contour, so even with an accurate method to extract device parameters from wafer contours, one would still need to know which lithographic condition to simulate. Many questions arise here: Are contours that represent the best lithography conditions enough? Is there a need to also consider process variations? How do we include them in the extraction algorithm? In this paper we first present a method for extracting devices from layout coupled with lithographic simulations. Afterwards, a complete flow for circuit timing/power analysis using lithographic contours is described. Comparisons between timing results from the conventional LVS method and the litho-aware method show the importance of considering litho contours.
Circuit size optimization with multiple sources of variation and position-dependent correlation
Qian Ying Tang, Paul Friedberg, George Cheng, et al.
The growing impact of process variation on circuit performance requires statistical design approaches in which circuits are designed and optimized subject to an estimated variation. Previous work [1] has shown that by including extra margins in each of the gate delays and optimizing the gate sizes, circuit delay variation can be reduced by half. Our work goes further by deploying extended models that include delay variations due to Vth and Leff, as well as position-dependent variation. Two types of models are proposed to account for these variations: 1) a model that explicitly adds spatial correlation terms to the design objective; 2) a model that implicitly includes this effect through a modified version of Pelgrom's model. These design models are used to size a 32-bit Ladner-Fischer adder, and the circuit delay distributions are obtained from Monte Carlo simulations. The analysis shows that both types of models yield noticeable performance improvements over the model presented in [1]. In addition, the second model appears to be a more adequate way of modeling the various variation components and performs better than the first model; the drawback is a more complicated objective function.
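A Monte Carlo sketch of the two modeling ingredients (all constants are invented): a Pelgrom-style sigma ~ A/sqrt(WL) per gate, plus a distance-dependent correlation between the gates' threshold variations.

```python
import numpy as np

rng = np.random.default_rng(0)
n_gates = 8
pos = rng.uniform(0, 1000, size=(n_gates, 2))      # gate placements, um
W, L = 1.0, 0.065                                  # uniform sizing, um

A_VT = 3.0e-3                                      # V*um, assumed Pelgrom A
sigma = (A_VT / np.sqrt(W * L)) * np.ones(n_gates) # per-gate sigma(dVth)

d = np.linalg.norm(pos[:, None] - pos[None, :], axis=2)
corr = np.exp(-d / 300.0)                          # assumed correlation length
cov = np.outer(sigma, sigma) * corr

dvth = rng.multivariate_normal(np.zeros(n_gates), cov, size=20_000)
path_delay = 1.0 + (0.8 * dvth).sum(axis=1)        # toy linear delay model
print("mean", path_delay.mean(), "3-sigma", 3 * path_delay.std())
```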
Multidimensional physical design optimization for systematic and parametric yield loss reduction
L. N. Karklin, A. Arkhipov, Y. Belenky, et al.
This presentation demonstrates a Design-for-Manufacturing (DFM) solution in which a combination of design rules and lithography analysis (NILS, MEEF, and PW) is used as the basis for physical design correction and optimization. A physical design flow typically includes RET/OPC and post-OPC verification (silicon DRC) steps. Error markers generated at the verification step show the locations of so-called "hot spots": lithographically sensitive areas prone to silicon failures. In our approach, hot spots are traced back to the design, and the design is optimized to make those areas manufacturable. Hot spot markers of a flat post-OPC layout are analyzed and categorized, and only a unique instance of each hot spot is traced back to the design hierarchy and corrected. Layout correction and optimization is guided by litho analysis and design intent. A set of lithography-specific local constraints is added to the set of global constraints (DRC rules). A constraint-solving engine generates a new version of the layout that is DRC-correct and now "lithography/OPC friendly". Depending on a user-accessible set of parameters, design correction can be done with or without polygon edge segmentation and without critical-area increase. Different lithography technologies (such as immersion lithography with hyper-NA) and different process models can be applied. Device electrical performance in conjunction with simulated and extracted silicon shapes is discussed. Layout correction follows a minimum edge/polygon movement principle, which leads to a DRC-clean and LVS-correct solution.
Highly accurate model-based verification using SEM image calibration method
Byung-ug Cho, Dae-jin Park, Dong-suk Chang, et al.
In the past, when design rules were not tight, CD-based OPC modeling was acceptable. But shrinking design rules eventually led to small process windows, which in turn increased the MEEF (Mask Error Enhancement Factor). Hence, data for OPC modeling have become more complex and diverse in order to characterize critical OPC models. The number of measurement points for OPC model evaluation has increased to several hundred per layer, and the metrology requests for capturing pattern shapes on the wafer are no longer simple one-dimensional measurements. Traditional CD-based OPC modeling is based on one-dimensional parameter fitting and carries limited information; for this reason, the accuracy of the model has intrinsic limitations. Recently, developments in modeling methodology have led to SEM image calibration, which uses SEM images to calibrate against a large volume of two-dimensional information. SEM image calibration is based on real SEM images, each of which carries thousands of CD measurements. It needs only SEM images instead of several hundred CD data points, so data feedback is easier. But this approach makes it difficult to achieve a confident level of predictability, because a SEM image is restricted to a local region, and modeling accuracy is highly dependent on SEM image quality and local position. In this paper, we propose a SEM image calibration method that feeds the SEM-image-calibrated model back into model-based verification. With this method, modeling accuracy is increased and better post-OPC verification can be achieved. We discuss the application results on a sub-60nm device and the feasibility of this approach.
The study for increasing efficiency of OPC verification by reducing false errors from bending pattern by using different size of CD error non-checking area with various corner lengths
Sang-Uk Lee, Yong-Suk Lee, Jeahee Kim, et al.
In recent years, model-based verification of optical proximity effect correction (OPC) has become one of the most important items in the semiconductor industry. Major EDA companies have released various software tools for OPC verification and have continuously developed new methods to achieve more accurate verification results. Detecting only real errors while excluding false errors is the most important requirement for an accurate and fast verification process, because more time and human resources are needed to inspect the verification results as false errors increase. A major source of false errors is bending patterns; thousands of such errors are reported, and they are unavoidable. Most verification tools have a scheme for excluding them by using a CD error non-checking (filtering) area. With too large an area, real errors around bending patterns cannot be detected, while with too small an area, too many false errors are reported. Since most current verification tools support only a fixed area size for filtering, it has been impossible to achieve the most accurate and efficient verification results. Through optimization of the area size with different corner lengths, we obtained more accurate and efficient results and decreased the review time needed to find real errors. In this paper, a suggestion for increasing the efficiency of the OPC verification process by using different sizes of CD error non-checking area with various corner lengths is presented.
DFM flow by using combination between design-based metrology system and model-based verification at sub-50nm memory device
Cheol-kyun Kim, Jungchan Kim, Jaeseung Choi, et al.
As the minimum transistor length gets smaller, the variation and uniformity of transistor length seriously affect device performance, so the importance of optical proximity effect correction (OPC) and resolution enhancement technology (RET) cannot be overemphasized. However, the OPC process is regarded by some as a necessary evil for device performance. In fact, every group, including process and design, is interested in the whole-chip CD variation trend and CD uniformity, which represent the real wafer. Recently, design-based metrology systems have become capable of detecting differences between the database and wafer SEM images, and can extract whole-chip CD variation information. Based on these results, OPC abnormalities were identified and design feedback items were also disclosed. Other approaches have been developed by EDA companies, such as model-based OPC verification. Model-based verification is done for the full chip area using a well-calibrated model; its object is the prediction of potential weak points on the wafer and fast feedback to OPC and design before reticle fabrication. In order to achieve a robust design and sufficient device margin, an appropriate combination of a design-based metrology system and model-based verification tools is very important. Therefore, we evaluated a design-based metrology system and a matched model-based verification system to find the optimum combination of the two. In our study, a huge amount of data from wafer results is classified and analyzed by statistical methods and sorted into OPC feedback and design feedback items. Additionally, a novel DFM flow is proposed using the combination of design-based metrology and model-based verification tools.
Application of enhanced dynamic fragmentation to minimize false error from post OPC verification
Jae-Hyun Kang, Sang uk Lee, Jeahee Kim, et al.
The conventional OPC fragmentation method operates under a set of simple guiding principles: all polygon edges are fragmented uniformly into finite-size segments. Within each fragment, the intensity profile (aerial image) and edge-placement error (EPE) are calculated at a fixed location, and the entire fragment is then moved to correct the EPE at that location. Strictly speaking, model-based OPC depends on simulation results both for moving the fragments until the EPE is reduced to zero and for dividing the polygon edges. Fine fragmentation drastically increases data volume and the computation time required to perform OPC; therefore, a more powerful fragmentation mechanism is one of the major factors for the success of the OPC process. In this study, a new fragmentation approach that reduces OPC correction error has been tested. First, we identify the weak points of all patterns using slope, EPE, MEEF, and contrast. Second, the weak points receive high-frequency fragmentation based on simulated contour images, while the remaining patterns are handled by a normal correction recipe. This enables accurate OPC correction at the finely classified weak points and also reduces OPC time for non-critical patterns, which receive only moderate fragmentation.
Pattern decomposition for double patterning from photomask viewpoint
Double Patterning Technology (DPT) has been evaluated and reported since the 32nm half-pitch node was recognized to require it with conventional immersion ArF lithography. DPT requires decomposition of the pattern into two pattern sets, and the decomposition becomes especially complex for so-called logic patterns, with irregular pattern placement and many-vertex polygons. A naive decomposition often creates forced segmentation of those polygons and two photomasks with different characteristics, such as density or dominant line direction. Such decomposed photomasks not only raise the likelihood of differing error behavior but also leave annoying complexity untouched. It is well known that line ends and dense twisted lines produce large MEF, and tighter specifications for photomask fabrication have been required since the resolution limit fell below the exposure wavelength. A decomposition that places tight patterns on two separate photomasks can therefore lighten the fabrication load. In this paper, decomposition criteria for DPT that ease photomask fabrication are evaluated and discussed. Furthermore, although it is becoming widely appreciated that the overlay and CD uniformity of photomasks for DPT directly impact the final CD after wafer exposure, we also study other errors, such as CD shift or phase error, which are expected to be recoverable by exposure adjustment, in addition to those errors.
Impacts of optical proximity correction settings on electrical performances
Meng-Fu You, Philip C. W. Ng, Yi-Sheng Su, et al.
Due to non-ideal optical effects such as aberration and optical diffraction, printed poly gates on the wafer suffer from across-gate linewidth variation (AGLV) and across-chip linewidth variation (ACLV), especially in the subwavelength regime. The poly gate distortion affects device electrical characteristics, including drive current (Ion), leakage current (Ioff), and threshold voltage (Vt). For layout-sensitive circuits, such as compact memory cells, electrical performance can vary with the image distortion of each transistor even after applying resolution enhancement technologies (RETs) such as optical proximity correction (OPC). In this paper, we demonstrate the impact of OPC settings on the performance of 6T-SRAM cells. The printed transistor gate and active region patterns are simulated by an in-house OPC engine. The device model for each distorted transistor is then extracted by approximating each distorted channel pattern with a set of smaller rectangles. Electrical performance such as static noise margin (SNM) can then be obtained by incorporating these extracted device models into a circuit simulator. Preliminary results show that OPC settings such as segmentation length and the number of correction iterations can affect wafer image quality and electrical performance in different ways.
Lithography enhanced manufacturability analysis by using multilevel simulated contours
Beom-Seok Seo, Woon-Hyuk Choi, Jong-Woon Park, et al.
As sub-50nm logic lithography approaches a k1 value of 0.3, it seems an impossible task to print typical logic patterns composed of random shapes and mixed pitches using conventional resolution enhancement technology (RET). As effective solutions to this issue, lithography-friendly design (LFD) and advanced optical proximity correction (OPC) technology have been considered and developed. However, the distortion types of various two-dimensional patterns have rarely been investigated up to now, even though the observed lithographic hot spots are dominated by two-dimensional rather than one-dimensional patterns. In order to provide an LFD layout and good OPC performance for future-node logic devices, analysis and hot-spot classification of two-dimensional patterns need to be performed. Based on our analysis of various pattern types in a mimic-logic test block, a feedback strategy was implemented to reduce two-dimensional hot spots through the correction stage of the OPC recipes. In our study, we find the proper ground-rule values and a cost-effective methodology in which OPC and LFD reinforce each other. This provides a good methodology for future lithography technology nodes and for upstream design-for-manufacturability (DFM) approaches.
Scanner-characteristics-aware OPC modeling and correction
A scanner projection lens captures only a finite number of IC pattern diffraction orders. This low-pass filtering leads to a range of optical proximity effects (OPEs), such as pitch-dependent CD variations, corner rounding, and line-end pullback, resulting in imaged IC pattern excursions from the intended designs. These predictable OPEs are driven by the imaging conditions, such as wavelength, illuminator layout, reticle technology, and lens numerical aperture. To mitigate the pattern excursions due to OPEs, the photolithography community developed optical proximity correction methodologies, which were adopted and refined by the EDA industry. In current implementations, OPC applied to IC designs can correct layouts to compensate for OPEs and provide imaged patterns that meet the design requirements.
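The low-pass filtering can be demonstrated directly: under a coherent toy model, the image is the inverse FFT of the mask spectrum truncated by the pupil. The cutoff and pattern below are illustrative, not a real scanner configuration.

```python
import numpy as np

n, pitch = 256, 32                       # line/space pattern, pixels
mask = (np.arange(n) % pitch < pitch // 2).astype(float)

freqs = np.fft.fftfreq(n)                # spatial frequency, cycles/pixel
NA_CUTOFF = 0.05                         # pupil passes |f| <= cutoff (assumed)
pupil = (np.abs(freqs) <= NA_CUTOFF).astype(float)

field = np.fft.ifft(np.fft.fft(mask) * pupil)     # coherent imaging
image = np.abs(field) ** 2               # corner rounding / CD shifts appear

contrast = (image.max() - image.min()) / (image.max() + image.min())
print("orders kept:", int(pupil.sum()), "contrast:", round(contrast, 3))
```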
Wire sizing and spacing for lithographic printability optimization
As the VLSI feature size has decreased below the lithographic wavelength, the printability problem due to strong diffraction effects poses a serious threat to the progress of VLSI technology. A circuit layout with poor printability implies that it is difficult to make the printed features on wafers follow the designed shapes without distortion. The development of Resolution Enhancement Techniques (RET) can alleviate the printability problem but cannot reverse the trend of deterioration. Moreover, over-use of RET may dramatically increase photomask cost and increase the cycle time for volume production. Thus, there is a strong demand to consider the sub-wavelength printability problem in circuit layout design. However, layout printability optimization should not degrade circuit timing performance. In this paper, we introduce a wire sizing and spacing method to improve wire printability with minimal adverse impact on interconnect timing performance. A new printability model is proposed to handle partially coherent illumination. The difficult, multimodal printability optimization problem is handled with a sensitivity-based heuristic in a timing-aware fashion. Lithographic simulation results show that our approach can improve printability in terms of EPE (Edge Placement Error) by 20%-40% without violating timing, wire width, or spacing constraints.
A rigorous method to determine printability of a target layout
We present a necessary condition for an arbitrary 2-dimensional pattern to be printable by optical projection lithography. We call a pattern printable if it satisfies a given set of edge-placement tolerances for a given lithography model and process-window. The test can be made specific to a lithography model, or it can be made generic for a wavelength and numerical aperture. In the generic form, if a pattern is found to be not printable, the conclusion is valid for all RET technologies except for non-linear techniques such as litho-etch-litho-etch double-patterning and multi-photon lithography. The test determines printability of a target layout without applying RET/OPC.
Double patterning technology: process-window analysis in a many-dimensional space
We consider a memory device that is printed by double patterning (litho-etch-litho-etch) technology wherein positive images of 1/4-pitch lines are printed in each patterning step. We analyze the errors that affect the width of the spaces. We propose a graphical method of visualizing the many-dimensional process window for double patterning. Controlling the space width to ±10% of the half-pitch is not possible under the worst combination of errors. Statistical analysis shows that overlay and etch bias are the most significant contributors to the variability of spaces. 3σ[space-width] = 17% and 11% of the nominal space can be achieved for 3σ[overlay] = 6 nm and 3 nm, respectively, for a 40-nm half-pitch array printed using NA = 0.93.
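A Monte Carlo sketch of how the space-width variability accumulates (the error budget below is illustrative, not the paper's calibrated one): the printed space inherits CD errors from both exposures, overlay between them, and etch bias.

```python
import numpy as np

rng = np.random.default_rng(1)
N, half_pitch = 200_000, 40.0            # nm

cd1 = rng.normal(0.0, 1.0, N)            # line CD error, exposure 1
cd2 = rng.normal(0.0, 1.0, N)            # line CD error, exposure 2
overlay = rng.normal(0.0, 2.0, N)        # overlay (3-sigma = 6 nm)
etch = rng.normal(0.0, 1.0, N)           # etch bias

space = half_pitch - 0.5 * (cd1 + cd2) + overlay - etch
three_sigma = 3.0 * space.std()
print(f"3-sigma[space] = {three_sigma:.1f} nm "
      f"({100 * three_sigma / half_pitch:.0f}% of nominal)")
```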
Novel technique to separate systematic and random defects during 65nm and 45nm process development
Defect inspections performed in R&D often result in 100k to 1M defect counts on a single wafer. Such defect data combine systematic and random defects that may be yield-limiting or just nuisance defects. It is difficult to identify systematic defects from a defect wafer map by traditional defect classification, where a random sample of 50 to 100 defects is reviewed on a review SEM. Missing important systematic defect types through such sampling can be very costly during device introduction. Efficiently sampling defects for SEM review is not only challenging, but can result in a Pareto that lacks usefulness for R&D and for yield improvement. To mitigate this issue and to shorten the yield improvement cycle in advanced technology, a novel method is proposed. Instead of random sampling, we applied a pattern search engine to correlate each defect of interest (DOI) to its pattern background. Based on this approach, we identified an important defect type, the STI cave defect, as the major defect type on the defect Pareto. For this defect type, a stacked die map was generated that showed a distinctive signature. The result was compared against the design layout to confirm that the defects were occurring at certain locations of the layout. Afterwards, the defect types were reviewed using SEM and in-line FIB for further confirmation. We found the cause of this void defect type to be poor gap-fill in a deposition step. Using this novel technique, we were able to filter out a systematic defect type quickly and efficiently from a wafer map consisting of random and systematic defects.
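The stacking step that exposes a systematic signature can be sketched by folding defect coordinates into a single die modulo the die pitch (die size and bin size below are assumed):

```python
from collections import Counter

DIE_W, DIE_H, BIN = 10_000.0, 12_000.0, 250.0   # um; assumed die and bin size

def stacked_die_map(defects):
    """defects: iterable of (x, y) wafer coordinates in um."""
    counts = Counter()
    for x, y in defects:
        counts[(int((x % DIE_W) // BIN), int((y % DIE_H) // BIN))] += 1
    return counts

# Bins hit far more often than the median are systematic-defect candidates.
counts = stacked_die_map([(3100, 400), (13105, 12402), (23098, 24397),
                          (7000, 9000)])
print(counts.most_common(3))
```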
Intelligent fill pattern and extraction methodology for sensitive RF/analog or SoC products
A. Balasinski, J. Cetin, A. Kahng
The distribution of mask pattern density in the isolating or conducting layers of the die, such as active, poly, or metals, impacts product voltage tolerance and frequency response. At the active level, nonuniform pattern density lowers punch-through or breakdown voltages. At the metal levels, planarity issues give rise to high via resistances and variations of inter-layer capacitive coupling. The devices required to build an SoC product, such as precision resistors, inductors, RF FETs, and capacitors, have diverse geometry characteristics. Usually, the differences in pattern density they cause over the die cannot be mitigated at the design stage. Therefore, die pattern density has to be made uniform at the die integration stage by global addition of fill features (waffles). This presents a significant challenge, as the criteria for this addition are often contradictory or difficult to meet. The basic but time-consuming way of equalizing pattern density calls for manual addition of dummy features. In comparison, a simplistic automated geometric approach is to add fill pattern of fixed density, which would then become the target pattern density of the die. However, it may not be possible to equalize pattern density over all regions, even allowing for some changes in the die architecture in the course of either a manual or an automated waffling process. In addition, this approach tends to add dummy features even where unnecessary, creating local extremes of pattern density. Such an outcome is not always acceptable for the product's RF/analog applications, which can be compromised by capacitive coupling through the waffles. In the methodology proposed in this work, the die pattern density is first analyzed, followed by an adjustable, intelligent fill of dynamic density. This way, it is possible to keep the original pattern density and work only on the areas of low density. We also propose to adjust the standard cell methodology in order to enable pre-die-level modification of pattern density and its extraction, and to ensure that all the required blocks can be placed on the product with their parasitics accounted for.
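A toy version of the adaptive fill step (window grid, target density, and fill cap are assumed): measure local density first, then add fill only where density is low, rather than blanketing the die with fixed-density fill.

```python
import numpy as np

rng = np.random.default_rng(2)
density = rng.uniform(0.05, 0.60, size=(10, 10))  # measured window densities
TARGET, MAX_ADD = 0.30, 0.25                      # assumed fill rules

fill = np.clip(TARGET - density, 0.0, MAX_ADD)    # add only where needed
fill[density >= TARGET] = 0.0                     # leave dense regions alone

print("windows filled:", int((fill > 0).sum()),
      "worst remaining deficit:",
      round(float((TARGET - density - fill).max()), 3))
```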
Scanner parameter sensitivity analysis for OPE
The imaging power of microlithographic lenses, normalized to minimum feature size, has decreased with each generation, even with the progress of high-NA lithography: the rate of increase of NA has not kept pace with Moore's law. Therefore, low-k1 lithography techniques such as RET (Resolution Enhancement Technology) have been applied for more than a decade. RET, however, increases imaging contrast only for dedicated pattern types and sizes, and may decrease imaging power for other patterns. To fill this gap between actual and required imaging power, OPC (Optical Proximity Correction) technologies are becoming more and more important for leading-edge lithography, and OPC accuracy is essential for high-performance, high-quality chips. On the other hand, because of the low relative imaging power and high NA of current projection lenses, imaging performance is very sensitive to various kinds of errors such as defocus, dose error, aberration, apodization, flare, polarization aberration, and polarization status. To address this, scanner parameters, which can be obtained even before the scanner itself has been completed, should be embedded in the imaging simulation used for OPC design and verification to improve accuracy. In addition, OPE (Optical Proximity Effect) data simulated with the scanner parameters may be useful for early-stage reticle design before actual exposure on first-lot scanners. We have studied the sensitivity of OPE to scanner parameters and prioritized the parameters to be input to the OPC design and/or verification process, improving accuracy without a significant increase in computation.
OPC and design verification for DFM using die-to-database inspection
JungChan Kim, HyunJo Yang, JooKyoung Song, et al.
The downscaling of the feature sizes and pitches of semiconductor devices continuously demands improved device characteristics and high yield. In the lithography process, RET techniques such as immersion and polarization, including strong PSM masks, have enabled this improvement in printability and device downscaling. Optical lithography is nevertheless approaching its limit, and successor techniques such as EUV are needed but not yet available. From this point of view, lithography-friendly layout enables good printability and a stable process, and its scope is being enlarged and applied to most semiconductor devices. Therefore, in order to realize precise and effective lithography-friendly layout, we need full-chip data feedback on design issues, OPC errors, aberrations, and process variables. In this paper, we report the results of data feedback using a new DFM verification tool. This tool enables full-chip inspection through an e-beam scan method with fast and accurate output, and the data can be classified by item for correction and stability checks through die-to-database inspection. In the gate process especially, total CD distributions over the full chip can be displayed and analyzed for each target with a simple method. First, we obtain accuracy data for each target and CD uniformity from hundreds of thousands of gate patterns. Second, we detect subtle OPC errors from modeling accuracy and duty differences, which are difficult to obtain from measurements of only thousands of patterns. Finally, we investigated specific patterns and areas for electrical characteristic analysis over the full chip. These results should be considered and reflected at the design stage.
Self-assembled dummy patterns for lithography process margin enhancement
James Moon, Byoung-Sub Nam, Joo-Hong Jeong, et al.
Over the last couple of years, Design For Manufacturability (DFM) has progressed from concept to practice: what was then only a concept is now actually applied at the design step to meet the high demands placed on today's advanced devices. One DFM procedure that benefits lithography process margin is the generation of dummy patterns. Dummy patterns generated at the design step enable a stable, high lithography process margin for many advanced devices. However, actual generation of dummy patterns is complex and risky for many of the layers used in memory devices. Dummy generation for simple layers such as poly or isolation is not difficult, since the patterns on these layers are usually one-dimensional or very simple two-dimensional patterns. But for interconnection layers composed of complex two-dimensional patterns, dummy pattern generation is risky and requires considerable time and effort to place the dummies safely. In this study, we propose a simple self-assembled dummy (SAD) generation algorithm to place dummy patterns on complex two-dimensional interconnection layers. The algorithm automatically self-assembles dummy patterns based on the original design layout, thereby ensuring the safety and simplicity of the generated dummies with respect to the original design. We also evaluate SAD on an interconnection layer using a commercial Model Based Verification (MBV) tool to verify its applicability from both litho-process-margin and DFM perspectives.
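The abstract does not spell out the SAD algorithm, so the sketch below illustrates one plausible reading as an assumption: each design polygon is grown outward and the design plus a keep-out margin is subtracted, so the dummies inherit the shape and orientation of the interconnect they surround. The offsets and the two-wire fragment are hypothetical; shapely is used for the geometry.

```python
from shapely.geometry import box
from shapely.ops import unary_union

def self_assembled_dummies(design_polys, offset=0.3, keepout=0.1):
    """Grow each design polygon outward and subtract the design plus a
    keep-out margin; what remains are ring-shaped dummies that follow
    the geometry of the original interconnect (offset/keepout in
    microns, hypothetical values).
    """
    design = unary_union(design_polys)
    grown = design.buffer(offset)
    return grown.difference(design.buffer(keepout))

# Hypothetical two-wire interconnect fragment.
wires = [box(0.0, 0.0, 5.0, 0.2), box(0.0, 1.0, 5.0, 1.2)]
dummies = self_assembled_dummies(wires)
print(round(dummies.area, 3))  # total dummy area derived from the design
```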
Modeling spatial gate length variation in the 0.2µm to 1.15mm separation range
Paul Friedberg, Willy Cheung, George Cheng, et al.
Circuit performance variability is significantly impacted by variations in gate length caused during microlithographic pattern transfer [1]. Previous studies [2] have shown through simulation that once deterministic sources of variation are fully reconciled, long-range (millimeter separation scale) spatial correlation in the remaining variation is virtually zero. To complete the model of spatial variation and correlation in critical dimension (CD), a new set of electrical linewidth metrology (ELM) test structures was designed to target the sub-mm regime [3]. In this work, we report measurement results from those micron-scale ELM test structures. The micron-scale (0.2μm to 1.15mm) variation can be decomposed into a very large chip-to-chip component, a small and systematic density-dependent component, and a small random component; spatial correlation in gate length at the micron scale is negligible.
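The decomposition reported above can be illustrated with a minimal sketch; the form below (per-chip mean, then a linear density regression, then a residual) is an assumption for illustration, not the authors' actual analysis, and the data are hypothetical.

```python
import numpy as np

def decompose_cd(cd, chip_id, density):
    """Split CD measurements into chip-to-chip, density-dependent, and
    random components. cd, density: 1-D arrays; chip_id: integer labels."""
    cd, density = np.asarray(cd, float), np.asarray(density, float)
    chip_id = np.asarray(chip_id)
    # Chip-to-chip component: each measurement's per-chip mean.
    chip_mean = np.array([cd[chip_id == c].mean() for c in chip_id])
    resid = cd - chip_mean
    # Density-dependent component: least-squares line vs. local density.
    slope, intercept = np.polyfit(density, resid, 1)
    density_term = slope * density + intercept
    random_term = resid - density_term
    return chip_mean, density_term, random_term

# Hypothetical data: two chips, mild pattern-density dependence.
cd = [65.0, 65.4, 66.9, 67.2]
chips = [0, 0, 1, 1]
dens = [0.2, 0.6, 0.2, 0.6]
c, d, r = decompose_cd(cd, chips, dens)
print(np.var(c), np.var(d), np.var(r))  # compare component variances
```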
Novel method for quality assurance of two-dimensional pattern fidelity
Shimon Maeda, Ryuji Ogawa, Seiji Shibazaki, et al.
This paper proposes an evaluation-pattern generating method that realizes stable printing for any two-dimensional feature. Below the 65nm design node, even with the most advanced optical techniques, the resolution limit is approached. As a result, patterning fidelity to the target worsens under low-k1 lithography conditions. Complex layout patterns, especially two-dimensional features, become increasingly sensitive to photoresist bridging and necking. This means the need for rich two-dimensional patterns is growing, in order to cope with lithographic patterning fidelity issues such as quality assurance of OPC scripts and establishment of design rules. The new pattern generating method reported in this paper can provide plenty of unexpected two-dimensional patterns by employing the Monte Carlo method. It also takes the design rule checker into account, so the presented patterns contain no design rule violations. In addition, to narrow the significant patterns down to a truly efficient set, we employ a mechanism that generates the characteristic features of each layer. More than 2000 pattern variations can be generated in less than half a day by this method, and verifying OPC with the generated 2000 patterns is estimated to be equivalent to verifying OPC against all pattern variations appearing in 10 real products. Further examples are provided to verify the efficacy of the two-dimensional patterns generated by this approach. The proposed method is shown to be highly efficient at detecting hotspots that are unfaithful to the target at low k1.
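A minimal sketch of the Monte Carlo idea with DRC screening, under assumptions: random rectangle clusters are drawn, and a toy width/space checker rejects rule violators so only legal two-dimensional candidates survive. The rule values and cluster parameters are hypothetical, and the real method's layer-characteristic generation is not modeled here.

```python
import random

def drc_clean(rects, min_space=0.1, min_width=0.1):
    """Toy DRC: every rectangle meets min width, and every rectangle pair
    meets min spacing (rects are (x1, y1, x2, y2) tuples)."""
    for (x1, y1, x2, y2) in rects:
        if x2 - x1 < min_width or y2 - y1 < min_width:
            return False
    for i, a in enumerate(rects):
        for b in rects[i + 1:]:
            gap_x = max(a[0], b[0]) - min(a[2], b[2])
            gap_y = max(a[1], b[1]) - min(a[3], b[3])
            # Violation if the pair is closer than min_space in both axes
            # (negative gaps mean overlap).
            if gap_x < min_space and gap_y < min_space:
                return False
    return True

def monte_carlo_patterns(n_patterns, n_rects=4, extent=2.0, seed=1):
    """Generate random rectangle clusters; keep only DRC-clean candidates."""
    rng = random.Random(seed)
    out = []
    while len(out) < n_patterns:
        rects = []
        for _ in range(n_rects):
            x, y = rng.uniform(0, extent), rng.uniform(0, extent)
            w, h = rng.uniform(0.1, 0.5), rng.uniform(0.1, 0.5)
            rects.append((x, y, x + w, y + h))
        if drc_clean(rects):
            out.append(rects)
    return out

print(len(monte_carlo_patterns(5)))  # 5 DRC-clean random 2-D patterns
```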
A systematic approach for capturing interconnects hot spots
Mohamed Al-Imam, H. Y. Liao, Jochen Schacht, et al.
Overlay variations between different layers in integrated circuit fabrication can result in poor circuit performance; even worse, they can cause circuit malfunction and consequently reduce process yield. Coupled with other lithographic process variations, this effect can be greatly magnified, so the search for interconnect hot spots should take overlay variations into account. The accuracy gained by including the overlay variation effect comes at the expense of a more complex simulation setup; many issues must be considered, including runtime, the process combinations to be simulated, and the feasibility of providing a hint function for correction. In this paper we present a systematic approach for classifying interconnect durability through the lithographic process, taking into account focus, dose, and overlay variations; the approach also provides information about the cause of low durability that can be used to build a more robust design. This classification is accessible at the layout design level. With this information in hand, designers can test the layout while building up their circuit, and modifications to the layout for higher interconnect durability can be made easily. Such modifications would be extremely expensive if they had to be made after design house tape-out. We verify the method by comparing real wafer failures, due to bad interconnect design, against the interconnect durability classifications from our method.
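A minimal sketch of corner-based durability classification, under assumptions: an interconnect/via pair is graded by its worst-case overlap across all (focus, dose, overlay) corners, and the worst corner is reported as the cause. The overlap model, thresholds, and corner values are hypothetical stubs.

```python
from itertools import product

def classify_durability(overlap_at, focus, dose, overlay, min_overlap=0.02):
    """Grade an interconnect/via pair by worst-case contact overlap across
    all (focus, dose, overlay) corners.

    overlap_at: callable (f, d, o) -> overlap area in um^2 (stub below).
    Returns ('robust'|'marginal'|'failing', worst corner), so the cause of
    low durability is reported alongside the grade.
    """
    area, corner = min(((overlap_at(f, d, o), (f, d, o))
                        for f, d, o in product(focus, dose, overlay)),
                       key=lambda t: t[0])
    if area >= min_overlap:
        return "robust", corner
    return ("marginal", corner) if area > 0 else ("failing", corner)

# Hypothetical stub: overlap shrinks with defocus and overlay shift.
def overlap_at(f, d, o):
    return 0.03 - 0.1 * abs(f) - 0.2 * abs(o) + 0.01 * (d - 1.0)

grade, corner = classify_durability(
    overlap_at, focus=[-0.05, 0.0, 0.05], dose=[0.95, 1.0, 1.05],
    overlay=[-0.01, 0.0, 0.01])
print(grade, "worst corner:", corner)
```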
Ensuring production-worthy OPC recipes using large test structure arrays
The continual shrinking of design rules as the industry follows Moore's Law, and the associated need for low-k1 processes, have resulted in more layout configurations becoming difficult to print within the required tolerances. OPC recipes have needed to become more complex as tolerances have decreased and acceptable corrections have become harder to find with simple algorithms. With this complexity come the possibility of coding errors and the difficulty of ensuring that the solutions are truly general. OPC verification tools can check the quality of a correction against pre-determined specifications for CD variation, line-end pullback, and edge placement error, and then highlight layout configurations where violations are found. The problem facing a mask tape-out group is that it usually has little control over the incoming design styles. Approaches to eliminating problematic layouts have included highly restrictive design rules [1], whereby certain pitches or orientations are disallowed; these design rules are now either becoming too complex or overly restricting the designer from benefiting from the reduced pitch of the new node. The tight link between design and mask tape-out found in Integrated Device Manufacturers [2] (IDMs), i.e., companies that control both design and manufacturing, can do much to dictate manufacturing-friendly layout styles and push ownership of problem resolution back to design groups. In fact, this is perceived as such an issue that a new class of products for designers that perform lithographic compliance checks on design layouts is an emerging technology [3]. In contrast to IDMs, semiconductor foundries are presented with a much larger variety of design styles and a set of fabless customers who are generally less knowledgeable about the impact of their layout on manufacturability and how to correct issues. The robustness requirements of a foundry's OPC correction recipe therefore need to be greater than those of an IDM's tape-out group. An OPC correction recipe that gives acceptable verification results based solely on one customer GDS is clearly not sufficient to guarantee that all future tape-outs from multiple customers will be similarly clean. Ad hoc changes made in reaction to problems seen at verification are risky: while they may solve one particular layout issue on one product, there is no guarantee that the problem will not simply shift to another configuration on a yet-to-be-manufactured part. The need to re-qualify a recipe over multiple products at each recipe change can easily result in excessive computational requirements; a single layer at an advanced node typically needs overnight runs on a large processor farm. Much of this layout, however, is extremely repetitive, made from a few standard cells placed tens of thousands of times. An alternative and more efficient approach, suggested by this paper as a screening methodology, is to encapsulate the problematic structures into a programmable test structure array. The dimensions of these test structures are parameterized in software so that arrays can be generated with the dimensions varied over the space of the design rules and conceivable design styles. By verifying a new recipe over these test structures, one can more quickly gain confidence that the recipe will be robust over multiple tape-outs. This paper gives some examples of the implementation of this methodology.
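A minimal sketch of such a parameterized test structure array: dimensions are enumerated over the design-rule space and emitted as entries a layout generator could consume. The dimension values and naming scheme are hypothetical.

```python
from itertools import product

def test_structure_array(widths, spaces, lengths):
    """Enumerate parameterized line/space test structures spanning the
    design-rule space; each entry is a dict a layout generator can
    consume. Dimension lists are hypothetical values in microns.
    """
    return [{"width": w, "space": s, "length": l,
             "name": f"LS_w{w}_s{s}_l{l}"}
            for w, s, l in product(widths, spaces, lengths)]

cells = test_structure_array(
    widths=[0.08, 0.10, 0.12],
    spaces=[0.08, 0.12, 0.20, 0.40],
    lengths=[1.0, 5.0])
print(len(cells), cells[0]["name"])  # 24 structures covering the rule space
```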
Intelligent visualization of lithography violations
New methods for visualizing process-window effects on simulated lithography violations are shown. Three types of analysis of simulation errors are discussed. Worst-site violations are those geometries for which at least one process condition shows the largest deviation from target. For these errors, variations of Cleveland dot charts are useful for showing key attributes, such as pinpointing which process condition(s) cause the largest violations and how violations are distributed among focus and exposure conditions. Modifications of dot charts are also useful for visualizing violations across the process window for the entire chip, as opposed to selected sites. Lastly, linearity charts combined with box/whisker objects can be used to show deviations from target over a range of drawn dimensions.
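A minimal sketch of a Cleveland-style dot chart for worst-site violations, using matplotlib with hypothetical data and an assumed 5 nm tolerance; the paper's actual chart variants are not reproduced here.

```python
import matplotlib.pyplot as plt

# Hypothetical worst-site edge-placement errors (nm) per process condition.
conditions = ["nominal", "+dose", "-dose", "+focus", "-focus"]
epe_nm = [1.2, 2.8, 3.5, 5.1, 6.4]

# One row per condition, a dot at the violation magnitude, so the worst
# process corner is read off at a glance.
fig, ax = plt.subplots(figsize=(5, 2.5))
ax.plot(epe_nm, range(len(conditions)), "o")
ax.set_yticks(range(len(conditions)))
ax.set_yticklabels(conditions)
ax.set_xlabel("worst-site EPE (nm)")
ax.axvline(5.0, linestyle="--")  # assumed 5 nm tolerance line
ax.grid(axis="x", linestyle=":")
fig.tight_layout()
plt.show()
```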
Production-worthy OPC verification methods for protecting against process variability
James A. Bruce, Norman Chen, Vinay Chinta, et al.
In this paper we report on alternative solutions for protecting against process variability while also minimizing simulation time. We have investigated a variety of techniques, including the use of aerial image parameters to flag sites that might be sensitive to changes in dose; a mask error enhancement factor (MEEF) check based on biasing the optical proximity correction (OPC) layer to reflect mask variations; and a sorting approach in which sites with suspect parameters (e.g., high MEEF or poor aerial image quality, such as low slope) are simulated at multiple process conditions. All of these techniques are shortcuts compared to simulating the full chip at multiple process conditions, and thus save CPU time. However, these shortcuts have several downsides: first, an increased risk of missing a real error, and second, an increase in the number of false errors reported (false errors being sites predicted to fail that actually have an adequate window to allow for process variability). The challenge is to make the shortcuts as selective as possible, so that they flag all potentially failing sites without flagging too many false errors.
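A minimal sketch of the sorting shortcut, under assumptions: sites are triaged by nominal-condition metrics so that only suspect ones pay for full process-window simulation. The metric keys and limits are hypothetical.

```python
def triage_sites(sites, meef_limit=4.0, slope_limit=2.0):
    """Split verification sites into 'full PW simulation' vs 'nominal only'.

    sites: list of dicts with nominal-condition metrics (assumed keys):
      meef  - mask error enhancement factor at the site
      slope - normalized aerial image log-slope (low slope = poor image)
    Only suspect sites are simulated at multiple process conditions.
    """
    full_pw, nominal_only = [], []
    for s in sites:
        if s["meef"] > meef_limit or s["slope"] < slope_limit:
            full_pw.append(s)
        else:
            nominal_only.append(s)
    return full_pw, nominal_only

sites = [{"id": 1, "meef": 2.1, "slope": 3.0},
         {"id": 2, "meef": 5.3, "slope": 2.8},   # high MEEF -> full PW
         {"id": 3, "meef": 3.0, "slope": 1.4}]   # low slope -> full PW
pw, nom = triage_sites(sites)
print([s["id"] for s in pw], [s["id"] for s in nom])
```

The trade-off named in the abstract shows up directly in the two limits: loosening them saves more CPU time but raises the risk of missing a real error.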
Automatic OPC mask shape repair
James Word, Dragos Dudau, Nick Cobb
Virtual manufacturing, enabled by rapid full-chip simulation, is a critical component of the mask tapeout flow in the low-k1 lithography era. It enables first-time-right silicon by detecting printing failures before a costly and time-consuming mask tapeout and wafer print have occurred. This is especially true in the latest tapeout flows, which include Optical Proximity Correction (OPC) and various Resolution Enhancement Technologies (RET). One issue arising with the addition of virtual manufacturing to the mask tapeout flow is the proper user response to any detected failures. The authors present a vision for OPC mask shape defect handling that can be completely integrated into the user's existing flow and, most importantly, requires no human intervention to disposition, waive, and/or repair detected mask shape defects.
SOFT: smooth OPC fixing technique for ECO process
SOFT (Smooth OPC Fixing Technique) is a new OPC flow developed from the basic OPC framework. It provides a new method to reduce the computational cost and complexity of ECO-OPC (Engineering Change Order Optical Proximity Correction). In this paper, we introduce polygon comparison to extract the necessary, but possibly lost, fragmentation and offset information of the previous post-OPC layout. By reusing these data, we can start the modification of each segment from a more accurate initial offset. The fragmentation at the boundary of the patch from the previous OPC run is then available, allowing engineers to stitch the regional ECO-OPC result back into the whole post-OPC layout seamlessly. As for the ripple effect in OPC, by comparing each segment's movement in each loop we largely free the fixing speed from the limitation of patch size. We handle layout re-modification, particularly in three basic kinds of ECO-OPC processes, while maintaining the rest of the design closure. Our experimental results show that, by utilizing the previous post-OPC layout, full-chip ECO-OPC achieves more than 5X acceleration, and the regional ECO-OPC result can be stitched back into the whole layout seamlessly despite the ripple effect of lithographic interaction.
The accuracy of a calibrated PROLITH physical resist model across illumination conditions
In this paper, the portability of a calibrated PROLITH resist model outside the exposure conditions used during its generation is tested. Can a single physical resist model accurately predict the results of different illumination schemes without recalibration at each condition? Can a calibrated physical resist model accurately describe observed lithographic behaviors outside the conditions used to create it? The PROLITH model's accuracy in predicting verification data exposed at conditions different from the calibration condition is shown to be 1 nm RMS across dose, focus, and mask pitch.
Feedback flow to improve model-based OPC calibration test patterns
Walid A. Tawfic, Mohamed Al-Imam, Karim Madkour, et al.
Process models are responsible for predicting the latent image in the resist in a lithographic process. For the process model to calculate the latent image, information about the aerial image at each layout fragment is evaluated first, and aerial image characteristics are extracted. These parameters are passed to the process model to calculate the wafer latent image. The process model returns a threshold value that indicates the position of the latent image inside the resist; the accuracy of this value depends on the calibration data used to build the process model in the first place. The calibration structures used to build the models are usually gathered in a single layout file called the test pattern. Real raw data from the lithographic process are measured and attached to the corresponding structures in the test pattern, and these data are then applied in the model calibration flow. In this paper we present an approach to automatically detect patterns that occur in real designs yet show considerable aerial-image-parameter differences from the nearest test pattern structure, and to repair the test patterns to include these structures. This detect-and-repair approach helps guarantee accurate prediction of diverse layout fragments and therefore correct OPC behavior.
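A minimal sketch of the detect step, under assumptions: each design fragment's aerial image parameters are compared to the nearest calibration structure in parameter space, and fragments beyond a distance threshold are flagged for addition to the test pattern. The parameter names, values, and threshold are hypothetical.

```python
import math

def coverage_gap(design_frags, test_frags, threshold=0.15):
    """Flag design fragments whose aerial image parameters lie far from
    every calibration test-pattern structure.

    Fragments are dicts of (assumed) normalized aerial image parameters.
    Flagged fragments are candidates to be added to the test pattern,
    measured, and fed back into calibration (the 'repair' step).
    """
    keys = ("imax", "imin", "slope")

    def dist(a, b):
        return math.sqrt(sum((a[k] - b[k]) ** 2 for k in keys))

    return [f for f in design_frags
            if min(dist(f, t) for t in test_frags) > threshold]

test = [{"imax": 0.9, "imin": 0.1, "slope": 0.8},
        {"imax": 0.7, "imin": 0.2, "slope": 0.5}]
design = [{"imax": 0.88, "imin": 0.12, "slope": 0.78},  # covered
          {"imax": 0.55, "imin": 0.35, "slope": 0.20}]  # gap -> repair
print(coverage_gap(design, test))
```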
Double pattern EDA solutions for 32nm HP and beyond
The fate of optical lithography hinges on the ability to deploy viable resolution enhancement techniques (RET). One such solution is double patterning (DP). Like the double-exposure technique, double patterning is a decomposition of the design onto dual masks to relax the pitch; unlike double exposure, it requires an additional develop-and-etch step, which eliminates the resolution degradation due to cross-coupling in the latent images of multiple exposures. This additional etch step is worth the effort for those seeking an optical extension [1]. The theoretical k1 for a double-patterning technique on a 32nm half-pitch (HP) design with a 1.35NA 193nm imaging system is 0.44, whereas the k1 for a single-exposure technique on the same design would be 0.22 [2], which is sub-resolution. There are other benefits to the DP technique, such as the ability to add sub-resolution assist features (SRAF) in the relaxed-pitch areas, the reduction of forbidden pitches, and the ability to apply mask biases and OPC without encountering mask constraints. As with AltPSM and SRAF techniques, one of the major barriers to widespread deployment of double patterning on random logic circuits is design compliance with split-layout synthesis requirements [3]. Successful implementation of DP requires the evolution and adoption of design restrictions through specifically tailored design rules. Deployment of double patterning does raise a couple of issues that need addressing before entering a production environment. As with any dual-mask RET application, there are the classical overlay requirements between the two exposure steps, and there is the complexity of decomposing the design to minimize stitching while maximizing depth of focus (DoF). In addition, the location of design stitches requires careful consideration; for example, a stitch in a field region or on wider lines is preferred over a transistor region or narrower lines. The EDA industry is being looked to for sound automated solutions that resolve double-patterning sensitivities and go beyond them by coupling model-based and process-window applications. This work documents the resolution limitations of single exposure and of double patterning with the latest hyper-NA immersion tools under fully optimized source conditions. It demonstrates best-known methods to improve design decomposition in an effort to minimize the impact of mask-to-mask registration and process variance. These EDA solutions are further analyzed and quantified using a verification flow.
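The decomposition step is commonly cast as two-coloring of a conflict graph; the sketch below is a minimal illustration of that standard formulation, not the paper's specific solution. Features closer than the single-exposure pitch limit are joined by a (hypothetical) conflict edge; odd cycles mark spots that must be stitched or redesigned.

```python
from collections import deque

def decompose(features, conflicts):
    """Assign each feature to one of two masks via BFS two-coloring of
    the conflict graph. Returns (coloring, unresolved_conflict_edges);
    unresolved edges sit on odd cycles, where the layout must be
    stitched or redesigned before DP can succeed.
    """
    adj = {f: [] for f in features}
    for a, b in conflicts:
        adj[a].append(b)
        adj[b].append(a)
    color, bad = {}, []
    for start in features:
        if start in color:
            continue
        color[start] = 0
        q = deque([start])
        while q:
            u = q.popleft()
            for v in adj[u]:
                if v not in color:
                    color[v] = 1 - color[u]
                    q.append(v)
                elif color[v] == color[u]:
                    bad.append((u, v))  # odd cycle: not two-colorable here
    return color, bad

# Hypothetical sub-pitch conflicts between five wires.
features = ["w1", "w2", "w3", "w4", "w5"]
conflicts = [("w1", "w2"), ("w2", "w3"), ("w3", "w4"), ("w4", "w5")]
masks, unresolved = decompose(features, conflicts)
print(masks, unresolved)  # alternating mask assignment, no odd cycles
```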
Optimizing gate layer OPC correction and SRAF placement for maximum design manufacturability
Sub-resolution assist features (SRAFs), or scatter bars (SBs), have steadily proliferated through IC manufacturers' data preparation flows as k1 is pushed lower with each technology node. The technology is quite common for the gate layer at 130 nm and below, with increasingly complex geometric rules governing the placement of SBs in proximity to target layer features. Recently, model-based approaches for SB placement have arisen. In this work, a variety of rule-based and model-based SB options are explored for the gate layer using new characterization and optimization functions available in the latest generation of correction and OPC verification tools, including the ability to quantify across-chip CD control with statistics on a per-gate basis. The analysis includes the effects of defocus, exposure, and misalignment, and it is shown that significant improvements to CD control through the full manufacturing variability window can be realized.
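A minimal sketch of the rule-based flavor, under assumptions: for one-dimensional lines, a scatter bar is inserted at a fixed offset from each line edge only when the space to the neighboring feature leaves room. The widths, offsets, and room rule are hypothetical 130nm-era values, not the paper's rules.

```python
def place_scatter_bars(line_edges, space_to_neighbor, sb_width=0.04,
                       sb_offset=0.12, min_room=0.28):
    """Rule-based scatter bar placement for 1-D lines (microns,
    hypothetical values).

    For each line edge, insert one SB at sb_offset from the edge if the
    space to the neighboring feature leaves room for the SB plus margins;
    dense pitches get no SB (the neighbor itself provides the proximity).
    """
    bars = []
    for edge_x, space in zip(line_edges, space_to_neighbor):
        if space >= min_room:
            bars.append((edge_x + sb_offset, edge_x + sb_offset + sb_width))
    return bars

edges = [0.0, 1.0, 2.5]            # right edges of three lines
spaces = [1.0, 1.5, 0.2]           # space to the next feature on the right
print(place_scatter_bars(edges, spaces))  # last line too dense for an SB
```

Model-based placement replaces the fixed offset rule with an image-quality objective; the rule form above is what such approaches are compared against.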
Assist features for modeling three-dimensional mask effects in optical proximity correction
Liberal use of assist features of both tones is an important component of the 45nm lithography strategy for many layers. These features are often sized at λ/4 on the mask or smaller. Under these conditions, formerly successful approximations of the mask near field using boundary layer or domain decomposition methods break down. Rigorous simulation of the mask near field requires a three-dimensional (3D) Maxwell's-equations analysis, but such computations are cost-prohibitive for full-chip OPC, RET, and lithographic compliance checking applications. The purpose of this paper is to describe a simple and computationally efficient method that improves model fidelity for 45nm assist features of either tone while retaining computational simplicity. While the model lacks the generality of a rigorous solution of Maxwell's equations, it can be well anchored to the real physics by calibrating its performance against a lithographic TCAD mask simulator. The approach provides a balanced tradeoff between speed and accuracy that makes it superior to boundary layer and domain decomposition methods, while remaining realistically deployable for full-chip lithography simulation.
Circuit-based SEM contour OPC model calibration
In order to achieve the necessary OPC model accuracy, the requisite number of SEM CD measurements has exploded with each technology generation. At 65 nm and below, the need for OPC and/or manufacturing verification models at several process conditions (focus, exposure) further multiplies the number of measurements required. SEM-contour-based OPC model calibration has arisen as a powerful approach for delivering robust and accurate OPC models, since every pixel adds information for input into the model, substantially increasing parameter space coverage. To date, however, SEM contours have been used to supplement the hundreds or thousands of discrete CD measurements. While this is perhaps still the optimum path for high accuracy, there are cases where OPC test patterns are not available and the use of existing circuit patterns to create an OPC model is desirable. In this work, SEM contours of in-circuit patterns are utilized as the sole data source for OPC model calibration. The use scenario involves a 130 nm technology that was initially qualified for production with rule-based OPC but is shown to benefit from model-based OPC. In such a case sub-nanometer accuracy is not required, and in-circuit features enable rapid development of sufficiently accurate models to provide improved process margin in manufacturing.
Boundary-based cellwise OPC for standard-cell layouts
David M. Pawlowski, Liang Deng, Martin D. F. Wong
Model-based optical proximity correction (OPC) has become necessary at the 90nm technology node. Cellwise OPC is an attractive technique for reducing the mask data size as well as the prohibitive runtime of full-chip OPC. As feature dimensions have shrunk, the radius of influence of edge features has extended further into neighboring cells, so it is no longer sufficient to perform cellwise OPC independent of neighboring cells, especially on critical layers. The methodology described in this work accounts for features in neighboring cells and allows a cellwise approach to be applied to cells with a printed gate length of 45nm, with the projection that it can also be applied to future technology nodes. OPC-ready cells are generated at library creation (independent of placement) using a boundary-based technique. Each cell has a tractable number of OPC-ready versions thanks to an intelligent characterization of standard cell layout features. Results are very promising: the average edge placement error (EPE) for all metal1 features in 100 layouts is 0.731nm, which is less than 1% of the metal1 width; the maximum EPE for poly features is reduced to one-third of that of cellwise OPC without boundary consideration, delivering similar levels of lithographic accuracy while avoiding the drawbacks inherent in layout-specific full-chip model-based OPC.
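A minimal sketch of the boundary-keyed caching idea, under assumptions: the geometry a neighbor presents at a cell boundary is quantized into a hashable key, and placements with the same key reuse one OPC-ready version, keeping the version count tractable. The key format, snapping grid, and stub OPC function are hypothetical.

```python
def boundary_key(neighbor_edges, grid=0.01):
    """Quantize the geometry facing a cell boundary into a hashable key;
    cells whose neighbors present the same (snapped) boundary geometry
    can share one OPC-ready version. grid: snapping step in microns."""
    return tuple(sorted((round(x / grid), round(y / grid), layer)
                        for x, y, layer in neighbor_edges))

opc_versions = {}  # (cell name, boundary key) -> corrected layout

def get_opc_cell(cell, neighbor_edges, run_opc):
    """Return a cached OPC'd version of `cell` for this boundary context,
    running OPC (stubbed via run_opc) only on first encounter."""
    key = (cell, boundary_key(neighbor_edges))
    if key not in opc_versions:
        opc_versions[key] = run_opc(cell, neighbor_edges)
    return opc_versions[key]

# Hypothetical usage: two placements with identical boundary context
# share one correction, so the stub OPC runs only once.
runs = []
stub = lambda c, e: runs.append(c) or f"OPC({c})"
ctx = [(0.0, 0.1, "poly"), (0.0, 0.3, "poly")]
get_opc_cell("NAND2", ctx, stub)
get_opc_cell("NAND2", list(ctx), stub)
print(len(runs))  # 1
```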
Statistical analysis of gate CD variation for yield optimization
A new method for analyzing variation and yield across the whole chip is presented. The method takes into account the stochastic distribution of input process parameters such as focus and exposure, and performs simulations of the design at the extreme points of the process window. Using a robust model to infer the points within the process window, a full distribution of CDs is produced for each gate, which is then analyzed to provide information about both the individual gate and the variation across the chip.
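A minimal sketch of the statistical step, under assumptions: (focus, dose) samples are drawn from assumed Gaussian distributions and pushed through a per-gate CD response surface (here a hypothetical stub standing in for a model fitted at the process-window extremes), yielding the per-gate CD distribution.

```python
import random
import statistics

def gate_cd_distribution(cd_model, n_samples=10000, focus_sigma=0.03,
                         dose_sigma=0.01, seed=7):
    """Sample (focus, dose) from assumed Gaussian process distributions
    and evaluate a per-gate CD model at each sample.

    cd_model: callable (focus, dose) -> CD in nm, e.g. a response surface
    fitted to simulations at process-window extreme points (stub below).
    """
    rng = random.Random(seed)
    return [cd_model(rng.gauss(0.0, focus_sigma), rng.gauss(0.0, dose_sigma))
            for _ in range(n_samples)]

# Hypothetical quadratic-in-focus, linear-in-dose response for one gate.
model = lambda f, d: 45.0 - 800.0 * f * f + 120.0 * d
cds = gate_cd_distribution(model)
mu, sigma = statistics.mean(cds), statistics.stdev(cds)
print(f"CD = {mu:.2f} nm +/- {sigma:.2f} nm")
# Chip-level yield/variation analysis then aggregates these per-gate stats.
```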
Minimizing poly end cap pull back by application of DFM and advanced etch approaches for 65nm and 45nm technologies
Russell Callahan, Gunter Grasshoff, Stefan Roling, et al.
As feature sizes decrease and the overall design shrinks, it is becoming increasingly difficult to reliably pattern gate line ends, or poly end caps, so that they extend over to the field area without bridging into an adjacent feature. Furthermore, trimming of the lines during the gate etch process is necessary in order to decrease the poly gate length; however, the line end is also trimmed while trimming the gate sidewall, often at higher rates than the sidewall itself. This investigation focuses on decreasing poly line-end pullback, measured at the tip of the gate past active, using lithography techniques and advanced etch approaches for the 65 nm and 45 nm nodes.
Real-time VT5 model coverage calculations during OPC simulations
For a robust OPC solution, it is important to isolate and characterize the detractors from high-quality printability. Failure to correctly render the design intent in silicon can have multiple causes, and model inability to predict lithographic and process implications is one of them. Process model accuracy is highly dependent on the quality of the data used in the calibration phase. Structures encountered during OPC simulation that were not included in the calibration patterns, or are only somewhat similar to those used in calibration, are sometimes incorrectly predicted. In this paper a new method for studying VT5 model coverage during OPC simulations is investigated. The aerial image parameters for a large number of test structures used for model calibration are first calculated. A novel sorting and data indexing algorithm is then applied to classify the computed data into fast-access look-up tables. These tables are loaded at the beginning of a new OPC simulation, where they serve as a reference for comparing aerial image parameters calculated for new design fragments. This approach enables real-time classification of design fragments based on how well they are covered by the VT5 model. Employing this method avoids catastrophic misses in the correction phase and allows for a robust approach to MBOPC.
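A minimal sketch of the pre-indexed look-up idea, under assumptions: calibration aerial image parameters are binned into a hash table once, so that each fragment's coverage query during OPC is a constant-time bin lookup rather than a scan over all calibration structures. The two-parameter space and bin width are assumptions for illustration.

```python
def build_lut(calib_params, bin_w=0.05):
    """Pre-index calibration aerial image parameters into hash bins so
    coverage queries are O(1) per fragment."""
    lut = set()
    for imax, slope in calib_params:
        lut.add((int(imax / bin_w), int(slope / bin_w)))
    return lut

def covered(lut, imax, slope, bin_w=0.05):
    """A fragment is 'covered' if its bin or any neighboring bin holds
    calibration data; otherwise it is flagged in real time during OPC."""
    bx, by = int(imax / bin_w), int(slope / bin_w)
    return any((bx + dx, by + dy) in lut
               for dx in (-1, 0, 1) for dy in (-1, 0, 1))

calib = [(0.90, 0.80), (0.70, 0.50), (0.85, 0.75)]
lut = build_lut(calib)
print(covered(lut, 0.88, 0.78))  # True: near calibration data
print(covered(lut, 0.40, 0.20))  # False: outside VT5 model coverage
```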
A simple and practical approach for building lithography simulation models using a limited set of CD data and SEM pictures
A new method to calibrate an optical lithography model using a combination of measured Critical Dimension (CD) data from standard patterns and SEM pictures of product layouts has been developed. The CD data comprise measured CDs of through-pitch line patterns as well as isolated line and isolated space patterns; the SEM pictures for contour CD calibration come from the product layouts. The small set of 1-D CD data is first used to calibrate the model. After the best one-dimensional (1-D) calibration accuracy is achieved, the model is used to predict the contours of the product layouts where the SEM pictures were taken. The simulated contours are overlaid with the SEM pictures to identify mismatch locations, and additional calibration gauges at those locations are added to the model to improve the predicted CD accuracy of two-dimensional (2-D) patterns such as line-to-tip, tip-to-tip, and corner. This procedure can be repeated several times, comparing against the SEM picture CDs, until the desired accuracy of the predicted contours is achieved. The method increases the model's 2-D edge prediction accuracy and reduces the amount of CD data required for model calibration. It was used to generate the models for lithography process simulation for Xilinx's 65 nm product development; hot spots and out-of-spec OPC CD locations were identified using the models and later confirmed with in-line data.
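A minimal sketch of the feedback loop described above, with stub fit/predict functions standing in for the real calibration and simulation tools; gauge names, the RMS target, and the promotion rule (add the worst-predicted SEM gauge each round) are assumptions for illustration.

```python
def calibrate_with_feedback(fit, predict, gauges_1d, sem_gauges,
                            rms_target=1.5, max_rounds=5):
    """Iterative calibration loop mirroring the flow described above.

    fit(gauges) -> model; predict(model, gauge) -> CD error in nm.
    gauges_1d: initial 1-D through-pitch/iso gauges; sem_gauges: 2-D
    gauges from SEM-picture mismatch sites (all stubs/assumptions).
    Each round, the worst-predicted SEM gauge joins the calibration set
    until contour RMS meets the target.
    """
    gauges = list(gauges_1d)
    for _ in range(max_rounds):
        model = fit(gauges)
        errs = {g: abs(predict(model, g)) for g in sem_gauges}
        rms = (sum(e * e for e in errs.values()) / len(errs)) ** 0.5
        if rms <= rms_target:
            return model, rms
        gauges.append(max(errs, key=errs.get))  # add worst 2-D mismatch
    return model, rms

# Toy stubs: the 'model' is just the gauge set; error shrinks once a
# gauge has been included in calibration.
fit = lambda gs: set(gs)
predict = lambda m, g: 0.5 if g in m else 4.0
model, rms = calibrate_with_feedback(fit, predict,
                                     gauges_1d=["p90", "p180", "iso"],
                                     sem_gauges=["tip2tip", "corner"])
print(rms)  # converges after the 2-D gauges are folded in
```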
More on accelerating physical verification using STPRL: a novel language for test pattern generation
In this work, the generation of test patterns, test cases, and layout patterns is investigated with respect to turnaround time for creation and/or modification. STPRL, a novel behavioral modeling language for test-pattern creation, is proposed. The turnaround time for both creation and modification is greatly reduced with no degradation in either accuracy or performance. Furthermore, STPRL provides considerable performance improvements in custom test-pattern creation over available automatic layout creation tools. Our method has been verified with real data at different technology nodes and for migration from and between technology nodes.
DRC and mask friendly pattern and probe aberration monitors
This paper considers modifications of pattern-and-probe monitors to make them suitable for inclusion within the circuit design as drop-in monitors for the lithography process. Nonidealities such as lens aberrations can be monitored through patterns derived from the Zernike polynomials. However, the non-Manhattan geometries produced by this theoretical method are not mask friendly, and in fact took many hours to manufacture on the first attempt. This paper presents modifications to the original aberration monitors that allow them to pass DRC checks and thus be more mask friendly. Additionally, quantitative interpretation of the original patterns requires overexposure sequences, special SEM reading of dots instead of linewidths, and separate calibration of the EM performance of the central reference probe. The principles expressed in the original aberration monitors can be integrated into more traditional circuit layouts to create patterns more acceptable to processing; the example shown in this paper retains 68% of its sensitivity with no decrease in orthogonality.
Mask manufacturing rules checking (MRC) as a DFM strategy
Mask Manufacturing Rules Checking (MRC) has been established as an automated process to detect mask pattern data that will cause mask inspection problems. This methodology differs from the Design Rule Checking (DRC) or Design for Manufacturing (DFM) checks typically performed before sending pattern data to the mask manufacturer in that it examines the entire mask layout and the spatial relationships between multiple patterns in their final orientation, scale, and tone. In contrast, DRC and DFM checks are usually performed on individual pattern files. Moreover, DRC and DFM checks are not always performed after all pattern transformations are complete, so errors can be introduced that are not caught until the mask is eventually printed on wafers. MRC can therefore often be the only comprehensive geometric integrity test performed before the mask is manufactured, and the last opportunity to catch critical errors that might have disastrous consequences for yield and, consequently, product schedules. In this paper we review the concepts and implementation of MRC in a merchant mask manufacturing enterprise and introduce methods to empower DFM decisions by mask customers based on MRC results.
Design for manufacturing approach to second level alternating phase shift mask patterning
A significant barrier to implementing APSM in volume production has been the expense of the mask. The cost is driven partially by the complexity of the two-level process flow required to make the mask. Typically, the second-level pattern is generated by upsizing the first-level pattern of the pi apertures by a small amount in order to provide some overlay margin. The amount of upsizing is limited by the smallest chrome feature present in the pattern; the overlay margin between the first- and second-level patterns can be improved by sizing the second level more aggressively on larger chrome structures, where present. With a simple set of rules, it is possible to generate a second-level pattern with a greater than tenfold reduction in the number of corners, as measured by the number of vertices in the pattern, while minimizing the number of marginal patterns in the design. This also has the beneficial side effect of significantly reducing the file size of the second-level pattern, which can reduce write time on some writers. Existing design rules can be exploited, or additional rules imposed, to further improve the capability of the second-level APSM process. The right set of mask design rules can enable the use of a lower-fidelity writer for second-level patterning, which can significantly reduce cost; the improved margin can increase yield and may even enable a less capable, less expensive patterning tool to be used.