Yield Prediction for Architecture Exploration in Nanometer Technology Nodes: A Model and Case Study for Memory Organizations

A. Papanikolaou, T. Grabner, M. Miranda, P. Roussel and F. Catthoor
IMEC vzw, Kapeldreef 75, Leuven, Belgium

ABSTRACT

Process variability has a detrimental impact on the performance of memories and other system components, which can lead to parametric yield loss at the system level due to timing violations. Conventional yield models do not allow this to be analyzed accurately, at least not at the system level. In this paper we propose a technique to estimate this system-level yield loss for a number of alternative memory organization implementations. It can aid the designer in making educated trade-offs at the architecture level between energy consumption and parametric timing yield, by using memories from different available libraries with different energy/performance characteristics while considering the impact of manufacturing variations. The accuracy of this technique is very high: an average error of less than 1% is reported, which enables an early exploration of the available options.

Categories and Subject Descriptors
B.8.2 [Performance and Reliability]: Performance Analysis and Design Aids

General Terms
Design, Performance, Reliability

Keywords
Parametric yield, system exploration, process variability

1. INTRODUCTION

Embedded systems that run real-time, power-sensitive applications are becoming a very important part of the consumer electronics product market. System-level design in the embedded systems domain has primarily been concerned with three main cost metrics: timing, energy consumption and area. Timing is important because embedded systems typically run applications with hard real-time deadlines, such as communication services and multimedia applications.
Minimizing energy consumption, on the other hand, can not only extend the time between battery recharges, but also enable new handheld applications and high-end mobile computing. A lot of research has been performed on how to minimize energy consumption, for instance by employing run-time techniques such as voltage scaling. These two metrics, together with area, which has a directly proportional impact on cost, define the quality of the design.

Permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. To copy otherwise, to republish, to post on servers or to redistribute to lists, requires prior specific permission and/or a fee.
CODES+ISSS'06, October 22–25, 2006, Seoul, Korea.
Copyright 2006 ACM 1-59593-370-0/06/0010 ...$5.00.

However, the main assumption so far at the system-level design abstraction has been that the timing and energy consumption of the individual system components, and of the final system implementation itself, are deterministic and predictable. Limited variations have been tackled by embedding worst-case margins in the design of the system components, such as processors and memories, so that the specified performance and energy consumption can be guaranteed for use by the system designers.

Technology scaling past the 90nm technology node, however, introduces a lot more unpredictability in the timing and energy consumption of designs due to random intra-die process variability. Treating these metrics as deterministic values at the system design level requires the design margins to become so large that they can eat up all the benefits of moving to a more advanced technology node. Therefore some degree of uncertainty will always have to be tolerated in the components. This has to be considered during circuit and even architecture design.
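The effect of worst-case margining can be made concrete with a toy calculation. Under the simplifying assumption that a component's delay is Gaussian, embedding a k-sigma margin means budgeting for mu + k*sigma; as relative variability grows with scaling, the guaranteed speedup of a new node falls well short of its nominal speedup. All numbers below are hypothetical, not taken from the paper.

```python
def guaranteed_delay(mu, sigma, k=3.0):
    """Delay that must be budgeted when a k-sigma worst-case margin is
    embedded in the component design (Gaussian delay assumed)."""
    return mu + k * sigma

# Hypothetical nominal delays (ns): the scaled node is faster on average
# but suffers much more random intra-die variability.
old_node = guaranteed_delay(1.00, 0.03)  # mature node, small sigma
new_node = guaranteed_delay(0.70, 0.10)  # scaled node, large sigma

print(f"nominal speedup:    {1.00 / 0.70:.2f}x")        # ~1.43x on average
print(f"guaranteed speedup: {old_node / new_node:.2f}x")  # margins eat most of it
```

In this invented example the nominal 43% speed gain shrinks to under 10% once both nodes are margined, which is the effect the paragraph above describes.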
This has to lead to new statistical design paradigms [3], such as statistical timing analysis or, more generally, yield-aware design [16, 12]. Depending on the component being considered (e.g. memory or datapath), energy and/or performance vs. area trade-off decisions have to be made [17]. However, for embedded system design the most critical trade-offs are not made at the component or IP-block level, but at the architecture or even at the application level. Therefore solutions for (parametric) yield-aware design have started being developed that tackle the problem at the architecture level while allowing some degree of uncertainty in the parametric energy and performance figures of the IP blocks [2, 5]. These solutions aim to tackle the system-level yield loss that results from timing violations due to the parametric drift in the performance of the individual system components caused by random process variability. Functional/catastrophic yield loss, due to manufacturing defects, needs different solution techniques targeted at lower abstraction levels [6]. These are necessary to tackle the traditional yield issues, and the proposed estimation technique for parametric yield is complementary to them. Functional and parametric yield issues are both crucial, but they require different solution methods, so it makes sense to solve them in a decoupled manner. In a realistic design environment both will be required.

The use of these solutions for system yield results in trade-offs on the main system metrics, as noted above. Therefore, techniques and tools that allow reasoning in terms of trade-offs between yield, energy, performance and area at the architecture level are a must for successful embedded system design at the 65nm technology node and beyond. The ITRS road-map [13] discusses the need for such tools in its 2005 Design chapter and predicts that they will become mainstream for design in a few years. This paper is a step toward filling the gap in that direction.
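The way per-component parametric drift turns into system-level timing yield can be sketched with a closed-form toy model. Assuming, purely for illustration (this is not the paper's calibrated model), that each memory's access delay is an independent Gaussian, the organization meets timing only when every memory does, so the per-component probabilities multiply:

```python
import math

def timing_yield(clock_period, delays):
    """Parametric timing yield of a memory organization, assuming each
    memory's access delay is an independent Gaussian (mu, sigma) in ns.
    The system works only if every memory meets the clock period."""
    y = 1.0
    for mu, sigma in delays:
        # P(delay <= clock_period) via the Gaussian CDF
        y *= 0.5 * (1.0 + math.erf((clock_period - mu) / (sigma * math.sqrt(2.0))))
    return y

# Hypothetical 3-memory organization: nominal access times 0.9-1.2 ns
mems = [(1.0, 0.05), (1.2, 0.06), (0.9, 0.045)]
print(f"yield at 1.30 ns clock: {timing_yield(1.30, mems):.4f}")
print(f"yield at 1.40 ns clock: {timing_yield(1.40, mems):.4f}")
```

Relaxing the clock period raises yield at the cost of performance; the paper's technique enables exactly this kind of trade-off reasoning, but with foundry-calibrated distributions rather than these assumed Gaussians.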
It proposes a new procedure for dealing with parametric yield loss at the system level.

[Figure 1: Estimating timing yield and average/worst-case energy for the system's memory organisation. The yield estimation tool takes the memory architecture of a memory organization and a memory library (including manufacturing information) as inputs, and reports yield, clock period and energy consumption.]

The main requirement of such a tool is that the energy/delay distributions of the individual components must be characterized a priori, using models of the manufacturing process of the target foundry. Obviously, the more accurate the characterization is, the better the estimation results will match reality. Indeed, this characterization or calibration task is tedious and not straightforward, as it
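A tool with the interface of Figure 1 could, in its simplest form, be approximated by Monte Carlo simulation: draw each memory's delay from its characterized distribution and count the sampled dies in which every access meets the clock period. The sketch below uses assumed Gaussian distributions and invented library entries; the paper's tool relies on foundry-calibrated process models instead.

```python
import random

def mc_yield(clock_period, memories, n_samples=20000, seed=0):
    """Monte Carlo estimate of parametric timing yield: the fraction of
    sampled dies in which every memory meets the clock period.
    Each memory is (mean delay ns, sigma ns, energy pJ/access)."""
    rng = random.Random(seed)
    ok = sum(
        all(rng.gauss(mu, sigma) <= clock_period for mu, sigma, _ in memories)
        for _ in range(n_samples)
    )
    return ok / n_samples

# Two invented library choices for the same 2-memory organization:
fast_hungry = [(0.80, 0.080, 12.0), (0.90, 0.090, 14.0)]  # fast, high energy
slow_frugal = [(1.00, 0.050, 7.0), (1.10, 0.055, 8.0)]    # slow, low energy

for name, org in [("fast/high-energy", fast_hungry), ("slow/low-energy", slow_frugal)]:
    energy = sum(e for _, _, e in org)
    print(f"{name}: yield @ 1.2 ns = {mc_yield(1.2, org):.3f}, energy = {energy:.0f} pJ")
```

Sweeping the clock period and the library assignment over such estimates produces the energy-versus-timing-yield trade-off curves the paper explores, albeit here with made-up numbers.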

