Statistical Blockade

Statistical Blockade: A Novel Method for Very Fast Monte Carlo Simulation of Rare Circuit Events, and its Application

Amith Singhee, Rob A. Rutenbar
Dept. of ECE, Carnegie Mellon University, Pittsburgh, Pennsylvania, 15213 USA
{asinghee,rutenbar}@ece.cmu.edu

Abstract

Circuit reliability under statistical process variation is an area of growing concern. For highly replicated circuits such as SRAMs and flip flops, a rare statistical event for one circuit may induce a not-so-rare system failure. Existing techniques perform poorly when tasked to generate both efficient sampling and sound statistics for these rare events. Statistical Blockade is a novel Monte Carlo technique that allows us to efficiently filter (to block) unwanted samples insufficiently rare in the tail distributions we seek. The method synthesizes ideas from data mining and Extreme Value Theory, and shows speedups of 10X-100X over standard Monte Carlo.

1. Introduction

Circuit reliability under statistical process variation is an area of growing concern. Designs that add excess safety margin, or rely on simplistic assumptions about "worst case" corners, no longer suffice. Worse, for critical circuits such as SRAMs and flip flops, replicated across 10K-10M instances on a large design, we have the new problem that statistically rare events are magnified by the sheer number of these elements. In such scenarios, an exceedingly rare event for one circuit may induce a not-so-rare failure for the entire system.

Monte Carlo analysis remains the gold standard for the required statistical modeling. Standard Monte Carlo techniques are, by construction, most efficient at sampling the statistically likely cases. Indeed, classical modifications such as Importance Sampling [1] allow Monte Carlo methods to avoid sampling these unlikely (i.e., unimportant) events. Ours is the mirror image problem: how can we efficiently sample only the statistically rare events?
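To make the cost of the brute-force approach concrete, here is a minimal sketch (not the paper's experiment; the `simulate` function is a hypothetical stand-in for an expensive SPICE run) of estimating a deep-tail probability by plain Monte Carlo:

```python
import random

random.seed(0)

def simulate(sample):
    # Stand-in for an expensive SPICE simulation: here, just a cheap
    # synthetic "performance metric" so the example runs instantly.
    return sample

def standard_mc_tail_prob(n_samples, threshold):
    """Estimate P(metric > threshold) by brute-force Monte Carlo."""
    hits = 0
    for _ in range(n_samples):
        x = random.gauss(0.0, 1.0)   # one sampled parameter set
        if simulate(x) > threshold:  # rare event: deep in the tail
            hits += 1
    return hits / n_samples

# A 4-sigma event has probability of roughly 3.2e-5: almost the entire
# simulation budget is spent on likely, uninteresting samples, and even
# 200,000 runs yield only a handful of tail hits.
p = standard_mc_tail_prob(200_000, 4.0)
print(p)
```

With a real circuit simulator in place of `simulate`, each of those 200,000 evaluations is expensive, which is exactly the inefficiency the rest of the paper attacks.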
How can we model the statistics in the tails of these heavy-tailed distributions? Importance Sampling also gives us some help to sample in the tails [2], but changes the statistics of these rare samples. Unfortunately, we need both samples and rigorous statistics to determine the reliability of critical circuits like large SRAMs, or flip flops in aggressively clocked designs operating with small setup slack. Standard Monte Carlo methods are poorly suited to this important problem.

One avenue of attack is to abandon Monte Carlo. Several analytical and semi-analytical approaches have been suggested to model the behavior of SRAM cells [3][4][5] and digital circuits [6] in the presence of process variations. All suffer from approximations necessary to make the problem tractable. [4] and [6] assume a linear relationship between the statistical variables and the performance metrics (e.g., static noise margin), and assume that the statistical process parameters and resulting performance metrics are normally distributed. This can result in gross errors, especially when modeling rare events, as we shall show later. When the distribution varies significantly from Gaussian, [4] chooses an F-distribution in an ad hoc manner. [3] presents a complex analytical model limited to a specific transistor model (the transregional model) and further limited to only static noise margin analysis for the 6T SRAM cell. [5] again models only the static noise margin (SNM) for SRAM cells under assumptions of independence and identical distribution of the upper and lower SNM, which may not always be valid.

A different avenue of attack is to modify the Monte Carlo strategy. [2] shows how Importance Sampling can be used to predict failure probabilities. Recently, [7] applied an efficient formulation of these ideas for modeling rare failure events for single 6T SRAM cells, based on the concept of Mixture Importance Sampling from [8]. The approach uses real SPICE simulations with no approximating equations.
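The importance-sampling idea behind [2][7][8] can be sketched generically (a hypothetical toy, not the cited formulations): draw from a proposal distribution shifted into the tail, then reweight every tail hit by the likelihood ratio between the true and proposal densities.

```python
import math
import random

random.seed(0)

def normal_pdf(x, mu=0.0, sigma=1.0):
    """Density of N(mu, sigma^2) at x."""
    z = (x - mu) / sigma
    return math.exp(-0.5 * z * z) / (sigma * math.sqrt(2.0 * math.pi))

def importance_sampling_tail_prob(n_samples, threshold, shift):
    """Estimate P(X > threshold) for X ~ N(0,1) by sampling from the
    shifted proposal N(shift, 1) and reweighting by the likelihood ratio."""
    total = 0.0
    for _ in range(n_samples):
        y = random.gauss(shift, 1.0)  # proposal lands in the tail often
        if y > threshold:
            # weight = target density / proposal density
            total += normal_pdf(y) / normal_pdf(y, mu=shift)
    return total / n_samples

# Far fewer runs than plain Monte Carlo for a 4-sigma event, but note
# the paper's objection: the raw samples follow the proposal, not the
# true tail distribution; only the reweighted estimate is meaningful.
p_hat = importance_sampling_tail_prob(20_000, 4.0, 4.0)
print(p_hat)  # close to the true 4-sigma probability, about 3.17e-5
```

This illustrates the trade-off the text describes: the estimate converges quickly, but the sample set itself no longer has the statistics of the rare events, which is why importance sampling alone does not satisfy the "both samples and rigorous statistics" requirement.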
However, the method only estimates the exceedance probability of a single value of the performance metric. A re-run is needed to obtain probability estimates for another value. No complete model of the tail of the distribution is computed. The method also combines all performance metrics to compute a failure probability, given fixed thresholds. Hence, there is no way to obtain separate probability estimates for each metric, other than a separate run per metric. Furthermore, given that [2] advises against importance sampling in high dimensions, it is unclear whether this approach will scale efficiently to large circuits with many statistical parameters.

In this paper, we present a novel, general and efficient Monte Carlo method that addresses both problems described above: very fast generation of samples (rare events) with sound models of the tail statistics for any performance metric. The method imposes almost no a priori limitations on the form of the statistics for the process parameters, device models, or performance metrics. The method is conceptually simple, and it exploits ideas from two rather nontraditional sources.

To obtain both samples and statistics for rare events, we may need to generate and evaluate an intractable number of Monte Carlo samples. Generating each sample is neither challenging nor expensive: we are merely creating the parameters for a circuit. Evaluating the sample is expensive, because we simulate it. What if we could quickly filter these samples, and block those that are unlikely to fall in the low-probability tails we seek? Many samples could be generated, but very few simulated. We show how to exploit ideas from data mining [9] to build classifier structures, from a small set of Monte Carlo training samples, to create the necessary blocking filter.
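The generate-many, simulate-few flow can be sketched as follows. This is a hypothetical toy, not the paper's implementation: `simulate` stands in for a SPICE run, and a simple linear rule with a safety margin stands in for the learned classifier of [9] (the margin plays the same role as the paper's deliberately conservative filter, letting borderline samples through rather than risk blocking true tail events).

```python
import random

random.seed(0)

DIM = 4  # number of statistical parameters per sample

def simulate(params):
    # Stand-in for an expensive SPICE run: a mildly nonlinear metric,
    # where large positive parameter sums push into the rare tail.
    s = sum(params)
    return s + 0.1 * s * s if s > 0 else s

def cheap_classifier(params, cutoff, margin):
    # Stand-in for the blocking filter: a linear rule on the raw
    # parameters; the margin keeps the filter conservative.
    return sum(params) > cutoff - margin

# Step 1: small training run -- simulate everything to locate the tail.
train = [[random.gauss(0.0, 1.0) for _ in range(DIM)] for _ in range(1000)]
tail_cutoff = sorted(sum(p) for p in train)[970]  # ~97th percentile

# Step 2: generate many candidates cheaply, but simulate only those the
# classifier fails to block.
candidates = [[random.gauss(0.0, 1.0) for _ in range(DIM)]
              for _ in range(50_000)]
unblocked = [p for p in candidates
             if cheap_classifier(p, tail_cutoff, margin=0.5)]
tail_metrics = [simulate(p) for p in unblocked]

print(len(unblocked), "simulated out of", len(candidates))
```

Only a small fraction of the 50,000 generated parameter sets reach the simulator, which is where the claimed speedup comes from: sample generation and classification are nearly free, and the simulation budget is concentrated on likely tail events.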
Given these samples, we show how to use the rigorous mathematics of Extreme Value Theory (EVT [10], the theory of the limiting behavior of sampled maxima and minima) to build sound models of these tail distributions. The essential "blocking" activity of the filter gives the technique its name: Statistical Blockade.

Statistical Blockade has been tested on both SRAM and flip-flop designs, including a complete 64-cell SRAM column (a 403-parameter problem), accounting for both local and global variations. (In contrast to several prior studies [5][6][9], we shall see that simulating only one cell does not correctly estimate the critical tail statistics.) However, Statistical Blockade allows us to generate both samples and accurate statistics, with speedups of 10X-100X over standard Monte Carlo.
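The EVT modeling step can be sketched with the standard peaks-over-threshold recipe (a hypothetical illustration on synthetic data, not the paper's fit): EVT says exceedances over a high threshold approach a Generalized Pareto Distribution; the sketch below fits the shape-zero special case of that family, an exponential tail, whose maximum-likelihood scale is simply the mean exceedance.

```python
import math
import random

random.seed(0)

# Synthetic stand-in for the tail metrics collected by the blockade
# filter: 100,000 draws from an exponential distribution.
data = [random.expovariate(1.0) for _ in range(100_000)]

# Peaks-over-threshold: keep only exceedances over a high threshold u.
u = sorted(data)[int(0.99 * len(data))]  # ~99th percentile
exceedances = [x - u for x in data if x > u]
p_u = len(exceedances) / len(data)       # empirical P(X > u), about 0.01

# Fit the shape-zero (exponential) member of the Generalized Pareto
# family by maximum likelihood: scale beta = mean exceedance.
beta = sum(exceedances) / len(exceedances)

def tail_prob(t):
    """Model-based estimate of P(X > t) for any t above the threshold u."""
    return p_u * math.exp(-(t - u) / beta)

# Unlike empirical Monte Carlo counting, the fitted tail model can be
# evaluated at any threshold, including points beyond the largest
# observed sample.
print(tail_prob(u + 5 * beta))
```

This is what the text means by a "complete model of the tail": one fit yields exceedance probabilities for every threshold of interest, rather than one re-run per threshold as in the importance-sampling approach of [7].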

