Coverage Metrics for Functional Validation of Hardware Designs

Serdar Tasiran, Compaq Systems Research Center
Kurt Keutzer, University of California, Berkeley

Software simulation remains the primary means of functional validation for hardware designs. Coverage metrics ensure optimal use of simulation resources, measure the completeness of validation, and direct simulations toward unexplored areas of the design. This article surveys the literature, and discusses the experiences of verification practitioners, regarding coverage metrics.

Correcting functional errors in hardware designs can be very costly, thus placing stringent requirements on functional validation. Moreover, validation is so complex that, even though it consumes the most computational resources and time, it is still the weakest link in the design process. Ensuring functional correctness is the most difficult part of designing a hardware system.

Progress in formal verification techniques has partially alleviated this problem. However, automated methods invariably involve exhaustive analysis of a large state space and are therefore constrained to small portions of a design. Methods that scale to systems of practical size require either formal, hierarchical design descriptions with clean, well-defined interfaces or considerable human effort after the design is completed. Either way, applying these methods to complex circuits with multiple designers is difficult. Software simulation's computational requirements, on the other hand, scale well with system size. For this reason, and perhaps because of its intuitive appeal, simulation remains the most popular functional validation method.

Nevertheless, validation based on simulation can be only partially complete. To address this incompleteness, simulation-based semiformal methods have been developed. These methods exert better control over simulation by using various mechanisms to produce input stimuli and evaluate simulation results. Validation coverage, however vague the concept, is essential for evaluating and guiding such combinations of simulation-based and formal techniques. The ideal is to achieve comprehensive validation without redundant effort. Coverage metrics help approximate this ideal by

■ acting as heuristic measures that quantify verification completeness, and
■ identifying inadequately exercised design aspects and guiding input stimulus generation (see the sketch after this paragraph).

Coverage analysis can be instrumental in allocating computational resources and coordinating different validation techniques [1,2].
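To make these two roles concrete, the following minimal sketch (ours, not the authors') tracks a toy FSM state-coverage metric during simulation. All names are hypothetical; a real coverage tool would instrument the HDL description rather than take explicit samples.

# Toy coverage monitor: tracks which FSM states a simulation has
# visited, reports a completeness figure, and lists the coverage
# holes that remain. Illustrative sketch only; names are invented.

class CoverageMonitor:
    def __init__(self, coverable_items):
        self.items = set(coverable_items)  # e.g., all FSM states
        self.hit = set()                   # items exercised so far

    def sample(self, item):
        # Record one item observed during a simulation cycle.
        if item in self.items:
            self.hit.add(item)

    def coverage(self):
        # Heuristic completeness measure: fraction of items exercised.
        return len(self.hit) / len(self.items)

    def holes(self):
        # Unexercised items: candidates toward which to direct new stimuli.
        return self.items - self.hit

monitor = CoverageMonitor({"IDLE", "REQ", "GRANT", "RELEASE"})
for state in ["IDLE", "REQ", "GRANT", "IDLE"]:  # states seen in one run
    monitor.sample(state)
print(monitor.coverage())  # 0.75
print(monitor.holes())     # {'RELEASE'}

The coverage() figure quantifies validation progress, and holes() identifies the inadequately exercised aspects toward which stimulus generation should be directed.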
Although coverage-based techniques are routinely applied successfully to large industrial designs [3-5], increasing design complexities have led to renewed interest in this area. In this article, we summarize existing work on coverage metrics, report on industrial experiences in using them, and identify the strengths and weaknesses of each metric class and of coverage-based validation in general.

Coverage analysis: imprecise but indispensable

One principal use of coverage analysis is to measure the validation effort's adequacy and progress. Ideally, increasing the coverage should increase confidence in the design's correctness. Direct correspondence between coverage metrics and error classes should ensure that complete coverage with respect to a metric will detect all possible errors of a certain type. The lack of a well-established formal characterization of design errors makes ascertaining such correspondence difficult. Unlike manufacturing testing, where practical experience has shown that stuck-at faults are a good proxy for actual manufacturing defects, no canonical error model achieves the same for common design errors [3]. Design errors are less localized and more difficult to characterize, making it difficult to establish a formal error model (a common denominator) for even subsets of design bugs. As a result, there is, at most, an intuitive or empirical connection between coverage metrics and bugs, and the particular choice of metrics is as often motivated by ease of definition and measurement as by correspondence with actual errors.

Designers typically use a set of metrics to measure simulation-based validation progress, starting with simple metrics that require little effort and gradually moving to more sophisticated and expensive ones. Giving a formal meaning to "more sophisticated" also proves difficult. Even if metric M1 subsumes metric M2, the input stimuli that achieve complete M1 coverage are not necessarily any better at detecting bugs than input stimuli that achieve complete M2 coverage. (Metric M1 subsumes metric M2 if and only if, on any design, whenever a set of input stimuli S achieves 100% M1 coverage, S also achieves 100% M2 coverage.) To make matters worse, a practically useful, formal way of comparing coverage metrics has not yet been devised [6]. Except for trivial cases, no metric is provably superior to another. In the absence of useful formal relationships between coverage metrics, a comparative measure of "good at uncovering bugs" also needs to be established intuitively or empirically.
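In symbols (our restatement of the parenthetical definition above, with Cov_M(D, S) denoting the coverage that stimulus set S achieves on design D under metric M):

M_1 \succeq M_2 \iff \forall D,\ \forall S:\ \mathrm{Cov}_{M_1}(D, S) = 100\% \;\Rightarrow\; \mathrm{Cov}_{M_2}(D, S) = 100\%

For instance, under the usual definitions, complete branch coverage of an RTL description implies complete statement coverage, yet the stimuli that achieve it carry no guarantee of exposing more bugs.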


Despite these drawbacks, coverage metrics are indispensable tools for simulation-based validation. Input stimuli guided by coverage information commonly detect more bugs than conventional simulation and handwritten, directed tests [3-5]. Moreover, coverage metrics provide more measures of verification adequacy than bug detection statistics alone [7]. At the current state of the art, no set of metrics has emerged as a de facto standard of how much simulation is enough. However, as design groups accumulate experience with certain metrics applied to particular design classes, criteria are evolving to assess how much simulation is not enough [8]. It is in this capacity, in addition to guiding simulation input generation, that coverage metrics have found their most widespread use. Because metrics play a crucial role in functional validation, extensive studies are needed to correlate classes of bugs and coverage metrics. Despite some preliminary work [3,9], progress in this area is far from satisfactory. Few design groups today feel confident that they have a comprehensive set of metrics.

Besides corresponding well with design errors, another important requirement for coverage metrics is ease of use:

■ The overhead of measuring a coverage metric should be tolerable.
■ Generating input stimuli
