CS 350, slide set 10
M. Overstreet
Old Dominion University
Spring 2006

Reading
- TSP text, Ch. 9, 10
- Remember, you are supposed to have read the chapter on your role from Ch. 11-15
- And Ch. 16, 17, and 18

Deadlines, Guidelines - 1
- Project: due April 30
- Submit to [email protected]
- I must be able to determine who did what. The name (or names) of whoever did each item must be included with it.
- I must be able to determine what works. Include actual output whenever appropriate.
- See the checklist section of the web site for a complete list of due dates

Project evaluation
- Individual project grades are determined as follows:
  • Your peer evaluations: 15%
  • My impression of your contributions: 15%
  • Forms: 35%
  • Evidence of execution: 30% (note that without execution, some forms must be incomplete)
  • Quality/completeness of materials: 10%
- As I go through each group's submissions, I will record what you've done. Your grade will be based on that list, your peer evals (if I can believe them), and my impression of your contributions.

Testing: selected case studies
- Remember: it is hard to get this kind of data
- Magellan spacecraft (1989-1994) to Venus
  • 22 KLOC; this is small
  • 186 defects found in system test: 42 critical, and only 1 critical defect found in the 1st year of testing
  • Project a success, but several software-related emergencies
- Galileo spacecraft (1989 launch, 1995 arrival)
  • Testing took 6 years
  • Final 10 critical defects found after 288 weeks of testing

PSP/TSP approach
- Find most defects before integration testing, during:
  • Reviews (requirements, HLD, DLD, test plans, code)
  • Inspections
  • Unit testing
- Each of these activities is expensive, but testing is worse
- TSP goal: use testing to confirm that code is high quality
  • May need to return low-quality code for rework or scrapping
- Data show a strong relationship between defects found in testing and defects found by customers

Build and integration strategies: big bang
- Build & test all pieces separately, then put them all together at the end and see what happens
- Out of favor: debugging all the pieces at the same time makes it harder to identify the real causes of problems
- Industry experience: 10 defects/KLOC
  • All-too-typical: a system with 30,000 defects (at that rate, a 3,000 KLOC system)

B & I strategies: one subsystem at a time
- Design the system so that it can be implemented in steps, with each step useful
- First test a minimal system, after its components have been tested
- Then add one component at a time
  • Defects are more likely to come from the new parts
- Not all systems admit of this approach

B & I strategies: add clusters
- If the system has components with dependencies among them, it may be necessary to add clusters of interacting components

B & I strategies: top down
- Top-down integration: integrate top-level components first, with lower-level components stubbed as necessary
- May identify integration issues earlier than other approaches
- I suggest this approach for this project
  • Write the top-level routine first when feasible; it calls stubbed functions (see the sketch below)
  • As modules become available, they replace the stubbed versions
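A minimal sketch of the stubbing idea in C; the routine and module names here are hypothetical, chosen only for illustration, and nothing below comes from the text:

    /* Top-down integration: the top-level routine is written and
       tested first against stubs; each stub is replaced by the real
       module as it becomes available.  All names are made up. */
    #include <stdio.h>

    /* Stub: returns a fixed value so the top level can run today. */
    static int read_input(void)
    {
        printf("read_input: stub called, returning dummy value\n");
        return 42;
    }

    /* Stub: real processing not yet implemented; passes data through. */
    static int process(int raw)
    {
        printf("process: stub called\n");
        return raw;
    }

    /* Top-level routine, written first. */
    int main(void)
    {
        int result = process(read_input());
        printf("result = %d\n", result);
        return 0;
    }

Each stub prints a message when called, so a test run shows which paths the top level exercises before any real module exists.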
Typical testing goals
- Show the system provides all specified functions
  • Does what it is supposed to do
- Show the system meets stated quality goals
  • MTBF (mean time between failures), for example
- Show the system works under stressful conditions
  • Doesn't do "bad" things when other systems fail (e.g., power), the network overloads, or the disk fills
- In reality, schedule/budget considerations may limit testing to only the most frequent or critical behaviors

Test log includes:
- Date, start and end times of tests
- Name of tester
- Which tests were run
- What code & configuration was tested
- Number of defects found
- Test results
- Other pertinent information
  • Special tools, system config., operator actions
- See the sample test log, pg. 172, and the sketch of one possible record layout below
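If the log is kept in machine-readable form, one possible record layout in C follows; the field names and sizes are my own illustration, not the text's sample log:

    /* Sketch of one test-log entry; the fields mirror the list above.
       Field names and sizes are illustrative only. */
    #include <stdio.h>

    struct test_log_entry {
        char date[11];            /* e.g. "2006-04-30" */
        char start_time[6];       /* e.g. "13:05" */
        char end_time[6];         /* e.g. "14:40" */
        char tester[32];          /* name of tester */
        char tests_run[128];      /* which tests were run */
        char configuration[128];  /* code & configuration tested */
        int  defects_found;       /* number of defects found */
        char results[128];        /* test results */
        char notes[256];          /* tools, system config., operator actions */
    };

    int main(void)
    {
        struct test_log_entry e = {
            "2006-04-30", "13:05", "14:40", "A. Tester",
            "regression suite", "build 12, default config",
            2, "2 failures; see defect log", "none"
        };
        printf("%s %s-%s %s: %d defect(s)\n",
               e.date, e.start_time, e.end_time, e.tester, e.defects_found);
        return 0;
    }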
Documentation - 1
- Probably needs another course
- You must write from the perspective of the users of the documentation:
  • Other programmers on the team
  • Future maintenance programmers
  • Installers
  • Managers
  • Users
- Better to hire English majors to write documentation?
  • It may be easier to teach them the computing part than to teach technical geeks how to write well

Documentation - 2
- Developers often do a poor job
- Even when proofreading, omissions (what you forgot to tell the reader) often go undetected, since the writer already knows them
- A student just finished an MS thesis on software metrics of open-source code and did not explain what KDSI meant until the end of the thesis. I missed it too!
- Guidelines: include
  • A glossary to define special terms
  • A detailed table of contents
  • A detailed index
  • Sections on error messages, recovery procedures, and troubleshooting procedures

Postmortem script
- We'll skip this; it works better if there are 3 cycles
- We will discuss it in class; be ready to tell me
  • Where the process worked and where it did not
  • How actual performance compared with expected
  • Where your team did well, and where it did not

PIP objectives
- While the project is fresh, record good ideas on process improvements
- Implicit goal: be skeptical about TSP as the solution to all software problems
  • Each organization and problem domain probably has its own unique problems; one size does not fit all
- But a request: be tolerant. Learn from others' experience; don't reject too quickly.

Peer evaluations
- Use the form from the web
- It includes your impression of who
  • had the hardest role (percentages sum to 100)
  • did the most work (percentages sum to 100)
- You must use team member names and roles (unlike the text's form)
- Written text:
  • Your evaluation of TSP
  • Your evaluation of how well you fulfilled your role: what did you do that worked well? What did you do that did not work?