CS 350, slide set 10
M. Overstreet
Old Dominion University
Fall 2005

Reading
- TSP text, Ch. 9, 10
- Remember, you are supposed to have read the chapter on your role from Ch. 11-15
- And Ch. 16, 17, and 18

Deadlines, Guidelines - 1
- Project: due Monday, Dec. 12
- Submit to [email protected]
- I must be able to determine who did what. The name (or names) of who did it must be included with each item.
- I must be able to determine what works. Include actual output whenever appropriate.
- See the web site's checklist section for a complete list of due dates

Additional Form
- Additional form: submission checklist
- List of everything submitted
  • Based on the submissions checklist on the web
- When submitted
- Identification of who completed each item submitted (including forms)

Project evaluation - 1
- I must be able to determine:
- Who did what
  • Put names in documents (code, forms, test reports, etc.)
  • If no name, no credit
- What process steps were completed
  • Will rely on forms
- For each module's reviews and inspections
  • Will rely on forms
- What code actually works
  • Will rely on files produced during testing, so make sure these are included with modules
- How much testing was performed
  • Will rely on test reports

Project evaluation - 2
- Individual project grades are determined as follows:
  • Your peer evaluations: 15%
  • My impression of your contributions: 15%
  • Forms: 35%
  • Evidence of execution: 30% (note that without execution, some forms must be incomplete)
  • Quality/completeness of materials: 10%
- As I go through each group's submissions, I will record what you've done.
- Your grade will be based on that list.

Project code suggestions
- Most errors come from misunderstanding of requirements
- These types of errors should be identified in inspections

Testing: selected case studies
- Remember: it is hard to get this kind of data
- Magellan spacecraft (1989-1994) to Venus
  • 22 KLOC – this is small
  • 186 defects found in system test
  • 42 critical
  • Only 1 critical defect found in the 1st year of testing
  • Project a success, but several software-related emergencies
- Galileo spacecraft (1989 launch, arrived 1995)
  • Testing took 6 years
  • The final 10 critical defects were found after 288 weeks of testing

Announcements
- Exam 2: in class Thursday. Take-home due Friday; on the class web site.
- ACM is making Ubuntu Linux CDs available in the CS office in Hughes
  • Versions for PCs and Macs
  • Can install and test without altering data on your hard drive

PSP/TSP approach
- Find most defects before integration testing, during:
  • Reviews (requirements, HLD, DLD, test plans, code)
  • Inspections
  • Unit testing
- Each of these activities is expensive, but testing is worse
- TSP goal: use testing to confirm that code is high quality
  • May need to return low-quality code for rework or scrapping
- Data shows a strong relationship between defects found in testing and defects found by customers

Build and integration strategies: big bang
- Build & test all pieces separately, then put them all together at the end and see what happens
- Out of favor
- Debugging all the pieces at the same time makes it harder to identify the real causes of problems
- Industry experience: 10 defects/KLOC
  • All-too-typical: a system with 30,000 defects

B & I strategies: one subsystem at a time
- Design the system so it can be implemented in steps, with each step useful
- First test a minimal system
- After its components have been tested, add one component at a time
  • Defects are more likely to come from the new parts
- Not all systems lend themselves to this approach

B & I strategies: add clusters
- If the system has components with dependencies among them, it may be necessary to add clusters of interacting components

B & I strategies: top down
- Top-down integration: integrate top-level components first
  • With lower-level components stubbed as necessary
- May identify integration issues earlier than other approaches
- I suggest this approach for this project
  • Write the top-level routines first when feasible; they call stubbed functions
  • As modules become available, they replace the stubbed versions

Typical testing goals
- Show the system provides all specified functions
  • Does what it is supposed to do
- Show the system meets stated quality goals
  • MTBF, for example
- Show the system works under stressful conditions
  • Doesn't do "bad" things when other systems (e.g., power) fail, the network overloads, or the disk fills
- In reality, schedule/budget considerations may limit testing to only the most frequent or critical behaviors

Test log includes:
- Date, start and end time of tests
- Name of tester
- Which tests were run
- What code & configuration was tested
- Number of defects found
- Test results
- Other pertinent information
  • Special tools, system configuration, operator actions
- See the sample test log, pg. 172

Documentation - 1
- Probably needs another course
- Must write from the perspective of the user of the documentation
  • Other programmers on the team
  • Future maintenance programmers
  • Installers
  • Managers
  • Users
- Better to hire English majors to write documentation?
  • Easier to teach them the computing part than to teach technical geeks how to write well?

Documentation - 2
- Developers often do a poor job
- Even when proofreading, omissions (what you forgot to tell the reader) often go undetected, since the writer already knows them
  • A student just finished an MS thesis on software metrics of open-source code and did not explain what KDSI meant until the end of the thesis! I missed it too!
- Guidelines: include
  • A glossary to define special terms
  • A detailed table of contents
  • A detailed index
  • Sections on error messages, recovery procedures, and troubleshooting procedures

Testing script
- Covered in the text

Postmortem
- Why?
- We're still learning how to do this
- Organization goal: learn from this project to improve the next one
  • We shouldn't keep making the same mistakes
- Individual goal: make you a more valuable employee
  • Update your personal checklists, etc.

Postmortem script
- We'll skip it; it works better with 3 cycles
- Will discuss in class; be ready to tell me
  • Where the process worked and where it did not
  • How did
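The test-log fields listed on the "Test log includes" slide map naturally onto a simple record type. A minimal Python sketch, under the assumption that your team keeps the log programmatically; all field names here are illustrative, not taken from the TSP forms:

```python
from dataclasses import dataclass
from datetime import datetime

# Minimal test-log record capturing the fields from the
# "Test log includes" slide. Field names are illustrative.
@dataclass
class TestLogEntry:
    start: datetime            # date and start time of the test session
    end: datetime              # end time of the test session
    tester: str                # name of tester
    tests_run: list[str]       # which tests were run
    configuration: str         # what code & configuration was tested
    defects_found: int         # number of defects found
    results: str               # test results
    notes: str = ""            # special tools, system config., operator actions

# Hypothetical example entry:
entry = TestLogEntry(
    start=datetime(2005, 11, 28, 9, 0),
    end=datetime(2005, 11, 28, 10, 30),
    tester="A. Student",
    tests_run=["unit: parser", "integration: parser+executor"],
    configuration="build 1.2, lab PC",
    defects_found=2,
    results="1 test failed (parser rejects empty input)",
)
```

Keeping entries in a structured form like this makes it easy to produce the counts (tests run, defects found) that the project evaluation relies on.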
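The top-down build-and-integration strategy recommended for the project can be sketched in code: the top-level routine is written first against stubs, and each stub is replaced once the real module passes its own tests. A minimal Python sketch; all function names are hypothetical placeholders, not project code:

```python
# Top-down integration sketch: write the top-level routine first;
# it calls stubs that real modules replace as they become available.
# All names below are hypothetical placeholders.

def parse_command_stub(line):
    # Stub for the not-yet-written parser: returns a fixed, known
    # value so the top-level control flow can be tested immediately.
    return ("noop", [])

def run_command_stub(command, args):
    # Stub for the not-yet-written executor.
    return "ok"

def top_level(line, parse=parse_command_stub, run=run_command_stub):
    """Top-level routine, testable before the lower modules exist."""
    command, args = parse(line)
    return run(command, args)

# Once the real modules are available, swap them in without
# changing top_level:
#   top_level(line, parse=real_parse, run=real_run)
```

Because the stubs are plain parameters, each real module can be integrated one at a time, so new defects are most likely in the part just added.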