Empirical Results from an Experiment on Value-Based Review (VBR) Processes

Keun Lee and Barry Boehm
Center for Software Engineering
University of Southern California
Los Angeles, CA 90089-0781
(keunlee, boehm)@usc.edu

Abstract

As part of our research on value-based software engineering, we conducted an experiment on the use of value-based review (VBR) processes. We developed a set of VBR checklists with issues ranked by success-criticality, and a set of VBR processes prioritized by issue criticality and stakeholder-negotiated product capability priorities. The experiment involved 28 independent verification and validation (IV&V) subjects (full-time working professionals taking a distance learning course) reviewing specifications produced by 18 real-client, full-time student e-services projects. The IV&V subjects were randomly assigned to use either the VBR approach or our previous value-neutral checklist-based reading (CBR) approach. The difference between the groups was not statistically significant for number of issues reported, but was statistically significant for number of issues per review hour, total issue impact, and cost effectiveness in terms of total issue impact per review hour. For the latter, the VBRs were roughly twice as cost-effective as the CBRs.

1. Introduction

Finding more cost-effective techniques for achieving software quality is a major issue for developers. Reviews are one of the main processes for finding defects from the earliest development stages and increasing the quality of the deliverables [1,2,3]. Peer reviews have been used in requirements analysis, architecture, design, and coding. Many research efforts have focused on formulating effective review processes to find defects [4,5]. Studies have addressed review team composition and procedures, review preparation and duration, and criteria for focusing reviewers on sources of defects.

Initial approaches to focusing reviewers added checklists to the review process [2]. Because checklist-based reading (CBR) is easy to understand, it has become the most common review-focusing technique in current practice. Another approach to reviewing artifacts is perspective-based reading (PBR), which focuses on different reviewer perspectives, such as the designer and tester perspectives [8]. Distinct review perspectives help reviewers find more defects with less overlap. Another reading technique is defect-based reading, which focuses on different defect classes [6]. Other proposed review techniques include functionality-based reading (FBR) [7] and usage-based reading (UBR) [9,10].

A number of studies have compared the effectiveness of these techniques [4,5,7,11,12]. They agree in finding that focused review methods do better than unfocused reviews; that a method's cost-effectiveness varies with the nature of the artifacts being reviewed; and that, in general, the preferred method is situation-dependent. However, the cost-effectiveness metrics used for these methods and their evaluation have been (except for [10]) value-neutral, in that each defect is considered equally important. This means that much effort is spent on trivial issues such as obvious typos and grammar errors.

Here we present a new form of PBR called value-based reading, and compare the use of VBR, with its value-based procedures and checklists, against a value-neutral CBR approach. The two approaches were applied by 28 randomly selected professional software engineers taking a graduate software engineering project course by distance learning at USC. Their project assignment was to independently verify and validate (IV&V) the artifacts from one of 18 real-client e-services applications being developed by the on-campus MS-student teams in the course [14].

Value-based review techniques and cost-effectiveness metrics are explained in the next section. The experiment comparing value-based review with traditional checklist-based review is described in Section 3. Results of the experiment are presented in Section 4. Threats to the validity of the experiment are discussed in Section 5. Discussion of the results and the conclusions are given in Section 6.

2. Value-based Verification & Validation (VBV&V)

2.1. Concepts in VBV&V

The basic idea of VBV&V, as with other value-based software engineering (VBSE) activities, is to treat each V&V activity (analysis, review, test) as a candidate investment in improving the software development process [16]. Earlier VBSE activities involve prioritizing the capabilities to be developed; VBV&V activities are then sequenced by priority and criticality.

Priority is determined through negotiations and meetings with clients. In the experiment, priority takes the values High, Medium, or Low (3, 2, or 1). Criticality measures how critical a review issue is to the project's success. Criticality values are generally furnished by experts, but reviewers can also assign them in special circumstances. In the experiment, criticality likewise takes the values High, Medium, or Low (3, 2, or 1). The numerical values of priority and criticality are used to guide V&V activities and to evaluate their effectiveness. Whereas priority reflects stakeholders' "win" conditions, criticality is driven more by domain experts or developers. For example, if a capability is not important to users, its priority can be set to Low in the client meetings. However, if that capability is related to other requirements and its failure would seriously impact other capabilities, its criticality can be set to High by domain experts or developers.

Priority and criticality together determine the value of each issue, and review effectiveness is based on those issue values. The assumption of value-based review is that issues have different values, and that if higher-value issues are reviewed and fixed first, the effectiveness of the review increases. Thus, effectiveness can be measured from the value of each issue. Section 2.3 explains the process of calculating effectiveness.
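To make the scoring concrete, the sketch below illustrates how the numerical priority and criticality ratings might be combined into a per-issue value and a cost-effectiveness figure (total issue impact per review hour, the metric used in the abstract). The product formula, the issue_value and cost_effectiveness helpers, and the sample reviewer log are illustrative assumptions only; the paper's actual effectiveness calculation is the one described in Section 2.3.

```python
# Minimal sketch of value-based issue scoring, under assumptions noted below.
# ASSUMPTION: an issue's value is taken as the product of its capability
# priority and its criticality; the paper defers the exact formula to Section 2.3.

RATING = {"High": 3, "Medium": 2, "Low": 1}

def issue_value(priority: str, criticality: str) -> int:
    """Combine a stakeholder-negotiated priority with an expert-assigned
    criticality into a single numeric issue value (assumed: product)."""
    return RATING[priority] * RATING[criticality]

def cost_effectiveness(issues, review_hours: float) -> float:
    """Total issue impact found per review hour."""
    total_impact = sum(issue_value(p, c) for p, c in issues)
    return total_impact / review_hours

# Hypothetical reviewer log: (capability priority, issue criticality) pairs.
issues_found = [("High", "High"), ("Medium", "High"), ("Low", "Low")]
print(cost_effectiveness(issues_found, review_hours=2.0))  # (9 + 6 + 1) / 2 = 8.0
```

Under this assumed scoring, a reviewer who spends the same two hours finding only Low/Low issues would score far lower, which is the intuition behind steering review effort toward higher-value issues first.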
2.2. Value-based Review

We have developed an experimental set of value-based checklists for reviewing specifications to test the hypothesis that review activities will be more cost-effective if review effort is focused on the higher-priority system capabilities and the higher-criticality sources of implementation risk [13]. Our spiral approach to systems and software engineering emphasizes risk-driven

