Knowledge-Sharing Issues in Experimental Software Engineering

Forrest Shull ([email protected]), Fraunhofer Center for Experimental Software Engineering, Maryland
Manoel Mendonça ([email protected]), UNIFACS
Victor Basili ([email protected]), Fraunhofer Center for Experimental Software Engineering and University of Maryland
Jeffrey Carver ([email protected]), University of Maryland
José C. Maldonado ([email protected]), ICMC-USP
Sandra Fabbri ([email protected]), UFSCar
Guilherme Horta Travassos ([email protected]), COPPE/UFRJ
Maria Cristina Ferreira ([email protected]), ICMC-USP

Abstract

Recently, awareness of the importance of replicating studies has been growing in the empirical software engineering community. The results of any one study cannot simply be extrapolated to all environments, because there are many uncontrollable sources of variation between different environments. In our work, we have reasoned that the availability of laboratory packages for experiments can encourage better replications and complementary studies. However, even with effectively specified laboratory packages, the transfer of experimental know-how can still be difficult. In this paper, we discuss the collaboration structures we have been using in the Readers’ Project, a bilateral project supported by the Brazilian and American national science agencies that is investigating replications and the transfer of experimental know-how. In particular, we discuss how these structures map to the Nonaka-Takeuchi knowledge-sharing model, a well-known paradigm in the knowledge management literature. We describe an instantiation of the Nonaka-Takeuchi model for software engineering experimentation, establishing a framework for discussing knowledge-sharing issues related to experimental software engineering. We use two replications to illustrate some of the knowledge-sharing issues we have faced and discuss the mechanisms we are using to tackle those issues in the Readers’ Project.
1. Introduction

In the past few years, there has been a growing awareness in the empirical software engineering community of the importance of replicating studies [e.g. (Brooks, 1996), (Johnson, 1996), (Lott, 1996), (Basili, 1999), (Miller, 2000), (Shull, 2001)]. Most researchers accept that no single study of a technology should be considered definitive. Too many uncontrollable sources of variation exist from one environment to another for the results of any study, no matter how well run, to be extrapolated to all possible software development environments. The goal is to build a consolidated, empirically based body of knowledge that identifies the benefits and costs of various techniques and tools to support the engineering of software. A result of this realization is an increased commitment to running more studies in a variety of environments.

Replication in different environments is an important characteristic of any laboratory science; it is the basis for credibility and learning. Complementary, replicated studies allow researchers to combine knowledge directly or via some form of meta-analysis. Since intervening factors and threats to validity can almost never be completely ruled out of a study, complementary studies also allow more robust conclusions to be drawn when related studies address one another’s weak points. In software engineering, this replication process enables us to build a body of knowledge about families of related techniques and basic principles of software development.

It is important to note that, for the purposes of building up a body of knowledge, “replication” needs to be defined relatively broadly: while in many contexts the term implies repeating a study without making any changes, this definition is too narrow for our purposes.
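To make the meta-analysis point concrete, the following sketch pools effect sizes from replicated studies using inverse-variance (fixed-effect) weighting, a standard way to combine results. This is purely illustrative: the effect sizes and variances below are invented, not drawn from the Readers’ Project or any study cited in this paper.

```python
# Illustrative sketch: fixed-effect meta-analysis of replicated studies.
# Each study i contributes an effect size e_i with sampling variance v_i;
# the pooled estimate weights each study by w_i = 1 / v_i.

def pooled_effect(effects, variances):
    """Return the inverse-variance weighted mean effect and its variance."""
    weights = [1.0 / v for v in variances]
    mean = sum(w * e for w, e in zip(weights, effects)) / sum(weights)
    return mean, 1.0 / sum(weights)

# Three hypothetical replications of the same review-technique study
# (standardized mean differences and their sampling variances):
effects = [0.40, 0.55, 0.30]
variances = [0.04, 0.09, 0.05]

mean, var = pooled_effect(effects, variances)
print(round(mean, 3), round(var, 3))
```

Note that a fixed-effect pooling like this assumes the replications estimate the same underlying effect; when environments differ systematically, as this paper argues they often do, a random-effects model would be the more defensible choice.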
In this work, we consider a replication to be a study that is run based on the design and results of a previous study, and whose goal is to either verify or broaden the applicability of the initial study's results. For example, a replication in which exactly the same study is run can be used to verify the results of the original. On the other hand, if a researcher wishes to explore the applicability of a result in a different context, the design of the original study may be slightly modified and still be considered a replication.

In our own work, we have reasoned that better replications and complementary studies can be encouraged by the availability of laboratory packages that document an experiment. A laboratory package describes an experiment in specific terms, provides materials for replication, highlights opportunities for variation, and builds a context for combining the results of different types of experimental treatments. Laboratory packages build an experimental infrastructure for supporting future replications. They establish a basis for confirming or denying original results, complementing the original experiment, and tailoring the object of study to a specific experimental context.

However, despite our high hopes, our experience has shown that replication is difficult and that lab packages are not the solution by themselves. Even when both the original researchers and the replicating researchers are experienced experimentalists, there are so many sources of variation and implicit assumptions about the experimental context that composing a static lab package that describes all relevant aspects of the experiment, in such a way that unexpected sources of variation are not introduced into the replication, is nearly impossible.
As examples, consider the following cases from our own experience:

• A study of a software review technique unintentionally introduced a source of variation when a limit was placed on the time available for performing the review (in the original experiment, the time was open-ended). The results of the two studies were very different (and incomparable) because subjects in the replication altered their behavior to try to prioritize aspects of the review and could not check their work, while subjects in the original study were allowed to work under more realistic conditions.

• Another study of software review techniques introduced a possible variation in results when the wrong time estimate was given to reviewers. When the review took much longer than the reviewers had expected, they reported feeling frustrated and de-motivated with the technique and quite possibly reviewed less

