Classifying Software Changes: Clean or Buggy?

Download: PDF.

“Classifying Software Changes: Clean or Buggy?” by Sunghun Kim, E. James Whitehead, Jr., and Yi Zhang. IEEE Transactions on Software Engineering, vol. 34, no. 2, March/April 2008, pp. 181-196.


This paper introduces a new technique for predicting latent software bugs, called change classification. Change classification uses a machine learning classifier to determine whether a new software change is more similar to prior buggy changes or to prior clean changes. In this manner, change classification predicts the existence of bugs in software changes. The classifier is trained using features (in the machine learning sense) extracted from the revision history of a software project stored in its software configuration management repository. The trained classifier can classify changes as buggy or clean, with 78 percent accuracy and 60 percent buggy-change recall on average. Change classification has several desirable qualities: 1) the prediction granularity is small (a change to a single file), 2) predictions do not require semantic information about the source code, 3) the technique works for a broad array of project types and programming languages, and 4) predictions can be made immediately upon the completion of a change. Contributions of this paper include a description of the change classification approach, techniques for extracting features from the source code and change histories, a characterization of the performance of change classification across 12 open source projects, and an evaluation of the predictive power of different groups of features.
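To give a rough feel for the approach, the sketch below trains a toy bag-of-words classifier on labeled past changes and uses it to classify a new change. This is only an illustration under simplified assumptions: the paper extracts much richer features (terms from source code, log messages, metadata, and complexity metrics) and evaluates several classifiers, while this sketch uses a simple naive Bayes model over change text with invented training examples.

```python
import math
from collections import Counter

def tokenize(change_text):
    """Extract toy bag-of-words features from a change's text."""
    return change_text.lower().split()

class ChangeClassifier:
    """Naive Bayes sketch of change classification: is a new change
    more similar to prior buggy changes or prior clean changes?"""

    def __init__(self):
        self.counts = {"buggy": Counter(), "clean": Counter()}
        self.n_changes = {"buggy": 0, "clean": 0}

    def train(self, change_text, label):
        self.counts[label].update(tokenize(change_text))
        self.n_changes[label] += 1

    def classify(self, change_text):
        vocab = len(set(self.counts["buggy"]) | set(self.counts["clean"]))
        scores = {}
        for label in ("buggy", "clean"):
            total = sum(self.counts[label].values())
            # Log prior plus log likelihood with add-one smoothing.
            score = math.log(self.n_changes[label] / sum(self.n_changes.values()))
            for tok in tokenize(change_text):
                score += math.log((self.counts[label][tok] + 1) / (total + vocab))
            scores[label] = score
        return max(scores, key=scores.get)

# Hypothetical labeled change history (labels would really come from
# tracing later bug fixes back to the changes that introduced the bug).
clf = ChangeClassifier()
clf.train("fix null pointer dereference in parser", "buggy")
clf.train("fix off by one error in loop bound", "buggy")
clf.train("update documentation and comments", "clean")
clf.train("rename variable for clarity", "clean")

print(clf.classify("fix crash caused by null pointer"))  # -> buggy
```

Because the features are drawn only from the change itself and its log message, a prediction like this can be made immediately when a change is committed, which is one of the qualities the paper emphasizes.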


BibTeX entry:

@article{kim2008classifying,
   author = {Sunghun Kim and E. James Whitehead,~Jr. and Yi Zhang},
   title = {Classifying Software Changes: Clean or Buggy?},
   journal = {IEEE Transactions on Software Engineering},
   volume = {34},
   number = {2},
   pages = {181--196},
   month = {March/April},
   year = {2008}
}

