Response to "Adopting Software Engineering Research"

In the previous issue of WoW, David Notkin addresses the question of why software engineering research is not adopted in industry. He identifies the dominant reason as a lack of research that meets the needs of practitioners. He also notes that some researchers may have an attitude problem toward practitioners. But researching practically relevant problems and treating software practitioners with respect is not enough. The truly dominant requirement is to provide credible evidence that a relevant problem has actually been solved, or at least simplified significantly.

The emphasis here is on credibility. Credible evidence comes from field studies, case studies, and controlled experiments in which proposed solutions are tested and compared in realistic environments. Collecting empirical evidence can sometimes be done quickly, but more often it takes years. For example, code inspections were originally proposed by Fagan in 1976. It is only now, twenty years later, that the body of knowledge about how best to organize inspections is rounding out. The proceedings of this conference contain five empirical studies about inspections, and we can observe that diverse and independent experiments are now providing consistent results regarding the influence of various factors on defect detection and inspection interval. I think this is great research and the results are credible. An organization deciding to adopt inspection today is on much firmer ground than in 1976.

I think software researchers expect (or are pressured into expecting) that their research will be adopted on fairly flimsy evidence. It is sometimes surprising on what poor evidence industry actually adopts a new idea. For example, OO technology has found widespread use without rigorous experimentation. Indeed, the jury on the benefits of OO is still out. There is some evidence that OO might result in longer development time and poorer quality than traditional techniques--in both development and maintenance. If this evidence can be corroborated, then introducing OO has actually been harmful. When comparing this situation with the way the medical profession adopts new treatments or drugs, I can't help seeing the software field as dangerously close to quackery.

My conclusions are as follows:

(a) For software researchers: don't expect your research to be adopted without convincing evidence. Software practitioners are much more sophisticated and critical today than they were in the seventies. A new idea without evidence of its effectiveness is justly ignored by practitioners and scientists alike. Instead of arguing about the benefits of your ideas, demonstrate them.

(b) For practitioners: keep insisting on evidence from case studies, field studies, and experiments before adopting a new technology. In other words, trust the power of observation. Mistrust the claims of researchers, consultants, and salesmen unless they are backed up by data. It is encouraging to see that the number of empirical papers, and of papers with an empirical component, has increased dramatically in this conference compared to earlier ones. I actually think that the software research community is getting on the right track, and that knowledge about software methods and tools is going to improve significantly in both quality and quantity.

-Walter Tichy, University of Karlsruhe, Germany