St. Louis Ballroom D [Floor Plan]
Session Chair: Alistair Sutcliffe
> Use of Relative Code Churn Measures to Predict System Defect Density
Nachiappan Nagappan and Thomas Ball
> Main Effects Screening: A Distributed Continuous Quality Assurance Process for Monitoring Performance Degradation in Evolving Software Systems
Cemal Yilmaz, Arvind Krishna, Atif Memon, Adam Porter, Douglas Schmidt, and Aniruddha Gokhale
> Effort Estimation of Use Cases for Incremental Large-Scale Software Development
Parastoo Mohagheghi, Bente Anda, and Reidar Conradi
Security
19 May @ 2:00 PM
St. Louis Ballroom E [Floor Plan]
Session Chair: Constance Heitmeyer
> Automatic Discovery of API-Level Exploits
Vinod Ganapathy, Sanjit Seshia, Somesh Jha, Thomas Reps, and Randal Bryant
> Sound Methods and Effective Tools for Model-based Security Engineering with UML
Jan Jürjens
> Improving Software Security with a C Pointer Analysis
Dzintars Avots, Michael Dalton, V. Benjamin Livshits, and Monica Lam
Requirements & Testing
19 May @ 2:30 PM
St. Louis Ballroom C [Floor Plan]
Session Chair: Stefania Gnesi
> Developing Use Cases and Scenarios in the Requirements Process
Neil Maiden and Suzanne Robertson
> Observations and Lessons Learned from Automated Testing
Stefan Berner, Roland Weber, and Rudolf K. Keller
Michael Twidale (U. of Illinois at Urbana-Champaign)
Silver Bullet or Fool's Gold: Supporting Usability in Open Source Software Development
19 May @ 2:00 PM
St. Louis Ballroom A & B [Floor Plan]
Session Chair: Hausi Müller
Biography: Michael Twidale is an Associate Professor in the Graduate School of Library and Information Science at the University of Illinois at Urbana-Champaign. Before that he was a faculty member of the Computing Department at Lancaster University, UK. His research interests include computer supported cooperative work, computer supported collaborative learning, user interface design and evaluation, information visualization, museum informatics, how people cope with computers, scenario-based design, and the application of ethnographic methods to computer systems design and evaluation. All of these involve the use of interdisciplinary techniques to better understand the needs of end users and their difficulties with existing computer applications, as part of the process of designing more effective systems. Current projects include over-the-shoulder learning, an investigation into collaborative techniques for improving data quality in databases, and the usability of open source software.
Abstract: At first glance it can look like Open Source Software development violates many, if not all, of the precepts of decades of careful research and teaching in Software Engineering. One could take a classic SE textbook and compare the activities elaborated and advocated in its various chapters with what is actually done in plain sight in the public logs of an OSS project on, say, SourceForge. For a Professor of Software Engineering this might make for rather depressing reading. Are the principles of SE being rendered obsolete? Has OSS really discovered Brooks' Silver Bullet? Or is it just a flash in the pan, or Fool's Gold?
In this talk I will look mainly at one aspect of Open Source development: the 'problem' of creating usable interfaces, particularly for non-technical end-users. Any approach faces the challenge of coordinating distributed, collaborative interface analysis and design, given that in conventional software development this is usually done in small teams and almost always face to face. Indeed, the methods in any HCI text simply assume same-time, same-place work and do not map to distributed work, let alone to the looser mechanisms of OSS development. Instead, what is needed is a form of participatory usability involving the coordination of end users and developers in a constantly evolving redesign process.
Peter Ayton (City University, London)
How Software Can Help or Hinder Human Decision Making (and vice versa)
19 May @ 2:00 PM
St. Louis Ballroom A & B [Floor Plan]
Session Chair: Hausi Müller
Biography: Peter Ayton is a Professor of Psychology in the Department of Psychology at City University, London, which he joined in 1992. He holds a PhD in Psychology from University College London (1988). His research has been concerned with judgmental forecasting, human judgement of uncertainty, and human choice. Applied research on decision making has been a particular interest, and he has collaborated on multidisciplinary research projects funded to investigate expert reasoning about toxicological risks, public perceptions of food risk, convicted prisoners' perceptions of recidivism risks, and software reliability. He was a contributing author to the 2001 Assessment Report of the Intergovernmental Panel on Climate Change. He has published numerous papers in international journals and is a member of the International Institute of Forecasters, the Society for Judgment and Decision Making, the European Association for Decision Making, and the Experimental Psychology Society.
Abstract: Developments in computing offer experts in many fields specialised support for decision making under uncertainty. However, the impact of these technologies remains controversial. In particular, it is not clear how advice of variable quality from a computer may affect human decision makers.
Here I review research showing strikingly diverse effects of computer support on expert decision making. Decision support can systematically improve or damage the performance of decision makers in subtle ways, depending on the decision maker's skills, variation in the difficulty of individual decisions, and the reliability of advice from the support tool.
In clinical trials, decision support technologies are often assessed in terms of their average effects. However, this methodology overlooks the possibility of differential effects on decisions of varying difficulty, on decision makers of varying competence, of computer advice of varying accuracy, and of possible interactions among these variables. Research that has teased apart aggregated clinical trial data to investigate these possibilities has found that computer support was less useful for, and sometimes hindered, professional experts who were relatively good at difficult decisions without support; at the same time, the same computer support tool helped those experts who were less good at relatively easy decisions without support. Moreover, inappropriate advice from the support tool could bias decision makers' decisions and, depending on the type of case, predictably improve or harm them.