Conference Program



> Technical Program
        > Research Papers
        > Experience Reports
        > Education & Training Reports
        > Research Demonstrations
> Keynote Speakers
> Panels
> Invited Talks
        > State of the Art
        > Extending the Discipline
        > State of the Practice
        > Most Influential Paper of ICSE-17
> Plenary Sessions
> Doctoral Symposium
> New Software Engineering Faculty Symposium
> Special Events
        > Foundations of Empirical Software Engineering - The Legacy of Victor R. Basili
        > Midwest Software Engineering Consortium
        > Information Technology Summit
> Meetings
> Meals
> Social Events



STATE OF THE ART

Bev Littlewood (City University London)
Dependability Assessment of Software-based Systems: State of the Art
18 May @ 11:00 AM

St. Louis Ballroom A [Floor Plan]
Session Chair: Jeff Kramer
[Slides]

Biography: Bev Littlewood co-founded the Centre for Software Reliability and served as its Director from 1983 to 2003. He is Professor of Software Engineering at City University London. Bev has worked for many years on problems of modeling and evaluating the dependability of software-based systems; he has published many papers in international journals and conference proceedings and has edited several books. He is a member of the UK Nuclear Safety Advisory Committee, of IFIP Working Group 10.4 on Reliable Computing and Fault Tolerance, and of the BCS Safety-Critical Systems Task Force. He is a Fellow of the Royal Statistical Society.
Abstract: Everyone knows that it is important to make systems dependable. Indeed, much of software engineering can be seen to be a means to this end (albeit not always acknowledged as such). Unfortunately, these means of achieving dependability - reliability, safety, security - cannot be guaranteed to succeed, particularly for systems in which complex software plays a key role. In particular, claims for system 'perfection' are never believable. It is therefore necessary to have procedures for assessing, preferably quantitatively, what level of dependability has actually been achieved for a particular system. This turns out to be a hard problem.

In this talk I shall describe the progress that has been made in recent years in the quantitative assessment of modest levels of reliability for software-based systems, for example within a safety-case formalism. I shall identify deficiencies in our present capabilities, such as the assessment of socio-technical systems, the limits to the levels of dependability that can be claimed, and the assessment of operational security. I shall identify, and critically analyse, some of the proposed ways forward, such as the use of Bayesian belief networks (BBNs) and 'diversity'.
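A standard statistical argument, added here as a worked illustration rather than taken from the abstract, shows why even modest quantitative reliability claims are demanding. If a system survives $n$ independent demands with no failures, the upper $(1-\alpha)$ confidence bound $p_U$ on its probability of failure per demand satisfies

$(1 - p_U)^n = \alpha \quad\Longrightarrow\quad p_U = 1 - \alpha^{1/n} \approx \frac{\ln(1/\alpha)}{n}$

At 99% confidence ($\alpha = 0.01$), $p_U \approx 4.6/n$: supporting a claim of $p < 10^{-4}$ requires roughly 46,000 failure-free demands, while a claim of $10^{-9}$ would require several billion, which is why testing alone cannot support ultra-high dependability claims.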



Armando Fox (Stanford University)
Addressing Software Dependability with Statistical and Machine Learning Techniques
18 May @ 11:00 AM

St. Louis Ballroom A [Floor Plan]
Session Chair: Jeff Kramer
[Slides]

Biography: Armando Fox joined the Stanford faculty as an Assistant Professor in January 1999. He received his Ph.D. from UC Berkeley, where he worked with Professor Eric Brewer (co-founder of Inktomi Corp.) building research prototypes of today's clustered Internet services and showing how to use them to support mobile computing applications, including the world's first graphical Web browser for handheld computers. His research interests include system dependability and ubiquitous computing. Armando was listed among the "Scientific American 50" of 2003 for his work on Recovery-Oriented Computing.

Prof. Fox has received the Associated Students of Stanford University Teaching Award and the Tau Beta Pi Award for Excellence in Undergraduate Engineering Education, and has been named a Professor of the Year by the Stanford chapter of the Society of Women Engineers. He received a BSEE from M.I.T. and an MSEE from the University of Illinois, and worked as a CPU architect at Intel Corp. He is also an ACM member and a founder of ProxiNet (acquired by Pumatech in 1999), which commercialized thin client mobile computing technology he helped develop at UC Berkeley. He can be reached at fox@cs.stanford.edu.

Abstract: Our ability to design and deploy large complex systems is outpacing our ability to understand their behavior. How do we detect and recover from "heisenbugs", which account for up to 40% of failures in complex Internet systems, without extensive application-specific coding? Which users were affected, and for how long? How do we diagnose and correct problems caused by configuration errors or operator errors? Although these problems are posed at a high level of abstraction, all we can usually measure directly are low-level behaviors - analogous to driving a car while looking through a magnifying glass. Machine learning can bridge this gap using techniques that learn "baseline" models automatically or semi-automatically, allowing the characterization and monitoring of systems whose structure is not well understood a priori. In this talk I'll discuss initial successes and future challenges in using machine learning for failure detection and diagnosis, configuration troubleshooting, attribution (which low-level properties appear to be correlated with an observed high-level effect such as decreased performance), and failure forecasting.
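As a toy illustration of the "baseline" idea (a hypothetical sketch, not code from the talk), one might fit a simple statistical model to a low-level metric collected during normal operation and then flag monitoring windows that deviate sharply from it; the metric, thresholds, and data here are all invented:

import numpy as np

# Hypothetical "baseline" anomaly detection: learn the normal
# distribution of a low-level metric (e.g. request latency) during a
# known-good period, then flag windows that deviate from it.
rng = np.random.default_rng(0)

# Training data: latencies (ms) observed while the system is healthy.
healthy_latencies = rng.normal(loc=120.0, scale=15.0, size=5000)

# "Learn" the baseline: here simply the sample mean and std deviation.
mu, sigma = healthy_latencies.mean(), healthy_latencies.std()

def is_anomalous(window, z_threshold=4.0):
    """Flag a window of observations whose mean is far from baseline."""
    z = abs(window.mean() - mu) / (sigma / np.sqrt(len(window)))
    return z > z_threshold

# Live monitoring: a window drawn from a degraded system.
degraded = rng.normal(loc=180.0, scale=40.0, size=50)
print(is_anomalous(degraded))  # True: mean latency shifted well above baseline

A real system would use richer models than a single Gaussian, but the structure is the same: the baseline is learned from observation rather than specified a priori, which is what lets such monitors apply to systems whose internal structure is not well understood.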



Roy Want (Intel Corp.)
System Challenges for Ubiquitous and Pervasive Computing
18 May @ 2:00 PM

St. Louis Ballroom A & B [Floor Plan]
Session Chair: David Garlan
[Slides]

Biography: Roy Want is a Principal Engineer at Intel Research/CTG in Santa Clara, California, and leader of the Ubiquity Strategic Research Project (SRP). He is responsible for exploring long-term strategic research opportunities in the area of Ubiquitous & Pervasive Computing. His interests include proactive computing, wireless protocols, hardware design, embedded systems, distributed systems, automatic identification and micro-electromechanical systems (MEMS).

Want received his BA in computer science from Churchill College, Cambridge University, UK in 1983 and continued research at Cambridge into reliable distributed multimedia systems, earning a PhD in 1988. He joined Xerox PARC's Ubiquitous Computing program in 1991, where he managed the Embedded Systems group, and joined Intel in 2000. Want is the author or co-author of more than 40 publications in the areas of mobile and distributed systems, and holds over 50 patents in these areas. Contact information: Intel Corporation, 2200 Mission College Blvd, Santa Clara, CA 95052, USA; e-mail roy.want@intel.com.

Abstract: The terms ubiquitous computing and pervasive computing were coined at the beginning of the 1990s, by Xerox PARC and IBM respectively, and capture the realization that the focus of computing would shift from the PC to a more distributed, mobile, and embedded form. Furthermore, some researchers predicted that the true value of embedded computing would come from orchestrating the various computational components into a much richer and more adaptable system than had previously been possible.

Now, some 15 years on, we have made progress towards these aims. The hardware platforms encapsulate significant computational capability in a small volume, at low power and cost. However, system software capabilities have not advanced at a pace that can take full advantage of this infrastructure. This talk will describe where software and hardware have combined to enable ubiquitous computing, where these systems have limitations, and where the biggest challenges still remain.



Jeff Kephart (IBM Thomas J. Watson Research Center)
Research Challenges of Autonomic Computing
18 May @ 2:00 PM

St. Louis Ballroom A & B [Floor Plan]
Session Chair: David Garlan
[Slides]

Biography: Jeffrey O. Kephart manages the Agents and Emergent Phenomena group at the IBM Thomas J. Watson Research Center, and shares responsibility for developing IBM's Autonomic Computing research strategy. He and his group focus on the application of analogies from biology and economics to massively distributed computing systems, particularly in the domains of autonomic computing, e-commerce, antivirus, and anti-spam technology.

Kephart's research efforts on digital immune systems and economic software agents have been publicized in publications such as The Wall Street Journal, The New York Times, Forbes, Wired, Harvard Business Review, IEEE Spectrum, and Scientific American. In 2004, he co-founded the International Conference on Autonomic Computing. Kephart received a BS from Princeton University and a PhD from Stanford University, both in electrical engineering.

Abstract: The increasing complexity of computing systems is beginning to overwhelm the ability of software developers and system administrators to design, evaluate, integrate, and manage these systems. Major software and system vendors such as IBM, HP, and Microsoft have concluded that the only viable long-term solution is to create computer systems that manage themselves.

Three years ago, IBM launched the autonomic computing initiative to meet the grand challenge of creating self-managing systems. Although much has already been achieved, it is clear that a worldwide collaboration among academia, IBM, and other industry partners will be required to fully realize the vision of autonomic computing. I will discuss several fundamental challenges in the areas of artificial intelligence and agents, performance modeling, optimization, architecture, policy, and human-computer interaction, and describe some of the initial steps that IBM and its partners in academia have taken to address those challenges.
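IBM's autonomic computing literature describes a self-managing element as a monitor-analyze-plan-execute (MAPE) control loop over a managed resource. The following is a minimal hypothetical sketch of that idea, not an implementation from the talk; the ServerPool resource, thresholds, and sensor readings are invented for illustration:

import random
import time

class ServerPool:
    """Hypothetical managed resource: a resizable pool of servers."""
    def __init__(self, size=2):
        self.size = size

    def utilization(self):
        # Stand-in for a real sensor: load spread over the pool.
        return random.uniform(0.2, 1.0) * (4 / self.size)

pool = ServerPool()

def monitor(resource):
    # Monitor: collect metrics from the managed resource.
    return {"utilization": resource.utilization()}

def analyze(metrics, high=0.8, low=0.3):
    # Analyze: compare observations against policy thresholds.
    if metrics["utilization"] > high:
        return "overloaded"
    if metrics["utilization"] < low:
        return "underused"
    return "healthy"

def plan(diagnosis):
    # Plan: choose a corrective action (grow, shrink, or do nothing).
    return {"overloaded": +1, "underused": -1}.get(diagnosis, 0)

def execute(resource, delta):
    # Execute: apply the planned change to the managed resource.
    resource.size = max(1, resource.size + delta)

# The autonomic control loop: no human administrator in the loop.
for _ in range(5):
    metrics = monitor(pool)
    diagnosis = analyze(metrics)
    execute(pool, plan(diagnosis))
    print(f"util={metrics['utilization']:.2f} -> {diagnosis}, pool size={pool.size}")
    time.sleep(0.1)

The research challenges Kephart lists map onto the pieces of this loop: learning and agents for the analyze step, performance modeling and optimization for planning, policy for the thresholds, and human-computer interaction for how administrators express goals to such a system.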