A series of presentations by renowned experts on current and future challenges in Software Engineering.
Degradation, Measurement, and Other Challenges in Security and Privacy
Security and privacy issues shine an intense spotlight on areas that are problematic for software engineering in general. One major problem is the lack of graceful degradation; today, it often appears that virtually any security or privacy vulnerability can lead to catastrophic consequences. Obviously, a system that degrades more gracefully, so that only a small amount of security or privacy is lost, is preferable; while there have been several successful applications of this principle, it is still hard to generalize. Another significant challenge is the measurement of security and privacy; even post-release statistics such as the number of vulnerabilities, patches, or exploits are difficult to interpret meaningfully, and software vendors really need metrics that can be computed much earlier in the engineering process. In both of these areas, approaches that attack subsets of the overall problem show promise, but significant work is needed at both the engineering and research levels.
Biography: Jon Pincus is a Senior Researcher within the Microsoft Research Programmers' Productivity Research Center, currently focusing on security and privacy. In the past, he has developed and deployed program-analysis-based tools such as PREfix and PREfast, was founder and CTO of Intrinsa (acquired by Microsoft along with PREfix and the rest of the company's assets), and regularly impresses people with his Halloween costumes. Backtracking a little: after receiving the appropriate degrees from the appropriate institutions, he worked in design automation (placement and routing for ICs, CAD frameworks) at GE Calma and EDA Systems. Spending a year based in Munich as an application engineer gave him a new appreciation for the importance of software quality; being nine time zones away from the home office and trying to communicate in your non-native language can change your perspective. After EDA Systems was acquired by Digital Equipment Corporation, he wound up as the Technical Director of Document Management, but lost his voice mail privileges in the process.
A new wave of Web Services systems is soon to be rolled out, and when this happens, the software engineering community will experience a sea change. For the first time, we are building distributed, Web-based applications that truly interoperate and that are likely to play very sensitive roles for the organizations that deploy them. Yet the Web Services architecture inherits a legacy from the Internet: best-effort message delivery, inconsistent and unreliable failure detection, ad-hoc end-to-end fault-tolerance mechanisms, and a pervasive lack of information about the state of the network. Internet applications routinely operate in the dark with respect to even the most elementary properties of their environment! In this talk, we'll ask whether it might not be possible to "light up the dark", enabling applications on the client side of a Web Services system to share state, to sense the global state of the system and its data centers, and to use this information to greatly improve availability, reliability, self-configuration, and management. The Astrolabe system, a novel peer-to-peer technology, could help open the door to a new way of thinking about the client side, and in so doing contribute to a radical reduction in the cost of ownership of large Web Services applications and to big advances in autonomic behavior. Astrolabe is part of QuickSilver, a platform tackling many aspects of Web Services availability and "autonomic behavior."
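The core idea behind systems like Astrolabe, peer-to-peer aggregation of system state through repeated pairwise exchanges, can be illustrated with a toy sketch. This is my own illustration under simplified assumptions (a deterministic exchange schedule rather than randomized gossip, and a single "minimum load" attribute), not Astrolabe's actual protocol or API:

```python
import math

def gossip_min(loads):
    """Toy sketch of gossip-style aggregation: each node starts knowing only
    its own load, and after ceil(log2(n)) rounds of pairwise exchanges every
    node knows the global minimum. Astrolabe aggregates richer attributes
    (min/max/avg, SQL-like summaries) over a hierarchy; this captures only
    the flavor of epidemic-style state spreading."""
    n = len(loads)
    known = list(loads)  # each node's current estimate of the global min
    for r in range(math.ceil(math.log2(n))) if n > 1 else []:
        prev = known[:]           # snapshot so all exchanges in a round
        step = 2 ** r             # use the same view of the system
        # node i exchanges with node (i + step) mod n; the interval of nodes
        # whose state each node has "seen" doubles every round
        known = [min(prev[i], prev[(i + step) % n]) for i in range(n)]
    return known
```

After the loop, every entry equals the global minimum, e.g. `gossip_min([7, 3, 9, 1, 5, 8])` yields `[1, 1, 1, 1, 1, 1]`. The doubling schedule is why O(log n) rounds suffice, which is also the intuition behind gossip protocols' fast convergence.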
Biography: Professor Birman has worked in the area of software reliability and distributed fault-tolerance techniques since joining Cornell in 1982, after earning his PhD at U.C. Berkeley. He is best known for having developed the Isis Toolkit, the technology used to construct the communications systems in such mission-critical settings as the French Air Traffic Control System, the Swiss and New York Stock Exchanges, the US Naval AEGIS Warship, and the Florida Electric Power and Light Corporation. He went on to found two companies outside of Cornell, while publishing a series of papers and books on reliability in his role as a Cornell faculty member; his most recent book, "Reliable Distributed Systems", is about to be published by Springer Verlag. He also headed the Horus, Ensemble, and most recently the Spinglass projects at Cornell; Astrolabe, the system about which he will speak, was developed as part of the Spinglass effort. Professor Birman is a Fellow of the ACM, and was Editor in Chief of ACM Transactions on Computing Systems from 1993 to 1998.
Small and Large: Distributed Systems and Global Communities
Grid technologies seek to enable collaborative problem solving and resource sharing within distributed, multi-organizational "virtual organizations." Two characteristics of Grid environments make the engineering of systems and applications particularly challenging. First, we face the familiar difficulties that arise when developing software that must provide reliability, performance, and security in an environment that may be heterogeneous, unpredictable, unreliable, and hostile; second, we must allow this software to be deployed, operated, and evolved in an environment characterized by multiple participants with different and perhaps conflicting views on system function and design. I introduce work that is being done to address these challenges. I speak first to "Grids in the small," and describe the work being performed within the Open Grid Services Architecture framework to define a standard set of Grid protocols layered on Web Services. I explain the relationship of OGSA to Web Services, the evolution of OGSA to better exploit emerging Web Services standards, the requirements that the Grid is placing on those emerging Web Services standards, and the landscape of protocols that are being defined on top of Web Services to meet Grid requirements. I then turn to problems associated with "Grids in the large" and discuss how Grid technologies can evolve to address the challenges associated with community development of complex software systems.
Biography: Ian Foster is Associate Director of the Mathematics and Computer Science Division of Argonne National Laboratory and Professor of Computer Science at the University of Chicago. His research interests are in distributed and parallel computing and computational science, and he has published six books and over 200 articles and technical reports on these and related topics. He is an internationally recognized researcher and leader in Grid computing, a term that denotes technologies that enable the sharing and integration of resources and services across distributed, heterogeneous, dynamic "virtual organizations." The Distributed Systems Lab that he heads at Argonne and Chicago is home to the Globus Toolkit, the open source software that has emerged as the de facto standard for Grid computing in both e-business and e-science. He also leads projects applying Grid technologies to scientific and engineering problems, in such fields as high-energy physics, climate data analysis, and earthquake engineering. Foster is a fellow of the American Association for the Advancement of Science and the British Computer Society. His awards include the British Computer Society's award for technical innovation, the Global Information Infrastructure (GII) Next Generation award, the British Computer Society's Lovelace Medal, and R&D Magazine's Innovator of the Year.
The Internet: Changing the Engines in Mid-Flight
The Internet has grown very rapidly in the last decade; this phenomenal growth continues today despite the bursting of the dot-com bubble. At the same time, greater reliability and performance are being demanded, as the Internet becomes mission-critical for many businesses. In this talk I will discuss the hard research problems currently facing the Internet, and speculate on some possible solutions. Key to many of these problems is the difficulty of evolving a system of 200 million machines whilst simultaneously keeping it running. The analogy of attempting to change the engines on an aircraft in mid-flight is unfortunately an apt one. It is worth noting that Software Engineering researchers and Networking researchers rarely pay much attention to each other's problems and potential solutions. In particular, networking protocol design often lacks the rigour that good software engineering methods could bring to the process. At the same time, much distributed systems middleware attempts to abstract away the fundamental limitations of the network. In passing, this talk will touch on why this may be and what we can do about it.
Biography: Mark Handley received his BSc in Computer Science with Electronic Engineering from University College London in 1988 and his PhD from UCL in 1997. After two years working for the University of Southern California's Information Sciences Institute, in 1999 he moved to Berkeley as one of the founders of the new AT&T Center for Internet Research. Professor Handley's research is in the field of networking, and especially concerns the design and analysis of Internet Protocols. He is very active in the IETF, which is the standards body responsible for specifying how Internet systems should work. He currently serves on the Internet Architecture Board, which oversees the whole Internet standards process. In July 2003, he returned to UCL to be Professor of Networked Systems, and to head the Networks Research Group in the Department of Computer Science.