Thomas Zimmermann
Title of the Talk: Measuring Developer Productivity and the New Future of Work
Abstract: Developer productivity is about more than an individual's activity levels or the efficiency of the engineering systems, and it cannot be measured by a single metric or dimension. In this talk I will discuss how to use the SPACE framework to measure developer productivity across multiple dimensions and better understand productivity in practice. I will also discuss common myths around developer productivity and propose a collection of sample metrics to navigate around those pitfalls. Measuring developer productivity at Microsoft has allowed us to build new insights into the challenges that the shift to remote work has introduced for software engineers, and how to overcome many of those challenges moving forward into a new future of work.
Sr. Principal Researcher, Microsoft Research
Thomas Zimmermann is a Sr. Principal Researcher in the Productivity and Intelligence (P+I) and Software Analysis and Intelligence (SAINTes) groups at Microsoft Research. His professional interests are software engineering, data science, and recommender systems. He is best known for his research on systematic mining of version archives and bug databases to conduct empirical studies and to build tools to support developers and managers. At Microsoft, he uses both quantitative and qualitative methods to investigate and overcome software engineering challenges. His current work is on productivity of software developers and data scientists at Microsoft. In the past, he analyzed data from digital games, branch structures, and bug reports. He is Co-Editor in Chief for the Empirical Software Engineering journal. He is the Chair of ACM SIGSOFT, the Special Interest Group on Software Engineering. He is a Distinguished Member of the ACM and an IEEE Fellow for his "contributions to data science in software engineering, research and practice." His homepage is http://thomas-zimmermann.com. Follow him on Twitter @tomzimmermann.
Gail E. Kaiser
Title of the Talk: Why Isn't The Bug Apocalypse Coming Yet?
Abstract: Major news outlets have reported that a 'bug apocalypse' is coming, but unfortunately for people who depend on software directly or indirectly - which is pretty much everyone - they mean insects. We've been finding and fixing software bugs for 75 years, counting from Grace Hopper's moth, so why isn't a 'software bug apocalypse' coming yet, or already here?
There are many fine researchers in formal methods, programming languages, and other areas trying to eliminate software bugs at the source, the developers' mistakes, and I hope they are successful. In the meantime, I work in program analysis and software testing: trying to find bugs that fallible developers have created, so they can be fixed. But finding complicated bugs in modern software is very hard. I will talk about some challenges and work by myself and others in the community trying to meet those challenges.
Prof. Gail Kaiser
Department of Computer Science, Columbia University, USA.
Gail E. Kaiser is a Professor of Computer Science in the Computer Science Department at Columbia University. Prof. Kaiser conducts research in software engineering and security from a systems perspective, focusing on program analysis and software testing. In the 1980s and 1990s, Kaiser investigated semantics-focused extensions to language-based editors and process-oriented team software development environments, forerunners to today's IDEs and Continuous Integration, and in the late 1990s and early 2000s she investigated self-adaptation for the then-emerging cloud computing, particularly techniques for retrofitting legacy systems. Since then she has concentrated on testing and analysis, often working at the bytecode/binary level. Beginning with her sabbatical at Columbia's Center for Computational Learning Systems in 2005-2006, Kaiser and her former PhD student Chris Murphy were among the first to adapt software engineering testing techniques, particularly metamorphic testing, to finding bugs in machine learning software. In recent years her work in program analysis ranges across static and dynamic techniques, across source code and executable (bytecode/binaries) targets, and investigates AI4SE as well as SE4AI. Prof. Kaiser received her PhD from CMU and her ScB from MIT.
Aditya Kanade
Title of the Talk: Automating Software Engineering with Machine Learning
Abstract: Software plays a crucial role in our everyday lives. The scarcity of skilled software engineers has become a bottleneck in delivering better software at scale. Can we automate software engineering to help improve developer productivity and software quality? Can we take advantage of massive codebases to learn about building correct and scalable software?
In this talk, Aditya will present some recent advances in automated software engineering using machine learning. Along the way, he will relate the data-driven techniques to traditional, algorithmic program analysis techniques. He will discuss representative deep learning methods to analyze and synthesize source code. Even though we are witnessing exciting new advances in machine learning for software engineering, he will reflect on what challenges remain and the way forward.
Prof. Aditya Kanade
Department of Computer Science and Automation, Indian Institute of Science, India.
Aditya Kanade is an Associate Professor at the Indian Institute of Science. His research interests span machine learning, software engineering, and automated reasoning. He received a best paper award at EMSOFT 2008, and faculty awards from IBM, Microsoft Research India, and the Mozilla Foundation. He has been a Visiting Researcher at General Motors, Microsoft Research, and most recently, at Google Brain. He is particularly excited about developing machine learning techniques to automate software engineering, and about designing trustworthy and deployable machine learning.
Premkumar Devanbu
Title of the Talk: Naturalness and Artifice of Code: Exploiting the Bi-Modality
Abstract: While natural languages are rich in vocabulary and grammatical flexibility, most human language use is mundane and repetitive. This repetitiveness in natural language has enabled great advances in statistical NLP methods.
In our lab, we discovered (almost a decade ago) that, despite the considerable power and flexibility of programming languages, large software corpora are actually even more repetitive than NL corpora. We also showed that this "naturalness" of code could be captured in language models, and exploited within software tools. This line of work has prospered, and been turbo-charged by the tremendous capacity and design flexibility of deep learning models. Numerous other creative and interesting applications of naturalness have ensued, from colleagues around the world, and several industrial applications have emerged. Recently, we have been studying the consequences and opportunities arising from the observation that software is bimodal: it is written not only to be run on machines, but also to be read by humans; this makes software amenable to both algorithmic analysis and statistical prediction. Bimodality allows new ways of training machine learning models, new ways of designing analysis algorithms, and new ways to understand the practice of programming. In this talk, I will begin with a backgrounder on "naturalness" studies, and then discuss the promise of bimodality.
Prof. Premkumar Devanbu
Distinguished Professor of Computer Science, University of California, Davis
Premkumar Thomas Devanbu (பிரேம் தேவன்பு) grew up in various small towns around Tamil Nadu; he graduated from IIT Madras, and got his PhD at Rutgers University. After working for 20 years at Bell Labs and various offshoots in New Jersey, he joined University of California, Davis, where he is now a Distinguished Professor of Computer Science. He is a winner of the 2021 SIGSOFT Outstanding Research Award, and multiple distinguished, test-of-time and most-influential paper awards: at ICSE, ASE, MSR, ESEC/FSE and ISSRE. He serves on the Editorial Board of CACM. He is an ACM Fellow.
Ahmed E. Hassan
Title of the Talk: Challenges for the Industrial Adoption of AIOps Innovations
Abstract: Over the past two decades, my team has worked extensively on improving the quality of ultra-large-scale software systems. This talk discusses several AIOps innovations that we developed to cope with the enormous complexities of such systems while highlighting the key challenges that we faced to ensure the industrial adoption of such innovations. In particular, I will emphasize that focusing on top-performing AI models is not sufficient. Instead, AIOps solutions must be trustable, interpretable, scalable, maintainable, and evaluated in context.
Prof. Ahmed E. Hassan
School of Computing at Queen's University, Canada
Ahmed E. Hassan is an IEEE Fellow, an ACM SIGSOFT Influential Educator, an IEEE TCSE Distinguished Educator, an NSERC Steacie Fellow, and a Canada Research Chair (CRC) in Software Analytics at the School of Computing at Queen's University, Canada. His research interests include mining software repositories, empirical software engineering, load testing, and log mining. Hassan spearheaded the creation of the Mining Software Repositories (MSR) conference and its research community. He serves or has served on the editorial boards of IEEE Transactions on Software Engineering, the Springer Journal of Empirical Software Engineering, and PeerJ Computer Science.