ICSE/FSE Invited Talks
Becoming Agile: Agile Transitions in Practice
Agile adoption has typically been understood as a one-off organisational process involving a staged selection of Agile development practices. This view does not account for differences in the pace and effectiveness of individual teams transitioning to Agile development. Based on a Grounded Theory study of 31 Agile practitioners drawn from 18 teams across five countries, in this talk Dr Hoda will present the ‘theory of becoming Agile’ as a network of ongoing transitions across five dimensions – software development practices, team practices, management approach, reflective practices and culture.
The unique position of a software team within this network, and its pace of progress along the five dimensions, explains why individual Agile teams present distinct manifestations of agility and unique transition experiences. The theory expands the current understanding of agility into a holistic and complex network of ongoing multidimensional transitions and will help software teams, their managers and organisations better navigate their individual Agile journeys.
Dr. Rashina Hoda is a Senior Lecturer in Software Engineering and the Founder of the SEPTA research group at the University of Auckland, New Zealand. Her research focuses on human and social aspects of software engineering, including agile teams, and on human-computer interaction, including serious game design. She has published 60+ research papers in journals and conferences including the IEEE Transactions on Software Engineering, IEEE Transactions on Education, Empirical Software Engineering, Journal of Systems and Software, Information and Software Technology and more. She recently received a Distinguished Paper Award at the International Conference on Software Engineering (ICSE 2017). She is an Associate Editor for the Journal of Systems and Software, Co-chair of the Research Workshops at XP 2018, and Chair of the Impact-to-Industry track at EASE 2018. More at http://www.rashina.com
Optimizing Test Placement for Module-Level Regression Testing
Modern build systems help increase developer productivity by performing incremental building and testing. These build systems view a software project as a group of interdependent modules and perform regression test selection at the module level. However, many large software projects have imprecise dependency graphs that lead to wasteful test executions. If a test belongs to a module that has more dependencies than the actual dependencies of the test, then it is executed unnecessarily whenever a code change impacts those additional dependencies. In this paper, we formulate the problem of wasteful test executions due to suboptimal placement of tests in modules. We propose a greedy algorithm to reduce the number of test executions by suggesting test movements while considering historical build information and actual dependencies of tests. We have implemented our technique, called TestOptimizer, on top of CloudBuild, the build system developed within Microsoft over the last few years. We have evaluated the technique on five large proprietary projects. Our results show that the suggested test movements can lead to a reduction of 21.66 million test executions (17.09%) across all our subject projects. We received encouraging feedback from the developers of these projects; they accepted and intend to implement ≈80% of our reported suggestions.
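The abstract does not spell out the greedy algorithm, but the core idea – place each test in a module whose dependency set covers the test's actual dependencies while triggering the fewest unnecessary executions, weighted by historical build activity – can be sketched roughly as follows. The module names, dependency sets, and build counts below are invented for illustration and the cost model is deliberately simplified; this is not TestOptimizer's actual implementation.

```python
# Hypothetical sketch of a greedy test-placement heuristic in the spirit of
# TestOptimizer. All module/dependency names and counts are made up.

def wasted_runs(test_deps, module_deps, build_counts):
    # Executions triggered only by module dependencies the test doesn't need:
    # every historical change to a surplus dependency re-runs the test in vain.
    return sum(build_counts.get(d, 0) for d in module_deps - test_deps)

def suggest_placement(test_deps, modules, build_counts):
    # Candidate modules must cover all of the test's actual dependencies;
    # greedily pick the one with the fewest wasted historical executions.
    candidates = {m: deps for m, deps in modules.items() if test_deps <= deps}
    return min(candidates,
               key=lambda m: wasted_runs(test_deps, candidates[m], build_counts))

# Illustrative (invented) module dependency closures and per-module change counts.
modules = {
    "Core": {"Core"},
    "Web":  {"Core", "Web", "Net"},
    "All":  {"Core", "Web", "Net", "UI"},
}
build_counts = {"Core": 10, "Web": 40, "Net": 25, "UI": 80}

test_deps = {"Core", "Net"}  # the test actually exercises only these modules
print(suggest_placement(test_deps, modules, build_counts))  # prints "Web"
```

Here the test placed in "All" would be re-run on every change to "UI" (80 historical builds) despite never touching it, so moving it to "Web" is the cheaper placement. The real system additionally mines actual test dependencies from builds and batches movement suggestions, which this toy omits.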
Dr. Shuvendu Lahiri is a Principal Researcher at Microsoft Research, Redmond, WA, USA. His research interests lie in formal and rigorous approaches to various software engineering tasks including verification, testing and code review of systems, primarily production software. His current research interests are in differential program verification and analysis, angelic verification and runtime verification. He has led the development of tools (HAVOC, SymDiff, AV, STORM, Randoop) that have been shipped and used internally at Microsoft, finding hundreds of bugs in mature products that have since been fixed. Earlier, he worked on decision procedures, SMT solvers, logics and verifiers for heap-manipulating programs, invariant generation and abstraction techniques for proving distributed systems and microprocessors, and test generation for object-oriented software. He received a B.Tech in Computer Science from the Indian Institute of Technology, Kharagpur, India and a PhD in Computer Engineering from Carnegie Mellon University. He was the recipient of the ACM SIGDA Outstanding PhD Dissertation Award (2005), a Best Paper Award at the Runtime Verification Conference (2016), a Distinguished Paper Award at ICSE (2017) and a 10-year ICSE Most Influential Paper Award (2017).
Factors Influencing Code Review Processes in Industry
Code review is known to be an efficient quality assurance technique. Many software companies today use it, usually with a process similar to the patch review process in open source software development. However, there is still a large fraction of companies performing almost no code reviews at all. And the companies that do perform code reviews vary considerably in the details of their processes. For researchers trying to improve the use of code reviews in industry, it is important to know the reasons for these process variations. We have performed a grounded theory study to clarify process variations and their rationales. The study is based on interviews with software development professionals from 19 companies. These interviews provided insights into the reasons and influencing factors behind the adoption or non-adoption of code reviews as a whole, as well as for different process variations. We have condensed these findings into several hypotheses and a classification of the influencing factors. Based on these results, we performed a survey among 240 commercial software development teams. Our results show the importance of cultural and social issues for review adoption. They trace many process variations to differences in development context, whereas other hypotheses, for example on the influence of desired review effects, could not be supported.
Tobias Baum is a researcher working in the area of code reviews at Leibniz University Hannover, Germany. His main interest is in Cognitive Support Code Review Tools, but also in the use of code reviews in general and in other aspects of software development in SMEs. In addition, he is part of the management board at SET GmbH, a Hanover-based software company, and has a background of over a decade of working as a software developer.
Invited Talk (Test of Time)
The ISEC Test of Time award is given for influencing SE practice: it recognises papers that proposed methods, techniques, frameworks, tools, etc. that were used by many in practice, with results reported in academic and/or industry journals, conferences or forums. Papers from the three ISEC conferences held 9-11 years before the ISEC conference at which the awards are presented are eligible. For the 2018 edition, papers from ISEC 2008 and 2009 were considered.
Girish Maskeri Rama
Mining business topics in source code using Latent Dirichlet Allocation – A retrospective
The application of Machine Learning (ML) to software engineering problems seems to have come of age, as evidenced by the plethora of such papers in the recent past. This talk will reflect on our humble efforts in that direction around 10 years ago – specifically, the circumstances and ideas that led to the work reported in the paper. We chart our journey starting with the problems of software modularization and program comprehension, which were (and still are?) the bane of large software maintenance projects. We then describe the various attempts made at addressing these problems, culminating in the application of Latent Dirichlet Allocation (LDA) to mine business topics in source code, thereby combining the hitherto disparate research areas of ML and program comprehension. We will briefly summarize the evolution of that line of research and discuss how ML is still relevant and holds promise for addressing current issues in software engineering.
Girish Maskeri Rama is a Lead Principal (Education and Research) at Infosys. He has nearly 17 years of experience in applied research and product development. His current research focus is on applying program analysis and ML/AI to the problems of assessment, accelerated learning and mastery. Previously, he worked on software metrics and measurement, software refactoring, program comprehension, and model-driven software development. Girish has published extensively in journals and conferences including the IEEE Transactions on Software Engineering, Journal of Systems and Software, ICSE, ASE and more. Girish received his Master's in Computer Science from the University of York, UK, and recently submitted his PhD thesis at IISc Bangalore.