|1||Lov Kumar, BITS Pilani, Hyderabad||An Empirical Framework to Investigate the Impact of Bug Fixing on Internal Quality Attributes||90m|
|Bug fixing is the process of correcting defects by changing the design or the logic of the software. Early insight into the bug-fixing process helps to improve software quality and to reduce the cost required to fix these bugs. In this session, we will present the impact of bug-fixing operations on four internal quality attributes: Complexity, Cohesion, Inheritance, and Coupling. We will also introduce the basic use of various artificial intelligence (AI) techniques and feature selection (FS) methods for Android malware prediction. The focus of this session is on investigating whether prediction models built from source code metrics can predict changes in internal quality attributes. In particular, we will focus on four important concepts: (1) a framework to extract important features; (2) a framework to validate the source code metrics and identify a suitable subset, with the aim of discarding irrelevant features and improving the performance of the prediction model; (3) the application of different machine learning algorithms to build models for predicting the bug-fixing process, i.e., how a bug will be fixed by changing the design or the logic of the software; and (4) a framework to evaluate the effectiveness of the developed models. In addition to the basic introduction and motivation, we will discuss open research problems, important literature, the proposed approach, experimental results, and future directions.|
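The metrics-to-prediction pipeline described above can be sketched in miniature. This is not the speaker's framework: the metric names (WMC, LCOM, DIT, CBO are standard complexity/cohesion/inheritance/coupling metrics), the toy data, and the correlation threshold are all illustrative assumptions; it only shows the shape of "rank source code metrics, discard irrelevant ones".

```python
# Hypothetical sketch: rank source-code metric deltas by correlation with
# the outcome of a bug fix, then keep only the informative metrics.
from statistics import mean

def pearson(xs, ys):
    # Plain Pearson correlation; returns 0.0 for a constant feature.
    mx, my = mean(xs), mean(ys)
    num = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    den = (sum((x - mx) ** 2 for x in xs) * sum((y - my) ** 2 for y in ys)) ** 0.5
    return num / den if den else 0.0

# One row per bug-fixing commit; values are metric deltas (toy data).
features = {
    "WMC":  [2, 0, 3, 1, 0, 4],
    "LCOM": [1, 1, 0, 2, 0, 3],
    "DIT":  [0, 0, 1, 0, 0, 0],
    "CBO":  [1, 0, 2, 1, 0, 2],
}
# Label: 1 if the fix degraded the quality attribute, 0 otherwise.
labels = [1, 0, 1, 0, 0, 1]

# Feature selection: keep metrics whose |correlation| clears a threshold.
ranked = sorted(features, key=lambda m: abs(pearson(features[m], labels)), reverse=True)
selected = [m for m in ranked if abs(pearson(features[m], labels)) > 0.3]
print("selected metrics:", selected)
```

In a real study the selected metrics would then feed a classifier; here the point is only that irrelevant features (LCOM in this toy data) are filtered out before modeling.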
|2||Janardan Misra & Nisha Ramachandra, Accenture Labs||Reliability Analysis of Machine Learning based Data-Driven Software||45m|
|Within a span of a decade or so, software design and development has witnessed a significant transition from a primarily code-driven process to a data-driven process applying various machine learning (ML) techniques. As the design of such ML-based data-driven systems becomes relatively less code-intensive, assessing the reliability of the final software product to be deployed in practice is becoming increasingly hard. In this tech briefing, we aim to present the key challenges in assessing the expected reliability of ML-based software during its development as well as during deployment. Next, we will focus on the main approaches to resolving these challenges that have recently been proposed in the literature, their effectiveness, and their key limitations, and close with a discussion of emerging directions with the potential to address these limitations, including some of our own work in that direction.|
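One concrete reliability signal for a deployed ML system, of the kind the challenges above concern, is input drift. The sketch below is not from the talk; the z-score rule, threshold, and data are illustrative assumptions showing one minimal deployment-time check.

```python
# Illustrative sketch: flag when live feature values drift away from the
# distribution the model was trained on, using a simple z-score on means.
from statistics import mean, stdev

def drift_alarm(train_vals, live_vals, z_threshold=3.0):
    # Compare the live mean against the training mean, in units of the
    # training standard deviation (sample stdev).
    mu, sigma = mean(train_vals), stdev(train_vals)
    if sigma == 0:
        return False  # constant training feature: this check is uninformative
    z = abs(mean(live_vals) - mu) / sigma
    return z > z_threshold

train = [10, 11, 9, 10, 12, 10, 11, 9]
print(drift_alarm(train, [10, 11, 10]))  # inputs similar to training
print(drift_alarm(train, [40, 42, 41]))  # inputs far from training
```

Real systems monitor many such signals (distribution distances, confidence histograms, outcome feedback); the point is that reliability assessment continues after deployment, not only during development.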
|3||Atri Mandal & Shivali Agarwal, IBM Research||AI Application Lifecycle Management: A Software Engineering Perspective||45m|
|The IT support services industry is going through a major transformation, with AI becoming commonplace. There has been a lot of research effort in infusing AI to automate every human touchpoint in the IT support process, such as ticket creation and dispatch, incident resolution and monitoring, and chatbot QA. Although ongoing research in this area has claimed considerable success with AI-based automation, there are, as yet, no clear guidelines on how to deploy such AI applications at scale. Most of these automation systems use complex machine learning models requiring extensive computational resources and infrastructure, and as such are often not suitable for widespread adoption in the services industry, where KPI metrics and other practical constraints have to be kept in mind.|
|4||Sonu Mehta, Microsoft Research||Using Data to Build Better Services||45m|
|There has been a fundamental shift in software engineering and development in the past few years, specifically the transition from boxed products to cloud services. Large-scale services experience extremely frequent changes to code and configuration, and depend on Continuous Integration / Continuous Deployment (CI/CD) processes to maintain their agility and code quality. As part of this process, they collect huge amounts of data in the form of telemetry related to commits, pull requests, bugs, bug reports, reviewers, authors, etc. How do we use this data to improve development, diagnostics, and infrastructure design? Project Sankie infuses data-driven techniques into the engineering processes, development environments, and software lifecycles of large services. The main goal of the project is to build new, and improve existing, machine learning models for solving various developer-productivity and infrastructure problems. In this session, I will focus on two such models (Rex and WhoDo) that help improve developer productivity. These two models are deployed at scale within Microsoft and are being used by thousands of developers every day. I will also talk about some of the important lessons learned while enabling an ML model at scale that can work for thousands of repositories with different characteristics. The session is intended for people with a background in Software Engineering, Applied Machine Learning, or both. It may be of interest to anybody who is aware of the challenges of the large-services world or wants to apply ML to different domains.|
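To make the "mine telemetry to improve development" idea concrete, here is a toy sketch in the spirit of reviewer recommendation. It is emphatically not Microsoft's WhoDo model: the scoring (count how often a candidate previously touched the changed files) and all data are invented for illustration.

```python
# Hypothetical WhoDo-style sketch: recommend reviewers for a pull request
# by counting each candidate's past commits to the files it changes.
from collections import Counter

# (file, author) pairs harvested from past commit telemetry (toy data).
history = [
    ("billing/invoice.py", "asha"),
    ("billing/invoice.py", "ravi"),
    ("billing/invoice.py", "asha"),
    ("auth/login.py", "mei"),
    ("auth/login.py", "ravi"),
]

def recommend_reviewers(changed_files, history, k=2):
    # Score: number of past touches on any file changed by the PR.
    scores = Counter()
    for f, author in history:
        if f in changed_files:
            scores[author] += 1
    return [author for author, _ in scores.most_common(k)]

print(recommend_reviewers({"billing/invoice.py"}, history))
```

A production system would weight recency, ownership, and workload balance across thousands of repositories; this sketch only shows why commit telemetry is the natural input for such a model.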
|5||Monika Gupta & Hagen Volzer, IBM Research||Analyzing Software Repositories using Process Mining to Identify Automation and Improvement Opportunities||90m|
|A lot of data is generated during the software development and maintenance process and is stored in repositories such as GitHub, ticketing systems (such as ServiceNow and Bugzilla), conversation tools such as Slack, and logging systems (such as LogDNA). While these repositories have been analyzed for many different purposes using a variety of data mining techniques, the potential of process-oriented techniques, that is, process mining, for process improvement decisions is relatively less explored. Process mining consists of mining the event logs generated from business process executions supported by information systems in order to capture the business process. The findings from process mining have been shown to be highly useful for process assessment and for supporting process improvements. Recent research studies demonstrate that process mining can provide a promising lens for studying software processes, since it offers a holistic perspective as opposed to alternative repository mining techniques.|
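The core object that process mining extracts from an event log is easy to sketch. Below is a minimal, self-contained illustration (toy ticket data, not from the talk) of deriving a directly-follows graph, the basic building block behind discovery algorithms such as the alpha miner.

```python
# Minimal process-mining sketch: derive a directly-follows graph from an
# event log of (case_id, activity, timestamp) records.
from collections import defaultdict

# Toy event log: two support tickets flowing through activities.
event_log = [
    ("T1", "open", 1), ("T1", "assign", 2), ("T1", "resolve", 3),
    ("T2", "open", 1), ("T2", "resolve", 2),
]

def directly_follows(log):
    # Group events into per-case traces, ordered by timestamp.
    traces = defaultdict(list)
    for case, activity, ts in sorted(log, key=lambda e: (e[0], e[2])):
        traces[case].append(activity)
    # Count each adjacent activity pair across all traces.
    edges = defaultdict(int)
    for acts in traces.values():
        for a, b in zip(acts, acts[1:]):
            edges[(a, b)] += 1
    return dict(edges)

print(directly_follows(event_log))
```

On real repository data, the cases would be tickets or pull requests and the activities their state transitions; the resulting graph is what reveals loops, skipped steps, and other improvement opportunities.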
|6||Utkarsh Desai & Srikanth G Tamilselvam, IBM Research||Refactor Monolith To Microservices||45m|
|Increasingly, enterprises want to refactor their applications into a microservices architecture as part of their journey to the cloud. Microservices is an architectural style that structures an application as a set of smaller services (Lewis and Fowler 2014). These services are built around business functionalities and follow the "Single Responsibility Principle". But identifying functional boundaries in existing code is a hard task (Gouigoux and Tamzalit 2017), and the effort multiplies when done without the help of the original developers. In this talk, we will cover a novel approach for monolith decomposition that maps the implementation structure of a monolith application to a functional structure, which in turn can be mapped to business functionality. Graphs are a natural way to represent application implementation structure. The core entities in the application, such as programs, transactions, tables, and jobs, can be considered as nodes, and their interactions with other entities can be considered as edges. The invocation pattern can be captured as node attributes or heterogeneous edge attributes. Therefore, the application refactoring problem can be viewed as a graph-based clustering task. Each of the clusters can be mapped to a core business function. The method also highlights supporting utilities needed across functional clusters and recommends program files that need modification to make the functional clusters independent and deployable. The clustering technique attempts to maximize cohesion and minimize coupling both at the implementation structure level and at the functional structure level. This results in microservice candidates that are naturally aligned with the different business functions exposed by the application, while also exploiting natural implementation seams in the monolithic code.
We have evaluated our approach on multiple Java, .NET, and COBOL applications. We will present the results on open benchmark applications such as DayTrader and demonstrate the candidate-microservice advisor tool we developed.|
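The graph view of decomposition described above can be illustrated with a deliberately tiny sketch. This is not the speakers' clustering technique: the application structure is invented, and a crude rule (high fan-in nodes are utilities; the remaining connected components are candidates) stands in for their cohesion/coupling optimization.

```python
# Hypothetical sketch: programs are nodes, calls are edges. Set aside
# shared utilities (high fan-in), then treat each remaining connected
# component as a microservice candidate.
from collections import defaultdict

calls = [  # (caller, callee) pairs from an invented monolith
    ("OrderUI", "OrderSvc"), ("OrderSvc", "OrderDB"),
    ("UserUI", "UserSvc"), ("UserSvc", "UserDB"),
    ("OrderSvc", "Logger"), ("UserSvc", "Logger"),
]

def microservice_candidates(calls, utility_fanin=2):
    # A node called from many places is treated as a shared utility.
    fanin = defaultdict(int)
    for _, callee in calls:
        fanin[callee] += 1
    utilities = {n for n, d in fanin.items() if d >= utility_fanin}

    # Build an undirected graph over the non-utility nodes.
    adj, nodes = defaultdict(set), set()
    for a, b in calls:
        nodes |= {a, b}
        if a not in utilities and b not in utilities:
            adj[a].add(b)
            adj[b].add(a)
    nodes -= utilities

    # Each connected component becomes one candidate cluster.
    seen, clusters = set(), []
    for n in sorted(nodes):
        if n in seen:
            continue
        stack, comp = [n], set()
        while stack:
            x = stack.pop()
            if x in comp:
                continue
            comp.add(x)
            seen.add(x)
            stack.extend(adj[x] - comp)
        clusters.append(sorted(comp))
    return clusters, sorted(utilities)

clusters, utilities = microservice_candidates(calls)
print("candidates:", clusters, "shared utilities:", utilities)
```

Even this crude rule separates the order and user functions and surfaces `Logger` as a cross-cutting utility; the actual approach replaces the fan-in heuristic with clustering that optimizes cohesion and coupling at both the implementation and functional levels.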
|7||Santanu K. Rath, NIT Rourkela||Why do so many web-based projects fail in the present-day scenario?||45m|
|Depending on the survey on the success of web-based software, it is claimed that anywhere from 25% to 68% of development projects fail. But despite the years of experience now accumulated, and the refinement of processes and tools for predicting and preventing such failures, the same mistakes are repeated countless times. So there is an ongoing need to identify the most common reasons that make so many web projects fail, and to analyze them regularly.
The phrase ‘software engineering’ was deliberately chosen, for the NATO conference held at Garmisch in October 1968, as being provocative: it implied the need for software manufacture to be based on the kinds of theoretical foundations and practical disciplines that are traditional in the established branches of engineering, a need that many practitioners still do not meet. The NATO Software Engineering Conferences of 1968 and 1969 helped to address some of the problems and started the process of finding solutions. However, I don’t believe that the software crisis has ever really been resolved.|