ModSym Keynote - Title
Model Driven Development - A practitioner takes stock and a look into the future
Model-driven development (MDD) is an approach that aims to eliminate the accidental complexity of developing software systems altogether and to reduce the intrinsic complexity to the extent possible. It can be said that the basic technological pieces for supporting MDD are in place: many tools with varying degrees of sophistication exist, while other important aspects such as usability, learnability, and performance still need improvement, which is in essence a continuous process. The focus of the MDD community has been on developing technologies that address how to model. Barring the domain of safety-critical systems, these models are used only for generating a system implementation; indeed, modelling language design and definition are influenced very heavily by the ability to transform models into an implementation that can be executed on some platform. However, modern enterprises face wicked problems, most of which are addressed in an ad hoc manner. Can modelling and model-based techniques provide a more scientific and tractable alternative? Will it be possible to model at least a small subset of a modern complex enterprise so as to demonstrate that the model is the organization? A practitioner, drawing on the experience of developing MDD technology used to deliver several large business-critical software systems over the past 17 years, discusses what went right, what went wrong, and what needs to happen in the future.
Vinay is a Chief Scientist of Tata Research Development and Design Centre (TRDDC) at Tata Consultancy Services (TCS). He is a member of the TCS Corporate Technology Council that oversees all
R&D and innovation activities at TCS. His research interests include model-driven software engineering, self-adaptive systems, and enterprise modeling. His work in model-driven software engineering has led to a toolset that has been used to deliver several large business-critical systems over the past 15 years. Much of this work has found its way into OMG standards, three of which Vinay contributed to in a leadership role. An alumnus of the Indian Institute of Technology Madras, Vinay also serves as a Visiting Professor at Middlesex University, London.
The cost of computing has decreased a billionfold over half a century. The focus of software engineering has consequently shifted from squeezing as much as possible out of every compute cycle and every bit of memory, to improving developer productivity, and, of late, to engineering user experiences and behaviors.
As computing becomes a commodity, software is omnipresent in all parts of life and, therefore, either helps end users make decisions or makes decisions for them. Because most users are not able to understand software systems or to articulate their needs, software systems have both to collect massive amounts of operational data about user activities and to analyze and use that data to provide user experiences that lead to desired outcomes, e.g., increasing sales revenue or the quality of software (if the user happens to be a software developer).
It no longer suffices to deliver software that requires, for example, an entry field for a specific piece of data. Instead, the software has to ensure that users can and will enter the relevant data, or it has to obtain the data by observing user behavior. Moreover, the software has to ensure that the resulting data reflects the intended quantities, and that the quality of that data is sufficient to make important decisions either automatically or with human support.
Such software is engineered to provide accurate and actionable evidence and, therefore, requires novel approaches to design, implement, test, and operate it. The criteria for success demand much more than performing a well-defined task according to specification: the software has to provide evidence that is accurate and that also leads to the intended user behavior.
In contexts where the desired user behaviors are relatively well defined, some existing software systems achieve these goals through detailed measurement of behavior and massive use of A/B testing (in which two samples of users are provided with slightly different versions of the software in order to estimate the effect these differences have on user activity). It is not clear whether and how these approaches could generalize to settings where the desired behaviors are less clearly defined or vary among users.
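The A/B-testing estimate mentioned above can be sketched in a few lines. This is an illustrative example with made-up numbers, not part of the talk: two equal-sized user samples see variants A and B, we compare their conversion rates, and a standard two-proportion z-statistic gauges whether the observed difference is likely noise.

```python
import math

def ab_effect(conv_a, n_a, conv_b, n_b):
    """Return (difference in conversion rate, two-proportion z-statistic)."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p = (conv_a + conv_b) / (n_a + n_b)            # pooled rate under H0
    se = math.sqrt(p * (1 - p) * (1 / n_a + 1 / n_b))
    return p_b - p_a, (p_b - p_a) / se

# Hypothetical experiment: 5000 users per variant,
# 200 vs 260 of them converted.
diff, z = ab_effect(conv_a=200, n_a=5000, conv_b=260, n_b=5000)
```

Here `z` above roughly 1.96 would suggest a real effect at the conventional 5% level; with the numbers chosen, the 1.2 percentage-point lift clears that bar.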
As operation and measurement increasingly become part of software development, the separation between software tools and end-user software is increasingly blurred. Similarly, the measurement associated with testing and using software is increasingly becoming an integral part of the software delivered to users.
Software engineering needs to catch up with these realities by adjusting the topics of its study. Software construction, development, build, delivery, and operation will become increasingly critical tools and an integral part of the software system. Simply concerning ourselves with architectures and languages to support scalable computation and storage will not be enough. Software systems will have to produce compelling evidence, not simply store or push bits around. Software engineering will, therefore, need to become evidence engineering.
Audris Mockus is the Ericsson-Harlan D. Mills Chair Professor of Digital Archeology in the Department of Electrical Engineering and Computer Science of the University of Tennessee, Knoxville. He also continues to work part-time as a consulting research scientist at Avaya Labs Research. Audris Mockus studies software developers' culture and behavior through the recovery, documentation, and analysis of digital remains, in other words, Digital Archaeology. These digital traces reflect projections of collective and individual activity. He reconstructs the reality from these projections by designing data mining methods to summarize and augment these digital traces, interactive visualization techniques to inspect, present, and control the behavior of teams and individuals, and statistical models and optimization techniques to understand the nature of individual and collective behavior.
Automated Test Generation Using Concolic Testing
In this talk, I will discuss recent advances and challenges in concolic testing and symbolic execution. Concolic testing, also known as directed automated random testing (DART) or dynamic symbolic execution, is an efficient way to automatically and systematically generate test inputs for programs. It uses a combination of runtime symbolic execution and automated theorem-proving techniques to automatically generate non-redundant and exhaustive test inputs. Concolic testing has inspired the development of several industrial and academic automated testing and security tools, such as PEX, SAGE, and YOGI at Microsoft, Apollo at IBM, Conbol at Samsung, and CUTE, jCUTE, CATG, Jalangi, SPLAT, BitBlaze, jFuzz, Oasis, and SmartFuzz in academia. A central reason behind the wide adoption of concolic testing is that, while it uses program analysis and automated theorem-proving techniques internally, it exposes a testing usage model that is familiar to most software developers.
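The concolic loop described above can be sketched as follows. This is a toy illustration, not the DART/CUTE implementation: the program under test is hand-instrumented to record each branch predicate it evaluates, and the "solver" only understands the two predicates appearing in this example, whereas real tools instrument automatically and call an SMT solver.

```python
def program(x, trace):
    """Program under test, hand-instrumented to log (predicate, outcome)."""
    if x < 10:
        trace.append(("x < 10", True))
        return "small"
    trace.append(("x < 10", False))
    if x == 42:
        trace.append(("x == 42", True))
        return "bug"
    trace.append(("x == 42", False))
    return "ok"

def solve(path):
    """Trivial stand-in for an SMT solver, limited to this toy vocabulary.

    `path` is a list of (predicate, desired_outcome) pairs; returns an int
    satisfying all of them, or None if the candidate fails a check."""
    x = 0
    for pred, want in path:
        if pred == "x < 10":
            x = 0 if want else max(x, 10)
        elif pred == "x == 42" and want:
            x = 42
    ok = all((x < 10) == want if p == "x < 10" else (x == 42) == want
             for p, want in path)
    return x if ok else None

def concolic(entry, seed=0, max_runs=10):
    """Run concretely, then negate each branch in turn to reach new paths."""
    seen, results, worklist = set(), {}, [seed]
    while worklist and max_runs:
        x = worklist.pop()
        max_runs -= 1
        trace = []
        results[x] = entry(x, trace)
        path = tuple(outcome for _, outcome in trace)
        if path in seen:
            continue
        seen.add(path)
        for i in range(len(trace)):
            # keep the prefix, flip branch i, and ask the solver for an input
            flipped = list(trace[:i]) + [(trace[i][0], not trace[i][1])]
            cand = solve(flipped)
            if cand is not None and cand not in results:
                worklist.append(cand)
    return results

paths = concolic(program)   # discovers inputs 0, 10, and 42
```

Starting from the arbitrary seed 0, the loop systematically flips path constraints and reaches all three paths, including the `x == 42` "bug" path that random testing would almost never hit.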
A key challenge in concolic testing techniques is scalability to large, realistic programs: the number of feasible execution paths of a program often increases exponentially with the length of an execution path. I will describe MultiSE, a new technique for merging states incrementally during symbolic execution, without using auxiliary variables. The key idea of MultiSE is an alternative representation of the state, where we map each variable, including the program counter, to a set of guarded symbolic expressions called a value summary. MultiSE has several advantages over conventional DSE and state-merging techniques: 1) value summaries enable sharing of symbolic expressions and path constraints along multiple paths, 2) value summaries avoid redundant execution, and 3) MultiSE does not introduce auxiliary symbolic values, which enables it to make progress even when merging values not supported by the constraint solver, such as floating-point or function values. We have implemented MultiSE in an open-source tool. Our evaluation of MultiSE on several programs shows that it can run significantly faster than traditional dynamic symbolic execution.
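The value-summary representation can be illustrated with a minimal sketch. This is not the MultiSE implementation: guards are modeled here as frozensets of literal strings such as "c" and "!c" rather than solver terms, and all names are invented for the example.

```python
def assume(summary, literal):
    """Restrict a value summary to executions where `literal` holds."""
    neg = literal[1:] if literal.startswith("!") else "!" + literal
    return [(g | {literal}, v) for g, v in summary if neg not in g]

def merge(*branches):
    """Union per-branch summaries into one, grouping entries by value."""
    combined = {}
    for summary in branches:
        for g, v in summary:
            combined.setdefault(v, []).append(g)
    return [(g, v) for v, guards in combined.items() for g in guards]

def apply_op(summary, fn):
    """Apply an operation under every guard; no auxiliary variables needed."""
    return [(g, fn(v)) for g, v in summary]

# Symbolically execute:  if c: x = 1 else: x = 2 ; y = x + 10
top = [(frozenset(), None)]                         # unconstrained state
x_then = apply_op(assume(top, "c"), lambda _: 1)    # guard {c}:  x = 1
x_else = apply_op(assume(top, "!c"), lambda _: 2)   # guard {!c}: x = 2
x = merge(x_then, x_else)            # x's value summary: {({c},1), ({!c},2)}
y = apply_op(x, lambda v: v + 10)    # y's value summary: {({c},11), ({!c},12)}
```

After the branch, `x` carries both outcomes in one state, so the assignment to `y` executes once over the summary instead of once per forked path, which is the sharing and redundancy-avoidance the abstract describes.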
Koushik Sen is an associate professor in the Department of Electrical Engineering and Computer Sciences at the University of California, Berkeley. His research interests lie in software engineering, programming languages, and formal methods. He is interested in developing software tools and methodologies that improve programmer productivity and software quality. He is best known for his work on “DART: Directed Automated Random Testing” and concolic testing. He received an NSF CAREER Award in 2008, a Haifa Verification Conference (HVC) Award in 2009, an IFIP TC2 Manfred Paul Award for Excellence in Software: Theory and Practice in 2010, a Sloan Foundation Fellowship in 2011, and a Professor R. Narasimhan Lecture Award in 2014. He has won several ACM SIGSOFT Distinguished Paper Awards. He received the C.L. and Jane W-S. Liu Award in 2004, the C. W. Gear Outstanding Graduate Award in 2005, the David J. Kuck Outstanding Ph.D. Thesis Award in 2007, and a Distinguished Alumni Educator Award in 2014 from the UIUC Department of Computer Science. He holds a B.Tech. from the Indian Institute of Technology Kanpur, and an M.S. and a Ph.D. in computer science from the University of Illinois at Urbana-Champaign.