Abstract
Despite ever increasing computational power, the history of computing is also characterized by a constant battle with complexity. We will briefly review these trends and argue that, due to its focus on abstraction, automation, and analysis, the modeling community is ideally positioned to facilitate the development of future computing systems. More concretely, a few select technological and societal trends and developments will be discussed together with the research opportunities they present to researchers interested in modeling.
1 Introduction
The development of computing is remarkable in many ways, and perhaps most of all in its progress and impact. However, due to the economic significance of computing and the pace of societal and technological change, we are constantly presented with new questions, challenges, and problems, giving us little time to reflect on how far we have come. Also, computing has become such a large and fragmented field that it is impossible to keep abreast of all research developments.
This paper briefly reviews some select past and present developments. Its main goal is to inform, stimulate, and inspire, not to convince. It attempts to do so in a somewhat eclectic, anecdotal manner, without claims of comprehensiveness, mostly driven by the author's interests, but with ample references to allow interested readers to dig deeper.
2 Complexity
“Complexity, I would assert, is the biggest factor involved in anything having to do with the software field.”
Robert L. Glass [23]
In general, complex systems are characterized by a large number of entities, components or parts, many of which are highly interdependent and tightly coupled such that their combination creates synergistic, emergent, and non-linear behaviour [29]. One of the prime examples of a complex system is the human brain consisting, approximately, of \(10^{11}\) neurons connected by \(10^{15}\) synapses [11].
Figure 1 shows the size of software in different kinds of products. Noteworthy here are not only the absolute numbers, but also the rate of increase. Automotive software is a good example. Just over 40 years ago, cars were devoid of software. In 1977, the General Motors Oldsmobile Toronado pioneered the first production automotive microcomputer ECU: a single-function controller used for electronic spark timing. By 1981, General Motors was using microprocessor-based engine controls executing about 50,000 lines of code across its entire domestic passenger car production. Since then, the size, significance, and development costs of automotive software have grown to staggering levels: modern cars can be shipped with as much as 1 GB of software encompassing more than 100 million lines of code; experts estimate that more than 80% of automotive innovations will be driven by electronics and 90% thereof by software, and that the cost of software and electronics can reach 40% of the cost of a car [25].
The history of avionics software tells a similar story: between 1965 and 1995, the amount of software in civil aircraft doubled every two years [14]. If growth continues at this pace, experts believe that the limits of affordability will soon be reached [79].
Lines of code is a doubtful measure of complexityFootnote 1. Nonetheless, it appears fair to say that modern software is one of the most complex man-made artifacts.
2.1 Why Has Complexity Increased So Much?
An enabler necessary for building and running modern software is certainly modern hardware. Today's software could not run on yesterday's hardware. The hardware industry has produced staggering advances in chip design and manufacturing, delivering exponentially increasing computing power at exponentially decreasing cost. Compared to the Apollo 11 Guidance Computer used in 1969Footnote 2, a standard smartphone from 2015 (e.g., the iPhone 6) has tens of millions of times the computational power (in terms of instructions per second)Footnote 3. In 1985, a 2011 iPad 2 would have rivaled a four-processor version of the Cray 2 supercomputer in performance, and in 1994, it would still have made the list of the world's fastest supercomputers [45]. According to [47], the price of a megabyte of memory dropped from US$411,041,792 in 1957 to US$0.0037 in December 2015, a factor of over 100 billion! The width of each conducting line in a circuit (approx. 15 nm) is approaching the width of an atom (approx. 0.1 to 0.5 nm).
But it is not just technology that is getting more complex; life in general is, too. According to anthropologist and historian Joseph Tainter, “the history of cultural complexity is the history of human problem solving” [73]. Societies get more complex because “complexity is a problem solving strategy that emerges under conditions of compelling need or perceived benefit”. Complexity allows us to solve problems (e.g., food or energy distribution) or enjoy some benefit. Ideally, this benefit is greater than the costs of creating and sustaining the complexity introduced by the solution.
2.2 Consequences of Complexity
On the positive side, complex systems are capable of impressive feats. AlphaGo, the Go-playing system that in March 2016 became the first program to beat a professional human Go player without handicaps on a full-sized board in a five-game match, was said by experts to be capable of developing its own moves: “All but the very best Go players craft their style by imitating top players. AlphaGo seems to have totally original moves it creates itself” [5], providing a great example of emergent, synergistic behaviour, whether seeming or real.
On the negative side, complexity increases the risk of failure. Data on the failures of software and software development are hard to come by, but according to the US National Institute of Standards and Technology, the cost of software errors in the US in 2001 was US$60 billion [63], and in 2012 the worldwide cost of IT failure was estimated at US$3 trillionFootnote 4.
A recent example illustrates how subtle bugs can be and how difficult it is to build software systems correctly: Chord is a protocol and algorithm for a peer-to-peer distributed hash table, first presented in 2001 [72]. The work identified relevant properties and provided informal proofs for them in a technical report. Chord has been implemented many timesFootnote 5 and went on to win the SIGCOMM Test-of-Time Award in 2011. The original paper currently has over 12,000 citations on Google Scholar and is listed by CiteSeer as the 9th most cited Computer Science article. Yet in 2012, the protocol was shown to be incorrect [82].
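To give a flavor of the protocol, the following is a heavily simplified, purely illustrative sketch (not the authors' specification) of Chord's basic successor lookup on an identifier ring, using plain successor pointers instead of finger tables. Notably, the subtleties shown to be incorrect in [82] concern the concurrent join/stabilization machinery, which this sketch deliberately omits.

```python
# Illustrative sketch of Chord's successor lookup on an identifier ring.
# Linear successor pointers only; no finger tables, no concurrent joins.

M = 6                      # identifier space: 2**M slots
RING_SIZE = 2 ** M

def in_half_open(x, a, b):
    """True if x lies in the ring interval (a, b]."""
    if a < b:
        return a < x <= b
    return x > a or x <= b  # interval wraps around 0

class Node:
    def __init__(self, node_id):
        self.id = node_id % RING_SIZE
        self.successor = self  # set when the ring is built

def build_ring(ids):
    """Build a static ring, each node pointing to the next clockwise node."""
    nodes = sorted((Node(i) for i in set(ids)), key=lambda n: n.id)
    for i, n in enumerate(nodes):
        n.successor = nodes[(i + 1) % len(nodes)]
    return nodes

def find_successor(start, key):
    """Walk the ring until key falls in (node, node.successor]."""
    n = start
    key %= RING_SIZE
    while not in_half_open(key, n.id, n.successor.id):
        n = n.successor
    return n.successor

nodes = build_ring([1, 8, 14, 21, 32, 38, 42, 48, 51, 56])
assert find_successor(nodes[0], 54).id == 56
assert find_successor(nodes[0], 60).id == 1   # wraps around the ring
```

Even this static fragment hints at how easy it is to get interval arithmetic on the ring wrong; the actual correctness issues arise once nodes join and leave concurrently.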
2.3 How to Deal with Complexity
Computer science curricula teach students a combination of techniques to deal with complexity, the most prominent of which are decomposition, abstraction, reuse, automation, and analysis. Of these, abstraction, automation, and analysis lie at the heart of MDE. These principles have served us amazingly well. Examples include the development of programming languages in general, and Peter Denning’s ground-breaking work on virtual memory in particular [15]. But, e.g., ‘The Law of Leaky Abstractions’Footnote 6, the ‘Automation Paradox’ [22], and the Ariane 5 accident in 1996 [1] have also taught us that even these techniques must be used with care.
3 Developments and Opportunities
“I have no doubt that the auto industry will change more in the next five–10 years than it has in the last 50”
Mary Barra, GM Chairman and CEO, January 2016 [24]
“Only 19 % of [175] interviewed auto executives describe their organizations as prepared for challenges on the way to 2025”
B. Stanley, K. Gyimesi, IBM IBV, January 2015 [71]
Making predictions in the presence of exponential change is very difficultFootnote 7. For instance, when asked to imagine life in the year 2000, 19th century French artists came up with robotic barbers, machines that read books to school children, and radium-based fireplacesFootnote 8; when the concept of a personal computer was first discussed at IBM, a senior executive famously questioned its valueFootnote 9. However, predicting further accelerating levels of change appears to be a safe bet. Increasing amounts of software are very likely to come with that, meaning there should be lots of things to do for software researchers.
The following list is highly selective and meant to complement more comprehensive treatments such as [65]. Also, we will focus mostly on technology; however, as pointed out in [65], more technology is not always the answer.
3.1 Semantics Engineering
Capturing the formal semantics of general purpose programming languages has been a topic of research for a long time, but the richness of these languages presents challenges that limit a more immediate, practical application of the results, contributing to a widespread belief that formal semantics are for theoreticians only. However, the recent interest in Domain Specific Languages (DSLs) appears to present new opportunities to leverage formal semantics. Compared to General Purpose Languages (GPLs), a DSL typically consists of a smaller number of carefully selected features. Often, semantically difficult GPL constructs such as objects, pointers, iteration, or recursion can be avoided; expressiveness is lost, but tractability is gained.
The literature contains some examples showing how this increased tractability can be leveraged to facilitate formal reasoning. For instance, automatic verifiers have been built for DSLs for hardware description [13], train signaling [18], graph-based model transformation [66], and software build systems [10].
However, the improved tractability of DSLs might also greatly facilitate the automatic generation of supporting tooling. Given how widely descriptions of the syntax of a language are now used to generate syntax-processing tools, the vision is clear: use descriptions of the semantics of a language to facilitate the construction of semantics-aware tools for the execution and analysis of that language.
An Inspiring Example. This idea has already been explored in the context of programming languages [6, 28, 52, 77] and modeling languages [19, 43, 53, 83] to, e.g., implement customizable interpreters, symbolic execution engines, and model checkers. However, the work in [40], in which abstract interpreters for a language are generated automatically from a description of its formal semantics, shows that more is possible. Given a description of the operational semantics of a machine-language instruction set such as x86/IA32, ARM, or SPARC in a domain-specific language called TSL, and a description of how the base types and operators in TSL are to be interpreted “abstractly” in an abstract semantic domain, the TSL tool automatically creates an implementation of an abstract interpreter for the instruction set.
The abstract interpreter can then be used by different analysis engines (e.g., for finding a fixed-point of a set of dataflow equations using the classical worklist algorithm, or for performing symbolic execution) to obtain an analyzer that is easily retargetable to different languages. The tool offers an impressive amount of generality by supporting different instruction sets and different analyses. It has been used to build analyzers for the IA32 instruction set that perform value/set analysis, definition/use analysis, model checking, and Botnet extraction with a precision at least as high as manually created analyzers.
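The core idea can be conveyed with a toy sketch (hypothetical, and much simpler than TSL itself): the operational semantics of a tiny instruction set is written once, parameterized over a semantic domain. Instantiating it with a concrete domain yields an interpreter; instantiating it with an abstract domain (signs, in this sketch) yields an abstract interpreter for free.

```python
# One semantics definition, two interpretations: concrete and abstract.
# The instruction set and both domains are invented for this illustration.

class Concrete:
    const = staticmethod(lambda n: n)
    add = staticmethod(lambda a, b: a + b)
    mul = staticmethod(lambda a, b: a * b)

class Signs:
    # abstract values: '+', '-', '0', 'T' (unknown)
    @staticmethod
    def const(n):
        return '+' if n > 0 else '-' if n < 0 else '0'
    @staticmethod
    def add(a, b):
        if a == '0': return b
        if b == '0': return a
        return a if a == b else 'T'
    @staticmethod
    def mul(a, b):
        if '0' in (a, b): return '0'
        if 'T' in (a, b): return 'T'
        return '+' if a == b else '-'

def run(program, domain):
    """The shared operational semantics, reused for every domain."""
    regs = {}
    for op, dst, *args in program:
        if op == 'const':
            regs[dst] = domain.const(args[0])
        elif op == 'add':
            regs[dst] = domain.add(regs[args[0]], regs[args[1]])
        elif op == 'mul':
            regs[dst] = domain.mul(regs[args[0]], regs[args[1]])
    return regs

prog = [('const', 'r0', 3), ('const', 'r1', -4),
        ('mul', 'r2', 'r0', 'r1'), ('add', 'r3', 'r2', 'r2')]
assert run(prog, Concrete)['r3'] == -24
assert run(prog, Signs)['r3'] == '-'
```

TSL does this at industrial scale, for real instruction sets and far richer abstract domains, but the division of labor is the same: the semantics is written once, and each analysis supplies only its domain.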
Lowering Barriers, Increasing Benefit. Recent formalizations of different industrial-scale artifacts including operating system kernels [35], compilers [38], and programming languages including C [17], JavaScript [57] and Java [4] provide some evidence that large-scale formalizations are becoming increasingly feasible. Efforts are underway to make the expression, analysis, and reuse of descriptions of semantics more scalable, effective, and mainstream [21, 54, 62]. Paired with the increasing maturity and adoption of language workbenches such as XtextFootnote 10, this work may allow substantial progress on the road towards the automatic generation of semantics-aware tools such as interpreters, static analyzers, and compilers. Descriptions of semantics might one day be as common and useful as descriptions of syntax are today.
3.2 Synthesis
The topic of synthesis has been receiving a lot of attention recently. For most of these efforts, ‘synthesis’ refers to the process of automatically generating executable code from information given in some higher-level form: examples include the generation of code that manipulates many different artifacts (e.g., bitvectors [70], concurrent data structures [69], database queries [9], data representations [68], or spreadsheets [26]), gives feedback to students for programming assignments [68], or implements an optimizing compiler [8]. Some of these examples use a GPL, some use a DSL. The synthesis itself is implemented using either constraint solving or machine learning. Different proposals on how to best integrate synthesis into programming languages have been made, targeting GPLs such as Java [31, 49] and DSLs [75].
Given that abstraction, automation, and analysis are central to MDE, synthesis is certainly also of interest to the modeling community, and the work on synthesis and its applications should be followed closely. In [74, 75], the idea of “solver-aided DSLs” is introduced. The work presents a framework in which such DSLs can be created and illustrates its use with a DSL for example-based web scraping, in which the solver is used to generate an XPath expression that retrieves the desired data.
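As a minimal illustration of synthesis from examples, consider the following sketch (a naive enumerative search rather than the solver-aided approach of [74, 75]; all names are invented): given input/output pairs, it searches a tiny space of arithmetic expressions for one consistent with all of them.

```python
# Enumerative example-based synthesis over a tiny expression space:
# find (op, const) such that op(x, const) == y for every example.
from itertools import product

OPS = {'add': lambda x, c: x + c,
       'mul': lambda x, c: x * c}

def synthesize(examples, max_const=10):
    """Return the first (op_name, const) consistent with all examples."""
    for name, c in product(OPS, range(-max_const, max_const + 1)):
        if all(OPS[name](x, c) == y for x, y in examples):
            return name, c
    return None  # no program in the search space fits

assert synthesize([(1, 3), (2, 6), (5, 15)]) == ('mul', 3)
assert synthesize([(1, 4), (2, 5), (10, 13)]) == ('add', 3)
```

Real synthesizers replace this brute-force loop with constraint solving or smarter search, but the contract is the same: examples in, a program consistent with them out.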
MDE features a range of activities and situations which might potentially benefit from a little help from a solver capable of finding solutions to constraints. Could the idea of synthesis and the use of solvers facilitate, e.g.,
- the development of models via extraction or autocompletion,
- the support for partial models with incomplete or uncertain information,
- the analysis of models,
- the refinement of models via, e.g., the generation of substate machines from interface specifications,
- the generation of correct, efficient code from models,
- the generation of different views from a model?
How could synthesis be leveraged in language workbenches that generate supporting tools such as analyzers and code generators, or in model transformation languages and engines that support different transformation intents [44]?
Some attempts to leverage synthesis for, e.g., model creation [36], transformation authoring [2], and design space exploration [27] already exist, but the topic hardly seems exhausted. Indeed, some of the technical issues Selic mentions in [65] might be mitigated using synthesis, including dealing with abstract, incomplete models, model transformation, and model validation.
3.3 Reconciling Formal Analysis and Evolution
There is a fundamental conflict between analysis and evolution: As soon as the model evolves (changes), any analysis results obtained on the original version may be invalidated and the analysis may have to be rerun. Unfortunately, both seem unavoidable not just in the context of MDE, but software engineering in general.
Most analyses require the creation of supporting artifacts that represent analysis-relevant information about the model. For instance, software reverse engineering tools collect relevant information about the code in a so-called fact repository typically containing a collection of tuples encoding graphs [34]; most static analysis tools require some kind of dependence graph, and test case generation tools often rely on symbolic execution trees.
When the cost of the analysis rises, the motivation to avoid a complete re-analysis after a change and to leverage information about the nature of the change to optimize the analysis increases as well. In general, aspects of this topic are handled in the literature on impact analysis [39]; however, the analyses considered typically are either manual (comprehension, debugging) or rather narrow (regression testing, software measurement via metrics), and do not consider, e.g., static analyses or analyses based on formal methods.
Two Approaches. Assuming the analysis requires supporting artifacts, there are, in principle, at least two ways of reconciling analysis and evolution [33]:
1. Artifact-oriented (Fig. 2): The goal here is to update the supporting artifact \(A_1\) as efficiently as possible, but in such a way that it becomes fully reflective of the information in the changed program. To this end, the impact of the change \(\varDelta \) on the original artifact \(A_1\) is determined, and the parts of the artifact possibly affected are recomputed, while parts known to be unaffected are left unchanged. Then, the updated artifact \(A_2\) can be used as before to perform all analyses it is meant to support. For instance, for analyses based on dependence graphs, such as slicing or impact analysis, the parts of the graph affected by the change are updated and the updated graph is used to recompute the result. Similarly, for a dead code analysis (or test case generation) using a symbolic execution tree (SET), affected parts of the tree would be updated to produce a tree corresponding to the changed program. In this approach, the savings come from avoiding the reconstruction of parts of the supporting artifact \(A_2\).
2. Analysis-oriented (Fig. 3): Here, the focus is on updating the result of the analysis as efficiently as possible, rather than the supporting artifact. To this end, the impact of the change \(\varDelta \) on the analysis result is determined, and the parts of the analysis that may lead to a different result due to the change are redone, ignoring any parts known to produce the same result. For instance, when impact analysis is used during regression testing, only tests for executions that were introduced by the change are run; tests covering unaffected executions are ignored [60]. In this approach, the focus is on reestablishing the analysis result \(R_2\) as some combination \(R_2 = op(\varDelta ,R_1, R'_2)\) of the previous result \(R_1\) and the partial result \(R'_2\). E.g., an analysis-oriented optimization of the dead code analysis mentioned above (or test case generation) would use the most efficient means to determine dead code in (or test cases for) the affected parts; the construction of the full SET for the changed program may not be necessary for that. In this case, \(R_1\) would be the dead code in (or test cases for) \(M_1\), the partial result \(R'_2\) would be the dead code in (test cases for) the parts of the model introduced by the change, and the operation \(op(\varDelta ,R_1,R'_2)\) would return the union of \(R'_2\) and the dead code (test cases) in \(R_1\) not impacted by the change. Here, the savings come from avoiding unnecessary parts of the analysis.
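The analysis-oriented scheme \(R_2 = op(\varDelta, R_1, R'_2)\) can be illustrated with a deliberately simple sketch in which the "analysis" partitions perfectly by module (real static analyses do not partition this cleanly, so this only conveys the shape of the idea; all names are invented):

```python
# Analysis-oriented incremental analysis: reanalyze only the changed
# modules (R2'), reuse the previous result R1 for everything else.

def analyze(module_src):
    """The 'expensive' per-module analysis: count top-level 'def 's."""
    return module_src.count('def ')

def full_analysis(model):
    return {m: analyze(src) for m, src in model.items()}

def incremental_analysis(r1, model2, delta):
    """op(delta, R1, R2'): combine old result with the partial result."""
    r2_partial = {m: analyze(model2[m]) for m in delta if m in model2}
    r2 = {m: v for m, v in r1.items() if m not in delta}  # reuse R1
    r2.update(r2_partial)
    return r2

m1 = {'a.py': 'def f(): pass', 'b.py': 'def g(): pass\ndef h(): pass'}
r1 = full_analysis(m1)
m2 = dict(m1, **{'b.py': 'def g(): pass'})               # change b.py only
r2 = incremental_analysis(r1, m2, delta={'b.py'})
assert r2 == full_analysis(m2) == {'a.py': 1, 'b.py': 1}
```

The interesting research questions arise exactly where this sketch cheats: when a change in one part can affect the analysis result of unchanged parts, computing a sound, small delta is the hard part.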
Comparing the two approaches, we see an interesting tradeoff: the first approach does not speed up the analysis itself (only the update of the supporting artifact). However, it results in a complete supporting artifact (e.g., a dependence graph or SET) that can then be used for whatever analyses it supports (e.g., different static analyses for dependence graphs, and test case generation or dead code analysis for SETs). Moreover, the result of the analysis of the changed model does not rely on the result of the analysis of the original program at all. The second approach speeds up the analysis itself, but since it focuses on the changed parts, it is only partial. E.g., the updated program can only be concluded to be free of dead code if both the first and the second analysis say so.
In sum, the second approach is more restricted compared to the first, but might well hold additional optimization potential. Recent research on program analysis using formal techniques has begun to explore these possibilities, and analysis-oriented approaches to optimize model checking [81] and symbolic execution [58] have been developed. Inspired by these proposals, we have developed prototypes that use both approaches to optimize the symbolic execution of Rhapsody statemachines [33]. Results indicate that both approaches are complementary and effective in different situations.
3.4 Open Science
In 2010, two Harvard economists published a paper entitled “Growth in a Time of Debt” in a non-peer reviewed journal which provided support for the argument that excessive debt is bad for growth. The paper was used by many policy makers to back up their calls for fiscal austerity. However, in 2013, the paper was shown to have used flawed methodology and to not support the authors’ conclusionsFootnote 11.
Reproducibility. Examples of research producing doubtful results due to unintended or even intended flaws in the data or methodology have recently been going through the media, and many disciplines have begun to investigate the reproducibility of their research results. For instance, a study in economics showed that 78% of the 162 replication studies conducted “disconfirm a major finding from the original study” [16]. A study focusing on research in Computer Systems [12] examined 601 papers from eight ACM conferences and five journals: of the papers with results backed by code, the study authors were able to build the system in less than 30 min only 32% of the time; in 54% of the cases, the study authors failed to build the code, but the paper authors said that the code does build with reasonable effort.
The U.S. President Steps In. However, it has been pointed out in prominent places that, in many disciplines these days, reproducibility means the availability of programs and data [30, 50, 64]. In other words, since software, programming, and the use and manipulation of data play such a central role in so many disciplines, some of the problems with reproducibility in other disciplines are due to limitations in programming, software, and the use and manipulation of data; that is, they are due to problems that the computing community is at least partially responsible for and should put on its research agendaFootnote 12. About a year ago, the world’s most powerful man did exactly that with an executive order creating a “National Strategic Computing Initiative”, which includes accessibility and workflow capture as central objectives [56].
A Good Start: Encouraging Artifact Submission. The research community has begun to adjust, with, e.g., no fewer than four events devoted to reproducibility at the 2015 Conference for High Performance Computing, Networking, Storage and Analysis (SC’15)Footnote 13, and Eclipse’s Science Working Group announcing specific initiatives (Eclipse Integrated Computational Environment and Data Analysis Workbench). However, more should be done, and promoting the value of artifact submission at workshops, conferences, and journals appears to be a good place to start. According to [12], 19 Computer Science conferences participated in an artifact submission and evaluation process between 2011 and 2016, including PLDI’15, OOPSLA’15, ECOOP’15, and POPL’16, but more need to join. The availability of the artifacts that research is based on, and their integration into the scientific evaluation process, should be the norm, not the exception.
3.5 Provenance
A topic closely related to open science and reproducibility is provenance. In general, data provenance refers to a description of the origins of a piece of data and the process by which it was created or obtained, with the goal of allowing assessments of its quality, reliability, or trustworthiness. It has traditionally been studied in the context of databases, but has also been applied to data found on the web and data used in scientific experiments. Domains of application include
- science, to make data and experimental results more trustworthy and experiments more reproducible,
- business, to demonstrate ownership, responsibility, or regulatory compliance and facilitate auditing processes, and
- software development, to aid certification and establish adherence to licensing rules.
OPM and PROV: Metamodels and Standards for Provenance. There are tools specifically devoted to the collection and representation of provenance data such as KarmaFootnote 14 but also workflow engines supporting provenance such as KeplerFootnote 15. Many of these tools support the Open Provenance Model (OPM), a data model (i.e., metamodel) for provenance information [51] based on directed, edge-labeled, hierarchical graphs with three kinds of nodes representing things (Artifact, Agent, and Process) and five kinds of edges representing causal relationships (used, wasGeneratedBy, wasControlledBy, wasTriggeredBy, and wasDerivedFrom). OPM graphs are subject to well-formedness constraints, can contain time information, and have inference rules (allowing, e.g., the inclusion of derived information via transitive edges) and operations (for, e.g., union, intersection, merge, renaming, refinement and completion) associated with them. A formal semantics of OPM graphs published recently views them as temporal theories on the temporal events represented in the graph [37], but does not account for Agents. OPM has been a major influence in the design of the PROV family of documents by the World Wide Web Consortium (W3C) [78] which not only defines a data model, but also corresponding serializations and other supporting definitions to enable the interoperable interchange of provenance information in heterogeneous environments such as the Web.
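As a rough illustration (a simplification for exposition, not a conforming OPM implementation), the following sketch models the three node kinds and five causal edge kinds and implements the transitive inference over wasDerivedFrom edges mentioned above; the example graph and all identifiers are invented:

```python
# OPM-style provenance graph: typed nodes, labeled causal edges, and
# transitive closure over wasDerivedFrom as a sample inference rule.

NODE_KINDS = {'Artifact', 'Process', 'Agent'}
EDGE_KINDS = {'used', 'wasGeneratedBy', 'wasControlledBy',
              'wasTriggeredBy', 'wasDerivedFrom'}

class ProvGraph:
    def __init__(self):
        self.nodes, self.edges = {}, set()

    def add_node(self, name, kind):
        assert kind in NODE_KINDS
        self.nodes[name] = kind

    def add_edge(self, src, label, dst):
        assert label in EDGE_KINDS and src in self.nodes and dst in self.nodes
        self.edges.add((src, label, dst))

    def derived_from(self, artifact):
        """All artifacts this one was (transitively) derived from."""
        result, frontier = set(), {artifact}
        while frontier:
            a = frontier.pop()
            for s, label, d in self.edges:
                if s == a and label == 'wasDerivedFrom' and d not in result:
                    result.add(d)
                    frontier.add(d)
        return result

g = ProvGraph()
for name, kind in [('raw.csv', 'Artifact'), ('clean.csv', 'Artifact'),
                   ('fig.png', 'Artifact'), ('cleanup', 'Process'),
                   ('plot', 'Process'), ('alice', 'Agent')]:
    g.add_node(name, kind)
g.add_edge('cleanup', 'used', 'raw.csv')
g.add_edge('clean.csv', 'wasGeneratedBy', 'cleanup')
g.add_edge('cleanup', 'wasControlledBy', 'alice')
g.add_edge('clean.csv', 'wasDerivedFrom', 'raw.csv')
g.add_edge('fig.png', 'wasDerivedFrom', 'clean.csv')
assert g.derived_from('fig.png') == {'clean.csv', 'raw.csv'}
```

OPM and PROV add much more (well-formedness constraints, time information, and further inference rules and operations), but the graph-with-typed-nodes-and-edges core is exactly the kind of structure the modeling community works with daily.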
Open-ended Opportunities. There appears to be a lot of opportunity for researchers with a background in graph transformation, formal methods, or modeling to advance the state of the art in provenance. Many established topics (e.g., formal semantics, constraint solving, traceability, querying, language engineering for graphical DSMLs, and model management), but also emerging topics (e.g., the use of models and modeling to support inspection, certification, and compliance checking [20, 46, 55], and data aggregation and visualization [41, 42, 48, 76]), appear potentially relevant. Moreover, approaches for building models that allow the quantification of the quality or trustworthiness of data appear to be lacking. In producer/consumer relationships, service level agreements guaranteeing data of a certain level of quality might also be of interest.
3.6 Open Source Modeling Tools
The need to improve MDE tooling has been expressed before [32, 65, 80]. At the same time, significant efforts to develop industrial-strength open source modeling tools and communities that support and sustain them are currently being made. Sample tools include AutoFocusFootnote 16, xtUMLFootnote 17, PapyrusFootnote 18 [3], and PapyrusRTFootnote 19 [59].
The development and availability of complete, industrial-strength open source MDE tools is a radical shift from past practices and presents both exciting opportunities and substantial challenges for everybody interested in MDE, regardless of whether they use the tools for industrial development, research, or education. Due to the importance of tooling to the success of MDE, this shift has the potential to provide a much-needed stimulus for major advances in its adoption, development, and dissemination.
4 Conclusion
“We can only see a short distance ahead, but we can see plenty there that needs to be done.”
Alan Turing
As we continue to entrust more and more complex functions and capabilities to software, our ability to build this software reliably and effectively should increase as well. Much more work is needed to make this happen and this paper has suggested some starting points.
The fragmentation that plagues many research areas is harmful. Any scientific community should keep an open mind and remain willing to learn from others about existing and new problems and potentially new ways to solve them [67].
Notes
- 1. So many alternative ones have been proposed [61] that even the study of complexity appears complex.
- 2. A web-based simulator can be found at http://svtsim.com/moonjs/agc.html.
- 5. At least 8 implementations are listed at https://github.com/sit/dht/wiki/faq.
- 11. A discussion of the paper and the controversy it caused can be found at https://en.wikipedia.org/wiki/Growth_in_a_Time_of_Debt.
- 12. Computers are even said to have “broken science”, https://www.eclipsecon.org/na2016/session/how-computers-have-broken-science-and-how-we-can-fix-it.
References
Ariane 5 flight 501 failure, report by the inquiry board (1996). http://esamultimedia.esa.int/docs/esa-x-1819eng.pdf
Baki, I., Sahraoui, H.: Multi-step learning and adaptive search for learning complex model transformations from examples. ACM Trans. Softw. Eng. Methodol. (2016) (in print)
Barrett, R., Bordeleau, F.: 5 years of ‘Papyrusing’ – migrating industrial development from a proprietary commercial tool to Papyrus (invited presentation). In: Workshop on Open Source Software for Model Driven Engineering (OSS4MDE 2015), pp. 3–12 (2015)
Bogdănaş, D., Roşu, G.: K-Java: a complete semantics of Java. In: ACM SIGPLAN/SIGACT Symposium on Principles of Programming Languages (POPL 2015), pp. 445–456. ACM, January 2015
Borowiec, S., Lien, T.: AlphaGo beats human Go champ in milestone for artificial intelligence. Los Angeles Times, 12 March 2016
Borras, P., Clement, D., Despeyroux, T., Incerpi, J., Kahn, G., Lang, B., Pascual, V.: Centaur: the system. In: ACM SIGSoft/SIGPlan Software Engineering Symposium on Practical Software Development Environments (SDE 1987) (1987)
Charette, R.N.: Why software fails. IEEE Spectr. 42(9), 42–49 (2005)
Cheung, A., Kamil, S., Solar-Lezama, A.: Bridging the gap between general-purpose and domain-specific compilers with synthesis. In: Summit oN Advances in Programming Languages (SNAPL 2015) (2015)
Cheung, A., Solar-Lezama, A., Madden, S.: Optimizing database-backed applications with query synthesis. ACM SIGPLAN Not. 48(6), 3–14 (2013)
Christakis, M., Leino, K.R.M., Schulte, W.: Formalizing and verifying a modern build language. In: Jones, C., Pihlajasaari, P., Sun, J. (eds.) FM 2014. LNCS, vol. 8442, pp. 643–657. Springer, Heidelberg (2014)
Chudler, E.H.: Neuroscience for kids. https://faculty.washington.edu/chudler/what.html
Collberg, C., Proebsting, T.A.: Repeatability in computer systems research. Commun. ACM 59(3), 62–69 (2016)
Cook, B., Launchbury, J., Matthews, J.: Specifying superscalar microprocessors in Hawk. In: Workshop on Formal Techniques for Hardware and Hardware-like Systems (1998)
Potocki de Montalk, J.P.: Computer software in civil aircraft. Cockpit/Avionics Eng. 17(1), 17–23 (1993)
Denning, P.J.: Virtual memory. ACM Comput. Surv. 2(3), 153–189 (1970)
Duvendack, M., Palmer-Jones, R.W., Reed, W.R.: Replications in economics: a progress report. Econ. Pract. 12(2), 164–191 (2015)
Ellison, C., Roşu, G.: An executable formal semantics of C with applications. In: ACM SIGPLAN/SIGACT Symposium on Principles of Programming Languages (POPL 2012), pp. 533–544 (2012)
Endresen, J., Carlson, E., Moen, T., Alme, K.-J., Haugen, Ø., Olsen, G.K., Svendsen, A.: Train control language - teaching computers interlocking. In: Computers in Railways XI. WIT Press (2008)
Engels, G., Hausmann, J.H., Heckel, R., Sauer, S.: Dynamic meta modeling: a graphical approach to the operational semantics of behavioral diagrams in UML. In: Evans, A., Caskurlu, B., Selic, B. (eds.) UML 2000. LNCS, vol. 1939, pp. 323–337. Springer, Heidelberg (2000)
Falessi, D., Sabetzadeh, M., Briand, L., Turella, E., Coq, T., Panesar-Walawege, R.K.: Planning for safety standards compliance: a model-based tool-supported approach. IEEE Softw. 29(3), 64–70 (2012)
Felleisen, M., Findler, R.B., Flatt, M.: Semantics Engineering with PLT Redex. MIT Press, Cambridge (2009)
Geer, D.E.: Children of the magenta. IEEE Secur. Priv. 13(5) (2015)
Glass, R.L.: Sorting out software complexity. Commun. ACM 45(11), 19–21 (2002)
GM: GM chairman and CEO addresses CES. https://www.gm.com/mol/m-2016-Jan-boltev-0106-barra-ces.html, 6 Jan 2016
Grimm, K.: Software technology in an automotive company – major challenges. In: International Conference on Software Engineering (ICSE 2003) (2003)
Gulwani, S., Harris, W., Singh, R.: Spreadsheet data manipulation using examples. Commun. ACM 55, 97–105 (2012)
Hendriks, M., Basten, T., Verriet, J., Brassé, M., Somers, L.: A blueprint for system-level performance modeling of software-intensive embedded systems. Softw. Tools Technol. Transf. 18, 21–40 (2016)
Henriques, P.R., Pereira, M.J.V., Mernik, M., Lenic, M., Gray, J., Wu, H.: Automatic generation of language-based tools using the LISA system. IEE Proc. Softw. 152, 54–69 (2005)
Homer-Dixon, T.: The Ingenuity Gap. Vintage Canada (2001)
Ince, D.C., Hatton, L., Graham-Cumming, J.: The case for open computer programs. Nature 482, 485–488 (2012)
Jeon, J., Qiu, X., Foster, J.S., Solar-Lezama, A.: Jsketch: sketching for Java. In: Joint Meeting of the European Software Engineering Conference and the ACM SIGSOFT Symposium on the Foundations of Software Engineering (ESEC/FSE 2015) (2015)
Kahani, N., Bagherzadeh, M., Dingel, J., Cordy, J.R.: The problems with Eclipse modeling tools: a topic analysis of Eclipse forums, April 2016 (submitted)
Khalil, A., Dingel, J.: Incremental symbolic execution of evolving state machines. In: ACM/IEEE International Conference on Model Driven Engineering Languages and Systems (MODELS 2015) (2015)
Kienle, H.M., Mueller, H.A.: Rigi – an environment for software reverse engineering, exploration, visualization, and redocumentation. Sci. Comput. Prog. 75, 247–263 (2010)
Klein, G., Elphinstone, K., Heiser, G., Andronick, J., Cock, D., Derrin, P., Elkaduwe, D., Engelhardt, K., Kolanski, R., Norrish, M., Sewell, T., Tuch, H., Winwood, S.: Formal verification of an OS kernel. In: ACM SIGOPS Symposium on Operating Systems Principles (SOSP 2009), pp. 207–220. ACM (2009)
Koksal, A.S., Pu, Y., Srivastava, S., Bodik, R., Piterman, N., Fisher, J.: Synthesis of biological models from mutation experiments. In: ACM SIGPLAN/SIGACT Symposium on Principles of Programming Languages (POPL 2013) (2013)
Kwasnikowska, N., Moreau, L., Van den Bussche, J.: A formal account of the Open Provenance Model. ACM Trans. Web 9, 10:1–10:44 (2015)
Leroy, X.: Formal verification of a realistic compiler. Commun. ACM 52(7), 107–115 (2009)
Li, B., Sun, X., Leung, H., Zhang, S.: A survey of code-based change impact analysis techniques. Softw. Test. Verification Reliab. 23, 613–646 (2012)
Lim, J., Reps, Th.: TSL: a system for generating abstract interpreters and its application to machine-code analysis. ACM Trans. Program. Lang. Syst. 35(1), 4:1–4:59 (2013)
Lima, M.: Visual complexity website. http://www.visualcomplexity.com/vc
Lima, M.: The Book of Trees: Visualizing Branches of Knowledge. Princeton Architectural Press (2014)
Lu, Y., Atlee, J.M., Day, N.A., Niu, J.: Mapping template semantics to SMV. In: IEEE/ACM International Conference on Automated Software Engineering (ASE 2004) (2004)
Lúcio, L., Amrani, M., Dingel, J., Lambers, L., Salay, R., Selim, G.M.K., Syriani, E., Wimmer, M.: Model transformation intents and their properties. Softw. Syst. Model., 1–38 (2014)
Markoff, J.: The iPad in your hand: as fast as a supercomputer of yore. New York Times article based on interview with Dr. Jack Dongarra, 9 May 2011. http://bits.blogs.nytimes.com/2011/05/09/the-ipad-in-your-hand-as-fast-as-a-supercomputer-of-yore
Mayr, A., Plösch, R., Saft, M.: Objective safety compliance checks for source code. In: Companion Proceedings of the 36th International Conference on Software Engineering, ICSE Companion 2014 (2014)
McCallum, J.C.: Memory prices (1957–2015). http://www.jcmit.com/memoryprice.htm. Accessed Mar 2016
McCandless, D.: Information is beautiful: Million lines of code. http://www.informationisbeautiful.net/visualizations/million-lines-of-code
Milicevic, A., Rayside, D., Yessenov, K., Jackson, D.: Unifying execution of imperative and declarative code. In: International Conference on Software Engineering (ICSE 2011) (2011)
Monroe, D.: When data is not enough. Commun. ACM 58(12), 12–14 (2015)
Moreau, L., Clifford, B., Freire, J., Futrelle, J., Gil, Y., Groth, P., Kwasnikowska, N., Miles, S., Missier, P., Myers, J., Plale, B., Simmhan, Y., Stephan, E., van den Bussche, J.: The open provenance model core specification (v1.1). Future Gener. Comput. Syst. 27(6), 743–756 (2011)
Mosses, P.: SIS: a compiler-generator system using denotational semantics. Technical report 78-4-3, Department of Computer Science, University of Aarhus (1978)
Muller, P.-A., Fleurey, F., Jézéquel, J.-M.: Weaving executability into object-oriented meta-languages. In: Briand, L.C., Williams, C. (eds.) MoDELS 2005. LNCS, vol. 3713, pp. 264–278. Springer, Heidelberg (2005)
Mulligan, D.P., Owens, S., Gray, K.E., Ridge, T., Sewell, P.: Lem: Reusable engineering of real-world semantics. SIGPLAN Not. 49(9), 175–188 (2014)
Nair, S., de la Vara, J.L., Melzi, A., Tagliaferri, G., de-la-Beaujardiere, L., Belmonte, F.: Safety evidence traceability: problem analysis and model. In: Salinesi, C., Weerd, I. (eds.) REFSQ 2014. LNCS, vol. 8396, pp. 309–324. Springer, Heidelberg (2014)
The President of the United States: Executive order: creating a national strategic computing initiative, 29 July 2015. https://www.whitehouse.gov/the-press-office/2015/07/29/executive-order-creating-national-strategic-computing-initiative
Park, D., Ştefănescu, A., Roşu, G.: KJS: a complete formal semantics of JavaScript. In: SIGPLAN Conference on Programming Language Design and Implementation (PLDI 2015), pp. 346–356. ACM, June 2015
Person, S., Yang, G., Rungta, N., Khurshid, S.: Directed incremental symbolic execution. In: ACM SIGPLAN Conference on Programming Language Design and Implementation (PLDI 2011) (2011)
Posse, E.: PapyrusRT: modelling and code generation (invited presentation). In: Workshop on Open Source Software for Model Driven Engineering (OSS4MDE 2015) (2015)
Ren, X., Shah, F., Tip, F., Ryder, B.G., Chesley, O.: Chianti: A tool for change impact analysis of Java programs. In: ACM Conference on Object-Oriented Programming, Systems, Languages, and Applications (OOPSLA 2004) (2004)
Riguzzi, F.: A survey of software metrics. Technical report DEIS-LIA-96-010, Università degli Studi di Bologna (1996)
Roşu, G., Şerbănuţă, T.F.: An overview of the K semantic framework. J. Logic Algebraic Prog. 79(6), 397–434 (2010)
RTI: The economic impacts of inadequate infrastructure for software testing. Technical report Planning Report 02–3, National Institute of Standards & Technology (NIST), May 2002
Schuwer, R., van Genuchten, M., Hatton, L.: On the impact of being open. IEEE Softw. 32, 81–83 (2015)
Selic, B.: What will it take? A view on adoption of model-based methods. Softw. Syst. Model. 11, 513–526 (2012)
Selim, G.M.K., Lúcio, L., Cordy, J.R., Dingel, J., Oakes, B.J.: Specification and verification of graph-based model transformation properties. In: Giese, H., König, B. (eds.) ICGT 2014. LNCS, vol. 8571, pp. 113–129. Springer, Heidelberg (2014)
Shapiro, S.: Splitting the difference: the historical necessity of synthesis in software engineering. IEEE Ann. Hist. Comput. 19(1), 20–54 (1997)
Singh, R., Gulwani, S., Solar-Lezama, A.: Automated feedback generation for introductory programming assignments. ACM SIGPLAN Not. 48, 15–26 (2013). ACM
Solar-Lezama, A., Jones, C., Bodik, R.: Sketching concurrent data structures. ACM SIGPLAN Not. 43, 136–148 (2008). ACM
Solar-Lezama, A., Rabbah, R., Bodík, R., Ebcioğlu, K.: Programming by sketching for bit-streaming programs. ACM SIGPLAN Not. 40, 281–294 (2005). ACM
Stanley, B., Gyimesi, K.: Automotive 2025 – industry without borders. Technical report, IBM Institute for Business Value, January 2015. http://www-935.ibm.com/services/us/gbs/thoughtleadership/auto2025
Stoica, I., Morris, R., Karger, D., Kaashoek, F.M., Balakrishnan, H.: Chord: a scalable peer-to-peer lookup service for internet applications. In: ACM Conference on Applications, Technologies, Architectures, and Protocols for Computer Communications (SIGCOMM 2001), pp. 149–160 (2001)
Tainter, J.A.: Complexity, problem solving, and sustainable societies. In: Costanza, R., Segura, O., Martinez-Alier, J. (eds.) Getting Down to Earth: Practical Applications of Ecological Economics. Island Press (1996)
Torlak, E., Bodik, R.: Growing solver-aided languages with Rosette. In: ACM International Symposium on New Ideas, New Paradigms, and Reflections on Programming & Software, Onward! 2013, pp. 135–152 (2013)
Torlak, E., Bodik, R.: A lightweight symbolic virtual machine for solver-aided host languages. In: ACM SIGPLAN Conference on Programming Language Design and Implementation (PLDI 2014) (2014)
Tufte, E.: Beautiful Evidence. Graphics Press, Cheshire (2006)
van den Brand, M.G.J., van Deursen, A., Heering, J., de Jong, H.A., de Jonge, M., Kuipers, T., Klint, P., Moonen, L., Olivier, P.A., Scheerder, J., Vinju, J.J., Visser, E., Visser, J.: The ASF+SDF meta-environment: a component-based language development environment. In: Wilhelm, R. (ed.) CC 2001. LNCS, vol. 2027, p. 365. Springer, Heidelberg (2001)
W3C Working Group. PROV-Overview: An overview of the PROV family of documents. In: Groth, P., Moreau, L. (eds.) W3C Working Group Note. W3C (2013)
Ward, D.: AVSI's system architecture virtual integration program: proof of concept demonstrations. In: INCOSE MBSE Workshop, 27 January 2013
Whittle, J., Hutchinson, J., Rouncefield, M., Heldal, R.: Industrial adoption of model-driven engineering: are the tools really the problem? In: ACM/IEEE International Conference on Model-Driven Engineering Languages and Systems (MODELS 2013) (2013)
Yang, G., Dwyer, M., Rothermel, G.: Regression model checking. In: International Conference on Software Maintenance (ICSM 2009), pp. 115–124. IEEE (2009)
Zave, P.: Using lightweight modeling to understand Chord. ACM SIGCOMM Comput. Commun. Rev. 42(2), 50–57 (2012)
Zurowska, K., Dingel, J.: A customizable execution engine for models of embedded systems. In: Roubtsova, E., McNeile, A., Kindler, E., Gerth, C. (eds.) BM-FA 2009-2014. LNCS, vol. 6368, pp. 82–110. Springer, Heidelberg (2015)
Acknowledgment
This work was supported by the Natural Sciences and Engineering Research Council of Canada (NSERC), and by the Ontario Ministry of Research and Innovation (MRI).
© 2016 Springer International Publishing Switzerland
Dingel, J. (2016). Complexity is the Only Constant: Trends in Computing and Their Relevance to Model Driven Engineering. In: Echahed, R., Minas, M. (eds) Graph Transformation. ICGT 2016. Lecture Notes in Computer Science(), vol 9761. Springer, Cham. https://doi.org/10.1007/978-3-319-40530-8_1
Print ISBN: 978-3-319-40529-2
Online ISBN: 978-3-319-40530-8