Paper Accepted at FASE 2016

FASE at ETAPS 2016

A Graph-Based Semantics Workbench for Concurrent Asynchronous Programs by Claudio Corrodi, Alexander Heußner, and myself was accepted for publication at the International Conference on Fundamental Approaches to Software Engineering (FASE 2016).

A number of novel programming languages and libraries have been proposed that offer simpler-to-use models of concurrency than threads. It is challenging, however, to devise execution models that successfully realise their abstractions without forfeiting performance or introducing unintended behaviours. This is exemplified by SCOOP—a concurrent object-oriented message-passing language—which has seen multiple semantics proposed and implemented over its evolution. We propose a “semantics workbench” with fully and semi-automatic tools for SCOOP that can be used to analyse and compare programs with respect to different execution models. We demonstrate its use in checking the consistency of semantics by applying it to a set of representative programs, and highlighting a deadlock-related discrepancy between the principal execution models of the language. Our workbench is based on a modular and parameterisable graph transformation semantics implemented in the GROOVE tool. We discuss how graph transformations are leveraged to atomically model intricate language abstractions, and how the visual yet algebraic nature of the model can be used to ascertain soundness.

Paper postprint (.pdf)

Presentation slides (.pdf)

Paper Accepted at GPCE 2015

GPCE 2015

Contract-Based General-Purpose GPU Programming by Alexey Kolesnichenko, myself, Sebastian Nanz, and Bertrand Meyer was accepted for publication at the International Conference on Generative Programming: Concepts and Experiences (GPCE 2015).

Using GPUs as general-purpose processors has revolutionized parallel computing by offering, for a large and growing set of algorithms, massive data-parallelization on desktop machines. An obstacle to widespread adoption, however, is the difficulty of programming them and the low-level control of the hardware required to achieve good performance. This paper suggests a programming library, SafeGPU, that aims at striking a balance between programmer productivity and performance, by making GPU data-parallel operations accessible from within a classical object-oriented programming language. The solution is integrated with the design-by-contract approach, which increases confidence in functional program correctness by embedding executable program specifications into the program text. We show that our library leads to modular and maintainable code that is accessible to GPGPU non-experts, while providing performance that is comparable with hand-written CUDA code. Furthermore, runtime contract checking turns out to be feasible, as the contracts can be executed on the GPU.
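For a flavour of the programming style the abstract describes – contracts attached to data-parallel operations, where the contracts are themselves checkable in a data-parallel way – here is a minimal sketch in Python. The names are purely illustrative: SafeGPU is not a Python library and this is not its API, just the shape of the idea.

```python
# Hypothetical sketch (NOT SafeGPU's actual API) of a contract-equipped
# data-parallel operation. Both the operation and its contracts are
# element-wise, which is why contract checking can itself be offloaded
# to the GPU in the approach the paper describes.

def saxpy(a, x, y):
    # Precondition: the vectors must have equal length.
    assert len(x) == len(y), "precondition violated: len(x) != len(y)"

    # The operation itself: an element-wise (data-parallel) map.
    result = [a * xi + yi for xi, yi in zip(x, y)]

    # Postcondition: element-wise, so checkable in parallel too.
    assert all(r == a * xi + yi for r, xi, yi in zip(result, x, y))
    return result

print(saxpy(2.0, [1.0, 2.0], [3.0, 4.0]))  # prints [5.0, 8.0]
```

The point of the design is that because pre- and postconditions range over the same collections as the operation, runtime contract checking need not serialise the computation.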

Paper postprint (.pdf)

Paper Accepted at GaM 2015

GaM 2015

Towards Practical Graph-Based Verification for an Object-Oriented Concurrency Model by Alexander Heußner, myself, Claudio Corrodi, and Benjamin Morandi was accepted for publication at the Graphs as Models Workshop (GaM 2015).

To harness the power of multi-core and distributed platforms, and to make the development of concurrent software more accessible to software engineers, different object-oriented concurrency models such as SCOOP have been proposed. Despite the practical importance of analysing SCOOP programs, there are currently no general verification approaches that operate directly on program code without additional annotations. One reason for this is the multitude of partially conflicting semantic formalisations for SCOOP (either in theory or by-implementation). Here, we propose a simple graph transformation system (GTS)-based run-time semantics for SCOOP that grasps the most common features of all known semantics of the language. This run-time model is implemented in the state-of-the-art GTS tool GROOVE, which allows us to simulate, analyse, and verify a subset of SCOOP programs with respect to deadlocks and other behavioural properties. Besides proposing the first approach to verify SCOOP programs by automatic translation to GTS, we also highlight our experiences of applying GTS (and especially GROOVE) for specifying semantics in the form of a run-time model, which should be transferable to GTS models for other concurrent languages and libraries.

Paper postprint (.pdf)

Bidirectionality and How to Fake It

The general problem of maintaining consistency between two related sources of information is recurrent across computer science, surfacing in communities as diverse as database management (e.g. the construction of updatable views for relational databases), programming languages (e.g. invertible computations), and model-driven engineering (e.g. synchronising views of software models). The various mechanisms constructed for solving these problems and maintaining consistency have, collectively, become known as bidirectional transformations (BX), and the last decade has seen several cross-discipline initiatives to study the common challenges and underpinnings; most notably the GRACE-BX meeting in 2008 and the BX wiki and workshop series.

Some months ago (and almost certainly in one of York’s fine pubs), Richard Paige, Mike Dodds, Arend Rensink and I turned our attention to the buzz surrounding BX in one particular subcommunity: that of model transformations. This is a community concerned with making models first-class engineering artefacts, manipulated by automatic consistency-preserving translations (i.e. model transformations). Intuitively, these models could be system views at different levels of abstraction (perhaps all the way down to code), with model transformations the means of linking them, and consistency a property satisfied if all the models actually are views of the same system. These transformations are typically unidirectional, in the sense that they can translate a source model S into a target model T – but not a T into an S. It’s often quite fine that a transformation cannot do this; the reverse direction is not necessarily interesting or useful (who modifies compiled code and expects the source files to be updated?). But often it is: for example, in integrating separately-modelled systems that must be consistent; or in the database view problem, where low-level users are modifying restricted views of a database, while administrators are concurrently querying and modifying a model of the entire thing.

BX support concurrent engineering activities

Bidirectional model transformations (BX) that solve such problems simultaneously describe transformations in both directions – from S to T and from T to S – and are consistency-preserving by construction. This property makes BX scientifically fascinating (to myself and many others), but whether they are a productive means of solving problems in model-driven engineering is quite another issue. The inherent complexity that BX must encode makes them very difficult to implement; so much so that many transformation languages do not support them at all, or do so only with strings attached and severe restrictions (e.g. bijective transformations only). Perhaps worse, while the QVT-R language (part of an OMG standard) does support them, it does so with an ambiguous semantics and limited tool support.

At some point, one starts to ask: does the difficulty in expressing and executing BX actually outweigh the practical benefits arising from their consistency guarantees? And if engineers aren’t writing BX to solve the problems that they are intended to solve… what are they writing instead? For the latter question, the answer is typically “pairs of unidirectional transformations”. These are certainly easier to write than BX, and a pair of transformations could still maintain consistency; but the emphasis here is on could – we lack the guarantee. Arend, Mike, Richard and I, however, began to wonder whether this could be automatically checked. And we were not alone:

“If a framework existed in which it were possible to write the directions of a transformation separately and then check, easily, that they were coherent, we might be able to have the best of both worlds.” — Perdita Stevens

We have come to believe that the Epsilon family of model management languages has all of the ingredients necessary to facilitate such checks. BX can be “faked” in Epsilon as pairs of update-in-place transformations that separately manipulate the source and target models, with an inter-model constraint expressing precisely what it means for the two models to be consistent. The idea is that immediately after an update of the source (resp. target) model, the constraint is violated, and the corresponding update of the target (resp. source) model is triggered. We claim that a fake BX in this sense is indistinguishable from a true BX, if it can be proven that the constraint is invariant under executions of the transformations in both directions. Furthermore, we conjecture that we will be able to check this, automatically, using rigorous proof techniques adapted from graph rewriting.
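To make the idea a little more concrete, here is a minimal Python sketch of a "fake" BX (all names hypothetical; this is not Epsilon code): a pair of update-in-place transformations on the source and target models, together with an inter-model constraint, where an update that violates the constraint triggers the transformation that restores it.

```python
# Hypothetical sketch of a "fake" BX: two update-in-place transformations
# plus an inter-model consistency constraint. Purely illustrative -- the
# actual approach uses the Epsilon model management languages.

# Source model: a list of person records; target model: a name -> age index.
def consistent(source, target):
    """Inter-model constraint: target is exactly the name->age view of source."""
    return target == {p["name"]: p["age"] for p in source}

def forward(source, target):
    """Source was updated: update target in place to restore consistency."""
    target.clear()
    target.update({p["name"]: p["age"] for p in source})

def backward(source, target):
    """Target was updated: update source in place to restore consistency."""
    source[:] = [{"name": n, "age": a} for n, a in target.items()]

source = [{"name": "ada", "age": 36}]
target = {"ada": 36}
assert consistent(source, target)

source.append({"name": "alan", "age": 41})  # update the source model...
assert not consistent(source, target)       # ...the constraint is now violated
forward(source, target)                     # ...so the forward update fires
assert consistent(source, target)           # ...and consistency is restored

target["ada"] = 37                          # update the target model...
backward(source, target)                    # ...the backward update fires
assert consistent(source, target)
```

Such a pair is "coherent" – indistinguishable from a true BX – precisely when the constraint is invariant under both transformations; automating that check is the open challenge.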

We explored these ideas further in our short paper, Towards Rigorously Faking Bidirectional Model Transformations, presented by Richard on our behalf at AMT 2014. We applied them successfully to a small conceptual example, but whether they work more generally, and prove to be a practical alternative to true BX, remains to be evaluated. We are certainly on the case, but would appreciate your thoughts, suggestions, and criticisms in the meantime.

Paper postprint (.pdf)

Presentation slides (.pdf)

Verifying Graph Transformations: Paths, Bipartiteness, and Beyond

For several years I have maintained an interest in graph transformation, which combines the strengths of graphs and rules into a single, abstract computational model. In this setting, the state of a computation is represented as a graph, and the computational steps are applications of graph rewrite rules (akin to applications of rules in Chomsky string grammars, but lifted to the richer domain of graphs).

Some rules, a program, and an example execution

Consider for example the figure to the right, which contains a set of rules, a program constructed over them, and an example execution on the empty graph Ø. Even without giving the formal semantics, it may already be clear – simply from the graphical nature of the rules – that executing the program on the empty graph will construct a tree. Such an execution applies the rule init exactly once, which dutifully (and unconditionally) creates a fresh node. Then, as demanded by the iteration construct !, the rule grow is repeatedly and nondeterministically applied until some (unspecified) termination condition is satisfied. Each application of this rule requires a node in the graph to be matched (the number indicates which node on the right-hand side corresponds to it), before gluing onto it an outgoing edge and a fresh node. Observe that no matter how many times (if any) the rule grow is applied, the property – or rather, invariant – of being a tree is maintained.
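The behaviour of this little program can be simulated in a few lines of Python (purely illustrative – this is not how graph transformation tools represent rules), which also lets us test the tree invariant empirically:

```python
import random

# Sketch of the tree-generating program: 'init' creates one fresh node;
# 'grow' matches any node and glues onto it an outgoing edge to a fresh
# node; '!' iterates grow nondeterministically. Encoding is illustrative.

def init(nodes, edges):
    nodes.append(len(nodes))          # one fresh node, unconditionally

def grow(nodes, edges):
    parent = random.choice(nodes)     # nondeterministic match of a node
    child = len(nodes)                # a fresh node...
    nodes.append(child)
    edges.append((parent, child))     # ...glued on via an outgoing edge

def is_tree(nodes, edges):
    # In this construction every child is a fresh node with a larger index
    # than its parent, so no cycles can arise; it therefore suffices to
    # check |E| = |V| - 1 and that no node has two incoming edges.
    targets = [t for (_, t) in edges]
    return len(edges) == len(nodes) - 1 and len(set(targets)) == len(targets)

nodes, edges = [], []                    # start from the empty graph
init(nodes, edges)                       # init: applied exactly once
for _ in range(random.randint(0, 10)):   # grow! : any number of iterations
    grow(nodes, edges)
assert is_tree(nodes, edges)             # the tree invariant is maintained
```

Running this any number of times will never falsify the invariant, but of course testing is not proof – which is exactly the question the next paragraph turns to.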

But how do we prove that this invariant is maintained? Or more generally, how do we verify that a program based on graph transformation is correct with respect to a given specification? One might wonder at this point whether this question is even worth answering – but to see that it is, we must look somewhat beyond our tree-generating program, which on its own, fails to convey the power and flexibility of graph transformation. Correctness counts in many of its more practical applications throughout computer science and software engineering: from the semantics and implementation of functional programming languages, to the specification and analysis of pointer structures, object-oriented systems, and model transformations.

Research in verifying graph transformations broadly belongs to one of two camps: model checking or proofs. Although the former is supported by some excellent tools (GROOVE being amongst my favourites), it is also important to explore the latter, which involves the use of logic, semantics, and proof calculi to unequivocally show that programs satisfy their specifications. The seminal work here is due to Karl-Heinz Pennemann, who paved the way to automatic correctness proofs with respect to a logic of first-order graph properties; the theory of which my thesis extended to attribute-manipulating rules.

First-order logic is too weak for graphs

Unfortunately, the restriction to first-order properties is quite a limiting factor in the context of graphs. In particular, it excludes the possibility of verifying so-called “non-local” properties, of which there are many familiar and important ones: the existence of arbitrary-length paths, the absence of cycles, connectedness, and bipartiteness; to name just a few. Without paths or cycles, what can we prove of interest about our tree-generating program? And in a more practical setting, it is easy to conjure scenarios in which we will suffer. How might we prove the absence of deadlock, for example, in a model of some distributed algorithm, when we cannot reason about the absence of cycles?
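To see why such properties are "non-local", consider path existence: whether two nodes are connected can depend on nodes arbitrarily far away, so deciding it requires an unbounded fixpoint computation rather than any fixed-radius check around the nodes – which is precisely what first-order formulae are limited to. A small Python sketch of the property itself:

```python
from collections import deque

# Reachability (arbitrary-length path existence) is the canonical
# non-local property: no bound on how far from 'source' the answer
# may depend on, hence the unbounded search below.

def reachable(edges, source, target):
    """Is there a path (of any length, including zero) from source to target?"""
    adj = {}
    for u, v in edges:
        adj.setdefault(u, []).append(v)
    seen, queue = {source}, deque([source])
    while queue:                       # breadth-first fixpoint computation
        u = queue.popleft()
        if u == target:
            return True
        for v in adj.get(u, []):
            if v not in seen:
                seen.add(v)
                queue.append(v)
    return False

# A chain 0 -> 1 -> 2 -> 3: no fixed-radius check around node 0 can
# "see" node 3, yet node 3 is reachable from node 0.
chain = [(0, 1), (1, 2), (2, 3)]
assert reachable(chain, 0, 3)
assert not reachable(chain, 3, 0)
```

Cycle-freeness, connectedness, and bipartiteness are non-local for the same underlying reason, which is why quantification over sets of nodes or edges (as in MSO logic) becomes necessary.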

Detlef Plump and I set out to address this limitation in our recent paper, Verifying Monadic Second-Order Properties of Graph Programs. First, we extended Pennemann’s logic for graphs to one as expressive as monadic second-order (MSO) logic – a logic very well studied in the context of graph structure (most famously by Bruno Courcelle and his colleagues). The extension allows one to quantify over sets of nodes or edges, and to assert that particular nodes or edges belong to particular sets. While conceptually simple, the extension is a powerful one, allowing a large class of important non-local properties to be expressed – including all of those listed above. Then, and most importantly, we showed that the logic can be integrated into proof systems by defining, and proving correct, an effective weakest precondition construction relative to arbitrary rules and postconditions.

The paper is primarily a theoretical contribution, demonstrating (for the curious!) that the weakest precondition construction can be extended to an MSO-based graph logic. Unfortunately, all but the most trivial formulae of this logic are difficult for users to write and interpret, and the constructed preconditions are not particularly concise. A more practical approach to verification – which the paper also begins to explore – might be to work with a set of more “direct” predicates for particular non-local properties of interest, rather than full MSO. The paper considers, for example, a predicate expressing the existence of an arbitrary-length path between two given nodes; an important non-local property, but one that is non-trivial to specify in MSO logic. We were particularly pleased to discover that the weakest precondition construction for such predicates can be defined entirely in terms of the predicates themselves, i.e. without the need to introduce any set quantifiers.

We published and presented the work at ICGT 2014 (as part of STAF 2014) in York, UK, and would be glad to receive your comments and criticisms. The postprint and slides are available below, and the printed version can be downloaded from SpringerLink.

Paper postprint (.pdf)

Presentation slides (.pdf)

Petri Nets and True Concurrency

One of the first keynotes I had the pleasure of listening to as a doctoral student was given by Javier Esparza at SPIN/ICGT 2010. As a joint (or rather, parallel) meeting of the model checking and graph transformation communities, his lecture – A False History of True Concurrency – presented the key insights and developments in the history of a research area that was interesting for both: the so-called unfolding approach to model checking.

Unfoldings are a mathematical formalism that gives a truly concurrent semantics to a system; that is to say, a partial-order semantics that identifies immaterial orderings of events – orderings that would otherwise be forced in concurrency defined by interleaving. They were interesting to the SPIN community as a means of alleviating state space explosion; interesting too to those at ICGT, the technique having an analogous use in automatically verifying graph grammars. As a fresh-faced doctoral student, I was fascinated by the theory and applications that the talk examined, although I never got a chance to explore them in my own work.

A Petri net unfolding

I was delighted, however, to be able to revisit some of these ideas in the concurrency course that I teach at ETH (jointly with Sebastian Nanz and Bertrand Meyer). The second half of the course has traditionally focused on formal models and process calculi for concurrency, but has never covered the important mathematical modelling language of Petri nets; an omission that we addressed this year. In addition to covering the intuition needed for constructing and interpreting Petri nets, we also used them as a vehicle for comparing a semantics based on interleaving with a semantics based on true concurrency, applying the unfolding technique directly to nets and discussing the implications for automatic analysis. Though the lecture is short, and omits many important details (such as how to search for an appropriate finite prefix), I hope that it might still serve as a useful gentle introduction to Petri nets and true concurrency; hence this post to share it!
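For readers who would like to experiment, here is a minimal Python sketch (encoding purely illustrative) of the Petri net "token game", and of the phenomenon that motivates true concurrency: interleaving semantics distinguishes orderings of independent events that a partial-order semantics treats as immaterial.

```python
from itertools import permutations

# A tiny Petri net: places hold tokens (a marking), a transition is
# enabled when all of its input places hold a token, and firing it
# consumes tokens from the inputs and produces tokens in the outputs.
# Encoding is illustrative: a transition is a pair (inputs, outputs).

def enabled(marking, transition):
    inputs, _ = transition
    return all(marking.get(p, 0) >= 1 for p in inputs)

def fire(marking, transition):
    inputs, outputs = transition
    m = dict(marking)
    for p in inputs:
        m[p] -= 1                      # consume tokens from input places
    for p in outputs:
        m[p] = m.get(p, 0) + 1         # produce tokens in output places
    return m

# Two independent transitions: their places are disjoint.
t1 = (["p1"], ["p2"])
t2 = (["q1"], ["q2"])
m0 = {"p1": 1, "q1": 1}

# Interleaving semantics distinguishes the runs t1;t2 and t2;t1, yet
# the ordering is immaterial: both reach the same marking. A truly
# concurrent (partial-order) semantics represents them as a single
# run of two unordered events.
results = set()
for order in permutations([t1, t2]):
    m = m0
    for t in order:
        assert enabled(m, t)
        m = fire(m, t)
    results.add(frozenset(m.items()))
assert len(results) == 1               # both interleavings coincide
```

With n independent transitions there are n! interleavings but only one partially ordered run; this collapse is exactly what the unfolding approach exploits to tame state space explosion.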