Speakers
Giovanni Bacci (Aalborg University, Denmark)
From Bisimulations to Metrics via Couplings (Slides)
In probability theory, the coupling method is a proof technique that allows one to compare two, possibly unrelated, distributions P and P’ by constructing a joint probability distribution, called a coupling, whose left and right marginals correspond to P and P’ respectively. The choice of such a coupling can exhibit interesting relations between P and P’.
In this lecture we argue that classical concepts like bisimilarity and bisimilarity metrics are closely related to couplings. Specifically, we show that in many cases (not necessarily involving probability distributions) the task of finding a bisimulation or computing the bisimilarity distance can be conveniently restated as the construction of suitable couplings between automata.
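To make the idea concrete, here is a small sketch (not taken from the lecture) that computes an optimal coupling of two finite distributions with respect to a ground cost using linear programming; minimizing the expected cost over all couplings is the Kantorovich lifting that underlies bisimilarity metrics. The distributions, the cost matrix, and the use of scipy are illustrative choices.

```python
import numpy as np
from scipy.optimize import linprog

# Two finite distributions on {0, 1, 2} (illustrative values).
P = np.array([0.5, 0.3, 0.2])   # left marginal
Q = np.array([0.4, 0.4, 0.2])   # right marginal
# Illustrative ground cost c(i, j) = |i - j| between support points.
C = np.abs(np.subtract.outer(np.arange(3), np.arange(3))).astype(float)

n, m = len(P), len(Q)
# Decision variables x[i, j] >= 0 form the coupling; its row sums must equal P
# and its column sums must equal Q (the two marginal constraints).
A_eq, b_eq = [], np.concatenate([P, Q])
for i in range(n):
    row = np.zeros((n, m)); row[i, :] = 1.0; A_eq.append(row.ravel())
for j in range(m):
    col = np.zeros((n, m)); col[:, j] = 1.0; A_eq.append(col.ravel())

res = linprog(C.ravel(), A_eq=np.array(A_eq), b_eq=b_eq,
              bounds=(0, None), method="highs")
coupling = res.x.reshape(n, m)
print("optimal coupling:\n", coupling)
print("Kantorovich (optimal transport) distance:", res.fun)
```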
David Baelde (ENS Rennes & IRISA)
Formal Proofs of Cryptographic Protocols with Squirrel (Slides)
Cryptographic protocols organize many of our electronic activities: chatting, shopping, voting, etc. Technically speaking, they are small concurrent programs making use of cryptographic primitives: encryption, signature, hashes, etc. Analyzing cryptographic protocols is difficult because they execute in unknown, adverse environments, and because the expected properties are complex, involving notions like knowledge and indistinguishability.
Formal methods have been used for decades to analyze security protocols, with many success stories. A widespread approach, pioneered by Dolev and Yao, abstracts messages as formal terms. Other techniques stick to the cryptographer’s model where messages are arbitrary bitstrings and adversaries perform arbitrary probabilistic computations.
We will present a specific approach to protocol verification, pioneered by Bana and Comon, which leverages the symbolic toolkit of logic but provides guarantees in the cryptographer’s model for provable security. In a nutshell, the idea is to consider first-order logic with terms interpreted as probabilistic PTIME computations, and a predicate modelling computational indistinguishability. Building on this logic of message indistinguishability, a meta-logic has then been proposed to naturally express properties of (unbounded) protocol executions. This meta-logic comes with its own specific proof systems, which have been implemented in a dedicated proof assistant, called Squirrel. The course will introduce its theoretical foundations, and will illustrate its particular reasoning style on simple examples.
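As a toy illustration of the symbolic abstraction mentioned above (an illustrative Python sketch, not Squirrel syntax), the following closes an adversary's knowledge, represented as a set of terms, under pair projection and decryption with known keys, in the Dolev-Yao style.

```python
def pair(a, b):
    return ("pair", a, b)

def senc(m, k):
    return ("senc", m, k)

def analyze(knowledge):
    """Close the adversary's knowledge under pair projection and symmetric
    decryption with known keys (the analysis part of Dolev-Yao deduction)."""
    know = set(knowledge)
    changed = True
    while changed:
        changed = False
        for t in list(know):
            if isinstance(t, tuple) and t[0] == "pair":
                for component in t[1:]:
                    if component not in know:
                        know.add(component)
                        changed = True
            elif isinstance(t, tuple) and t[0] == "senc" and t[2] in know:
                if t[1] not in know:
                    know.add(t[1])
                    changed = True
    return know

# The adversary observes an encrypted message and, later, the key leaks:
observed = {senc(pair("secret", "nonce"), "k1"), "k1"}
print("secret" in analyze(observed))   # True: the secret becomes deducible
```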
Christel Baier (Technische Universität Dresden, Germany)
From Verification to Causality-based Explications (Slides)
The early success story of the model checking approach relies fundamentally on two features. First, the algorithms provide a push-button technology: as soon as the model and specification have been generated, one obtains a result in a fully automated way. Second, if the algorithm terminates with a negative result, then it can return counterexamples to the specification. Counterexamples are the first instances of what we call explications, a term referring to any mathematical concept that in some way sheds light on why the model checker has returned its result. While counterexamples are single instances of execution traces violating the specification, they provide little insight into what causes the violation. To enhance system transparency, crisper explications for the satisfaction or violation of properties are needed. The talk presents an overview of techniques that go in this direction by using formal notions of causality and responsibility to explicate verification results.
The first part of the talk will consider concepts to define the degree of responsibility that are inspired by Halpern and Pearl’s definition of counterfactual causality in structural equation systems or rely on the concept of Shapley values. The second part will address concepts that have been developed towards explications in the context of probabilistic model checking. Among others, this includes notions of cause-effect relations in Markovian models that rely on the probability-raising principle.
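As a rough illustration of the probability-raising principle (a toy sketch, not the formal definitions from the talk): a candidate cause raises the probability of an effect in a Markov chain if the effect is more likely to be reached once the cause state has been visited than it is from the initial state. The chain below is invented for illustration.

```python
import numpy as np

# Transition matrix over states 0..3 (toy example):
# state 0 is initial, state 1 is the candidate cause,
# state 2 is a safe sink, state 3 is the effect.
P = np.array([
    [0.0, 0.5, 0.4, 0.1],
    [0.0, 0.0, 0.3, 0.7],
    [0.0, 0.0, 1.0, 0.0],
    [0.0, 0.0, 0.0, 1.0],
])

def reach_prob(P, start, target):
    """Probability of eventually reaching `target` from `start`
    (standard absorbing-chain linear-equation solve; toy sink detection)."""
    n = len(P)
    transient = [s for s in range(n) if P[s, s] < 1.0]
    A = np.eye(len(transient)) - P[np.ix_(transient, transient)]
    b = P[np.ix_(transient, [target])].ravel()
    x = np.linalg.solve(A, b)
    probs = {s: p for s, p in zip(transient, x)}
    probs[target] = 1.0
    return probs.get(start, 0.0)

p_effect         = reach_prob(P, start=0, target=3)  # Pr(reach effect from init)
p_effect_given_c = reach_prob(P, start=1, target=3)  # Pr(reach effect once at cause)
print(p_effect, p_effect_given_c, p_effect_given_c > p_effect)  # 0.45 0.7 True
```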
Wojciech Czerwiński (University of Warsaw, Poland)
The Reachability Problem for Vector Addition Systems (Slides)
I will talk about both upper and lower bounds on the complexity of the reachability problem for Vector Addition Systems. Even though explaining the best complexity results in detail would take several hours, I plan to convey the intuitions behind the techniques used in the recent advances. Namely, I will sketch the idea behind the decidability proof of the reachability problem and present the main techniques required to obtain its Ackermann-hardness.
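For readers unfamiliar with the model, the following sketch (illustrative only) states the problem concretely: a Vector Addition System is a finite set of integer vectors, a run adds them to a vector of counters that must stay non-negative, and reachability asks whether a given target configuration can be reached. The bounded search below only illustrates the semantics; deciding the problem in general requires the far more involved techniques discussed in the lecture.

```python
from collections import deque

def vas_reachable(transitions, source, target, max_steps=20):
    """True if `target` is reachable from `source` within `max_steps` transitions,
    keeping every counter non-negative along the way."""
    seen = {source}
    queue = deque([(source, 0)])
    while queue:
        config, steps = queue.popleft()
        if config == target:
            return True
        if steps == max_steps:
            continue
        for t in transitions:
            succ = tuple(c + d for c, d in zip(config, t))
            if all(c >= 0 for c in succ) and succ not in seen:
                seen.add(succ)
                queue.append((succ, steps + 1))
    return False

# Example: a 2-dimensional VAS that can move tokens between counters or add to the second.
transitions = [(-1, 1), (1, -1), (0, 1)]
print(vas_reachable(transitions, source=(2, 0), target=(0, 5)))   # True
```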
Bartek Klin (Oxford University, United Kingdom)
Computation Theory over Sets with Atoms (Slides)
In computation theory, many things are finite. Automata, logical circuits and formulas, and even Turing machines are finite objects. Importantly, the alphabets that various classical computation models operate upon are usually finite.
Sometimes all this finiteness is an unwelcome restriction. In particular, we sometimes want to consider infinite alphabets to compute upon. This is useful for modelling systems that operate on data items that come from potentially infinite domains, such as names, passwords, nonces, timestamps, or indeed any data that we want to treat in an “atomic” way, disregarding any low-level representation as sequences of bits.
Computation theory over infinite alphabets is a rich and complex subject. I will present a rather radical approach to it, where infinite domains of data atoms are built into the very fabric of mathematics, all the way down to set-theoretic foundations. The advantage of this is that one can develop a theory of computation with data atoms in a familiar way, using the same definitions as in the classical setting; only now the definitions are interpreted over different mathematical foundations. The disadvantage is that the mathematics of sets with atoms differs from the classical theory in tricky ways: for example, the powerset of a finite set need not be finite there, so one has to be careful not to use that principle in proofs.
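As a toy example of computation over an infinite alphabet (an illustrative sketch, not taken from the books below), consider a one-register automaton over an infinite set of atoms that accepts exactly the words whose first letter occurs again. Atoms can only be compared for equality, which is the kind of symmetry that sets with atoms make precise.

```python
def first_letter_repeats(word):
    """One-register automaton over an infinite alphabet of atoms:
    accept iff the first letter occurs again later in the word."""
    register = None          # stores a single atom
    accepting = False
    for letter in word:
        if register is None:
            register = letter            # remember the first atom
        elif letter == register:
            accepting = True             # the stored atom reappeared
    return accepting

print(first_letter_repeats(["alice", "bob", "alice"]))   # True
print(first_letter_repeats([3, 1, 4, 1, 5]))             # False: 3 never repeats
```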
The presentation will be loosely based on two books (both of which contain significantly more material than what I will be able to cover):
- M. Bojanczyk: “Slightly Infinite Sets”. Draft available from https://www.mimuw.edu.pl/~bojan/upload/main-6.pdf
- A. Pitts: “Nominal Sets: Names and Symmetry in Computer Science”. Cambridge University Press, 2013.
Laura Kovács (Vienna University of Technology, Austria)
First-Order Theorem Proving and Vampire (Slides: (1), (2), (3))
First-order theorem proving is one of the earliest research areas within artificial intelligence and formal methods. It is undergoing rapid development thanks to its successful use in program analysis and verification, security analysis, symbolic computation, theorem proving in mathematics, and other related areas. Breakthrough results have been obtained in all areas of theorem proving, including improvements in theory, implementation, and the development of powerful theorem proving tools. However, recent developments are not always easily accessible to non-specialists. This tutorial aims to introduce the audience to first-order theorem proving. The workhorse used to demonstrate the concepts discussed in the tutorial will be our award-winning theorem prover Vampire.
The tutorial will first focus on the practicalities of using Vampire to validate mathematical theorems. We will then introduce the core concepts of automated theorem proving in first-order logic with equality. We will discuss the resolution and superposition calculi, introduce the saturation principle, present various algorithms implementing redundancy elimination, and demonstrate how these concepts are implemented in Vampire.
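To give a flavour of saturation, here is a deliberately simplified sketch: a given-clause loop for ground (propositional) resolution that derives the empty clause from an unsatisfiable clause set. Vampire itself implements superposition for full first-order logic with equality, together with much stronger redundancy elimination; this is only a toy.

```python
def negate(lit):
    return lit[1:] if lit.startswith("~") else "~" + lit

def resolvents(c1, c2):
    """All binary resolvents of two clauses (frozensets of string literals)."""
    out = []
    for lit in c1:
        if negate(lit) in c2:
            out.append((c1 - {lit}) | (c2 - {negate(lit)}))
    return out

def saturate(clauses):
    """Return True iff the empty clause is derivable, i.e. the set is unsatisfiable."""
    processed, unprocessed = set(), {frozenset(c) for c in clauses}
    while unprocessed:
        given = unprocessed.pop()                 # the "given clause"
        if not given:
            return True                           # empty clause: contradiction found
        for other in processed | {given}:
            for r in map(frozenset, resolvents(given, other)):
                if r not in processed and r not in unprocessed:
                    unprocessed.add(r)
        processed.add(given)
    return False                                  # saturated without the empty clause

# {p}, {~p or q}, {~q} is unsatisfiable:
print(saturate([{"p"}, {"~p", "q"}, {"~q"}]))     # True
```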
Anca Muscholl (LaBRI & Université Bordeaux, France)
A View on String Transducers (Slides)
This tutorial will review recent results on string transducers, ranging from expressiveness questions to algorithmic ones. In particular, I will present two-way and streaming string transducers, with and without origin semantics, and I will discuss the equivalence problem for transducers.
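A standard example, sketched below for illustration, is string reversal: it is not computable by a one-way finite-state transducer, but a streaming string transducer computes it with a single register and copyless updates (and a two-way transducer computes it by reading the input backwards).

```python
def reverse_sst(word):
    """Streaming string transducer for reversal: read left to right,
    maintain one register X with the copyless update X := a . X,
    and output X at the end of the input."""
    X = ""                      # the single register
    for a in word:
        X = a + X               # copyless update
    return X                    # output function: emit the register

print(reverse_sst("abcde"))     # "edcba"
```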
Nir Piterman (Chalmers University of Technology, Sweden)
Reactive Synthesis (Slides)
Reactive systems, such as robot controllers and web interfaces, persistently interact with their environment. Such systems are notoriously hard to design and implement. Synthesis is a technique for automatically generating correct-by-construction reactive systems from high-level descriptions; it is one of the greatest challenges in the area of formal methods. Recent years have seen significant theoretical and practical progress in this field, leading to applications in hardware design, robotics, user programming, and model-driven development.
This lecture will cover the theoretical background of synthesis of reactive systems from linear temporal logic specifications. We will introduce the problem and show how it can be viewed as a two-player game between a system and its environment. We will consider algorithms for solving several types of simple games and show a direct reduction from LTL synthesis to parity games. We will then turn to highlight some of the problems with this approach and follow several possible lines of attack. We will also consider other practical problems in applications of synthesis.
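As a minimal illustration of the game-based view (a sketch of the simplest case only), the following computes the attractor of a target set in a reachability game: the vertices from which the system player can force the play into the target. Parity games, to which LTL synthesis reduces, are solved by more elaborate algorithms built from such attractor computations. The game graph below is invented for illustration.

```python
def attractor(vertices, edges, owner, target):
    """Vertices from which player 0 can force reaching `target`.
    owner[v] in {0, 1}; edges maps each vertex to its list of successors."""
    attr = set(target)
    changed = True
    while changed:
        changed = False
        for v in vertices:
            if v in attr:
                continue
            succs = edges[v]
            if owner[v] == 0 and any(s in attr for s in succs):
                attr.add(v); changed = True            # player 0 can choose to enter
            elif owner[v] == 1 and succs and all(s in attr for s in succs):
                attr.add(v); changed = True            # player 1 cannot avoid entering
    return attr

# Tiny game: player 0 owns a, c, d; player 1 owns b (which can loop forever at b).
vertices = {"a", "b", "c", "d"}
edges = {"a": ["b", "c"], "b": ["a", "b"], "c": ["d"], "d": ["d"]}
owner = {"a": 0, "b": 1, "c": 0, "d": 0}
print(attractor(vertices, edges, owner, target={"d"}))   # {'a', 'c', 'd'}
```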
Amaury Pouly (IRIF, France)
Linear Dynamical Systems: Reachability and Invariant Generation (Slides)
Linear dynamical systems are ubiquitous: linear loops, linear recurrent sequences, linear differential equations, Markov chains or hybrid linear automata all fall under this general class. Such systems appear naturally when modelling software or cyber-physical systems and are therefore fundamental for automated verification. Reachability questions are among the most fundamental building blocks of verification; one such question is the termination of linear loops. In this tutorial, I will explain why reachability questions are generally hard even for simple systems, and undecidable for mild generalisations. In such challenging cases, a natural and interesting question is whether one can produce an invariant. Invariants are overapproximations of the reachable set that can typically be used as certificates for non-reachability. In this tutorial, I will focus on the particular class of polynomial invariants. I will explain some of the state-of-the-art results for this class and the challenges that remain.
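To make the notion concrete, here is a small check (an illustration only, not an invariant-generation procedure) that a candidate polynomial invariant is inductive for the linear loop (x, y) := (y, x + y), using sympy; the candidate generalises Cassini's identity for Fibonacci numbers.

```python
import sympy as sp

x, y = sp.symbols("x y")

def p(x, y):
    # Candidate polynomial invariant: p(x, y) = 0.
    return (y**2 - x*y - x**2)**2 - 1

# Inductive step: the loop body (x, y) := (y, x + y) preserves p.
print(sp.expand(p(y, x + y) - p(x, y)) == 0)   # True
# Initialisation: p holds for the initial values (x, y) = (0, 1).
print(p(0, 1) == 0)                             # True
```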
Renaud Vilmart (LMF & Inria)
How to Verify Quantum Processes (Slides)
Programming a quantum computer comes with a number of difficulties, to name a few: the cost of running a program on a quantum coprocessor, and the impossibility of debugging a program as one would conventionally (not to mention the probabilistic nature of the recovered result), especially since the required computational power has not yet been reached. All these considerations motivate the use of formal methods to reason about a program before running it. We will thus try to give an overview of the research in formal methods for quantum computing.
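As a minimal illustration of reasoning about a quantum process without executing it (a toy example, not from the talk), one can check a circuit identity such as H·Z·H = X by comparing unitaries. Practical verification approaches avoid explicit matrices, whose size grows exponentially with the number of qubits.

```python
import numpy as np

H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)   # Hadamard gate
Z = np.array([[1, 0], [0, -1]])                # phase flip
X = np.array([[0, 1], [1, 0]])                 # bit flip (NOT)

lhs = H @ Z @ H                                # the composed circuit
print(np.allclose(lhs, X))                     # True: the two circuits are equivalent
```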
Kim Guldstrand Larsen (Aalborg University, Denmark)
Synthesis and Learning of Safe and Optimal Strategies for Cyber-Physical Systems (Slides)
I will present the vision of the newly funded VILLUM Investigator Grant S4OS: to develop mathematically well-founded, scalable methods and tools that integrate machine learning and model-based verification techniques for the construction of safe and optimal Cyber-Physical Systems. In particular, I will present the most recent version of the tool UPPAAL STRATEGO (www.uppaal.org). Taking as input 1) a Euclidean Markov Decision Process E, 2) a safety constraint S and 3) an objective function O to be optimized, UPPAAL STRATEGO first computes a most permissive strategy ensuring S, using a Timed Automata abstraction of E and well-known symbolic model checking techniques. Next, applying various learning methods, sub-strategies (thus still safe) optimizing O are subsequently obtained. The talk will present the (Q-, M-, ..) learning methods developed and results on their convergence, the ability to learn and output small and explainable strategies using decision trees, and the approach for taking partial observability into account. The talk will also provide a demonstration of the new UPPAAL STRATEGO on the Framing Challenge of the Dagstuhl seminar “Analysis of Autonomous Mobile Collectives in Complex Physical Environments” (2019), as well as on-going applications of UPPAAL STRATEGO to water management, traffic-light control, energy-aware buildings, WORDLE, and others.
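The following toy sketch (not UPPAAL STRATEGO, and with an invented state space, actions and rewards) illustrates the two-phase idea: first compute, on a finite abstraction, the set of actions that keep the system inside the safe states, and then run Q-learning restricted to those safe actions to optimise an objective.

```python
import random

STATES  = [0, 1, 2, 3, 4]        # state 4 is an unsafe sink (toy model)
ACTIONS = ["stay", "step"]

def successor(s, a):
    return s if a == "stay" else min(s + 1, 4)

# Phase 1: keep only actions whose successor stays in the safe region {0,1,2,3}
# (one pass suffices here because every safe state can always "stay").
safe_states = {0, 1, 2, 3}
safe_actions = {s: {a for a in ACTIONS if successor(s, a) in safe_states}
                for s in safe_states}

# Phase 2: Q-learning over safe actions only, rewarding forward progress.
Q = {(s, a): 0.0 for s in safe_states for a in safe_actions[s]}
alpha, gamma, episodes = 0.5, 0.9, 500
for _ in range(episodes):
    s = 0
    for _ in range(20):
        a = random.choice(sorted(safe_actions[s]))   # explore, but only safely
        s2 = successor(s, a)
        reward = 1.0 if a == "step" else 0.0
        best_next = max(Q[(s2, b)] for b in safe_actions[s2])
        Q[(s, a)] += alpha * (reward + gamma * best_next - Q[(s, a)])
        s = s2

# The learned strategy is safe by construction and optimises the reward.
print({s: max(safe_actions[s], key=lambda a: Q[(s, a)]) for s in safe_states})
```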