Abstracts
Jasmin Blanchette (Technical University Munich)
Sharing the Burden of (Dis)proof with Nitpick, Quickcheck, and Sledgehammer
Isabelle/HOL owes much of its success to its ease of use and powerful
automation. Much of the automation is performed by external tools: The
metaprover Sledgehammer relies on resolution provers and SMT solvers
for its proof search, the counterexample generator Quickcheck uses the
ML compiler as a fast evaluator for ground formulas, and its rival
Nitpick is based on the model finder Kodkod, which performs a
reduction to SAT. This talk will explain how these three tools work
under the hood.
Chad E. Brown (Saarland University)
Using Satallax to Generate Proof Terms for Conjectures in Coq
Satallax is a higher-order automated theorem prover for simple type
theory. The particular form of simple type theory Satallax supports
is classical and includes extensionality and choice principles. Coq
has a very rich type theory based on the Calculus of (co)Inductive
Constructions. The simple types can be seen as a subset of Coq's
types, namely, those inductively generated from Prop, an inhabited
base type, and closed under function types. If one assumes a choice
operator and extensionality principles for these simple types in Coq,
then the logic of Satallax can be easily embedded into Coq. If one
works within this fragment of Coq, then a Coq file with conjectures
can be given as input to Satallax. For each conjecture, if Satallax
finds a proof automatically, then it will replace the conjecture with
a theorem and give the proof as a proof term using the Coq tactic
exact. In my talk I will discuss the embedding of simple type theory
into Coq, how Satallax searches for a proof, and how Satallax
generates a Coq proof term upon success.
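As a rough illustration of the fragment described above, here is a minimal
Coq sketch (illustrative only, not Satallax's actual embedding; all names
are hypothetical) postulating an inhabited base type together with
extensionality and a choice operator on inhabited types:

  (* The simple types: Prop, an inhabited base type i, and their closure
     under function types. *)
  Parameter i : Type.
  Parameter i0 : i.                      (* the base type is inhabited *)
  Axiom prop_ext : forall P Q : Prop, (P <-> Q) -> P = Q.
  Axiom fun_ext  : forall (A B : Type) (f g : A -> B),
    (forall x, f x = g x) -> f = g.
  (* A Hilbert-style choice operator, assumed only for inhabited types. *)
  Axiom eps      : forall A : Type, inhabited A -> (A -> Prop) -> A.
  Axiom eps_spec : forall (A : Type) (H : inhabited A) (P : A -> Prop),
    (exists x, P x) -> P (eps A H P).

A conjecture stated in this fragment and proved by Satallax can then be
turned into a theorem whose proof term is supplied via the exact tactic,
as described above.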
Jean-Christophe Filliâtre (University Paris-Sud)
Combining Interactive and Automated Theorem Proving in Why3
Why3 is a platform for deductive program verification. It features a
rich logical language with polymorphism, algebraic data types, and
inductive predicates. Why3 provides an extensive library of proof
task transformations that can be chained to produce a suitable input
for a large set of theorem provers, including SMT solvers and TPTP
provers, as well as interactive proof assistants. In this talk, we
show how this technology is used to combine interactive and automated
theorem provers in the context of program verification, though its use
is not limited to that setting.
Serdar Tasiran (Koç University)
Generalizing Reduction and Abstraction to Simplify Concurrent Programs: The QED Approach
A key difficulty in concurrent program verification is the large number of interleavings of fine-grain actions.
The typical correctness argument makes use of a separation of concerns: at each phase, one worries either about
interference due to concurrency or about the sequential behavior of a code block. We present a proof system, QED, that
builds on this intuition. In QED, we generalize Lipton's reduction and the idea of commutativity of actions to be
more semantic, less syntactic. Commutativity checks between actions of the program are converted to verification
conditions and are checked by the Z3 theorem prover. A key computational advantage is that QED proof steps have
locally-checkable antecedents. QED proofs apply reduction and abstraction iteratively and transform a program to
consist of larger-grain actions. We show that this approach greatly simplifies proofs of assertions, atomicity
and linearizability, even in optimistically concurrent programs.
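As a toy illustration of the commutativity checks mentioned above (a
generic example, not one from the talk): two atomic actions commute if
executing them in either order from any state yields the same state, and
this condition can be turned into a first-order verification condition for
Z3. For alpha: x := x + 1 and beta: y := y + 2, both orders turn the state
(x, y) into (x + 1, y + 2), so the condition holds; for alpha: x := x + 1
and beta: x := 2 * x, the condition
\[ \forall x.\ 2 \cdot (x + 1) = 2 \cdot x + 1 \]
fails (e.g., at x = 0), so the two actions do not commute.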
Rob Arthan (Lemma 1 Ltd.)
Now f is continuous (exercise!)
Abstract (PDF file)
Jesper Bengtson (IT University of Copenhagen)
A framework for higher-order separation logic in Coq
We present a comprehensive set of tactics for working with a shallow embedding
of a higher-order separation logic for a subset of Java in Coq. The tactics
make it possible to reason at a level of abstraction similar to pen-and-paper
separation-logic proof outlines. In particular, the tactics allow the user to
reason in the embedded logic rather than in the concrete model, where the
stacks and heaps are exposed. The development is generic in the choice of heap
model, and most of the development is also independent of the choice of
programming language.
This is joint work with Jonas Braband Jensen and Lars Birkedal.
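As a generic example of the targeted level of abstraction (not code from
the development itself): a pen-and-paper separation-logic outline for
swapping two object fields is written against points-to assertions and the
separating conjunction rather than against concrete stacks and heaps,
\[ \{\, x.f \mapsto a \;*\; y.f \mapsto b \,\}\ \ t := x.f;\ x.f := y.f;\ y.f := t\ \ \{\, x.f \mapsto b \;*\; y.f \mapsto a \,\} \]
and the tactics described above aim to let such outlines be carried out in
Coq at essentially this granularity.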
Jasmin Blanchette (Technical University Munich)
TFF1: The TPTP Typed First-Order Form with Rank-1 Polymorphism
The TPTP World is a well-established infrastructure for automatic theorem
provers. It defines several concrete syntaxes, notably an untyped first-order
form (FOF) and a typed first-order form (TFF0), that have become de facto
standards. This talk introduces the TFF1 format, an extension of TFF0 with
rank-1 polymorphism. It presents its syntax, typing rules, and semantics, as
well as a translation to TFF0. The format is designed to be easy to process by
existing reasoning tools that support ML-style polymorphism. It opens the door
to useful middleware, such as monomorphizers and other translation tools.
Ultimately, the hope is that TFF1 will be implemented in popular automatic
theorem provers.
This is joint work with Andrei Paskevich.
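As a small, generic illustration of rank-1 (prenex, ML-style)
polymorphism, independent of the concrete TFF1 syntax: a polymorphic
symbol carries a single block of type variables at the front of its
signature, for instance
\[ \mathsf{cons} : \forall \alpha.\ \alpha \times \mathsf{list}(\alpha) \to \mathsf{list}(\alpha), \]
and a monomorphizing translation to TFF0 replaces such a symbol by the
finitely many ground instances a given problem actually needs, such as
\( \mathsf{cons}_{\mathsf{nat}} \) and \( \mathsf{cons}_{\mathsf{bool}} \).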
Maria Paola Bonacina (Dipartimento di Informatica, Università degli Studi di Verona)
Interpolation for resolution, superposition and DPLL(Gamma+T)
Given two inconsistent formulae, a (reverse) interpolant is
a formula implied by one, inconsistent with the other and
only containing symbols they share. Interpolants are used in
program analysis and synthesis to compute over-approximations
of images, refine abstractions or generate invariants. We study
interpolation for the theorem proving method DPLL(Gamma+T),
which integrates an inference system with resolution and
superposition (Gamma) into a DPLL(T)-based SMT solver.
Interpolants are computed inductively from partial interpolants.
We discuss how approaches based on tracking symbols, to ensure
that only shared ones appear in the interpolant, do not extend
from ground to general refutations. Then we present a novel
two-stage approach that yields the first complete interpolation
system for resolution and superposition without restrictions.
Its combination with an interpolation system for DPLL(T) gives
an interpolation system for DPLL(Gamma+T).
Joint work with Moa Johansson.
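To recall the definition above on a trivial propositional example: if
\( A = p \wedge q \) and \( B = \neg q \wedge r \), then \( A \wedge B \) is
unsatisfiable, the only shared symbol is \( q \), and \( I = q \) is a
reverse interpolant, since
\[ A \models I, \qquad I \wedge B \models \bot, \qquad \mathit{symbols}(I) \subseteq \mathit{symbols}(A) \cap \mathit{symbols}(B). \]
The interpolation systems discussed in the talk compute such an \( I \)
inductively from partial interpolants.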
Pierre Boutillier (PPS, University Paris Diderot)
A dependent pattern matching compiler that preserves programmers' intuition
Dependent pattern matching relies either on a subtle unification
procedure or on heavily annotated terms. Explicitly writing the
annotations is a nightmare for users. There does exist a translation
from unification-based systems to annotation-based systems; yet it
makes heavy use of heterogeneous equalities and is therefore not
axiom-free. Instead of using equalities, I suggest using
commutative cuts, together with pattern matching in type annotations.
I will formally delimit the language upon which the transformation can
be done automatically. Furthermore I will show that reduction commutes
with the latter compilation scheme.
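For readers unfamiliar with the annotations in question, here is a
standard Coq example (generic, not taken from the talk): extracting the
head of a length-indexed vector needs an explicit in/return annotation,
which itself contains a pattern matching inside the type annotation:

  Inductive vec (A : Type) : nat -> Type :=
  | vnil  : vec A 0
  | vcons : forall n : nat, A -> vec A n -> vec A (S n).

  Definition head (A : Type) (n : nat) (v : vec A (S n)) : A :=
    match v in vec _ m return (match m with 0 => unit | S _ => A end) with
    | vnil _        => tt
    | vcons _ _ x _ => x
    end.

Unification-based systems let the user omit such annotations entirely; the
compilation scheme above aims to reconstruct them automatically, using
commutative cuts rather than heterogeneous equalities.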
Predrag Janicic (University of Belgrade)
Automated Synthesis of Geometric Construction Procedures
Geometric constructions with ruler and compass are, essentially,
linear procedures, and can be subject to automation --- both in
the synthesis and in the verification phase. We focus on triangle
construction problems, typically given by short and simple
specifications. Our analysis of several families of such problems
shows that the needed underlying geometry knowledge is relatively
small and, if properly represented, enables a simple search process.
Our synthesis mechanism uses, in a controlled manner, definitions,
lemmas, and primitive construction steps organized in layers,
and leads to solutions of most of the problems from the considered
families. The generated construction procedures are verified by
algebraic geometry theorem provers, and we also plan to develop
a formal verification mechanism within a proof assistant.
This is joint work with Vesna Marinkovic.
Tuomas Launiainen (Helsinki University of Technology)
Building a modern concolic tester
Recent advances in SAT solvers and their extensions, SMT
solvers, have made it possible to build highly automated
software testing tools. Such tools use SMT solvers and instrumented
execution to compute program inputs that will exercise unexplored
program paths. The result is an automatically generated high-coverage
test suite. This talk covers a concolic tester developed at Aalto
University, and is joint work with Keijo Heljanko and Kari Kähkönen.
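A standard illustration of the underlying idea (not an example from the
tool itself): suppose execution reaches a branch guarded by the condition
\( x \cdot x = 10000 \). A random input almost certainly takes the else
branch, and instrumented (concolic) execution records the path constraint
\( x \cdot x \neq 10000 \); negating it and handing \( x \cdot x = 10000 \)
to an SMT solver yields, e.g., \( x = 100 \), an input that drives the next
execution down the previously unexplored branch.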
David Pereira (University of Porto)
Deciding Regular Expression (In-)Equivalence in Coq
This work is about a mechanically verified implementation of an
algorithm for deciding regular expression (in-)equivalence within the
Coq proof assistant. This algorithm is a version of a functional
algorithm proposed by Almeida et al. which decides regular
expression equivalence through an iterated process of testing the
equivalence of their partial derivatives. The main point of this
certified implementation, with respect to existing work, is its handling of
in-equivalence of regular expressions. This handling is based on a
refutation step that improves, at the formalization level, the overall
efficiency of the underlying decision procedure by detecting conditions
that enforce the in-equivalence of regular expressions at early stages
of the computation.
This is joint work with Nelma Moreira and Simão Melo de Sousa.
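For context, the partial derivatives in question are the standard ones of
Antimirov (a generic recap, not the formal development itself):
\( \partial_a(r) \) is a finite set of expressions whose languages jointly
denote the left quotient of \( L(r) \) by the letter \( a \), computed by
rules such as
\[ \partial_a(a) = \{\varepsilon\}, \quad \partial_a(b) = \emptyset \ (b \neq a), \quad \partial_a(r + s) = \partial_a(r) \cup \partial_a(s), \quad \partial_a(r^*) = \partial_a(r) \cdot r^*, \]
\[ \partial_a(r \cdot s) = \partial_a(r) \cdot s \cup \partial_a(s) \ \text{if } \varepsilon \in L(r), \qquad \partial_a(r \cdot s) = \partial_a(r) \cdot s \ \text{otherwise}. \]
Two expressions are equivalent iff, iterating derivation, every reachable
pair of derivative sets agrees on nullability; a pair that disagrees
witnesses in-equivalence, which is what the refutation step detects early.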
Corneliu Popeea (Technical University Munich)
Synthesizing Software Verifiers from Proof Rules
Automatically generated tools can significantly improve programmer
productivity. For example, parsers and dataflow analyzers can be
automatically generated from declarative specifications in the form of
grammars, which tremendously simplifies the task of implementing a
compiler.
In this paper, we present a method for the automatic synthesis of
software verification tools. Our synthesis procedure takes as input a
description of the employed proof rule, e.g., program safety checking
via inductive invariants, and produces a tool that automatically
discovers the auxiliary assertions required by the proof rule, e.g.,
inductive loop invariants and procedure summaries.
We show how our method synthesizes automatic safety and liveness
verifiers for programs with procedures, multi-threaded programs, and
functional programs. Our experimental comparison of the resulting
verifiers with existing state-of-the-art verification tools confirms
the practicality of the approach.
This is joint work with Sergey Grebenshchikov, Nuno Lopes and Andrey
Rybalchenko at the Technical University Munich.
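As an indication of what a proof-rule description may look like (a
schematic rendering, not necessarily the exact input format of the
synthesis procedure): safety via inductive invariants for a system with
initial condition init, transition relation next, and error states err can
be written as three clauses over an unknown assertion inv,
\[ \mathit{init}(s) \rightarrow \mathit{inv}(s), \qquad \mathit{inv}(s) \wedge \mathit{next}(s, s') \rightarrow \mathit{inv}(s'), \qquad \mathit{inv}(s) \wedge \mathit{err}(s) \rightarrow \mathit{false}, \]
and the synthesized verifier's job is to discover a solution for inv (and,
analogously, the other auxiliary assertions mentioned above, such as
procedure summaries).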
Jean-Francois Raskin (The Free University of Brussels)
Strategy synthesis for quantitative games
Stefan Ratschan (Academy of Sciences of the Czech Republic)
Physical/Software Systems and their Formal Safety Verification
Nowadays, software is more and more integrated into physical systems.
However, the field of software verification usually only addresses
system models based on discrete time (therefore being ill-suited for
taking into account physical laws that are usually formulated in
continuous time), and the field of hybrid systems usually only
addresses system models whose discrete part is limited to finitely many
states (therefore having severe restrictions in modeling the complex
data structures prevalent in computer software).
In our talk we will present a system model whose state space
is formed by the Cartesian product of n-dimensional real space
(evolving in continuous time according to ordinary differential
equations) and k data types (e.g., integers, arrays, and lists).
Moreover, we provide an algorithm for the formal safety verification of
such systems.
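Schematically (notation mine, not necessarily that of the talk), the state
space of such a system is
\[ S = \mathbb{R}^n \times D_1 \times \cdots \times D_k, \]
where the real-valued part evolves in continuous time according to
ordinary differential equations while the data part (integers, arrays,
lists, ...) is updated by discrete program steps; safety verification then
asks whether some trajectory starting in a given set of initial states can
reach a given set of unsafe states.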
Philipp Rümmer (Uppsala University)
Software Verification Using k-Induction
We present combined-case k-induction, a novel technique for verifying
software programs. This technique draws on the strengths of the
classical inductive-invariant method and a recent application of
k-induction to program verification. We present a new k-induction rule
that takes an unstructured, reducible control flow graph (CFG), a
natural loop occurring in the CFG, and a positive integer k, and
constructs a single CFG in which the given loop is eliminated via an
unwinding proportional to k. Experiments, using two implementations of
the k-induction method and a large set of benchmarks, demonstrate that
our k-induction technique frequently allows program verification to
succeed using significantly weaker loop invariants than are required
with the standard inductive invariant approach.
This talk is based on a paper presented at the 18th International
Static Analysis Symposium.
Joint work with Alastair F. Donaldson, Leopold Haller, and Daniel Kroening.
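For reference, the classical k-induction principle for a transition
system with initial states I, transition relation T, and property P (the
combined-case rule of the talk adapts this idea to loops in an
unstructured CFG) consists of two premises:
\[ I(s_1) \wedge T(s_1, s_2) \wedge \cdots \wedge T(s_{k-1}, s_k) \models P(s_1) \wedge \cdots \wedge P(s_k) \]
\[ P(s_1) \wedge T(s_1, s_2) \wedge \cdots \wedge P(s_k) \wedge T(s_k, s_{k+1}) \models P(s_{k+1}) \]
Together they imply that P holds in all reachable states; a larger k can
compensate for a weaker candidate invariant, which is the effect observed
in the experiments above.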
Claudio Sacerdoti (University of Bologna)
Unification in the Matita ITP
Higher-order unification plays a fundamental role in the interpretation of
the user's input. This input is in general incomplete, and the system is
required to infer a type for it. Since the logic of Matita allows types to
depend on terms, inferring types also means inferring terms, that is,
content such as functions, their inputs, or even proofs. This makes type
inference a powerful form of automation. The higher-order unification
algorithm is in charge of comparing types and terms up to computation,
instantiating unification variables so that two apparently different
expressions evaluate to a common form.
In this talk we detail the algorithm implemented in the Matita
interactive theorem prover. In particular, we describe how the user can
teach the unification engine to exploit the content of the library of
already formalized knowledge, how the algorithm handles expensive
reductions, and how the user can drive the algorithm toward a
particular solution of an intrinsically ambiguous higher-order
unification problem.
This is joint work with Enrico Tassi.
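A standard example of the intrinsic ambiguity mentioned at the end
(generic, not specific to Matita): the higher-order unification problem
\[ ?F\ x \;\stackrel{?}{=}\; x + x \]
admits several incomparable solutions, among them
\[ ?F := \lambda y.\ y + y, \qquad ?F := \lambda y.\ x + y, \qquad ?F := \lambda y.\ y + x, \qquad ?F := \lambda y.\ x + x, \]
so a unification engine must commit to one of them, and the talk discusses
how the user can steer that choice.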
Julian Samborski-Forlese (IMDEA)
Translation of RLTL into Büchi Automata
We present two efficient translations of Regular Linear Temporal Logic (RLTL) into automata on infinite words. RLTL is a temporal logic
that fuses Linear Temporal Logic (LTL) with regular expressions, extending its expressive power to all omega-regular languages. We first
present a novel bottom-up translation from RLTL into alternating parity automata of linear size that require only colors 0, 1 and 2, and
that enjoy the stratified internal structure of hesitant automata. Our translation is defined inductively for every operator, and does
not require an upfront transformation of the expression into a normal form. Our construction builds at every step two automata: one
equivalent to the formula and another to its complement. Inspired by this construction, we extend RLTL with new operators, including
universal sequential composition, that enrich the logic with duality laws and negation normal forms. Finally, we show a ranking
translation of the resulting alternating automaton into non-deterministic Büchi automata that is optimal for the LTL fragment of the
logic.
Alejandro Sánchez (IMDEA)
A Decision Procedure for Skiplists with Unbounded Height and Length
We present a theory of skiplists with a decidable quantifier free
satisfiability problem. This theory can express skiplist memory
layouts of arbitrary unbounded height and with an arbitrary number of
elements.
A skiplist is an imperative data structure used to implement sets by
maintaining several layers of ordered singly-linked lists in
memory. The elements in each layer are a subset of the elements in
lower layers, where all pointers forming all layers respect the
order. The performance of the skiplist datatype is comparable to that
of balanced binary trees, while the implementation is much more
straightforward.
In previous work we showed that the theory of skiplists with a fixed
height is decidable. In this work we prove that the satisfiability
problem for the theory of skiplists with arbitrary height is decidable.
The proof involves
a non-trivial model theoretical argument that implies that the theory
enjoys the finite-model property.
In turn, this theory can be used to verify practical implementations
of the skiplist datatype written in a C-like programming language.
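In symbols (schematic notation, not the formal theory of the talk):
writing \( \mathit{level}_i \) for the set of nodes linked at level \( i \)
of a skiplist of height \( H \), the shape constraint captured is, roughly,
\[ \mathit{level}_H \subseteq \cdots \subseteq \mathit{level}_1 \subseteq \mathit{level}_0, \]
with every level linked in increasing key order, and with both \( H \) and
the number of elements left unbounded.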
Fu Song (LIAFA, University Paris Diderot)
Efficient CTL Model-Checking for Pushdown Systems
Pushdown systems (PDSs) are well suited to modeling sequential programs with
(possibly recursive) procedure calls. Therefore, it is important to have
efficient model checking algorithms for PDSs. We consider CTL model
checking for PDSs. We first consider the "standard" CTL model checking
problem, where whether a configuration of a PDS satisfies an atomic
proposition depends only on the control state of the configuration. We also
consider CTL model checking with regular valuations, where the set of
configurations in which an atomic proposition holds is a regular language.
We reduce these
problems to the emptiness problem for Alternating Büchi Pushdown Systems,
and we give an algorithm to solve this emptiness problem. Our algorithms
are more efficient than the existing algorithms in the literature for CTL
model checking of PDSs. We implemented our techniques in a tool, and we
applied it to different case studies. Our results are encouraging. In
particular, we were able to find bugs in Linux source code.
This is joint work with Tayssir Touili.
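To recall why pushdown systems match procedural programs (a textbook
encoding, not specific to this work): a PDS has rules of the form
\( \langle p, \gamma \rangle \hookrightarrow \langle p', \omega \rangle \),
rewriting the top stack symbol \( \gamma \) into a word \( \omega \) while
moving from control state \( p \) to \( p' \). Taking program points as
stack symbols, an intraprocedural step, a procedure call, and a return are
modeled by rules such as
\[ \langle p, n \rangle \hookrightarrow \langle p, n' \rangle, \qquad \langle p, n_{\mathrm{call}} \rangle \hookrightarrow \langle p, \mathit{entry}_f\ n_{\mathrm{ret}} \rangle, \qquad \langle p, \mathit{exit}_f \rangle \hookrightarrow \langle p, \varepsilon \rangle, \]
so the stack records pending return points and recursion corresponds to
unbounded stack growth.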