1 Introduction

Solvers for Satisfiability Modulo Theories (SMT) are currently at the heart of several formal method tools such as extended static checkers [3, 16], bounded and unbounded model-checkers [1, 14, 21], symbolic execution tools [25], and program verification environments [10, 49]. The main functionality of an SMT solver is to determine if a given input formula is satisfiable in the logic it supports—typically some fragment of first-order logic with certain built-in theories such as integer or real arithmetic, the theory of arrays and so on. Most SMT solvers combine advanced general-purpose propositional reasoning (SAT) engines with sophisticated implementations of special-purpose reasoning algorithms for built-in theories. They are rather complex and large tools, with codebases often between 50k and 100k lines of C++. As a consequence, the correctness of their results is a long-standing concern. In an era of tour-de-force verifications of complex systems [24, 27], noteworthy efforts have been made to apply formal verification techniques to algorithms for SAT and SMT [18, 29]. Verifying actual solver code is, however, still extremely challenging [28] due to the complexity and size of modern SMT solvers.

One approach to addressing this problem is for SMT solvers to emit independently checkable evidence of the results they report. For formulas reported as unsatisfiable, this evidence takes the form of a refutation proof. Since the relevant proof checking algorithms and implementations are much simpler than those for SMT solving, checking a proof provides a more trustworthy confirmation of the solver’s result. Interactive theorem proving can benefit as well from proof-producing SMT solvers, as shown recently in a number of works [11, 12]. Complex subgoals for the interactive prover can be discharged by the SMT solver, and the proof it returns can be checked or reconstructed to confirm the result without trusting the solver. Certain advanced applications also strongly depend on proof production. For example, in the Fine system, Chen et al. translate proofs produced by the Z3 solver [32] into their internal proof language, to support certified compilation of richly typed source programs [13]. For Fine, even a completely verified SMT solver would not be enough since the proofs themselves are actually needed by the compiler. Besides Z3, other examples of proof-producing SMT solvers are clsat, cvc3, Fx7, and veriT [6, 15, 31, 38].

A significant enabler for the success of SMT has been the SMT-LIB standard input language [5], which is supported by most SMT solvers. So far, no standard proof format has emerged. This is, however, no accident. Because of the ever-increasing number of logical theories supported by SMT solvers, the variety of deductive systems used to describe the various solving algorithms, and the relatively young age of the SMT field, designing a single set of axioms and inference rules that would be a good target for all solvers does not appear to be practically feasible. We believe that a more viable route to a common standard is the adoption of a common meta-logic in which SMT solver implementors can describe the particular proof systems used by their solver. With this approach, solvers need to agree just on the meta-language used to describe their proof systems. The adoption of a sufficiently rich meta-logic prevents a proliferation of individual proof languages, while allowing for a variety of proof systems. Also, a single meta-level tool suffices to check proofs efficiently on any proof system described in the meta-language. A fast proof checker can be implemented once and for all for the meta-language, using previously developed optimizations (e.g., [43, 44, 50]). Moreover, different proof systems can be more easily compared, since they are expressed in the same meta-level syntax. This may help identify in the future a common core proof system for some significant class of SMT logics and solvers.

In this paper we propose and describe a meta-logic, called LFSC, for “Logical Framework with Side Conditions”, which we have developed explicitly with the goal of supporting the description of several proof systems for SMT, and enabling the implementation of very fast proof checkers. In LFSC, solver implementors can describe their proof rules using a compact declarative notation which also allows the use of computational side conditions. These conditions, expressed in a small functional programming language, enable some parts of a proof to be established by computation. The flexibility of LFSC facilitates the design of proof systems that reflect closely the sort of high-performance inferences made by SMT solvers. The side condition feature offers a continuum of possible LFSC encodings of proof systems, from completely declarative at one extreme, using rules with no side conditions, to completely computational at the other, using a single rule with a huge side condition. We argue that supporting this continuum is a major strength of LFSC. Solver implementors have the freedom to choose the amount of computational inference when devising proof systems for their solver. This freedom cannot be abused since any decision is explicitly recorded in the LFSC formalization and becomes part of the proof system’s trusted computing base. Moreover, the ability to create, with relatively small effort, different LFSC proof systems for the same solver provides an additional level of trust even for proof systems with a substantial computational component—since at least during the development phase one could also produce proofs in a more declarative, if less efficient, proof system.

We have put considerable effort into developing a full-blown, highly efficient proof checker for LFSC proofs. Instead of developing a dedicated LFSC checker, one could imagine embedding LFSC in declarative languages such as Maude or Haskell. While the advantages of prototyping symbolic tools in these languages are well known, in our experience their performance lags too far behind carefully engineered imperative code for high-performance proof checking. This is especially the case for the sort of proofs generated by SMT solvers, which can easily reach sizes measured in megabytes or even gigabytes. Based on previous experimental results by others, a similar argument could be made against the use of interactive theorem provers (such as Isabelle [37] or Coq [8]), which have a very small trusted core, for SMT-proof checking. By allowing the use of computational side conditions and relying on a dedicated proof checker, our solution seeks to strike a pragmatic compromise between trustworthiness and efficiency.

We introduce the LFSC language in Sect. 2 and then describe it formally in Sect. 3. In Sect. 4 we provide an overview of how one can encode in LFSC a variety of proof systems that closely reflect the sort of reasoning performed by modern SMT solvers. Building on the general approaches presented in this section, we then focus on a couple of SMT theories in Sect. 5, with the goal of giving a sense of how to use LFSC’s features to produce compact proofs for theory-specific lemmas. We then present empirical results for our LFSC proof checker which show good performance relative to solver-execution times (Sect. 6). Finally, Sect. 7 touches upon a new, more advanced application of LFSC: the generation of certified interpolants from proofs.

The present paper builds on previously published workshop papers [38–40, 45] but it expands considerably on the material presented there. A preliminary release of our LFSC proof checker, together with the proof systems and the benchmark data discussed here, is available online.

1.1 Related work

SMT proofs produced by the solver Fx7 for the AUFLIA logic of SMT-LIB have been shown to be efficiently checkable by an external checker [31]. Other approaches [17, 19] have advocated the use of interactive theorem provers to certify proofs produced by SMT solvers. The advantages of using those theorem provers are well known; in particular, their trusted core contains only a base logic and a small fixed number of proof rules. While recent works [2, 11] have improved proof checking times, the performance of these tools still lags behind C/C++ checkers carefully engineered for fast checking of very large proofs. Besson et al. have recently proposed a similar meta-linguistic approach, though without the intention of providing a formal semantics for user-defined proof systems [9]. We are in discussion with the authors of that work on how to combine LFSC with the approach they advocate.

1.2 Notational conventions

This paper assumes some familiarity with the basics of type theory and of automated theorem proving, and will adhere in general to the notational conventions in the field. LFSC is a direct extension of LF [22], a type-theoretic logical framework based on the λΠ calculus, in turn an extension of the simply typed λ-calculus. The λΠ calculus has three levels of entities: values; types, understood as collections of values; and kinds, families of types. Its main feature is the support for dependent types, which are types parametrized by values. Informally speaking, if τ 2[x] is a dependent type with value parameter x, and τ 1 is a non-dependent type, the expression Πx:τ 1. τ 2[x] denotes in the calculus the type of functions that return a value of type τ 2[v] for each value v of type τ 1 for x. When τ 2 is itself a non-dependent type, the type Πx:τ 1. τ 2 is just the arrow type τ 1 → τ 2 of the simply typed λ-calculus.

The current concrete syntax of LFSC is based on Lisp-style S-expressions, with all operators in prefix format. For improved readability, we will often write LFSC expressions in abstract syntax instead. We will write concrete syntax expressions in typewriter font. In abstract syntax expressions, we will write variables and meta-variables in italics font, and constants in sans serif font. Predefined keywords in the LFSC language will be in bold sans serif font.

2 Introducing LF with side conditions

LFSC is based on the Edinburgh Logical Framework (LF) [22]. LF has been used extensively as a meta-language for encoding deductive systems, including logics and the semantics of programming languages, as well as in many other applications [7, 26, 33]. In LF, proof systems can be encoded as signatures, which are collections of typing declarations. Each proof rule is a constant symbol whose type represents the inferences allowed by the rule. For example, the following transitivity rule for equality

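$$\frac{t_1 = t_2 \qquad t_2 = t_3}{t_1 = t_3} $$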
can be encoded in LF as a constant eq_trans of type

$${\varPi} t_1{:}\textsf{term}.\, {\varPi} t_2{:}\textsf{term}.\, {\varPi} t_3{:}\textsf{term}.\, {\varPi} u_1{:}\textsf{holds}\,(t_1 = t_2).\, {\varPi} u_2{:}\textsf{holds}\,(t_2 = t_3).\, \textsf{holds}\,(t_1 = t_3). $$

The encoding can be understood intuitively as saying: for any terms t 1,t 2 and t 3, and any proofs u 1 of the equality t 1=t 2 and u 2 of t 2=t 3, eq_trans constructs a proof of t 1=t 3. In the concrete, Lisp-style syntax of LFSC, the declaration of the rule would look like

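A sketch of such a declaration, consistent with the explanation below, is:

  (declare eq_trans
    (! t1 term (! t2 term (! t3 term
    (! u1 (holds (= t1 t2))
    (! u2 (holds (= t2 t3))
       (holds (= t1 t3))))))))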

where ! represents LF’s Π binder, for the dependent function space, term and holds are previously declared type constructors, and = is a previously declared constant of type Πt 1:term. Πt 2:term. term (i.e., term → term → term).

Now, pure LF is not well suited for encoding large proofs from SMT solvers, due to the computational nature of many SMT inferences. For example, consider trying to prove the following simple statement in a logic of arithmetic:

$$ (t_1 + (t_2 + ( \ldots + t_n) \ldots)) - (t_{i_1} + (t_{i_2} + ( \ldots + t_{i_n}) \ldots)) = 0 $$
(1)

where \(t_{i_{1}} \ldots t_{i_{n}}\) is a permutation of the terms t 1,…,t n . A purely declarative proof would need Ω(n log n) applications of an associativity and a commutativity rule for +, to bring opposite terms together before they can be pairwise reduced to 0.

Producing, and checking, purely declarative proofs in SMT, where input formulas alone are often measured in megabytes, is infeasible in many cases. To address this problem, LFSC extends LF by supporting the definition of rules with side conditions, computational checks written in a small but expressive first-order functional language. The language has built-in types for arbitrary precision integers and rationals, inductive datatypes, ML-style pattern matching, recursion, and a very restricted set of imperative features. When checking the application of an inference rule with a side condition, an LFSC checker computes actual parameters for the side condition and executes its code. If the side condition fails, because it is not satisfied or because of an exception caused by a pattern-matching failure, the LFSC checker rejects the rule application. In LFSC, a proof of statement (1) could be given by a single application of a rule of the form:

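One plausible rendering of such a rule, with the side condition attached to the binder via the ^ marker explained next, is the following sketch:

  (declare eq_zero
    (! t term (^ (normalize t) 0)   ; side condition: (normalize t) must evaluate to 0
       (holds (= t 0))))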

where normalize is the name of a separately defined function in the side condition language that takes an arithmetic term and returns a normal form for it. The expression (^ (normalize t) 0) defines the side condition of the eq_zero rule, with the condition succeeding if and only if the expression (normalize t) evaluates to 0. We will see more about this sort of normalization rules in Sect. 5.2.

3 The LFSC language and its formal semantics

In this section, we introduce the LFSC language in abstract syntax, by defining its formal semantics in the form of a typing relation for terms and types, and providing an operational semantics for the side-condition language. Well-typed value, type, kind and side-condition expressions are drawn from the syntactical categories defined in Fig. 1 in BNF format. The kinds \(\textbf {\textsf {type}}^{\mathrm{c}}\) and \(\textbf {\textsf {type}}\) are used to distinguish types with side conditions in them from types without.

Fig. 1: Main syntactical categories of LFSC. Letter c denotes term constants (including rational ones), x denotes term variables, k denotes type constants. The square brackets are grammar meta-symbols enclosing optional subexpressions

Program expressions s are used in side condition code. There, we also make use of the syntactic sugar \((\textbf {\textsf {do}}\ s_{1} \cdots s_{n}\ s)\) for the expression \((\textbf {\textsf {let}}\ x_{1}\ s_{1} \ \cdots\ \textbf {\textsf {let}}\ x_{n}\ s_{n} \ s)\) where x 1 through x n are fresh variables. Side condition programs in LFSC are monomorphic, simply typed, first-order, recursive functions with pattern matching, inductive data types and two built-in basic types: infinite precision integers and rationals. In practice, our implementation is a little more permissive, allowing side-condition code to pattern-match also over dependently typed data. For simplicity, we restrict our attention here to the formalization for simple types only.

The operational semantics of the main constructs in the side condition language can be described informally as follows. Expressions of the form (c s 1 ⋯ s n+1) are applications of either term constants or program constants (i.e., declared functions) to arguments. In the former case, the application constructs a new value; in the latter, it invokes a program. The expressions \((\textbf {\textsf {match}}\ s\ (p_{1}\ s_{1}) \cdots (p_{n+1}\ s_{n+1}))\) and \((\textbf {\textsf {let}}\ x\ s_{1}\ s_{2})\) behave exactly as their corresponding matching and let-binding constructs in ML-like languages. The expression \((\textbf {\textsf {markvar}}\ s)\) evaluates to the value of s if this value is a variable. In that case, the expression also has the side effect of toggling a Boolean mark on that variable. The expression \((\textbf {\textsf {ifmarked}}\ s_{1}\ s_{2}\ s_{3})\) evaluates to the value of s 2 or of s 3 depending on whether s 1 evaluates to a marked or an unmarked variable. Both \(\textbf {\textsf {markvar}}\) and \(\textbf {\textsf {ifmarked}}\) raise a failure exception if their arguments do not evaluate to a variable. The expression \((\textbf {\textsf {fail}}\ \tau)\) always raises that exception, for any type τ.
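For illustration, a small (hypothetical) side condition program in this concrete syntax, declaring a unary-number datatype and a recursive addition function over it, could be written along the following lines; the datatype and function names are invented for this example:

  (declare nat type)
  (declare zero nat)
  (declare succ (! n nat nat))

  ; plus m n computes the sum of two unary numbers by recursion on m
  (program plus ((m nat) (n nat)) nat
    (match m
      (zero n)
      ((succ k) (succ (plus k n)))))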

The typing rules for terms and types, given in Fig. 2, are based on the rules of canonical forms LF [47]. They include judgments of the form Γ ⊢ e ⇐ T for checking that expression e has type/kind T in context Γ, where Γ, e, and T are inputs to the judgment; and judgments of the form Γ ⊢ e ⇒ T for computing a type/kind T for expression e in context Γ, where Γ and e are inputs and T is output. The contexts Γ map variables and constants to types or kinds, and map constants f for side condition functions to (possibly recursive) definitions of the form (x 1:τ 1 ⋯ x n :τ n ):τ = s, where s is a term with free variables x 1,…,x n , the function’s formal parameters.

Fig. 2: Bidirectional typing rules and context rules for LFSC. Letter y denotes variables and constants declared in context Γ, letter T denotes types or kinds. Letter ε denotes the state in which every variable is unmarked

The three top rules of Fig. 2 define well-formed contexts. The other rules, read from conclusion to premises, induce deterministic type/kind checking and type computation algorithms. They work up to a standard definitional equality, namely βη-equivalence; and use standard notation for capture-avoiding substitution ([t/x]T is the result of simultaneously replacing every free occurrence of x in T by t, and renaming any bound variable in T that occurs free in t). Side conditions occur in type expressions of the form Πy:τ 1{s t}. τ 2, constructing types of kind \(\textbf {\textsf {type}}^{\mathrm{c}}\). The premise of the last rule, defining the well-typedness of applications involving such types, contains a judgment of the form Δ ⊢ σ; s ⇓ s′; σ′ where Δ is a context consisting only of definitions for side condition functions, and σ and σ′ are states, i.e., mappings from variables to their mark. Such a judgment states that, under the context Δ, evaluating the expression s in state σ results in the expression s′ and state σ′. In the application rule, Δ is defined as |Γ|, the collection of all the function definitions in Γ. Note that the rules of Fig. 2 enforce that bound variables do not have types with side conditions in them—by requiring those types to be of kind \(\textbf {\textsf {type}}\), as opposed to kind \(\textbf {\textsf {type}}^{\mathrm{c}}\). An additional requirement is not formalized in the figure. Suppose Γ declares a constant d with type Πx 1:τ 1.⋯Πx n :τ n . τ of kind \(\textbf {\textsf {type}}^{\mathrm{c}}\), where τ is either k or (k t 1 ⋯ t m ). Then neither k nor an application of k may be used as the domain of a Π-type. This is to ensure that applications requiring side condition checks never appear in types. Similar typing rules, included in the appendix, define well-typedness for side condition terms, in a fairly standard way.

A big-step operational semantics of side condition programs is provided in Fig. 3 using judgments of the form Δ ⊢ σ; s ⇓ s′; σ′. For brevity, we elide “Δ⊢” from the rules when Δ is unused. Note that side condition programs can contain unbound variables, which evaluate to themselves. States σ (updated using the notation σ[x ↦ v]) map such variables to the value of their Boolean mark. If no rule applies when running a side condition, program evaluation and hence checking of types with side conditions fails. This also happens when evaluating the fail construct \((\textbf {\textsf {fail}}\ \tau)\), or when pattern matching fails. Currently, we do not enforce termination of side condition programs, nor do we attempt to provide facilities to reason formally about the behavior of such programs.

Fig. 3: Operational semantics of side condition programs. We omit the straightforward rules for the rational operators − and +

Our implementation of LFSC supports the use of the wildcard symbol _ in place of an actual argument of an application when the value of this argument is determined by the types of later arguments. This feature, which is analogous to implicit arguments in theorem provers such as Coq and programming languages such as Scala, is crucial to avoid bloating proofs with redundant information. In a similar vein, the concrete syntax allows a form of lambda abstraction that does not annotate the bound variable with its type when that type can be computed efficiently from context.

We conclude by pointing out that LFSC’s type system is a refinement of LF’s, in the following sense. Let ∥τ∥ denote the type obtained from τ by erasing any side condition constraints from the Π-abstractions in τ; let \(\Vert \textbf {\textsf {type}}^{\mathrm{c}}\Vert \) be \(\textbf {\textsf {type}}\); and extend this notation to contexts in the natural way. Then, we have the following.

Theorem 1

For all Γ, t and τ, if Γ ⊢ t : τ in LFSC, then ∥Γ∥ ⊢ t : ∥τ∥ in LF.

Proof

By a straightforward induction on LFSC typing derivations. □

4 Encoding propositional and core SMT reasoning in LFSC

In this section and the next, we illustrate the power and flexibility of LFSC for SMT proof checking by discussing a number of proof systems relevant to SMT, and their possible encodings in LFSC. Our goal is not to be exhaustive, but to provide representative examples of how LFSC allows one to encode a variety of logics and proof rules while paying attention to proof checking performance issues. Section 6 focuses on the latter by reporting on our initial experimental results.

Roughly speaking, proofs generated by SMT solvers, especially those based on the DPLL(T) architecture [36], are two-tiered refutation proofs, with a propositional skeleton filled with several theory-specific subproofs [20]. The conclusion, a trivially unsatisfiable formula, is reached by means of propositional inferences applied to a set of input formulas and a set of theory lemmas. These are disjunctions of theory literals proved from no assumptions mostly with proof rules specific to the theory or theories in question—the theory of real arithmetic, of arrays, etc.

Large portions of the proof’s propositional part typically consist of applications of some variant of the resolution rule. These subproofs are generated similarly to what is done by proof-producing SAT solvers, where resolution is used for conflict analysis and lemma generation [20, 51]. A proof format proposed in 2005 by Van Gelder for SAT solvers is based directly on resolution [46]. Input formulas in SMT differ from those given to SAT solvers both because they are not necessarily in conjunctive normal form and because they contain non-propositional atoms. As a consequence, the rest of the propositional part of SMT proofs involves CNF conversion rules as well as abstraction rules that uniformly replace theory atoms in input formulas and theory lemmas with Boolean variables. While SMT solvers usually work just with quantifier-free formulas, some of them can reason about quantifiers as well, by generating and using selected ground instances of quantified formulas. In these cases, output proofs also contain applications of rules for quantifier instantiation.

In the following, we demonstrate different ways of representing propositional clauses and SMT formulas and lemmas in LFSC, and of encoding proof systems for them with various degrees of support for efficient proof checking. For simplicity and space constraints, we consider only a couple of individual theories, and restrict our attention to quantifier-free formulas. We note that encoding proofs involving combinations of theories is more laborious but not qualitatively more difficult; encoding SMT proofs for quantified formulas is straightforward thanks to LFSC’s support for higher-order abstract syntax, which allows one to represent and manipulate quantifiers as higher-order functions, in a completely standard way.

4.1 Encoding propositional resolution

The first step in encoding any proof system in LFSC (or LF for that matter) is to encode its formulas. In the case of propositional resolution, this means encoding propositional clauses. Figure 4 presents a possible encoding, with type and type constructor declarations in LFSC’s concrete syntax. We first declare an LFSC type var for propositional variables and then a type lit for propositional literals. Type lit has two constructors, pos and neg, both of type Πx:var. lit, which turn a variable into a literal of positive, respectively negative, polarity. We use these to represent positive and negative occurrences of a variable in a clause. The type clause, for propositional clauses, is endowed with two constructors that allow the encoding of clauses as lists of literals. The constant cln represents the empty clause (□). The function clc intuitively takes a literal l and a clause c, and returns a new clause consisting of l followed by the literals of c. For example, a clause like P∨¬Q can be encoded as the term (clc (pos P) (clc (neg Q) cln)).

Fig. 4: Definition of propositional clauses in LFSC concrete syntax
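In concrete syntax, the declarations just described amount to something like the following sketch (argument names are illustrative; the actual declarations are those of Fig. 4):

  (declare var type)
  (declare lit type)
  (declare pos (! x var lit))
  (declare neg (! x var lit))

  (declare clause type)
  (declare cln clause)
  (declare clc (! l lit (! c clause clause)))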

Figure 5 provides LFSC declarations that model binary propositional resolution with factoring. The type holds, indexed by values of type clause, represents the type of proofs for clauses. Intuitively, for any clause c, values of type (holds c) are proofs of c. The side-condition function resolve takes two clauses and a variable v, and returns the result of resolving the two clauses together with v as the pivot, after eliminating any duplicate literals in the resolvent. The constructor R encodes the resolution inference rule. Its type

$$\begin{array}{l} {\varPi} c_1{:}\mathsf{clause}.\, {\varPi} c_2{:}\mathsf{clause}.\, {\varPi} c_3{:}\mathsf{clause}.\\ \quad {\varPi} u_1{:}\mathsf{holds}\ c_1.\, {\varPi} u_2{:}\mathsf{holds}\ c_2.\, {\varPi} v{:}\mathsf{var}\, \{(\mathsf{resolve}\ c_1\ c_2\ v)\ c_3\}.\, \mathsf{holds}\ c_3 \end{array} $$

can be paraphrased as follows: for any clauses c 1,c 2,c 3 and variable v, the rule R returns a proof of c 3 from a proof of c 1 and a proof of c 2 provided that c 3 is the result of successfully applying the resolve function to c 1,c 2 and v. The side condition function resolve is defined as follows (using a number of auxiliary functions whose definition can be found in the appendix). To resolve clauses c 1 and c 2 with pivot v, v must occur in a positive literal of c 1 and a negative literal of c 2 (checked with the in function). In that case, the resolvent clause is computed by removing (with remove) all positive occurrences of v from c 1 and all negative ones from c 2, concatenating the resulting clauses (with append), and finally dropping any duplicates from the concatenation (with drop_dups); otherwise, resolve, and consequently the side condition of R, fails.

Fig. 5: The propositional resolution calculus in LFSC concrete syntax
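A sketch of how holds and the R rule can be declared in concrete syntax, using the (^ …) side-condition marker from Sect. 2 and a separately defined resolve program (the actual declarations are those of Fig. 5):

  (declare holds (! c clause type))

  (declare R
    (! c1 clause (! c2 clause (! c3 clause
    (! u1 (holds c1) (! u2 (holds c2)
    (! v var (^ (resolve c1 c2 v) c3)   ; c3 must equal the computed resolvent
       (holds c3)))))))))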

In proof terms containing applications of the R rule, the values of its input variables c 1,c 2 and c 3 can be determined from later input values, namely the concrete types of u 1,u 2 and v, respectively. Hence, in those applications c 1,…,c 3 can be replaced by the wildcard _, as mentioned in Sect. 3 and shown in Fig. 6.

Fig. 6: An example refutation and its LFSC encoding, respectively in abstract and in concrete syntax (as argument of the check command). In the concrete syntax, (% x τ t) stands for λx:τ. t; for convenience, the ascription operator : takes first a type and then a term

The single rule above is enough to encode proofs in the propositional resolution calculus. This does not appear to be possible in LF. Without side conditions, one also needs auxiliary rules, for instance, to move a pivot to the head of the list representing the clause and to perform factoring on the resolvent. The upshot of this is a more complex proof system and bigger proofs. Other approaches to checking resolution proofs avoid the need for those auxiliary rules by hard coding the clause type in the proof checker and implementing it as a set of literals. An example is work by Weber and Amjad on reconstructing proofs produced by an external SAT solver in Isabelle/HOL [48]. They use several novel encoding techniques to take advantage of the fact that the native sequents of the Isabelle/HOL theorem prover are of the form Γ ⊢ ϕ, where Γ is interpreted as a set of literals. They note the importance of these techniques for achieving acceptable performance over their earlier work, where rules for reordering literals in a clause, for example, were required. Their focus is on importing external proofs into Isabelle/HOL, not trustworthy efficient proof-checking in its own right. But we point out that it would be wrong to conclude that their approach is intrinsically more declarative than the LFSC approach: in their case, the computational side-conditions needed to maintain the context Γ as a set have simply been made implicit, as part of the core inference system of the theorem prover. In contrast, the LFSC approach makes such side conditions explicit, and user-definable.

Example 1

For a simple example of a resolution proof, consider a propositional clause set containing the clauses c 1:=¬V 1V 2, c 2:=¬V 2V 3, c 3:=¬V 3∨¬V 2, and c 4:=V 1V 2. A resolution derivation of the empty clause from these clauses is given in Fig. 6. The proof can be represented in LFSC as the lambda term below the proof tree. Ascription is used to assign type (holds □) to the main subterm (R _ … v 2) under the assumption that all four input clauses hold. This assumption is encoded by using the input (i.e., lambda) variables p 1,…,p 4 of type (holds c 1),…,(holds c 4), respectively. Checking the correctness of the original proof in the resolution calculus then amounts to checking that the lambda term is well-typed in LFSC when its _ holes are filled in as prescribed by the definition of R. In the concrete syntax, this is achieved by passing the proof term to the check command.
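A proof term with the structure just described could look like the following sketch, assuming the derivation first resolves c 4 with c 1 on V 1 and c 2 with c 3 on V 3 before the final step on V 2 (the actual term in Fig. 6 may differ in inessential details):

  (check
   (% v1 var (% v2 var (% v3 var
   (% p1 (holds (clc (neg v1) (clc (pos v2) cln)))
   (% p2 (holds (clc (neg v2) (clc (pos v3) cln)))
   (% p3 (holds (clc (neg v3) (clc (neg v2) cln)))
   (% p4 (holds (clc (pos v1) (clc (pos v2) cln)))
     (: (holds cln)
        (R _ _ _ (R _ _ _ p4 p1 v1) (R _ _ _ p2 p3 v3) v2))))))))))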

The use of lambda abstraction in the example above comes from standard LF encoding methodology. In particular, note how object-language variables (the V i ’s) are represented by LFSC meta-variables (the λ-variables v 1,…,v 4). This way, safe renaming and safe substitution of bound variables at the object level are inherited for free from the meta-level. In LFSC, an additional motivation for using meta-variables for object language variables is that we can efficiently test the former for equality in side conditions using variable marking. In the resolution proof system described here, this is necessary in the side condition of the R rule—for instance, to check that the pivot occurs in the clauses being resolved upon (see Appendix B).

4.2 Deferred resolution

The single-rule resolution calculus presented above can be further improved in terms of proof checking performance by delaying the side condition tests, as done in constrained resolution approaches [41]. One can modify the clause data structure so that it includes constraints representing those conditions. Side condition constraints are accumulated in resolvent clauses and then checked periodically, possibly just at the very end, once the final clause has been deduced. The effect of this approach is that (i) checking resolution applications becomes a constant-time operation, and (ii) side condition checks can be deferred, accumulated, and then performed more efficiently in a single sweep using a new rule that converts a constrained clause to a regular one after discharging its attached constraint.

There are many ways to implement this general idea. We present one in Fig. 7, based on extending the clause type of Fig. 4 with two more constructors: clr and con. The term (clr l c) denotes the clause consisting of all the literals of c except l, assuming that l indeed occurs in c. The expression (con c 1 c 2) denotes the clause consisting of all the literals that are in c 1 or in c 2. Given two clauses c 1 and c 2 and a pivot variable v, the new resolution rule DR, with no side conditions, produces the resolvent (con (clr (pos v) c 1) (clr (neg v) c 2)) which carries within itself the resolution constraint that (pos v) must occur in c 1 and (neg v) in c 2. Applications of the resolution rule can alternate with applications of the rule S, which converts a resolvent clause into a regular clause (constructed with just cln and clc) while also checking that the resolvent’s resolution constraints are satisfied. A sensible strategy is to apply S both to the final resolvent and to any intermediate resolvent that is used more than once in the overall proof—to avoid unnecessary duplication of constraints.

Fig. 7: New clause type and rules for deferred resolution
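A sketch of the corresponding declarations, with simplify being the side condition function described below (the actual declarations are those of Fig. 7):

  (declare clr (! l lit (! c clause clause)))
  (declare con (! c1 clause (! c2 clause clause)))

  ; deferred resolution: no side condition, the constraint is recorded in the resolvent
  (declare DR
    (! c1 clause (! c2 clause
    (! u1 (holds c1) (! u2 (holds c2)
    (! v var
       (holds (con (clr (pos v) c1) (clr (neg v) c2)))))))))

  ; discharge accumulated constraints and convert to a regular clause
  (declare S
    (! c1 clause (! c2 clause
    (! u (holds c1)
    (^ (simplify c1) c2)
       (holds c2)))))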

The side condition function for S is provided in pseudo-code (for improved readability) in Fig. 8. The pseudo-code should be self-explanatory. The auxiliary function append, defined only on regular clauses, works like a standard list append function. Since the cost of append is linear in the first argument, simplify executes more efficiently with linear resolution proofs, where at most one of the two premises of each resolution step is a previously proved (and simplified) lemma. Such proofs are naturally generated by SMT solvers with a propositional engine based on conflict analysis and lemma learning—which means essentially all SMT solvers available today. In some cases, clauses returned by simplify may contain duplicate literals. However, such literals will be removed by subsequent calls to simplify, thereby preventing any significant accumulation in the clauses we produce.

Fig. 8: Pseudo-code for side condition function used by the S rule
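Based on the description above, the behavior of simplify can be summarized by the following rough functional pseudo-code (a reconstruction of the idea only, not the exact code of Fig. 8; remove l c is assumed to fail when l does not occur in c):

  simplify cln         = cln
  simplify (clc l c)   = clc l (simplify c)
  simplify (clr l c)   = remove l (simplify c)
  simplify (con c1 c2) = append (simplify c1) (simplify c2)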

Our experiments show that deferred resolution leads to significant performance improvements at proof checking time: checking deferred resolution proofs is on average 5 times faster than checking proofs using the resolution rule R [38]. The increased speed does come here at the cost of increased size and complexity of the side condition code, and so of the trusted base. The main point is again that LFSC gives users the choice of how big they want the trusted base to be, while also documenting that choice explicitly in the side condition code.

4.3 CNF conversion

Most SMT solvers accept as input quantifier-free formulas (from now on simply formulas) but do the bulk of their reasoning on a set of clauses derived from the input via a conversion to CNF or, equivalently, clause form. For proof checking purposes, it is then necessary to define proof rules that account for this conversion. Defining a good set of such proof rules is challenging because of the variety of CNF transformations used in practice. Additional difficulties, at least when using logical frameworks, come from more mundane but nevertheless important problems such as how to encode with proof rules, which have a fixed number of premises, transformations that treat operators like logical conjunction and disjunction as multiarity symbols, with an arbitrary number of arguments.

To show how these difficulties can be addressed in LFSC, we now discuss a hybrid data structure we call partial clauses, which mixes formulas and clauses and supports the encoding of many CNF conversion methods as small-step transformations on partial clauses. Partial clauses represent intermediate states between an initial formula to be converted to clause form and its final clause form. We then present a general set of rewrite rules on partial clauses that can be easily encoded as LFSC proof rules. Abstractly, a partial clause is simply a pair

$$(\phi_1, \ldots, \phi_m; l_1 \lor \cdots \lor l_n) $$

consisting of a (possibly empty) sequence of formulas and a clause. Semantically, it is just the disjunction ϕ 1∨⋯∨ϕ m ∨l 1∨⋯∨l n of all the formulas in the sequence with the clause. A set {ϕ 1,…,ϕ k } of input formulas, understood conjunctively, can be represented as the sequence of partial clauses (ϕ 1; ),…,(ϕ k ; ). A set of rewrite rules can be used to turn this sequence into an equisatisfiable sequence of partial clauses of the form ( ;c 1),…,( ;c n ), which is in turn equisatisfiable with c 1∧⋯∧c n . Figure 9 describes some of the rewrite rules for partial clauses. We defined 31 CNF conversion rules to transform partial clauses. Most rules eliminate logical connectives and let-bindings in a similar way as the ones shown in Fig. 9. Several kinds of popular CNF conversion algorithms can be realized as particular application strategies for this set of rewrite rules (or a subset thereof).

Fig. 9: Sample CNF conversion rules for partial clauses (shown as a rewrite system). Φ is a sequence of formulas and c is a sequence of literals (a clause)
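For instance, two rules in this spirit (shown here only as an illustration of the general shape of such rewrite rules, not verbatim from Fig. 9) decompose disjunctions and conjunctions occurring in the formula part of a partial clause:

$$(\varPhi, \phi_1 \lor \phi_2;\ c) \Longrightarrow (\varPhi, \phi_1, \phi_2;\ c) \qquad\qquad (\varPhi, \phi_1 \land \phi_2;\ c) \Longrightarrow (\varPhi, \phi_1;\ c),\ (\varPhi, \phi_2;\ c) $$

The first rule preserves equivalence, while the second splits a partial clause into one partial clause per conjunct.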

Formulating the rewrite rules of Fig. 9 as LFSC proof rules is not difficult. The only challenge is that conversions based on them and including rename are only satisfiability preserving, not equivalence preserving. To guarantee soundness in those cases, we use natural-deduction style proof rules of the following general form for each rewrite rule p ⇒ p 1,…,p n in Fig. 9: derive □, the empty clause, from (i) a proof of the partial clause p and (ii) a proof of □ from the partial clauses p 1,…,p n . We provide one example of these proof rules in Fig. 10, namely the one for rename; the other proof rules are similar. In the figure, the type formSeq for sequences of formulas has two constructors, analogous to the usual ones for lists. The constructor pc_holds is the analogue of holds, but for partial clauses—it denotes a proof of the partial clause (Φ;c) for every sequence Φ of formulas and clause c. Note how the requirement in rename that the variable v be fresh is achieved at the meta-level in the LFSC proof rule with the use of a Π-bound variable.

Fig. 10: LFSC proof rule for rename transformation in Fig. 9

4.4 Converting theory lemmas to propositional clauses

When converting input formulas to clause form, SMT solvers also abstract each theory atom ϕ (e.g., s=t,s<t, etc.) occurring in the input with a unique propositional variable v, and store the corresponding mapping internally. This operation can be encoded in LFSC using a proof rule similar to rename from Fig. 10, but also incorporating the mapping between v and ϕ. In particular, SMT solvers based on the lazy approach [4, 42] abstract theory atoms with propositional variables to separate propositional reasoning, done by a SAT engine which works with a set of propositional clauses, from theory reasoning proper, done by an internal theory solver which works only with sets of theory literals, theory atoms and their negations. At the proof level, the communication between the theory solver and the SAT engine is established by having the theory solver prove some theory lemmas, in the form of disjunctions of theory literals, whose abstraction is then used by the SAT engine as if it was an additional input clause. A convenient way to produce proofs that connect proofs of theory lemmas with Boolean refutations, which use abstractions of theory lemmas and of clauses derived from the input formulas, is again to use natural deduction-style proof rules.

Figure 11 shows two rules used for this purpose. The rule assume_true derives the propositional clause ¬vc from the assumptions that (i) v abstracts a formula ϕ (expressed by the type (atom v ϕ)) and (ii) c is provable from ϕ. Similarly, assume_false derives the clause vc from the assumptions that v abstracts a formula ϕ and c is provable from ¬ϕ. Suppose ψ 1∨⋯∨ψ n is a theory lemma. A proof-producing theory solver can be easily instrumented to prove the empty clause from the assumptions \(\overline{\psi}_{1}, \ldots, \overline{\psi}_{n}\), where \(\overline{\psi}_{i}\) denotes the complement of the literal ψ i . This proof can be expressed by the theory solver with nested applications of assume_true and assume_false, and become a proof of the propositional clause l 1∨⋯∨l n , where each l i is the propositional literal corresponding to ψ i .

Fig. 11: Assumption rules for theory lemmas in LFSC concrete syntax
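A sketch of how assume_true could be declared is given below; the type constructors formula and th_holds (a judgment that a theory formula holds) are assumptions of this sketch, and the actual declarations are those of Fig. 11:

  (declare formula type)                          ; assumed type of theory formulas
  (declare th_holds (! phi formula type))         ; assumed judgment for theory formulas
  (declare atom (! v var (! phi formula type)))   ; v abstracts the formula phi

  (declare assume_true
    (! v var (! phi formula (! c clause
    (! a (atom v phi)
    (! u (! h (th_holds phi) (holds c))           ; c is provable from phi
       (holds (clc (neg v) c))))))))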

Example 2

Consider a theory lemma such as ¬(s=t)∨t=s, say, for some terms s and t. Staying at the abstract syntax level, let P be a proof term encoding a proof of □ from the assumptions s=t and ¬(t=s). By construction, this proof term has type (holds □). Suppose a 1,a 2 are meta-variables of type (atom v 1 (s=t)) and (atom v 2 (t=s)), respectively, for some meta-variables v 1 and v 2 of type var. Then, the proof term

$$\begin{array}{l} (\mathsf{assume\_true}\ \_\ \_\ \_\ a_1\ (\lambda h_1. \\ \ (\mathsf{assume\_false}\ \_\ \_\ \_\ a_2\ (\lambda h_2.\,P)))) \end{array} $$

has type holds (¬v 1∨v 2) and can be included in larger proof terms declaring v 1,v 2,a 1, and a 2 as above. Note that the λ-variables h 1 and h 2 do not need a type annotation here as their types can be inferred from the types of a 1 and a 2.

5 Encoding SMT logics

In this section, we show in more detail how LFSC allows one to represent SMT proofs involving theory reasoning. We consider two basic logics (i.e., fragments of first-order theories) in the SMT-LIB standard [5]: QF_IDL and QF_LRA.

5.1 Quantifier-free integer difference logic

The logic QF_IDL, for quantifier-free integer difference logic, consists of formulas interpreted over the integer numbers and restricted (in essence) to Boolean combinations of atoms of the form x − y ≤ c where x and y are integer variables (equivalently, free constants) and c is an integer value, i.e., a possibly negated numeral. Some QF_IDL rules for reasoning about the satisfiability of sets of literals in this logic are shown in Fig. 12, in conventional mathematical notation. Rule side conditions are provided in braces, and are to be read as semantic expressions; for example, a+b=c in a side condition should be read as “c is the result of adding a and b.” Note that the side conditions involve only values, and so can be checked by (simple) computation. The actual language of QF_IDL contains additional atoms besides those of the form x − y ≤ c. For instance, atoms such as x < y and x − y < c are also allowed. Typical solvers for this logic then use a number of normalization rules to reduce these additional atoms to the basic form. An example would be the rule lt_to_leq in Fig. 12.

Fig. 12: Sample QF_IDL rules and LFSC encodings (x, y, z are constant symbols and a, b, c, d are integer values)

Encoding typical QF_IDL proof rules in LFSC is straightforward thanks to the built-in support for side conditions and for arbitrary precision integers in the side condition language. As an example, Fig. 13 shows the idl_contra rule.

Fig. 13: LFSC encoding of the idl_contra rule. Type mpz is the built-in arbitrary precision integer type (the name comes from the underlying GNU Multiple Precision Arithmetic Library, libgmp); as_int builds a term of the SMT integer type int from a mpz number; tt and ff are the constructors of the bool predefined type for Booleans; (ifneg x y z) evaluates to y or z depending on whether the mpz number x is negative or not

5.2 Quantifier-free linear real arithmetic

The logic QF_LRA, for quantifier-free linear real arithmetic, consists of formulas interpreted over the real numbers and restricted to Boolean combinations of linear equations and inequations over real variables, with rational coefficients. We sketch an approach for encoding, in LFSC, refutations for sets of QF_LRA literals. We devised an LFSC proof system for QF_LRA that centers around proof inferences for normalized linear polynomial atoms; that is, atoms of the form

$$a_1 \cdot x_1 + \cdots + a_n \cdot x_n + a_{n+1} \sim 0 $$

where each a i is a rational value, each x i is a real variable, and ∼ is one of the operators =,>,≥. We represent linear polynomials in LFSC as (inductive data type) values of the form (pol a l), where a is a rational value and l is a list of monomials of the form (mon a i x i ) for rational value a i and real variable x i . With this representation, certain computational inferences in QF_LRA become immediate. For example, to verify that a normalized linear polynomial (pol a l) is the zero polynomial, it suffices to check that a is zero and l is the empty list. With normalized linear polynomials, proving a statement like (1) in Sect. 2 amounts to a single rule application whose side condition is an arithmetic check for equality between rational values.
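For example, under this representation the normalized atom 3·x + 2·y − 5 > 0 is built from a polynomial value (pol a l) with a = −5 and l the two-element list of monomials (mon 3 x) and (mon 2 y); the concrete constructors for the monomial list itself are left schematic here.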

Figure 14 shows a representative selection of the rules in our LFSC proof system for QF_LRA based on a normalized linear polynomial representation of QF_LRA literals. Again, side conditions are written together with the premises, in braces, and are to be read as mathematical notation. For example, the side condition {p+p′=0}, say, denotes the result of checking whether the expression p+p′ evaluates to the zero element of the polynomial ring ℚ[X], where ℚ is the field of rational numbers and X the set of all variables. Expressions of the form e↓ in the rules denote the result of normalizing the linear polynomial expression e. The normalization is actually done in the rule’s side condition, which is however left implicit here to keep the notation uncluttered. The main idea of this proof system, based on well-known results, is that a set of normalized linear polynomial atoms can be shown to be unsatisfiable if and only if it is possible to multiply each of them by some rational coefficient such that their sum normalizes to an atom of the form p∼0 that does not hold in ℚ[X].

Fig. 14: Some of the proof rules for linear polynomial atoms. Letter p denotes normalized linear polynomials; a denotes rational values; the arithmetic operators denote operations on linear polynomials

Since the language of QF_LRA is not based on normalized linear polynomial atoms, any refutation for a set of QF_LRA literals must also include rules that account for the translation of this set into an equisatisfiable set of normalized linear polynomial atoms. Figure 15 provides a set of proof rules for this translation. Figure 16 shows, as an example, the encoding of rule \(\mathsf{pol\_norm_{+}}\) in LFSC’s concrete syntax. The type \((\mathsf{pol\_norm}\ t\ p)\) encodes the judgment that p is the polynomial normalization of term t—expressed in the rules of Fig. 15 as t=p. In \(\mathsf{pol\_norm_{+}}\), the polynomials p 1 and p 2 for the terms t 1 and t 2 are added together using the side condition poly_add, which produces the normalized polynomial (p 1+p 2)↓. The side condition of the rule requires that polynomial to be the same as the provided one, p 3. The other proof rules are encoded in a similar way using side condition functions that implement the other polynomial operations (subtraction, scalar multiplication, and comparisons between constant polynomials).

Fig. 15: Proof rules for conversion to normalized linear polynomial atoms. Letter t denotes QF_LRA terms; a t and a p denote the same rational constant, in one case considered as a term and in the other as a polynomial (similarly for the variables v t and v p)

Fig. 16: Normalization function for + terms in concrete syntax
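A sketch of the shape of this rule, assuming a type poly of normalized polynomials and the illustrative names pol_norm_plus, poly_add, and + (Fig. 16 gives the actual declaration):

  (declare poly type)
  (declare pol_norm (! t term (! p poly type)))   ; p is the normalization of term t

  (declare pol_norm_plus
    (! t1 term (! t2 term
    (! p1 poly (! p2 poly (! p3 poly
    (! u1 (pol_norm t1 p1)
    (! u2 (pol_norm t2 p2)
    (^ (poly_add p1 p2) p3)   ; p3 must be the normalized sum of p1 and p2
       (pol_norm (+ t1 t2) p3)))))))))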

We observe that the overall size of the side condition code in this proof system is rather small: about 60 lines total (and less than 2 kilobytes). Complexity-wise, the various normalization functions are comparable to a merge sort of lists of key/value pairs. As a result, manually verifying the correctness of the side conditions is fairly easy.

6 Empirical results

In this section we provide some empirical results on LFSC proof checking for the logics presented in the previous section. These results were obtained with an LFSC proof checker that we have developed to be both general (i.e., supporting the whole LFSC language) and fast. The tool, which we will refer to as lfsc here, is more accurately described as a proof checker generator: given an LFSC signature, a text file declaring a proof system in LFSC format, it produces a checker for that proof system. Some of its notable features in support of high performance proof checking are the compilation of side conditions, as opposed to the incorporation of a side condition language interpreter in the proof checker, and the generation of proof checkers that check proof terms on the fly, as they parse them. See previous work by Stump et al., as well as work by Necula et al., for more on fast LF proof checking [34, 35, 43, 44, 50].

6.1 QF_IDL

In separate work, we developed an SMT solver for the QF_IDL logic, mostly to experiment with proof generation in SMT. This solver, called clsat, can solve moderately challenging QF_IDL benchmarks from the SMT-LIB library, namely those classified with difficulty 0 through 3. We ran clsat on the unsatisfiable QF_IDL benchmarks in SMT-LIB, and had it produce proofs in the LFSC proof system sketched in Sect. 5.1 optimized with the deferred resolution rule described in Sect. 4.2. Then we checked the proofs using the lfsc checker. The experiments were performed on the SMT-EXEC solver execution service. A timeout of 1,800 seconds was used for each of the 622 benchmarks. Table 1 summarizes those results for two configurations: clsat (r453 on SMT-EXEC), in which clsat is run with proof-production off; and clsat+lfsc (r591 on SMT-EXEC), in which clsat is run with proof-production on, followed by a run of lfsc on the produced proof.

Table 1: Summary of results for QF_IDL (timeout: 1,800 seconds)

The Solved column gives the number of benchmarks each configuration completed successfully. The Unsolved column gives the number of benchmarks each configuration failed to solve before timeout due to clsat’s incomplete CNF conversion implementation and lack of arbitrary precision arithmetic. The first configuration solved all benchmarks solved by the second. The one additional unsolved answer for clsat+lfsc is diamonds.18.10.i.a.u.smt, whose proof is bigger than 2 GB in size and on which the proof checker failed due to a memory overflow. The Time column gives the total times taken by each configuration to solve the 539 benchmarks solved by both. Those totals show that the overall overhead of proof generation and proof checking over just solving for those benchmarks was 31.6 %, which we consider rather reasonable.

For a more detailed picture on the overhead incurred with proof-checking, Fig. 17 compares proof checking times by clsat+lfsc with clsat’s solve-only times. Each dot represents one of the 542 benchmarks that clsat could solve. The horizontal axis is for solve-only times (without proof production) while the vertical axis is for proof checking times. Both are in seconds on a log scale. It turned out that the proofs from certain families of benchmarks, namely fischer, diamonds, planning and post_office, are much more difficult to verify than those for other families. In particular, for those benchmarks proof checking took longer than solving. The worst offender is benchmark diamonds.11.3.i.a.u.smt whose proof checking time was 2.30 s vs 0.2 s of solving time. However, the figure also shows that as these benchmarks get more difficult, the relative proof overheads appear to converge towards 100 %, as indicated by the dotted line.

Fig. 17: Solve-only times versus proof checking times for QF_IDL

6.2 Results for QF_LRA

To evaluate experimentally the LFSC proof system for QF_LRA sketched in Sect. 5.2, we instrumented the SMT solver cvc3 to output proofs in that system. Because cvc3 already has its own proof generation module, which is tightly integrated with the rest of the solver, we generated the LFSC proofs using cvc3’s native proofs as a guide. We looked at all the QF_LRA and QF_RDL unsatisfiable benchmarks from SMT-LIB.

Our experimental evaluation contains no comparisons with other proof checkers besides lfsc for lack of alternative proof-generating solvers and checkers for QF_LRA. To our knowledge, the only potential candidate was a former system developed by Ge and Barrett that used the HOL Light prover as a proof checker for cvc3 [19]. Unfortunately, that system, which was never tested on QF_LRA benchmarks and was not kept in sync with the latest developments of cvc3, breaks on most of these benchmarks. Instead, as a reference point, we compared proofs produced in our LFSC proof system against proofs translated into an alternative LFSC proof system for QF_LRA that mimics the rules contained in cvc3’s native proof format. These rules are mostly declarative, with a few exceptions.

We ran our experiments on a Linux machine with two 2.67 GHz 4-core Xeon processors and 8 GB of RAM. We discuss benchmarks for which cvc3 could generate a proof within a timeout of 900 seconds: 161 of the 317 unsatisfiable QF_LRA benchmarks, and 40 of the 113 unsatisfiable QF_RDL benchmarks. We collected runtimes for the following three main configurations of cvc3.

cvc: Default, solving benchmarks but with no proof generation.

lrac: Solving with proof generation in the LFSC encoding of cvc3’s format.

lra: Solving with proof generation in the LFSC signature for QF_LRA.

Recall that the main idea of our proof system for QF_LRA is to use a polynomial representation of LRA terms so that theory reasoning is justified with rules like those in Fig. 14. Propositional reasoning steps (after CNF conversion) are encoded with the deferred resolution rule presented in Sect. 4.2. As a consequence, one should expect the effectiveness of the polynomial representation in reducing proof sizes and checking times to be correlated with the amount of theory content of a proof. Concretely, we measure that as the percentage of nodes in a native cvc3 proof that belong to the (sub)proof of a theory lemma. For our benchmark set, the average theory content was very low, about 8.3 %, considerably diluting the global impact of our polynomial representation. However, this impact is clearly significant, and positive, on proofs or subproofs with high theory content, as discussed below.

Figure 18 shows a summary of our results for various families of benchmarks. Since translating cvc3’s native proofs into LFSC format increased proof generation time and proof sizes only by a small constant factor, we do not include these values in the table. As the table shows, cvc3’s solving times (i.e., the runtimes of the cvc configuration) are on average 1.65 times faster than solving with native proof generation (the lrac configuration). The translation to proofs in our system (the lra configuration) adds additional overhead, which is however less than 3 % on average.

Fig. 18: Cumulative results for QF_LRA, grouped by benchmark family. Column 2 gives the numbers of benchmarks in each family. Columns 3–5 give cvc3’s aggregate runtime for each of the 3 configurations. Columns 6–7 show the proof sizes for the two proof-producing configurations. Columns 8–9 show LFSC proof checking times. The last column shows the average theory content of each benchmark family

The scatter plots in Fig. 19 are helpful in comparing proof sizes and proof checking times for the two proof systems. The first plot shows that ours, lra, achieves constant size compression factors over the LFSC encoding of native cvc3 proofs, lrac. A number of benchmarks in our test set do not benefit from using our proof system. Such benchmarks are minimally dependent on theory reasoning, having a theory content of less than 2 %. In contrast, for benchmarks with higher theory content, lra is effective at proof compression compared to lrac. For instance, over the set of all benchmarks with a theory content of 10 % or more, proofs in our system occupy on average 24 % less space than cvc3 native proofs in LFSC format. When focusing just on subproofs of theory lemmas, the average compression goes up significantly, to 81.3 %; that is to say, theory lemma subproofs in our proof system are 5.3 times smaller than native cvc3 proofs of the same lemmas. Interestingly, the compression factor is not the same for all benchmarks, although an analysis of the individual results shows that benchmarks in the same SMT-LIB family tend to have the same compression factor.

Fig. 19: Comparing proof sizes and proof checking times for QF_LRA

It is generally expected that proof checking should be substantially faster than proof generation or even just solving. This was generally the case in our experiments for both proof systems when proof checking used compiled side conditions. lfsc’s proof checking times for both proof systems were around 9 times smaller than cvc3’s solving times. Furthermore, checking lra proofs was always more efficient than checking the LFSC encoding of cvc3’s native proofs; in particular, it was on average 2.3 times faster for proofs of theory lemmas.

Overall, our experiments with multiple LFSC proof systems for the same logic (QF_LRA) show that mixing declarative proof rules, with no side conditions, with more computational arithmetic proof rules, with fairly simple side conditions, is effective in reducing proof sizes and proof checking times.

7 Further applications: leveraging type inference

To illustrate the power and flexibility of the LFSC framework further, we sketch an additional research direction we have started to explore recently. More details can be found in [40]. Its starting point is the use of the wildcard symbol _ in proof terms, which requires the LFSC proof checker to perform type inference as opposed to mere type checking. One can exploit this feature by designing a logical system in which certain terms of interest in a proof need not be specified beforehand, and are instead computed as a side effect of proof checking. An immediate application can be seen in interpolant-generating proofs.

Given a logical theory T and two sets of formulas A and B that are jointly unsatisfiable in T, a T-interpolant for A and B is a formula ϕ over the symbols of T and the free symbols common to A and B such that (i) A ⊨ T ϕ and (ii) B, ϕ ⊨ T ⊥, where ⊨ T is logical entailment in T and ⊥ is the universally false formula. For certain theories, interpolants can be generated efficiently from a refutation of A ∪ B. Interpolants have been used successfully in a variety of contexts, including symbolic model checking [30] and predicate abstraction [23]. In many applications, it is critical that formulas computed by interpolant-generation procedures are indeed interpolants; that is, exhibit the defining properties above. LFSC offers a way of addressing this need by generating certified interpolants.

Extracting interpolants from refutation proofs typically involves the use of interpolant-generating calculi, in which inference rules are augmented with additional information necessary to construct an interpolant. For example, an interpolant for an unsatisfiable pair of propositional clause sets (A,B) can be constructed from a refutation of A ∪ B written in a calculus with rules based on sequents of the form (A,B)⊢c[ϕ], where c is a clause and ϕ a formula. Sequents like these are commonly referred to as partial interpolants. When c is the empty clause, ϕ is an interpolant for (A,B). The LF language allows one to augment proof rules to carry additional information through the use of suitably modified types. This makes encoding partial interpolants into LFSC straightforward. For the example above, the dependent type (holds c) used in Sect. 4.1 can be replaced with \((\mathsf{p\_interpolant}\ c\ \phi)\), where again c is a clause and ϕ a formula annotating c.

Figure 20 provides some basic definitions common to the encodings of interpolant generating calculi in LFSC for two sets A and B of input formulas. We use a base type color with two nullary constructors, A and B. Colors are used as a way to tag an input formula ϕ as occurring in the set A or B, through the type (colored ϕ col), where col is either A or B. Since formulas in the sets A and B can be inferred from the types of the free variables in proof terms, the sets do not need to be explicitly recorded as part of proof judgment types. In particular, our judgment for interpolants is encoded as the type (interpolant ϕ).

Fig. 20: Basic definitions for encoding interpolating calculi in LFSC
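Spelled out in concrete syntax, these definitions amount to roughly the following sketch (Fig. 20 contains the actual declarations; formula and clause are assumed to be declared as before):

  (declare color type)
  (declare A color)
  (declare B color)

  (declare colored (! phi formula (! col color type)))        ; phi occurs in the set col
  (declare interpolant (! phi formula type))                   ; phi is an interpolant for (A,B)
  (declare p_interpolant (! c clause (! phi formula type)))    ; partial interpolant phi for clause c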

Our approach allows for two options to obtain certified interpolants. With the first, an LFSC proof term P can be checked against the type (interpolant ϕ) for some given formula ϕ. In other words, the alleged interpolant ϕ is explicitly provided as part of the proof, and if proof checking succeeds, then both the proof P and the interpolant ϕ are certified to be correct. Note that in this case the user and the proof checker must agree on the exact form of the interpolant ϕ. Alternatively, P can be checked against the type schema (interpolant _). If the proof checker verifies that P has type (interpolant ϕ) for some formula ϕ generated by type inference, it will output ϕ. In this case, interpolant generation comes as a side effect of proof checking, and the returned interpolant ϕ is correct by construction.

In recent experimental work [40] we found that interpolants can be generated using LFSC with a small overhead with respect to solving. In that work, we focused on interpolant generation for the theory of equality and uninterpreted functions (EUF), using the lfsc proof checker as an interpolant generator for proofs generated by cvc3. A simple calculus for interpolant generation in EUF can be encoded in LFSC in a natural way, with minimal dependence upon side conditions. Overall, our experiments showed that interpolant generation had a 22 % overhead with respect to solving with proof generation, indicating that the generation of certified interpolants is feasible in practice with high-performance SMT solvers.

8 Conclusion and future work

We have argued that efficient and highly customizable proof checking can be supported with the Logical Framework with Side Conditions, LFSC. We have shown how diverse proof rules, from purely propositional to core SMT to theory inferences, can be supported naturally and efficiently using LFSC. Thanks to an optimized implementation, LFSC proof checking times compare very favorably to solving times, using two independently developed SMT solvers. We have also shown how more advanced applications, such as interpolant generation, can also be supported by LFSC. This illustrates the further benefits of using a framework beyond basic proof checking.

Future work includes a new, more user-friendly syntax for LFSC signatures; a simpler and leaner re-implementation from scratch of the lfsc checker, intended for public release and under way; as well as additional theories and applications of LFSC for SMT. We also intend to pursue the ultimate goal of a standard SMT-LIB proof format, based on ideas from LFSC as well as from the broader SMT community.