——————————————————————————————————————————————————————————————————————————
| We give here a (partial) reformulation of Danos-Joinet-Schellinx |
| "A new deconstructive logic: linear logic" (DJS) from a computational |
| point of view, by reasoning on the term structure of proofs. |
——————————————————————————————————————————————————————————————————————————
Written May 2010, typos fixed in April 2014
From the computational point of view, using variables to denote the
occurrences of formulas makes contraction and weakening implicit.
This is the only way we depart from DJS, besides reasoning on the
proof-as-program structure instead of on the typed structure.
Syntax of LKtq
——————————————
inference rules
c ::= (x|a) axiom
| (x|f) left introduction
| (u|a) right introduction
| (t|e) cut rule
notations for active premises
t ::= μ a.c premise with active occurrence a of a conclusion
e ::= μ~x.c premise with active occurrence x of an hypothesis
notations for introduction rules
u ::= (t,t) ∧_a
| (t*t) ∧_m
| λx.t →_m^j (= multiplicative implication with standard orientation)
| λa.e →_m^j (= multiplicative implication with non-standard orientation)
| ι_j t ∨_a
| λ_j a.t ∨_m^j (= multiplicative disjunction with orientation j)
    | <e>          ¬
| ...
f ::= π_j e ∧_a
| λ_j x.e ∧_m^j (= multiplicative conjunction with orientation j)
| t.e →_m
| [e+e] ∨_a
| [e|e] ∨_m
    | <t>          ¬
| ...
Moreover, each variable comes with a colour → (also called t, for
"tête") or ← (also called q, for "queue") that we write postfix when
needed. When everything is coloured t, the calculus is call-by-name;
when everything is coloured q, it is call-by-value. Variables x→ and
a← are attractive, meaning that they attract the opposite side of a
cut whenever they can.
Note: the orientation tells how to order the two resulting cuts when
a logical cut happens (e.g. it amounts to orienting the asymmetrical
pair in call-by-value, or the asymmetrical case analysis in
call-by-name, i.e. to telling which component is more attractive than
the other).
Notation: we write ↑ for the coercion from f or a to e and from u or x to t
↑f =_def μ~y.(y|f) for y fresh
↑u =_def μ b.(u|b) for b fresh
↑a =_def μ~y.(y|a) for y fresh
↑x =_def μ b.(x|b) for b fresh
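As a toy concretisation of the syntax above, one may encode commands and premises as a small AST and implement the ↑ coercions directly; this is a minimal sketch under our own naming (Var, Covar, Cut, Mu, MuTilde, fresh are all our inventions, not DJS notation), modelling only the μ/μ~ layer:

```python
from dataclasses import dataclass

# Hypothetical encoding of LKtq commands and premises (all names are ours).
@dataclass(frozen=True)
class Var:      # hypothesis occurrence x
    name: str

@dataclass(frozen=True)
class Covar:    # conclusion occurrence a
    name: str

@dataclass(frozen=True)
class Cut:      # command c = (t|e); axioms (x|a) are Cut(Var, Covar)
    left: object
    right: object

@dataclass(frozen=True)
class Mu:       # t = μ a.c : premise with active conclusion a
    a: Covar
    c: Cut

@dataclass(frozen=True)
class MuTilde:  # e = μ~x.c : premise with active hypothesis x
    x: Var
    c: Cut

_count = 0
def fresh(prefix: str) -> str:
    """Generate a name unused so far (a simplifying assumption)."""
    global _count
    _count += 1
    return f"{prefix}{_count}"

def up_right(f) -> MuTilde:
    """↑f = μ~y.(y|f), y fresh (coercion to e; ↑a has the same shape)."""
    y = Var(fresh("y"))
    return MuTilde(y, Cut(y, f))

def up_left(u) -> Mu:
    """↑u = μ b.(u|b), b fresh (coercion to t; ↑x has the same shape)."""
    b = Covar(fresh("b"))
    return Mu(b, Cut(u, b))
```

For instance, `up_right(Covar("a"))` builds ↑a = μ~y.(y|a) with a fresh y.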
DJS terminology
———————————————
*Main* occurrence of a formula:
- x is a main occurrence of formula in (x|f) when x not in f
- a is a main occurrence of formula in (u|a) when a not in u
- x and a are main occurrences of formula in (x|a)
when x is main in (x|f), we write ↑f for μ~x.(x|f) and
when a is main in (u|a), we write ↑u for μ a.(u|a)
Note: we need to add the conditions "x not in f" and "a not in u"
because in our case contraction rules are implicit and attached to
the introduction rules. "Main" implicitly means that a value or
covalue is built.
*Main active interspaces* (m-a-interspaces)
The notion is less interesting in an untyped setting. It is used
to show that reduction preserves the structure of other
redexes. Approximately, we have:
- a main active interspace in active μ a.c is a one-hole context C[ ] s.t. c is C[a]
- a main active interspace in active μ~x.c is a one-hole context C[ ] s.t. c is C[x]
*Flat* main active interspace
- m-a-interspace C[ ] in μ a.C[a] is flat if C[ ] is (t|[])
- m-a-interspace C[ ] in μ~x.C[x] is flat if C[ ] is ([]|e)
A *lucid* m-a-interspace is irrelevant from the computational
perspective: it coincides with a flat m-a-interspace (since it
differs from it only by the presence of explicit contraction rules).
A *newly-born* occurrence of formula is also irrelevant from the
computational perspective: it coincides with a main occurrence
(since it differs from it only by the possible presence of an
explicit weakening rule).
A connective is *hetero-style* when one introduction rule is multiplicative (in LL
sense) and the other is additive; in the absence of explicit weakening and contraction
rules, the multiplicative and additive forms of the binary introduction rules are
isomorphic (e.g. (t,t) and (t*t)); accepting hetero-style connectives in the DJS
sense just means that the notations (t,t) and (t*t), and [e+e] and [e|e], are merged.
Cut-elimination rules (Def 2, Section 3.2)
—————————————————————
L-step:
(μ a.c|μ~x.c') when a main in c and x main in c'
(↑(t₁,t₂)|↑π_j e) → (t_j|e)
(↑(t₁*t₂)|↑λ₁x₁.μ~x₂.c) → (t₁|μ~x₁.(t₂|μ~x₂.c))
(↑(t₁*t₂)|↑λ₂x₂.μ~x₁.c) → (t₂|μ~x₂.(t₁|μ~x₁.c))
(↑λx.μ b.c|↑t.e) → (t|μ~x.(μ b.c|e))
(↑λb.μ~x.c|↑t.e) → (μ b.(t|μ~x.c)|e)
(↑ι_j t|↑[e₁+e₂]) → (t|e_j)
(↑λ_j a_j.μ a_~j.c|↑[e₁|e₂]) → (μ a_j.(μ a_~j.c|e_~j)|e_j)
(↑<e>|↑<t>) → (t|e)
Otherwise,
S1-step
(μ a.c|μ~x.c') with either a attractive and not main in c, or x attractive and not main in c'
(t|μ~x→.c) → c[x:=t] (x attractive)
(μ a←.c|e) → c[a:=e] (a attractive)
S2-step
(μ a.c|μ~x.c') with either a attractive and main in c or x attractive and main in c'
(μ a→.c|↑f) → c[a:=↑f] (x attractive but main)
(↑u|μ~x←.c) → c[x:=↑u] (a attractive but main)
and c[a:=t] and c[x:=e] are standard λ-calculus-style substitutions except that
(x|a)[a:=μ~y.c] =_def c[x/y]
(x|a)[x:=μ b.c] =_def c[a/b]
(u|a)[a:=e] =_def (↑u[a:=e]|e)
(x|f)[x:=t] =_def (t|↑f[x:=t])
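As an illustration, the dispatch between L-, S1- and S2-steps on a cut (μ a.c|μ~x.c') can be sketched as a small decision function. This is a toy model under one possible reading of the priorities (when both S1 cases could fire the system is not confluent, and the sketch arbitrarily prefers a); the function name and string labels are ours:

```python
# Toy dispatcher following the L/S1/S2 cases above. Inputs are each bound
# variable's colour ('t' or 'q') and whether it is main on its side.
def step(a_colour: str, a_main: bool, x_colour: str, x_main: bool) -> str:
    a_attractive = (a_colour == 'q')   # a← is attractive
    x_attractive = (x_colour == 't')   # x→ is attractive
    if a_main and x_main:
        return 'L'                     # logical step on the two introductions
    # S1: an attractive, non-main variable pulls in the opposite side.
    if a_attractive and not a_main:
        return 'S1: c[a:=e]'
    if x_attractive and not x_main:
        return "S1: c'[x:=t]"
    # S2: the attractive variable is main, so the other side is substituted.
    if x_attractive and x_main:
        return 'S2: c[a:=up(f)]'
    if a_attractive and a_main:
        return "S2: c'[x:=up(u)]"
    return 'stuck'                     # no rule applies
```

For instance, `step('t', False, 't', True)` falls in the S2 case: x→ is attractive but main, so the covalue is substituted for a instead.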
Note: When both reductions t and q are allowed, this is the same as
Urban's reduction system (Fig 2.4 of his PhD) but not the same as X
which uses instead (x|a)[a:=μ~y.c] =_def (x|μ~y.c) and (x|a)[x:=μ b.c]
=_def (μ b.c|a) and is thus not terminating (see Summers' PhD for
details).
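The two special axiom clauses, and their contrast with X's variant, can be sketched on a toy encoding where commands are nested tuples and variables are plain strings (tags 'cut', 'ax', 'mutilde' and all function names are ours):

```python
# Toy encoding: ('cut', t, e), ('ax', x, a), ('mutilde', y, c).
def rename(c, old, new):
    """c[new/old]: replace every occurrence of variable old by new."""
    if isinstance(c, tuple):
        return tuple(rename(part, old, new) for part in c)
    return new if c == old else c

def subst_ax(axiom, a, e):
    """(x|a)[a := μ~y.c] = c[x/y]   (the clause used here and by Urban)."""
    _, x, _ = axiom
    _, y, c = e
    return rename(c, y, x)

def subst_ax_X(axiom, a, e):
    """(x|a)[a := μ~y.c] = (x|μ~y.c)   (the X variant, non-terminating)."""
    _, x, _ = axiom
    return ('cut', x, e)
```

On e = μ~y.(y|b), the first clause produces the command c[x/y] = (x|b) directly, while the X variant rebuilds a cut (x|μ~y.(y|b)) whose reduction can recreate the original configuration.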
Plethoric embedding in linear logic (Def 6, Section 4.2)
———————————————————————————————————
(Γ₁^t Γ₂^q ⊢ Δ₁^t Δ₂^q)* = !?Γ₁* !Γ₂* ⊢ ?Δ₁* ?!Δ₂*
(C^t)* = !?C*
(C^q)* = ?!C*
(A ∧_a B)* = A* & B*
(A ∧_m^j B)* = !A* ⊗ !B*
(A →_m^j B)* = !A* -o ?B*
(A ∨_a B)* = A* ⊕ B*
(A ∨_m^j B)* = ?A* par ?B*
(¬ A)* = (A^⊥)*
where
C ::= A ∧_a B | A ∧_m^j B | A →_m^j B | A ∨_a B | A ∨_m^j B | ¬ A
A, B ::= C^t | C^q
Γ, Δ ::= A1, ..., An
A^⊥ dualize connectives and colors
Note: two consecutive exponentials of the same kind collapse into one (?? = ?, !! = !).
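A minimal sketch of this translation on formulas, with the ??/!! collapse built into the prefixing function. Atoms (opaque strings), the ASCII connectives (x), -o, (+), par and all function names are our additions; negation is omitted since it needs the duality operator ^⊥:

```python
def prefix(mark: str, s: str) -> str:
    """Prepend an exponential, collapsing '??' and '!!' into one."""
    return s if s.startswith(mark) else mark + s

def col(A):
    """(C^t)* = !?C*  and  (C^q)* = ?!C*  for a coloured formula ('t'|'q', C)."""
    colour, c = A
    s = star(c)
    if colour == 't':
        return prefix('!', prefix('?', s))
    return prefix('?', prefix('!', s))

def star(c):
    """C* for an uncoloured formula; atoms are opaque strings (our addition)."""
    if isinstance(c, str):
        return c
    op, a, b = c
    if op == 'and_a': return f"({col(a)} & {col(b)})"
    if op == 'and_m': return f"({prefix('!', col(a))} (x) {prefix('!', col(b))})"
    if op == 'imp_m': return f"({prefix('!', col(a))} -o {prefix('?', col(b))})"
    if op == 'or_a':  return f"({col(a)} (+) {col(b)})"
    if op == 'or_m':  return f"({prefix('?', col(a))} par {prefix('?', col(b))})"
    raise ValueError(op)
```

For instance, translating (X^q ∨_m Y^q) yields ?(?!X) par ?(?!Y), which collapses to ?!X par ?!Y.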
LKη
———
*η-constraint* on introduction rules (Def 20, Section 5.5)
Attractive subformulas active in irreversible rules are main. In
other words, positive rules on the right and negative rules on the
left are built from values/covalues in their attractive positions.
Resulting constrained syntax
V ::= μ a.c s.t. a is main in c if attractive
E ::= μ~x.c s.t. x is main in c if attractive
u ::= (t,t) ∧_a
| (V*V) ∧_m
    | λ(x,a).c     →_m (orientation is now irrelevant)
| ι_j t ∨_a
    | λ(a₁,a₂).c   ∨_m (orientation is now irrelevant)
| ¬
| ...
f ::= π_j e ∧_a
    | λ(x₁,x₂).c   ∧_m (orientation is now irrelevant)
| V.E →_m
| [e+e] ∨_a
| [E|E] ∨_m
| ¬
| ...
New L-step rules (order of cuts does not matter for multiplicative connectives)
(↑(t₁,t₂)|↑π_j e) → (t_j|e)
(↑(V₁*V₂)|↑λ(x₁,x₂).c) → (V₁|μ~x₁.(V₂|μ~x₂.c))
(↑λ(x,b).c|↑V.E) → (V|μ~x.(μ b.c|E))
(↑ι_j t|↑[e₁+e₂]) → (t|e_j)
(↑λ(a₁,a₂).c|↑[E₁|E₂]) → (μ a₁.(μ a₂.c|E₂)|E₁)
(↑<E>|↑<V>) → (V|E)
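The additive-pair step can be sketched concretely on a toy encoding where the introductions u and f are tagged tuples and the ↑ wrappers are left implicit (tags 'pair'/'proj' and the function name are ours):

```python
# L-step for the additive conjunction: (↑(t₁,t₂)|↑π_j e) → (t_j|e).
# u = ('pair', t1, t2), f = ('proj', j, e); the coercions ↑ are elided.
def l_step_pair(u, f):
    tag_u, t1, t2 = u
    tag_f, j, e = f
    assert tag_u == 'pair' and tag_f == 'proj' and j in (1, 2)
    return (t1 if j == 1 else t2, e)   # the residual cut (t_j|e)
```

The projection index simply selects which component survives in the residual cut.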
Stringo-embedding to linear logic
Γ ⊢ Δ ↪ Γ+ ⊢ Δ-
(C^t)+ = !?C*
(C^q)+ = !C*
(C^t)- = ?C*
(C^q)- = ?!C*
(A ∧_a B)* = A- & B-
(A ∧_m B)* = A+ ⊗ B+
(A →_m B)* = A+ -o B-
(A ∨_a B)* = A+ ⊕ B+
(A ∨_m B)* = A- par B-
(¬ A)* = (A^⊥)+
where
C ::= A ∧_a B | A ∧_m B | A →_m B | A ∨_a B | A ∨_m B | ¬ A
A, B ::= C^t | C^q
Γ, Δ ::= A1, ..., An
Constrictive morphisms are η-expansions of the axiom rule. Cutting a
proof of A on the side where A is irreversible against the reversible
side of an η-expansion (and reducing the cut) sends the proof into
LKη.
LKθ
———
*θ-constraint* on introduction rules (Def 31, Section 5.7)
V ::= μ a.c s.t. a is main in c if attractive
E ::= μ~x.c s.t. x is main in c if attractive
u ::= (V,V) ∧_a
| (t*t) ∧_m
| λ(x,a).c_{x,a} →_m
| ι_j t ∨_a
| λ(a₁,a₂).c_{a₁,a₂} ∨_m
| ¬
| ...
f ::= π_j e ∧_a
    | λ(x₁,x₂).c_{x₁,x₂} ∧_m
| t.e →_m
| [E+E] ∨_a
| [e|e] ∨_m
| ¬
| ...
where c_L means that, for each attractive x (resp. a) in L, x (resp. a)
is either main in c or does not occur in c (in other words, only one
variable of L occurs in c, and this one is main).
Note: probably in order to have a definition uniform over the additive
and multiplicative rules, DJS tolerates that V (resp. E) also has the
form μ _.c (resp. μ~_.c). However, this tolerance does not seem necessary.
θ-proofs are the following:
for →_m : (y|b) = (y|μ a.(λx.μ _.(x|a)|b) . μ~x.(λ_.μ a.(x|a)|b))
for ∧_m : (y|b) = ((μ a₁.(y|λx₁.μ~_.(x₁|a₁))) * (μ a₂.(y|λ_.μ~x₂.(x₂|a₂)))|b)
for ∨_m : (y|b) = (y|[μ~x₁.(λa₁.μ _.(x₁|a₁)|b) | μ~x₂.(λ_.μ a₂.(x₂|a₂)|b)])
for ∧_a : (y|b) = (y|π₁ μ~x₁.(y|π₂ μ~x₂.((↑x₁,↑x₂)|b)))
for ∨_a : (y|b) = (ι₁ μ a₁.(ι₂ μ a₂.(y|[↑a₁+↑a₂])|b)|b)
Cutting with θ-proofs ensures that the result falls in the θ-fragment.
LKηθ
————
*ηθ-constraint* on introduction rules
u ::= (V,V) ∧_a
| (V*V) ∧_m
| λ(x,a).c_{x,a} →_m
| ι_j t ∨_a
| λ(a₁,a₂).c_{a₁,a₂} ∨_m
| ¬
| ...
f ::= π_j e ∧_a
    | λ(x₁,x₂).c_{x₁,x₂} ∧_m
| V.E →_m
| [E+E] ∨_a
| [E|E] ∨_m
| ¬
| ...
Reversion
—————————
The reversions R_b(c) and R_y(c) (obtained by a cut on the
irreversible side of an η-expanded axiom - Lemma 42) can be
described as follows:
for →_m: R_b(c) =_def (λ(x,a).c{b:=↑(↑x.↑a)}|b)
for ∧_m: R_y(c) =_def (y|λ(x₁,x₂).c{y:=↑(↑x₁*↑x₂)})
for ∨_m: R_b(c) =_def (λ(a₁,a₂).c{b:=↑[↑a₁|↑a₂]}|b)
for ∧_a: R_b(c) =_def ((μ a₁.c{b:=↑π₁↑a₁} , μ a₂.c{b:=↑π₂↑a₂})|b)
for ∨_a: R_y(c) =_def (y|[μ~x₁.c{y:=↑ι₁↑x₁} + μ~x₂.c{y:=↑ι₂↑x₂}])
for ¬: R_y(c) =_def (y|<μ a.c{y:=↑<↑a>}>)
for ¬: R_b(c) =_def (<μ~x.c{b:=↑<↑x>}>|b)
where c{b:=↑f} (resp. c{y:=↑u}) is like c[b:=↑f] (resp. c[y:=↑u])
except that, if c is (u|b) (resp. (y|f)), the resulting logical
cut and the consequent trivial S2-cuts are reduced.
The r-step replaces S2 by
(μ a→.c|↑f) → (R_a(c)|↑f)
(↑u|μ~x←.c) → (↑u|R_x(c))
LKp and LKηp
————————————
*polarisation constraint* (Def 39, Section 6)
positive formulas on the right are coloured q and negative formulas
on the left are coloured t
Status of negation
DJS say that negation has to flip colours, but I do not understand
the argument. DJS also say that only a colour-flipping negation is
reversible, but I do not understand that argument either.
A colour-flipping negation is just the sign of a change of
side in the sequent and we use notation ¬ for this negation.
It seems to me that there are otherwise two non colour-flipping
negations, a negative one (similar to "⊥ par A^⊥" and reversible on
the right) and a positive one (similar to "1 ⊗ A^⊥" and reversible
on the left). We use the notations ¬+ and ¬- for these latter two.
Syntax of LKηp
The syntax is the one of LKη (we may use the same notation for all
three negations).
Polaro embedding in linear logic
Γ ⊢ Δ ↪ Γ+ ⊢ Δ-
(N^t)+ = !N*
(P^q)+ = P*
(N^t)- = N*
(P^q)- = ?P*
(A ∧_a B)* = A- & B-
(A ∧_m B)* = A+ ⊗ B+
(A →_m B)* = A+ -o B-
(A ∨_a B)* = A+ ⊕ B+
(A ∨_m B)* = A- par B-
(¬+ A)* = (A^⊥)-
(¬- A)* = (A^⊥)+
(¬ P)* = (P^⊥)*
(¬ N)* = (N^⊥)*
X* = ?X
Y* = !Y
where
N ::= A ∧_a B | A →_m B | A ∨_m B | ¬- A | ¬ P | X
P ::= A ∧_m B | A ∨_a B | ¬+ A | ¬ N | Y
A, B ::= N^t | P^q
Γ, Δ ::= A1, ..., An
and the duality operator ^⊥ flips colours
Note: the treatment of atoms is not fully correct in DJS: the notation
X^t (resp. X^q) is abusively used to denote negative (resp. positive)
atoms.
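The polaro translation above can be sketched the same way as a small recursive function. Atoms here carry their polarity explicitly (our choice, given the note about DJS's overloaded X/Y notation); the ASCII connectives and all function names are our additions, and the three negations are omitted since they need the duality operator ^⊥:

```python
# Formulas: ('atom', 'neg'|'pos', name), or (op, A, B) with A, B coloured as
# ('t', N) or ('q', P) per the polarisation constraint; a toy encoding.
def star(f):
    if f[0] == 'atom':
        _, pol, name = f
        return ('?' if pol == 'neg' else '!') + name   # X* = ?X, Y* = !Y
    op, a, b = f
    if op == 'and_a': return f"({minus(a)} & {minus(b)})"
    if op == 'and_m': return f"({plus(a)} (x) {plus(b)})"
    if op == 'imp_m': return f"({plus(a)} -o {minus(b)})"
    if op == 'or_a':  return f"({plus(a)} (+) {plus(b)})"
    if op == 'or_m':  return f"({minus(a)} par {minus(b)})"
    raise ValueError(op)

def plus(A):
    """(N^t)+ = !N*  and  (P^q)+ = P*."""
    colour, f = A
    return '!' + star(f) if colour == 't' else star(f)

def minus(A):
    """(N^t)- = N*  and  (P^q)- = ?P*."""
    colour, f = A
    return star(f) if colour == 't' else '?' + star(f)
```

For instance, (Y^q →_m X^t)* = (P^q)+ -o (N^t)- = !Y -o ?X for a positive atom Y and a negative atom X.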