i. untyped Systems
3. untyped arithmetic expressions
- Grammar: terms t ::= true | false | 0 | if t then t else t | succ t | pred t | iszero t
- Syntax defined either by inference rules, or by induction over depth (S_{0} = {}, S_{1} = { constants }, S_{i} = { constructors applied to arguments from S_{i-1} })
- Induction over terms: either over size, over depth, or structural induction
- Semantic styles
Evaluation
- {` if true\ "then"\ t_2\ "else"\ t_3 rarr t_2 `} evaluation with precondition
- {` (t_1 rarr t_1^') / (if t_1\ "then"\ t_2\ "else"\ t_3 rarr if t_1^'\ "then"\ t_2\ "else"\ t_3) `} small step evaluation of the condition
- determinacy of one-step evaluation ({`if t rarr t' and t rarr t'' \ "then"\ t' = t'' `}) must be proven for the given rules!
- {`rarr`} 1-step evaluation relation
- {`rarr^**`} multi-step evaluation relation
- a term t is in normalForm, if no evaluation rule applies to it, i.e. ∄ t' : t → t'
- uniqueness of normal forms: if t→*u' and t→*u" with u' and u" normalforms, then u'=u"
- termination of evaluation: for every term t there is a normalform u with t →* u
- the above is smallStep evaluation (one subterm evaluates one step at a time)
- bigStep evaluation: subterms are evaluated to their final values: {`( t_1 rArr "true" \ \ \ t_2 rArr v_2 ) / ("if"\ t_1\ "then " t_2 " else " t_3 rArr v_2)`} (attention: the book uses ⇓, which is not available in asciiMath, I used ⇒ instead)
4. an ML implementation
is straightforward
5. The untyped Lambda-Calculus
- Alonzo Church 1941
- Terms: variable, abstraction, application: {`t ::= x | lambda x.t | t\ t`}
- evaluation = betaReduction: {` ( lambda x.t ) u rarr [x |-> u] t`} which substitutes in t all free occurrences of x (bound before by the dropped λ) by u
- precedence: application is left associative {`t u v ::= (t u) v`} and body in abstractions extends as far to the right as possible
- redex = reducible expression that may be reduced by betaReduction i.e. of the form {` ( lambda x.t ) u `}
- different reduction strategies
- fullBetaReduction any redex may be reduced at any time
- normalOrderStrategy leftmost, outermost redex is reduced first
- callByName disallows reductions inside abstractions; arguments are passed unevaluated, e.g. Algol-60
- callByValue only the outermost redex is reduced, and only after its right-hand side has been reduced to a value. callByValue is strict: arguments are evaluated first! In contrast are lazy strategies like callByName or callByNeed (callByNeed adds sharing between terms, graph instead of tree reduction, e.g. Haskell)
features
- currying: simulate multi-argument functions by higher order functions e.g. {`add(l, r) rArr lambda l. lambda r. body`}
- partial evaluation {`(lambda l. lambda r. body) a rarr [l |-> a ] (lambda r. body)`} in other languages, you must track which arguments are substituted when ...
- substitution must only substitute free variables and avoid name capture, simple method here:
- equivalence of terms: the names of bound variables do not matter!
- {`{([x |-> s]x,=,s,) ,([x |-> s]y, =,y,if x != y) ,([x |-> s](lambda y.t), =,(lambda y. [x |-> s] t),if x != y and y !in "freeVariables"(s) "otherwise rename y") ,([x |-> s](t u), =, ([x |-> s] t) ([x |-> s] u),) :}`}
- if a substitution does not work, we rename bound variables
- church booleans {`{("tru",=,lambda t.lambda f. t, "true returns first arg") ,("fls",=,lambda t.lambda f. f, "false returns second arg") ,("or",=,lambda l.lambda r. l\ "tru"\ r, "if l then tru else r") :}`}
- church number {`{("n0",=,lambda f.lambda x. x, "0 applications of f to x") ,("n1",=,lambda f.lambda x. f x, "1 applications of f to x") ,("n2",=,lambda f.lambda x. f(f x), "2 applications of f to x") ,("scc",=,lambda n.lambda f.lambda x. f (n f x), "successor, apply f one more time") ,("add",=,lambda l.lambda r. lambda f. lambda x. l\ f\ (r f x), "apply f r times and l times to the value") ,(,,, "also short (but tricky) definitions for multiplication and power") :}`}
- church number ==> Inf:php/e19Closure.php
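Since Python has first-class closures, the Church encodings transcribe directly; to_int and to_bool are my own decoding helpers for testing, not part of the calculus:

```python
tru = lambda t: lambda f: t          # true returns first arg
fls = lambda t: lambda f: f          # false returns second arg
orr = lambda l: lambda r: l(tru)(r)  # if l then tru else r

n0  = lambda f: lambda x: x          # 0 applications of f to x
n1  = lambda f: lambda x: f(x)       # 1 application
n2  = lambda f: lambda x: f(f(x))    # 2 applications
scc = lambda n: lambda f: lambda x: f(n(f)(x))            # one more f
add = lambda l: lambda r: lambda f: lambda x: l(f)(r(f)(x))

to_int  = lambda n: n(lambda k: k + 1)(0)   # decode by counting applications
to_bool = lambda b: b(True)(False)
```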
- λNB λ-calculus plus nats and booleans from chapter 3
- divergent combinator {`"omega" = (lambda x.\ x\ x) (lambda x.\ x\ x)`} reducing omega yields omega again
- recursion not directly supported, need fixPointCombinator or call-by-value-Y-combinator operator {`"fix" = lambda f.(\lambda x . f (lambda y . x\ x\ y) ) (\lambda x . f (lambda y . x\ x\ y) )`}
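The call-by-value fixed-point combinator works unchanged in Python (a strict language); the eta-expansion (lambda y: ...) is exactly what prevents divergence. The factorial body is my own example:

```python
# fix = λf.(λx. f (λy. x x y)) (λx. f (λy. x x y))
fix = lambda f: (lambda x: f(lambda y: x(x)(y)))(lambda x: f(lambda y: x(x)(y)))

# a recursive function defined without ever naming itself:
fact = fix(lambda self: lambda n: 1 if n == 0 else n * self(n - 1))
```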
- the evaluation rules (e.g. which part is a value or must allow a reduction step) determine the evaluationStrategy and thus properties like singleStepEvaluationDeterminacy etc.
6. nameless representation of terms
Terms up to renaming of bound variables are cumbersome to handle; there are different ways around this, here Nicolas de Bruijn's nameless terms. Instead of names, indices are used: 0 for the innermost λ, 1 for the second-innermost, ... e.g. {`lambda x. lambda y. x (y\ x) rarr lambda . lambda . 1\ (0\ 1)`}
- define terms with free variables ∈ [0, ..., n]
- for free variables we need a naming context {`NN rarr "name"`}
- for substitution we need index shifting
ii Simple Types
8. Typed Arithmetic Expressions
- stuckTerm: a term that is in normal form but is not a value, i.e. evaluation got stuck. The typing system should rule these out
- t: T or t ∈ T means t obviously evaluates to a value of the appropriate form, obviously meaning: can be checked statically without evaluation
- typing rules for Bool
- T ::= Bool
- true: Bool
- false: Bool
- {` ( t_i : "Bool" ) / ( "if"\ t_1\ "then"\ t_2\ "else"\ t_3\ :\ "Bool" ) `}
- typing derivation = tree of instances of typing rules
- uniqueness of types: each term has at most one type (if typable, then unique). Proof: structural induction over the typing rules
- property: safety = soundness = progress + preservation: a well-typed term never gets stuck
9. simply typed Lambda-Calculus
- function types {`T ::= "Bool"\ |\ T rarr T`}
- {`"assumptions"|--t:T`} typing becomes a three-place relation (context, term, type), e.g. {`(x:S |-- t:T) / (|-- lambda x: S . t\ :\ S rarr T) `}
- pure simply typed Lambda-calculus
- terms: abstraction modified to {`t ::=\ ...\ lambda x : T.t\ ...`}
- values {`v ::= lambda x : T.t`}
- types {`T ::= T rarr T`}
- contexts {`Gamma ::= O/ | Gamma, x:T`}
- Evaluation: application ignores typing
- Typing {`{ ((x:T in Gamma) / (Gamma |-- x : T), "T-var"), ((Gamma, x : T_1 |-- t_2 : T_2) / (Gamma |-- lambda x : T_1.t_2 : T_1 rarr T_2), "T-abs"), ((Gamma |-- t_1: T_1 rarr T_2\ \ Gamma |-- t_2 : T_1) / (Gamma |-- t_1\ t_2\ :\ T_2), "T-app"):}`}
- Progress, Preservation of Types etc. are proved by structural induction
- the → type-constructor comes with 2 kinds of typing rules
- introductionRule: e.g. create the type by λ
- eliminationRule: the type is used by an application t t
- Curry-Howard correspondence (or isomorphism), can be extended to many type systems:
Logic | Programming |
---|---|
propositions | types |
proposition P ⊃ Q | type P → Q |
proposition P ∧ Q | type P × Q |
proof of proposition P | term t of type P |
proposition P is provable | type P contains some term |
- erasure: remove types from terms: erase(λ x:T .t) = λ x. erase(t) etc.: translate forward/backward between untyped and typed calculus
- curryStyle define terms and evaluation first, typing afterwards
- churchStyle define well-typed terms and define evaluation only for well-typed terms
11. simple extensions
- base types: uninterpreted types and their ops (e.g. int, float, Bool from Hardware)
- unitType simplest possible Type: Singleton unit: Unit
- sequence t; u (ignore result of t, useful for languages with sideeffects), two equivalent formalizations
- either {`{( (t rarr t') / (t;u rarr t';u) , "EvalSeq"), (unit;t rarr t , "EvalSeqNext"), ((Gamma |-- t: Unit\ \ Gamma |-- u\ :\ U) / (Gamma |-- t;u\ :\ U) , "TypingSeq - only for Unit terms, like assignment"):}`}
- or {`t;u\ "as abbreviation for"\ (lambda x : "Unit" . u) t`}
- wildcards λ_:S.t as abbreviation for λx:S.t where x does not occur in t
- derived Forms = syntactic sugar = language features already contained in the base language
- ascription: t as T: give a type to a term
- let binding: let x=t in u. deriving it from (λ x : T.u) t works only for evaluation; the typechecker has to compute T
- pairs and tuples {`{({t_1, ..., t_n}, "term n-tuple"), (t.i, "term projection"), ({T_1, ..., T_n}, "Type n-tuple"),("...", "etc"):}`}
- analogous records
- { l_{1} = t_{1}, ...} with labels l_{i} instead of indices; omitted labels may just use an index
- pattern matching over (recursive) structure: {`{(p ::= x | {l_1 = p_1, ...}, "term pattern: variable or record"), ( "let"\ p = t\ "in"\ t , "term pattern binding"), ("match"(x, v) = [x |-> v], "matchingRuleVariable"), (("foreach"\ i\ "match"(p_i, v_i) = sigma_i) / ("match"({l_1=p_1, ...}, {l_1=v_1, ...}) = sigma_1 @ sigma_2 @ ...), "matchingRuleRecord"), ("let"\ p=v\ "in"\ t rarr "match"(p, v) t, "evaluationLetValue"), ((t rarr t') / ("let"\ p=t\ "in"\ u\ rarr\ "let"\ p=t'\ "in"\ u), "evaluationLetProgress"):}`}
- sum = T + U: binary variants
- inl: T → T+U # inject left
- inr: U → T+U # inject right
- case t of inl x ⇒ t | inr x ⇒ t # case depending on which variant (left or right)
- however, we lose uniqueness of types. different solutions possible e.g. explicit annotation like inl t as T
- variants = sums generalized to labeled variants
- <l_{1} : T_{1}, l_{2} : T_{2}, ...> # variant type
- <l=t> as T # tagging term = lifting t to labeled variant
- case t of <l_{1} = x_{1}> ⇒ t_{1}, <l_{2} = x_{2}> ⇒ t_{2}, ... # case of variant
- special case enumeration e.g. <monday: Unit, tuesday: Unit, ...>
- single field variants V=<l:T>, e.g. to mark a type: <dollar:Float>, <euro:Float>
- general recursion: the fixPointCombinator from the untyped calculus is not expressible in the simplyTyped calculus, thus we need a language construct fix etc.
- {`{("fix"\ t, "# term fixpoint of t"),("fix" (lambda x:T.u) rarr [x |-> (fix (lambda x:T.u)) ] u,"# evaluationFixBeta"), ((t rarr t') / ("fix"\ t rarr "fix"\ t'), "# evaluationFix"), ((Gamma |-- t : T rarr T ) / (Gamma |-- "fix"\ t : T), "# typingFix"), ("letrec"\ x: T = t\ "in"\ u overset("def")(=) "let"\ x = "fix"\ (lambda\ x: T . t)\ "in"\ u , "# letrec derived form") :}`}
- Lists (Lisp style)
- nil[T] # empty List of T
- cons[T] t t #construct list by adding an element of Type T as head before an existing List T
- isnil[T] t #test if empty
- head[T] t #head element of type T of List
- tail[T] t # tail (of type List T)
12. Normalization
normalizable: evaluation of a well-typed term halts in a finite number of steps. This is true for the pure simply typed lambda-calculus, but of course false for extensions like fix (recursion).
Proof
- For a simply typed lambda-calculus over a single baseType A.
- induction over size of term does not work, because application may increase size
- define a set (used as a predicate) R_{T} of closed terms of type T for each type T, by: R_{A}(t) iff t halts; R_{T→U}(t) iff t halts and for all s with R_{T}(s) also R_{U}(t s)
- Lemma1: if R_{T}(t) then t halts. obvious
- Lemma2: if t: T and t → t' then R_{T}(t) iff R_{T}(t'). by induction over structure of Type T
- Lemma3: if x_{1}: T_{1}, ..., x_{n}: T_{n} ⊢ t : T and v_{i} closed terms of type T_{i} with R_{T i}(v_{i}) then R_{T}([x_{1} ↦ v_{1}, ..., x_{n} ↦ v_{n}] t). trick: instead of proving for closed terms we generalize to instances of open terms. by induction on derivation and Lemma2
- Theorem Normalization: if ⊢ t: T then t is normalizable. by Lemma3 and Lemma1
13 References
impure language feature: side effect to change value in store
- allocation ref: r = ref 5; allocate new cell, initialize with Nat 5
- dereference !: !r; to read the value from the cell
- assignment := : r := 77; change the value in the cell. assignment yields Unit, thus, we can use sequencing.
- aliasing = : s = r; s and r point now to the same cell
- references work also for compound types, e.g. functions, e.g. array = Nat → Nat
- NatArray = Ref (Nat → Nat);
- newarray = λ _:Unit. ref (λ n : Nat . 0);
- lookup = λ a:NatArray. λ n : Nat . (!a) n;
- update = ....
- Store Typing: Σ = [l_{1} ↦ T_{1}, ...]
- Γ | Σ ⊢ l : Ref T : Typing is a 4-way relation TypingContext | StoreType ⊢ term : Type
- the rules get quite lengthy ....
14 Exceptions
- raise t # term raise exception with info
- try t with t # term handle exception
- (raise v) t → raise v # evaluationRaise1
- v (raise w) → raise w # evaluationRaise2
- ...
- try v with t → v # evaluationTryV
- try raise v with t → t v # evaluationTryRaise
- ....
- some difficulties with typing: every term might raise an exception (e.g. out of memory), thus typing is no longer unique. different solutions, like subtyping
iii Subtyping
15 Subtyping
- <: subtype relationship: reflexive and transitive
- S <: T means
- any term of type S can safely be used, where a term of type T is expected
- the elements of S are a subset of the elements of T
- {`(Gamma |-- t : S\ \ S <: T) / (Gamma |-- t : T) " #rule of subsumption"`}
- subtyping for records
- {l_{1}: T_{1}, ..., l_{n+k}: T_{n+k}} <: {l_{1}: T_{1}, ..., l_{n}: T_{n}}
- if S_{i} <: T_{i} then {..., S_{i}, ...} <: {..., T_{i}, ...}
- { ... } <: { Permutation }
- function {`(T_1 <: S_1 \ \ S_2 <: T_2) / (S_1 rarr S_2 <: T_1 rarr T_2) " # contravariant in argument, covariant in result"`}
- Top supertype of any Type
- Bot subtype of any Type: is empty (no values)
- ascriptions t as T now become casts. Upcasting forgets part of the type information; downcasting should use a runtime test, and possibly different processing for different types.
- problem with subtype semantics: e.g. if we use Int <: Float, we need a runtime tag, (e.g. memory tag) to decide whether integer or float ops are needed
- alternative: coercion semantics ==> translate to language without subtyping. translation function for types, subtyping and typing
- T ∧ U:Intersection Operator on Types, more precisely meet = greatest lower bound
- T ∧ U <: T
- T ∧ U <: U
- {`(S <: T\ \ S <: U) / (S <: T ^^ U)`}
- Union Types: Union of elements of types, no tag as with sums or variants
16 Metatheory of subtyping
Subtyping rules (subsumption, transitivity) can not be applied directly for analysis, because it might not terminate .... Solution:
- replace declarative subtyping from last chapter by two relations
- algorithmic subtyping and
- algorithmic typing
algorithmic subtyping
- the three record subtyping rules (width, depth, permutation) can be combined into a single rule {`({k_i} supe {l_j} \ \ k_i = l_j \ "implies"\ S_i <: T_j)/ ( {k_i: S_i} <: {l_j: T_j}) `}
- reflexivity and transitivity are not needed for derivations (already implied in record rules)
- algorithmic subtyping ↦ is generated by {`{ (|-> S <: "Top"), (({k_i} supe {l_j} \ \ if k_i = l_j \ "then"\ |-> S_i <: T_j)/ ( |-> {k_i: S_i} <: {l_j: T_j})), ((|-> T_1 <: S_1\ \ |-> S_2 <: T_2) / (|->\ S_1 rarr S_2\ <:\ T_1 rarr T_2)) :} `}
- sound and complete: S <: T iff ↦ S <: T
- we get a straightforward recursive algorithm for subtype(S, T) which always terminates
algorithmic typing is similar
- we need to modify application rule, to allow a subtype as argument
joins and meets
- join = supremum = least upper bound: J = S ∨ T iff S <: J and T <: J and ∀U if S <: U and T <: U then J <: U
- meet = infimum = greatest lower bound : M = S ∧ T iff M <: S and M <: T and ∀U if U <: S and U <: T then U <: M
- existence depends on specific typing defs
- need not be unique, but are subtype of each other
18 case study: imperative objects
oo
- multipleRepresentation: the code a method executes is determined by the object; by contrast, AbstractDataTypes consist of a set of values and a single implementation
- encapsulation: the internal representation is hidden from the outside
- subtyping type or interface is name and type of operations
- inheritance
- openRecursion invoke method of self
objects
- a counter object
c = let x = ref 1 in { get = λ_:Unit. !x, inc = λ_:Unit. x := succ(!x)};
- type Counter = {get:Unit → Nat, inc:Unit →Unit}
- counter Generator:
newCounter = λ_:Unit. let x = ref 1 in { get = λ_:Unit. !x, inc = λ_:Unit. x := succ(!x)};
- subtyping: object with additional methods
- grouping instance variables into a record
classes
- to reuse methods for a subclass, we have to parametrize with an arbitrary record:
counterRep = {x: Ref Nat}; counterClass = λr:counterRep . { get = λ_:Unit. !(r.x), inc = λ_:Unit. r.x := succ(!(r.x))}; newCounter = λ_:Unit. let r = {x = ref 1} in counterClass r;
- now we can build a subclass
resetCounterClass = λr:counterRep . let super = counterClass r in { get = super.get, inc = super.inc, reset = λ_:Unit. r.x := 1};
self and open recursion
- the very record we are constructing is passed back as self
SetCounter = {get: Unit → Nat, set: Nat → Unit, inc: Unit → Unit}; setCounterClass = λr:counterRep . fix ( λself:SetCounter . { get = λ_:Unit. !(r.x), set = λi:Nat. r.x := i, inc = λ_:Unit. self.set (succ(self.get unit))});
- the more general form of late binding = open recursion moves the fix operator from the class to the object creation
setCounterClass = λr:counterRep . λself:SetCounter . { get = λ_:Unit. !(r.x), set = λi:Nat. r.x := i, inc = λ_:Unit. self.set (succ(self.get unit))}; newSetCounter = λ_:Unit. let r = {x=ref 1} in fix (setCounterClass r);
- now counterClass is an abstraction of record of instance variables and of the class (methods)
- however, creating instances will diverge, because of a wrong evaluation order
- we can force the wanted order, by a trick: change self in a (no argument) function
setCounterClass = λr:counterRep . λself: Unit → SetCounter . { get = λ_:Unit. !(r.x), set = λi:Nat. r.x := i, inc = λ_:Unit. (self unit).set (succ((self unit).get unit))}; newSetCounter = λ_:Unit. let r = {x=ref 1} in fix (setCounterClass r) unit;
- however this is not efficient, because it postpones the calculation of method tables. Alternative: instead of SetCounter use Ref SetCounter (indirection by reference instead of by abstraction)
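The whole development (record of instance variables, self as a thunk, fix at object creation) carries over to Python; the dict-of-closures encoding and the variadic eta-expansion of fix are my own adaptations:

```python
# call-by-value fixed-point combinator, eta-expanded for any arity
fix = lambda f: (lambda x: f(lambda *a: x(x)(*a)))(lambda x: f(lambda *a: x(x)(*a)))

def set_counter_class(r):
    """r: record of instance variables; self: a thunk () -> method record."""
    def body(self):
        def obj():                    # the extra thunk that delays evaluation
            return {
                "get": lambda: r["x"],
                "set": lambda i: r.__setitem__("x", i),
                "inc": lambda: self()["set"](self()["get"]() + 1),
            }
        return obj
    return body

def new_set_counter():
    r = {"x": 1}                      # corresponds to {x = ref 1}
    return fix(set_counter_class(r))()  # tie the knot, then force the thunk
```

inc goes through self, so a subclass that overrides get or set would change inc's behaviour: that is the open recursion.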
19 case study: featherweight Java
- Atsushi Igarashi, Benjamin Pierce and Philip Wadler. ACM Transactions on Programming Languages and Systems, 23(3):396-450, May 2001.
- structuralTypeSystem as used in the preceding chapters: subtypes etc. are derived from the structure of the types, the names of the types are irrelevant, i.e. just abbreviations for the definitions
- nominalTypeSystem as used in Java and most real languages: subtypes etc. are declared explicitly - there is no subtype relationship, unless declared in the class definition
- featherweight Java is a very small subset of Java: no assignments, no side effects, no interfaces .....
- Syntax
- prog ::= CL* t # program classes and term
- CL ::= class C extends C { C f, .....; K; M* } # class declaration
- K ::= C(C f, ...) { super(f, ...); this.f = f; ....} # constructor arguments 1:1 with fields, all fields must be assigned from arg of same name
- M ::= C m(C x, ...) { return t; } # method declaration, only a single return
- t ::= x | t.f | t.m(t*) | new C(t*) | (C) t # term: variable, field access, method invocation, object creation, cast
- v ::= new C(v*) # values: object creation
- Object is top superclass
iv Recursive Types
20 Recursive Types
- μ = explicit recursion operator for types
- NatList = μX. <nil:Unit, cons: {Nat, X}> # declaration of list with variant <...> and pairs {...}
- two approaches what is relation between a Type and its one step unfolding (e.g. NatList and <nil:Unit, cons: {Nat, NatList}>)
- equiRecursive: the type and its unfolding are definitionally equal - everything still works except structural induction
- isoRecursive: explicit fold and unfold, forming an isomorphism
- fold[T] t # term fold
- unfold[T] t # term unfold
- fold[T] v # value fold
- X # Type Variable
- μX.T # recursive Type
- unfold[S] (fold [T] v) → v # evaluation unfold fold
- ...
- both approaches are practically used; isoRecursive languages normally insert the fold/unfold at recursive definitions automatically ....
21 metatheory of recursive types
- definitions
- Theorem Knaster, Tarski 1955:
- intersection of all Fclosed sets is least fixedpoint of F
- union of all Fconsistent sets is greatest fixedpoint of F
- corollary
- principle of induction: X F-closed → leastFixpoint F ⊆ X
- principle of coinduction: X F-consistent → greatestFixpoint F ⊇ X
- we use only 3 typeconstructors →, × and Top
- a tree(type) is a partial function T: {1,2}^{*} → {→, ×, Top} with
- T(ε) is definded # empty sequence
- if T(πσ) is defined, then also T(π) # concatenation / prefix
- if T(π) = → or × then T(π1) and T(π2) are defined
- if T(π) = Top then T(π1) and T(π2) are undefined
- T_{1} × T_{2} ==> (T_{1} × T_{2})(ε) = × and (T_{1} × T_{2})(iπ) = T_{i}(π)
- T_{1} → T_{2} ==> analogous
- define Ŝ: 2^{T × T} → 2^{T × T} by
- Ŝ(R) = {(T, Top) | T a tree type}
∪ {(S_{1} × S_{2}, T_{1} × T_{2}) | (S_{1}, T_{1}), (S_{2}, T_{2}) ∈ R}
∪ {(S_{1} → S_{2}, T_{1} → T_{2}) | (T_{1}, S_{1}), (S_{2}, T_{2}) ∈ R}
- thus, Ŝ maps a set of pairs of types to the pairs derivable from it in one subtyping step
- a finite tree S is subtype of T if (S,T) ∈ leastFixpoint(Ŝ|_{finite trees})
- a tree S is subtype of T if (S,T) ∈ greatestFixpoint(Ŝ)
- again transitivity and reflexivity of the fixpoints can be derived by induction, and need not be postulated!
- GeneratorSet G_{x} = {X ⊆ U | x ∈ F(X)}; G: U → 2^{2^{U}}
- F is invertible iff G_{x} is empty or contains unique minimal element X ∈ G_{x} with ∀Y ∈ G_{x}: X ⊆ Y
- ↑ means undefined or divergence
- ↓ means defined or termination
- support_{F}(x) = If ∃X = unique minimal element in G_{x} then X else ↑
- gfp_{F}(X) = if support(X) ↑ then false elif support(X) ⊆ X then true else gfp_{F}(support(X) ∪ X)
- this is a terminating function returning whether a pair X = (S, T) is in the subtype relation or not
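A sketch of gfp_F parameterized by a support function, plus a demo support for subtyping finite tree types over Top and → (the set-of-pairs encoding is mine; termination holds for finite trees only, μ-types need the extensions mentioned next):

```python
def support_set(support, xs):
    """Lift element-wise support to sets: union; None (the up-arrow)
    if any member's support is undefined."""
    out = set()
    for e in xs:
        s = support(e)
        if s is None:
            return None
        out |= s
    return out

def gfp(support, xs):
    """gfp_F(X): is the set X contained in the greatest fixed point?"""
    s = support_set(support, xs)
    if s is None:                   # support(X) undefined: false
        return False
    if s <= xs:                     # support(X) subset of X: true
        return True
    return gfp(support, s | xs)     # else recurse on support(X) union X

def subtype_support(pair):
    """Demo: support for subtyping pairs of types "Top" / ("arrow", T1, T2);
    the support of a pair is the set of premises needed to derive it."""
    s, t = pair
    if t == "Top":
        return set()                            # axiom: anything <: Top
    if isinstance(s, tuple) and isinstance(t, tuple):
        (_, s1, s2), (_, t1, t2) = s, t
        return {(t1, s1), (s2, t2)}             # contra-/covariant premises
    return None                                 # no rule applies
```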
extension for μ types, isoRecursive etc.
v Polymorphism
22 Type Reconstruction
type reconstruction: algorithm to compute principal type of term, with missing type annotations
- type substitution e.g. σ = [X ↦ T, Y ↦ U]
- theorem: preservation of typing under substitution: if Γ ⊢ t : T then σΓ ⊢ σt : σT
- two views on typevariables
- ∀σ term is well typed for every substitution
- ∃σ does there exist a substitution to get a well typed instance?
- a solution for a context Γ and a term t is a substitution σ and a type T with σΓ ⊢ σt : T
- σ unifies a constraint set {S_{i} = T_{i}} iff σS_{i} and σT_{i} are identical for each i
- constraint typing relation Γ ⊢ t : T |_{X} C means under context Γ term t has type T if constraints C hold, using new typevariables X
- constraint typing rules are translations of the typing rules
- unification algorithm: unify one equation after the other, expanding the substitution
- less specific ⊑: σ ⊑ σ' iff σ' = γ σ for some substitution γ
- principal unifier or most general unifier σ for C if σ ⊑ σ' for every solution σ'
implicit type annotations: the parser fills in missing type annotations with fresh type variables, or better, a rule is added to the constraint typing relation
let-polymorphism (e.g. in ML): allow to use a let with different typing e.g.
let double = λ f. λ a. f(f(a)) in let dN = double (λ n: Nat. succ n) in let dB = double (λ b: Bool. b) in
- the typing rule is changed to substitute the definition into the body, so each use gets its own constraints
- {`{((Gamma |-- [x |-> t]u : U)/(Gamma |-- "let"\ x=t\ "in"\ u: U)), ((Gamma |-- [x |-> t]u : U |_X C)/(Gamma |-- "let"\ x=t\ "in"\ u: U\ |_X\ C)):}`}
- however this still needs some fixes for follow-up problems ...
23 universal types
Variants of polymorphism
- parametric polymorphism with type variables, as elaborated in systemF
- ad-hoc polymorphism: different behaviour on different types, e.g. overloading
- multi-method dispatch is a generalization, e.g. CLOS allows methods to be specialized upon any or all of their required arguments, not only the receiver
- intensional polymorphism or even more general with patterns...
systemF: lambda calculus extended with type variables etc.
- λX.t # type abstraction
- t[T] # type application
- λX.t # value type abstraction
- X # term type variable
- ∀X.T # universal type
- Γ,X # context type variable binding
- {`(Gamma,X\ |--\ t: T)/(Gamma |--\ lambda X. t: AA X.T)`} # typing: Type abstraction
- {`(Gamma |--\ t: AA X.T)/(Gamma |--\ t[U]: [X |-> U] T)`} # typing: Type application
- type erasure as above. Wells 1994: given a closed untyped term, it is undecidable whether there is a well-typed term in System F which erases to it
24 existential types
existential types are similar to module systems, with packing/unpacking types from a module.
26 Bounded Quantification
- parametric polymorphism cannot handle a situation like: modify an argument and return it: we need to say that it returns the type of the argument but must be a subtype to allow the operation
- thus, extend systemF with a subtype relation and refine universal types with subtyping constraints
(i) kernel F_{<:}
- λX<:T.t # term type abstraction is always bounded, unbounded quantification is derived as λX<:Top.t
- ∀X<:T.T # universal type
- ... # all rules are adapted similarly
- {`(Gamma, X <: U |-- S <: T) / (Gamma |-- AAX <: U.S <: AAX <: U . T)`} # subtyping ∀ - kernel rule
- scoping is not obvious from the rules, but crucial, e.g. X <: {a:Nat}, b:X
(ii) Full F_{<:}
- kernel compares quantified types only if bounds are equal, this is lifted here by
- {`(Gamma |-- T_1 <: S_1\ \ Gamma, X <: T_1 |-- S_2 <: T_2) / (Gamma |-- AAX <: S_1.S_2 <: AAX <: T_1 . T_2)`} # subtyping ∀ - full rule: contravariant on left-side, covariant on right-side
Theorems Safety: Preservation (if Γ ⊢ t :T and t → t' then Γ ⊢ t' : T) and Progress (if t closed, well-typed then either t is a value or ∃t': t → t')
- likewise for existential types
- further extensions, like intersection types
27 metatheory of bounded quantification
- algorithmic subtyping sound and complete for both kernel and full
- for kernel F_{<:} the subtyping algorithm terminates on all inputs
- for full F_{<:} the subtyping is undecidable
vi higher-order systems
29 type operators and kinding
kinds "type of types"
- * # kind of proper types
- * ⇒ * # type operations i.e. function from proper types to proper types
- * ⇒ * ⇒ * # kind of functions from proper type to type operators i.e. 2-argument operators
- (* ⇒ *) ⇒ * # kind of function from type operators to proper types
we extend systemF by giving a kind to each type variable X :: K
- λX :: K . T # Type typeOperator abstraction
- T T # Type typeOperator application
- Γ , X::K # context type variable binding
- * #kind of proper types
- * ⇒ * #kind of operators
- {`(X::K in Gamma) /(Gamma |-- X :: K)`} # kinding rule for variables, analogous for abstraction, application and arrow
- {`(S_1 -= S_2\ \ T_1 -= T_2) / (S_1 -> S_2\ -=\ T_1 -> T_2)`} # typeEquivalence arrow, analogous equivalence for abstraction ....
- these are 3 additions to the simply typed lambdaCalculus
- rules of kinding: how to combine type expression
- whenever a type appears, we must check that it is well formed, i.e. has kind *
- rules for definition equivalence
30 Higher-Order Polymorphism
- System F_{ω} similar additions to systemF with universal types ∀X::K.T
- Theorems: Preservation, Progress; typechecking is decidable
- Hierarchy F_{i} with kinds K_{i}
- K_{1} = {*} (?)
- K_{i} = {*} ∪ { J ⇒ K | J ∈ K_{i-1} and K ∈ K_{i-1} } (?)
- K_{ω} = ⋃K_{i}
- F_{1} only kind *, no quantification or abstraction over types => simply typed lambda calculus
- F_{2} only kind *, quantification only over proper types => systemF
till now we have
- λx:T.t # family of terms indexed by terms
- λX::K.t # family of terms indexed by types
- λX::K.T # family of types indexed by types
missing is the family of types indexed by terms: dependentTypes, e.g. allowing to restrict a list type to lists with n elements
- Πx:T_{1} . T_{2}
- however, typechecking will become intractable
31 higher-order subtyping
<: gets even more complicated in F^{ω}_{<:} than it was in systemF