Symmetric categorial grammar
Michael Moortgat
British Logic Colloquium, Nottingham, Sept 2008
Abstract Fifty years ago, Jim Lambek published the seminal ”Mathematics of sentence structure”. In that paper, the familiar parts of speech (nouns, verbs, ...) take the form of formulas of a substructural logic; determining whether a phrase is well-formed, and assigning it an interpretation, i.e. parsing, can then be seen as a process of deduction in the grammar logic. The original Syntactic Calculus has turned out to be strictly context-free, and it has an NP-complete decision problem. The goal of recent extensions of Lambek-style categorial type logics is to find a balance between expressivity and computational tractability: can one combine the ability to recognize patterns beyond context-free with polynomial parsing algorithms? In the talk, I show how symmetric categorial grammar answers that challenge. In addition to the Lambek connectives (product, for phrasal composition, and residual left and right division) one considers a dual family: coproduct with residual left and right difference operations. The two families interact via structure-preserving distributivity principles originally studied by Grishin in 1983. I discuss model-theoretic semantics for the extended system.
I show how its derivations can be given a proofs-as-programs interpretation in the continuation passing style.
from: Lambek 1958
A categorial grammar in the tradition of Lambek consists of two components:
◮ universal: the syntactic calculus, freely generated from a given set of basic types
◮ language-specific: a lexicon, associating each word with a finite number of types
Language generated A string w1 · · · wn is assigned type B by a categorial grammar if
◮ there are Ai such that (wi, Ai) is in the lexicon, and
◮ there is a ⊗-tree X with yield A1, . . . , An such that X → B is derivable in the syntactic calculus.
Expressivity, tractability Searching for the appropriate syntactic calculus, we want to find a balance between
◮ expressivity: the ability to handle patterns beyond CF
◮ computational tractability: a polynomial parsing problem
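The "language generated" definition can be sketched as a recognizer. The fragment below is my own illustration, not part of the talk: it implements only the application rules (A/B combined with B gives A; B combined with B\A gives A), i.e. the AB-grammar core, without the full residuation reasoning of the syntactic calculus, over a toy lexicon.

```python
# Types: atoms are strings; ('/', a, b) encodes a/b (seeks b on the right);
# ('\\', a, b) encodes a\b (seeks a on the left, yields b).
def combine(x, y):
    """Results of combining adjacent types x, y (x to the left of y)."""
    out = []
    if isinstance(x, tuple) and x[0] == '/' and x[2] == y:
        out.append(x[1])                     # a/b . b  =>  a
    if isinstance(y, tuple) and y[0] == '\\' and y[1] == x:
        out.append(y[2])                     # a . a\b  =>  b
    return out

def recognizes(words, lexicon, goal):
    """CYK over all binary bracketings: does some ⊗-tree derive `goal`?"""
    n = len(words)
    chart = {(i, i + 1): set(lexicon[w]) for i, w in enumerate(words)}
    for span in range(2, n + 1):
        for i in range(0, n - span + 1):
            j = i + span
            cell = set()
            for k in range(i + 1, j):
                for x in chart[(i, k)]:
                    for y in chart[(k, j)]:
                        cell.update(combine(x, y))
            chart[(i, j)] = cell
    return goal in chart[(0, n)]

np_, s = 'np', 's'
iv = ('\\', np_, s)        # np\s
tv = ('/', iv, np_)        # (np\s)/np
lexicon = {'lewis': [np_], 'alice': [np_], 'likes': [tv]}
print(recognizes(['lewis', 'likes', 'alice'], lexicon, s))  # True
print(recognizes(['likes', 'lewis'], lexicon, s))           # False
```

The chart cell for each span collects every type derivable for that substring, so the recognizer implicitly quantifies over all ⊗-trees with the given yield.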
In its original formulation, Lambek’s calculus falls short of the set aims.
◮ Lacking expressivity: L is strictly context-free (Pentus 1993)
◮ Computational complexity: L is NP-complete (Pentus 2006)
I compare two strategies to address these problems.
◮ ♦, □ Modalities ⊲ controlled structural rules
◮ ⊗, ⊕ Symmetric categorial grammar (Grishin 1983) ⊲ structure-preserving distributivity principles
Both strategies combine mildly context-sensitive expressivity with polynomial parsing (Moot 2008).
The parts of speech are turned into
◮ formulas: logical perspective
◮ types: computational perspective
A, B ::= p | atoms: s sentence, np noun phrase, . . .
♦A | □A | features: key, lock
A ⊗ B | A/B | B\A fusion, right vs left selection
Modal logic: ‘logic of structures’. Logic of language: grammatical structures.
◮ Frames F = ⟨W, R²♦, R³⊗⟩
⊲ W: ‘signs’, linguistic resources, expressions
⊲ R³⊗: ‘Merge’, grammatical composition
⊲ R²♦: ‘feature checking’, structural control
◮ Models M = ⟨F, V⟩
◮ Valuation V : Form → P(W): types as sets of expressions
Remark The language is purely modal — no Boolean operations.
Inverse duality ⊗ and ♦ as existential multiplicative modalities; the slashes and □ as duals with respect to the rotations of R⊗ and R♦:
x ⊩ ♦A iff ∃y.(R♦xy and y ⊩ A)
y ⊩ □B iff ∀x.(R♦xy implies x ⊩ B)
x ⊩ A ⊗ B iff ∃yz.(R⊗xyz and y ⊩ A and z ⊩ B)
y ⊩ C/B iff ∀xz.((R⊗xyz and z ⊩ B) implies x ⊩ C)
z ⊩ A\C iff ∀xy.((R⊗xyz and y ⊩ A) implies x ⊩ C)
Compare For ♦, □: F, [P] in minimal temporal logic, with F[P]p → p and p → [P]Fp; for ⊗ and its residuals: fusion in relevant logics.
The minimal grammar logic is given by the preorder laws for derivability (reflexivity: A → A; transitivity: from A → B and B → C, deduce A → C), together with the residuation laws below.
Residuation laws relating pairs of opposites (inverse duals):
(res-♦) ♦A → B iff A → □B
(res-l) A ⊗ B → C iff A → C/B
(res-r) A ⊗ B → C iff B → A\C
Completeness With no constraints on the interpretation of Merge/Check we have: A → B is provable iff for all F, V : V(A) ⊆ V(B).
Invariants The laws of the base logic hold no matter what the structural particularities of individual languages turn out to be.
Grammatical notions and their properties, rather than being postulated, emerge from the type structure. Some examples:
◮ Subcategorization, valency. Intransitive np\s, transitive (np\s)/np, ditransitive ((np\s)/np)/np, etc.
◮ Case. Subject s/(np\s), direct object ((np\s)/np)\(np\s), prepositional object (pp/np)\pp, etc.
◮ Complements versus modifiers. Compare exocentric A/B with A ≠ B versus endocentric A/A categories. Optionality of the latter follows.
◮ Filler-gap dependencies (rudimentary!). Nested implications C/(A\B) signal withdrawal of a gap hypothesis A in a domain B.
Inducing the lexicon from structured data (Buszkowski/Penn, Kanazawa): SU ⊗ (TV ⊗ OBJ)
Lewis likes Alice: np, (np\s)/np, np
He likes Alice: s/(np\s), (np\s)/np, np
Lewis likes her: np, (np\s)/np, ((np\s)/np)\(np\s)
Who likes Alice?: wh/(np\s), (np\s)/np, np
Limitation One cannot reconcile semantic uniformity with structural disparity:
Claudia ◦ ((lo ◦ presta) ◦ a Fabio)
Claudia ◦ ((lo ◦ vuole) ◦ (prestare ◦ a Fabio))
Claudia ◦ (vuole ◦ ((prestar ◦ lo) ◦ a Fabio))
Restricted λ-abstraction In a term (M λxA.NB)C, which positions can the A hypothesis (the gap) occupy? Two kinds of discontinuity problems:
◮ Extraction: syntactic displacement. Example: wh “movement”.
Leopold knows what{wh/(np???s)} Molly suggested __{np} to Mulligan
◮ Infixation: non-local semantic construal. Examples: wh “in situ”; scope.
Molly thinks someone{s???(np???s)} is cheating
Needed Logical tools to express under what structural deformations the form-meaning correspondence is preserved.
Instead of hard-wired options with a global effect (⊗ associativity, commutativity), languages use controlled structural reasoning, anchored in lexical type assignment.
Structural modalities Residuated pair ♦, □: ♦A → B iff A → □B. New forms of expressivity:
◮ Subtyping via ♦□A → A → □♦A
◮ ♦ controlled structural rules: left versus right extraction (wh: q/(s/♦□np))
(P1) ♦A ⊗ (B ⊗ C) → (♦A ⊗ B) ⊗ C
(P2) ♦A ⊗ (B ⊗ C) → B ⊗ (♦A ⊗ C)
(P3) (C ⊗ B) ⊗ ♦A → C ⊗ (B ⊗ ♦A)
(P4) (C ⊗ B) ⊗ ♦A → (C ⊗ ♦A) ⊗ B
Embeddings The expressivity of LP (the implication/fusion fragment of intuitionistic Linear Logic) is regained through embedding translations (Kurtonina/Moortgat 1997).
◮ PhD theses @ OTS
⊲ Kurtonina 1995, Frames & Labels.
⊲ Moot 2002, Proof Nets for Linguistic Analysis.
⊲ Bernardi 2002, Reasoning with Polarity in Categorial Type Logic.
⊲ Vermaat 2006, The logic of variation. A cross-linguistic account of wh-question formation in type logical grammar.
◮ Moortgat 1997, Categorial type logics. Handbook of Logic and Language, Chapter 2. Elsevier/MIT Press.
Existing proposals to extend the syntactic calculus beyond CF model derivability as an asymmetric relation A1, . . . , An → B (the “intuitionistic” restriction): pure residuation logic plus non-logical axioms for structural flexibility.
Complementary strategy We restore the symmetry.
◮ LG = symmetric NL + structure-preserving distributivity principles
◮ Symmetry: A1 ⊗ · · · ⊗ An → B1 ⊕ · · · ⊕ Bm
◮ Distributivities: respecting word order and phrase structure
LG stands for Lambek-Grishin calculus; it is based on Grishin 1983.
◮ Kripke relational semantics, soundness/completeness. Kurtonina & Moortgat 2007.
◮ Proof nets. Moot 2007, generalizing Moot & Puite 2002.
◮ Complexity. LG is mildly context-sensitive, polynomially parseable. Moot 2008.
⊲ The result applies also to the ♦ controlled extraction postulates.
◮ Group-theoretic characterization of LG type similarity. Moortgat & Pentus FG07.
◮ Continuation semantics (Moortgat & Bernardi WoLLIC07).
More To Explore ESSLLI07 course materials at http://symcg.pbwiki.com
A, B ::= p | atoms: s sentence, np noun phrase, . . .
A ⊗ B | B\A | A/B product, left vs right division
A ⊕ B | A ⊘ B | B ⦸ A coproduct, right vs left difference
Pronunciation guide A/B “A over B”; B\A “B under A”; A ⊘ B “A minus B”; B ⦸ A “B from A”.
Antecedents The discovery of Grishin’s 1983 paper is due to Lambek 1993. From the available options in Grishin’s paper, our choices differ in a number of respects from those made for Lambek’s bilinear systems.
We arrive at LG in two steps:
◮ the minimal symmetric system: LG∅
◮ extension with interaction principles
LG∅ is the pure logic of residuation:
◮ preorder axioms: A → A; from A → B and B → C conclude A → C
◮ (dual) residuation principles:
A → C/B iff A ⊗ B → C iff B → A\C
B ⦸ C → A iff C → B ⊕ A iff C ⊘ A → B
Remark Compared to Grishin 1983 and Lambek 1993, we don’t opt for associativity for ⊗, ⊕: types are assigned to phrases, not to strings. Also, the extension with units 1, 0 is not considered.
Lambek 1988 presents the syntactic calculus as a deductive system consisting of a class of arrows (proofs) and a class of types (formulas), with two mappings between them: source and target, from Arrows to Types (writing f : A → B for source(f) = A, target(f) = B). This provides us with a language of proof terms to individuate derivations. Given are the identity arrow 1A : A → A and composition of arrows: from f : A → B and g : B → C, the composite g ◦ f : A → C, with 1B ◦ f = f = f ◦ 1A for f : A → B, and h ◦ (g ◦ f) = (h ◦ g) ◦ f for f : A → B, g : B → C, h : C → D.
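The identity and composition laws can be sanity-checked with a minimal executable model of arrows. A sketch of my own, with sample types and functions:

```python
class Arrow:
    """An arrow f : A -> B, tracking source, target, and an underlying function."""
    def __init__(self, src, tgt, fn):
        self.src, self.tgt, self.fn = src, tgt, fn

def identity(a):
    return Arrow(a, a, lambda x: x)

def compose(g, f):
    """g ∘ f, defined only when target(f) = source(g)."""
    assert f.tgt == g.src, "composition undefined"
    return Arrow(f.src, g.tgt, lambda x: g.fn(f.fn(x)))

f = Arrow('A', 'B', lambda x: x + 1)
g = Arrow('B', 'C', lambda x: x * 2)
h = Arrow('C', 'D', lambda x: x - 3)

# 1_B ∘ f = f = f ∘ 1_A, and h ∘ (g ∘ f) = (h ∘ g) ∘ f
assert compose(identity('B'), f).fn(10) == f.fn(10) == compose(f, identity('A')).fn(10)
assert compose(h, compose(g, f)).fn(5) == compose(compose(h, g), f).fn(5)
```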
LG adds adjointness/residuation principles for \, ⊗, / and their duals for ⊘, ⊕, ⦸:
from f : A ⊗ B → C, infer B → A\C and A → C/B;
from f : C → B ⊕ A, infer C ⊘ A → B and B ⦸ C → A.
These inference rules are invertible (we write f′, etc., for the inverse directions).
The duality maps act on arrows and on types:
from f : A → B, infer f⊲⊳ : A⊲⊳ → B⊲⊳;
from f : A → B, infer f∞ : B∞ → A∞.
Types For atoms, p⊲⊳ = p; for complex types, (A ⊗ B)⊲⊳ = B⊲⊳ ⊗ A⊲⊳, etc., as in the translation tables below:
⊲⊳ : C/D ↔ D\C; A ⊗ B ↔ B ⊗ A; B ⊕ A ↔ A ⊕ B; D ⦸ C ↔ C ⊘ D
∞ : C/B ↔ B ⦸ C; A ⊗ B ↔ B ⊕ A; A\C ↔ C ⊘ A
Proofs (1A)⊲⊳ = 1A⊲⊳, (1A)∞ = 1A∞; (g ◦ f)⊲⊳ = g⊲⊳ ◦ f⊲⊳, (g ◦ f)∞ = f∞ ◦ g∞.
Klein’s V Compositions of ·⊲⊳ and ·∞ obey the laws of the group V = {1, ⊲⊳, ∞, ∞⊲⊳}, Klein’s Vierergruppe, the smallest non-cyclic Abelian group.
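The translation tables and the Klein-group claim can be checked mechanically. A sketch in my own encoding (⊘ written 'osl', ⦸ written 'obar'; atoms assumed self-dual), applying ·⊲⊳ and ·∞ recursively and verifying that each is an involution and that they commute:

```python
# Formulas: ('at', p) or (op, left, right); ops: '/', '\\', 'x' (⊗), '+' (⊕),
# 'osl' (⊘ "minus"), 'obar' (⦸ "from").  ('\\', a, b) encodes a\b, ('/', a, b) encodes a/b.

def bowtie(t):
    """·⊲⊳ : swap the arguments and mirror the operation (C/D <-> D\\C, etc.)."""
    if t[0] == 'at':
        return t
    op, a, b = t
    mirror = {'/': '\\', '\\': '/', 'x': 'x', '+': '+', 'osl': 'obar', 'obar': 'osl'}
    return (mirror[op], bowtie(b), bowtie(a))

def infinity(t):
    """·∞ : arrow-reversing duality (C/B <-> B⦸C, A⊗B <-> B⊕A, A\\C <-> C⊘A)."""
    if t[0] == 'at':
        return t  # assumption: atoms are self-dual under ∞
    op, a, b = t
    dual = {'/': 'obar', 'obar': '/', '\\': 'osl', 'osl': '\\', 'x': '+', '+': 'x'}
    return (dual[op], infinity(b), infinity(a))

gq = ('obar', ('osl', ('at', 's'), ('at', 's')), ('at', 'np'))  # (s ⊘ s) ⦸ np
for F in [gq, ('/', ('at', 's'), ('\\', ('at', 'np'), ('at', 's')))]:
    assert bowtie(bowtie(F)) == F and infinity(infinity(F)) == F  # involutions
    assert bowtie(infinity(F)) == infinity(bowtie(F))             # commutation
# hence {1, ⊲⊳, ∞, ⊲⊳∞} is closed under composition: the Klein four-group
```

Involutivity plus commutation is exactly what makes the four compositions close into V.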
Models ⟨W, R⊗, R⊕, V⟩; valuation V : Form → P(W). Completeness: Kurtonina & Moortgat 2007 (weak filters). Alternatively, Generalized Kripke Frames, Chernilovskaya & Gehrke 2008.
Multiplicative conjunction Merge, fusion, composition (⊗) and its residuals:
x ⊩ A ⊗ B iff ∃yz.(R⊗xyz and y ⊩ A and z ⊩ B)
y ⊩ C/B iff ∀xz.((R⊗xyz and z ⊩ B) implies x ⊩ C)
z ⊩ A\C iff ∀xy.((R⊗xyz and y ⊩ A) implies x ⊩ C)
Multiplicative disjunction Fission (⊕) and its residuals:
x ⊩ A ⊕ B iff ∀yz.(R⊕xyz implies (y ⊩ A or z ⊩ B))
y ⊩ C ⊘ B iff ∃xz.(R⊕xyz and z ⊩ B and x ⊩ C)
z ⊩ A ⦸ C iff ∃xy.(R⊕xyz and y ⊩ A and x ⊩ C)
Remark In the minimal symmetric system LG∅, fission R⊕ and merge R⊗ are distinct relations; there are no interaction laws to be considered yet.
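The interpretation clauses can be run directly on a finite frame. Below, a sketch of my own with a three-point model (the frame and valuation are illustrative assumptions); note how the soundness of residuation, e.g. np ⊗ (np\s) → s, comes out as set inclusion:

```python
def sat(t, V, W, Rt, Rp):
    """Interpret a formula as a subset of W.
    Rt: ternary merge relation R⊗ (triples (x, y, z): x is the merge of y and z);
    Rp: ternary fission relation R⊕."""
    if t[0] == 'at':
        return V[t[1]]
    op, A, B = t
    a, b = sat(A, V, W, Rt, Rp), sat(B, V, W, Rt, Rp)
    if op == 'x':    # x ⊩ A⊗B iff ∃yz. R⊗xyz and y ⊩ A and z ⊩ B
        return {x for (x, y, z) in Rt if y in a and z in b}
    if op == '/':    # y ⊩ A/B iff ∀xz. (R⊗xyz and z ⊩ B) implies x ⊩ A
        return {y for y in W
                if all(x in a for (x, y2, z) in Rt if y2 == y and z in b)}
    if op == '\\':   # z ⊩ A\B iff ∀xy. (R⊗xyz and y ⊩ A) implies x ⊩ B
        return {z for z in W
                if all(x in b for (x, y, z2) in Rt if z2 == z and y in a)}
    if op == '+':    # x ⊩ A⊕B iff ∀yz. R⊕xyz implies (y ⊩ A or z ⊩ B)
        return {x for x in W
                if all(y in a or z in b for (x2, y, z) in Rp if x2 == x)}
    raise ValueError(op)

W = {0, 1, 2}
Rt = {(2, 0, 1)}                 # the sign 2 is the merge of signs 0 and 1
V = {'np': {0}, 's': {2}}
iv = ('\\', ('at', 'np'), ('at', 's'))       # np\s
assert sat(('x', ('at', 'np'), iv), V, W, Rt, set()) <= V['s']
```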
From (RES) and (TRANS) one derives the monotonicity principles (MON) and the (CO)UNIT laws.
Monotonicity From f : A → B and g : C → D, infer:
g\f : D\A → C\B
f ⊗ g : A ⊗ C → B ⊗ D
f/g : A/D → B/C
f ⊘ g : A ⊘ D → B ⊘ C
g ⊕ f : C ⊕ A → D ⊕ B
g ⦸ f : D ⦸ A → C ⦸ B
(Co)unit laws Add the images under ·⊲⊳:
ǫ : (A/B) ⊗ B → A
η : A → (A ⊗ B)/B
η∞ : B ⦸ (B ⊕ A) → A
ǫ∞ : A → B ⊕ (B ⦸ A)
Cut elimination The deductive systems ID+TRANS+RES and ID+RES+MON are equivalent.
Motivation Moving to symmetric LG∅ by itself doesn’t give interesting new expressivity. In building a phrase, ⊘ is trapped in its ⊗ context: A1 ⊗ · · · Ai ⊗ (B ⊘ C) ⊗ Ai+2 · · · An → D
Structure preservation Which properties of grammatical organization do we want the interaction principles to preserve?
◮ word order: interaction should respect the non-commutativity of ⊗/⊕
◮ phrase structure: interaction should respect their non-associativity
Distributivity laws Grishin 1983 provides the recipe to compute all combinatorial possibilities that satisfy these requirements.
Notation For ∗ ∈ {/, ⊗, \, ⦸, ⊕, ⊘}, we write a ?∗ b =df b ∗ a and a ∗? b =df a ∗ b.
The matrix We consider the 8 monotone operations, split into two groups M and Λ = M∞:
M = { ?⊗, ⊗?, ?⊘, ⦸? }   Λ = { ⊕?, ?⊕, \?, ?/ } = M∞
M × Λ defines 16 extensions of LG∅ in terms of postulates of the form a µ (b λ c) → b λ (a µ c).
◮ Eight of these are the same-sort associativities and commutativities: they violate structure preservation.
◮ The remaining eight relate the ⊗ and the ⊕ families; they are structure preserving.
The table pairs M (rows) with Λ (columns): ?⊗ with ⊕?, ?⊕ gives I1, I2; ⊗? with ?⊕, \? gives I4, I3; ?⊘ with \?, ?/ gives IV1, IV2; ⦸? with ?/, \? gives IV4, IV3.
Grishin’s Class I versus Class IV distributivities realize the schema a µ (b λ c) → b λ (a µ c) as:
(I1) (b ⊕ c) ⊗ a → b ⊕ (c ⊗ a)   (IV1) (b\c) ⊘ a → b\(c ⊘ a)
(I2) (c ⊕ b) ⊗ a → (c ⊗ a) ⊕ b   (IV2) (c/b) ⊘ a → (c ⊘ a)/b
(I3) a ⊗ (c ⊕ b) → (a ⊗ c) ⊕ b   (IV3) a ⦸ (c/b) → (a ⦸ c)/b
(I4) a ⊗ (b ⊕ c) → b ⊕ (a ⊗ c)   (IV4) a ⦸ (b\c) → b\(a ⦸ c)
Remark In each class, the choice of the operations µ, λ constitutes a group given by the ·⊲⊳ symmetry: C2 × C2 = {(1, 1), (1, ⊲⊳), (⊲⊳, 1), (⊲⊳, ⊲⊳)} (the Klein group again). Earlier presentations of Grishin’s work (Lambek 1993, Goré 1999) have only given the (1, 1) and (⊲⊳, ⊲⊳) cases.
Among the interderivable forms of the distributivity laws, we choose the form that puts together the structural occurrences of the operations. E.g. IV3:
(C\B) ⊘ A → C\(B ⊘ A)   A ⊗ (B ⊘ C) → (A ⊗ B) ⊘ C
For decidable proof search, we compose with TRANS. Below, the Class IV distributivities as inference rules:
from f : A ⦸ (B ⊗ C) → D, infer (A ⦸ B) ⊗ C → D
from f : (A ⊗ B) ⊘ C → D, infer A ⊗ (B ⊘ C) → D
from f : B ⦸ (A ⊗ C) → D, infer A ⊗ (B ⦸ C) → D
from f : (A ⊗ C) ⊘ B → D, infer (A ⊘ B) ⊗ C → D
The Class I distributivities, in this form, are the converses.
(Diagram: the lattice of extensions LG∅ ⊆ LG∅ + I ⊆ LG∅ + I + IV.)
◮ Class IV proves useful for the analysis of non-local scope construal
◮ Class I has not found linguistic uses (so far) (ideas. . . ?)
◮ both the Class I and the Class IV extensions are conservative
◮ the combination I+IV is non-conservative: ⊗ and ⊕ degenerate into associative/commutative operations
Strategy Start from a lexical type assignment from which the Lambek assignment is derivable:
someone :: (s ⊘ s) ⦸ np → s/(np\s)
We leave the local construal as an exercise. Below, the non-local construal.
np ⊗ (((np\s)/s) ⊗ (np ⊗ (np\s))) → s
s → (s ⊘ s) ⊕ s
np ⊗ (((np\s)/s) ⊗ (np ⊗ (np\s))) → (s ⊘ s) ⊕ s
(s ⊘ s) ⦸ (np ⊗ (((np\s)/s) ⊗ (np ⊗ (np\s)))) → s
. . .
np{Alice} ⊗ (((np\s)/s){thinks} ⊗ (((s ⊘ s) ⦸ np){someone} ⊗ (np\s){left})) → s
◮ The (s ⊘ s) moves upwards through the ⊗ structure, leaving np behind
◮ When (s ⊘ s) has reached the top, it can jump to the right-hand side by means of the dual residuation principle.
Proofs as programs Derivations get a computational interpretation in terms of the lambda calculus.
◮ Lambek calculus
⊲ subsystem of intuitionistic logic
⊲ terms of the simply typed λ calculus; direct interpretation
◮ Lambek-Grishin
⊲ subsystem of classical logic
⊲ terms of the λµµ̃ calculus (Curien/Herbelin); CPS translation
Joint work with Raffaella Bernardi: Bernardi, R. and M. Moortgat (2007) ‘Continuation semantics for symmetric categorial grammar’. Proceedings WoLLIC’07, Springer LNCS 4576.
In classical logic, one distinguishes values from evaluation contexts, for all types A.
Evaluation strategies Reduction is non-confluent: critical pairs can arise when a value is cut against an evaluation context. We distinguish two evaluation strategies to deal with this situation:
◮ call by value: give precedence to the value
◮ call by name: give precedence to the evaluation context (aka continuation)
CPS One obtains a constructive interpretation of classical derivations via a continuation-passing-style (CPS) transformation, from SOURCE to TARGET:
◮ Curien/Herbelin: LKµµ̃ sequents, λµµ̃ terms ⇒ IL, λ terms
◮ here: LG arrows ⇒ λ terms
We define the CPS transformation for LG on types and on proofs. The target type language has a distinguished type R of responses, products, and functions; all functions have range R.
Types: call-by-value For each source language type A, the target language has
values VA = ⌈A⌉
continuations KA = VA → R (functions from VA to R)
computations CA = KA → R (functions from KA to R)
For p atomic, ⌈p⌉ = p. For the (co)implications:
⌈A\B⌉ = (⌈A⌉ × (⌈B⌉ → R)) → R
⌈B/A⌉ = ((⌈B⌉ → R) × ⌈A⌉) → R
⌈B ⊘ A⌉ = ⌈B⌉ × (⌈A⌉ → R)
⌈A ⦸ B⌉ = (⌈A⌉ → R) × ⌈B⌉
Remark Reverting to a types-as-formulas view, think of R as ⊥. One then has ⌈A\B⌉ = ¬(⌈A⌉ ∧ ¬⌈B⌉); ⌈B ⊘ A⌉ = ⌈B⌉ ∧ ¬⌈A⌉, etc.
Types: call-by-name For each type A of the source language, the target language has continuations KA = ⌊A⌋ and computations CA = KA → R. The call-by-name interpretation ⌊·⌋ is obtained as the composition of the ·∞ duality map and the ⌈·⌉ interpretation: ⌊A⌋ =df ⌈A∞⌉. For atoms, ⌊p⌋ = ⌈p∞⌉. For the (co)implications, compare the cbv interpretation (left) with the cbn interpretation (right):
⌈A\B⌉ = (⌈A⌉ × (⌈B⌉ → R)) → R   (⌊A⌋ × (⌊B⌋ → R)) → R = ⌊B ⊘ A⌋
⌈B ⊘ A⌉ = ⌈B⌉ × (⌈A⌉ → R)   ⌊B⌋ × (⌊A⌋ → R) = ⌊A\B⌋
⌈B/A⌉ = ((⌈B⌉ → R) × ⌈A⌉) → R   ((⌊B⌋ → R) × ⌊A⌋) → R = ⌊A ⦸ B⌋
⌈A ⦸ B⌉ = (⌈A⌉ → R) × ⌈B⌉   (⌊A⌋ → R) × ⌊B⌋ = ⌊B/A⌋
Remark The CPS interpretation reflects the ·⊲⊳ and ·∞ symmetries. For example, ⌈A\B⌉ = ⌈A ⊘ B⌉ → R.
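The two type translations, and the claim that cbn is cbv composed with ·∞, are easy to mechanize. A sketch in my own string rendering (atoms assumed self-dual under ∞; 'osl' for ⊘, 'obar' for ⦸):

```python
def inf(t):
    """·∞ on the (co)implication fragment: C/B <-> B⦸C, A\\C <-> C⊘A."""
    if t[0] == 'at':
        return t  # assumption: p∞ = p
    dual = {'/': 'obar', 'obar': '/', '\\': 'osl', 'osl': '\\'}
    op, a, b = t
    return (dual[op], inf(b), inf(a))

def cbv(t):
    """⌈·⌉, call-by-value: the type of values for each source type."""
    if t[0] == 'at':
        return t[1]
    op, a, b = t
    if op == '\\':   # ⌈A\B⌉ = (⌈A⌉ × (⌈B⌉ → R)) → R
        return f'(({cbv(a)} x ({cbv(b)} -> R)) -> R)'
    if op == '/':    # ⌈B/A⌉ = ((⌈B⌉ → R) × ⌈A⌉) → R
        return f'((({cbv(a)} -> R) x {cbv(b)}) -> R)'
    if op == 'osl':  # ⌈B ⊘ A⌉ = ⌈B⌉ × (⌈A⌉ → R)
        return f'({cbv(a)} x ({cbv(b)} -> R))'
    if op == 'obar': # ⌈A ⦸ B⌉ = (⌈A⌉ → R) × ⌈B⌉
        return f'(({cbv(a)} -> R) x {cbv(b)})'

def cbn(t):
    """⌊·⌋ = ⌈·∞⌉, call-by-name: the type of continuations for each source type."""
    return cbv(inf(t))

np, s = ('at', 'np'), ('at', 's')
# ⌊A\B⌋ = ⌊B⌋ × (⌊A⌋ → R): cbn turns an implication into a ⊘-style pair
assert cbn(('\\', np, s)) == '(s x (np -> R))'
print(cbv(('\\', np, s)))  # ((np x (s -> R)) -> R)
```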
For every arrow f : A → B we distinguish a left-to-right perspective f> and a right-to-left perspective f<.
◮ f> : focus on the value B
◮ f< : focus on the evaluation context A
CPS interpretation of proofs Given f : A → B, infer
⌈f>⌉ : ⌈A⌉ → R^(R^⌈B⌉) (an inhabitant of ⌈A\B⌉)
⌈f<⌉ : R^⌈B⌉ → R^⌈A⌉ (an inhabitant of ⌈B/A⌉)
⌊f<⌋ : ⌊B⌋ → R^(R^⌊A⌋) (an inhabitant of ⌊A ⊘ B⌋)
⌊f>⌋ : R^⌊A⌋ → R^⌊B⌋ (an inhabitant of ⌊B ⦸ A⌋)
i.e. call-by-value ⌈f>⌉ maps A values to B computations; ⌊f>⌋ maps A computations to B computations; etc.
Identity Given 1A : A → A, we have
⌈(1A)>⌉ : ⌈A⌉ → (⌈A⌉ → R) → R,   x ↦ λα.(α x)
⌈(1A)<⌉ : (⌈A⌉ → R) → ⌈A⌉ → R,   α ↦ λx.(α x)
Composition Given f : A → B and g : B → C, we have
⌈(g ◦ f)>⌉ : ⌈A⌉ → (⌈C⌉ → R) → R,   x ↦ λγ.({⌈f>⌉ x} {⌈g<⌉ γ})
Here and below, ⌈f<⌉ = SWAP ⌈f>⌉.
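These clauses run as ordinary higher-order functions. A sketch of my own (the embedding of pure functions as arrows and the sample arrows are illustrative), with R read as the type of final answers:

```python
def lift(x):
    """⌈(1_A)>⌉: a value becomes a computation, λx.λα.(α x)."""
    return lambda alpha: alpha(x)

def swap(f_gt):
    """⌈f<⌉ = SWAP ⌈f>⌉: from a value-to-computation map to a
    continuation transformer K_B -> K_A."""
    return lambda beta: lambda x: f_gt(x)(beta)

def arrow(f):
    """Embed a pure function f : A -> B as a cbv arrow ⌈f>⌉ (my convenience)."""
    return lambda x: lambda k: k(f(x))

def comp(g_gt, f_gt):
    """⌈(g ∘ f)>⌉ = λx.λγ.({⌈f>⌉ x} {⌈g<⌉ γ})."""
    return lambda x: lambda gamma: f_gt(x)(swap(g_gt)(gamma))

f = arrow(lambda n: n + 1)   # sample arrow A -> B
g = arrow(lambda n: n * 2)   # sample arrow B -> C
# feed 3 through g ∘ f with the trivial continuation
assert comp(g, f)(3)(lambda v: v) == 8
assert lift('x')(lambda v: v) == 'x'
```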
Implication From f : A → B and g : C → D, infer g\f : D\A → C\B, with
⌈(g\f)>⌉ : ⌈D\A⌉ → (⌈C\B⌉ → R) → R,   y ↦ λk.(k λx, β.({⌈g>⌉ x} λm.(y m, {⌈f<⌉ β})))
Co-implication From f : A → B and g : C → D, infer f ⊘ g : A ⊘ D → B ⊘ C, with
⌈(f ⊘ g)<⌉ : (⌈B ⊘ C⌉ → R) → ⌈A ⊘ D⌉ → R,   α ↦ λx, δ.({⌈f>⌉ x} λy.(α y, {⌈g<⌉ δ}))
Montague-style direct interpretation of the slashes has (A\B)′ = (B/A)′ = A′ → B′. For the connection with the CPS interpretation for LG we interpret R as {0, 1}. We have to lift lexical meanings from the source language, ci : A′i, to the target language:
ci : ⌈Ai⌉ (cbv)   ci : ⌊Ai⌋ (cbn)
so that the interpretation diagram commutes (A = A1 ⊗ · · · ⊗ An): for an arrow f : A → B, the direct interpretation M[xi := ci] under (·)′ coincides with ev Mv[xi := ci] (call-by-value) and with ev Mn[xi := ci] (call-by-name, Mn : ⌊B⌋).
Remark ev: the evaluation function, providing the trivial continuation as last step.
word | type | alias | ⌈·⌉ cbv | ⌊·⌋ cbn
alice | np | — | ⌈np⌉ | ⌊np⌋
left | np\s | iv | R^(⌈np⌉ × R^⌈s⌉) | ⌊s⌋ × R^⌊np⌋
teases | (np\s)/np | tv | R^(R^⌈iv⌉ × ⌈np⌉) | R^⌊np⌋ × ⌊iv⌋
somebody | s/(np\s) | su | R^(R^⌈s⌉ × ⌈iv⌉) | R^⌊iv⌋ × ⌊s⌋
Call by value
alice = alice
left = λx, c.(c (left x))
teases = λv, y.(v λx, c.(c ((teases y) x)))
somebody = λc, v.(∃ λx.(v x, c))
Call by name
alice = λc.(c alice)
left = λc, q.(q λx.(c (left x)))
teases = λq, c, q′.(q′ λx.(q λy.(c ((teases y) x))))
somebody = λv, c.(∃ λx.(v c, λc′.(c′ x)))
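With R read as the booleans and a toy model, the call-by-value entries compute truth values directly. A sketch wiring two derivations by hand (the two-element domain and the predicate extension are my assumptions, as is the manual plumbing, which in the talk is extracted from proof terms):

```python
D = {'alice', 'bill'}          # assumed domain of individuals
LEFT = {'alice'}               # assumed extension of 'left'; R = bool

# cbv lexical entries, following the table; pairs are 2-tuples
alice = 'alice'

def left(pair):                # left = λ⟨x, c⟩.(c (left x)) : ⌈iv⌉
    x, c = pair
    return c(x in LEFT)

def somebody(pair):            # somebody = λ⟨c, v⟩.(∃ λx.(v ⟨x, c⟩)) : ⌈su⌉
    c, v = pair
    return any(v((x, c)) for x in D)

trivial = lambda b: b          # the trivial continuation on R = bool

# np ⊗ (np\s) → s : feed subject value and top continuation to ⌈left⌉
assert left((alice, trivial)) is True
# s/(np\s) ⊗ (np\s) → s : 'somebody' consumes the iv value
assert somebody((trivial, left)) is True
```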
How to express the meaning for a gq type assignment (s ⊘ s) ⦸ np in terms of the logical constants ∃, ∀ of type (e → t) → t?
Target types
(gq value) ⌈gq⌉ = K(s⊘s) × Vnp
(gq continuation) ⌊gq⌋ = ⌈gq∞⌉ = ⌈np∞/(s∞\s∞)⌉ = (Cnp × ((Ks × Cs) → R)) → R
Target terms J is known as the lifting combinator.
cbv: someone = no solution
cbn: someone = λQ.(∃ λx.(Q λk.(k x), λc, p.(p c))) = λQ.(∃ λx.(Q (J x), J))
We illustrate the interplay between derivational and lexical semantics in LG. Our sample includes the following: ◮ type uniformity for simple GQ sentences ◮ surface scope vs inverted scope ◮ extensional vs higher order predicates ◮ complement clauses and non-local construal
someone = λQ.(∃ λx.(Q (J x), J))   left = λc, q.(q λx.(c (left x)))
np → np   s → s
s ⊘ s → s ⊘ s
s → (s ⊘ s) ⊕ s   (dual residuation)
np\s → np\((s ⊘ s) ⊕ s)   (monotonicity for \)
np ⊗ (np\s) → (s ⊘ s) ⊕ s   (residuation)
(s ⊘ s) ⦸ (np ⊗ (np\s)) → s   (dual residuation)
⌊M⌋ = λc.((someone λq, m.(m c, λc′.(left c′, q)))) = λc.(∃ λx.(c (left x)))
Where the Lambek type s/(np\s) is restricted to subject positions, the LG type assignment gq = (s ⊘ s) ⦸ np can occur in any np position.
alice = λc.(c alice)
teases = λq, c, q′.(q′ λx.(q λy.(c ((teases y) x))))
someone = λQ.(∃ λx.(Q (J x), J))
alice{np} teases{tv} someone{gq} → s
⌊M⌋ = λc.((someone λq, m.(m c, λc′.(teases q, c′, alice)))) = λc.(∃ λx.(c ((teases x) alice)))
Ambiguity arises from the non-determinism in the choice of the active formula.
everyone{gq} teases{tv} someone{gq} → s
surface scope:
λc.(evr λq, m.(sm λq′, m′.(m c, λc′.(m′ c′, λc′′.(teases q′, c′′, q))))) = λc.(∀ λx.(∃ λy.(c ((teases y) x))))
inverted scope:
λc.(sm λq′, m′.(evr λq, m.(m′ c, λc′.(m c′, λc′′.(teases q′, c′′, q))))) = λc.(∃ λy.(∀ λx.(c ((teases y) x))))
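The two readings differ in truth conditions; evaluating them over a small model where quantifier order matters makes the ambiguity concrete (the domain and the teasing relation are my assumptions):

```python
D = {'a', 'b'}
TEASES = {('a', 'a'), ('b', 'b')}        # everyone teases themselves only

teases = lambda y: lambda x: (x, y) in TEASES
forall = lambda p: all(p(x) for x in D)  # ∀ : (e -> t) -> t
exists = lambda p: any(p(x) for x in D)  # ∃ : (e -> t) -> t
c = lambda b: b                          # trivial continuation

# surface scope: λc.(∀ λx.(∃ λy.(c ((teases y) x))))
surface = forall(lambda x: exists(lambda y: c(teases(y)(x))))
# inverted scope: λc.(∃ λy.(∀ λx.(c ((teases y) x))))
inverted = exists(lambda y: forall(lambda x: c(teases(y)(x))))

assert surface is True     # each x teases some y (namely x itself)
assert inverted is False   # but no single y is teased by everyone
```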
We write cs for s/(s\s). The type of the constant ‘thinks’ below is then (iv/cs)′ = ((t → t) → t) → (e → t).
thinks = λh, c, q.(q λx.(c ((thinks λc′.(h J, c′)) x)))
alice{np} thinks{iv/cs} (someone{gq} left{iv}) → s
local construal:
λc.(thinks λm′, c′.(sm λq, m.(m c′, λc′′′.(m′ c′′′, λc′′.(left c′′, q)))), c, alice) = λc.(c ((thinks λc′.(∃ λx.(c′ (left x)))) alice))
non-local construal:
λc.(sm λq, m.(m c, λc′.(thinks λm′, c′′′.(m′ c′′′, λc′′.(left c′′, q)), c′, alice))) = λc.(∃ λx.(c ((thinks λc′.(c′ (left x))) alice)))
◮ Lambek’s syntactic calculus NL is too poor to model discontinuous dependencies.
◮ Known extensions such as L are no remedy: still CF, and NP-complete.
◮ Symmetric LG offers an alternative: mildly CS, polynomial.
◮ The CPS translation connects LG derivational semantics to MG-style interpretation.
◮ Next step: modalities for delimited continuations.