Language and Logics
An Introduction to the Logical Foundations of Language
Edinburgh Advanced Textbooks in Linguistics
Edinburgh: University Press, 2015
Paperback. viii+315 p. ISBN 978-0748691630. £24.99
Reviewed by Christian Bassac
Université de Lyon 2
Language and Logics is probably the most comprehensive textbook on logics and linguistics to date. Up to now, students and instructors alike could mainly rely on Gamut (1990) or Moot and Rétoré (2012), but the former, albeit a comprehensive account of Montague semantics, was published 25 years ago and consequently could not include the rich developments in logics after 1990, and the latter, similar in scope, concentrates more on categorial and Lambek grammars and is designed for graduate students. As for Morrill (2011), it includes a section on Natural Language Processing, but it does not cover all the topics developed in Language and Logics. All this is to say that there was a crying need for a textbook that takes undergraduates from the beginnings of logical reasoning to the applications of contemporary logics to linguistic analysis.
In Part one, the classical picture of logic is developed in four sections, from basic Aristotelian propositional logic to Lambda Calculus. In these sections the author goes through the usual tree method for testing the validity of logical formulae, briefly presents predicate calculus and its interpretation, and offers a brief overview of quantification. A necessary but brief algebraic background on relations, sets and lattices is also presented.
Part two is devoted to modality with an introductory section on modal logics and a study of worlds and individuals.
Part three takes on the study of presuppositions and of many-valued logic, and culminates with the presentation of the important tool known as the Curry-Howard isomorphism, central to all logical approaches to Natural Languages. This correspondence between the domains of reasoning and typing associates a typed lambda-term of a given type with a proof of the formula corresponding to that type. In algebraic background two, the structures of monoid and group are presented, and algebraic background three is a quick note on Heyting algebras as models for the intuitionistic logic of Brouwer, Heyting and Kolmogorov.
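The correspondence can be illustrated with a minimal sketch in Python's type-hint notation (my own illustration, not the book's material): a total function of a given type is read as a proof of the corresponding implicational formula.

```python
from typing import Callable, TypeVar

A = TypeVar("A")
B = TypeVar("B")
C = TypeVar("C")

# Under Curry-Howard, a total function of type A -> A is a proof of
# the formula A -> A (the identity axiom).
def identity(x: A) -> A:
    return x

# Function composition inhabits (B -> C) -> ((A -> B) -> (A -> C)),
# i.e. it is a proof of the corresponding implicational tautology.
def compose(g: Callable[[B], C], f: Callable[[A], B]) -> Callable[[A], C]:
    return lambda x: g(f(x))

inc = lambda n: n + 1
double = lambda n: n * 2
print(compose(double, inc)(3))  # (3 + 1) * 2 = 8
```

The program text on the left of each `def` is the lambda-term; its type annotation is the formula it proves.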
The final part is devoted to the study of Substructural Logics and Categorial grammars, with two important sections on Linear Logic (from now on LL) and Combinators.
Substructural logics are logics that drop some of the structural rules of classical logic. For instance LL, since it treats the premises of a demonstration as consumable resources rather than as lemmas that can be reused in a proof, rejects the rules of left and right contraction, as well as left and right weakening (somewhat misleadingly named “W” and “K” respectively in the book, p. 225).
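In sequent notation, the rejected structural rules can be stated as follows (standard presentation, not the book's own layout; the left-hand versions are shown, the right-hand ones are symmetric):

```latex
% Weakening (K): an unused premise may be added freely
\frac{\Gamma \vdash C}{\Gamma, A \vdash C}\;(\mathrm{K})
\qquad
% Contraction (W): a duplicated premise may be merged
\frac{\Gamma, A, A \vdash C}{\Gamma, A \vdash C}\;(\mathrm{W})
```

Read resource-wise, K lets a premise go unused and W lets one copy do the work of two, which is exactly what a logic of consumable resources must forbid.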
One of the many undoubted merits of the book is that the range of logical topics is wide enough to cover all a linguistics student needs to know about logic(s). This book takes the reader well beyond elementary logic, and most logics on the market in 2015 are presented: the only exception I can think of is Preller’s pregroup grammars, as expressed for instance in Preller (2007).
The book is also carefully written, each chapter contains useful and interesting notes, and the final bibliography is as comprehensive as a bibliography on such a wide subject can be. This quick overview shows that the book is welcome.
However, I found this book disappointing in several ways. For one thing, some statements are too cryptic for a linguistics student to feel comfortable with the notions alluded to. Here I think of note 7, p. 32, which is an interesting opening onto the problems of cryptology and where the reader is expected to know a bit of modular arithmetic: all this is very well, but a quick introduction to modular arithmetic would have been necessary.
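For concreteness, the few lines of background I have in mind could look like this (my own sketch in Python, not the book's material): congruence mod n, and the classic shift cipher that motivates it in cryptology.

```python
# Modular arithmetic: two integers are congruent mod n when they
# leave the same remainder on division by n.
print(17 % 5)        # 2
print((17 + 5) % 5)  # still 2: adding a multiple of n changes nothing
print((7 * 8) % 5)   # 56 mod 5 = 1

# A toy Caesar-style shift cipher, the classic cryptology example:
def shift(ch: str, k: int) -> str:
    return chr((ord(ch) - ord("a") + k) % 26 + ord("a"))

print("".join(shift(c, 3) for c in "logic"))  # "orjlf"
```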
What proves even more unfortunate is first that the deep linguistic motivation for a logical analysis is never convincingly given, and second that the connection between linguistic problems and their solutions in the various logical frameworks presented is not given enough emphasis. Let me go into some detail here.
1) The motivation for a logical analysis
For instance on p. 74, the following example is given in an exercise:
(1) Margaret teaches maths
After reduction of the lambda-expression associated with (1), the semantics of (1) that the student is required to find is (2):
(2) teaches(margaret, maths)
It would have been interesting to compare this with the semantics of the complement constituent of (3):
(3) I wonder what Margaret teaches
The classical syntactic analysis of (3) is (4), and its Logical Form is (5):
(4) [I wonder [what_i [Margaret [teaches t_i]]]]
(5) [I wonder [what_x [Margaret [teaches x]]]]
In the lambda-reduction process that yields (2), the lambda-operator binds a variable exactly as the operator-like what does in (5). But the lambda-operator is never moved from a position to its right, contrary to the operator-like what in (5). Consequently, the semantics for the sentence in (2) is more straightforward and economical than its counterpart in (5). This exercise could then have been the ideal place to offer a strong motivation for all the logical analyses presented in the book, and to state clearly that, contrary to what goes on in syntactic theories, where the semantics of a sentence is obtained via sequences of structural operations on strings of words or trees, the syntax and the semantics of a sentence (or of any constituent of a sentence) can be considered as proofs in a logical demonstration.
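The reduction at stake can be mimicked directly in code (a sketch of my own; the predicate names are mine, not the book's): the verb denotes a function over its object and then its subject, and applying it twice performs the two beta-reductions.

```python
# Montague-style analysis of "Margaret teaches maths":
# the verb denotes \x.\y. teaches(y, x); applying it first to the
# object and then to the subject performs two beta-reductions.
teaches = lambda obj: lambda subj: f"teaches({subj}, {obj})"

vp = teaches("maths")   # \y. teaches(y, maths)
s = vp("margaret")      # teaches(margaret, maths)
print(s)                # teaches(margaret, maths)
```

No movement is involved: the binder sits to the left of the variable it binds throughout, which is the point of contrast with the operator-like what in (5).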
A few pages should also have been devoted to a study of the link between mainstream generative syntax and logical analyses, in order to show how the structure-building operations Merge and Move of the Minimalist Program can be cast in a categorial-like framework. Here, it seems to me that papers by Amblard et al. (2010), Lecomte & Rétoré (2001), or Stabler (1997), which all explore possible ways of encoding the operations of minimalist syntax in a Lambek grammar formalism, should have been cited and analyzed.
2) Applications of logic(s) to Natural Languages.
Here I have in mind the examples of application of LL to the syntax and semantics of Natural Languages. Precise linguistic phenomena should be given an important place in a textbook like this one. For instance p. 257, the reader is only asked to manipulate logical formulae, but no linguistic phenomenon is studied. The impression then is that the presentation remains somewhat cut off from precise syntactic phenomena.
Furthermore, in the chapter devoted to LL, the author states that “implication and fusion (tensor) are the only connectives normally (emphasis mine) used in linguistic applications”. The question, of course, is: what does “normally” mean?
An interesting reason why implication is widely used is that applications of the multiplicative connective “par” are subsumed by linear implication (which is not a primitive of LL), provided linear negation is used, since an important equivalence of LL states that A linearly implies B is equivalent to (the linear negation of A) par B. This should have been clearly stated.
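Spelled out in standard LL notation (my restatement; the \parr macro comes from the cmll LaTeX package), the equivalence is the defining clause of linear implication, and it is what lets any par-formula be rewritten as an implication:

```latex
A \multimap B \;\stackrel{\text{def}}{=}\; A^{\perp} \parr B
% hence, by involutivity of linear negation, any par-formula
% can be rewritten as a linear implication:
C \parr D \;=\; (C^{\perp})^{\perp} \parr D \;=\; C^{\perp} \multimap D
```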
Also, the additive disjunctive connective plus of LL has been used to encode type coercion (cf. Pustejovsky 1995: 111), and exponentials (and not only additives or multiplicatives) do have linguistic applications (provided, of course, that one is not working with a fragment of LL without exponentials). For instance the exponential “!” can be used to encode the blocking of extraction from a subject NP, as in (7):
(6) [The book that Jane read [ ]] is long
(7) *[The book that [the story in [ ]]] is long
This analysis goes back to Hodas (1997: 170) (not in the book's bibliography), who suggests that in order to block extraction from the subject NP in (7), the type assignment of NP islands should be !np.
There was room for a deeper study of this syntactic phenomenon, all the more so as the issue is evoked (lightly) later on. It cannot be said that lexical ambiguity or extraction from an NP are marginal linguistic phenomena. What the author means by “normally” thus remains mysterious to me.
Probably the scope and the ambitious purpose of the book prevented the author from devoting the necessary space to other important topics. For instance, what I found missing in the linear logic section is the link between a formula with linear implication and classical/intuitionistic implication. The translation formula is given, but what the fundamental opposition between the two means is not explained. A word would have been in order here to show that classical/intuitionistic implication is perfect for mathematical theorems (they express stable truths), but inadequate when it comes to causal implication, as causal implication cannot be iterated. This is implicit (for instance p. 248), when the author states that “linear implication can be read as a process which consumes (one instance of) p to produce (one instance of) q”. But as this is a crucial motivation for the emergence of LL, this again should have been made more explicit.
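The resource reading of linear implication can be caricatured in ordinary code (my own illustration, not the author's): a premise that may be used exactly once, unlike a classical lemma that can be cited any number of times.

```python
class Resource:
    """A premise that, linearly, may be used exactly once."""
    def __init__(self, name: str):
        self.name = name
        self.used = False

    def consume(self) -> str:
        if self.used:
            raise RuntimeError(f"{self.name} already consumed")
        self.used = True
        return self.name

# p -o q : consuming one instance of p produces one instance of q
def linear_implication(p: Resource) -> Resource:
    p.consume()
    return Resource("q")

p = Resource("p")
q = linear_implication(p)
print(q.name)  # q
# Reusing p now fails, unlike a classical lemma, which can be cited twice:
try:
    linear_implication(p)
except RuntimeError as e:
    print(e)   # p already consumed
```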
What is also missing is a few pages on models for LL. Whereas models for other logical frameworks are presented (monoids, groups, Heyting algebras), no model is given for the interpretation of LL, and I think a word on coherence spaces would have been in order here.
I must add that I find the development in (3.16), p. 51, a bit confusing and incomplete. It is confusing, as the formal definition of a homomorphism is not given, and the only formal definition that appears is that of an isomorphism (modulo the condition that h in the example given is onto). It is incomplete, as the next step should have been to give the definition of an automorphism here. The notion of automorphism is presented only on p. 84, with no formal definition and no formal connection with the definition of a homomorphism. It would have been more coherent and convenient to show, right from (3.16), that it is a particular case of an isomorphism.
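For the record, the missing chain of definitions is short (standard algebra, in my own formulation):

```latex
% A homomorphism between two structures (G, \cdot) and (H, \ast)
% is a structure-preserving map:
h : G \to H \quad\text{with}\quad h(a \cdot b) \;=\; h(a) \ast h(b)
% An isomorphism is a bijective homomorphism;
% an automorphism is an isomorphism from a structure to itself,
% i.e. the special case H = G.
```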
Aside from the criticisms expressed above, I must say that I consider there is much to recommend this book, which presents a wealth of logical material in a clear way.
Gamut, L.T.F. (1990), Language, Logic and Meaning. Chicago: University of Chicago Press.
Hodas, Joshua (1997), “A Linear Logic treatment of Phrase Structure Grammars for unbounded dependencies”. In Lecomte, A., Lamarche, F., Perrier, G. (Eds), Logical Aspects of Computational Linguistics, Second Conference, Nancy, France, September 1997. Berlin: Springer: 160-179.
Lecomte, Alain & Rétoré, Christian (2001), “Extending Lambek grammars: A logical account of minimalist grammars”. Proceedings of the 39th Annual Meeting of the Association for Computational Linguistics: 354-361.
Moot, Richard & Rétoré, Christian (2012), The Logic of Categorial Grammars: A Deductive Account of Natural Language Syntax and Semantics. Berlin: Springer.
Morrill, Glynn (2011), Categorial Grammar: Logical Syntax, Semantics, and Processing. Oxford: Oxford University Press.
Preller, Anne (2007), “Linear Processing with Pregroup Grammars”. In W. Buszkowski et al. (Eds), Studia Logica 87, 2/3: 171-197.
Pustejovsky, James (1995), The Generative Lexicon. Cambridge, MA: The MIT Press.
Stabler, Edward (1997), “Derivational Minimalism”. Logical Aspects of Computational Linguistics, Lecture Notes in Computer Science, Vol. 1328. Berlin: Springer: 68-95.
Cercles © 2016