On the day before the main summer school---June 22---we will offer three bootcamp courses:
Logic Meets Language: Formal Foundations of Semantics (🎥): Shahriar Hormozi, Ryan Walter Smith
Logic Meets Language: Formal Foundations of Semantics
This course invites students from diverse academic backgrounds (including linguistics, philosophy, computer science, cognitive science, and logic) who are eager to master the principles and techniques of formal semantics. Participants will gain a deep understanding of both the theoretical foundations and practical applications of formal semantics, exploring the meaning of natural language (spoken or sign) through mathematical and logical frameworks. The course emphasizes a balanced approach: while technical frameworks are crucial, they are ultimately tools for linguistic investigation. Students will engage in analyzing and comparing data-driven semantic insights, ensuring that their research stays rooted in linguistic evidence. Additionally, the course will cover a range of empirical phenomena that have shaped the field, offering a comprehensive foundation and underscoring the practical relevance of semantic analysis.
This course is ideal for students and researchers seeking an introduction to First-order Modal Logic (FOML), providing foundational tools and insights to help understand its complexities. Attendees are expected to be familiar only with the basics of propositional modal logic. FOML extends propositional modal logic by introducing quantifiers, allowing for reasoning about necessity, possibility, and other modal concepts in a first-order setting. This added expressiveness makes FOML a powerful tool for analysing multi-agent systems and for reasoning about relationships between entities under varying conditions. Hence, FOML finds applications in philosophy, computer science, linguistics, mathematics and artificial intelligence. This course emphasises the semantic approach to FOML and explores practical methods for integrating quantifiers and modal operators within the same system. Attendees will gain foundational knowledge of the semantics and associated proof theories.
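As a concrete illustration of how quantifiers and modal operators interact in FOML, here is a toy constant-domain Kripke model sketched in Python. The worlds, accessibility relation, domain, and predicate "P" are all invented for the example; it simply shows why the de dicto and de re readings can come apart.

```python
# A minimal constant-domain Kripke model for first-order modal logic.
# All names (worlds, the predicate P) are invented for this sketch.

worlds = {"w1", "w2"}
access = {"w1": {"w1", "w2"}, "w2": {"w2"}}   # accessibility relation
domain = {"a", "b"}                            # same domain at every world
# Interpretation: which individuals satisfy P at each world.
P = {"w1": {"a"}, "w2": {"b"}}

def box_exists_P(w):
    """Evaluate the de dicto formula  "necessarily, something is P"  at w."""
    return all(any(d in P[v] for d in domain) for v in access[w])

def exists_box_P(w):
    """Evaluate the de re formula  "something is necessarily P"  at w."""
    return any(all(d in P[v] for v in access[w]) for d in domain)

print(box_exists_P("w1"))  # True: every accessible world has some P-individual
print(exists_box_P("w1"))  # False: no single individual is P at both worlds
```

The divergence between the two readings is exactly the kind of quantifier/modality interaction the course's semantic approach is designed to analyse.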
Topology, Logic, and Epistemology (🎥): Adam Bjorndahl
Topology, Logic, and Epistemology
This course is an introduction to topology and an exploration of some of its applications in epistemic logic. A passing familiarity with modal logic will be helpful, but is not essential; no background in topology is assumed. We'll begin by motivating and defining standard relational structure semantics for epistemic logic, and highlighting some classic correspondences between formulas in the language and properties of the structures. Next we'll introduce the notion of a topological space using a variety of metaphors and intuitions, and define topological semantics for the basic modal language. We'll examine the relationship between topological and relational semantics, establish the foundational result that S4 is "the logic of space" (i.e., sound and complete with respect to the class of all topological spaces), and discuss richer epistemic systems in which topology can be used to capture the distinction between the known and the knowable. Roughly speaking, the spatial notion of "nearness" can be co-opted as a means of representing uncertainty. This lays the groundwork to explore some more recent innovations in this area, such as topological models for evidence and justification, information update, and applications to the dynamics of program execution.
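To make the interior semantics concrete, here is a minimal Python sketch of a finite topological model in which the box is interpreted as topological interior. The space and valuation are invented; the point is that the S4 axioms hold automatically in this setting.

```python
# A toy topological model for the basic modal language, where the box
# operator is interpreted as topological interior (illustrative sketch;
# the space and valuation are invented for the example).

X = {1, 2, 3}
opens = [set(), {1}, {1, 2}, {1, 2, 3}]  # a topology on X

def interior(A):
    """Largest open set contained in A: the semantics of the box."""
    best = set()
    for O in opens:
        if O <= A and len(O) > len(best):
            best = O
    return best

p = {1, 3}  # extension of an atomic formula p

# The characteristic S4 axioms hold in every topological space:
assert interior(p) <= p                       # T:  box p -> p
assert interior(p) <= interior(interior(p))   # 4:  box p -> box box p
print(interior(p))  # {1}: the points where p is "knowably" true
```

The assertion lines mirror soundness of S4 over topological spaces; completeness is the harder direction that the course establishes.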
These courses meet in two blocks: 10AM-1PM and 2-5PM.
The table below contains the schedule of the main summer school, from June 23-27. Some key information:
Each class meets every day at the same time for the entire week. There are 5 parallel sessions.
Locations (room numbers) will be posted closer to the event, but classes will be divided between Johnson Hall and Denny Hall on the UW campus.
Information for Zoom attendance (links, etc.) will be emailed to participants before the event begins. If you have not yet done so, please fill out the Google Form listed on the registration page.
Zoom format / information key:
🎥 = Zoom attendance + recording available
📺 = Zoom attendance + no recording available
⛔ = In-person attendance only, no Zoom option
🧑‍💻 = instructor will be remote
Time
Math/Logic
Formal Philosophy
Linguistics
Linguistics+
NLP and Beyond
9:00 - 10:20
Modern Set Theory - Mathematical Truth and the Multiverse
Michał Tomasz Godziszewski (👨💻)
Formal Methods for Fallibilism (🎥)
Sam Carter, Jeremy Goodman
Formal Methods for Fallibilism
Fallibilists hold that belief may be rational without being entailed by the evidence. Although fallibilism is widespread in contemporary epistemology, it is primarily discussed in informal settings. The focus of this course is on introducing formal methods for theorizing about fallibilism. The course will be structured around normality models, which have become increasingly popular as a framework for thinking about knowledge in a fallibilist setting. In the course of exploring these models, we will look at how standard models for epistemic and doxastic logic can be enriched with various kinds of additional structure. Students can expect to be introduced to the basics of epistemic and doxastic logic, their models, and to the applications of probabilities, subject matters and metrics in those models.
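As a rough illustration of the fallibilist picture, here is a toy normality model sketched in Python: worlds carry a normality rank, and belief is truth at the maximally normal worlds compatible with the evidence. The worlds, ranks, and proposition are invented for the sketch and are not taken from the course materials.

```python
# A toy "normality model": lower rank = more normal. Belief is truth at
# the most normal epistemically possible worlds; the stronger notion
# requires truth at every possible world. All names are invented.

rank = {"w_ok": 0, "w_odd": 1, "w_weird": 2}   # normality ordering
possible = {"w_ok", "w_odd", "w_weird"}        # worlds the evidence allows

def believes(prop):
    """Belief: prop holds at all maximally normal possible worlds."""
    best = min(rank[w] for w in possible)
    return all(prop(w) for w in possible if rank[w] == best)

def entailed_by_evidence(prop):
    """An infallible notion: prop holds at every possible world."""
    return all(prop(w) for w in possible)

ticket_loses = lambda w: w != "w_weird"   # false only at the least normal world
print(believes(ticket_loses))             # True
print(entailed_by_evidence(ticket_loses)) # False
```

The gap between the two predicates is the fallibilist's point: the belief is rational even though an abnormal counter-possibility remains open.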
This course provides a basic intro into computational models of language learning. We briefly review some known facts regarding child language development. We introduce three classes of models for language learning: rule-based symbolic models, Bayesian probabilistic models, and Neural Networks. We discuss prior and current research applying these models to better understand human language learning.
Proof-theoretic semantics offers a fresh perspective on meaning, replacing abstract "truth in models" with the dynamic process of proof construction. This course will equip you with tools to explore questions like "How do logical rules define meaning?" or "What connects human reasoning to computation?"
The Content
Proof-theoretic semantics (PTS) challenges traditional approaches to meaning by rejecting the necessity of truth conditions in favor of proof conditions. Unlike model-theoretic semantics, which defines meaning through interpretations in mathematical structures, PTS characterizes logical connectives via their governing inference rules. This aligns with inferentialism, the broader philosophical view that meaning arises from how expressions are used in deductive practices. Rooted in Wittgenstein's "meaning is use" principle, PTS gained formal rigor through mid-20th-century developments in general proof theory. Key figures like Prawitz, Dummett, and Martin-Löf built on Gentzen's systems of natural deduction and sequent calculus to establish frameworks where introduction and elimination rules define connective meanings. Their work linked philosophical insights about language use to technical innovations in normalization, cut-elimination, the Curry-Howard isomorphism, and proof-theoretic validity, creating a bridge between logical practice and semantic theory.
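The idea that introduction and elimination rules define a connective's meaning can be made concrete with a tiny Curry-Howard-style sketch in Python (the encoding is ours and purely illustrative): conjunction-introduction corresponds to pairing, the elimination rules to the two projections, and normalization to cancelling an introduction/elimination detour.

```python
# Curry-Howard in miniature: proofs of a conjunction are pairs of proofs.
# The function names are invented for this illustration.

def and_intro(proof_a, proof_b):
    return (proof_a, proof_b)   # from A and B, conclude A-and-B

def and_elim_left(proof_ab):
    return proof_ab[0]          # from A-and-B, conclude A

def and_elim_right(proof_ab):
    return proof_ab[1]          # from A-and-B, conclude B

# A "detour" (an elimination applied right after an introduction)
# normalizes away, returning the original proof -- mirroring
# proof-theoretic normalization in natural deduction.
p, q = "proof of A", "proof of B"
assert and_elim_left(and_intro(p, q)) == p
assert and_elim_right(and_intro(p, q)) == q
```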
We are unaware of many things, and unaware that we are unaware of them. But what is (un)awareness, and how does it relate to other epistemic notions such as belief, knowledge and uncertainty? In this course, we will introduce models of awareness that have been developed in philosophy, computer science and economics. The topics that we will discuss include: the problem of logical omniscience, the Dekel-Lipman-Rustichini impossibility result, syntactic vs. semantic models of awareness and their respective sound and complete axiomatizations, awareness dynamics, (un)awareness and decision theory and reverse Bayesianism.
The syntax, semantics and pragmatics of tenseless languages (⛔)
Yael Sharvit
The syntax, semantics and pragmatics of tenseless languages
Many languages seem to lack overt morphological tenses (e.g., past, present). According to some theories, such languages do not have tenses at all; they make the relevant temporal distinctions by appealing to aspect, in combination with pragmatic principles. According to other theories, languages with no overt tenses have tenses underlyingly. It is also possible that "tenseless" languages are not a uniform class (some resort to aspect+pragmatics, others have underlying tenses, and others employ some combination of these tools). The course will explore the different theoretical possibilities and the arguments that have been put forth for them. Taking the position that restrictions on the interpretation of embedded clauses (e.g., complement clauses of attitude verbs, restrictive and non-restrictive relative clauses, temporal adverbial clauses) provide the most reliable empirical basis for comparing the different theories of "tenseless" languages, the course will also explore the empirical consequences of these theories regarding embedded clauses. Finally, the course will explore the "big picture" theoretical consequences entailed by each of these explanations.
Generalized Quantifiers in the Wild: Typological Variation and Cognitive Reality (🎥)
Generalized Quantifiers in the Wild: Typological Variation and Cognitive Reality
Generalized quantifier theory (GQT), with roots in the 1980s, explores the semantics of quantifier expressions like "every," "some," "most," "infinitely many," and "uncountably many." GQT has become a cornerstone of formal semantics, logic, theoretical computer science, philosophy, psycholinguistics, and cognitive science. While excellent surveys and courses exist, they typically focus on classical GQT from a logical or linguistic perspective.
This course takes a different approach. We delve into recent non-orthodox developments that bring GQT closer to the empirical reality of language and cognition. We focus on two key areas:
Typological Variation: Building on Barwise and Cooper's seminal work on quantifier universals, we examine how logical and computational methods can explain cross-linguistic variation in quantifier expressions. For instance, why do languages tend only to lexicalize monotone quantifiers (or their conjunctions)? This burgeoning research program offers exciting new insights into the nature and limits of quantification.
Cognitive Representations: Addressing the classic philosophical debate on mental representations of meaning, we explore how people actually understand and process quantifiers. Research reveals that logically equivalent quantifiers can be cognitively distinct, and individuals vary in their interpretations. We examine how computational and psychological frameworks can be integrated with GQT to create cognitively realistic models of quantifier representation in the mind and brain.
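The set-relational view of quantifiers at the heart of GQT, and the monotonicity profiles mentioned above, can be sketched in a few lines of Python. The encoding is a standard GQT illustration rather than anything specific to this course, and "most" is simplified here to "more than half."

```python
# Generalized quantifiers as relations between sets (illustrative sketch).

def every(A, B): return A <= B
def some(A, B):  return bool(A & B)
def most(A, B):  return len(A & B) > len(A - B)   # "more than half"

# Right upward monotonicity: enlarging the second argument preserves truth.
A, B = {1, 2}, {1, 2, 3}
assert every(A, B) and every(A, B | {4})
assert some(A, B) and some(A, B | {4})
assert most(A, B) and most(A, B | {4})

# "Not all," by contrast, is right downward monotone -- one illustration
# of why monotonicity profiles bear on which quantifiers get lexicalized.
not_all = lambda A, B: not every(A, B)
assert not_all({1, 2}, {1}) and not_all({1, 2}, set())
```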
Based on our forthcoming book in the Cambridge Element Series, this course summarizes these two research strands and highlights significant open questions. We aim to demonstrate that GQT is a vibrant and evolving field with many intriguing puzzles yet to be solved.
The course will blend theoretical foundations with cutting-edge empirical research. We will:
Introduce core concepts of GQT, formal semantics, and logic, providing the necessary background for students from diverse disciplines.
Present computational cognitive modeling tools and experimental methods that together with logical and linguistic apparatus may shed new light on linguistic meanings.
Explore typological variation in quantifier expressions, examining quantifier universals and their explanations.
Analyze the "polarity effect" and other phenomena that challenge classical GQT.
Investigate individual differences in quantifier meanings and their cognitive underpinnings.
Discuss how neuroscientific research can inform our understanding of quantifiers.
Argumentation is a key reasoning paradigm that builds bridges across knowledge representation and reasoning in artificial intelligence (AI), natural argumentation in philosophy and linguistics, legal and ethical reasoning, mathematical and logical analysis, and graph-theoretic modeling. Formal and computational argumentation capture diverse forms of reasoning, especially in the presence of uncertainty and conflict. This course presents how argumentation in AI has evolved through several phases: from classical logic to nonclassical and nonmonotonic logic, to conflict management and formal argumentation, and further to argument mining and computational argumentation. This course is about combining logical methods from the area of knowledge representation and reasoning, and it provides an introduction to the three volumes of the Handbook of Formal Argumentation, particularly through the chapter "Thirteen Challenges of Formal and Computational Argumentation" in the third volume.
Theories of rational decision making often abstract away from computational and other resource limitations faced by real agents. An alternative approach known as resource rationality puts such matters front and center, grounding choice and decision in the rational use of finite resources. Anticipated by earlier work in economics and in computer science, this approach has recently seen rapid development and application in the cognitive sciences. Here, the theory of rationality plays a dual role, both as a framework for normative assessment and as a source of scientific hypotheses about how mental processes in fact work. The goal of this course will be to introduce and discuss the major conceptual, mathematical, normative, and empirical aspects of the framework.
The Many Faces of Number: Variation in Numeral-Noun Constructions
Languages often make a morpho-syntactic distinction between singular and plural marking on nouns. But what is the relation between the morphosyntactic expression of number markers (e.g. a book vs books) and their semantic interpretation? How does this relation affect the variation we observe in morphological (un)markedness in numerically-modified contexts? For example, two books in English, two book in Turkish; or two book/books in Western Armenian. It is still an open question whether these differences are only semantic, only morpho-syntactic or both. This course will present an introduction to the morpho-syntax and semantics of nominal number, with a focus on universals and constrained cross-linguistic variation. While doing so, we will also discuss related topics such as the count-mass distinction, countability, and nominal concord. We will then examine how different theories have been extended to account for the typology of numeral-noun constructions. The broader conclusion is that number marking cannot be reduced to uninterpretable agreement on the noun; instead, variation depends on the location, availability and interpretation of number features in the nominal extended projection. We will also discuss the implications that this type of proposal has for the syntax and semantics of quantity (many, much, more) and size (big, large, small) adjectives and for associative plurals.
Logic for Natural Language, Logic in Natural Language (🎥)
Logic for Natural Language, Logic in Natural Language
The overall theme of the course is inference in natural language. It will study logical systems which are relevant to semantics, as well as logical systems that carry out reasoning in languages that look like "surface forms." It also will cover more standard topics, such as natural deduction proof systems, the typed lambda calculus, and first-order logic and its decidable fragments. It will also present several completeness/decidability theorems for logical systems which are closer to natural language than first-order logic, such as extended syllogistic logics. One day will be on monotonicity calculi and how inference connects to the syntax-semantics interface in grammatical frameworks like CCG. The course is algorithmic and uses interactive computer programs (Jupyter notebooks) to illustrate much of the material.
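As an illustrative sketch of the kind of algorithmic treatment the course describes, a decision procedure for the "All X are Y" fragment of syllogistic logic fits in a few lines of Python: the premises entail "All a are b" exactly when b is reachable from a in the premise graph (plus reflexivity). This toy checker is ours, not taken from the course's notebooks.

```python
# A tiny decision procedure for the all-fragment of syllogistic logic.

def entails(premises, query):
    """premises: list of (x, y) pairs read as 'All x are y'.
    query: a pair (a, b); returns whether 'All a are b' follows."""
    a, b = query
    reachable, frontier = {a}, [a]
    while frontier:                      # graph reachability from a
        x = frontier.pop()
        for (p, q) in premises:
            if p == x and q not in reachable:
                reachable.add(q)
                frontier.append(q)
    return b in reachable

premises = [("dogs", "mammals"), ("mammals", "animals")]
print(entails(premises, ("dogs", "animals")))   # True: chained Barbara
print(entails(premises, ("animals", "dogs")))   # False: no converse
```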
Natural Language Processing and Computational Social Science
Our course will guide students on data annotation and exploration, data ethics, and computational modeling towards answering questions in social science and linguistics (particularly pragmatics). The course offers undergraduates and early graduate students an end-to-end overview of our research process towards answering linguistic and social science questions using modern NLP methods. The course is intended for anyone interested in pursuing Computational Social Science research, especially on linguistic data. Basic programming proficiency in Python will be helpful, but is not required.
Bundled modalities typically combine a quantifier with a modality semantically. In recent years, these constructions have drawn increased attention for capturing non-normal modal logics and have led to the discovery of new decidable fragments of first-order modal logic, as well as modal interpretations of various non-classical logics. This course aims to introduce the concepts, fundamental techniques, and applications of bundled modalities in areas such as epistemic logic, deontic logic, intermediate logic, and first-order modal logic.
Tree-Adjoining Grammars: Theory and implementation
This course provides an introduction to the Tree-Adjoining Grammar (TAG) formalism, with a particular focus on Lexicalized Tree-Adjoining Grammar (LTAG). It also introduces the notions of grammar engineering and parsing in the context of TAG, using tools such as XMG and TuLiPA. Throughout the course, we will highlight the importance of TAG and related formalisms in computational linguistics by providing syntactic and semantic analyses of a range of linguistic phenomena, and by exploring implementations that demonstrate the formalism's adequacy for natural language analysis.
Effectful composition in natural language semantics (🎥)
Effectful composition in natural language semantics
Computer programs are often factored into pure components (simple, total functions from inputs to outputs) and components that may have side effects: errors, changes to memory, parallel threads, aborting the current command, etc. In this course, we make the case that human languages are similarly organized around the give and pull of pure and effectful processes, and we'll aim to show how denotational techniques from computer science can be leveraged to support elegant and illuminating semantic analyses of natural language phenomena.
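To illustrate what effectful composition can look like, here is a toy Python sketch treating presupposition failure as an error effect threaded through composition, in the style of an option/Maybe monad. The lexicon and scenario are invented for the example and are not drawn from the course.

```python
# Presupposition failure as an error effect, propagated through
# composition option/Maybe-style. All names are invented.

def unit(x):
    return x  # inject a pure value (no effect)

def bind(x, f):
    """Apply f to x, propagating failure (None) automatically."""
    return None if x is None else f(x)

entities = {"france": {"king": None},        # France has no king
            "uk": {"king": "charles"}}

def the_king_of(country):
    return entities[country]["king"]         # may "fail" with None

def is_bald(x):
    return x == "charles"

# Composition never crashes: the failure effect simply propagates.
print(bind(the_king_of("uk"), lambda x: unit(is_bald(x))))      # True
print(bind(the_king_of("france"), lambda x: unit(is_bald(x))))  # None
```

Separating `unit` and `bind` from the lexical entries is the point of the design: the same composition code works for pure and effectful meanings alike.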
Current Formal Models of Counterfactuals and Causation
We use counterfactuals and causal claims either to explain the world or to change it: sociologists wonder how to fight poverty; historians ask why Rome fell; engineers want to ascertain what would have happened had the primary safety system in the Chernobyl power plant worked. This is why philosophy, linguistics, and cognitive science have long been interested in causality. The overarching aim of this course is to present participants with the latest developments in the exciting field of causal modeling. After the course, participants will have the necessary background knowledge to conduct their own research in the philosophy, linguistics, and cognitive science of causation and counterfactuals.
In recent years, modern machine learning systems have achieved unprecedented success in learning from data with minimal human guidance. In parallel to the advancements in AI, Cognitive Science has been very successful at applying a variety of computational models to human learning. Still, computational and cognitive learners are often "black boxes" lacking interpretation and explanation. How can we reason about, understand, and guide computational learning processes?
In this course, we introduce an approach for reasoning about learning that takes inspiration from Dynamic Epistemic Logic. Our lectures will feature both classical problems in learning and recent results about dynamic logics of learning. We will also provide supplementary exercises, slides, and dedicated reading material for those interested in a deeper understanding (see the relevant literature in the appendix).
Our target audience for this course is interdisciplinary, including students with backgrounds in mathematical logic, theoretical computer science, and formal philosophy, but also cognitive and social science.
This course will survey select open questions in the semantics of desire ascriptions. We'll cover topics like: What is the logic of desire? (For example: is 'want' upward monotonic?) How do various desire predicates ('want', 'wish', 'hope', 'be glad') relate to one another? How should we account for conflicting desires? (Unlike conflicting beliefs, which are irrational and potentially call for special treatments in the semantics of 'believe', conflicting desires are commonplace and (often) rational.)
The recent advent of large-scale language datasets and their associated statistical models has given rise to two major kinds of questions bearing on linguistic theory and methodology:
How can semanticists use such datasets; i.e., how can the statistical properties of a dataset inform semantic theory directly, and what guiding principles regulate the link between such properties and semantic theory?
How should semantic theories themselves be modified so that they may characterize not only informally collected acceptability and inference judgments, but statistical generalizations observed from datasets?
This course brings the compositional, algebraic view of meaning employed by semanticists into contact with linguistic datasets by introducing and applying the framework of Probabilistic Dynamic Semantics.
Formal and computational linguistic perspectives on legal interpretation (🎥)
Formal and computational linguistic perspectives on legal interpretation
This course offers a critical perspective on legal interpretation through the lenses of formal and computational linguistics. It begins with an overview of contemporary philosophical debates in statutory interpretation. Textualism, the dominant interpretive doctrine in U.S. jurisprudence, prioritizes a text's "ordinary meaning", often relying on tools such as dictionaries and the so-called canons of construction (Scalia & Garner, 2012). We will examine how these methods are deployed in practice, identifying their limitations in capturing the complexities of linguistic interpretation. Building on a tradition of scholarship at the intersection of linguistics and law (Solan, 1993; Cunningham et al., 1993; inter alia), we will then explore how formal linguistic theory addresses challenges that arise in the textualist approach. By examining past U.S. court cases, we will discuss how principles from syntax, semantics, and pragmatics can provide more rigorous and scientifically-informed guidance in hard cases of legal interpretation.
This course will also highlight the potential of computational linguistics to augment legal text analysis through data-driven approaches to the study of legal interpretation. Students will learn how tools such as automated syntactic parsing can complement insights from linguistic theory. We will also critically assess the growing role of LLMs in legal interpretation.
Prerequisites: Students should have some familiarity with formal semantics/pragmatics and basic syntax. Students are not expected to have any programming experience.