The weekly Principles of Programming and Verification (POPV) seminars are hosted by the POPV research group at Boston University. These seminars delve into various aspects of programming languages and verification, including but not limited to: program semantics, program logics, type systems, proof systems, synthesis and learning, as well as the mathematical foundations of these topics. While our focus is currently on in-person participation, we also welcome remote attendees via Zoom. Researchers from other institutions are also welcome to attend, both in person and over Zoom.

The schedule for the current semester can be found below. We also maintain an archive of past seminars. To stay updated on upcoming talks and any schedule changes, subscribe to the POPV mailing list and the POPV Google Calendar (iCal). If you are interested in presenting at our seminars or visiting the POPV group, please reach out to our organizers: Ugur Yavuz, Cheng Zhang, or Marco Gaboardi.


Schedule (Spring 2024)

During the Spring 2024 term, weekly POPV seminars are scheduled every Tuesday from 13:45 to 14:45 Eastern Time. In-person participants can enjoy lunch starting at 13:30. The Zoom link will be shared via the POPV mailing list (subscribe here), and the in-person venue is CDS 1001, unless announced otherwise.

Spring 2024 schedule

Date Speaker Title
02/06 Ankush Das (BU) Probabilistic resource-aware session types
02/13 postponed due to inclement weather
02/20 Zena Ariola (Oregon) Computation as interaction
02/27 Alley Stoughton (BU) Adding well-founded relations, induction and recursion to EasyCrypt (I)
03/05 Alley Stoughton (BU) Adding well-founded relations, induction and recursion to EasyCrypt (II)
03/12 Spring Break
03/19 Matteo Cimini (UMass Lowell) To the help of language designers with the Lang-n-X series of tools
03/26 Alex Lew (MIT) Scaling probabilistic AI with automatic differentiation of probabilistic programs
04/02 Zachery Casey (BU) Unsettling type theory, meditation on the meaning of types
04/09 Steven Holtzen (Northeastern) New methods for scalably reasoning about probabilistic systems
04/16 Chris Martens (Northeastern) In search of deductive possibility spaces
04/23 Tim Nelson (Brown) The human factors of formal methods (and formal methods education)
04/30 Ji-Yong Shin (Northeastern) The atomic distributed object for modeling and verifying strongly consistent distributed systems
05/07 Guannan Wei (Purdue)

Probabilistic resource-aware session types

Time: Feb 6, Tuesday, 14:00-15:00 EST

Speaker: Ankush Das (Boston University)

In-person Location: CDS 1001

Abstract: Session types guarantee that message-passing processes adhere to predefined communication protocols. Prior work on session types has focused on deterministic languages, but many message-passing systems, such as Markov chains and randomized distributed algorithms, are probabilistic. To implement and analyze such systems, this article develops the meta theory of probabilistic session types with an application focus on automatic expected resource analysis. Probabilistic session types describe probability distributions over messages and are a conservative extension of intuitionistic (binary) session types. To send on a probabilistic channel, processes have to utilize internal randomness from a probabilistic branching or external randomness from receiving on a probabilistic channel. The analysis for expected resource bounds is smoothly integrated with the type system and is a variant of automatic amortized resource analysis. Type inference relies on linear constraint solving to automatically derive symbolic bounds for various cost metrics. The technical contributions include the meta theory, which is based on a novel nested multiverse semantics, and a type-reconstruction algorithm that allows flexible mixing of different sources of randomness without burdening the programmer with complex type annotations. The type system has been implemented in the language NomosPro with linear-time type checking. Experiments demonstrate that NomosPro is applicable in different domains such as cost analysis of randomized distributed algorithms, analysis of Markov chains, probabilistic analysis of amortized data structures and digital contracts. NomosPro is also shown to be scalable by (i) implementing two broadcast protocols and a bounded retransmission protocol where messages are dropped with a fixed probability, and (ii) verifying the limiting distribution of a Markov chain with 64 states and 420 transitions.

Computation as interaction

Time: Feb 20, Tuesday, 14:00-15:00 EST

Speaker: Zena Ariola (University of Oregon)

In-person Location: CDS 1001

Abstract: We review the correspondence between logic and languages and then present more recent advances. By putting truth and falsehood on equal footing, we express the search for truth as a dialog between two disagreeing parties. This suggests a model of computation based on the interaction between a term and its context. We conclude by illustrating how logic has provided inspiration for a better understanding of program behavior, program compilation, and more efficient implementations.

Adding well-founded relations, induction and recursion to EasyCrypt

Time: First session: Feb 27, Tuesday, 13:45-14:45 EST; second session: Mar 5, Tuesday, 13:45-14:45 EST

Speaker: Alley Stoughton (Boston University)

In-person Location: CDS 1001

Abstract: Despite the EasyCrypt proof assistant’s many strengths, it originally lacked support for well-founded relations, induction and recursion. In this somewhat pedagogical talk, I’ll explain how I was able to implement an EasyCrypt theory providing support for these concepts, employing EasyCrypt’s higher-order logic. As we will see, my definitions and proofs directly mirrored those used in set theory, but with predicates and relations represented not as sets but as functions into the Booleans.

To the help of language designers with the Lang-n-X series of tools

Time: Mar 19, Tuesday, 13:45-14:45 EDT

Speaker: Matteo Cimini (UMass Lowell)

In-person Location: CDS 1001

Abstract: Once we have created a programming language, the job of a language designer is not finished. Language designers need tools for flexibly combining their designs and for establishing whether the properties that were intended at the time of design actually hold. In this talk, I will present the Lang-n-X series of tools, which I have created over the past few years with the help of my students. Each tool in this suite addresses an aspect of language development and analysis. To give a few examples, the Lang-n-X suite contains the following tools:

  • Lang-n-Play is a functional language with languages as first-class entities and can be used to flexibly combine parts of languages together and enable language-oriented programming,
  • Lang-n-Prove can express proofs based on the specifics of a language definition given as input (which elimination forms it has, its typing rules, evaluation contexts, and so on), and can produce mechanized proofs for classes of languages rather than a single language, and
  • Lang-n-Change can describe language transformations and can be used to automatically add, say, subtyping, to some languages.

(There are other systems in the suite.) I will give an overview of the Lang-n-X suite and I will discuss all of its systems in some detail.

Scaling probabilistic AI with automatic differentiation of probabilistic programs

Time: Mar 26, Tuesday, 13:45-14:45 EDT

Speaker: Alex Lew (MIT)

In-person Location: CDS 1001

Abstract: By automating the error-prone math behind deep learning, systems such as TensorFlow and PyTorch have supercharged machine learning research, empowering hundreds of thousands of practitioners to rapidly explore the design space of neural network architectures and training algorithms. In this talk, I will show how new programming language techniques, especially generalizations of automatic differentiation, make it possible to generalize and extend such systems to support probabilistic models. Our automation is rigorously proven sound using new semantic techniques for reasoning compositionally about expressive probabilistic programs, and static types are employed to ensure important preconditions for soundness, eliminating large classes of implementation bugs. Providing a further boost, our tools can help users correctly implement fast, low-variance, unbiased estimators of gradients and probability densities that are too expensive to compute exactly, enabling orders-of-magnitude speedups in downstream optimization and inference algorithms.

To illustrate the value of these techniques, I’ll show how they have helped us experiment with new architectures that could address key challenges with today’s dominant AI models. In particular, I’ll showcase systems we’ve built for (1) auditable reasoning and learning in relational domains, enabling the detection of thousands of errors across millions of Medicare records, and (2) probabilistic inference over large language models, enabling small open models to outperform GPT-4 on several constrained generation benchmarks.

Unsettling type theory, meditation on the meaning of types

Time: Apr 2, Tuesday, 13:45-14:45 EDT

Speaker: Zachery Casey (Boston University)

In-person Location: CDS 1001

Abstract: Dependent type theories, in the style of Martin-Löf’s intuitionistic type theory (MLTT) and the Extended Calculus of Constructions (ECC), are influenced by an understanding of (ZF) set theory. The modern perspective on (dependent) type theory is that it captures the syntax of some categorical structure. Although this generalization away from sets is welcome, it only expands the kinds of models we construct, which remain “set-like”. There is no drastic change in underlying intuitions; instead, we consider how a certain setting may influence our standard formations.

By re-evaluating what we intend in designing our type theories, theoretically and practically, we hope to move dependent type theory away from our implicit (ZF) set-theoretic biases. In doing so, we may alleviate many limitations which have arisen in the course of working with dependent type theories: certain meta-theoretic complexity, user-facing complexity, integration of non-monadic effects, linear types, and efficient execution (extraction) of programs.

We approach this process by considering in turn how each piece (equality, the inclusion predicate, predicate sentences, and meta-properties) is carried from the set-theoretic to the type-theoretic setting. Related work, in-progress research, and conjectures are discussed. Given the breadth, density, and infancy of the topic, the talk will take an informal, hybrid overview-and-introduction format, although a basic familiarity with set theory and the standard presentation of MLTT is expected.

New methods for scalably reasoning about probabilistic systems

Time: Apr 9, Tuesday, 13:45-14:45 EDT

Speaker: Steven Holtzen (Northeastern University)

In-person Location: CDS 1001

Abstract: Probabilistic systems are notoriously difficult to reason about, but this doesn’t stop people from making them: machine learning practitioners and scientists make probabilistic models for reasoning about the world, and software developers use randomness in designing their algorithms. What are the principles for designing scalable systems and tools to aid in the development, verification, and application of probabilistic uncertainty across today’s software stack? In this talk I will discuss two ongoing efforts in our group that attack this problem. The first, reasoning-via-compilation, seeks to scalably automate probabilistic reasoning by compiling probabilistic systems into tractable representations that afford efficient reasoning. I will discuss ongoing efforts and successes in applying reasoning-via-compilation to scaling probabilistic inference and decision-making. The second approach I will discuss is a deductive approach that seeks to prove facts about programs by reasoning about their behavior within a new probabilistic separation logic we call Lilac. Ultimately, I hope to show that these two complementary approaches to probabilistic reasoning ought to be part of a cohesive foundation for future scalable systems for automated (or semi-automated) reasoning about uncertainty.

In search of deductive possibility spaces

Time: Apr 16, Tuesday, 13:45-14:45 EDT

Speaker: Chris Martens (Northeastern University)

In-person Location: CDS 1001

Abstract: There are three broadly accepted lenses for giving meaning to logic programs: proof theory, which identifies execution with proof search; model theory, which assigns truth values to propositional literals; and least fixed points, which define meaning as the iteration of an immediate consequence operator to a fixed point, starting from a least element of a lattice. Which of these applies depends on which logic you choose to work in, which connectives you include from that logic, and what proof search strategies you want to employ.

When we take an interest in representing mutually exclusive choices and fully characterizing the multiple valid meanings for such a program, proof theory falls short, leading us to model theory and the notion of a set of stable models. Unfortunately, the model-theoretic viewpoint limits our domain of discourse to closed worlds of Boolean literals, erasing programmer intent along the way. We propose an alternate formulation of mutually exclusive choice for logic programs which can be given meaning by least fixed points of an immediate consequence operator, and which has a natural correspondence with stable models. We describe the power, limitations, and open problems of this approach, situating it within the concrete application context of procedural generation and generative art + design.

The human factors of formal methods (and formal methods education)

Time: Apr 23, Tuesday, 13:45-14:45 EDT

Speaker: Tim Nelson (Brown University)

In-person Location: CDS 1001

Abstract: As formal methods improve in expressiveness and power, they create new opportunities for non-expert adoption. In principle, formal tools are now powerful enough to enable developers to scalably validate realistic systems artifacts without extensive formal training. However, realizing this potential for adoption requires attention to not only the technical but also the human side—which has received extraordinarily little attention from formal-methods research.

This talk presents some of our efforts to address this gap. We apply ideas from cognitive science, human-factors research, and education theory to improve the usability of formal methods. Along the way, we identify misconceptions users hold, show how technically appealing designs that experts value may fail to help, and find that our tools may even mislead users.

The atomic distributed object for modeling and verifying strongly consistent distributed systems

Time: Apr 30, Tuesday, 13:45-14:45 EDT

Speaker: Ji-Yong Shin (Northeastern University)

In-person Location: CDS 1001

Abstract: Despite recent advances, guaranteeing the correctness of large-scale distributed systems through formal verification remains a challenge. Network and node failures are inevitable in these systems, and handling failures at an appropriate level closely relates to the ease of formal reasoning. Unfortunately, existing approaches either completely hide these failures behind an atomic state machine replication (SMR) interface or expose all network-level details, making it difficult to reason about the core system properties. This talk discusses a novel, compositional, atomic distributed object (ADO) model for modeling and verifying strongly consistent distributed systems that combines the best of both options. The object-oriented API abstracts over protocol-specific details and decouples high-level correctness reasoning from implementation choices. We demonstrate that proving properties even of composite distributed systems can be straightforward thanks to the ADO model. Widely used protocols, such as multi-Paxos, Raft, and Chain Replication, refine the ADO and can be easily verified using the ADO model. The ADO model can be extended to model and verify the reconfiguration protocols of Raft and multi-Paxos and the consensus protocols under Byzantine environments.


Past seminars