OLIVIERFEST '25: Proceedings of the Workshop Dedicated to Olivier Danvy on the Occasion of His 64th Birthday

SESSION: Workshop Dedicated to Olivier Danvy on the Occasion of His 64th Birthday (OLIVIERFEST 2025)

Controlling Copatterns: There and Back Again

Copatterns give functional programs a flexible mechanism for responding to their context, and composition can greatly enhance their expressiveness. However, that same expressive power makes it harder to precisely specify the behavior of programs. Using Danvy's functional and syntactic correspondence between different semantic artifacts, we derive a full suite of semantics for copatterns, twice. First, a calculus of monolithic copatterns is taken on a journey from small-step operational semantics to abstract machine to continuation-passing style. Then within continuation-passing style, we refactor the semantics to derive a more general calculus of compositional copatterns, and take the return journey back to derive the other semantic artifacts in reverse order.
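To fix intuitions, copatterns define a value by how it answers observations rather than by how it is constructed. The following OCaml fragment is a minimal illustrative encoding of that style (it is not the paper's calculus): a stream is given by its responses to the head and tail observations.

    (* Illustrative encoding of copattern style (not the paper's calculus):
       a stream is defined by what each observation returns:
         (count_from n).head = n
         (count_from n).tail = count_from (n + 1)          *)
    type 'a stream = { head : 'a; tail : unit -> 'a stream }

    let rec count_from n =
      { head = n; tail = (fun () -> count_from (n + 1)) }

    let rec take k s =
      if k = 0 then [] else s.head :: take (k - 1) (s.tail ())
    (* take 3 (count_from 5) = [5; 6; 7] *)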

Danvy’s Mystery Functions in Slang

Danvy introduced the concept of "mystery functions" as a way to teach multiple programming-related concepts that are often treated separately in common computer science curricula, in particular specification, implementation, testing, recursion, and induction. He presented these concepts using the Rocq Prover.

As part of a course on software correctness, we have recast Danvy's pedagogical themes using Slang, a verification-enabled subset of the Scala programming language, and its companion verification tool Logika. Together, Slang and Logika allow developers to program in a rich programming language and apply a variety of specification and verification techniques, all phrased directly in Scala/Slang syntax that is easily accessible to programmers. This synergistic combination of programming and proof features enables us to convincingly encourage students with the mantra that "doing proofs is programming" (and vice versa).

As we teach concepts using Danvy's "mystery functions" with Slang/Logika in a full programming language that blends programs and proofs, we add another dimension to the students' learning. In this paper, we describe mystery functions in Slang and demonstrate the advances we have made in teaching the above-mentioned concepts in programming classes.
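To give a flavour of the format (with a hypothetical specification, not one taken from the paper or from Danvy's course): a mystery function is presented only through its properties; students conjecture a closed form, test it, and then prove it by induction. A minimal OCaml rendition:

    (* Hypothetical example: a mystery function f is specified only by
         f 0 = 0    and    f (n + 1) = f n + 2 * n + 1.
       Conjecture: f n = n * n.  Testing the conjecture against the
       specification mirrors the base case and inductive step of a proof. *)
    let conjecture n = n * n

    let () =
      assert (conjecture 0 = 0);
      for n = 0 to 100 do
        assert (conjecture (n + 1) = conjecture n + 2 * n + 1)
      done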

Defining Algebraic Effects and Handlers via Trails and Metacontinuations

This paper investigates algebraic effects and handlers from the viewpoint of well-established work on the four delimited continuation constructs known as shift, control, shift0, and control0. The whole development follows what Danvy and his colleagues have laid out. We start by defining two definitional interpreters for algebraic effects and handlers based on trails as introduced for dynamic continuation-passing style translation and metacontinuations as introduced for a hierarchy of control operators. Once definitional interpreters are obtained, we transform them using well-known meaning-preserving program translations to obtain abstract machines for algebraic effects and handlers. We also derive monomorphic type systems for algebraic effects and handlers by assigning types to the definitional interpreters, ones similar in spirit to the type system for shift and reset with answer type modification. The resulting semantic artifacts are close to those for delimited continuation constructs, enabling cross-fertilization between delimited continuation constructs and algebraic effects and handlers.
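As a concrete point of reference (not the paper's definitional interpreters), an algebraic effect and its handler can be written directly with OCaml 5's native effect handlers; the handler receives the captured continuation of the perform site and resumes it:

    (* OCaml 5, illustrative: an effect Get and a deep handler. *)
    open Effect
    open Effect.Deep

    type _ Effect.t += Get : int Effect.t

    let result =
      match_with
        (fun () -> perform Get + perform Get)   (* computation using Get *)
        ()
        { retc = (fun v -> v);                  (* return clause *)
          exnc = raise;
          effc = (fun (type a) (eff : a Effect.t) ->
            match eff with
            | Get -> Some (fun (k : (a, _) continuation) -> continue k 21)
            | _ -> None) }
    (* result = 42 *)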

Simple Closure Analysis Revisited

Simple closure analysis (SCA) distills the essence of the machinery underlying guaranteed (near-)linear time constraint-based program analyses that emerged in the early 1990s. It involves three key steps. First, reduction to a constraint problem that captures abstract value flow in higher-order programming languages. Second, constraint normalization by instrumented unification closure. Finally, interpretation of the normalized constraints in the program analysis problem at hand.
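The second step is where the near-linear bound comes from: equalities between flow variables are closed under a union-find structure, as in first-order unification. A minimal illustrative sketch in OCaml (flow variables numbered 0 to n-1; not the report's instrumented algorithm):

    (* Union-find core of unification closure (illustrative). *)
    let n = 8                                    (* number of flow variables *)
    let parent = Array.init n (fun i -> i)

    let rec find i =                             (* with path compression *)
      if parent.(i) = i then i
      else (let r = find parent.(i) in parent.(i) <- r; r)

    let union i j =                              (* merge equivalence classes *)
      let ri = find i and rj = find j in
      if ri <> rj then parent.(ri) <- rj

    (* Normalizing constraints between variables 0, 1, and 2: *)
    let () = union 0 1; union 1 2; assert (find 0 = find 2)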

In the present paper we revisit SCA and explain its original rationale of combining excellent asymptotic and practical run-time efficiency with guaranteed semantic robustness properties. We point out that SCA models induced flows bidirectionally, but keeps directional surface flow information. This preserves valuable directional flow information while breaking through the inherent quasi-cubic complexity of full closure analysis (0CFA), in which induced flows are also directional. We review some of the subsequent applications, extensions, and results on semantic robustness of SCA. We suggest that its modularity and the efficiency of its core data structure, a contracted and closed flow graph on abstract closures and structured data, be kept in mind for algorithmically efficient and semantically robust program analyses on large code bases.

This paper contains the original technical report on Simple Closure Analysis, with a few typographical changes and added explanations for clarification. It was posted on an FTP server at DIKU in 1992 and influenced several subsequent works, but was never submitted for publication (until now). It is a (very) late response to Olivier Danvy's original invitation to submit it to Higher-Order and Symbolic Computation, with my heartfelt thanks for his encouragement.

Redundancy Checking in Reversible Flowcharts via Logic-Based Operational Semantics

This study addresses the problem of detecting redundant assertions in reversible programs. We investigate an interpretive method for the automatic verification of assertion redundancy by checking satisfiability of Constrained Horn Clauses (CHCs), a fragment of first-order logic. By formalizing a transition semantics of reversible flowcharts in CHC and using well-known examples, including Dijkstra’s permutation-to-code algorithm, we show that standard techniques and tools developed in the field of CHC-based verification appear suitable for addressing this unconventional optimization problem. The scalability of the approach may be achieved through techniques such as CHC specialization and abstract interpretation.
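For concreteness, the shape of such an encoding (with hypothetical predicate names, not the paper's exact clauses): given an initial condition init, a transition relation trans, and an assertion a at a program point, the clauses

    init(x) → p(x)
    p(x) ∧ trans(x, x′) → p(x′)
    p(x) ∧ ¬a(x) → false

are satisfiable exactly when some inductive invariant p implies the assertion, in which case the run-time check of a is redundant and can be removed.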

Continuations in Music

Continuations are useful in both programming languages and natural language. Given successful applications in both kinds of languages, it is tempting to explore the usefulness of continuations in other forms of languages, such as music. We identify common patterns in music that can be described as manipulation of continuations. The patterns guide us in how to design and implement musical continuations.

On the Structure of Abstract Interpreters

Static analysis aims at computing semantic properties of programs. Abstract interpretation provides a framework for designing static analyses: it divides the construction of a static analysis into the definition of abstract domains, which describe families of logical predicates together with operations to reason about them, and the semantics-guided formalisation of abstract interpreters, which rely both on the abstract domains and on the concrete semantics. A large part of the research on static analysis focuses on the design of novel abstract domains, with ever more expressive and/or efficient computer representations for semantic properties. In this short paper, we consider more specifically the core of abstract interpreters (also called abstract iterators) and discuss several techniques for building them that are inspired by functional programming. First, we briefly discuss common iteration techniques based on control-flow graphs, which have often been used for program analyses aimed at computing state properties. Second, we consider iteration techniques that borrow principles from denotational semantics and are typically defined by induction over the syntax of programs. Last, we extend the latter family of techniques to relational abstractions that capture relations between program inputs and outputs.
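The second family of techniques can be summarized in a few lines: the abstract interpreter is a function defined by induction over the syntax, invoking the abstract domain's operations at each construct. An illustrative OCaml sketch on a toy sign domain (not an analysis from the paper):

    (* Abstract interpretation by induction over syntax (illustrative). *)
    type sign = Bot | Neg | Zero | Pos | Top     (* abstract domain *)
    type expr = Num of int | Add of expr * expr

    let alpha n = if n < 0 then Neg else if n = 0 then Zero else Pos

    let add_sign a b = match a, b with           (* abstract addition *)
      | Bot, _ | _, Bot -> Bot
      | Zero, s | s, Zero -> s
      | Pos, Pos -> Pos
      | Neg, Neg -> Neg
      | _ -> Top

    let rec abstract_eval = function             (* the abstract iterator *)
      | Num n -> alpha n
      | Add (e1, e2) -> add_sign (abstract_eval e1) (abstract_eval e2)
    (* abstract_eval (Add (Num 2, Num 3)) = Pos *)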

A Compositional Semantics for eval in Scheme

The Scheme programming language was introduced in 1975 as a lexically-scoped dialect of Lisp. It has subsequently been improved and extended through many rounds of standardization, documented by the Scheme Reports.

In Lisp, eval is a function taking a single argument; when the value of the argument represents an expression, eval proceeds to evaluate it in the current environment. The Scheme procedure eval is similar, but takes an extra argument that determines the evaluation environment.

The so-called classic standards for Scheme and its latest modern standard all include a denotational semantics of core expressions and of some required procedures. However, the semantics does not cover eval. When Muchnick and Pleban compared the denotational semantics of Lisp and Scheme in 1980, they wrote that it was not possible to give a structural semantics of eval expressions. Similarly, in 1984, Clinger pointed out that the semantics of eval violates compositionality; he suggested defining eval using a non-compositional copy of the semantic function.

This paper adapts Clinger’s suggestion by letting the semantic function take an extra argument, then defining that argument by an explicit fixed point. The resulting semantics of eval expressions is fully compositional. The well-formedness of the semantics has been tested using a lightweight Agda formalization; verifying that the denotations have expected properties is work in progress.
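The construction can be sketched in a few lines of OCaml (illustrative, not the Agda formalization): the compositional semantic function sem recurs only on subterms, the meaning of eval arrives as the extra argument ev, and the knot is tied by an explicit fixed point.

    (* Illustrative sketch of the fixed-point construction. *)
    type expr = Num of int | Add of expr * expr | Quote of expr | Eval of expr
    type value = Int of int | Expr of expr

    let rec sem ev e = match e with
      | Num n -> Int n
      | Add (e1, e2) ->
          (match sem ev e1, sem ev e2 with
           | Int a, Int b -> Int (a + b)
           | _ -> failwith "type error")
      | Quote e' -> Expr e'                  (* an expression as a value *)
      | Eval e' ->
          (match sem ev e' with
           | Expr q -> ev q                  (* defer to the extra argument *)
           | _ -> failwith "type error")

    let rec eval e = sem eval e              (* the explicit fixed point *)
    (* eval (Eval (Quote (Add (Num 1, Num 2)))) = Int 3 *)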

How to Fold a Tree: Programming Exercises on Calder’s Mobiles

We explore several ways of folding over a binary tree, and illustrate their use to solve programming exercises on Calder's mobiles, focusing on algorithms that perform a single tree traversal. Dedicated to Olivier Danvy on the occasion of his 64th birthday.
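One representative exercise, in the simple variant that ignores arm lengths (this particular solution is illustrative, not necessarily one from the paper): decide whether a mobile balances, in a single traversal, by propagating the weight of each balanced sub-mobile upward.

    (* A mobile is a weight or a beam carrying two sub-mobiles; it is
       balanced when every beam's two sides carry equal weight.
       One traversal: Some total weight if balanced, None otherwise. *)
    type mobile = Weight of int | Beam of mobile * mobile

    let rec balanced = function
      | Weight w -> Some w
      | Beam (l, r) ->
          (match balanced l, balanced r with
           | Some wl, Some wr when wl = wr -> Some (wl + wr)
           | _ -> None)
    (* balanced (Beam (Weight 2, Beam (Weight 1, Weight 1))) = Some 4 *)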

Understanding Linux Kernel Code through Formal Verification: A Case Study of the Task-Scheduler Function select_idle_core

On the one hand, the Linux kernel task scheduler is critical to all application performance. On the other hand, it is widely agreed that its code complexity is spiraling out of control, and that only a tiny handful of kernel developers understand it. We are exploring the opportunities and challenges in applying formal verification to Linux kernel task-scheduler code. Building on previous work focusing on the evolution of the function should_we_balance, here we consider a version of the key task-placement function select_idle_core and the evolution of the iterators on which it relies.

Verified Nanopasses for Compiling Conditionals

We present a proof of correctness in Agda for four nanopasses that translate a source language with let binding, integer arithmetic, conditional expressions and Booleans into an x86-flavored register transfer language. The most interesting of these four nanopasses is a translation of conditional expressions into goto-style control flow that uses the continuation-oriented approach of Olivier Danvy's one-pass transformation into monadic normal form (2003).
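The continuation-oriented idea can be conveyed in a few lines of OCaml (an illustrative sketch, not the verified Agda development): a condition is compiled by threading the labels of its then- and else-targets, so negation swaps the labels and conjunction chains them.

    (* Short-circuiting conditions into goto-style blocks (illustrative). *)
    type cond = Atom of string | Not of cond | And of cond * cond
    type label = int
    type block = Branch of string * label * label  (* test, then, else *)

    let blocks : (label * block) list ref = ref []
    let fresh = let c = ref 0 in fun () -> incr c; !c

    (* compile c t e: emit blocks that jump to t if c holds, else to e;
       the labels t and e play the role of two continuations. *)
    let rec compile c t e = match c with
      | Atom x ->
          let l = fresh () in
          blocks := (l, Branch (x, t, e)) :: !blocks; l
      | Not c' -> compile c' e t                 (* swap the continuations *)
      | And (c1, c2) ->
          let l2 = compile c2 t e in
          compile c1 l2 e                        (* false short-circuits to e *)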

What I Always Wanted to Know about Second Class Values

Second-class values are allocated on the run-time stack, and they may contain pointers to other values on the stack. They were first discussed in connection with the infamous funarg problem, but were then forgotten as heap-allocated closures were discovered.

Recent work has resurrected interest in second-class values, as they allow us to safely allocate some data structures (e.g., closures) on the run-time stack. This approach has the advantage of avoiding the cost and the unpredictable timing of garbage collection for these structures, as their deallocation takes no extra time when the stack is popped. A system with qualified types ensures that second-class values do not escape across stack pops.

We take a second look at this work and extend and modify its formal framework with first-class and second-class references in the style of ML. Along the way we answer a question that left us mystified in the previous work: what, exactly, is the meaning of a type qualifier?

Untyped Logical Relations at Work: Control Operators, Contextual Equivalence and Full Abstraction

While syntactic logical relations have been defined and applied primarily for calculi with types, it has been occasionally demonstrated that the technique can be employed in an untyped context as well, by replacing the type structure with step-indexing. In this article, we show how untyped step-indexed logical relations can be used both to prove contextual equivalence of expressions within a single calculus and to establish strong properties of program translations from one calculus to another.

To this end, we focus on the untyped call-by-value λ-calculus extended with the standard abortive control operators call/cc and abort, for which we define biorthogonal step-indexed binary logical relations. In contrast to the existing untyped logical relations, which are defined intensionally, i.e., based on constructors, our definition is extensional, i.e., based on eliminators. Such an approach scales better to richer languages, but it poses certain technical challenges that we show how to overcome. Using these relations we verify some interesting program equivalences, one of which is known to be hard to prove with the existing techniques such as bisimulations. We then define untyped inter-language logical relations in order to present an elegant proof of the full abstraction theorem for a standard CPS translation from the calculus with control operators to (a fragment of) the pure λ-calculus.

Encoding Product Types

Can product types be encoded in simply-typed lambda calculus with base types and function types? It depends.
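A hint at the answer, in illustrative OCaml: the continuation-passing (Church) encoding of pairs works uniformly under polymorphism, but in the simply-typed calculus the answer type must be fixed in advance, and different projections want different answer types.

    (* The CPS encoding of pairs, parameterized by an answer type 'r. *)
    type ('a, 'b, 'r) pair = ('a -> 'b -> 'r) -> 'r

    let pair x y = fun k -> k x y
    let first p = p (fun x _ -> x)     (* needs 'r = 'a *)
    let second p = p (fun _ y -> y)    (* needs 'r = 'b *)
    (* first (pair 1 "one") = 1 and second (pair 1 "one") = "one",
       but the two uses instantiate the encoding at different 'r. *)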

A Pair of tricks

One way to ascertain the quality of generated code -- its performance, hygiene, memory safety, deadlock freedom, etc. -- is to decorate it with annotations that explicate some aspect of the code: e.g., the count of particular operations, its free variables, the sequence of IO or memory allocation operations, or correctness proof obligations. The annotations are built up as the code is generated, and may be used to detect problems like scope extrusion or deadlock -- well before the whole program is generated, let alone executed. Annotations are a worthy alternative to fancy types.

Generating code along with annotations is almost straightforward. The challenge comes from generating functions or other variable-binding forms with the often-used higher-order abstract syntax (HOAS). That was the challenge posed by Olivier Danvy.
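In tagless-final notation the challenge is easy to state (an illustrative rendition of the problem, not the paper's solution): the pointwise product of two interpreters is immediate for first-order operations but breaks down at the binding form lam.

    module type SYM = sig
      type 'a repr
      val int : int -> int repr
      val add : int repr -> int repr -> int repr
      val lam : ('a repr -> 'b repr) -> ('a -> 'b) repr
    end

    module Pair (A : SYM) (B : SYM) = struct
      type 'a repr = 'a A.repr * 'a B.repr
      let int n = (A.int n, B.int n)
      let add (a1, b1) (a2, b2) = (A.add a1 a2, B.add b1 b2)
      (* lam f: f consumes a *pair* of representations, but A.lam wants
         a function on A.repr alone -- there is no B.repr to pair with
         the fresh A-variable.  This mismatch is the challenge. *)
    end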

The present paper describes a new, simpler, and general solution to the challenge -- in fact, to its general form: pairing of HOAS generators, or direct products of algebra-like structures with HOAS operations. The solution also overcomes a hitherto unsolved HOAS problem: latent effects.

Generic Reduction-Based Interpreters

Reduction-based interpreters are traditionally defined in terms of a one-step reduction function which systematically decomposes a term into a potential redex and context, contracts the redex, and recomposes it to construct the new term to be further reduced. While implementing such interpreters follows a systematic recipe, they often require interpreter engineers to write a substantial amount of code---much of it boilerplate. In this paper, we apply well-known techniques from generic programming to reduce boilerplate code in reduction-based interpreters.
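The non-generic baseline, on which the paper's generic-programming techniques improve, looks as follows for a toy arithmetic language (an illustrative sketch):

    (* Reduction-based interpretation: decompose, contract, recompose. *)
    type term = Num of int | Add of term * term
    type ctx = Hole | AddL of ctx * term | AddR of int * ctx

    (* decompose: find the leftmost-innermost redex and its context. *)
    let rec decompose c = function
      | Add (Num a, Num b) -> Some (c, a, b)
      | Add (Num a, t) -> decompose (AddR (a, c)) t
      | Add (t1, t2) -> decompose (AddL (c, t2)) t1
      | Num _ -> None

    (* recompose: plug a term back into its context. *)
    let rec recompose c t = match c with
      | Hole -> t
      | AddL (c, t2) -> recompose c (Add (t, t2))
      | AddR (a, c) -> recompose c (Add (Num a, t))

    let rec reduce t = match decompose Hole t with
      | None -> t
      | Some (c, a, b) -> reduce (recompose c (Num (a + b)))  (* contract *)
    (* reduce (Add (Add (Num 1, Num 2), Num 3)) = Num 6 *)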

Property-Based Testing of OCaml 5’s Runtime System: Fun and Segfaults with Interpreters and State Transition Functions

We describe our effort to use property-based testing to test the OCaml 5 multicore runtime system. In particular, we cover three case studies of increasing complexity that utilize a model-based state machine framework: (a) testing the Array module, (b) testing weak hash sets, and (c) testing the garbage collector, with the latter two behaving non-deterministically from the point of view of the black-box testing process. We evaluate the approach empirically by analyzing the bugs found, and discuss both limitations of the approach and challenges we met along the way.
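A hand-rolled miniature of the model-based idea (illustrative, using the qcheck library directly rather than the state-machine framework used in the paper): random command sequences are run against both the system under test and a pure model, and the two must agree.

    (* Model-based test of Stack against a list model (illustrative). *)
    type cmd = Push of int | Pop

    let gen_cmds =
      QCheck.(list (oneof [map (fun i -> Push i) small_int; oneofl [Pop]]))

    let agrees cmds =
      let s = Stack.create () in
      let model = ref [] in                       (* the pure model *)
      List.for_all
        (fun c -> match c with
           | Push i -> Stack.push i s; model := i :: !model; true
           | Pop ->
               let got = Stack.pop_opt s in
               let expected = match !model with
                 | [] -> None
                 | x :: rest -> model := rest; Some x
               in
               got = expected)
        cmds

    let test = QCheck.Test.make ~name:"stack vs list model" gen_cmds agrees
    (* run with, e.g., QCheck_runner.run_tests_main [test] *)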

Safe-for-Space Linked Environments

Techniques for implementing a call-by-value λ-calculus are well known, including Reynolds's definitional interpreter and techniques that Danvy developed for moving between reduction rules and abstract machines. Most techniques focus on ensuring that an implementation produces the same evaluation result as a model, but time and space properties are also within reach. Proper tail-call handling falls out of Reynolds's approach, for example, but the stronger property of space safety is less readily available, particularly if worst-case time complexity matters. In this paper, we explore an approach to space safety that is realized through the garbage collector, instead of the interpreter loop, in the hope of finding a convenient implementation technique that matches the asymptotic time bounds of fast evaluation models and the asymptotic space bounds of compact evaluation models. We arrive within a size-of-source factor of achieving those bounds. Our implementation technique is comparable in complexity to some other interpreter variants; specifically, it requires some library functions for binary trees, specialized environment traversals in the garbage collector, and a compilation pass to gather the free variables of each expression and to rewrite each variable reference to its binding depth.
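The flavour of that compilation pass, in an illustrative OCaml sketch (simplified; the paper's environments are binary trees rather than lists): gather the free variables of an expression and rewrite each variable reference to its binding depth.

    type expr = Var of string | Lam of string * expr | App of expr * expr
    type cexpr =
      | CVar of int                  (* binding depth, innermost = 0 *)
      | CLam of cexpr
      | CApp of cexpr * cexpr

    (* Free variables (duplicates possible; deduplicate as needed). *)
    let rec free_vars bound = function
      | Var x -> if List.mem x bound then [] else [x]
      | Lam (x, e) -> free_vars (x :: bound) e
      | App (e1, e2) -> free_vars bound e1 @ free_vars bound e2

    (* Rewrite each variable reference to its depth in the environment. *)
    let rec compile env = function
      | Var x ->
          let rec index i = function
            | [] -> failwith ("unbound: " ^ x)
            | y :: _ when y = x -> i
            | _ :: rest -> index (i + 1) rest
          in
          CVar (index 0 env)
      | Lam (x, e) -> CLam (compile (x :: env) e)
      | App (e1, e2) -> CApp (compile env e1, compile env e2)
    (* compile [] (Lam ("x", Lam ("y", Var "x"))) = CLam (CLam (CVar 1)) *)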

Towards Metaprogramming Defunctionalization in Rocq

Defunctionalization, a process that transforms higher-order programs into first-order ones, has been heavily studied for its structure, its use cases, its logical semantics, and more. This extended abstract is an invitation to study defunctionalization through the prism of a shallow embedding in an interactive theorem prover. We use metaprogramming techniques to automate the program transformation and the proof generation, with an initial implementation in Rocq. The long-term goals are to define defunctionalization for a large subset of Type Theory, to generate proofs of properties such as semantics and termination preservation, and more generally to use metaprogramming with strong guarantees to reach a better understanding of this program transformation.
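For readers new to the transformation, the classic example (in illustrative OCaml rather than Rocq): each lambda abstraction becomes a constructor of a first-order datatype, and a single apply function dispatches on it.

    (* Higher-order original *)
    let compose f g = fun x -> f (g x)
    let double = fun x -> 2 * x
    let incr = fun x -> x + 1
    let ho = compose double incr            (* fun x -> 2 * (x + 1) *)

    (* Defunctionalized: one constructor per lambda, one apply function *)
    type fn = Double | Incr | Compose of fn * fn

    let rec apply f x = match f with
      | Double -> 2 * x
      | Incr -> x + 1
      | Compose (f, g) -> apply f (apply g x)

    let fo = Compose (Double, Incr)
    (* ho 3 = apply fo 3 = 8 *)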

Invertible Syntax without the Tuples (Functional Pearl)

In the seminal paper "Functional Unparsing", Olivier Danvy used continuation passing to reanalyse printf-like format strings as combinators. In the intervening decades, the conversation shifted towards a concurrent line of work --- applicative, monadic, or arrow-based combinator libraries --- in an effort to find combinators for invertible syntax descriptions that simultaneously determine a parser as well as a printer, with more expressive power, able to handle inductive structures such as lists and trees. Along the way, continuation passing got lost. This paper argues that Danvy's insight remains as relevant in the general setting as it was in the restricted setting of his original paper. Like him, we present three solutions that exploit continuation-passing style as an alternative to both dependent types and monoidal aggregation via nested pairs, in our case to parse and print structured data with increasing expressive power.
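Danvy's original trick fits in a few lines (an illustrative OCaml miniature of functional unparsing, not this paper's combinators): each directive is a function in continuation-passing style, directives compose by ordinary function composition, and so the format itself determines the type of the remaining arguments.

    let lit s k acc = k (acc ^ s)              (* literal text *)
    let int k acc n = k (acc ^ string_of_int n)
    let ( ^^ ) f g k = f (g k)                 (* directive composition *)
    let sprintf fmt = fmt (fun s -> s) ""
    (* The format determines the argument types:
         sprintf (lit "x = " ^^ int ^^ lit "!") : int -> string
         sprintf (lit "x = " ^^ int ^^ lit "!") 42 = "x = 42!" *)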

Functional Programming and Computational Quantum Structures

We present a simulation of quantum systems where the state vectors and observables are polymorphic Haskell functions. The model is not restricted to qubits: it admits state spaces of infinite dimension, but with a finite number of degrees of freedom (so the subject is quantum mechanics, not quantum field theory). Since the state vectors and operators are not represented or manipulated as symbolic formulae, they are opaque, as all functions in Haskell are, and they cannot be treated as classical data. The only way to unveil their contents is to apply them. We expect from the reader a moderate familiarity with elementary quantum theory, but the part about quantum computing is kept at a basic level.

This article is accompanied by the sources of the relevant Haskell programs.

A Tale of Two Zippers

We apply the zipper construct of Huet to prove correct an optimiser for a simply-typed lambda calculus with force and delay. The work here is used as the basis for a certifying optimising compiler for the Plutus smart contract language on the Cardano blockchain.
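For reference, Huet's construct in its plainest form (the standard binary-tree zipper in illustrative OCaml, not the paper's typed term zipper): a focused subtree paired with its one-hole context, so that local moves and edits cost constant time.

    type tree = Leaf of int | Node of tree * tree
    type ctx = Top | InL of ctx * tree | InR of tree * ctx
    type zipper = tree * ctx

    let go_left (t, c) =
      match t with Node (l, r) -> (l, InL (c, r)) | _ -> failwith "leaf"
    let go_right (t, c) =
      match t with Node (l, r) -> (r, InR (l, c)) | _ -> failwith "leaf"
    let go_up (t, c) = match c with
      | InL (c, r) -> (Node (t, r), c)
      | InR (l, c) -> (Node (l, t), c)
      | Top -> failwith "top"

    (* Local edit in O(1): the tree is rebuilt only on the way back up. *)
    let set t' (_, c) = (t', c)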

The paper is an executable literate Agda script, and its source may be found in the file Zippers.lagda.md, available as an artifact associated with this paper.