Forty years ago, development started on Mix, a partial evaluator designed specifically for self-application. The effort, led by Neil D. Jones at the University of Copenhagen, eventually demonstrated that non-trivial compilers could be generated automatically by applying a partial evaluator to itself. The possibility, in theory, of such self-application had been known for more than a decade, but it remained unrealized by the start of 1984. We describe the genesis of Mix, including the research environment, the challenges, and the main insights that led to success. We emphasize the critical role played by program annotation as a pre-processing step, later automated in the form of binding-time analysis.
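For reference, the possibility alluded to above is captured by the classical Futamura projections. Writing mix for the partial evaluator, int for an interpreter, p for a source program, and [[q]] for running a program q:

    [[mix]](int, p)   = target      (1st projection: a compiled program)
    [[mix]](mix, int) = compiler    (2nd projection: a compiler derived from int)
    [[mix]](mix, mix) = cogen       (3rd projection: a compiler generator)

The second and third projections are precisely the ones that require applying the partial evaluator to itself.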
Neil Deaton Jones, professor emeritus at DIKU, the Department of Computer Science at the University of Copenhagen, passed away on March 27, 2023, shortly after his 82nd birthday. He is remembered for his seminal contributions to programming language research and the theory of computation, and for the impact his visions and his work have had on an entire generation of researchers, students, and collaborators.
This paper presents some ideas concerning program manipulation and program transformation from the early days of their development. Particular emphasis will be given to program transformation techniques in the area of functional programming and constraint logic programming. We will also indicate current applications of program transformation techniques to the verification of program properties and program synthesis.
Incremental computation aims to compute more efficiently on changed input by reusing previously computed results. We give a high-level overview of work on incremental computation and highlight the essence underlying all of it, which we call incrementalization: the discrete counterpart of differentiation in calculus. We review the gist of a systematic method for incrementalization, and of a systematic method centered around it, called Iterate-Incrementalize-Implement, for program design and optimization as well as algorithm design and optimization. At a meta-level, with historical context and an eye to future directions, we stress the power of high-level data, control, and module abstractions in developing new and better algorithms and programs, as well as their precise complexities.
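As a toy illustration of the underlying idea (an assumed Haskell example with made-up names, not the notation or method of the paper), incrementalization replaces recomputation from scratch by maintenance of the result under a small input change:

    -- recomputing the aggregate from scratch costs O(n) after every change
    sumList :: [Int] -> Int
    sumList = sum

    -- incrementalized: the aggregate is maintained alongside the data, so
    -- inserting one element updates the result in O(1)
    insertInc :: Int -> (Int, [Int]) -> (Int, [Int])
    insertInc x (s, xs) = (s + x, x : xs)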
An attempt is made to record events leading up to the 0'th PEPM Event: the October 1987 Partial Evaluation and Mixed Computation IFIP TC2 Working Conference at Gl. Avernæs, Denmark.
Strymonas is a code-generation-based library (an embedded DSL) for fast, bulk, single-thread, in-memory stream processing: stream pipelines are described declaratively, yet achieve the speed and memory efficiency of hand-written state machines. It guarantees complete stream fusion in all cases.
So far, strymonas has been used on small examples and micro-benchmarks. In this work, we evaluate strymonas on a large, real-life software-defined radio application, FM radio reception, contrasting and benchmarking it against the synchronous dataflow system StreamIt and against the state of the art, GNU Radio.
Despite being declarative, single-thread and single-core, with no explicit SIMD support and no built-in windowing or convolution, strymonas turns out to offer portable high performance, good enough for real-time FM radio reception. It is on par with GNU Radio (and outstrips it on the Raspberry Pi Zero), while providing static guarantees of complete fusion and type safety.
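To make "complete fusion" concrete, here is a generic Haskell sketch (not strymonas' actual API; the names are illustrative): a declarative pipeline and the single hand-written loop that such a pipeline should compile to, with no intermediate streams ever allocated.

    -- the declarative pipeline: the sum of the doubled even elements
    pipeline :: [Int] -> Int
    pipeline = sum . map (* 2) . filter even

    -- the fused state machine that complete stream fusion should produce:
    -- one pass, one accumulator, no intermediate lists
    fused :: [Int] -> Int
    fused = go 0
      where
        go acc []       = acc
        go acc (x : xs)
          | even x    = go (acc + 2 * x) xs
          | otherwise = go acc xs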
A program generating a co-inductive data structure is called productive if the program eventually generates every element of the data structure. We propose a new method for verifying productivity, which transforms a co-inductive data structure into a function that takes a path as an argument and returns the corresponding element. For example, an infinite binary tree is converted to a function that takes a sequence of 0s (left) and 1s (right) and returns the element at the specified position, and a stream is converted into a function that takes a sequence of the form 0^n (or simply a natural number n) and returns the n-th element of the stream. A stream-generating program is then productive exactly when the function terminates for every n. The transformation allows us to reduce the productivity verification problem to the termination problem for call-by-name higher-order functional programs without co-inductive data structures. We formalize the transformation and prove its correctness. We have implemented an automated productivity checker based on the proposed method by extending an automated HFL(Z) validity checker, which can be used as a termination checker.
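As a small Haskell sketch of the transformation (an assumed illustration with made-up names, not the formal development of the paper), a corecursively defined stream corresponds to an index function, and productivity of the former corresponds to termination of the latter at every index:

    -- a co-inductively defined stream: the naturals from a given start
    natsFrom :: Integer -> [Integer]
    natsFrom n = n : natsFrom (n + 1)

    -- the same stream presented as a function from positions to elements;
    -- natsFrom is productive exactly because natsAt terminates for every i >= 0
    natsAt :: Integer -> Integer -> Integer
    natsAt start i
      | i == 0    = start
      | otherwise = natsAt (start + 1) (i - 1)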
Using a dependently typed host language, we give a presentation, well scoped and well typed by construction, of a minimal two-level simply typed calculus with a static and a dynamic stage. The staging function, which partially evaluates the static parts of a term, is obtained by a model construction inspired by normalisation by evaluation.
We then go on to demonstrate how this minimal language can be extended to provide additional metaprogramming capabilities, and to define a higher-order functional language evaluating to digital circuit descriptions.
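As a rough Haskell sketch of the two stages (an assumed example with invented names, not the calculus of the paper), a static argument can be eliminated during staging while the dynamic argument survives as residual code:

    -- residual (dynamic-stage) code: a tiny expression AST
    data Code = Var String | Lit Int | Mul Code Code
      deriving Show

    -- a two-level power function: the exponent is static and is unfolded at
    -- staging time; the base is dynamic and remains in the residual term
    power :: Int -> Code -> Code
    power 0 _ = Lit 1
    power n x = Mul x (power (n - 1) x)

    -- power 3 (Var "x")
    --   ==> Mul (Var "x") (Mul (Var "x") (Mul (Var "x") (Lit 1)))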
Toman et al. have proposed a type system for automatic verification of low-level programs, which combines ownership types and refinement types to enable strong updates of refinement types in the presence of pointer aliases. We extend their type system to support pointer arithmetic and prove its soundness. Based on the proposed type system, we have implemented a prototype tool for automated verification of the absence of assertion errors in low-level programs with pointer arithmetic, and confirmed its effectiveness through experiments.
Many programs that solve complicated problems can be seen as inversions of other, much simpler, programs. One particular example is transforming verifiers into solvers, which can be achieved with little effort by implementing the verifier in a relational language and then executing it in the backward direction. Unfortunately, as is common with inverse computation, interpretation overhead may lead to subpar performance compared to direct program inversion. In this paper we discuss functional conversion, aimed at improving relational miniKanren specifications with respect to a known, fixed direction. Our preliminary evaluation demonstrates a significant performance increase for some programs that exemplify the approach.
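As a toy analogy in Haskell (an assumed sketch with made-up names, not miniKanren and not the paper's conversion), a relation queried in the backward direction enumerates solutions by search, while its functional conversion for a known, fixed direction computes them directly:

    -- "relational" addition used as a solver: given z, enumerate all pairs
    -- (x, y) with x + y == z by generate-and-test
    addSplits :: Int -> [(Int, Int)]
    addSplits z = [ (x, y) | x <- [0 .. z], y <- [0 .. z], x + y == z ]

    -- functional conversion for the fixed direction "z known": the test is
    -- gone, and y is computed directly from x and z
    addSplitsFun :: Int -> [(Int, Int)]
    addSplitsFun z = [ (x, z - x) | x <- [0 .. z] ]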
Flowchart languages are traditionally used to study the foundations of partial evaluation. This article presents a systematic and formal development of a method for partial evaluation of a reversible flowchart language. The results confirm that partial evaluation in this unconventional computing paradigm shows effects consistent with traditional partial evaluation. Experiments include specializing a symmetric encryption algorithm and a reversible interpreter for Bennett's reversible Turing machines. A defining feature of reversible languages is their invertibility. This study reports the first experiments composing program inversion and partial evaluation. The presented method is fully implemented. It is potentially of interest because reversible computing has found applications in areas as diverse as low-power computing, debugging, robotics, and quantum-inspired computing.
A type-preserving compiler converts a well-typed input program into a well-typed output program. Previous studies have developed type-preserving compilers for various source languages, including the simply-typed lambda calculus and calculi with control constructs. Our goal is to realize type-preserving compilation of languages that have facilities for manipulating first-class continuations. In this paper, we focus on algebraic effects and handlers, a generalization of exceptions and their handlers with resumable continuations. Specifically, we choose an effect handler calculus and a typed stack-machine-based assembly language as the source and target languages, respectively, and formalize the target language and a type-preserving compiler. The main challenge posed by first-class continuations is how to ensure the safety of continuation capture and resumption, which involves concatenation of unknown stacks. We solve this challenge by incorporating stack polymorphism, a technique that has been used for compilation from a language without first-class continuations to a stack-based assembly language. To prove that our compiler is type-preserving, we implemented the compiler in Agda as a function between intrinsically typed ASTs. We believe that our contributions could lead to correct and efficient compilation of continuation-manipulating facilities in general.
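To fix intuitions about the source-level feature (a minimal assumed Haskell sketch with invented names, not the calculus or compiler of the paper), an effect operation captures the rest of the computation as a continuation, and a handler decides how to resume it:

    -- a minimal resumable-effect sketch: 'Ask' suspends the computation and
    -- exposes the rest of it as a continuation expecting an Int
    data Comp a = Done a | Ask (Int -> Comp a)

    -- a computation performing the effect twice
    example :: Comp Int
    example = Ask (\x -> Ask (\y -> Done (x + y)))

    -- a handler that resumes every Ask with the same answer
    handleWith :: Int -> Comp a -> a
    handleWith _ (Done a) = a
    handleWith n (Ask k)  = handleWith n (k n)

    -- handleWith 21 example == 42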