Erlang 2021: Proceedings of the 20th ACM SIGPLAN International Workshop on Erlang

SESSION: Invited Talk

Fifteen years of successfully dialyzing Erlang and Elixir code (invited talk)

As various researchers have discovered the hard way, it is not easy to impose static type systems on dynamically typed languages, especially those for which a significant amount of code already exists. Still, using types to document programmer intentions and to catch some programming errors early is widely recognized as important.

This summer marked the fifteenth anniversary of the publication of the "Success Typings" paper [Lindahl and Sagonas, 2006], which described the basic idea and the first version of the type inference algorithm that the Dialyzer defect detection tool is based on. The optimistic, 'never wrong for defect detection' approach to type inference that success typings advocate has been key to Dialyzer's successful adoption, not only by Erlang programmers but, more recently, also by the Elixir community. At various points during the same period, other approaches to typing Erlang and Elixir code have not managed to gain similar levels of adoption.
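As a minimal illustration (not taken from the talk), the following module shows the kind of defect that this optimistic approach reports without any type annotations being present: the inferred success typing of double/1 rules the call in run/0 out.

    -module(demo).
    -export([run/0]).

    %% The inferred success typing of double/1 is (number()) -> number():
    %% the only way this function can succeed is with a numeric argument.
    double(X) -> X * 2.

    %% Dialyzer flags this call, since the atom 'apple' falls outside the
    %% success typing of double/1 and the call can therefore never succeed.
    run() -> double(apple).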

In this invited talk, I will present the history behind success typings, their use and evolution within Dialyzer, and some reflections on what, in my opinion, were the key ingredients of Dialyzer's success within the communities of Erlang and Elixir programmers. I will also share some thoughts on what could have been done better and/or differently, and on where to go from here.

SESSION: Papers

Graft: general purpose raft consensus in Elixir

We present Graft, a generic tool for creating distributed consensus clusters running the Raft algorithm using state machines in Elixir. Our tool exhibits performance that is comparable to that of the Raft implementation supporting the original proposal, as well as the performance of other state-of-the-art Raft implementations running on the BEAM. The correctness of our tool is also ascertained through a number of complementary verification methods.
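As a hypothetical sketch (not Graft's API, which the paper defines in Elixir), the state machines that Raft replicates are deterministic transition functions such as the key-value store below; every node applies the same committed log entries in the same order and therefore reaches the same state.

    -module(kv_machine).
    -export([init/0, apply_entry/2]).

    %% The replica's initial state.
    init() -> #{}.

    %% Deterministically apply one committed log entry, returning the
    %% reply for the client together with the next state.
    apply_entry({put, Key, Value}, State) ->
        {ok, State#{Key => Value}};
    apply_entry({get, Key}, State) ->
        {maps:get(Key, State, undefined), State};
    apply_entry({delete, Key}, State) ->
        {ok, maps:remove(Key, State)}.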

The Hera framework for fault-tolerant sensor fusion with Erlang and GRiSP on an IoT network

Classical sensor fusion approaches require working directly with the hardware and involve a lot of low-level programming, which is not suited for reliable and user-friendly sensor fusion in Internet of Things (IoT) applications. In this paper, we propose and analyze Hera, a Kalman filter-based sensor fusion framework for Erlang. Hera offers a high-level approach to asynchronous and fault-tolerant sensor fusion directly at the edge of an IoT network. We use the GRiSP-Base board, a low-cost platform specially designed to run Erlang on embedded hardware without soldering or dropping down to C. We emphasize the importance of performing all computations directly on the sensor-equipped devices themselves, completely removing the need for the cloud. We show that we can perform sensor fusion for position and orientation tracking at a high level of abstraction and with the strong guarantee that the system will keep running as long as at least one GRiSP board is alive. With Hera, the implementation effort is significantly reduced, which makes it an excellent candidate for IoT prototyping and for education in the field of sensor fusion.
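As a rough, hypothetical sketch (not Hera's API), the core computation that such a framework repeats for every incoming measurement is a Kalman predict/update step; a one-dimensional version with a constant state model looks as follows.

    -module(kalman1d).
    -export([step/4, fuse/3]).

    %% One predict/update step of a one-dimensional Kalman filter with a
    %% constant state model. The filter state is {Estimate, ErrorVariance};
    %% Q is the process noise variance, R the measurement noise variance,
    %% and Z the new sensor measurement.
    step({X, P}, Q, R, Z) ->
        Pp = P + Q,            % predict: uncertainty grows by Q
        K  = Pp / (Pp + R),    % Kalman gain
        X1 = X + K * (Z - X),  % update: blend prediction and measurement
        P1 = (1 - K) * Pp,
        {X1, P1}.

    %% Fold a list of measurements through the filter, starting from an
    %% initial estimate of 0.0 with unit variance.
    fuse(Measurements, Q, R) ->
        lists:foldl(fun(Z, S) -> step(S, Q, R, Z) end, {0.0, 1.0}, Measurements).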

Detecting oxbow code in Erlang codebases with the highest degree of certainty

The presence of source code that is no longer needed is a handicap to project maintainability. The larger and longer-lived the project, the higher the chances of accumulating dead code in its different forms.

Manually detecting unused code is time-consuming, tedious, error-prone, and requires deep knowledge of the codebase. In this paper, we examine the kinds of dead code (specifically, oxbow code) that can appear in Erlang projects, and formulate rules to identify them with high accuracy.

We also present an open-source static analyzer that implements these rules, allowing for the automatic detection and confident removal of oxbow code in Erlang codebases, actively contributing to increasing their quality and maintainability.
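As a hypothetical example (not from the paper), one of the simplest forms of oxbow code is a local function left behind by a refactoring that nothing reaches any more; other forms, such as exported-but-never-called functions or unused record fields, are much harder to spot by hand.

    -module(prices).
    -export([total/1]).

    total(Items) ->
        lists:sum([Price || {_Name, Price} <- Items]).

    %% Oxbow code: total/1 used to call this before VAT handling moved to
    %% another module; it is now neither exported nor called locally, and
    %% only a static analysis (or a compiler warning, in this simple case)
    %% will notice that it can safely be removed.
    apply_vat(Price) ->
        Price * 1.21.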

Makina: a new QuickCheck state machine library

This article presents Makina, a new library and a domain-specific language for writing property-based testing models for stateful programs. Models written in the new domain-specific language are rewritten, using Elixir macros, into normal QuickCheck state machines. Our main goals with Makina are to facilitate the task of developing correct and maintainable models, and to encourage model reuse. To meet these goals, Makina provides a declarative syntax for defining model states and model commands. In particular, Makina encourages the typing of specifications, and ensures through its rewrite rules that such type information can be used by, e.g., the Dialyzer tool to effectively typecheck models. Moreover, to promote model reuse, the domain-specific language provides constructs that permit models to be defined in terms of collections of previously defined models.
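To illustrate the rewrite target, the sketch below shows the kind of plain state-machine model that such a DSL compiles down to, written here against the open-source PropEr library's proper_statem behaviour as a stand-in for QuickCheck; the module, the counter under test, and its callbacks are illustrative, not Makina's syntax.

    -module(counter_model).
    -behaviour(proper_statem).
    -include_lib("proper/include/proper.hrl").

    -export([initial_state/0, command/1, precondition/2,
             postcondition/3, next_state/3, prop_counter/0]).
    -export([start/0, increment/1, read/0]).

    %% System under test: a counter stored in a named ETS table.
    start()      -> catch ets:new(counter, [named_table, public]),
                    ets:insert(counter, {value, 0}).
    increment(N) -> ets:update_counter(counter, value, N).
    read()       -> [{value, V}] = ets:lookup(counter, value), V.

    %% Model state: the value the counter should currently hold.
    initial_state() -> 0.

    command(_State) ->
        oneof([{call, ?MODULE, increment, [pos_integer()]},
               {call, ?MODULE, read, []}]).

    precondition(_State, _Call) -> true.

    next_state(State, _Res, {call, ?MODULE, increment, [N]}) -> State + N;
    next_state(State, _Res, {call, ?MODULE, read, []})       -> State.

    postcondition(State, {call, ?MODULE, read, []}, Res) -> Res =:= State;
    postcondition(_State, _Call, _Res)                   -> true.

    prop_counter() ->
        ?FORALL(Cmds, proper_statem:commands(?MODULE),
                begin
                    start(),
                    {_History, _State, Result} =
                        proper_statem:run_commands(?MODULE, Cmds),
                    Result =:= ok
                end).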

Bidirectional typing for Erlang

Erlang is a strict, dynamically typed functional programming language popular for its use in distributed and fault-tolerant applications. The absence of static type checking allows ill-typed programs to cause type errors at run time. The benefits of catching these type errors at compile time are the primary motivation for introducing a static type system for Erlang. The greatest challenge is to find a balance between keeping type checking sound and retaining the flexibility and philosophy of Erlang. However, since Erlang allows higher-rank polymorphism, requiring type annotations for some functions is unavoidable if decidability is to be ensured.

In this paper, we propose a static type system for Erlang based on bidirectional type checking. In bidirectional type checking, terms are either used to infer a type or are checked against a given type. With bidirectional type checking, we aim to keep type checking sound without restricting the language's philosophy. In addition, type annotations are only required when inference fails, which happens only at predictable places, such as uses of higher-rank polymorphism.
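As a small, hypothetical illustration in plain Erlang (not the paper's formal system), the two modes can be read off the module below: the type of inc/1 is inferred from its body alone, while twice/2 is checked against the programmer-supplied spec, the kind of annotation the proposed system requires at the predictable places where inference fails.

    -module(bidir_demo).
    -export([inc/1, twice/2]).

    %% Inference mode: no annotation needed; the body alone determines
    %% that inc/1 maps numbers to numbers.
    inc(X) -> X + 1.

    %% Checking mode: the spec is the "given" type that the body is
    %% checked against rather than inferred.
    -spec twice(fun((A) -> A), A) -> A.
    twice(F, X) -> F(F(X)).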

What are the critical security flaws in my system?

Delivering secure software is a challenge that every software engineering team needs to face and solve. Methods based on static analysis can help programmers identify security risks in their software. Security checkers built using static analysis are a great help, but they can overwhelm users with their findings. Today there is no security checker for Erlang that assesses the severity of each vulnerability it finds and uses that information to prioritise the vulnerabilities when presenting the results to programmers.

In this paper we discuss how to prioritise vulnerabilities in Erlang programs. We propose a static analysis that determines the severity of a vulnerability. Building on top of our previous work, we extend the trust zone analyser algorithm with the proposed analysis to return prioritised results to the programmers. Our early evaluation shows that the trust zone analyser is able to identify and prioritise the most critical security flaws in an Erlang system.
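As a hypothetical illustration (not the paper's analysis), the snippet below contains two findings of different severity; a prioritising checker would surface the command injection in run/1 before the unsafe decoding in handle/1.

    -module(handler).
    -export([handle/1]).

    handle(Bin) when is_binary(Bin) ->
        %% Risky: decoding untrusted input without the 'safe' option can
        %% create arbitrary atoms, opening a denial-of-service vector.
        Req = binary_to_term(Bin),
        run(Req).

    run({ping, _From}) ->
        pong;
    run({shell, Cmd}) when is_list(Cmd) ->
        %% Critical: attacker-controlled data reaches os:cmd/1, allowing
        %% arbitrary command execution (command injection).
        os:cmd(Cmd).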