Edge computing is one of the key success factors for future Internet solutions intended to support the ongoing IoT evolution. By offloading work from central infrastructure to resources closer to clients, providers can offer reliable services of higher quality. However, industry standards still lack a viable solution for edge systems with genuine sense-making capabilities when no preexisting infrastructure at all is available. The current edge model is tightly coupled to gateway devices and Internet access, even when autonomous ad hoc IoT networks could correctly perform the required tasks, in part or even entirely.
In our previous research, we introduced Achlys, an Erlang programming framework that takes advantage of the capabilities of the GRiSP embedded system to bring edge computing one step further. GRiSP is an embedded board that can be programmed directly in Erlang without deep low-level knowledge, making the extensive toolset of the Erlang ecosystem available on bare-metal hardware. We have demonstrated that our framework allows building reliable applications on unreliable networks of unreliable GRiSP nodes with a very simple programming API. In this paper, we show how Erlang can successfully be used to address edge computing challenges directly on IoT sensor nodes, taking advantage of our existing framework. We present results from distributed programs deployed at the edge, along with examples of the unique advantage offered by Erlang's higher-order and concurrent programming in achieving reliable general-purpose computing through Achlys.
Recent releases of Erlang/OTP have introduced features that can be used to improve profiling tools for systems running on the BEAM virtual machine. We discuss the need to design profiling tools with Erlang-style concurrency in mind, so that they help developers understand the performance of message passing and the utilization of processes. We propose a new approach to implementing a crucial element of such tools: the concurrent counter update mechanism. To demonstrate the limitations of current tools in this area and to verify the proposed approach, we present the results of a synthetic benchmark. The results clearly show that the proposed approach is a step towards a new generation of online profiling tools.
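As a rough illustration of the kind of building block referred to above, the following sketch (ours, not the mechanism proposed in the paper) uses the standard counters module available since Erlang/OTP 21.2 to perform lock-free concurrent counter updates from many processes:

    %% Minimal sketch of concurrent counter updates with the standard
    %% 'counters' module; illustrates the OTP primitive only.
    -module(counter_sketch).
    -export([demo/0]).

    demo() ->
        %% write_concurrency spreads updates over per-scheduler cells,
        %% so many processes can increment without contending on one word.
        Ref = counters:new(1, [write_concurrency]),
        Self = self(),
        Pids = [spawn(fun() -> counters:add(Ref, 1, 1), Self ! done end)
                || _ <- lists:seq(1, 1000)],
        [receive done -> ok end || _ <- Pids],
        counters:get(Ref, 1).  %% => 1000 once all updates are visible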
We describe a programming language called Web Prolog. We think of it as a web programming language, or, more specifically, as a web logic programming language. The language is based on Prolog, with a good pinch of Erlang added. We stay close to traditional Prolog, so close that the vast majority of programs in Prolog textbooks will run without modification. Towards Erlang we are less faithful, picking only features we regard as useful in a web programming language, e.g. features that support concurrency, distribution and inter-process communication. In particular, we borrow features that make Erlang into an actor programming language, and on top of these we define the concept of a pengine – a programming abstraction in the form of a special kind of actor which closely mirrors the behaviour of a Prolog top-level. On top of the pengine abstraction we develop a notion of non-deterministic RPC and the concept of the Prolog Web.
Energy efficiency is one of the key aspects of modern software. Therefore, we present a tool for measuring the energy consumption of Erlang programs. Using this tool, we measured the energy consumption of different basic language elements, such as data structures, higher-order functions and parallel language constructs. Based on the results of our measurements, we present refactorings that may help to decrease the energy consumption of Erlang software. The refactorings are part of the well-known RefactorErl static analyser and transformer framework for Erlang.
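By way of illustration only (this example is ours and is not one of the paper's measured refactorings), one kind of higher-order-function rewrite such a tool can suggest is replacing a map-over-filter pipeline with a single list comprehension, traversing the input once instead of twice:

    %% Before: two traversals and an intermediate list.
    squares_of_evens_before(Xs) ->
        lists:map(fun(X) -> X * X end,
                  lists:filter(fun(X) -> X rem 2 =:= 0 end, Xs)).

    %% After: one traversal, no intermediate list.
    squares_of_evens_after(Xs) ->
        [X * X || X <- Xs, X rem 2 =:= 0].

Whether such a rewrite actually reduces energy consumption is exactly the kind of question the presented measurements are meant to answer.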
Callback-oriented Erlang/OTP behaviours such as the gen_server library are susceptible to malformed requests and ill-typed messages, causing server processes to crash unless a defensive programming style is used. We contribute an alternative approach in the form of a fully automatic hybrid analysis of callback modules using a notion of type safety based upon a sub-typing relation for Erlang. A combination of compile-time type inference, automatic code injection, and modifications to the request dispatch code of gen_server are used to demonstrate how generic server processes can be protected from client-side type errors.
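As a minimal sketch (ours, not the paper's analysis or injected code) of the fragility being addressed, consider a gen_server callback whose first clause assumes an integer argument; without the hand-written defensive clause below, a client calling add("one") crashes the server process:

    -module(adder).
    -behaviour(gen_server).
    -export([start_link/0, add/1]).
    -export([init/1, handle_call/3, handle_cast/2]).

    start_link() -> gen_server:start_link({local, ?MODULE}, ?MODULE, 0, []).
    add(N)       -> gen_server:call(?MODULE, {add, N}).

    init(Sum) -> {ok, Sum}.

    %% Intended requests: {add, N} with an integer N.
    handle_call({add, N}, _From, Sum) when is_integer(N) ->
        {reply, ok, Sum + N};
    %% Defensive clause written by hand; without it, an ill-typed
    %% request has no matching clause and the server crashes.
    handle_call(Bad, _From, Sum) ->
        {reply, {error, {badarg, Bad}}, Sum}.

    handle_cast(_Msg, Sum) -> {noreply, Sum}.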
Distributed Erlang, the mechanism for transparently running Erlang programs across networked nodes, has a long history of immense usefulness, but it runs into problems once distributed systems reach a certain scale. We explain these issues and present research aimed at transparently enhancing Erlang distribution, so that changes to existing applications and systems can be avoided. We propose several research directions, together with prototype implementations, that all serve to improve on the status quo. These include using different transport protocols, generalizing implementation efforts, and incorporating routing protocols for more dynamic node constellations. We then describe some of the background and history of work on Erlang distribution scalability. We show that there is much room for improvement in the Erlang distribution layer without breaking the abstractions that developers are used to and rely on.
In this article we test an Erlang implementation of the Noise Protocol Framework using a novel form of white-box testing. We extend interoperability testing of the Erlang enoise implementation against an implementation of Noise in C. Such testing typically performs a Noise protocol handshake between the two implementations; if it succeeds, the implementations are in some sense compatible. This does not, however, detect, for example, whether keys that ought to be freshly generated are being reused. We therefore extend such interoperability testing: during the handshake, the Erlang Noise implementation is traced. The resulting protocol trace is refactored, yielding a symbolic description (a functional term) of how key protocol values are constructed using cryptographic operations and keys. Thereafter, this symbolic term is compared, using term rewriting, with a symbolic term representing the ideal symbolic execution of the tested Noise protocol handshake (i.e., the "semantics" of the handshake). The semantic symbolic term is obtained by executing a symbolic implementation of the Noise protocol that we have developed.
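A minimal sketch of the kind of call tracing applied during the handshake, using the standard dbg module; the traced module names and the run_handshake/0 driver are assumptions for illustration, not the paper's actual trace setup:

    %% Sketch only: trace calls made by the Erlang Noise implementation
    %% during a handshake, to later reconstruct how key values are derived.
    trace_handshake() ->
        dbg:tracer(),             %% start the default trace collector
        dbg:p(all, [c]),          %% enable call tracing in all processes
        dbg:tpl(enoise, x),       %% trace calls/returns in enoise (assumed module)
        dbg:tpl(crypto, x),       %% trace the cryptographic primitives
        Result = run_handshake(), %% hypothetical driver for the C interop test
        dbg:stop_clear(),         %% stop tracing and clear trace patterns
        Result.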
We present Lux, an open-source system-level test tool with Expect-like properties and support for debugging. Lux solves the problem of testing real-world scenarios with multiple concurrent sessions. A typical scenario is a server and multiple clients, with synchronisation between sessions (interaction with the server and/or clients). Lux is written (almost) entirely in Erlang, with no additional libraries, showing Erlang to be a very good fit for this problem. Nevertheless, Lux is not limited to testing systems written in Erlang; it can test any system with a text-based interface, or for which writing text-based adapters is feasible.