Read Rust

Computer Science

Covering data structures, algorithms, memory safety, etc.

Posts

Types bring order to code. For example, if a variable `i: usize` then we know `i` is a number that can be used to index a vector. Similarly, if `v: Vec<&str>` then we can be sure that `v` is a collection of strings which may be indexed but, of course, not used as an index. However, by itself `usize` doesn’t tell us how big or small the number is, and hence the programmer must still rely on their own wits, a lot of tests, and a dash of optimism to ensure that all the different bits fit properly at run-time.
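
A two-line illustration of the gap (my sketch, not Flux’s syntax): the index type-checks, yet nothing stops it from being out of bounds at run-time.

```rust
fn main() {
    let v = vec!["a", "b", "c"];
    let i: usize = 3;     // well-typed: a perfectly good index... for some other vector
    println!("{}", v[i]); // compiles, then panics at run-time: index out of bounds
}
```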

Refinements are a promising new way to extend type checkers with logical constraints that specify additional correctness requirements that can be verified by the compiler, thereby entirely eliminating various classes of run-time problems.

We’re excited to introduce Flux, a refinement type checker plugin that brings this technology to Rust.

This is the third post in a three-post series. In the first post we developed a stack-safe, ergonomic, and concise method for working with recursive data structures (using a simple expression language as an example). In the second post we made it fully generic, providing a set of generic tools for expanding and collapsing any recursive data structure in Rust.

In this post we will see how to combine these two things - expanding a structure and collapsing it at the same time, performing both operations in a single pass. In the process, we will gain the ability to write arbitrary recursive functions over traditional boxed-pointer recursive structures (instead of the novel `RecursiveTree` type introduced in my previous post) while retaining stack safety.
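
As a taste of the fused operation, here is a naive recursive sketch of my own (not the post’s stack-safe machinery): expanding a seed into layers and folding each layer immediately, so the intermediate tree never materializes.

```rust
// One layer of a binary tree, generic over what the children are.
enum Layer<A> {
    Leaf(i64),
    Node(A, A),
}

// Fused expand-then-collapse (a "hylomorphism"). Note this naive version
// still recurses on the call stack; avoiding that is the point of the post.
fn hylo<S>(
    seed: S,
    expand: &impl Fn(S) -> Layer<S>,
    collapse: &impl Fn(Layer<i64>) -> i64,
) -> i64 {
    match expand(seed) {
        Layer::Leaf(n) => collapse(Layer::Leaf(n)),
        Layer::Node(l, r) => {
            let l = hylo(l, expand, collapse);
            let r = hylo(r, expand, collapse);
            collapse(Layer::Node(l, r))
        }
    }
}

fn main() {
    // Sum a range by splitting it in half until single numbers remain.
    let sum = hylo(
        (1, 100),
        &|(lo, hi)| {
            if lo == hi {
                Layer::Leaf(lo)
            } else {
                let mid = (lo + hi) / 2;
                Layer::Node((lo, mid), (mid + 1, hi))
            }
        },
        &|layer| match layer {
            Layer::Leaf(n) => n,
            Layer::Node(a, b) => a + b,
        },
    );
    assert_eq!(sum, 5050);
}
```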

Reduction to the satisfiability problem for constrained Horn clauses (CHCs) is a widely studied approach to automated program verification. The current CHC-based methods for pointer-manipulating programs, however, are not very scalable. This paper proposes a novel translation of pointer-manipulating Rust programs into CHCs, which clears away pointers and heaps by leveraging ownership. We formalize the translation for a simplified core of Rust and prove its correctness. We have implemented a prototype verifier for a subset of Rust and confirmed the effectiveness of our method.

In this series, we will explore cellular automata, both in theory and implementation. We will begin by building a simple automaton in Rust, and then later iterate on that design in subsequent posts. In the process, we will discuss various aspects of cellular automata, and how we might generalize our implementation to any automaton.

In this post, we will conceive a simple automaton to model forest fires. We will then implement this model using our best Rusty practices, as a quick prototype to get our ideas in code.
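
As a preview of the kind of update rule the prototype might converge on, here is a simplified, deterministic sketch (no random ignition or regrowth, and only a 1D row of cells):

```rust
#[derive(Clone, Copy, PartialEq)]
enum Cell {
    Empty,
    Tree,
    Burning,
}

// One synchronous step of a toy forest-fire rule: a burning cell burns out,
// and a tree catches fire if a neighbour is burning. A fuller model adds
// probabilistic ignition and regrowth, and a 2D grid.
fn step(cells: &[Cell]) -> Vec<Cell> {
    (0..cells.len())
        .map(|i| match cells[i] {
            Cell::Burning => Cell::Empty,
            Cell::Tree => {
                let left = i > 0 && cells[i - 1] == Cell::Burning;
                let right = i + 1 < cells.len() && cells[i + 1] == Cell::Burning;
                if left || right { Cell::Burning } else { Cell::Tree }
            }
            Cell::Empty => Cell::Empty,
        })
        .collect()
}
```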

First, though, what are cellular automata?

I am planning to create a series of blog posts covering some literature research, implementations of various data structures, and our journey of creating a distributed datastore at distrentic.io.

You might be wondering why I start with a blog post explaining the Bloom Filter while I don’t have a single clue about how to create a distributed datastore. My answer is simple: “I like the idea behind it”.

This next algorithm was really fun and a bit more challenging. The algorithms book I’m working through gave several more examples of breaking a problem into a smaller, simpler problem and letting recursion work its way to a solution, like the merge sort we just looked at. Then I moved on to chapter 2, about “backtracking”, and the n Queens Problem!
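
The backtracking pattern the chapter builds towards looks roughly like this (a minimal sketch of my own, counting n Queens solutions):

```rust
// Place queens row by row; `cols[r]` is the column of the queen in row r.
// Backtrack whenever a placement attacks an earlier queen.
fn solve(n: usize, cols: &mut Vec<usize>) -> usize {
    if cols.len() == n {
        return 1; // all rows filled: one complete solution
    }
    let row = cols.len();
    let mut count = 0;
    for col in 0..n {
        let safe = cols
            .iter()
            .enumerate()
            .all(|(r, &c)| c != col && (row - r) != col.abs_diff(c));
        if safe {
            cols.push(col);
            count += solve(n, cols);
            cols.pop(); // undo the move: this is the "backtracking"
        }
    }
    count
}

fn main() {
    assert_eq!(solve(8, &mut Vec::new()), 92); // the classic answer for 8 queens
}
```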

Data-oriented design is an approach to optimising programs by carefully considering the memory layout of data structures, and their implications for auto-vectorisation and use of the CPU cache. I highly recommend watching Mike Acton’s “Data-Oriented Design and C++” talk if you haven’t seen it already.

In this post we will cover 4 cases, using criterion for benchmarking.
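
A flavour of the kind of layout contrast being measured (my sketch; the post’s actual benchmarks use criterion):

```rust
// Array-of-structs: each particle's position and velocity interleaved in memory.
struct ParticleAos {
    pos: [f32; 3],
    vel: [f32; 3],
}

// Struct-of-arrays: all positions contiguous, all velocities contiguous.
// A pass that touches only positions now streams through cache-friendly,
// auto-vectorisable memory.
struct ParticlesSoa {
    pos: Vec<[f32; 3]>,
    vel: Vec<[f32; 3]>,
}

fn sum_x_aos(ps: &[ParticleAos]) -> f32 {
    ps.iter().map(|p| p.pos[0]).sum()
}

fn sum_x_soa(ps: &ParticlesSoa) -> f32 {
    ps.pos.iter().map(|p| p[0]).sum()
}
```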

I stumbled across a very useful paper, "Syntax error recovery in parsing expression grammars" by Medeiros and Mascarenhas (2018), that I would like to share, and I'll be testing some of its concepts using a prototype parser written in Rust with the help of the nom crate.


Rust is an emerging programming language that aims at preventing memory-safety bugs without sacrificing much efficiency. The property is very attractive to developers, and many projects have started using the language. However, can Rust achieve the memory-safety promise? This paper studies the question by surveying the bug reports collected from two public datasets, Advisory-db and Trophy-cases, which contain all existing CVEs (Common Vulnerabilities and Exposures) of Rust.

Serverless containers and functions are widely used for deploying and managing software in the cloud. Their popularity is due to reduced cost of operations, improved utilization of hardware, and faster scaling than traditional deployment methods. The economics and scale of serverless applications demand that workloads from multiple customers run on the same hardware with minimal overhead, while preserving strong security and performance isolation. The traditional view is that there is a choice between virtualization with strong security and high overhead, and container technologies with weaker security and minimal overhead. This tradeoff is unacceptable to public infrastructure providers, who need both strong security and minimal overhead. To meet this need, we developed Firecracker, a new open source Virtual Machine Monitor (VMM) specialized for serverless workloads, but generally useful for containers, functions and other compute workloads within a reasonable set of constraints. We have deployed Firecracker in two publicly available serverless compute services at Amazon Web Services (Lambda and Fargate), where it supports millions of production workloads, and trillions of requests per month. We describe how specializing for serverless informed the design of Firecracker, and what we learned from seamlessly migrating Lambda customers to Firecracker.

Years ago, Andrew Kennedy published a foundational paper about a type checker for units of measure, and later implemented it for F#. To this day, F# is the only mainstream programming language which provides first-class support to make sure that you will not accidentally confuse meters and feet, euros and dollars, but that you can still convert between watt·hours and joules.

I decided to see whether this could be implemented in and for Rust. The answer is not only yes, but it was fun :)
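
A minimal sketch of one way such units can be encoded with phantom types (Kennedy’s system is richer, tracking products and powers of units through multiplication and division):

```rust
use std::marker::PhantomData;
use std::ops::Add;

struct Meters;
struct Feet;

// A quantity tagged at the type level with its unit; the tag is zero-sized
// and costs nothing at run-time.
#[derive(Clone, Copy, Debug)]
struct Qty<Unit>(f64, PhantomData<Unit>);

impl<Unit> Qty<Unit> {
    fn new(value: f64) -> Self {
        Qty(value, PhantomData)
    }
}

// Addition is only defined between quantities of the same unit.
impl<Unit> Add for Qty<Unit> {
    type Output = Qty<Unit>;
    fn add(self, other: Self) -> Self {
        Qty(self.0 + other.0, PhantomData)
    }
}

fn main() {
    let a = Qty::<Meters>::new(3.0);
    let b = Qty::<Meters>::new(4.0);
    let _sum = a + b; // fine: meters + meters

    let c = Qty::<Feet>::new(10.0);
    // let _oops = a + c; // compile error: Qty<Meters> + Qty<Feet> is not defined
    let _ = c;
}
```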

Automatically finding a program that implements a given specification is called program synthesis. The main difficulty is that the search space is huge: the number of programs of size n grows exponentially. Naïvely enumerating every program of size n, checking whether each one satisfies the specification, and then moving on to programs of size n+1 and so on doesn’t scale. However, the field has advanced by using smarter search techniques to prune the search space, leveraging performance improvements in SMT solvers, and at times limiting the scope of the problem.

In this post, I’ll explain one approach to modern program synthesis: counterexample-guided iterative synthesis of component-based, loop-free programs, as described in Synthesis of Loop-Free Programs by Gulwani et al. We’ll dissect exactly what each of those terms means, and we’ll also walk through an implementation written in Rust that uses the Z3 solver.

In trying to figure out the best reactive structure for druid, as well as how to communicate that to the world, I’ve been studying a wide range of reactive UI systems. I’ve found an incredible diversity, even though they have fairly consistent goals. This post is an attempt to find common patterns, to characterize the design space as a whole. It will be rough, at some points almost a stream of consciousness. If I had the time and energy, I think it could be expanded into an academic paper. But, for now, perhaps these rough thoughts are interesting to some people working in the space.


I have blogged a few times before about Stacked Borrows, my proposed aliasing model for Rust. But I am a researcher, and what researchers (are expected to) do is write papers—so that’s what we did for Stacked Borrows as well. After a lot of work, I can now finally present to you our paper Stacked Borrows: An Aliasing Model for Rust.

I have been reading a lot of papers on linear types recently. Originally it was to understand better why Rust went down the path it did, but I found a lot more interesting stuff there. While some people now are familiar with linear types as the basis for Rust’s memory management, they have been around for a long time and have lots of other potential uses. In particular they are interesting for improving resource allocation in functional programming languages by reusing storage in place where possible. Generally they are useful for reasoning about resource allocation. While the Rust implementation is probably the most widely used at present, it kind of obscures the underlying simple principles by adding borrowing, so I will only mention it a little in this post.

In modern runtime systems, memory layout calculations are hand-coded in low-level systems languages. The primitives in these languages are not powerful enough to describe a rich set of layouts, leading developers to rely on ad-hoc macros as well as numerous interrelated static constants and other boilerplate code. A memory management policy must also carefully orchestrate all the different address calculations to modify memory cooperatively, a task ill-suited to the low-level systems languages at hand, which do not provide the proper safety mechanisms.

In this paper we introduce Floorplan, a declarative language for specifying memory layouts at a high level. In a case study of an existing implementation of the immix garbage collection algorithm, Floorplan eliminates 55 out of the 63 unsafe lines of code: 100% of the unsafe lines of code pertaining to memory safety.

Type systems are a ubiquitous and essential part of modern programming languages. They help to ensure that separately developed pieces of a program “fit together” properly, and they help to statically prevent programs from exhibiting a large class of runtime errors.

So what can we actually prove about a type system to ensure that it is doing its job? The most common answer is type soundness (or type safety).

Rust’s type system ensures memory safety: well-typed Rust programs are guaranteed to not exhibit problems such as dangling pointers, data races, and unexpected side effects through aliased references. Ensuring correctness properties beyond memory safety, for instance, the guaranteed absence of assertion failures or more-general functional correctness, requires static program verification. For traditional system programming languages, formal verification is notoriously difficult and requires complex specifications and logics to reason about pointers, aliasing, and side effects on mutable state. This complexity is a major obstacle to the more-widespread verification of system software.

In this paper, we present a novel verification technique that leverages Rust’s type system to greatly simplify the specification and verification of system software written in Rust.

One of our clients helps companies in becoming GDPR-compliant. A goal is to recognize sensitive pieces of user data in a big pile of registrations, receipts, emails, and transcripts, and mark them to be checked out later. As more and more data is collected by companies, finding and eliminating sensitive data becomes harder and harder, to the point where it is no longer possible for mere human employees to keep up without assistance.

From<Elixir>, Into<Rust>. I loved learning the Elixir language and how its pragmatic supervision trees and process model taught me the value of fault tolerance as a quality of code rather than of infrastructure. Having safety and failure recovery as an idiomatic culture and mindset of the language made me a better thinker and developer. As a personal preference, then, in selecting new languages to learn, I look for the potentially new perspectives and insights a language ascribes to its pilgrims. In general, a good learning curve is a good indicator, since it means the language has much to teach.

This is the first post on what is supposed to be a small series about writing compression algorithms in Rust. Please take into account that I am no expert in compression, so this is the perspective of a learner. If you think there are improvements or corrections to be made, let me know.

You might be wondering why bit vectors are relevant for compression and so was I until I started reading a little bit about it. So, let's talk about compression.

We are proud to announce the release of our software codenano, available at https://dna.hamilton.ie/codenano/. Here, we give an introduction to what codenano can and cannot do. The source code for codenano is hosted in a GitHub repository, https://github.com/thenlevy/codenano, along with a short tutorial.

Codenano allows one to design and visualise DNA nanostructures specified using code, all in your browser. Codenano also has the ability to compute some simple interactions between DNA bases in order to help the user design DNA nanostructures that are feasible according to some simple criteria.

Cedarwood is an effort to speed up jieba-rs; an efficient trie implementation is needed in order to satisfy the following needs.

In some domains of programming it’s common to want to write a data structure or algorithm that can work with elements of many different types, such as a generic list or a sorting algorithm that only needs a comparison function. Different programming languages have come up with all sorts of solutions to this problem: from just pointing people to existing general features that can be useful for the purpose (e.g. C, Go) to generics systems so powerful they become Turing-complete (e.g. Rust, C++). In this post I’m going to take you on a tour of the generics systems in many different languages and how they are implemented. I’ll start from how languages without a special generics system like C solve the problem and then I’ll show how gradually adding extensions in different directions leads to the systems found in other languages.
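
Rust itself sits at the monomorphization end of that spectrum, with trait objects recovering the dictionary-passing style used elsewhere; a small sketch of both (my example, not the post’s):

```rust
use std::fmt::Display;

// Static generics: the compiler stamps out a specialized copy of this
// function for every concrete T it is used with (monomorphization).
fn print_all<T: Display>(items: &[T]) {
    for item in items {
        println!("{item}");
    }
}

// Dynamic alternative: a single compiled function that receives behaviour
// at run-time through a vtable, much like dictionary passing elsewhere.
fn print_all_dyn(items: &[&dyn Display]) {
    for item in items {
        println!("{item}");
    }
}

fn main() {
    print_all(&[1, 2, 3]);
    print_all(&["a", "b"]);
    print_all_dyn(&[&1 as &dyn Display, &"mixed", &2.5]);
}
```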

The resource-management model of C++ and Rust relies on compiler-generated destructors called predictably and reliably. In current implementations, the generated destructor consumes stack space proportional to the depth of the structure it destructs. We describe a way to derive destructors for algebraic data types that consume a constant amount of stack and heap. We discuss applicability to C++ and Rust, and also some implications for anyone wishing to extend an ML-style language with first-class resources.

This book aims to explain green threads through a small example: we implement a simple but working program that uses our own green threads to execute code.

The motivation for this post is to recount my experiences developing a scientific tool written in Rust, from the perspective of someone with a scientific background. I'll explain why I made certain choices, and I'll document the things that I struggled with along the way.

This is the first post in a series detailing Ferrous Systems' plan to qualify the Rust Language and Compiler for use in the Safety Critical domain. We call this effort Sealed Rust.

A Berlin-based technology consultancy specialising in the Rust programming language and related services.

In this post, I want to describe lifetimes in a different way than what I learned from the RFC. Audience: you may already have read the Rust Book. It's nice if you have taken a compiler course.

Every once in a while this topic comes up on social media or in a Rust user channel. I’d like to briefly describe the way I see things going, with a little bit of history as well as some information about the existing flux of Machine Learning/Deep Learning frameworks and major recent trends.

Rust is a recent programming language from Mozilla that attempts to solve these intertwined issues by detecting data races at compile time. Rust's type system encodes a data structure's ability to be shared between threads, which in turn allows the compiler to reject programs where threads directly mutate shared state without locks or other protection mechanisms. In this work, we examine how this aspect of Rust's type system impacts the development and refinement of a concurrent data structure, as well as its ability to adapt to situations where correctness is guaranteed by lower-level invariants (e.g., in lock-free algorithms) that are not directly expressible in the type system itself. We detail the implementation of a concurrent lock-free hashmap in order to describe these traits of the Rust language.
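
A small illustration of that idea (mine, not the paper’s): the compiler rejects unsynchronized sharing, while the locked version compiles.

```rust
use std::sync::{Arc, Mutex};
use std::thread;

fn main() {
    // std::rc::Rc's reference count is not atomic, so Rc<T> is not Send;
    // uncommenting these lines is a compile error:
    // let shared = std::rc::Rc::new(0);
    // thread::spawn(move || println!("{shared}")).join().unwrap();

    // Arc + Mutex encode "shareable across threads" and "mutation requires
    // locking" directly in the types, so this version compiles:
    let counter = Arc::new(Mutex::new(0));
    let handles: Vec<_> = (0..4)
        .map(|_| {
            let counter = Arc::clone(&counter);
            thread::spawn(move || *counter.lock().unwrap() += 1)
        })
        .collect();
    for h in handles {
        h.join().unwrap();
    }
    assert_eq!(*counter.lock().unwrap(), 4);
}
```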

This is the fifth and final post in my series about refactoring varisat. In the last post varisat gained the heuristics needed to solve some non-trivial instances. In this post we’ll add incremental solving and proof generation. This brings varisat to feature parity with the old version.

During my final term at UWaterloo I took the CS444 compilers class with a project to write a compiler from a substantial subset of Java to x86, with a language and two teammates of your choice. My group of three chose to write our compiler in Rust and it was a fun experience. We spent time coming to design decisions that worked out really well and used Rust’s strengths. Our compiler ended up being around 6800 lines of Rust and I personally put in around 60 hours of solid coding and more on code review and design. In this post I’ll go over some of the design decisions we made and some thoughts on what it was like using Rust.

A pragmatic new design for high-level abstractions. In this post, I’m going to describe a new approach for expressing monads in Rust. It is the most minimal design I have seen proposed and is, in my eyes, the first plausible design for such abstractions — those commonly known as “higher-kinded types”. This approach depends on a very minimal extension to Rust’s type system. In particular, this approach avoids the need for either higher-kinded types (e.g. as in this design) or full abstraction over traits (e.g. “traits for traits”). Most of the design challenges are tackled directly using existing features.

One of the promises of machine learning is to be able to use it for object recognition in photos. This includes being able to pick out features such as animals, buildings and even faces. This article will step you through using some existing models to accomplish face detection with Rust and TensorFlow.

Commonly used user space network drivers such as DPDK or Snabb currently have effectively full access to the main memory via the unrestricted Direct Memory Access (DMA) capabilities of the PCI Express (PCIe) device they are controlling. This can be a security issue, as the driver can use the PCIe device's DMA access to read and/or write to main memory. In this thesis, support for using the IOMMU via the vfio-pci driver from the Linux kernel for the user space network driver ixy was implemented in C and Rust, and the IOMMU and its impact on the drivers were investigated.

swym is a very performant Software Transactional Memory (STM) library. It uses a variation on the per-object Transactional Locking II algorithm. The paper does an excellent job explaining the algorithm, but it is not required reading for this article. swym is a generalization of seqlocks - one which the TL2 paper almost achieves, but does not, for whatever reason.

This is the fourth post in my series about refactoring varisat. In the last post we saw how conflict driven clause learning works, in this post we’re going to make it fast. To get there we add several smaller features that were already present in varisat 0.1. While there are still some things missing that varisat 0.1 supports, these are features like proof generation or incremental solving that don’t affect the solving performance.

This is the third post in my series about refactoring varisat. In this post the new code base turns into a working SAT solver. While you can use the command line tool or the library to solve some small and easy SAT problems now, there is still a lot ahead to gain feature and performance parity with varisat 0.1.

In the last post we saw how unit propagation is implemented. When some variables are known, unit propagation allows us to derive the values of new variables or find a clause that cannot be satisfied. Unit propagation alone isn’t enough though, as it is not guaranteed to make progress. To continue the search for a satisfying solution after propagating all assignments, it is necessary to make a guess. A natural way to handle this would be recursion and backtracking. This would give us a variant of the DPLL algorithm, from which conflict driven clause learning evolved.
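
A compact sketch of that recursive DPLL variant (my own, with clauses as vectors of non-zero integers, negative meaning negated, in the style of DIMACS):

```rust
// Remove satisfied clauses and falsified literals after asserting `lit`.
// Returns None if some clause becomes empty (a conflict).
fn simplify(clauses: &[Vec<i32>], lit: i32) -> Option<Vec<Vec<i32>>> {
    let mut out = Vec::new();
    for clause in clauses {
        if clause.contains(&lit) {
            continue; // clause is satisfied, drop it
        }
        let reduced: Vec<i32> = clause.iter().copied().filter(|&l| l != -lit).collect();
        if reduced.is_empty() {
            return None; // all literals falsified: conflict
        }
        out.push(reduced);
    }
    Some(out)
}

fn dpll(clauses: Vec<Vec<i32>>) -> bool {
    if clauses.is_empty() {
        return true; // every clause satisfied
    }
    // Unit propagation: a one-literal clause forces an assignment.
    if let Some(&unit) = clauses.iter().find(|c| c.len() == 1).map(|c| &c[0]) {
        return match simplify(&clauses, unit) {
            Some(rest) => dpll(rest),
            None => false,
        };
    }
    // No units left: guess a literal and backtrack on failure.
    let guess = clauses[0][0];
    if let Some(rest) = simplify(&clauses, guess) {
        if dpll(rest) {
            return true;
        }
    }
    match simplify(&clauses, -guess) {
        Some(rest) => dpll(rest),
        None => false,
    }
}

fn main() {
    // (x1 ∨ x2) ∧ (¬x1 ∨ x2) ∧ (¬x2 ∨ x3): satisfiable (e.g. x2 = x3 = true)
    assert!(dpll(vec![vec![1, 2], vec![-1, 2], vec![-2, 3]]));
    // x1 ∧ ¬x1: unsatisfiable
    assert!(!dpll(vec![vec![1], vec![-1]]));
}
```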

We left, at the end of the previous episode, with an intuitive understanding of Rust’s ownership system: we worked with vectors of integers, Vec<i32>, and we came up with a naive - but surprisingly fast! - scalar product implementation followed by a very simple sort function using the bubble sort algorithm.

In this episode we will implement a generic version of the same scalar product routine. This will require the introduction of several key concepts concerning Rust’s type system: generics, traits, operators, associated types, Copy.
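
A sketch of roughly where the generic version ends up (the trait bounds are the interesting part; `Default` stands in for a proper zero here):

```rust
use std::ops::{Add, Mul};

// T must be duplicable (Copy), multipliable, addable, and provide a starting
// value for the accumulation; Default supplies that zero for the numeric
// primitives (a dedicated Zero trait would be more precise).
fn scalar_product<T>(a: &[T], b: &[T]) -> T
where
    T: Copy + Mul<Output = T> + Add<Output = T> + Default,
{
    a.iter()
        .zip(b.iter())
        .fold(T::default(), |acc, (&x, &y)| acc + x * y)
}

fn main() {
    assert_eq!(scalar_product(&[1, 2, 3], &[4, 5, 6]), 32);
    assert_eq!(scalar_product(&[1.0f64, 2.0], &[0.5, 0.25]), 1.0);
}
```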

We created a configurable Rust library for writing and executing Wireshark®-like filters in different parts of our stack written in Go, Lua, C, C++ and JavaScript Workers. We have now open-sourced this library under our GitHub account: https://github.com/cloudflare/wirefilter. This post will dive into its design, explain why we didn’t use a parser generator, and how our execution engine balances security, runtime performance and compilation cost for the generated filters.

Rust is a major advancement in industrial programming languages due in large part to its success in bridging the gap between low-level systems programming and high-level application programming. This success has ultimately empowered programmers to more easily build reliable and efficient software, and at its heart lies a novel approach to ownership that balances type system expressivity with usability.

In this work, we set out to capture the essence of this model of ownership by developing a type systems account of Rust's borrow checker. To that end, we present Oxide, a formalized programming language close to source-level Rust (but with fully-annotated types). This presentation takes a new view of lifetimes as approximate provenances of references, and our type system is able to automatically compute this information through a flow-sensitive substructural typing judgment for which we prove syntactic type safety using progress and preservation. The result is a simpler formulation of borrow checking - including recent features such as non-lexical lifetimes - that we hope researchers will be able to use as the basis for work on Rust.

What I want to talk about though (and what I will eventually write a blog post about) is the technique that I'm using in luster for safe garbage collection. Inside luster are two libraries called "gc-arena" and "gc-sequence", and they represent a new (I believe novel?) system for safe garbage collection in Rust. There have been several attempts here before such as rust-gc and shifgrethor, and this represents another attempt with... different? limitations more appropriate for implementing language runtimes like Lua.

This is the second post in my series about refactoring varisat. Since the last post I started implementing some of the core data structures and algorithms of a CDCL-based SAT solver: clause storage and unit propagation. In this post I will explain how these parts work and the rationale behind some of the decisions I made.

My daily work revolves around building Machine Learning applications, while a lot of my evenings have been spent experimenting with Rust, getting more and more fascinated and in love with the language.

It couldn’t be helped: I started to have a look at what the Rust ecosystem had to offer for Machine Learning, Big Data and scientific computing at large. I quickly found out that there is a lot to be done and a lot of potential (see here or here). It got me really fired up 🔥

Wave Function Collapse is a procedural generation algorithm which produces images by arranging a collection of tiles according to rules about which tiles may be adjacent to each other, and about how frequently each tile should appear relative to the others. The algorithm maintains, for each pixel of the output image, a probability distribution of the tiles which may be placed there. It repeatedly chooses a pixel to “collapse” - choosing a tile to use for that pixel based on its distribution. WFC gets its name from quantum physics. The goal of this post is to build an intuition for how and why the WFC algorithm works.
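
To make that loop concrete, here is a heavily simplified one-dimensional sketch of my own (deterministic tile choice and uniform frequencies; the real algorithm works on a grid and samples from the tile distribution):

```rust
use std::collections::HashSet;

type Tile = u8;

// `allowed(a, b)` says tile b may appear immediately to the right of tile a.
fn wfc_1d(len: usize, tiles: &[Tile], allowed: &dyn Fn(Tile, Tile) -> bool) -> Option<Vec<Tile>> {
    // every cell starts in "superposition": any tile is still possible
    let mut cells: Vec<HashSet<Tile>> = vec![tiles.iter().copied().collect(); len];
    loop {
        // pick the undecided cell with the fewest remaining candidates (lowest entropy)
        let target = cells
            .iter()
            .enumerate()
            .filter(|(_, s)| s.len() > 1)
            .min_by_key(|(_, s)| s.len());
        let Some((i, _)) = target else { break };
        // collapse it; a real implementation samples by tile frequency instead
        let pick = *cells[i].iter().min().unwrap();
        cells[i] = std::iter::once(pick).collect();
        // propagate adjacency constraints until nothing changes
        let mut changed = true;
        while changed {
            changed = false;
            for j in 0..len.saturating_sub(1) {
                let right: HashSet<Tile> = cells[j + 1]
                    .iter()
                    .copied()
                    .filter(|&b| cells[j].iter().any(|&a| allowed(a, b)))
                    .collect();
                if right.len() != cells[j + 1].len() {
                    if right.is_empty() { return None; } // contradiction
                    cells[j + 1] = right;
                    changed = true;
                }
                let left: HashSet<Tile> = cells[j]
                    .iter()
                    .copied()
                    .filter(|&a| cells[j + 1].iter().any(|&b| allowed(a, b)))
                    .collect();
                if left.len() != cells[j].len() {
                    if left.is_empty() { return None; }
                    cells[j] = left;
                    changed = true;
                }
            }
        }
    }
    // every cell is now a single tile (assumes a non-empty tile set)
    Some(cells.into_iter().map(|s| s.into_iter().next().unwrap()).collect())
}

fn main() {
    // two tiles with one rule: adjacent tiles must differ
    let result = wfc_1d(5, &[0, 1], &|a, b| a != b);
    assert_eq!(result, Some(vec![0, 1, 0, 1, 0]));
}
```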

This is yet another library for writing parsers in Rust. What makes this one different is that I've combined some existing academic work in a way that I think is novel. The result is an unusually flexible parsing library while still offering competitive performance and memory usage.

Rust’s type system ensures memory safety: well-typed Rust programs are guaranteed to not exhibit problems such as dangling pointers, data races, and unexpected side effects through aliased references. Going beyond memory safety, for instance, to guarantee the absence of assertion failures or functional correctness, requires static program verification. Formal verification of system software is notoriously difficult and requires complex specifications and logics to reason about pointers, aliasing, and side effects on mutable state. This complexity is a major obstacle to a more widespread verification of system software.

In this paper, we present a novel verification technique that leverages Rust’s type system to greatly simplify the specification and verification of Rust programs. We analyse information from the Rust compiler and synthesise a corresponding core proof for the program in a flavour of separation logic tailored to automation. Crucially, our proofs are constructed and checked automatically; users of our work never work with the underlying formal logic. Users can add specifications at the abstraction level of Rust expressions; we show how to interweave these to extend our core proof to prove modularly whether these specifications are correct. We have implemented our technique for a subset of Rust; our initial evaluation on two thousand functions from widely-used Rust crates demonstrates its effectiveness.

A while back, there was a discussion comparing the performance of using the hashbrown crate (based on Google’s SwissTable implementation) in the Rust compiler. At the last RustFest, Amanieu was experimenting with integrating his crate into stdlib, which turned out to have some really promising results. As a result, there are now plans to move the crate into stdlib.

While the integration is still ongoing, there’s currently no blog post out there explaining SwissTable. So, I thought I’d dig deeper into the Rust implementation to try and explain how its (almost) identical twin hashbrown::HashMap works.
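
From the outside the twin really is identical; all the differences are in the table layout and probing scheme underneath. A small sketch that works unchanged with either map:

```rust
use hashbrown::HashMap; // swap in std::collections::HashMap and nothing else changes

fn main() {
    let mut counts: HashMap<&str, u32> = HashMap::new();
    for word in ["a", "b", "a"] {
        *counts.entry(word).or_insert(0) += 1;
    }
    assert_eq!(counts["a"], 2);
}
```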

The post before this one covered how shifgrethor handles rooting: how we track for the garbage collector that an object is alive. That isn’t sufficient for implementing a tracing garbage collector though: the idea of a tracing garbage collector is that we can trace from rooted objects through all of the objects they reference. That way, instead of having to root everything you use, you only have to root a few objects from which all of the live objects can be traced.
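
In pseudocode-ish Rust, the tracing idea looks something like this (an illustrative sketch, not shifgrethor’s actual API):

```rust
// Every GC-managed type explains how to reach the objects it references.
trait Trace {
    // Report each referenced heap object to the collector's marker.
    fn trace(&self, marker: &mut dyn FnMut(usize));
}

// Objects identified by ids for simplicity; a real GC traces typed pointers.
struct Node {
    children: Vec<usize>,
}

impl Trace for Node {
    fn trace(&self, marker: &mut dyn FnMut(usize)) {
        for &child in &self.children {
            marker(child); // whatever a live object references is also live
        }
    }
}
```

Collection then starts from the handful of roots and marks transitively; anything left unmarked is garbage.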

After the digression in the previous post, it’s time to get back to what I promised in the first post: a look at how shifgrethor handles rooting. Shifgrethor’s solution is somewhat novel and takes advantage of some of Rust’s specific features, so I want to start by looking briefly at some of the other options.

I’m really excited to share with you an experiment that I’ve been working on for the past 5 or 6 weeks. It’s a Rust library called shifgrethor. shifgrethor implements a garbage collector in Rust with an API I believe to be properly memory safe.
I’ll be going through all of the technical details in future blog posts, so I want to kick this series off with a high level overview of the project’s purpose and design decisions.

We introduce partially-stateful data-flow, a new streaming data-flow model that supports eviction and reconstruction of data-flow state on demand. By avoiding state explosion and supporting live changes to the data-flow graph, this model makes data-flow viable for building long-lived, low-latency applications, such as web applications. Our implementation, Noria, simplifies the backend infrastructure for read-heavy web applications while improving their performance.

A Noria application supplies a relational schema and a set of parameterized queries, which Noria compiles into a data-flow program that pre-computes results for reads and incrementally applies writes. Noria makes it easy to write high-performance applications without manual performance tuning or complex-to-maintain caching layers. Partial statefulness helps Noria limit its in-memory state without prior data-flow systems’ restriction to windowed state, and helps Noria adapt its data-flow to schema and query changes while on-line. Unlike prior data-flow systems, Noria also shares state and computation across related queries, eliminating duplicate work.

On a real web application’s queries, our prototype scales to 5× higher load than a hand-optimized MySQL baseline. Noria also outperforms a typical MySQL/memcached stack and the materialized views of a commercial database. It scales to tens of millions of reads and millions of writes per second over multiple servers, outperforming a state-of-the-art streaming data-flow system.

Implementing a genetic algorithm in JavaScript and Rust.

Hi there! This is the prelude to a series of posts detailing how to build a language VM. If you are familiar with terms like registers, program counter, and assembly, feel free to skip this post. If not, read on. Please note this is nowhere near comprehensive, but it is enough to understand what we’re building.

There is no general “best” way to minimize a function; different kinds of functions require different strategies. However, Python’s scipy and R’s optim both prominently feature an algorithm called BFGS. I’ll explain what BFGS stands for, the problem that it solves, and how it solves it.

Rust is an emerging systems programming language with guaranteed memory safety and modern language features that has been extensively adopted to build safety-critical software. However, there is currently a lack of automated software verifiers for Rust. In this work, we present our experience extending the SMACK verifier to enable its usage on Rust programs. We evaluate SMACK on a set of Rust programs to demonstrate a wide spectrum of language features it supports.

A Brief Course in Computer Hardware. This is the prelude to a series of posts detailing how to build a language VM. If you are familiar with terms like registers, program counter, and assembly, feel free to skip this post. If not, read on.

This weekend, I implemented logistic regression in Rust. For me, the most interesting parts were learning how to implement a stopping condition and how to automatically set a step size.
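
A minimal sketch of those two ingredients (mine, with a fixed step size; the automatic step size via line search is the part the post adds):

```rust
fn sigmoid(z: f64) -> f64 {
    1.0 / (1.0 + (-z).exp())
}

// Batch gradient descent for logistic regression.
// Stopping condition: quit once the gradient norm drops below `tol`.
fn fit(xs: &[Vec<f64>], ys: &[f64], step: f64, tol: f64) -> Vec<f64> {
    let dim = xs[0].len();
    let mut w = vec![0.0; dim];
    loop {
        // gradient of the negative log-likelihood: sum over examples of (p - y) * x
        let mut grad = vec![0.0; dim];
        for (x, &y) in xs.iter().zip(ys) {
            let p = sigmoid(x.iter().zip(&w).map(|(xi, wi)| xi * wi).sum::<f64>());
            for d in 0..dim {
                grad[d] += (p - y) * x[d];
            }
        }
        if grad.iter().map(|g| g * g).sum::<f64>().sqrt() < tol {
            return w; // gradient is (nearly) zero: stop
        }
        for d in 0..dim {
            w[d] -= step * grad[d];
        }
    }
}

fn main() {
    // one feature plus a bias column; the labels overlap so the optimum is finite
    let xs = vec![vec![-2.0, 1.0], vec![-1.0, 1.0], vec![1.0, 1.0], vec![2.0, 1.0]];
    let ys = vec![0.0, 1.0, 0.0, 1.0];
    let w = fit(&xs, &ys, 0.1, 1e-6);
    assert!(w[0] > 0.0); // the feature is positively associated with the label
}
```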

This article discusses a possible genetic algorithm implementation in Rust applied to the travelling salesman problem.

This paper is about the interface between languages which use a garbage collector and those which use fancy types for safe manual memory management. Garbage collection is the traditional memory management scheme for functional languages, whereas type systems are now used for memory safety in imperative languages. We use existing techniques for linear capabilities to provide safe access to copyable references, but the application to languages with a tracing garbage collector, and to data with explicit lifetimes is new. This work is related to mixed linear/non-linear programming, but the languages being mixed are Rust and JavaScript.

Poor usability of cryptographic APIs is a severe source of vulnerabilities. Aim: We wanted to find out what kind of cryptographic libraries are present in Rust and how usable they are. Method: We explored Rust's cryptographic libraries through a systematic search, conducted an exploratory study on the major libraries and a controlled experiment on two of these libraries with 28 student participants. Results: Only half of the major libraries explicitly focus on usability and misuse resistance, which is reflected in their current APIs. We found that participants were more successful using rust-crypto which we considered less usable than ring before the experiment. Conclusion: We discuss API design insights and make recommendations for the design of crypto libraries in Rust regarding the detail and structure of the documentation, higher-level APIs as wrappers for the existing low-level libraries, and selected, good-quality example code to improve the emerging cryptographic libraries of Rust.

In this article, I want to explain the term Generic Associated Types through a concrete example. I noticed that people (especially in video game development) need some tools to iterate in various manners mutably, efficiently and safely. I tried to write some convenient iterators over vectors and slices that solve those problems, but finally I understood that some tools cannot be written with std::iter::Iterator. Doing so led me to an understanding of generic associated types, which I will abbreviate as GATs in this article. I will explain here what GATs are and why they are needed.
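
The canonical example is a "lending iterator", whose items borrow from the iterator itself; with a GAT the item type can name a fresh lifetime for each call to next (a sketch, using syntax stabilized in Rust 1.65):

```rust
// `Item` is generic over a lifetime: each call to `next` can lend a borrow
// tied to that particular call, something std::iter::Iterator cannot express.
trait LendingIterator {
    type Item<'a>
    where
        Self: 'a;
    fn next(&mut self) -> Option<Self::Item<'_>>;
}

// A lender that hands out mutable references into a slice, one at a time.
struct IterMutOneByOne<'s, T> {
    slice: &'s mut [T],
    pos: usize,
}

impl<'s, T> LendingIterator for IterMutOneByOne<'s, T> {
    type Item<'a> = &'a mut T where Self: 'a;

    fn next(&mut self) -> Option<Self::Item<'_>> {
        let item = self.slice.get_mut(self.pos)?;
        self.pos += 1;
        Some(item)
    }
}

fn main() {
    let mut data = [1, 2, 3];
    let mut it = IterMutOneByOne { slice: &mut data, pos: 0 };
    while let Some(x) = it.next() {
        *x *= 10; // each borrow ends before the next one starts
    }
    assert_eq!(data, [10, 20, 30]);
}
```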

Rust represents a major advancement in production programming languages because of its success in bridging the gap between high-level application programming and low-level systems programming. At the heart of its design lies a novel approach to ownership that remains highly programmable.

In this talk, we will describe our ongoing work on designing a formal semantics for Rust that captures how programmers can understand ownership and borrowing without trying to grasp the details of lifetime analysis.

During the last couple of years, I’ve spent a lot of time writing parsers and parser generators, and I want to write down my thoughts about this topic. Specifically, I want to describe some properties of a parser generator that I would enjoy using. Note that this is not an “introduction to parsing” blog post, some prior knowledge is assumed.

Type-inferred gradually typed languages are a joy to use: easy to write, analyze, and refactor. In this blog post, I will showcase the obscure art of programming language type inference to the other dozen programmers who are interested in it.

Consensus is one of the most important challenges in designing and building distributed systems–how to make sure multiple nodes (or agents, processes, threads, tasks, participants, etc.) in a group agree on a specific value proposed by at least one of the nodes. As an open-source distributed scalable HTAP database, TiDB uses the Raft Consensus Algorithm in its distributed transactional key-value storage engine, TiKV, to ensure data consistency, auto-failover, and fault tolerance.

Search indices are fundamental building blocks of many systems, and there is great interest in running them on encrypted data. Unfortunately, many known schemes that enable search queries on encrypted data achieve efficiency at the expense of security, as they reveal access patterns to the encrypted data. In this paper we present Oblix, a search index for encrypted data that is oblivious (provably hides access patterns), is dynamic (supports inserts and deletes), and has good efficiency. Oblix relies on a combination of novel oblivious-access techniques and recent hardware enclave platforms (e.g., Intel SGX). In particular, a key technical contribution is the design and implementation of doubly-oblivious data structures, in which the client’s accesses to its internal memory are oblivious, in addition to accesses to its external memory at the server. These algorithms are motivated by hardware enclaves like SGX, which leak access patterns to both internal and external memory. We demonstrate the usefulness of Oblix in several applications: private contact discovery for Signal, private retrieval of public keys for Key Transparency, and searchable encryption that hides access patterns and result sizes.

I just finished my Master's Thesis and part of it was writing a Python compiler in Rust. The goal of the thesis was to evaluate language features of Python that were hypothesized to cause performance issues. Quantifying the cost of these features could be valuable to language designers moving forward. Some interesting results were observed when implementing compiler optimizations for Python. An average speedup of 51% was achieved across a number of benchmarks.

In Rust, objects are either owned or borrowed. While there can be multiple borrows of the same object, each object has a unique owner. When objects are passed by value, ownership is transferred to the new scope, and the old scope can no longer access it. This makes it possible to implement “consuming” methods which take self by value, and therefore can only be called once. In the party and dealer types, such methods consume the previous state and return the next state in the protocol.
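
A small illustration of the pattern (hypothetical types, echoing the party/dealer example):

```rust
struct Dealer {
    secret: u64,
}

struct Commitment {
    digest: u64,
}

impl Dealer {
    // `self` is taken by value, so calling this consumes the Dealer:
    // the old state is gone and the transition can happen at most once.
    fn commit(self) -> Commitment {
        Commitment { digest: self.secret.wrapping_mul(0x9e3779b97f4a7c15) }
    }
}

fn main() {
    let dealer = Dealer { secret: 42 };
    let commitment = dealer.commit();
    // dealer.commit(); // compile error: `dealer` was moved by the first call
    let _ = commitment.digest;
}
```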

Rust is a new and promising high-level system programming language. It provides both memory safety and thread safety through its novel mechanisms such as ownership, moves and borrows. The ownership system ensures that at any point there is only one owner of any given resource. The ownership of a resource can be moved or borrowed according to the lifetimes. The ownership system establishes a clear lifetime for each value and hence does not necessarily need garbage collection. These novel features give Rust high performance, the fine low-level control of C and C++, and freedom from garbage collection, which differentiate Rust from other prevalent languages. For formal analysis of Rust programs, and to help programmers learn its new mechanisms and features, a formal semantics of Rust is desired and useful as a foundation for developing related tools. In this paper, we present a formal executable operational semantics of a realistic subset of Rust, called KRust. The semantics is defined in K, a rewriting-based executable semantic framework for programming languages. The executable semantics yields automatically a formal interpreter and verification tools for Rust programs. KRust has been thoroughly validated by testing with hundreds of tests, including the official Rust test suite.

A software library for high performance and hardware accelerated simulation of Quantum Computers and Algorithms. Written with Rust and OpenCL.

In my last post, I talked about the new “pinned references” which guarantee that the data at the memory it points to will not, ever, be moved elsewhere. I explained how they enable giving a safe API to code that could previously only be exposed with unsafe, and how one could go about proving such a thing. This post is about another application of pinned references—another API whose safety relies on the pinning guarantees: Intrusive collections. It turns out that pinned references can almost be used for this, but not quite. However, this can be fixed by extending the guarantees provided by pinned references, as suggested by @cramertj.

I’ve been exploring various ways to write parsers. For a long time, I’ve used hand-written recursive descent for its straightforwardness, flexibility, and performance. There is another way—parser generators like Menhir, LALRPOP, or the venerable Bison use the bottom-up LR algorithm. I decided I would try an experiment: write an LR parser by hand, and see how readable I could make it.

This library implements several of the more commonly useful immutable data structures for Rust. They rely on structural sharing to keep most operations fast without needing to mutate the underlying data store, leading to more predictable code without necessarily sacrificing performance.
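
A usage sketch (assuming the `im` crate this description accompanies):

```rust
use im::Vector; // a persistent vector with cheap clones

fn main() {
    let v1: Vector<i32> = (1..=5).collect();

    // `clone` is cheap: the two vectors share structure internally.
    let mut v2 = v1.clone();
    v2.push_back(6);

    // The original is untouched; only the changed path was copied.
    assert_eq!(v1.len(), 5);
    assert_eq!(v2.len(), 6);
}
```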

Specialization holds the dubious honor of being among the oldest post-1.0 features remaining in unstable limbo. That’s for good reason, though: until recently, we did not know how to make it sound.

I’ve been spending some time thinking about garbage collection in rust. I know, shame on me, it’s a systems language, we hate garbage collection, but… even in a systems programming language, garbage collection is still pretty damn useful.

Rust does not have dependent types, or GADTs like Haskell, but with a few tricks, we can use Rust's type system to emulate an Idris-like number system.
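
A sketch of the basic trick: Peano numerals as zero-sized types, with arithmetic performed by trait resolution at compile time.

```rust
use std::marker::PhantomData;

// Natural numbers at the type level: Zero, Succ<Zero>, Succ<Succ<Zero>>, ...
struct Zero;
struct Succ<N>(PhantomData<N>);

// Recover a runtime value from a type-level number.
trait ToUsize {
    const VALUE: usize;
}
impl ToUsize for Zero {
    const VALUE: usize = 0;
}
impl<N: ToUsize> ToUsize for Succ<N> {
    const VALUE: usize = 1 + N::VALUE;
}

// Addition as a trait: the compiler computes Sum during type checking.
trait Plus<B> {
    type Sum;
}
impl<B> Plus<B> for Zero {
    type Sum = B; // 0 + b = b
}
impl<A: Plus<B>, B> Plus<B> for Succ<A> {
    type Sum = Succ<A::Sum>; // (1 + a) + b = 1 + (a + b)
}

type Two = Succ<Succ<Zero>>;
type Three = Succ<Two>;

fn main() {
    assert_eq!(<<Two as Plus<Three>>::Sum as ToUsize>::VALUE, 5);
}
```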

I am developing a library for stencil calculation in Rust.

In the previous post, pHash helped us to summarize our photo album. Now it’s time to employ BK-trees and efficiently search through the metric space of perceptual hashes. Let’s roll up our sleeves; more Rust awaits!
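
A compact sketch of a BK-tree keyed by Hamming distance over 64-bit perceptual hashes (the triangle inequality is what lets search skip whole subtrees):

```rust
use std::collections::HashMap;

fn hamming(a: u64, b: u64) -> u32 {
    (a ^ b).count_ones()
}

// BK-tree: every child hangs off its parent under the edge labelled with
// their mutual distance. Works for any metric; here, Hamming distance.
struct BkTree {
    hash: u64,
    children: HashMap<u32, BkTree>,
}

impl BkTree {
    fn new(hash: u64) -> Self {
        BkTree { hash, children: HashMap::new() }
    }

    fn insert(&mut self, hash: u64) {
        let d = hamming(self.hash, hash);
        if d == 0 {
            return; // already present
        }
        match self.children.get_mut(&d) {
            Some(child) => child.insert(hash),
            None => {
                self.children.insert(d, BkTree::new(hash));
            }
        }
    }

    // Collect all hashes within `radius` of `query`. The triangle inequality
    // lets us skip every subtree whose edge label lies outside
    // [d - radius, d + radius].
    fn search(&self, query: u64, radius: u32, out: &mut Vec<u64>) {
        let d = hamming(self.hash, query);
        if d <= radius {
            out.push(self.hash);
        }
        for (&edge, child) in &self.children {
            if edge + radius >= d && edge <= d + radius {
                child.search(query, radius, out);
            }
        }
    }
}

fn main() {
    let mut tree = BkTree::new(0b0000);
    for h in [0b0001, 0b0011, 0b1111] {
        tree.insert(h);
    }
    let mut near = Vec::new();
    tree.search(0b0001, 1, &mut near);
    near.sort();
    assert_eq!(near, vec![0b0000, 0b0001, 0b0011]);
}
```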

Rust is a modern programming language which is marketed primarily on the basis of its very nice type system, and I’d like to tell you about how you can use this type system to reason about your programs in interesting ways. Most of the time when its type system is discussed, the focus is on its guarantee of data race freedom and ability to enable so-called fearless concurrency (and rightfully so—this is a place where Rust truly shines!). Today, I have a different focus in mind, characterized perhaps most succinctly as follows:

Combine is a parser combinator library for the Rust programming language. I first announced version 3 of Combine back in August, and back then I definitely expected to have a stable version by now. However, other projects (cough gluon cough) got in the way and Combine fell by the wayside. It didn’t help that I didn’t have a killer feature for 3.0 either; user-defined error types make it possible to define parsers usable in #[no_std] crates, which is great when you need it, but it is still a fairly niche use-case.


To demonstrate the value of Rust's memory safety rules, I contrast the implementation of a simple vector library in C and Rust, highlighting where and how Rust's static analysis can prevent tricky memory errors.
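
The flavour of the contrast, sketched from the Rust side (the C analogue, a pointer into a buffer that realloc may silently move, compiles without complaint):

```rust
fn main() {
    let mut v = vec![1, 2, 3];

    let first = &v[0]; // borrow a reference into v's heap buffer
    println!("{first}");

    // push may grow and therefore reallocate the buffer, which would leave
    // `first` dangling, so the borrow checker insists the borrow has ended:
    v.push(4);

    // println!("{first}"); // uncommenting this is error E0502:
    // cannot borrow `v` as mutable because it is also borrowed as immutable
}
```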