Read Rust

Language

General posts about the Rust language.

Posts

Async/await syntax in Rust was initially released to much fanfare and excitement. Recently, the reception has been a bit more mixed. To some extent this is just the natural progression of the hype cycle, but I also think as we have become more distant from the original design process, some of the context has been lost.

async

It’s been nearly two and a half years since I was an active contributor to the Rust project. There are some releases that I’ve been very excited about since then, and I’m heartened by Niko’s recent blog post emphasizing stability and polish over grand new projects. But I’ve also felt a certain apprehension at a lot of the directions the project has taken, which has often occupied my thoughts. From that preoccupation this blog post has emerged, hopefully the first in a series over the next few weeks outlining my thoughts on the design of Rust in 2023, especially in connection to async, and I hope its impact will be chiefly positive.

About 9 months ago we announced the creation of the Keyword Generics Initiative: a group working under the lang team with the intent to solve the function coloring problem through the type system, not just for async, but for const and all current and future function modifier keywords as well.

We're happy to share that we've made a lot of progress over these last several months, and we're finally ready to start putting some of our designs forward through RFCs. Because it's been a while since our last update, and because we're excited to share what we've been working on, in this post we'll be going over some of the things we're planning to propose.

In 2019, “GUI” was the 6th most highly requested feature that was preventing adoption of Rust. This is fundamentally a limitation in Rust: the design of the language itself makes modelling common approaches to building UI difficult.

In this post, I’ll discuss why Rust’s unique memory management model and lack of inheritance makes traditional techniques to build a UI framework difficult and a few of the ways we’ve been working around it. I believe one of these approaches, or some combination of them, will ultimately lead to a stable cross-platform UI toolkit for high-performance UI rendering that everyone can use.

gui

In this blog post, I will show you how easily you can build and use small Rust programs inside Elixir using Rustler.

elixir

Do you think the dynamically typed, interpreted Ruby language and the statically typed, compiled Rust language could be friends? Yes, they can! And actually, they are!

Officially it all started when YJIT was ported to Rust and the Ruby codebase officially onboarded Rust code. This friendship matured when RubyGems 3.3.11 (with a new "Add cargo builder for rust extensions" feature) was released, capable of compiling Rust-based extensions during the gem installation process (similar to well-known C-based gem extensions like nokogiri, pg or puma).

And now, with Bundler 2.4, the bundle gem skeleton generator can provide all the glue you need to start using Rust inside your gems thanks to the new --ext=rust parameter!

ruby

The core team used to put out a yearly call for blog posts. My colleague Nick published their "Rust in 2023" post last week, and encouraged others to do the same. I like the idea of taking a moment to reflect on larger topics, and so well, why not write a post!

December 21 is the anniversary of when I first heard about Rust way back in 2012. I used to write yearly posts about it; the last time I did was in 2018. That makes today ten years. I thought I’d have something big to say here, but… I just don’t.

What if I told you there was a way that we could ship one binary from Rust, have that work on every platform Go supports, and not have to modify the build process beyond a simple go build? Imagine how much easier that would be. It's easy to imagine that such a thing would let users not even know that Rust was involved at all, even if they consume a package or program that uses it.

I've done this with a package I call mastosan and here's why it exists as well as how I made it.

go wasm

As of Rust 1.65, which is set to release on November 3rd, generic associated types (GATs) will be stable — over six and a half years after the original RFC was opened. This is truly a monumental achievement; however, as with a few of the other monumental features of Rust, like async or const generics, there are limitations in the initial stabilization that we plan to remove in the future.

The goal of this post is not to teach about GATs, but rather to briefly introduce them to any readers that might not know what they are and to enumerate a few of the limitations in initial stabilization that users are most likely to run into.
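
To give a flavor of what a GAT looks like, here is a minimal sketch (my own illustration, not code from the post): an associated type that is generic over a lifetime.

trait Container {
    // A generic associated type: `Iter` is parameterized by a lifetime.
    type Iter<'a>: Iterator<Item = &'a i32> where Self: 'a;

    fn iter<'a>(&'a self) -> Self::Iter<'a>;
}

struct VecContainer(Vec<i32>);

impl Container for VecContainer {
    type Iter<'a> = std::slice::Iter<'a, i32> where Self: 'a;

    fn iter<'a>(&'a self) -> Self::Iter<'a> {
        self.0.iter()
    }
}

fn main() {
    let c = VecContainer(vec![1, 2, 3]);
    let sum: i32 = c.iter().sum();
    assert_eq!(sum, 6);
}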

gats

There has been a ton of progress since the last progress report. There have been 303 commits since then. @afonso360 has been contributing a ton to improve Windows and AArch64 support. (Thanks a lot for that!)

rustc-codegen-cranelift

Philip Herron and Arthur Cohen presented an update on the "gccrs" GCC front end for the Rust language at the 2022 Kangrejos conference. Less than two weeks later — and joined by David Faust — they did it again at the 2022 GNU Tools Cauldron. This time, though, they were talking to GCC developers and refocused their presentation accordingly; the result was an interesting look into the challenges of implementing a compiler for Rust.

gccrs

One of the biggest such features – perhaps the biggest one – is RAII, C++’s and now Rust’s (somewhat oddly-named) scope-based feature for resource management. And while RAII is for managing all kinds of resources, its biggest use case is as part of a compile-time alternative to run-time garbage collection and reference counting.

I will start by talking about the problem that RAII was originally designed to solve. Then, I will re-hash the basics of how RAII works, and work through memory usage patterns where RAII needs to be combined with these other features, especially the borrow checker. Finally, I will discuss the downsides of these memory management techniques, especially performance implications and handling of cyclic data structures.
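
As a quick refresher before that discussion, here is RAII in Rust in miniature (my own example, not code from the post): the cleanup lives in Drop and runs automatically when the owner goes out of scope.

struct TempFile {
    name: String,
}

impl Drop for TempFile {
    // Runs automatically when a TempFile goes out of scope.
    fn drop(&mut self) {
        println!("cleaning up {}", self.name);
    }
}

fn main() {
    let _f = TempFile { name: String::from("scratch.dat") };
    println!("doing work");
    // `_f` is dropped here, at the end of its scope, releasing the resource
    // without any explicit call.
}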

raii

This post is a case study of writing a Rust application using only a minimal, artificially constrained API (e.g., no dynamic memory allocation). It assumes a fair bit of familiarity with the language.

A couple of weeks ago, we released an update to rsevents, our crate that contains a rusty cross-platform equivalent to WIN32 events for signaling between threads and writing your own synchronization primitives, and rsevents-extra, a companion crate that provides a few handy synchronization types built on top of the manual- and auto-reset events from the rsevents crate. Aside from the usual awesome helpings of performance improvements, ergonomics enhancements, and more, this latest version of rsevents-extra includes a Semaphore synchronization primitive – something that the Rust standard library surprisingly lacks… but not without good reason.

Did you know that the standard library is full of useful checks that users never get to see? There are plenty of debug assertions in the standard library that will do things like check that char::from_u32_unchecked is called on a valid char, that CStr::from_bytes_with_nul_unchecked does not have internal nul bytes, or that pointer functions such as copy or copy_nonoverlapping are called on suitably aligned non-null (and non-overlapping) pointers. However, the regular standard library that is distributed by rustup is compiled without debug assertions, so there is no easy way for users to benefit from all this extra checking.

In this post I want to show some of the ergonomics improvements IntoFuture might enable, inspired by Swift's recent improvements in async/await ergonomics.

Since Rust 1.54, we can now use function-like macros in attributes. This has a lot of advantages for the #[doc] attribute; let's check some of them!
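
One concrete example of what this enables (a common use, though not necessarily one the post covers): pulling crate-level documentation straight from a file.

// At the crate root: since Rust 1.54 the #[doc] attribute accepts macro
// calls, so the README can double as the crate documentation.
#![doc = include_str!("../README.md")]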

rustdoc

So, what does that mean? Well, all it really means is that when you use this feature on nightly, you'll no longer get the "generic_associated_types is incomplete" warning. However, the real reason this is a big deal: we want to stabilize this feature. But we need your help. We need you to test this feature, to file issues for any bugs you find or for potential diagnostic improvements. Also, we'd love for you to just tell us about some interesting patterns that GATs enable over on Zulip!

gats

We are happy to announce that the Rust 2021 edition is entering its public testing period. All of the planned features for the edition are now available on nightly builds along with migrations that should move your code from Rust 2018 to Rust 2021. If you'd like to learn more about the changes that are part of Rust 2021, check out the nightly version of the Edition Guide.

edition

The Async Foundations Working Group believes Rust can become one of the most popular choices for building distributed systems, ranging from embedded devices to foundational cloud services. Whatever they're using it for, we want all developers to love using Async Rust. For that to happen, we need to move Async Rust beyond the "MVP" state it's in today and make it accessible to everyone.

We are launching a collaborative effort to build a shared vision document for Async Rust. Our goal is to engage the entire community in a collective act of the imagination: how can we make the end-to-end experience of using Async I/O not only a pragmatic choice, but a joyful one?

async

Ferrocene is an effort led by Ferrous Systems and its newly formed subsidiary Critical Section GmbH to qualify the Rust Language and Compiler for use in the safety-critical domain. This is the third post in a series detailing our plans and actions around this effort, addressing topics discussed in The Pitch and The Plan. Ferrocene's draft name was "Sealed Rust."

The requirements of our current charter, and the RFC creating the group, are effectively fulfilled by the specification of "C unwind", so one option is to simply wind down the project group. While drafting the "C unwind" RFC, however, we discovered that the existing guarantees around longjmp and similar functions could be improved. Although this is not strictly related to unwinding, the two are closely related: they are both "non-local" control-flow mechanisms that prevent functions from returning normally. Because one of the goals of the Rust project is for Rust to interoperate with existing C-like languages, and these control-flow mechanisms are widely used in practice, we believe that Rust must have some level of support for them.

To learn why this language is favored so much among developers, we have started a new series on Rust in production. In it, we’ll interview people who have used Rust for significant projects: apps, services, startup MVPs, and others.

For the first installment of the series, we interview Michael Fey, VP of Engineering at 1Password. Read further to find out why they chose Rust for their product, the benefits of Rust for security-centered applications, and what cool libraries you should look into if you’re developing something similar in Rust.

The Rust team is happy to announce a new version of Rust, 1.49.0. For this release, we have some new targets and an improvement to the test framework.

The Libs team is looking at how we can improve the std::sync module, by potentially splitting it up into new modules and making some changes to APIs along the way.
One of those API changes we're looking at is non-poisoning implementations of Mutex and RwLock.

To find the best path forward we're conducting a survey to get a clearer picture of how the standard locks are used out in the wild.

Rustup is now natively available for the new Apple M1 devices, allowing you to install it on the new Macs the same way you'd install it on other platforms!

Starting from this release of rustup (1.23.0) you can also install a minor version without specifying the patch version, like 1.48 or 1.45.

The Rust community takes its error handling seriously. There’s already a strong culture in place for emphasizing helpful error handling and reporting, with multiple libraries each offering their own take (see Jane Lusby’s thorough survey of Rust error handling/reporting libraries).

But there’s still room for improvement. The main focus of the group is carrying on error handling-related work that was in progress before the group's formation. To that end, we're working on systematically addressing error handling-related issues, as well as eliminating blockers that are holding up stalled RFCs.

error-handling

The Rust team is happy to announce a new version of Rust, 1.48.0. The star of this release is Rustdoc, with a few changes to make writing documentation even easier!

Steve Klabnik recently wrote about whether out parameters are idiomatic in Rust. The post ends by showing a snippet of code: a generic function, with a non-generic function inside of it which contains the actual implementation. Steve says this pattern may warrant its own post, so here is that post, where I’ll explain why this inner function is useful, discuss the trade-offs of doing it, and describe why this pattern will hopefully not be necessary in the future.
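
For readers who have not seen it, the pattern looks roughly like this (my illustration, not Steve's snippet or the post's code):

use std::path::Path;

// The generic outer function is monomorphized once per caller type, but it
// only converts the argument and delegates; the non-generic inner function
// holding the real work is compiled a single time.
fn file_stem_len<P: AsRef<Path>>(path: P) -> usize {
    fn inner(path: &Path) -> usize {
        path.file_stem().map_or(0, |stem| stem.len())
    }
    inner(path.as_ref())
}

fn main() {
    assert_eq!(file_stem_len("photo.png"), 5);
    assert_eq!(file_stem_len(String::from("notes.txt")), 5);
}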

Two years ago, I had a blog entry describing falling in love with Rust. Of course, a relationship with a technology is like any other relationship: as novelty and infatuation wears off, it can get on a longer term (and often more realistic and subdued) footing — or it can begin to fray. So one might well ask: how is Rust after the honeymoon?

This release contains no new language features, though it does add one long-awaited standard library feature. It is mostly quality of life improvements, library stabilizations and const-ifications, and toolchain improvements. See the detailed release notes to learn about other changes not covered by this post.

We're announcing the start of the Portable SIMD Project Group within the Libs team. This group is dedicated to making a portable SIMD API available to stable Rust users.

simd

In a recent lang team meeting we discussed the upcoming Stream RFC. We covered both the steps required to land it, and the steps we'd want to take after that. One of the things we'd want eventually for streams is: "async iteration syntax". Just like for x in y works for Iterator, we'd want something similar to work for Stream.

async

We will be closing the collection of blog posts on October 5th. As a reminder, we plan to close the survey on September 24th, later this week.

There’s an ongoing discussion about what makes Python a better prototyping language than Rust (with Python being probably just the archetype of some scripted, weakly typed language). The thing is, I prefer doing my prototypes in Rust over Python. Apparently, I’m not the only one. So I wanted to share a few things about what makes Rust viable for these kinds of throw-away coding sprints, at least for me.

The core team is beginning to think about the 2021 Roadmap, and we want to hear from the community. We’re going to be running two parallel efforts over the next several weeks: the 2020 Rust Survey, to be announced next week, and a call for blog posts.

The C2Rust project shows that most C code can be converted into something that is technically Rust. That is not to say that we’ve exhausted all opportunities for automation; far from it. What we can do today is make syntactic changes. The structure of the program is preserved, including the way memory is managed. Therefore, the Rust output is as unsafe as the input C. To take advantage of Rust’s type system, we must also make semantic changes so the code comports with Rust’s ownership-based model, which is how we enforce memory safety.

This post explores how the current output of the C2Rust translator can be further hardened. Most blog posts on porting C code to Rust rely on human intelligence. The goal here is to explore the smallest set of changes we can make to get to safe Rust without a human in the loop. The idea is that the simpler the changes, the greater the odds that we can automate them.

c2rust

We are working towards creating a modular, composable, and modern debugger library for Rust, and this month there has been a lot of exciting progress thanks to our supporters and contributors.

We have extended core functionality, implementing important debugger features such as stack unwinding (which is necessary for displaying backtraces), reading of local variables, and disassembly view.

debugger

This month brings some big community and core team updates, paving the way for the release of all Mun crates - including the compiler - to crates.io.

While looking at applications of recursion, I stumbled across a video for solving sudokus using Python by Prof. Thorsten Altenkirch from the University of Nottingham (UK). The elegant solution got me wondering what an idiomatic Rust implementation would look like. Here is what I came up with.

This release enables quite a lot of new things to appear in const fn, two new standard library APIs, and one feature useful for library authors.

This is going to be another one of those posts where I did something ridiculous and then show you how I got there, so let’s just get right to it. Yep, Rust code with embedded Objective-C syntax, and it works. Why would you do such a thing? Maybe you want tighter interop between the Rust and Objective-C parts of your iOS app. Maybe you want to write your iOS app entirely in Rust. Or maybe you just wanted to see if it was possible after your colleague’s offhand remark.

obj-c

Rust's include_bytes! macro lets you statically include the contents of a file into your executable's binary. The builtin is a quick-and-dirty solution for packaging data with your executable, and perhaps even helping the compiler optimize your code!

A few years ago, for coursework, I implemented a basic neural network for recognizing digits. Ever-obsessed with having the fastest implementation in the class, I statically included the learned network parameters. It was wrong, and I got lucky.

A Matrix<f64, U784, U10> has a minimum alignment of 8, but include_bytes! merely produces a reference to a byte array—with no care for alignment.

How can we ensure the included bytes are properly aligned?
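
One workaround I know of (a sketch assuming the data lives in a hypothetical params.bin; not necessarily the approach the post lands on) is to wrap the bytes in a #[repr(C)] struct whose first field forces the alignment:

#[repr(C)]
struct AlignedTo<Align, Bytes: ?Sized> {
    _align: [Align; 0], // zero-sized, but forces the struct's alignment
    bytes: Bytes,
}

// Using f64 as the aligning type gives the bytes the 8-byte alignment
// the matrix needs.
static PARAMS: &AlignedTo<f64, [u8]> = &AlignedTo {
    _align: [],
    bytes: *include_bytes!("params.bin"),
};

fn main() {
    let bytes: &[u8] = &PARAMS.bytes;
    assert_eq!(bytes.as_ptr() as usize % std::mem::align_of::<f64>(), 0);
}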

First there was cooperative multiprocessing. Then there were processes. An operating system could run multiple processes, each performing a series of sequential, blocking actions. Then came threads. A single process could spawn off multiple threads, each performing its own series of sequential, blocking actions. (And really, the story starts earlier, with hardware interrupts and the like, but hopefully you'll forgive a little simplification.)

Sitting around and waiting for stuff? Ain't nobody got time for that. Spawning threads at the operating system level? That's too costly for a lot of what we do these days.

Perhaps the first foray into asynchronous programming that really hit the mainstream was the Nginx web server, which boasted a huge increase in throughput by natively using asynchronous I/O system calls. Some programming languages, like Go, Erlang, and Haskell, built runtime systems that support spawning cheap green threads and handle the muck of asynchronous system calls for you under the surface. Other languages, such as JavaScript and more recently Rust, provide explicit asynchronous support in the language.

Given how everyone seems to be bending over backwards to make it easy for you to make your code async-friendly, it would be fair to assume that all code at all times should be async. And it would also be fair to guess that, if you stick the word async on a function, it's completely asynchronous. Unfortunately, neither of these assumptions is true. This post is intended to dive into this topic, in Rust, using a simple bit of code for motivation.

async

Maxime Chevalier-Boisvert requested resources for learning about fuzzing programming language implementations on Twitter:


> I’d like to learn about fuzzing, specifically fuzzing programming language
> implementations. Do you have reading materials you would recommend, blog
> posts, papers, books or even recorded talks?

Maxime received many replies linking to informative papers, blog posts, and lectures. John Regehr suggested writing a simple generative fuzzer for the programming language.

testing

This is a hand-wavy philosophical article about programming, without quantifiable justification, but with some actionable advice and a case study.

The purpose of this blog post is to celebrate the anniversary of two really neat methods on the Cell type:

Cell::from_mut: turns a &mut T into a &Cell<T>.
Cell::as_slice_of_cells: turns a &Cell<[T]> into a &[Cell<T>].

Both methods were released in version 1.37.0 of Rust, exactly one year ago from the date this post was published.
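
A small illustration of the two methods together (my example, not code from the post):

use std::cell::Cell;

fn main() {
    let mut data = [1, 2, 3];

    // Cell::from_mut: &mut [i32] -> &Cell<[i32]>
    let all: &Cell<[i32]> = Cell::from_mut(&mut data[..]);

    // Cell::as_slice_of_cells: &Cell<[i32]> -> &[Cell<i32>]
    let cells: &[Cell<i32>] = all.as_slice_of_cells();

    // Multiple shared references into the buffer can now mutate it safely.
    cells[0].set(10);
    cells[2].set(cells[0].get() + cells[1].get());

    assert_eq!(data, [10, 2, 12]);
}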

Let's talk about safe and unsafe in Rust.

ffi

Google has published a document on Rust and C++ interoperability in the context of Chromium. It describes their criteria for the experience of calling C++ from Rust — minimal use of unsafe, no boilerplate beyond existing C++ declarations, and broad support for existing Chromium types.

The response to this document has included a lot of confusion and misinformation. Really, this is a continuation of a larger ongoing discussion of how to use unsafe properly. Rust FFI and unsafe are complicated and often subtle topics, so here I am going to attempt to summarize the issues and suggest how we might address them.

ffi

Recently I was passing ownership of a Rust array to a C function from a Box<[T]> and found that I actually didn’t know whether my code would lead to a memory leak. This led me down a rabbit hole to understand how exactly slices work in Rust.

ffi

I've seen GNOME people (often, people who have been working for a long time on C libraries) express concerns along the following lines:

1. Compiled Rust code doesn't have a stable ABI (application binary interface).
2. So, we can't have shared libraries in the traditional fashion of Linux distributions.
3. Also Rust bundles its entire standard library with every binary it compiles, which makes Rust-built libraries huge.

These are extremely valid concerns to be addressed by people like myself who propose that chunks of infrastructural libraries should be done in Rust.

So, let's begin.

Most WebAssembly tutorials and examples you will find online focus on using it inside the browser in order to accelerate various functionality of a website or web app. However, there is an area where WebAssembly is really powerful but not talked about much: usage scenarios outside the browser. That is what we’ll focus on in this series of posts.

wasm

Chrome engineers are experimenting with Rust. For the foreseeable future, C++ is the reigning monarch in our codebase, and any use of Rust will need to fit in with C++ — not the other way around. This seems to present some C++/Rust interoperability challenges which nobody else has faced.

We'd need to solve these before considering Rust as (nearly) a first-class citizen in our codebase. If we can’t solve these, Rust would at best be isolated to “leaf nodes” which don’t interact much with the rest of our codebase. And if that’s all we can use Rust for, that calls into question whether the costs of an extra language are justified in the first place.

cplusplus google

The Rust project was originally conceived in 2010 (depending on how you count, you might even say 2006!) as a Mozilla Research project, but the long term goal has always been to establish Rust as a self-sustaining project. In 2015, with the launch of Rust 1.0, Rust established its project direction and governance independent of the Mozilla organization. Since then, Rust has been operating as an autonomous organization, with Mozilla being a prominent and consistent financial and legal sponsor.

Mozilla was, and continues to be, excited by the opportunity for the Rust language to be widely used, and supported, by many companies throughout the industry. Today, many companies, both large and small, are using Rust in more diverse and more significant ways, from Amazon’s Firecracker, to Fastly’s Lucet, to critical services that power Discord, Cloudflare, Figma, 1Password, and many, many more.

On Tuesday, August 11th 2020, Mozilla announced their decision to restructure the company and to lay off around 250 people, including folks who are active members of the Rust project and the Rust community. Understandably, these layoffs have generated a lot of uncertainty and confusion about the impact on the Rust project itself. Our goal in this post is to address those concerns. We’ve also got a big announcement to make, so read on!

I want to outline why Rust's unsafe keyword works, while similar measures in C/C++ don't.

Learning Rust is... an experience. An emotional journey. I've rarely been more frustrated than in my first few months of trying to learn Rust.

What makes it worse is that it doesn't matter how much prior experience you have, in Java, C#, C or C++ or otherwise - it'll still be unnerving.

In fact, more experience probably makes it worse! The habits have settled in deeper, and there's a certain expectation that, by now, you should be able to get that done in a shorter amount of time.

Maybe, after years of successfully shipping code, you don't have quite the same curiosity, the same candor and willingness to feel "lost" that you did back when you started.

Learning Rust makes you feel like a beginner again - why is this so hard? This doesn't feel like it should be that hard. I've done similar things before. I know what I want. Now I just need to... make it happen.

I'm going to keep including introductions like these in all my beginner-level articles, because they're very important: if you're picking up Rust, expect roadblocks.

Rust and Go have similar ways of dealing with UTF-8 encoded text. Rust gives you the .chars() method on strings, which returns a sequence of chars (no surprise). Go on the other hand gives you []rune(str), which returns a slice of runes. What’s the difference between these two things?

The answer is that a char is a Unicode Scalar Value, whereas a rune is a Unicode Code Point. That is… not very helpful. What’s the difference between those things?

A crappy but correct answer to this question is “a unicode scalar value is any unicode code point except high surrogate and low surrogate code points”. Ugh. You need a fair bit of context to understand this, so I will do my best to explain it from the beginning.
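
To make the Rust side concrete, here is a small example of my own:

fn main() {
    // Every Rust char is a Unicode scalar value, so surrogate code points
    // (U+D800..=U+DFFF) can never become a char.
    assert!(char::from_u32(0xD800).is_none());
    assert!(char::from_u32(0x1F980).is_some()); // the crab emoji is fine

    // .chars() yields scalar values, regardless of how many bytes each one
    // takes in the UTF-8 encoding.
    let s = "a🦀";
    assert_eq!(s.chars().count(), 2); // two scalar values
    assert_eq!(s.len(), 5);           // five UTF-8 bytes
}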

To celebrate the five years of the Rust programming language, this blog post is the second of a series where I explain why I think Rust will be the programming language for the next decade(s), and why you should learn and use it too! In the first blog post, we have seen why Rust is truly cross-platform. In this post, we’ll explore the range of application domains where Rust can be used to build software. Or rather, we’ll only scratch the tip of the iceberg, given that Rust is growing in so many domains!

Whether Rust is a full-stack programming language is still a hot debate, my opinion is that it is, and I hope that this blog post will convince you as well!

After that, I’ll conclude this series of posts with some perspectives on the Rust ecosystem, and Rust’s upcoming role in the job market.

This is a short note on the builder pattern, or, rather, on the builder method pattern.

Software testing is an industry-standard practice, but testing methodologies and techniques vary dramatically in their practicality and effectiveness.

Today you’ll learn about property-based testing (PBT), including how it works, when it makes sense, and how to do it in Rust.

testing property-testing

Last month, the Rust programming language celebrated its 5th anniversary. I’ve joined this journey roughly halfway through, and since then I had the opportunity to work on various Rust projects, but also to witness many improvements in the language, as well as in Rust libraries. Given what’s already possible to do in Rust today and this trajectory of improvements, I think that Rust will continue to grow into a cross-platform and full-stack programming language of choice for the next decade (at least).

Therefore, to celebrate these five years, I’ve decided to write a series of blog posts explaining why I think Rust is a language of choice, and where to find relevant resources to use Rust across platforms and along the software stack. In this first post of the series, after a brief summary of what Rust is and why it’s relevant, I’ll go through the various platforms where Rust can already be used. And Rust already supports so many platforms that I’m sure I’ll forget some!

Here is a simple .NET profiler implemented in Rust that prints the name of a function just before it is JIT compiled. The client library must create a type that implements all of the CorProfilerCallback traits. All of the methods in the callback traits have default implementations, so you only need to implement the methods that you actually want to use. COM boilerplate is set up for you behind the scenes with the register! macro call at the end of the code snippet.

csharp

This post is the first in a series on Seq’s storage engine. It’s a technical dive meant to share some of the more interesting aspects of its design and implementation.

Seq is a log server with its own embedded storage engine for log events written (mostly) in Rust. We call this storage engine Flare. We’ve written a bit about our experiences with Rust in the past, but we’ve been wanting to dive into the design of Flare itself since we first shipped it in August of 2018.

To get a sense of what Flare is all about we’ll first introduce its broad design and then explore some of its components. We’ll do this by tracing an event’s ingestion through Seq’s HTTP API to disk, then back from disk via a query, and finally the deletion of the event entirely. This post is a long and tangential tour of the design and implementation of the database, providing context for deeper dives into specific components in future posts.

I show how to use heterogeneous lists and traits to implement a type-safe printf in Rust. These mechanisms can ensure that two variadic argument lists share important properties, like the number of format string holes matches the number of printf arguments.

The Rust compiler ships with a number of useful lints on by default, and many use Clippy to provide additional lints. It’s less well known that the Rust compiler ships with some useful lints which are set to allow by default, meaning they don’t generate warnings. In this post, I’ll explain what each of these lints does, and why it might be useful.
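
If you want to experiment before reading on, allow-by-default lints can be promoted to warnings at the crate root; for example (these are real rustc lint names, though which ones the post discusses is its own choice):

// At the crate root: opt in to a few lints that are allow-by-default.
#![warn(missing_docs, missing_debug_implementations, unreachable_pub)]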

CQRS/ES have been trending topics for some time now and there are some pretty nice libraries out there to build applications following these patterns. But for a while I found myself missing some of those tools for building Rust applications, so I decided to try to fill this gap by implementing them myself.

In this blog series I will cover my way of building an open-source framework called Chekov. I will try to describe how I design and implement the whole thing. I will start from scratch, I will fail sometimes, but I will learn a lot and so will you. I will try to cover as much of my reasoning as I can; feel free to ask for more details on Twitter or at the end of each blog post.

event-sourcing cqrs

In yesterday's post I raised the issue that Rust and C++ promise zero-cost abstractions, but with the standard build configurations people use today, you have to choose between "zero-cost abstractions" and "fast and debuggable builds". Reflecting on this a bit more, I realized that the DWARF debuginfo format used with ELF binaries inherently conflicts with the goal of having both zero-cost abstractions and fast debuggable builds, because of the way it handles inline functions (and possibly for other reasons).

Rust has always been the programming language that reminds me the most of my game hacking days, and for good reasons. Rust is a natural fit for embedded systems like video game consoles – or rather emulators thereof. The compiler supports a high number of platforms and lets you drop down to C or assembly if necessary. Plus, the language lends itself well to implementing (reverse-engineered) cryptographic algorithms and other low-level shenanigans.

Given these benefits, it’s no surprise that I keep coming back to Rust. This time, I decided to revisit a pull request from five years ago, which has been lingering in my mind ever since. The ambitious goal of the pull request is to port cb2util, one of my old crypto tools for PlayStation 2, from C to pure Rust.

I used to be afraid of async Rust. It's easy to get into trouble!

But thanks to the work done by the whole community, async Rust is getting easier to use every week. One project I think is doing particularly great work in this area is async-std.

async

Many modern languages have collections called "array," "slice," or "vector." Rust has all three, plus many third-party libraries! This is an opinionated guide that tries to help you choose the best way to store contiguous data in Rust. It covers tools from Rust's core and standard library as well as third-party crates that meet more specific needs.

Contiguous data is when multiple pieces of data are stored next to each other in memory. It is often a great way to store collections, because it can provide great cache locality and branch prediction.
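
As a quick orientation before the guide proper (my own snippet), here are the three standard-library options side by side:

fn main() {
    // Array: fixed length known at compile time, no allocation required.
    let array: [u32; 3] = [1, 2, 3];

    // Vec: growable, heap-allocated, owns its contents.
    let mut vec: Vec<u32> = vec![4, 5, 6];
    vec.push(7);

    // Slice: a borrowed view into any contiguous storage.
    let view: &[u32] = &vec[1..];
    assert_eq!(view, &[5, 6, 7]);

    let total: u32 = array.iter().chain(vec.iter()).sum();
    assert_eq!(total, 28);
}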

I recently discovered for myself a very nice Rust refactoring (or pattern?) which produced a very significant simplification in some code I am working on.

I would like to share it with you. I’d say this is an intermediate-level article, since the Rust compiler won’t lead you towards this design, good as its error messages are.

One reason to like the Rust ecosystem is testing. No test runners need to be installed, no reading up on 10 variations of unit testing frameworks, no compatibility issues…

… or are there? Rust has recently started supporting full async/await abilities after years of development and it looks like there is a missing piece there: tests. How so?

async testing

Foreign Function Interfaces (FFI) are a core mechanism for enabling integration of new languages into existing codebases or building on existing libraries. That said, the term “FFI” is often overloaded in ways that may be unclear or ambiguous, and the area can seem overwhelming to approach. In this post, I explain the two “directions” of FFI, some patterns for how FFI in each direction is handled in Rust and further break down some FFI design approaches.
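
The two directions in miniature (my sketch, using libc's abs as the foreign function):

use std::os::raw::c_int;

// Direction 1: calling into C from Rust. `abs` lives in the C library that
// is already linked into most Rust programs.
extern "C" {
    fn abs(x: c_int) -> c_int;
}

// Direction 2: exposing a Rust function with a C-compatible ABI and an
// unmangled name so C code can call it.
#[no_mangle]
pub extern "C" fn rust_add(a: c_int, b: c_int) -> c_int {
    a + b
}

fn main() {
    // Crossing the boundary into C is unsafe: the compiler cannot check the
    // foreign side's contract.
    let x = unsafe { abs(-5) };
    assert_eq!(x, 5);
    assert_eq!(rust_add(2, 3), 5);
}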

ffi

While writing a Discord bot using serenity-rs I wanted to create a thread pool to properly process events coming from Discord. When I ran this code I expected everything to work properly - that I should see “Worker loop done”, “dropped worker pool thread” and “Good bye!” messages. Unfortunately, this is not the case.

I like to look at standard libraries of programming languages! It feels like being in a candy store: there are so many interesting functions and nice tools one can use! Some of them may feel useless at a particular moment, but may come in handy later to save on some typing, to avoid reinventing the wheel, or even to guide a design choice.

In Rust, standard library documentation is extremely readable and even has a section "How to read this documentation" for people like me. Let me quote from it: "If this is your first time, the documentation for the standard library is written to be casually perused. Clicking on interesting things should generally lead you to interesting places."

So, I started reading and here we go. The very first module in the documentation is: alloc.

I've been wanting to write a big project in Rust for a while as a learning exercise, and actually started one in late 2018 (a FUSE server implementation). But then life happened and I got busy and never went anywhere with it. Due to certain world circumstances I'm currently spending a lot of time indoors so rust-fuse (docs) now exists and is good enough to write basic hello-world filesystems. I plan to polish it up a bit more with the goal of releasing a v1.0 that supports the same use cases as libfuse.
I took some notes along the way about things that struck me as especially good or bad. Overall I quite like Rust the language, have mixed feelings about the quality of ancillary tooling, and have strong objections to some decisions made by the packaging system (Cargo + crates.io).

Canrun is a new logic programming library for Rust with static types and constraints.

In my initial post I mentioned going through a few fundamental revisions before settling on the current approach. Here I'll try to do a quick recap for two reasons: 1) I think it's neat, and 2) with luck I'll snag the attention of someone with deeper experience and a willingness to share some tips.

My first successful attempt was actually based on an article by Tom Stuart titled "Hello, declarative world", which described a way to implement μKanren in Ruby. I recommend reading that if you find yourself confused about any of the "why" I gloss over here.

We recently learned that the WebAssembly build in our system isn't deterministic any longer. This is a short summary of what we did to chase down the bug, in the hope that it helps others facing similar issues and gives some guidance on what to try or how this kind of thing works.

debugging wasm

You know that cool donut written in C that made a donut when run? You know, that one? I decided to have some fun and rewrote it all in Rust!

What I mean by query system here is a pattern for requesting and computing data in a program in an on-demand way. The data don’t need to be computed ahead of time at the start of the program, but only when they are actually needed. In this post, I will walk you through a solution to this approach I came up with in Rust, which happened to be a good fit in one of my projects. Even if it isn’t a fit for yours, you will eventually be able to implement your own, tailored to your needs.
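
The core of the idea can be sketched in a few lines (my illustration, certainly simpler than the design in the post): compute a value the first time it is requested and cache it for later.

use std::collections::HashMap;

struct Queries {
    cache: HashMap<String, usize>,
}

impl Queries {
    fn new() -> Self {
        Queries { cache: HashMap::new() }
    }

    // An on-demand "query": the length of some text, computed lazily and
    // memoized under a key.
    fn text_len(&mut self, key: &str, text: &str) -> usize {
        if let Some(&len) = self.cache.get(key) {
            return len; // already computed
        }
        let len = text.len(); // the (pretend) expensive computation
        self.cache.insert(key.to_string(), len);
        len
    }
}

fn main() {
    let mut q = Queries::new();
    assert_eq!(q.text_len("a.txt", "hello"), 5);
    assert_eq!(q.text_len("a.txt", "hello"), 5); // served from the cache
}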

We recently landed support for Windows in Krustlet. Although the final PR was relatively small, there was a lot of learning behind it. While adding this support, we came across an oddity that forced us to learn a whole bunch about stack vs heap allocation in Rust and figured it would be good to share with the world. As with most learning opportunities, this one starts with a story.

I had a whole lot of fun with the Typeracer post I made showing how the program itself had changed throughout its lifetime.

What I didn’t mention (although is fairly obvious if you look at the code) is that Typeracer was my first real exposure to Rust.

I went through a bit of the rust book and wrote a tiny compression tool that did a sort of run length encoding algorithm, but both of them were super simple and I really wanted something I could work on for longer to understand the language.

Working on the two I almost never interacted with the borrow checker. I had no idea what I was getting myself into. But there had to be a reason for uh, super vocal subreddits about rust, so I figured it was time to see what all the hype was about.

When I started developing in Rust 2.5 years ago, I was fascinated by how often my program would just work once it compiled. The Rust compiler is known for being a bit pedantic, which can be quite frustrating in the beginning. Over time, though, I actually learned to love this aspect. These days, I often find myself deliberately thinking about how I can use Rust’s type system in clever ways that allow me to catch or prevent bugs early on in the development cycle.

In this guide, I’ll share some of the knowledge I’ve built up on this topic and present some techniques to make integration testing almost redundant. Ultimately, the question is: Could “It compiles, let’s ship it!” actually be true for Rust?

LLVM is a collection of compiler toolchain technologies used to compile dynamic and static programming languages. It’s heavily used across compilers and languages such as Clang, Emscripten, and Rust!

This tutorial is going to go over how to create a simple compiler using LLVM for Brainf*ck in Rust. The completed code can be found here: brainfrick-rust

One of the things I’ve wanted to do for a while is really dig into assembly and get into the weeds of how programs actually run. A rework of the asm macro has recently landed in nightly rust so it seemed like a good time.

And compared to some other ways I’ve tried to approach this there’s a lot less setup we need to do if we just use the rust playground to do all the heavy lifting.

My process for figuring things out has been pretty simple. I write a tiny bit of rust code, look at the assembly output and try to figure out what’s going on (with lots of googling). I’m going to walk you through what I did, and what I figured out.
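
If you want to follow the same workflow, a minimal starting point (my setup, not the post's exact code) is a tiny function with a predictable symbol name, compiled with --emit=asm or viewed through the playground's assembly output:

// Compile with `rustc -O --emit=asm add_one.rs` and read the generated .s
// file, or paste this into the Rust playground and use its assembly view.
#[no_mangle] // keep a stable symbol name so it is easy to find in the output
pub fn add_one(x: u64) -> u64 {
    x.wrapping_add(1)
}

fn main() {
    assert_eq!(add_one(41), 42);
}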

I discovered evolutionary algorithms in my teens and was fascinated. After a bit of research, I somehow managed to hack together a neural network in GameMaker, attach it to the brains of a virtual car and watch it crash into walls.

I’ll use Rust for my examples, but follow along in any language you like! It’s said that if you put a monkey in front of a typewriter, given enough time it will produce the complete works of Shakespeare. We haven’t infinite time (or easy access to monkeys!), so let’s try something simpler, reproducing just this fragment from Hamlet:

const HAMLET: &str = "To be, or not to be--that is the question";
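
A natural first ingredient for this kind of experiment (my sketch, not the post's code) is a fitness function that scores a candidate by how many characters already match the target:

const HAMLET: &str = "To be, or not to be--that is the question"; // the fragment above

// Count positions where the candidate already matches the target fragment.
fn fitness(candidate: &str) -> usize {
    candidate
        .chars()
        .zip(HAMLET.chars())
        .filter(|(a, b)| a == b)
        .count()
}

fn main() {
    assert_eq!(fitness(HAMLET), HAMLET.chars().count());
    assert_eq!(fitness("Xo be"), 4); // everything but the first char matches
}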

This text is about my adventure writing a small CLI application (twice) using two languages I had little experience with.

1.45.2 contains two fixes, one to 1.45.1 and the other to 1.45.0.

Rust, as a compiled language without garbage collection, supports what have traditionally been C/C++ domains. This includes everything from high-performance distributed systems to microcontroller firmware. Rust offers an alternative set of paradigms, namely ownership and lifetimes. If you've never tried Rust, imagine pair programming alongside a nearly-omniscient but narrowly-focused perfectionist. That's what the borrow checker, a compiler component implementing the ownership concept, can sometimes feel like. In exchange for the associated learning curve, we get memory safety guarantees.

Like many in the security community, I've been drawn to Rust by the glittering promise of a safe alternative. But what does "safe" actually mean on a technical level? Is the draw one of moths to flame, or does Rust fundamentally change the game?

This post is my attempt to answer these questions, based on what I've learned so far. Memory safety is a topic knee-deep in operating system and computer architecture concepts, so I have to assume formidable prior systems security knowledge to keep this post short-ish. Whether you're already using Rust or are just flirting with the idea, hope you find it useful!

1.45.1 contains a collection of fixes, including one soundness fix. All patches in 1.45.1 address bugs that affect only the 1.45.0 release; prior releases are not affected by the bugs fixed in this release.

I’ve been helping out and contributing to exercism.io for the past few months. As an open source platform for learning programming languages that supports Rust, Exercism aligns very well with all the things I’m currently passionate about: open source, teaching, and Rust.

One of the most challenging hurdles the Exercism platform faces is the fact that students who opt in to receive mentor feedback on their work have to wait for a live person to get around to reviewing their submission. Decreasing wait times for students is thus an important metric to optimize for in order to improve the overall student experience on the platform.

In this post I’ll be talking about a project I’ve been working on that aims to address this problem. I’ll also be discussing the learnings I’ve been taking away from working on this project, as it’s my first non-toy Rust project that I’ve undertaken.

Rust has been the most loved programming language for the last 5 years. This, and many other factors, made me interested in learning more about Rust, especially from the perspective of a C# developer.

csharp

The Rust Core Team is happy to announce that we're opening up the agenda of our weekly triage meetings!

The 1.45.1 pre-release is ready for testing. The release is scheduled for this Thursday, the 30th.

At FP Complete, we have long spoken about the three pillars of a software development language: productivity, robustness, and performance. Often times, these three pillars are in conflict with each other, or at least appear to be. Getting to market quickly (productivity) often involves skimping on quality assurance (robustness), or writing inefficient code (performance). Or you can write simple code which is easy to test and validate (productivity and robustness), but end up with a slow algorithm (performance). Optimizing the code takes time and may introduce new bugs.

For the entire history of our company, our contention has been that while some level of trade-off here is inevitable, we can leverage better tools, languages, and methodologies to improve our standing on all of these pillars. We initially focused on Haskell, a functional programming language that uses a strong type system and offers decent performance. We still love and continue to use Haskell. However, realizing that code was only half the battle, we then began adopting DevOps methodologies and tools.

In this post, I wanted to share some thoughts on why we're thrilled to see Rust's adoption in industry, what we're using Rust for at FP Complete, and give some advice to interested companies in how they can begin adopting this language.

As a side project, I’m writing an Europa Universalis IV (EU4) leaderboard and in-browser save file analyzer called Rakaly. No need to be familiar with the game or the app, but feel free to check Rakaly out if you want, we’re just getting started.

I’m writing this post because whenever Rust is posted on the internet, it tends to elicit polar opposite responses: the evangelists and the disillusioned. Both can be correct in their viewpoints, but when one is neck deep in a Rust side project and stumbles across a discussion that is particularly critical of Rust, self-doubt may creep into one's thoughts. And while self-doubt isn't inherently bad (second guessing can be beneficial), too much doubt can cause one to disengage from their hobby. This post is me sharing struggles seen along the way and realizing that the wins Rust gave me far outweigh the struggles.

wasm

Get a firm grasp of each of these smart pointers and other advanced variables in Rust: Box, Cell, RefCell, Rc, Arc, RwLock, Mutex, OnceCell (and there are others!)

Sometimes I want to quickly tell whether a particular feature in Rust is supported and, if so, to what extent.

Stable Rust is soon getting const generics. Let’s look at how const generics can be used to avoid a certain bug at compile time. The bug? Trying to perform operations on data on separate CUDA devices in PyTorch.
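
The gist of the trick (my sketch, using an illustrative Tensor type rather than anything from PyTorch's actual bindings) is to carry the device index in the type, so mixing devices fails to compile:

struct Tensor<const DEVICE: usize> {
    data: Vec<f32>,
}

// Both operands must live on the same const DEVICE, or this will not type-check.
fn add<const D: usize>(a: &Tensor<D>, b: &Tensor<D>) -> Tensor<D> {
    let data = a.data.iter().zip(&b.data).map(|(x, y)| x + y).collect();
    Tensor { data }
}

fn main() {
    let a = Tensor::<0> { data: vec![1.0, 2.0] };
    let b = Tensor::<0> { data: vec![3.0, 4.0] };
    let c = add(&a, &b); // same device: compiles
    assert_eq!(c.data, vec![4.0, 6.0]);

    // let d = Tensor::<1> { data: vec![5.0] };
    // add(&a, &d); // different devices: rejected at compile time
}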

Sizedness is lowkey one of the most important concepts to understand in Rust. It intersects a bunch of other language features in often subtle ways and only rears its ugly head in the form of "x doesn't have size known at compile time" error messages which every Rustacean is all too familiar with. In this article we'll explore all flavors of sizedness from sized types, to unsized types, to zero-sized types while examining their use-cases, benefits, pain points, and workarounds.
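
As a taste of the territory the article covers, a few of those flavors in one snippet (mine, not the article's):

use std::mem::size_of;

fn main() {
    // Sized types have a size known at compile time.
    assert_eq!(size_of::<u64>(), 8);

    // Zero-sized types take up no space at all.
    struct Marker;
    assert_eq!(size_of::<Marker>(), 0);

    // str and [T] are unsized: they can only be handled behind a pointer,
    // and that pointer is "fat" because it carries the length too.
    assert_eq!(size_of::<&str>(), 2 * size_of::<usize>());

    // Generics require T: Sized by default; ?Sized opts out so unsized
    // types can be passed behind a reference.
    fn describe<T: ?Sized>(_value: &T) {}
    describe("hello");
    describe(&42u32);
}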

Rust’s module system is surprisingly confusing and causes a lot of frustration for beginners.

In this post, I’ll explain the module system using practical examples so you get a clear understanding of how it works and can immediately start applying this in your projects.

Since Rust’s module system is quite unique, I request the reader to read this post with an open mind and resist comparing it with how modules work in other languages.
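
As a taste of what those examples look like, here is a tiny module tree of my own (not taken from the post):

mod network {
    // Items are private to their module unless marked `pub`.
    pub mod server {
        pub fn start() -> &'static str {
            // Child modules can reach private items of their ancestors
            // through `super::`.
            super::default_port_message()
        }
    }

    fn default_port_message() -> &'static str {
        "listening on 8080"
    }
}

use network::server;

fn main() {
    println!("{}", server::start());
}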

modules

Recently I've had to write some nom code (it is a parser combinator library for Rust). To my surprise I discovered that there is no combinator for creating a parser which always succeeds, returning a certain given value. At least not without using macros, which is discouraged in nom v5. That combinator would be something like pure in Haskell Parsec. It's not very useful on its own, but can be used as part of other combinators, say providing a default alternative for alt.

So I decided to add success to nom. After looking at the library code, I realised that it uses closures quite heavily, and I hadn't used them much in Rust, so I had some questions.
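
Stripped of nom's own types, the combinator in question is tiny (a sketch of the idea, not nom's actual implementation): a parser that consumes nothing and always succeeds with a fixed value.

// A "parser" here is anything that maps input to Ok((remaining_input, output)).
fn success<I, O: Clone, E>(value: O) -> impl Fn(I) -> Result<(I, O), E> {
    move |input| Ok((input, value.clone()))
}

fn main() {
    let default_answer = success::<&str, _, ()>(42);
    assert_eq!(default_answer("rest of input"), Ok(("rest of input", 42)));
}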

Again? It feels like we just had one of these...6 weeks ago 😉. Anyways, much of this sprint was a continuation of the previous two: working towards making Chalk feature-complete and eventually using it in rustc for trait solving.

There are two big changes to be aware of in Rust 1.45.0: a fix for some long-standing unsoundness when casting between integers and floats, and the stabilization of the final feature needed for one of the more popular web frameworks to work on stable Rust.
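
The casting fix in particular is easy to demonstrate (my example): float-to-integer as casts now saturate instead of being undefined for out-of-range values.

fn main() {
    // As of 1.45.0, out-of-range float-to-int casts clamp to the integer's
    // bounds, and NaN becomes zero, instead of causing undefined behavior.
    assert_eq!(300.0_f32 as u8, 255);
    assert_eq!(-1.0_f32 as u8, 0);
    assert_eq!(f32::NAN as u8, 0);
}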

It’s hard to believe that it’s been more than 3 years since I opened RFC 2000, which defined const generics for Rust. At the same time, reading the RFC thread, there’s also been a huge amount of change in this area: for one thing, at the time the RFC was written, const fns weren’t stable, and consts weren’t even being evaluated using miri yet. There’s been a lot of work over the years on the const generics feature, but still nothing has shipped. However, I think we have defined a very useful subset of const generics which is stable enough to ship in the near term.

The Rust compiler has a few assumptions that it makes about the behavior of all code. Violations of those assumptions are referred to as Undefined Behavior. Since Rust is a safe-by-default language, programmers usually do not have to worry about those rules (the compiler and libraries ensure that safe code always satisfies the assumptions), but authors of unsafe code are themselves responsible for upholding these requirements.

Those assumptions are listed in the Rust reference. The one that seems to be most surprising to many people is the clause which says that Rust code may not produce “[…] an invalid value, even in private fields and locals”. The reference goes on to explain that “producing a value happens any time a value is assigned to or read from a place, passed to a function/primitive operation or returned from a function/primitive operation”. In other words, even just constructing, for example, an invalid bool, is Undefined Behavior—no matter whether that bool is ever actually “used” by the program. The purpose of this post is to explain why that rule is so strict.
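
To make the rule concrete (my example, not the post's; running the unsafe line under Miri reports the violation):

fn main() {
    // A bool's only valid bit patterns are 0 and 1. Merely producing any
    // other pattern as a bool is Undefined Behavior, even if the value is
    // never "used" afterwards.
    let byte: u8 = 3;

    // Undefined Behavior -- left commented out on purpose:
    // let bad: bool = unsafe { std::mem::transmute(byte) };

    // The sound way: validate before converting.
    let checked = match byte {
        0 => Some(false),
        1 => Some(true),
        _ => None,
    };
    assert_eq!(checked, None);
}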

I ported a C library to rust last week, and it went pretty smoothly. The library in question is RNNoise, a library for removing noise from audio. It works well, it runs fast, and best of all it has no knobs that you need to tune. There’s even a rust binding.

audio

In my previous article, I argued for a case of extending the existing debuggers to provide better support for Rust. However, after some more research and thinking, I feel that we should consider the idea of creating a new debugger framework from scratch, taking inspiration from other great projects. Not-invented-here syndrome aside, I think there are some valid reasons for going in this direction.

debugger

Recently at work I managed to hit the Orphan Rules implementing some things for an internal crate. Orphan Rules you say? These are ancient rules passed down from the before times (pre 1.0) that have to do with trait coherence. Mainly, if you and I both implement a trait from another crate on the same type in another crate and we compile the code, which implementation do we use? Coherence is about knowing exactly which implementation of the code we use. Unfortunately this can be a bit strict at times, and it was the issue I ran into; I want to talk about how it was solved.

When updating dependency crates for 1.22.0, a change in behaviour of the url crate slipped in which caused env_proxy to cease to work with proxy data set in the environment. This is unfortunate since those of you who use rustup behind a proxy and have updated to 1.22.0 will now find that rustup may not work properly for you.

Macros in Rust tend to have a reputation for being complex and magical, the likes of which only seasoned wizards like @dtolnay can hope to understand, let alone master.

Rust’s declarative macros provide a mechanism for pattern matching on arbitrary syntax to generate valid Rust code at compile time. I use them all the time for simple search/replace style operations like generating tests that have a lot of boilerplate, or straightforward trait implementations for a large number of types.
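
A small example of that search/replace style (mine, not the macro from the post): stamping out a batch of near-identical tests.

// Each `name: input => expected` line expands into its own #[test] function.
macro_rules! parse_tests {
    ($($name:ident: $input:expr => $expected:expr,)*) => {
        $(
            #[test]
            fn $name() {
                assert_eq!($input.parse::<i32>().ok(), $expected);
            }
        )*
    };
}

parse_tests! {
    parses_positive: "42" => Some(42),
    parses_negative: "-7" => Some(-7),
    rejects_garbage: "abc" => None,
}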

Unfortunately once you need to do more than these trivial macros, the difficulty tends to go through the roof…

I recently encountered a situation at work where a non-trivial technical problem could be solved by writing an equally non-trivial macro. There are a number of tricks and techniques I employed along the way that helped keep the code manageable and easy to implement, so I thought I’d help the next adventurer by writing them down.

macros

Memory-sensitive languages like C++ and Rust use compile-time information to calculate sizes of datatypes. These sizes are used to inform alignment, allocation, and calling conventions in ways that improve runtime performance. Modern languages in this setting support generic types, but so far these languages only allow parameterisation over types, not type constructors. In this article I describe how to enable parameterisation over arbitrary type constructs, while still retaining compile-time calculation of datatype sizes.

I started experimenting with asynchronous Rust code back when futures 0.1 was all we had - before async/await. I was a Rust baby then (I'm at least a toddler now), so I quickly drowned in a sea of .and_then, .map_err and Either<A, B>.

But that's all in the past! I guess!

Now everything is fine, and things go smoothly. For the most part. But even with async/await, there are still some cases where the compiler diagnostics are, just, so much.

There have been serious improvements already in terms of diagnostics - the errors aren't as rough as they used to be, but there's still a ways to go. Despite that, it's not impossible to work around them and achieve the result you need.

So let's try to do some HTTP requests, get ourselves in trouble, and instead of just "seeing if a different crate would work", get to the bottom of it, and come out the other side slightly more knowledgeable.

async http

The rustup working group is happy to announce the release of rustup version 1.22.0. This release is mostly related to internal rework and tweaks in UI messages. It is effectively a quality-of-life update.

Our Rust project is a large and diverse one. Its activities are broadly coordinated by teams that give the community space to find and contribute to the things that matter to them. We’re trialing a reorganization of standard library activities between the Libs and Compiler teams. Going forward, the Libs team will own just the public API of the standard library, and the Compiler team will own its implementation. The goal of this separation of concerns is to better suit the interests of both teams and to better support the needs of the standard library. It's a lot like the existing relationship between the Lang and Compiler teams, where the Lang team owns the Rust language design and the Compiler team owns the code that implements it. We'll re-evaluate how the trial is going later in the year and decide whether or not to make the change permanent.

My previous blog post introduced my work to improve Rust's support for RISC-V Linux systems. Since then I fixed a couple of interesting compiler bugs. This blog post is more technical - describing these bugs and explaining some rustc internals along the way. I conclude by reporting on movement in the broader Rust community regarding RISC-V. I assume that the reader is comfortable with programming terminology. This blog post contains Rust code samples but the reader is not expected to be fluent in Rust.

riscv

Have you ever looked up into the stars and wondered, “How the heck is that feature implemented”? In this series, I’ll (hopefully) dive into the implementation of coroutines for several (compiled) programming languages.

A short disclaimer: I’m not too sharp on the details of some (or actually any) of these implementations. Most of this will just be me rambling and looking at compiler source code/the output of the Godbolt Compiler Explorer. I’ll try to validate every claim I post here, but some mistakes are sure to sneak their way into one of these. Feel free to point them out and I’ll fix them as soon as I can.

async

I've been banging the same drum for years: APIs must be carefully designed.

This statement doesn't resonate the same way with everyone. In order to really understand what I mean by "careful API design", one has to have experienced both ends of the spectrum.

But there is a silver lining - once you have experienced "good design", it's really hard to go back to the other kind. Even after acknowledging that "good design" inevitably comes at a cost, whether it's cognitive load, compile times, making hiring more challenging, etc.

It's a very difficult experience to learn about something new, and understand its value, and then go back to the old way of doing things. This holds true for any topic, not just programming.

I know it's been very difficult for me. So, here's your warning: once you learn to spot design deficiencies, you can't unlearn it, and it does make it harder to just "get the job done". It's a delicate balance.

Lately we're exploring how Rust's designs discourage fast compilation. In the previous post in the series we discussed compilation units, why Rust's are so big, and how that affects compile times.

This time we're going to wrap up discussing why Rust is slow with a few more subjects: LLVM, compiler architecture, and linking.

Thanks to the work of Nicholas Nethercote and Alex Crichton, there have been some recent improvements that reduce the size of compiled libraries and improve compile-time performance, particularly when using LTO. This post dives into some of the details of what changed, and an estimate of the benefits.

A bit more than four years ago I started the xi-editor project. Now I have placed it on the back burner (though there is still some activity from the open source community).

The original goal was to deliver a very high quality editing experience. To this end, the project spent a rather large number of “novelty points”.
I’ve written the CRDT part of this retrospective already, as a comment in response to a Github issue. That prompted good discussion on Hacker News. In this post, I will touch again on CRDT but will focus on the other aspects of the system design.

gui

Programming languages have different methods of representing asynchronous operations. The way Rust handles concurrency should be familiar if you’ve ever used async/await in JavaScript. The keywords are the same and the fundamental model is similar, except that in Rust, deferred computations are called futures instead of promises. What is not so familiar is that you need to pick a runtime to actually run your asynchronous code.

Rust targets everything from bare-metal, embedded devices to programs running on advanced operating systems and, like C++, focuses on zero-cost abstractions. This impacts what is and isn’t included in the standard library.

Once you have the know-how, you’ll find it’s really not difficult to get started with async in Rust. If you’re writing an asynchronous program in Rust for the first time or just need to use an asynchronous library and don’t know where to start, this guide is for you. I’ll try to get you going as quickly as possible while introducing you to all the essentials you should know.
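To show the one unfamiliar piece in isolation (function names made up): an async fn by itself does nothing; something has to drive the future it returns.

```rust
// Futures in Rust are lazy: calling an async fn only constructs a future.
async fn say_hello() {
    println!("hello");
}

fn main() {
    let fut = say_hello(); // nothing printed yet
    // A runtime has to poll the future to completion; here the minimal
    // executor from the `futures` crate stands in for a full runtime.
    futures::executor::block_on(fut); // prints "hello"
}
```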

async

The Rust language and the Rust community are really interesting if you want to build better-quality systems software.

Over the last few months, I have been trying to understand one more part of the story: what is the state of formal verification tools for Rust? So what tools are out there? What can you do with them? Are they complete? Are they being maintained? What common standards and benchmarks exist?

Here is a list of the tools that I know about.

Whenever I have a chance, I extol the virtues of message-passing and event-loops in structuring concurrent workflows in Rust. However, in my wide-eyed enthusiasm, I will make it sound almost as if it cannot go wrong.

So today, let’s go into some details about the archetype of buggy message-passing code.

Lately we're exploring how Rust's designs discourage fast compilation. In the previous post in the series we discussed the difficult compile-time tradeoffs required to implement generics.

This time we're going to talk about compilation units.

The Foreword stated that

> Zero To Production will focus on the challenges of writing cloud-native applications in a team of four or five engineers with different levels of experience and proficiency.

How? Well, by actually building one!

Like a lot of folks stuck at home during lockdown, I'm currently spending my newfound free time on a personal project. I'm developing it in Rust because I need fast execution speed and memory efficiency, although guaranteed memory safety is certainly a great perk.

The program I'm building is a phylogenetic inference tool (used to estimate evolutionary trees), so there's a lot of constructing and modifying tree data structures. Naturally, tree traversals are fundamental to many of the algorithms I'll use, but implementing one in Rust was not as easy as I expected.
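To give a flavour of why this trips people up, here is a minimal, hypothetical tree node (not the post's actual data structure): parent-owned children work nicely with plain recursion, but anything like back-pointers quickly forces you toward Rc/RefCell or arena indices.

```rust
// Children are owned by their parent, so a simple recursive traversal is easy.
struct Node {
    value: f64,
    children: Vec<Node>,
}

fn sum(node: &Node) -> f64 {
    node.value + node.children.iter().map(sum).sum::<f64>()
}
```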

Don’t panic! Learn to build quality software resilient to errors.

error-handling

I tried writing a Chunked-List data structure and made all the mistakes while using unsafe for that.

I have always avoided C and C++ because I knew I could not be trusted with pointers. Rust allows me to keep the high-risk code contained in small unsafe { } sections. I want to write down what mistakes I made inside those sections and how I caught them.

unsafe

Rust 1.44.1 addresses several tool regressions in Cargo, Clippy, and Rustfmt introduced in the 1.44.0 stable release.

Our goal for tonari is to build a virtual doorway to another space that allows for truly natural human interactions. Nearly two years in development, tonari is, to the best of our knowledge, the lowest-latency high resolution production-ready "teleconferencing" (we are truly not fond of that word) product available.

Since launching our first pilot in February, we've experienced no software-related downtime (tripping over ethernet cables is a different story). And as much as we would love to think we're infallible engineers, we truly don't believe we could have achieved these numbers with this level of stability without Rust.

I believe Rust is a great language to make SIMD actually usable for ordinary humans. I played with libraries for making it accessible two years ago (or was it 3?) and my impression was „Whoa! This is cool. I can’t wait until this is usable on stable.“ The libraries back then were stdsimd and faster.

Fast forward to today. I considered using some SIMD operations in a project at work. I have some bitsets and wanted to do operations like bitwise AND on them. If I represent them as a bunch of unsigned integers, using SIMD on that makes sense. But for that, I need to compile on stable, I want the code to be readable and I don’t want to deal with writing multiple versions of the code to support multiple levels of SIMD support.

The thing is, while using SIMD on stable is possible, the standard library offers only the intrinsics. These are good enough as the low-level stuff to build a library on top, but none of the current ones quite cut it.
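For context, the operation in question is simple to express in plain Rust (hypothetical helper, not the post's code); the hope is that a good library, or the autovectorizer, turns it into wide vector instructions.

```rust
/// Bitwise AND of two bitsets stored as chunks of u64.
fn and_in_place(a: &mut [u64], b: &[u64]) {
    assert_eq!(a.len(), b.len());
    for (x, y) in a.iter_mut().zip(b) {
        *x &= *y;
    }
}
```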

simd

Cargo and crates.io are an amazing part of the Rust ecosystem and one of the things that makes Rust so pleasant to work with. However, it’s quite easy for your cargo dependencies to pile up without you noticing. For example you might blindly add the (rather awesome) reqwest crate, not realizing that it might increase your binary size by over 4MB.

Now, you might be thinking to yourself, “does it really matter?", and the answer might well be “no”. However, I think it’s important to have some idea about the binary sizes and number of dependencies in your project as you make changes. It might not be worth adding a specific library, and its associated size overhead, when a smaller and simpler library might suffice.

While implementing ringbahn, I introduced at least two bugs that caused memory safety errors, resulting in segfaults, allocator aborts, and bizarre undefined behavior. I’ve fixed both bugs that I could find, and now I have no evidence that there are more memory safety issues in the current codebase (though that doesn’t mean there aren’t, of course). I wanted to write about both of these bugs, because they had an interesting thing in common: they were both caused by destructors.

unsafe

Dart is a client-optimized language for fast apps on any platform. It makes it easy to build the UI of your application and is quite a nice language to work with. It is the language used by the Flutter framework, Google’s UI toolkit for building beautiful, natively compiled applications for mobile, web, and desktop from a single codebase.

Rust is blazingly fast and memory-efficient, with no runtime or garbage collector. It can power performance-critical services, run on embedded devices, and easily integrate with other languages.

We are using both Rust and Dart (in Flutter) at Sunshine to enable open-source grant initiatives to easily operate in an on-chain ecosystem.

Almost all of our code is written in Rust, which is why we wanted to use the same code and the same logic in our client-side application - but how?
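The usual bridge is C-compatible functions exported from the Rust side, which Dart can then bind to via dart:ffi. A minimal sketch (function name hypothetical):

```rust
// Exposed with the C ABI so that Dart's FFI can call into it.
#[no_mangle]
pub extern "C" fn add(a: i32, b: i32) -> i32 {
    a + b
}
```

Built as a cdylib, the resulting shared library can be loaded from the Dart side and the function looked up by name.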

ffi

This book arises from my frustration at not finding modern, clear and concise teaching materials that are readily accessible to beginners like me who want to learn a bit about how to create their own programming language.

book

This is just a note on getting the best performance out of an async program.

The point of using async IO over blocking IO is that it gives the user program more control over
handling IO, on the premise that the user program can use resources more effectively than the kernel
can. In part, this is because of the inherent cost of context switching between the userspace and
the kernel, but in part it is also because the user program can be written with more specific
understanding of its exact requirements.
There are two main axes on which async IO can gain performance over threads with blocking IO:

* Scheduling time overhead: scheduling tasks in userspace can be substantially faster than
scheduling threads in kernel space when implemented well.
* Stack memory overhead: userspace tasks can use far less memory per task than an OS thread uses
per thread.

async

We are forming two new groups in the compiler team:

* A Windows group, for helping us to diagnose and resolve Windows-related issues.
* An ARM group, for helping us to resolve issues specific to the ARM architectures.

Each of these groups is a "notification group", which means that anyone can add their own name to the list -- if you do, you'll receive pings when Windows- or ARM-related bugs arise.

windows arm

Nightly Rust has had a syntax for "inline assembly" (asm!) for a long time; however, this syntax just exposed a very raw version of LLVM's assembly construct, with no safeguards to help developers use it. Getting any detail of this syntax even slightly wrong tended to produce an Internal Compiler Error (ICE) rather than the kind of friendly error message you've come to expect from rustc. This syntax was also error-prone for another reason: it looks similar to GCC's inline assembly syntax, but has subtle differences (such as the names in register constraints). This syntax also had little to no hope of being supported on any non-LLVM backend. As a result of all these limitations, the asm! syntax was highly unlikely to ever graduate from nightly to stable Rust, despite being one of the most requested features.

In an effort to improve asm! and bring it to more users, Amanieu d'Antras designed and implemented a new, friendlier syntax for asm!.
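To give an idea of the new syntax (nightly-only behind a feature gate at the time of this post), here is a minimal x86_64 sketch; the values and register choices are arbitrary:

```rust
#![feature(asm)] // nightly-only when this was written

fn main() {
    let x: u64;
    unsafe {
        // The compiler picks a register for us via `out(reg)`.
        asm!("mov {}, 42", out(reg) x);
    }
    assert_eq!(x, 42);
}
```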

This is a shorter blog post than usual: in acknowledgement that taking a stand against the police brutality currently happening in the US and the world at large is more important than sharing tech knowledge, we decided to significantly scale back the amount of promotion we're doing for this release.

The Rust Core Team believes that tech is and always will be political, and we encourage everyone to take the time today to learn about racial inequality and support the Black Lives Matter movement.

Rust has a runtime. No really it does! I blew past that aspect in my most recent post "Oxidizing the technical interview" when I put in the line, "The greatest trick the Devil ever played was convincing C and Rust programmers their language has no runtime", and you can see there's a bit more to the machinery in running a Rust program with this bit of code used in that post.

Zero To Production is a book that I will be writing in the open, publishing one chapter at a time on this blog.

The Rust ecosystem has had a remarkable focus on smashing adoption barriers with amazing material geared towards beginners and newcomers, a relentless effort that goes from documentation to the continuous polishing of the compiler diagnostics. There is value in serving the largest possible audience. At the same time, trying to always speak to everybody can have harmful side-effects: material that would be relevant to intermediate and advanced users but definitely too much too soon for beginners ends up being neglected.

I struggled with it first-hand when I started to play around with async/await. There was a significant gap between the knowledge I needed to be productive and the knowledge I had built reading The Rust Book or working in the Rust numerical ecosystem.

I wanted to get an answer to a straight-forward question: Can Rust be a productive language for API development? Yes. But it can take some time to figure out how. That’s why I am writing this book.

async book

In this post, we will discuss an interesting technique for testing test coverage, and the associated Rust crate — cov-mark. The two goals of the post are:

1. Share the knowledge about a specific testing approach.
2. Show a couple of Rust tricks for writing libraries.

This post is an independent sequel to the A Trick for Test Maintenance post.
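To make the idea concrete, a rough sketch of how cov-mark is used (names made up): the production code declares that a branch was hit, and the test asserts that the mark was actually hit while it ran.

```rust
fn validate(name: &str) -> Result<(), String> {
    if name.is_empty() {
        // Record that this particular branch was taken.
        cov_mark::hit!(empty_name_rejected);
        return Err("name must not be empty".to_string());
    }
    Ok(())
}

#[test]
fn rejects_empty_name() {
    // Fails the test if the marked branch is never executed.
    cov_mark::check!(empty_name_rejected);
    assert!(validate("").is_err());
}
```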

testing

Whenever you write any kind of code, it’s critical to put it to the test. In this guide, we’ll walk you through how to test Rust code.

But before we get to that, I want to explain why it’s so important to test. To put it plainly, code has bugs. This unfortunate truth was uncovered by the earliest programmers and it continues to vex programmers to this day. Our only hope is to find and fix the bugs before we bring our code to production.

Testing is a cheap and easy way to find bugs. The great thing about unit tests is that they are inexpensive to set up and can be rerun at a modest cost.

Think of testing as a game of hide-and-seek on a playground after dark. You could bring a flashlight, which is highly portable and durable but only illuminates a small area at any given time. You could even combine it with a motor to rotate the light to reveal more, random spots. Or, you could bring a large industrial lamp, which would be heavy to lug around, difficult to set up, and more temporary, but it would light up half the playground on its own. Even so, there would still be some dark corners.
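For readers who haven't written one yet, the built-in test harness needs nothing more than an attribute; a minimal sketch (function and assertion made up), run with `cargo test`:

```rust
fn add(a: i32, b: i32) -> i32 {
    a + b
}

#[cfg(test)]
mod tests {
    use super::*;

    #[test]
    fn adds_two_numbers() {
        assert_eq!(add(2, 2), 4);
    }
}
```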

testing

I've held all of these misconceptions at some point and I see many beginners struggle with these misconceptions today. In a nutshell: A variable's lifetime is how long the data it points to can be statically verified by the compiler to be valid at its current memory address. I'll now spend the next ~6000 words going into more detail about where people commonly get confused.
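A tiny example of that definition in action (variable names arbitrary): the compiler rejects a reference that would outlive the data it points to.

```rust
fn main() {
    let r;
    {
        let x = 5;
        r = &x; // error[E0597]: `x` does not live long enough
    }
    println!("{}", r);
}
```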

lifetimes

In this final part of the series, we’ll explore a trick to make the behaviour of a macro depend on whether it’s used as a statement or as part of an expression. Using that, we’ll make the python!{} macro more flexible to allow saving, reusing, and inspecting Python variables.

python

3 part video tutorial for beginners to Rust programming on iteration.

video

One of the things that I personally struggled with when learning Rust was how to organize large programs with multiple modules.

In this post, I'll explain how I organize the codebase of just, a command runner that I wrote.

just was the first large program I wrote in Rust, and its organization has gone through many iterations, as I discovered what worked for me and what didn't.

There are some things that could use improvement, and many of the choices I made are somewhat strange, so definitely don't consider the whole project a normative example of how to write Rust.

Ever since it’s illegal for me to leave my house, my weekends have been filled with rewriting Whisperfish. Whisperfish is an app, originally by Andrew E. Bruno, that natively implements Signal for SailfishOS. My goal with the rewrite is to modernize the non-GUI code such that it uses the official libsignal-protocol-c instead of the Go-reimplementation. For this, I would either use C++ or Rust; the title of the post probably spoiled which one I prefer.

I’m imagining two target audiences for this blog post: either you’re a Rustacean, and you’re here for the Tokio and Actix magic, or (and that’s not xor) you’re from the SailfishOS community and you’re wondering what all the Tokio and Actix buzzwords are even about. With that in mind, I’ll make an introduction on both topics, and depending on your background, you can skip either.

qt sailfish gui actix

Rust is a powerful systems programming language with strong memory guarantees. Rust allows for concise expression at a high level, while still producing fast low-level code. However, Rust does not guarantee the calling conventions and layout of structures in memory, which makes it difficult to write external applications that interface with Rust; Rust lacks a standardized ABI. Standardizing Rust's ABI has been brought up before, but has usually gone nowhere due to the difficulty of the task. In this post, we outline the benefits and stumbling-blocks of a stable ABI, as well as suggest a semi-novel technique as to how such an ABI could be implemented.

abi

Years ago, as a young'in from the Rust Belt, you dreamed of becoming a systems programmer, an Abyss Gazer if you will. You wanted to live, breathe, and die by being as close to the metal as you could. You grew up in a harsh environment of reading assembly output to shave off a few instructions in an attempt to make the fastest code, segfaults, and poorly documented hardware, all to become a systems programmer. You look back upon those years fondly as it's turned you into the programmer you are today. An unparalleled Abyss Gazer, an Eldritch One, one who makes mortal men fear and worship the code you've wrought upon this world that holds the very foundations of it together. Today you've come for an interview at the request of a friend for a position using Rust. They hope to be able to hire you so that you may channel the will of Ferris the Rustacean into all the code that flies from your fingers for them. Of course they can't just give you the job. Your potential future comrades must see for themselves whether the rumors about your skill are true. You look to your interviewer who asks if you're ready to get started. You nod, ready to take on any task given to you.

What's the point of pattern matching if we already have conditionals and variable assignment in a language?

Mutability is one of the commonly confusing topics for Rust beginners. There are numerous articles explaining WHAT it is with hypothetical examples. I am trying to explain WHAT it is and WHY it is using a close-to-real-world example, so that we can appreciate the practical benefits of this concept.

Every now and then when using native libraries from Rust you’ll be asked to pass a callback across the FFI boundary. The reasons are varied, but often this might be done to notify the caller when “interesting” things happen, for injecting logic (see the Strategy Pattern), or to handle the result of an asynchronous operation.

If this were normal Rust, we’d just accept a closure (e.g. a Box<dyn Fn(...)> or by being generic over any function-like type) and be done with it. However, when working with other languages you are reduced to the lowest common denominator, as the C language (or more specifically, the ABI and machine code in general) doesn’t understand generics or Rust’s “fat” pointers.

This means we need to be a little… creative.
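Concretely, "creative" usually means dropping down to a plain function pointer plus an opaque user-data pointer. A hedged sketch with made-up names:

```rust
use std::os::raw::c_void;

// The lowest common denominator: a C function pointer plus an opaque pointer
// that is handed back to the caller untouched.
type Callback = unsafe extern "C" fn(result: i32, user_data: *mut c_void);

#[no_mangle]
pub unsafe extern "C" fn do_work(cb: Callback, user_data: *mut c_void) {
    let result = 42; // stand-in for the "interesting" thing that happened
    cb(result, user_data);
}
```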

ffi

As a big fan of powerline-shell, I wanted to see if I could distribute this in a usable form to my systems that I frequently log into. There’s not many, but they do vary in power and architecture (some x86_64, some arm7, some arm6 etc).

cross-compilation

I found some spare time this past week and sat down with a nice brew and Jon Gjengset's excellent Crust of Rust video on declarative macros. For the longest time, macros have felt like 'that last part of Rust that I haven't gotten around to checking out'. I've had a vague notion of what they are, but have never quite gotten to exploring them. However, Gjengset's video served as a perfect introduction to declarative macros, and was just enough to get me started.

macros

In this tutorial series I am going to attempt to introduce ways of reverse engineering programs that were written in the Rust programming language and to explain concepts in bite-sized morsels. This means most example code in each tutorial will normally only consist of one to two functions that are program specific.

reverse-engineering

It's that time of year again: another traits working group sprint summary. And ohh boy, it was a busy sprint. While the first sprint of the year somewhat lacked direction and we very much "figured it out while we went", this sprint was much smoother. This was in part because of the tools and procedures that we settled into in sprint 1, such as the skill tree or a running sprint doc to track progress.

Recently, there has been a lot of progress in the Rust logging & tracing ecosystem: projects like tracing make our lives much simpler, allowing us to track down bugs even in complex asynchronous environments in production. However, they still can’t replace interactive debuggers like gdb or lldb, which provide so much more than just stack traces and variables printing.

In an ideal world, you can use your debugger as a read-eval-print loop, writing, rewriting, and testing entire functions in an interactive session. Techniques like post-mortem debugging make it possible to prod dead processes, providing immeasurable help in understanding the state of a program just before it crashed. But the harsh reality is that debuggers can be counter-intuitive, hard to use, or just suck, and it’s doubly challenging for Rust.

debugger

A couple of weeks ago I spent some time pair-programming with Penelope Phippen, on her Ruby auto-formatter rubyfmt. This codebase is largely written in Rust and does a lot of work transforming abstract syntax trees to determine how your Ruby code should be formatted. As such, it deals with recursion and stack structures, and this presented an interesting opportunity to get the type system to enforce some business rules for us.

ruby

During the past several months, I’ve been swamped with in-the-trenches rust-analyzer work. Today, I spontaneously decided to take a step back and think about longer-term "road map" for rust-analyzer.

What follows is my (@matklad) personal thoughts on the matter; they do not necessarily reflect the consensus view of the IDE or compiler teams :-)

This post contains a gentle introduction to procedural macros in Rust and a guide to writing a procedural macro to curry Rust functions.
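Before diving in, it may help to see the kind of transformation a currying macro aims for. This is a hypothetical sketch of input and hand-written output, not the crate's actual expansion:

```rust
// What we might write:
//     #[curry]
//     fn add(a: u32, b: u32) -> u32 { a + b }
//
// Roughly the shape of code a currying macro would generate:
fn add(a: u32) -> impl Fn(u32) -> u32 {
    move |b| a + b
}

fn main() {
    let add_five = add(5);
    assert_eq!(add_five(3), 8);
}
```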

procedural-macros

With all that's going on in the world you'd be forgiven for forgetting that as of today, it has been five years since we released 1.0 in 2015! Rust has changed a lot these past five years, so we wanted to reflect back on all of our contributors' work since the stabilization of the language.

Rust is a general purpose programming language empowering everyone to build reliable and efficient software. Rust can be built to run anywhere in the stack, whether as the kernel for your operating system or your next web app. It is built entirely by an open and diverse community of individuals, primarily volunteers who generously donate their time and expertise to help make Rust what it is.

In this post we will explore a brief example of asynchronous programming in Rust with the Tokio runtime, demonstrating different execution scenarios. This post is aimed at beginners to asynchronous programming.

The source code for this example is available on Github. A branch using the async-std runtime is also available (contributed by @BartMassey).
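As a taste of what such examples look like (task bodies made up), here is spawning two concurrent tasks on the Tokio runtime and waiting for both:

```rust
#[tokio::main]
async fn main() {
    // Spawned tasks run concurrently on the runtime's scheduler.
    let a = tokio::spawn(async { 1 + 1 });
    let b = tokio::spawn(async { 2 + 2 });

    let (a, b) = (a.await.unwrap(), b.await.unwrap());
    println!("a = {}, b = {}", a, b);
}
```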

async tokio async-std

Have you ever seen the Rust compiler give a Python error? Or better, have you ever seen rust-analyzer
complain about Python syntax? In this post, we’ll extend our python!{} macro to make that happen.

python

In C++ there is a common idiom used when writing a low level interface that has different implementations for multiple architectures. That is to use the preprocessor to select the appropriate implementation at compile time. This pattern is frequently used in C++ math libraries.

There is nothing quite the same as the C preprocessor in Rust. That is generally considered a good thing, however when it comes to the above idiom there is nothing I’ve found in Rust that feels quite as convenient.
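For reference, the usual Rust substitute is conditional compilation with cfg attributes, selecting one of several module implementations at compile time; a minimal sketch with made-up module and function names:

```rust
// Pick an implementation per architecture, then re-export a common interface.
#[cfg(target_arch = "x86_64")]
mod math_impl {
    pub fn dot(a: &[f32], b: &[f32]) -> f32 {
        // An x86_64-specific implementation would go here.
        a.iter().zip(b).map(|(x, y)| x * y).sum()
    }
}

#[cfg(not(target_arch = "x86_64"))]
mod math_impl {
    pub fn dot(a: &[f32], b: &[f32]) -> f32 {
        a.iter().zip(b).map(|(x, y)| x * y).sum()
    }
}

pub use math_impl::dot;
```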

Rust 1.43.1 addresses two regressions introduced in the 1.43.0 stable release, and updates the OpenSSL version used by Cargo.

So typically when you want to make your own compiled language you need a compiler to.. well.. compile it into something useful. Then making it work across a wide range of operating systems and CPU architectures is a huge effort, let alone having it be performant. This is where LLVM comes in.

You can scan through your source code, parse it into an Abstract Syntax Tree (AST), then generate some abstract language (let’s call this Intermediate Representation) which LLVM can understand, and LLVM says “Thanks! We’ll take it from here”.

So as long as you can generate something LLVM understands, it can make fast binaries for you, and that’s the advantage; you focus on your syntactical analysis, they focus on generating fast executables.

Rust, C, C++, Swift, Kotlin and many other languages do this and have been doing so for years. Often it’s achieved by having some component that generates LLVM IR.

There has been much talk recently about "try fn" in Rust. This is to add some fuel to the fire and address one major argument against try-like sugar for functions: the special casing of Result.

This is explicitly not supposed to be a proposal. This is just meant to open discussion on a new avenue of the "try fn" design space I haven't seen mentioned yet.

error-handling

The point of the async interview series, in the end, was to help figure out what we should be doing next when it comes to Async I/O. I thought it would be good then to step back and, rather than interviewing someone else, give my opinion on some of the immediate next steps, and a bit about the medium to longer term. I’m also going to talk a bit about what I see as some of the practical challenges.

async

The Mun v0.2 release is on the horizon, so we wanted to take this opportunity to delve a little deeper into this release’s big newcomer: hot reloadable structs. Being able to effortlessly hot reload data was what we originally set out to do when designing Mun, so we are excited to share how we brought this feat about.

One core principle of writing user friendly software is that the software should adapt to the user, not the user to the software. A user shouldn’t need to learn a new language in order to use the software for example.

Therefore internationalization is a usability feature that we shouldn’t ignore.

There exists a number of frameworks for translations, but GNU Gettext is one of the most used. So that is what we will use also.

i18n

Lately, I’ve been hacking on the next version of luminance, luminance-0.40. It should be out “soon-ish” but in the meantime, I’ve been struggling a bit with some highly and strongly typed code. I want to share something interesting I discovered with rustc. Especially, I haven’t seen a mention of that property in the book, so I’m happily sharing.

One day I wanted to quickly print how long certain pieces of code run for, without setting up the whole profiling business. I used std::time::Instant for that; it's nice and easy and gives access to a monotonic clock on your OS. Then I thought that it would be convenient to have a macro time_it which I can use to wrap any statement, block of code or even many statements and measure their timings. I had in mind something like context managers in Python.
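Here is a minimal sketch of such a macro, assuming we only want to print the elapsed time of an expression and pass its value through (the macro name matches the post; the body is my own):

```rust
macro_rules! time_it {
    ($label:expr, $($body:tt)*) => {{
        let start = std::time::Instant::now();
        let result = { $($body)* };
        println!("{}: {:?}", $label, start.elapsed());
        result
    }};
}

fn main() {
    let sum = time_it!("summing", (0..1_000_000u64).sum::<u64>());
    println!("sum = {}", sum);
}
```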

macros

Computers provide clock sources that allow a program to make assumptions about time passed. If all you need is to measure the time passed between some instant a and instant b, the Rust libstd provides you with Instant, a measurement of a monotonically nondecreasing clock (I'm sorry to inform you that guarantees about monotonically increasing clocks are lacking). Instants are an opaque thing and only good for getting you the difference between two of them, resulting in a Duration. Good enough if you can rely on the operating system to not lie to you.

ffi

In this part, we’ll extend our python!{}-macro to be able to seamlessly use Rust variables in the Python code within. We explore a few options, and implement two alternatives.

python

Back in 2016 I had an idea. What if I put Rust code inside Haskell? Not long after that I had another idea. What if I put Haskell code inside Rust? With that the library curryrs (pronounced couriers) was born with the goal of making FFI between the two languages easy and to let the user spend less time mucking around with types and more time actually coding. However, this all broke when cargo changed its behavior regarding rpaths for dynamic linking back some time in ~2017. Like many of my projects, I let it languish because I just couldn't fix it at the time and I was still relatively junior as a coder at my job and was finishing up my CS degree. I attempted it again in 2018 and for multiple reasons failed to get it to work again. curryrs languished for all of 2019 and I had archived the code at that point in time. A lot of the failure just came down to not understanding linking and just not enough knowledge around computers. Then this year in the midst of COVID-19 I got a brain itch.

This is mostly what causes me to try wild ideas that I work on obsessively until I solve them, like when I put the Haskell runtime in Rust. That level of working on the edge, with little to no documentation, cryptic clues, and no certainty that it can even be done, has driven me to do things just to prove I can.

haskell

I show how two domain-specific type systems, information flow control and two-party communication protocols, can be implemented in Rust using type-level programming. I explain how interesting properties of these domains can be verified at compile-time. Finally, I construct a general correspondence between type operators, logic programs, and their encoding in Rust.

I ran into this a little while ago and thought it would be helpful to share a possible solution.

Imagine you have an enum that describes a set of possible branches; for each branch there is an associated type that you want to run through a function - for this example, serialize.

Life is good. Then you realize that you actually needed to format these types in two different ways, one with to_writer and the other with to_writer_pretty. You could make a write and write_pretty function, but that feels dirty. The only thing that would be different in each implementation is the function from serde_json.
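To make the situation concrete, here is a hypothetical version of that "dirty" duplication, with made-up types; the only difference between the two functions is which serde_json writer they call.

```rust
use serde::Serialize;
use std::io::Write;

#[derive(Serialize)]
struct Event { id: u32 }

#[derive(Serialize)]
struct Log { msg: String }

enum Payload {
    Event(Event),
    Log(Log),
}

// Identical match logic, twice, just to swap the serde_json function.
fn write<W: Write>(p: &Payload, w: W) -> serde_json::Result<()> {
    match p {
        Payload::Event(e) => serde_json::to_writer(w, e),
        Payload::Log(l) => serde_json::to_writer(w, l),
    }
}

fn write_pretty<W: Write>(p: &Payload, w: W) -> serde_json::Result<()> {
    match p {
        Payload::Event(e) => serde_json::to_writer_pretty(w, e),
        Payload::Log(l) => serde_json::to_writer_pretty(w, l),
    }
}
```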

traits

This is a sequel to the previous post about Pratt parsing. Here, we’ll study the relationship between top-down operator precedence (Pratt parsing) and the more famous shunting yard algorithm. Spoiler: they are the same algorithm, the difference is implementation style with recursion (Pratt) or a manual stack (Dijkstra).

parsing

"Zero Sized Reference" (ZSR) sounds like an impossible thing given that mem::size_of returns a non-zero value for references to Zero Sized Types (ZST) like &() but ZSRs can in fact be constructed and they can improve both the performance and correctness of your embedded application.

In this post, we'll introduce you to this pattern which is actually used in many embedded crates – though many developers may not have given the actual pattern much attention.

Let's start by motivating the need for zero sized references.
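As a first taste of the general singleton-handle idea (hypothetical peripheral and register address, not necessarily the exact pattern the post develops): a zero sized type can stand in for a memory-mapped peripheral, occupying no memory at runtime while still giving the type system something to enforce exclusive access with.

```rust
use core::ptr;

// A zero sized handle to a made-up memory-mapped peripheral: the value
// occupies no memory, yet owning it is what grants access to the registers.
pub struct Uart0 {
    _private: (), // prevents construction outside this module
}

impl Uart0 {
    const DATA_REGISTER: usize = 0x4000_0000; // hypothetical address

    pub fn write_byte(&mut self, byte: u8) {
        unsafe { ptr::write_volatile(Self::DATA_REGISTER as *mut u8, byte) }
    }
}
```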

We Rustaceans like our code to be CRaP. That is, correct, readable, and performant. The exact phrasing varies; some use interchangeable terms, such as concurrent, clear, rigorous, reusable, productive, parallel, etc. — you get the idea. No matter what you call it, the principle is crucial to building fast, efficient Rust code, and it can help boost your productivity to boot.

In this tutorial, we’ll outline a sequence of steps to help you arrive at CRaP code for your next Rust project.

This post is a “public service announcement” for people working on the guts of rustc.
I wish I had known about this a year ago, so I hope this post can make this feature more widely known.

How to structure concurrent workflows in Rust, via five simple examples.

concurrency

Rustler is a library for writing NIFs in Rust, giving us better safety from undefined behavior, out-of-bounds access, use-after-free bugs, null pointer dereferences, and data races, all of which can have disastrous consequences on the Erlang VM.

There are some brilliant quality of life improvements landing in Rustler 0.22, so let’s learn how to use Rustler to create a NIF module in Elixir!
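As a minimal sketch of what a NIF looks like with the 0.22-style attribute macro (module and function names made up):

```rust
#[rustler::nif]
fn add(a: i64, b: i64) -> i64 {
    a + b
}

rustler::init!("Elixir.Math", [add]);
```

On the Elixir side, a module wired up through the Rustler mix integration then exposes `add/2` backed by this Rust function.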

Before continuing to extend our python!{} macro in part 2, let’s explore some things in more detail first.
First of all: “Why?" Why would anyone even want to embed Python in their Rust code? Is this just a fun experiment with no real purpose, or is it useful in ‘real world’ situations?

Originally, I just wanted to play with Rust macros and see whether this was possible at all. Quite a few of my programming adventures start with “this sounds impossible, let’s do it!"

python

About a year ago, I published a Rust crate called inline-python, which allows you to easily mix some Python into your Rust code using a python!{ .. } macro. In this series, I’ll go through the process of developing this crate from scratch.
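For readers who haven't seen the crate, usage looks roughly like this (a trivial sketch; at the time the macro required a nightly toolchain):

```rust
use inline_python::python;

fn main() {
    python! {
        print("Hello from Python!")
    }
}
```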

python

This release is fairly minor. There are no new major features. We have some new stabilized APIs, some compiler performance improvements, and a small macro-related feature. See the detailed release notes to learn about other changes not covered by this post.

A lot of web development is transforming JSON one way or another. In TypeScript/JavaScript this is straightforward, since JSON is built into the language. But can we also achieve good ergonomics in Haskell and Rust? Dear reader, I am glad you asked! 🙌

The comparisons we will see are not meant to show whether one approach is better than another. Instead, they are intended to be a reference to become familiar with common patterns across multiple languages. Throughout this post we will utilize several tools and libraries.
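On the Rust side, the common pattern is serde plus serde_json; a small sketch with a made-up type:

```rust
use serde::{Deserialize, Serialize};

#[derive(Serialize, Deserialize)]
struct User {
    name: String,
    age: u8,
}

fn main() -> serde_json::Result<()> {
    // JSON -> typed struct -> JSON again.
    let user: User = serde_json::from_str(r#"{"name":"Ada","age":36}"#)?;
    let json = serde_json::to_string(&user)?;
    println!("{}", json);
    Ok(())
}
```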

haskell typescript

The main motivation behind this post comes from the fact that the implementation of the lambda calculus we wrote in the last post has poor readability and is difficult to write with; it also forces the user of our implementation to have Rust installed to use it, instead of being able to run a standalone executable.

One step in this direction is to write a parser for our small language, so we are able to take raw text and transform it into an adequate term. Rust has a lot of parsing libraries, such as nom, pest, combine and lalrpop, each one with its pros and cons. However, I think it is better to just write our own parser from scratch so we can take a look at how parsing actually works. In case you're interested, the parsing pattern we are going to use here is loosely based on the very first predictive parser from the second chapter of the Dragon Book.

parsing

…and how we rewrote the heart of sync with confidence.

Executing a full rewrite of the Dropbox sync engine was pretty daunting. (Read more about our goals and how we made the decision in our previous post here.) Doing so meant taking the engine that powers Dropbox on hundreds of millions of users’ machines and swapping it out mid-flight. To pull this off, we knew we would need a serious investment in automated testing. Our testing strategy gave us confidence that we were on the right track throughout the rewrite, and today it allows us to continue building and shipping new features on a quick release cycle.

First, we’ll discuss the types of testability considerations that went into the design of Nucleus, our new sync engine, and then we’ll get into some of the randomized testing systems that we built on top of our test-friendly architecture.

We are happy to present the results of our fourth annual survey of our Rust community. Before we dig into the analysis, we want to give a big "thank you!" to all of the people who took the time to respond. You are vital to Rust continuing to improve year after year!

A great aspect of the Rust stdlib is that many common operations are provided with a shared idiom. Rather than say having to remember how to write a correct sort function by hand, the stdlib provides a fast one for you.

One of the places with the most such conveniences is the std::iter::Iterator trait. This trait provides a rich set of operations for filtering, combining and merging streams of items. And because iterators are lazy they're incredibly efficient too.

However one downside to iterators is that they don't play well with all input. In particular there are some limits around Result (and Try in general).
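The canonical workaround is collecting into a Result, which stops at the first error; a small example:

```rust
fn main() {
    let inputs = ["1", "2", "nope", "4"];

    // Ok(vec![...]) only if every element parses; otherwise the first error.
    let parsed: Result<Vec<i32>, _> = inputs.iter().map(|s| s.parse::<i32>()).collect();

    assert!(parsed.is_err());
}
```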

64K BASIC runs the classics like Oregon Trail and Super Star Trek. 64K BASIC is compatible with programs from the beginning of personal computing. It is designed to capture and preserve the best parts of the BASIC experience. Getting a programming manual with your new computer hardware is best.

Welcome to my article about Pratt parsing — the monad tutorial of syntactic analysis. The goals of this article are:

- Raising an issue that the so-called left-recursion problem is overstated.
- Complaining about the inadequacy of BNF for representing infix expressions.
- Providing a description and implementation of the Pratt parsing algorithm which sticks to the core and doesn’t introduce a DSL-y abstraction.
- Understanding the algorithm myself for hopefully the last time. I’ve implemented a production-grade Pratt parser once, but I no longer immediately understand that code :-)

This post assumes a fair bit of familiarity with parsing techniques, and, for example, does not explain what a context free grammar is.

parsing

I’m a bit reluctant to write this, because it’s about a controversial and sensitive topic. Yet, after two days of sleeping on it, I hope this won’t cause any more heated discussions and may help some mutual understanding.

This is in part triggered by a blog post by withoutboats, and in part by some Twitter exchanges. So, let me start with this, as a response to withoutboats and everyone in the „Ok-wrapping camp“.

error-handling

In a previous post, I briefly discussed the concept of “effects” and the parallels between them. In an unrelated post since then, Yosh Wuyts writes about the problem of trying to write fallible code inside of an iterator adapter that doesn’t support it. In a previous discussion, the users of the Rust Internals forum hotly discussed the notion of closures which would maintain the so-called “Tennent’s Correspondence Principle” - that is, closures which support breaking to scopes outside of the closure, inside of the function they are in (you can think of this as closures capturing their control flow environment in addition to capturing variables).

Types are a useful tool to make sure software libraries work together. I expect an int, you give me a string, compiler raises an error. In my experience, types, interfaces and encapsulation work best when using black-box APIs: hash maps, regexes, HTTP requests. The programmer controls the top-level program, i.e. the types that orchestrate the individual pieces.

However, the Type Life tends to become harder when dealing with frameworks, or any kind of extensible architecture where the programmer is plugging components into a bigger system they don’t control.

I’ve noticed that the ideas that I post on my blog are getting much more “well rounded”. That is a problem. It means I’m waiting too long to write about things. So I want to post about something that’s a bit more half-baked – it’s an idea that I’ve been kicking around to create a kind of informal “analysis API” for rustc.

I’ve been thinking a lot lately about how often Rust changes. There are some people that assert that Rust stays fairly static these days, and there are some people who say Rust is still changing far too much. In this blog post, I want to make a data driven analysis of this question. First, I will present my hypothesis. Next, my methodology. Then, we’ll talk about the results. Afterward, we’ll discuss possible methodological issues, and possible directions for further research.

Recently withoutboats posted a series of blog posts about some syntax sugar he would like added to Rust.

The gist of it is, as far as I can tell, there should be some way to not have to write Ok(...) for the happy path in functions which return Result. He specifically is advocating for a throws keyword that would go a bit beyond just this basic change, but the problem he’s mainly trying to fix is having to type Ok(...) so often.

I haven’t really been involved with the Rust community that much, especially since I stopped working on Way Cooler. This isn’t from lack of using Rust, I use Rust in my day job after all, but rather from disinterest in getting involved in what seems to be mostly bikeshedding. The .await war was the final straw to make me stop watching the Rust community beyond what comes in the release notes.

Clear the path to stabilizing try blocks and the Try trait, and identify some next steps for function-level try.

For the last few days the Rust community has been abuzz with renewed discussions about try syntax in response to a series of blog posts ([1][2]) by @withoutboats. This has triggered some fairly intense bikeshedding and vitriolic objections and, it seems, some progress!

As of yesterday the lang team is moving forward with a proposal to resolve issues with try blocks and, presumably, begin stabilizing them. It seems that the lang team is arriving at consensus to merge try blocks, whereas historically this has been blocked on disagreements about ok-wrapping. So what changed? Two things: first, it became apparent that try blocks and ok-wrapping could be separated from proposals for try functions and throws syntax. And second, a comment on reddit compellingly compared try and async blocks as similar effects applied to blocks.

v0.2 progress: we still had two major features left on our v0.2 roadmap: garbage collection and struct hot reloading. This month we were able to finish the former and started on foundational work for the latter; with a projected release date of Mun v0.2 by early May.

In this multi-part series, I aim to demystify the concepts behind memory management and take a deeper look at memory management in some of the modern programming languages. I hope the series would give you some insights into what is happening under the hood of these languages in terms of memory management.

In this chapter, we will look at the memory management of the Rust programming language. Rust is a statically typed & compiled systems programming language like C & C++. Rust is memory & thread-safe and does not have a runtime or a garbage collector. I previously also wrote about my first impressions of Rust.

If you have written code in either Rust or Go, you’ll recognize some similarities and differences between them. While there is some overlap between the goals of the two languages, there are plenty of differences between the two. Each language offers benefits depending on the problem you’re trying to solve.

We spoke with Damien Stanton, a software engineer who has gained experience in both languages. Our conversation with Damien delves into various aspects of the two languages: their differences, similarities, and some of the common controversies between the two.

go

In Haskell pattern matching is mostly nice and easy. It might be made more complicated by strictness annotations (i.e. whether to evaluate sub-patterns lazily or strictly) or irrefutability annotations, but it doesn't have any special interactions with ownership, borrowing and mutability, which are the bread and butter of Rust. So, naturally, pattern matching in Rust has additional complexity which I wanted to investigate here. Interestingly enough, some of this material is not covered by the current edition of the Rust Book.
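To make that extra dimension concrete, a small example (values arbitrary) of how a pattern can either move or borrow the matched data:

```rust
fn main() {
    let opt = Some(String::from("hello"));

    // Matching by value moves the String out of `opt`...
    match opt {
        Some(s) => println!("{}", s),
        None => {}
    }
    // ...so `opt` can no longer be used here.

    let opt = Some(String::from("world"));
    // Matching on a reference borrows instead of moving.
    match &opt {
        Some(s) => println!("{}", s),
        None => {}
    }
    println!("still usable: {:?}", opt);
}
```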

MOP is a flexible multi-solver framework intended to be fast, customizable and modular. The project was born because of my desire to unify the amount of redundancy and sensitivity I read in scientific papers. E.g: Articles written with the same solver and problem instance had very distant results due to one or two different parameters or a slight and specific modification of a well-known problem sometimes led to significant adaptations.

A flexible mathematical optimization program is nothing new; there are many better projects out there - MATLAB, Ensmallen and Ceres are some examples - but MOP is written in Rust, a systems programming language that is fast, productive and much safer than similar alternatives.

Let’s begin with two excerpts from the paper Simple Testing Can Prevent Most Critical Failures: An Analysis of Production Failures in Distributed Data-intensive Systems

> almost all (92%) of the catastrophic system failures are the result of incorrect handling of non-fatal errors explicitly signaled in software.

> in 58% of the catastrophic failures, the underlying faults could easily have been detected through simple testing of error handling code.
These stats haunt me. They cause me to frequently ask myself “how can I design my systems to increase the chances that errors will be handled correctly?”

This leads to two goals:

- when an error happens, it is handled correctly
- error handling logic is triggered under test

error-handling

Contributing to rust-lang/rust has been a long-standing desire of mine. It's a central piece of software that has enabled a lot of my work, and being able to contribute directly back to it felt like something I wanted to do.

Because rust-lang/rust is such a big project (100,000+ commits, 2,500+ contributors), I felt I wanted to be super careful about any contributions. So my first attempt to contribute was over a year ago, when I tried to contribute the Num::leading_ones and Num::trailing_ones APIs. I drafted an RFC, then tried submitting an implementation, but life got in the way and I didn't see it through to completion.

In January of this year someone else went ahead and landed a PR for the same feature and that inspired me. So last weekend I had some time to spare, and figured I'd go ahead and try sending in my first patch! It's been 4 days since, and I've now sent through 9 patches. Not all are merged, but I figured it'd be fun to talk about them still!

I’ve had a note in my to-do list to write down some of my own thoughts about error handling in Rust for quite some time and mostly got used to it sitting in there. Nevertheless, a Twitter discussion brought it back to my attention since I wanted to explain them, and honestly, Twitter is just not the right medium for explaining design decisions, with its incredibly limited space and impossible-to-follow threading model.

Anyway, this is a bit of a brain dump that’s not very sorted. It contains how I do error handling in Rust, why I do it that way and what I could wish for. Nevertheless, my general view on error handling is that it is mostly fine ‒ it could use some polishing, but hey, what wouldn’t.

error-handling

I’ve long been a proponent of having some sort of syntax in Rust for writing functions which return results which “ok-wrap” the happy path. This has also always been a feature with very vocal, immediate, and even emotional opposition from many of our most enthusiastic users. I want to write, in one place, why I think this feature would be awesome and make Rust much better.

error-handling

About two and a half years ago I wrote a Rust library called failure, which quickly became one of the most popular error handling libraries in Rust. This week, its current maintainer decided to deprecate it, a decision I strongly support. This week, I also released a new and very different error-handling library, called fehler. I wanted to discuss these two libraries briefly.

error-handling

This Tuesday, the traits working group finished our first sprint of 2020, lasting 6 weeks from February 11th through March 24th. The last sprint was about a year ago, but we decided to resurrect the format in order to help push forward traits-related work in Chalk and rustc.

Every now and then I like to think about topics that don't directly relate to my daily work. One of those topics has been parsers: back in January I wrote about byte-ordered stream parsing and how I think we can improve streaming parsing in Rust with a small set of API additions.

I've recently been thinking of how to improve beyond that, and one topic of interest has been state machines. Much of parsing logic includes sequences of: "If you see character X, read until character Y, then look for character Z." This sequence can best be described as a "transition between states", and state machines are a common technique/structure/pattern to describe these.
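As a minimal sketch of the "transition between states" idea (states and rules made up for illustration):

```rust
enum State {
    Start,
    InWord,
    Done,
}

// One transition of the state machine, driven by the next character.
fn next(state: State, c: char) -> State {
    match (state, c) {
        (State::Start, c) if c.is_alphabetic() => State::InWord,
        (State::InWord, c) if c.is_alphabetic() => State::InWord,
        (State::InWord, _) => State::Done,
        (s, _) => s,
    }
}

fn main() {
    let final_state = "ab ".chars().fold(State::Start, next);
    assert!(matches!(final_state, State::Done));
}
```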

Compared to other languages I’ve learned, Rust has a fair few concepts that can be a bit tricky to get your head around. Borrowing, ownership, and the borrow-checker are common enough to have spawned a range of memes on their own, and I’ve personally spent hours on lifetime issues only to give up and rewrite something using cloning.

Associated types, though not something that’ll have you banging your head on your desk for hours, is something that it took me quite a few tries to finally understand—or at least think I understand.

What really never stuck was how it was different from generics, and why you’d need (or even want) associated types. So what’s a good way to learn and internalize a topic like this? Well, write something down and release it to the internet, of course! They’ll let you know if you’re wrong.
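A compact way to see the difference (made-up traits and types): a generic trait can be implemented many times for one type, once per type parameter, while an associated type forces each implementor to pick exactly one.

```rust
// Generic trait: one type could implement GenericContainer<u32> *and*
// GenericContainer<String>.
trait GenericContainer<T> {
    fn get(&self, i: usize) -> Option<&T>;
}

// Associated type: each implementor chooses exactly one Item.
trait Container {
    type Item;
    fn get(&self, i: usize) -> Option<&Self::Item>;
}

struct VecContainer(Vec<u32>);

impl Container for VecContainer {
    type Item = u32;
    fn get(&self, i: usize) -> Option<&u32> {
        self.0.get(i)
    }
}
```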

I have a lot of time to study things that I didn't have the time to study before. One of them is actually taking the time to read Pierce's Types and Programming Languages book from end to end. I've been using the book as a reference for a few years but I haven't had the time and motivation to actually read each chapter and do the exercises.

Nonetheless, I have an urge to actually field-test all the constructs proposed in the book; I suppose that's my inner physicist telling me that theory is useless without experimentation. So I will be implementing some of the languages and type systems proposed in the book using Rust.

It's never a bad idea to take a stroll through the source code for Rust's standard library. There's a lot to see, including high-performance data structures, meticulously-designed system interfaces, and rock-solid concurrency primitives. Personally, I've learned a lot just from studying (and using) the elegant APIs provided by the Result and Option types.

But, for a Rust developer, the standard library serves another vital purpose: it is chock full of clever ideas for how to manage various ergonomics issues you'll encounter initially when writing Rust. Indeed, it is a particularly valuable source of such techniques because, given the language's young age, the solution to every problem isn't exactly plastered all over Stack Overflow quite yet.

macros

Hey everyone and welcome! Let’s run our Rust app on Android. If you search for this on the web, you will find only Mozilla’s 2017 blog post and some copy-pastes. Some rules have changed (but only a little).

android

The rustc compiler includes over 380,000 lines of source across more than 40 crates to support the lexing through binary linking stages of the Rust compile process. It is daunting for newcomers, and we recognize that a high-level survey of the pipeline is warranted.

In our December update, we announced plans for the publication of the "rustc-dev-guide Overview". Our goal is to describe the integrated components of the compiler in a high-level document for users and potential developers. The Overview will be published at the beginning of the rustc-dev-guide to orient readers to the more detailed documentation of the compiler in subsequent chapters.

This is going to be a quick overview of how I tend to write my application code. It might be a bit Rust-centric, but I apply similar methods in all programming languages I use.

Sometimes, you want to display a value in a specific way: convert a String to a different format, or display an i32 in a particular way. What is the most ergonomic way to do that in Rust?
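One standard answer is implementing std::fmt::Display on a newtype; a minimal sketch with a made-up type:

```rust
use std::fmt;

struct Temperature(f64); // hypothetical newtype

impl fmt::Display for Temperature {
    fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
        write!(f, "{:.1} C", self.0)
    }
}

fn main() {
    println!("{}", Temperature(21.456)); // prints "21.5 C"
}
```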

This blog post is a zoom-in on one subject of the Guide and Tips for crate publishing. It’s about managing your documentation, and more specifically about using two unstable features of rustdoc: external_doc and doc_cfg.

rustdoc

Having the ability to call code written in other languages is increasingly important, as there are many very useful libraries that are getting ported over to WebAssembly. In .NET, the commonly used way for doing interop is P/Invoke and DllImport, and .NET for WebAssembly has support for it in the form of static linking of LLVM Bitcode object files.

In this article, I will walk through how to call some simple C/C++ and Rust code from C# in a WebAssembly app.

csharp

This post describes a simple technique for writing interners in Rust which I haven’t seen documented before.

String interning is a classical optimization when you have to deal with many equal strings. The canonical example would be a compiler: most identifiers in a program are repeated several times.

Interning works by ensuring that there’s only one canonical copy of each distinct string in memory.
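A minimal sketch of the basic structure (the post goes on to refine it; the details here are my own simplification): a map from string to index plus a vector for the reverse lookup.

```rust
use std::collections::HashMap;

#[derive(Default)]
struct Interner {
    map: HashMap<String, u32>,
    vec: Vec<String>,
}

impl Interner {
    /// Return the index for `name`, inserting it if it hasn't been seen yet.
    fn intern(&mut self, name: &str) -> u32 {
        if let Some(&idx) = self.map.get(name) {
            return idx;
        }
        let idx = self.vec.len() as u32;
        self.map.insert(name.to_owned(), idx);
        self.vec.push(name.to_owned());
        idx
    }

    fn lookup(&self, idx: u32) -> &str {
        &self.vec[idx as usize]
    }
}
```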

My (mis)adventures with async, tokio, and async-std - wherein I fail with purpose, learning a ton in the process.

async

I like it. I hope it's going to be big.

It's been just over two years since I started learning Rust. Since then, I've used it heavily at my day job, including work in the Firecracker code base, and a number of other projects. Rust is a great fit for the systems-level work I've been doing over the last few years: often performance- and density-sensitive, always security-sensitive. I find the type system, object life cycle, and threading model both well-suited to this kind of work and fairly intuitive. Like most people, I still fight with the compiler from time-to-time, but we mostly get on now.

There has been a longstanding miscompilation in Rust: programs that do not make forward progress. Note that the previous link is to the C++ definition; Rust is not C++, but currently LLVM optimizes all LLVM IR with the assumption that a lack of forward progress is undefined behavior.

Note also that Rust does not define a lack of forward progress as undefined behavior, while C++ does. It is particularly common to encounter the miscompilation "intentionally" when writing panic handlers and other such code with a body of loop {}. Some users also report that they've unintentionally hit this bug in recursive code which accidentally lacks a base case.
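The pattern in question, in its simplest form (function name arbitrary): a side-effect-free infinite loop, which is valid Rust but exactly the kind of code LLVM's forward-progress assumption could miscompile at the time.

```rust
// Valid Rust: a diverging function whose body never makes "progress"
// visible to the outside world.
fn hang() -> ! {
    loop {}
}
```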

So, let’s talk about pointers.

Specifically, let’s talk about array pointers. In C, array pointers are the same as “normal” pointers, and have no size or other metadata attached. If you want to know the size of an array in C, you have to implement that yourself. For strings, this is traditionally implemented by ending the string with a “sentinel value”, so when iterating over the string you can continuously check for this value and exit. For other arrays, this is usually implemented by supplying some metadata as an additional parameter to the function or as a field in a struct. For a safe language like Rust, though, this simply doesn’t fly.
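Rust's answer, in miniature: a slice reference is a "fat" pointer that carries the length alongside the data pointer, so no sentinel or extra length parameter is needed.

```rust
fn main() {
    let bytes = [1u8, 2, 3, 4];
    let slice: &[u8] = &bytes;

    // The length travels with the reference itself.
    assert_eq!(slice.len(), 4);
    // A slice reference is two words: data pointer + length.
    assert_eq!(
        std::mem::size_of::<&[u8]>(),
        2 * std::mem::size_of::<usize>()
    );
}
```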

Recently, docs.rs added a feature that allows crates to opt-out of building on all targets. If you don't need to build on all targets, you can enable this feature to reduce your build times.

We wrote Nucleus in Rust! Rust has been a force multiplier for our team, and betting on Rust was one of the best decisions we made. More than performance, its ergonomics and focus on correctness has helped us tame sync’s complexity. We can encode complex invariants about our system in the type system and have the compiler check them for us.

Rust’s generics give us a whole lot of flexibility. A method that takes a trait bound argument does not need to care about the actual type of the argument it is called with. For example:

fn parse_read(r: impl Read) -> MyParseableType {
    todo!();
}

However, this will monomorphize the method: for each concrete Read type it is called with, a separate copy of the function will be generated, potentially ballooning the code size and increasing compile time. But Rust gives us dynamic dispatch, too, e.g.

fn parse_read(r: &mut dyn Read) -> MyParseableType {
    todo!();
}

This makes each call on r dynamic. Only one version of this method needs to be created, but whenever r’s methods are called, the right implementation must be looked up at runtime. This is done by having one static table per type that tells us where to find the respective methods. This so-called vtable is referenced whenever we need dynamic dispatch.

Now let’s say we want to pass our Read instance through to the method. For our example, we want to use Stdin or some File.
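
A hypothetical sketch of where this is headed (the function and file name are made up, not the post's code): with a trait object, one non-generic function can accept either Stdin or a File.

use std::fs::File;
use std::io::{stdin, Read};

// One compiled copy serves every Read implementor passed in.
fn count_bytes(r: &mut dyn Read) -> std::io::Result<usize> {
    let mut buf = Vec::new();
    r.read_to_end(&mut buf)?;
    Ok(buf.len())
}

fn main() -> std::io::Result<()> {
    let use_stdin = std::env::args().any(|a| a == "--stdin");
    // Both branches coerce to &mut dyn Read.
    let n = if use_stdin {
        count_bytes(&mut stdin())?
    } else {
        count_bytes(&mut File::open("input.txt")?)?
    };
    println!("{} bytes", n);
    Ok(())
}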

I recently hit a limitation of Rust when working with trait objects. I had a function that returned a trait object and I needed a trait object for one of its supertraits. To my surprise, the code did not compile.
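
As a hedged reconstruction of the kind of code that triggers this surprise (not the post's exact example): at the time, coercing a sub-trait object to one of its supertrait objects was simply not supported.

use std::fmt::Debug;

// Debug is a supertrait of Animal.
trait Animal: Debug {
    fn name(&self) -> &str;
}

#[derive(Debug)]
struct Dog;

impl Animal for Dog {
    fn name(&self) -> &str { "dog" }
}

fn describe(animal: &dyn Animal) {
    // This upcast did NOT compile on the Rust of the post's era:
    // let debuggable: &dyn Debug = animal;

    // Calling supertrait functionality through the sub-trait object works fine, though:
    println!("{:?} is named {}", animal, animal.name());
}

fn main() {
    describe(&Dog);
}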

I have received (a lot of) requests lately for a guide on how to write documentation in Rust. I'm quite happy that people are finally getting interested in this area, so let's not let it rest: let's go!

rustdoc

Hello everyone! I’m happy to be posting a transcript of my async interview with withoutboats. This particular interview took place way back on January 14th, but the intervening months have been a bit crazy and I didn’t get around to writing it up till now.

async

The highlights of Rust 1.42.0 include: more useful panic messages when unwrapping, subslice patterns, the deprecation of Error::description, and more. See the detailed release notes to learn about other changes not covered by this post.

We are very proud to announce that Mun has been awarded a $15K grant as part of the MOSS Mission Partners track. It’s an honour to have an established open-source company like Mozilla put their faith in our mission and capabilities. With Mozilla’s generous support, we are able to boost the Core Team’s efforts to finish hot reloadable data structures during the coming months.

Our goal is to support Mun code with both stack-allocated and heap-allocated data structures, which can be hot reloaded from Rust, C, and C++. A garbage collector will be used to manage heap-allocated structs.

Like many developers, I have been interested in Rust for quite some time. Not only because it appears in so many headlines on Hacker News, or because of the novel approach the language takes to safety and performance, but also because people seem to talk about it with a particular sense of love and admiration. On top of that, Rust is of particular interest to me because it shares some of the same goals and features of my favorite go-to language: Swift. Since I've recently taken the time to try out Rust in some small personal projects, I wanted to take a little time to document my impressions of the language, especially in how it compares to Swift.

swift

I have been working on stuff that facilitates .NET development using Rust. My progress has reached a point where it looks like this could maybe become a real thing, but I'm not sure what to do with it. So I have decided to let you listen in while I ramble a bit.

dotnet

Let’s talk initialization of complex structures in Rust. There’s a few popular ways to go about this, some of which include the pub fn new() convention and the builder pattern. In this blog post I’m going to compare these, and also introduce a new pattern which I’m going to call the Init Struct Pattern.

Recently, I got yet another nice email from a “talent sourcing specialist” asking me if I’d like to work for their company. This one stood out from the usual blockchain recruiting spam by actually looking at this blog. And they told me their expectations were lowered by the front matter stating that I’m currently learning Rust (though not too badly, because they actually looked at the commit history and found out it was written in 2015).

I arrive a bit late considering that #[cfg(doctest)] has been stable since Rust 1.40, but I think it's important for people to know about this feature and how to use it.

First things first, what is this feature about and when is it set? The answer: when running rustdoc --test (or cargo test on the doc subpart).
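
As a hedged illustration of the pattern (not the post's code): items gated on #[cfg(doctest)] are compiled only while rustdoc collects doc tests, so a crate can carry extra doc-test-only examples without affecting normal builds.

// In a library crate's lib.rs (illustrative sketch).
#[cfg(doctest)]
mod doctest_only {
    /// This example is compiled and run only by `cargo test --doc` / `rustdoc --test`:
    ///
    /// ```
    /// assert_eq!(2 + 2, 4);
    /// ```
    pub struct ExtraDocTests;
}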

rustdoc

Hey there! Contributing to open source software is awesome. I’m going to talk about the story behind my first contribution, what I did and why I think you should contribute to a project of your choice too, feel free to skip to your section of interest!

Hello, dear reader! Today, I am excited to announce caniuse.rs, a website I created which, in contrast to turbo.fish, has a practical use: it allows you to quickly find out which version of Rust stabilized a certain feature. It's like 'site:blog.rust-lang.org' in your search engine, but better!

Newsbeuter has been mature and stable for a long time, and Newsboat inherited that. With that stability also comes a long “tail of support”: our latest release can be built with an old C++ compiler and linked to old libraries, all orchestrated by any version of GNU Make. And it’s not just a theoretical brag: with 111 Newsboat packages known to Repology, you can bet your bottom dollar that some of them weren’t built with the latest GCC 9, weren’t linked against the latest cURL, and didn’t have docs generated by an up-to-date Asciidoc.

Unfortunately, we can’t have that sort of support with Rust: the core language and library alone reached its 1.0 milestone a year after the release of the oldest C++ compiler we support. The rest of the ecosystem is necessarily even less mature.

And yet, I tried to provide that level of support.

dependencies

One surprising feature of type inference in languages like Rust is defining functions with generic return types. The idea is that by specifying at some later point in the code which type you want your function to return, the compiler can go back and fill in the blanks.
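
A tiny illustration of the idea (mine, not the post's): str::parse is generic over its return type, and the annotation on the binding tells the compiler what to fill in.

fn main() {
    // The same call, two different inferred return types.
    let as_int: i32 = "42".parse().unwrap();
    let as_float: f64 = "42".parse().unwrap();
    println!("{} {}", as_int, as_float);
}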

generics

Much of writing software revolves around checking if some data has some shape ("pattern"), extracting information from it, and then reacting if there was a match. To facilitate this, many modern languages, Rust included, support what is known as "pattern matching".

Pattern matching in Rust works by checking if a place in memory (the "data") matches a certain pattern. In this post, we will look at some recent improvements to patterns soon available in stable Rust, as well as some more already available in nightly.

When Stjepan and I were designing async-std, one of the core design principles was to make it resemble the standard library as closely as possible. This meant the same names, APIs, and traits as the standard library so that the same patterns of sync Rust could be used in async Rust as well.

One of those patterns is perhaps lesser known but integral to std’s functioning: impl Read/Write for &Type. What this means is that if you have a reference to an IO type, such as File or TcpStream, you’re still able to call Read and Write methods thanks to some interior mutability tricks.

The implication of this is also that if you want to share a std::fs::File between multiple threads you don’t need to use an expensive Arc<Mutex<File>> because an Arc<File> suffices. And because of how we designed async-std, this works for async_std::fs::File as well!
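
A minimal sketch of that pattern for std (illustrative, not from the post): because Write is implemented for &File, an Arc<File> can be written to from several threads without a Mutex.

use std::fs::File;
use std::io::Write;
use std::sync::Arc;
use std::thread;

fn main() -> std::io::Result<()> {
    let file = Arc::new(File::create("shared.log")?);
    let handles: Vec<_> = (0..4)
        .map(|i| {
            let file = Arc::clone(&file);
            thread::spawn(move || {
                // The Write impl is on &File, so a shared reference is enough.
                let mut handle = &*file;
                writeln!(handle, "hello from thread {}", i).unwrap();
            })
        })
        .collect();
    for handle in handles {
        handle.join().unwrap();
    }
    Ok(())
}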

The FFI-unwind project group, announced in this RFC, is working to extend the language to support unwinding that crosses FFI boundaries.

We have reached our first technical decision point, on a question we have been discussing internally for quite a while. This blog post lays out the arguments on each side of the issue and invites the Rust community to join us at an upcoming meeting to help finalize our decision, which will be formalized and published as our first language-change RFC. This RFC will propose an "MVP" specification for well-defined cross-language unwinding.

ffi

I’ve been meaning to post about this for a while. I plan to start a series on different patterns for Rust / C++ FFI that are being used in Gecko.

I don’t know how many posts this would take, and given I’m a not-very-consistent writer, I think I’m going to start with the most complex one to get it done.

This is the pattern that Firefox’s style system uses.

F$&k the borrow checker, or “How I learned to stop worrying and love the compiler”.

Even though most of my day-to-day work is in Python, I still consider myself a Go programmer. I co-wrote the SSH server powering Bitbucket.org in Go and maintain a number of nontrivial projects, such as an easy to use SSH git server, an IRC library, and an IRC bot.

After reading Early Impressions of Go From a Rust Programmer, I started thinking about my experiences and how it’s been for me learning in the other direction and wanted to see what a similar article would look like.

Rust 1.41.1 addresses two critical regressions introduced in Rust 1.41.0: a soundness hole related to static lifetimes, and a miscompilation causing segfaults. These regressions do not affect earlier releases of Rust, and we recommend users of Rust 1.41.0 to upgrade as soon as possible. Another issue related to interactions between 'static and Copy implementations, dating back to Rust 1.0, was also addressed by this release.

Over the last year, the Self-Profile Working Group has been building tools to profile rustc because we often hear requests to know where compilation time is being spent. This is useful when optimizing the compiler, one of the Compiler Team's ongoing efforts to improve compile times, but it's also useful to users who want to refactor their crate so that it will compile faster. We've been working on a new feature that will help with that and this blog post gives a preview of how it works. Be warned, though: it is still experimental and we expect the interface to change over time. The Rust Unstable Book has documentation for this feature and we'll keep that up to date so you can always find the latest instructions there.

In this post, we'll look at the tools currently available and use them to profile rustc while it compiles an example crate.

rustc

The story of the implementation of the BroadcastChannel Web API in Servo, a large multi-process and multi-threaded Rust codebase.

servo

The other day I needed to do a fairly routine graphical operation, to “simplify” a polyline with many points into a simpler polyline which has roughly the same shape plus or minus some tolerance factor.

My actual use case was in sending linear movements to a CNC machine. Drawings are defined using floating point numbers and can be “accurate” to about 7-15 decimal places (depending on if you use floats or doubles) but when you take the machine’s mechanical tolerances and material effects into account the final cut is only really accurate to about 1 decimal place (0.1 mm). If I were to simplify the path with a tolerance of, say, 0.05 mm I could massively reduce the number of points sent to the machine (which reduces the amount of data sent, buffer sizes, communications overhead, etc.) with minimal effect on the accuracy.

Yesterday Mark-Simulacrum announced an experimental new notification-tracking mechanism implemented in Triagebot, the bot run by the Rust infrastructure team to perform issue assignment and labelling across the Rust project's GitHub repositories.

I have been working with Mark on designing and iterating on the notification system and wanted to write down my perspective on several ways that this work is important for Rust's growth.

One of the first things I learned when programming professionally is that global variables are bad. We all take it for granted that it’s bad practice to write code that relies heavily on global state, but the other day I was working with a 3rd party native library, and it reminded me why these best practices come about.

There’s a question that always comes up when people pick up the Rust programming language: why are there two string types? Why is there String, and &str?

My Declarative Memory Management article answers the question partially, but there is a lot more to say about it, so let’s run a few experiments and see if we can conjure up a thorough defense of Rust’s approach over, say, C’s.

Why Rust

Or: A Trip Report from my Satori with Rust and Functional Programming

Software is a very odd field to work in. It is simultaneously an abstract and physical one. You build systems that can deal with an unfathomable amount of input and output at the same time. As a job, I peer into the madness of an unthinking automaton and give order to the inherent chaos. I then emit incantations to describe what this unthinking automaton should do in my stead. I cannot possibly track the relations between a hundred thousand transactions going on in real time, much less file them appropriately so they can be summoned back should the need arise.

Rust is my favorite programming language (other languages I enjoy are Kotlin and Python). In this post I want to explain why I, somewhat irrationally, find this language so compelling. The post does not try to explain why Rust is the most loved language according to StackOverflow survey :-)

I recently came across an interesting FFI situation, which I wanted to understand a bit better. Now I’d like to share what I found in the hopes of making everyone a better programmer.

This post is about undefined behavior, or UB. Specifically UB in Rust at the language level. This basically means we’re going to do something that the compiler is free to assume we never did. That sounds bad, right? How can the compiler just assume we didn’t write the code that we wrote? Well, it is as bad as it sounds, even if it doesn’t always cause bad things to happen.

undefined-behaviour

"Audit" is probably a strong word. Also, take this with a grain of salt. I am by no means an expert with task scheduling. I am, however, interested in using an async RwLock in a production environment.

What I was really interested in is answering the question: If I have a ton of readers acquiring and releasing the lock at all times, do the writers get a chance to acquire the lock, too?

async

Hello! For the latest async interview, I spoke with Eliza Weisman (hawkw, mycoliza on twitter). Eliza first came to my attention as the author of the tracing crate, which is a nifty crate for doing application level tracing. However, she is also a core maintainer of tokio, and she works at Buoyant on the linkerd system. linkerd is one of a small set of large applications that were built using 0.1 futures – i.e., before async-await. This range of experience gives Eliza an interesting “overview” perspective on async-await and Rust more generally.

async

This article is not comprehensive on the Rust async topic, but it could be an easy overview if you have no idea about async programming in Rust or in general. If you are wondering about the new async/await keywords and Futures, and are intrigued by what Tokio is useful for, then you should feel less clueless by the end.

Rust async is the new hot thing in Rust’s land. It has been hailed as a big milestone for Rust, especially for people developing highly performant networking applications. However, the long development time, the different incompatible versions, and the various libraries may have made it not very straightforward to grasp. There is a lot going on and it’s not obvious where to start.

Let’s start from the beginning.

async

I’ve been wanting to do this for a few weeks, ever since I typed apt install xorg-dev on my Debian system and it installed literally 30 megabytes of header files. A perennial complaint among various sections of the Rust community is “all these darn programs that use 300 crates to do anything”. These complaints are valid, but my argument is that they’re also not NEW, and they’re certainly not unique to Rust. Rather the difference in dependencies between something written in Rust vs. “traditional” C or C++ is that on Unix systems, all these dependencies are still there, just handled by the system instead of the compiler directly. The distro maintainers do more of the work, and our build systems assume the presence of various system libraries in system places. The only thing new about it is that programmers are exposed to more of the costs of it up-front.

It's been a while since I’ve posted about debugging Rust; we’ve come a long way since 2017 and I wanted to do an update on getting set up for Rust debugging.

debugger

Have you heard of GraalVM? If you haven't you should check it out. It is an exciting technology, you know the kind that gets a polyglot developer going.

GraalVM is one of a kind. It is a polyglot VM developed at Oracle, and apart from its polyglot capabilities it has also proven to be quite performant and to have a smaller memory footprint. It has support for building native images, and some modern Java microservice frameworks like Micronaut and Quarkus support GraalVM as it provides faster startup and a smaller footprint, which is ideal in microservice architectures.

So what are the capabilities of GraalVM? Let us take a quick look.

java

There’s been a lot of excitement in the Rust community about the new async and await keywords that dropped on the stable channel last year, but until recently there wasn’t much in the way of documentation or library support. The futures and tokio developers have been working like mad to migrate their own crates over, finally pulling off their own releases in November of 2019. Many library crates using futures have followed suit, and the ecosystem is finally starting to settle into the new way of doing things. This really is a completely different way to express asynchronous code, which means in many cases code must be rewritten or tossed out. So there’s an obvious question for developers: is migrating all your existing code worth the trouble?

The answer is a resounding yes.

async

This is part 2 in our PHP FFI + Rust blog series. Previously we took a look at how the FFI feature can be enabled and used in PHP 7.4, and now we will jump over to Rust and see how we can create C-ABI libraries ourselves, which can then be loaded using PHP FFI.
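
As a rough sketch of the shape of such a library (not the series' actual code), the Rust side exports plain C-ABI functions and is built with crate-type = ["cdylib"] in Cargo.toml, so PHP's FFI can load the resulting shared object:

// src/lib.rs of a crate built as a cdylib.
// #[no_mangle] keeps the symbol name, and extern "C" gives it the C calling convention.
#[no_mangle]
pub extern "C" fn add_numbers(a: i32, b: i32) -> i32 {
    a + b
}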

ffi php

Rust Analyzer is an experimental IDE/latency-oriented Rust compiler. This is an emerging endeavour within the Rust ecosystem, which is aimed at improving the IDE experience with Rust.

Compiler performance has always been a major focus of Rust tooling development and compile times have been steadily improving across releases. However, as Igor Matuszewski explained in his Rust Belt Rust Conference talk, Rust IDE support is an area of active work:

This work is being carried through under the guidance of the RLS 2.0 working group, which includes among its main components Rust Analyzer. InfoQ has taken the chance to speak with Aleksey Kladov, main contributor to Rust Analyzer, and Rust Core Team member Steve Klabnik to learn more about it.

rust-analyzer

Often when writing automated tests for parts of distributed systems such as a microservice, one runs into the problem of the service under test calling external web services.

In this post, we’ll look at the mockito library, which provides a way for mocking such requests in tests, making writing those tests a lot more convenient.

The library is being actively developed. Currently, it’s possible to match requests on method, path, query, headers and the body, with the possibility to combine these matchers as well.

testing

While upgrading dependencies (basically deleting Cargo.lock) in a big Rust project I hit an issue. A new commit in serde_json caused an upstream failure in another library, jmespath, and possibly more crates. Because the visibility of some structs and enums changed, this could be interpreted as a breaking change, depending on what you consider public.

While jmespath was doing the wrong thing here, using undocumented API, it got me thinking about what can library maintainers and library consumers do to ensure semver compatibility, and what tools are out there to assist both groups.

Rust prides itself on being capable of creating some of the fastest executables, while at the same time supporting high-level abstractions. In most cases, the abstractions are resolved at compile time, leading to ‘zero-cost abstractions’ without any runtime overhead. In most cases function calls are implemented using fast static dispatching. But what about the other cases? This is where dynamic dispatching comes into play. In this post I am going to present you with a thorough introduction to this concept, leading far deeper into the rabbit hole than initially anticipated.

A Trait in the Rust programming language enables what today’s coders commonly call “duck-typing” (walks like a duck and quacks like a duck).

In Rust, type refers to concrete types — the type of a value; whereas, a Trait refers to an abstract or generic type. Here the English word type lacks the specificity we need to describe these concepts, so we need adjectives to differentiate.

The goal of C2Rust is to translate any standard C code into Rust. To translate bitfields from C, we therefore require drop-in compatible bitfield support. Since the Rust ecosystem revolves around sharing code in libraries (called crates), we decided to look at the few most promising crates to see what was available. Here is an overview of the 3rd party crates we looked at and how they stacked up to our requirements.

Luckily, getting started with testing Rust code is reasonably straightforward and no external libraries are needed. cargo test will run all test code in a Rust project. Test code is identified with the code attributes #[cfg(test)] and #[test].
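
A minimal example of those attributes in action (illustrative, not from the post):

pub fn add(a: i32, b: i32) -> i32 {
    a + b
}

// Compiled only when running tests, so it adds nothing to release builds.
#[cfg(test)]
mod tests {
    use super::*;

    #[test]
    fn adds_two_numbers() {
        assert_eq!(add(2, 2), 4);
    }
}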

testing

At PingCAP, my colleagues use Rust to write TiKV, the storage node of TiDB, our distributed database. They do this because they want this most important node in the system to be fast and reliable by construction, at least to the greatest extent reasonable.

It was mostly a great decision, and most people internally are mostly happy about it.

But many complain about how long it takes to build. For some, a full rebuild might take 15 minutes in development mode, and 30 minutes in release mode. To developers of large systems projects, this might not sound so bad, but it's much slower than what many developers expect out of modern programming environments. TiKV is a relatively large Rust codebase, with 2 million lines of Rust. In comparison, Rust itself contains over 3 million lines of Rust, and Servo contains 2.7 million (see full line counts here).

The first entry in this series is just a story about the history of Rust with respect to compilation time.

Now that we’ve built the block_on() function, it’s time to take one step further and turn it into a real executor. We want our executor to run not just one future at a time but many futures concurrently!

This blog post is inspired by juliex, a minimal executor and one of the first that pioneered async/await support in Rust. Today we’re writing a more modern and cleaner version of juliex from scratch.

The goal for our executor is to have only simple and completely safe code while delivering performance that rivals existing best-in-class executors.

async

The highlights of Rust 1.41.0 include relaxed restrictions for trait implementations, improvements to cargo install, a more git-friendly Cargo.lock, and new FFI-related guarantees for Box<T>. See the detailed release notes to learn about other changes not covered by this post.

release

I’d like to announce two crates I’ve been working to integrate Rust components into LinuxCNC. You can find them at linuxcnc-hal (high-level interface) and linuxcnc-hal-sys (low-level interface). The rest of this post is a getting started tutorial, so follow along if you have a cool idea for a custom bit of CNC hardware and an itch to write the interface in Rust!

This is a fairly basic Rust syntax issue that I’ve run into several times. Based on unknowable runtime conditions, I will return one of several different return types from the same function. Also, this return type must use methods from two traits. How can I express this in Rust?
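
One common answer (a hedged sketch, not necessarily the post's conclusion) is to combine the two traits into a new trait with a blanket impl, and return a boxed trait object of that combined trait:

use std::fmt::Debug;

// A single object-safe trait that bundles both requirements.
trait DebugIterator: Iterator<Item = u32> + Debug {}
impl<T: Iterator<Item = u32> + Debug> DebugIterator for T {}

fn numbers(reversed: bool) -> Box<dyn DebugIterator> {
    // Two different concrete types, one return type.
    if reversed {
        Box::new((0u32..5).rev())
    } else {
        Box::new(0u32..5)
    }
}

fn main() {
    println!("{:?}", numbers(true).collect::<Vec<_>>());
}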

traits

If you’ve ever wondered how block_on from the futures crate works, today we are going to write our own version of the function.

Inspiration for this blog post comes from two crates, wakeful and extreme. wakeful has devised a simple way to create a Waker from a function, while extreme is an extremely terse implementation of block_on().

Our implementation will have slightly different goals from extreme. Rather than going for zero dependencies and minimal number of lines of code, we’ll go for a safe and efficient but still pretty simple implementation.
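
For orientation, here is a minimal safe block_on sketch of my own (not the post's implementation; it leans on the std::task::Wake trait, which was stabilized later than the crates discussed): park the thread until the future's waker unparks it.

use std::future::Future;
use std::sync::Arc;
use std::task::{Context, Poll, Wake, Waker};
use std::thread::{self, Thread};

// Waking simply unparks the thread that is blocked in block_on.
struct ThreadWaker(Thread);

impl Wake for ThreadWaker {
    fn wake(self: Arc<Self>) {
        self.0.unpark();
    }
}

fn block_on<F: Future>(future: F) -> F::Output {
    // Pin the future on the heap so we can poll it repeatedly.
    let mut future = Box::pin(future);
    let waker = Waker::from(Arc::new(ThreadWaker(thread::current())));
    let mut cx = Context::from_waker(&waker);
    loop {
        match future.as_mut().poll(&mut cx) {
            Poll::Ready(value) => return value,
            Poll::Pending => thread::park(), // sleep until the waker unparks us
        }
    }
}

fn main() {
    // Trivially-ready future, just to show the call shape.
    assert_eq!(block_on(async { 1 + 2 }), 3);
}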

async

As of stable Rust 1.39.0, it is possible to implement a very basic and safe coroutine library using Rust's async/await support, and in under 100 lines of code. The implementation depends solely on std and is stack-less (meaning, not depending on a separate CPU architecture stack).

A very basic coroutine library contains only an event-less 'yield' primitive, which stops execution of the current coroutine so that other coroutines can run. This is the kind of library I chose to demonstrate in this post to provide the most concise example.

Imagine if you had a text file containing thousands of URLs:

$ cat urls.txt
https://example.com/1.html
https://example.com/2.html
https://example.com/3.html

…and you needed to download all of those HTML pages efficiently. How would you do it? Maybe a shell script using xargs and curl? Maybe a simple Golang program? Go’s powerful concurrency features would work well for this.

Instead, I decided to try to use Rust. I’ve read a lot about safe concurrency in Rust, but I’ve never tried it. I also wanted to learn what Rust’s new “async/await” feature was all about. This seemed like the perfect task for asynchronous Rust code.
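
A hedged sketch of the general shape such a program can take (not the post's code), assuming the tokio, reqwest, and futures crates as dependencies:

use futures::stream::{self, StreamExt};

#[tokio::main]
async fn main() {
    let urls = vec!["https://example.com/1.html", "https://example.com/2.html"];

    // Turn the list into a stream of download futures and run up to 8 at once.
    let mut results = stream::iter(urls)
        .map(|url| async move {
            let body = reqwest::get(url).await?.text().await?;
            Ok::<_, reqwest::Error>((url, body.len()))
        })
        .buffer_unordered(8);

    while let Some(result) = results.next().await {
        match result {
            Ok((url, len)) => println!("{}: {} bytes", url, len),
            Err(err) => eprintln!("download failed: {}", err),
        }
    }
}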

While on vacation in Japan I dug into streaming parsing, for fun. It's something that piqued my interest ever since I saw Mafintosh's csv-parser package back in 2014.

This led to the creation of the omnom library, and as part of it I came up with an ergonomic way to parse and write numbers with a specified endianness to and from streams.

Hello! For the latest async interview, I spoke with Steven Fackler (sfackler). sfackler has been involved in Rust for a long time and is a member of the Rust libs team. He is also the author of a lot of crates, most notably tokio-postgres.

async

Rust has been Stack Overflow’s most loved language for four years in a row, indicating that many of those who have had the opportunity to use Rust have fallen in love with it. However, the roughly 97% of survey respondents who haven’t used Rust may wonder, “What’s the deal with Rust?”

The short answer is that Rust solves pain points present in many other languages, providing a solid step forward with a limited number of downsides.

I’ll show a sample of what Rust offers to users of other programming languages and what the current ecosystem looks like. It’s not all roses in Rust-land, so I talk about the downsides, too.

In the last part, we've finally parsed some IPv4 packets. We even found a way to filter only IPv4 packets that contain ICMP packets.

There's one thing we haven't done though, and that's verify their checksum. Folks could be sending us invalid IPv4 packets and we'd be parsing them like a fool!

This series is getting quite long, so let's jump right into it.
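
For reference, the IPv4 header checksum is the RFC 1071 Internet checksum; a hedged sketch of it (mine, not the series' code) looks like this. Verifying a received header means running it over the header with the checksum field left in place and expecting 0.

fn ipv4_checksum(header: &[u8]) -> u16 {
    // Sum the header as big-endian 16-bit words; a trailing odd byte is zero-padded.
    let mut sum: u32 = 0;
    for chunk in header.chunks(2) {
        let word = u16::from_be_bytes([chunk[0], *chunk.get(1).unwrap_or(&0)]);
        sum += u32::from(word);
    }
    // Fold the carries back into the low 16 bits, then take the one's complement.
    while sum > 0xffff {
        sum = (sum & 0xffff) + (sum >> 16);
    }
    !(sum as u16)
}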

About a week ago, there was an item in This Week in Rust (issue 320) that caught my eye under the Tracking Issues & PRs heading:

[disposition: merge] Stabilize #![feature(slice_patterns)] in 1.42.0.

Aww, yeah! I’ve been waiting for this for literal years, so you better believe that was a good day! I mentioned this in my Rust 2020 post as one of the things I’m the most excited to see this year, so now that it’s getting stabilized soon (2020-03-12), let’s make sure we’re prepared!
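
A quick illustration of the feature (mine, not the post's code):

fn describe(values: &[i32]) -> String {
    match values {
        [] => "empty".to_string(),
        [only] => format!("just {}", only),
        // `..` matches any number of elements in the middle.
        [first, .., last] => format!("starts with {}, ends with {}", first, last),
    }
}

fn main() {
    assert_eq!(describe(&[1, 2, 3, 4]), "starts with 1, ends with 4");
}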

Hello and welcome to Part 11 of this series, wherein we finally use some of the code I prototyped way back when I was planning this series.

For us to filter things more accurately, we’ll need to parse IPv4 packets.

When implementing web services, the need to run recurring jobs often arises at some point (e.g. for cleanup / maintenance, or state-sharing purposes). Depending on the setup, these jobs might either run on dedicated services, or on the web services themselves.

In this post, we’ll look at a basic implementation of a Job Runner, which synchronizes job runs between instances of the services (so we don’t run the same job more than once at the same time) and which executes the jobs asynchronously using tokio, which enables us to use async/await within the jobs.

Librsvg exports two public APIs: the C API that is in turn available to other languages through GObject Introspection, and the Rust API.

You could call this a use of the facade pattern on top of the rsvg_internals crate. That crate is the actual implementation of librsvg, and exports an interface with many knobs that are not exposed from the public APIs. The knobs are to allow for the variations in each of those APIs.

This post is about some interesting things that have come up during the creation/separation of those public APIs, and the implications of having an internals library that implements both.

ffi

So, after a bit more than three years, and having reached the arbitrary number 100 in commit count, I think it’s time to take some time to survey the experience so far.

Before we move on to parsing more of our raw packets, I want to take some time to improve our error handling strategy. Currently, the ersatz codebase contains a mix of Result<T, E>, and some methods that panic, like unwrap() and expect().

We also have a custom Error enum that lets us return rawsock errors, IO errors, or Win32 errors. First of all, I want to address something: When is it okay to panic?
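
For context, an error enum of that shape usually looks roughly like this (a hedged sketch with made-up variant names, not the series' actual type); the From impls are what let the ? operator convert underlying errors automatically:

use std::fmt;

#[derive(Debug)]
enum Error {
    Io(std::io::Error),
    Parse(String),
}

impl fmt::Display for Error {
    fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
        match self {
            Error::Io(e) => write!(f, "I/O error: {}", e),
            Error::Parse(msg) => write!(f, "parse error: {}", msg),
        }
    }
}

// `?` can now turn an io::Error into our Error without an explicit map_err.
impl From<std::io::Error> for Error {
    fn from(e: std::io::Error) -> Self {
        Error::Io(e)
    }
}

fn read_config() -> Result<String, Error> {
    let text = std::fs::read_to_string("config.toml")?;
    Ok(text)
}

fn main() {
    match read_config() {
        Ok(text) => println!("{} bytes of config", text.len()),
        Err(e) => eprintln!("{}", e),
    }
}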

error-handling

Hello! For the latest async interview, I spoke with Florian Gilcher (skade). Florian is involved in the async-std project, but he’s also one of the founders of Ferrous Systems, a Rust consulting firm that also does a lot of trainings. In that capacity, he’s been teaching people to use async Rust now since Rust’s 1.0 release.

async

I want to write a library in Rust that can be called from C and just as easily called from Rust code. The tooling makes it pretty easy, but I had to look in a few places to figure out how it is supposed to work and get tests running in both languages.

ffi

A core feature of capnproto-rust is its ability to read messages directly from memory without copying the data into auxiliary structures. Unfortunately, this functionality is a bit tricky to use correctly, as can be seen in its primary interface, the read_message_from_words() function, whose input is of type &[Word]. In the common case where you want to read from a &[u8], you must first call the unsafe function bytes_to_words() in order to get a &[Word]. It is only safe to call this function if you know that your data is 8-byte aligned. So it’s easy to understand why someone might shy away from calling bytes_to_words() and, in turn, read_message_from_words().

I noticed that I quite enjoy writing libraries that support no_std environments, even though I myself don't even work on embedded. It's just very fun to try and get as many features done without ever allocating, purely from a challenge point of view.

There are also some benefits one can hope for, the two big ones being usability in more cases, like embedded, and better performance due to less memory management overhead, possibly less indirection, and therefore more compiler insight.

no-std

IntelliJ CLion is still leading the way for advanced refactoring and polish, but as you may know I’ve been banging on about the need for decent debuggers for Rust for years. I’m happy to say in 2020 that a good debugging experience is now available through VSCode. (Hopefully Clion will raise their debugging game)

debugger

Project Visage is a project to write a new frontend (parser and bytecode emitter) for JavaScript in Rust that’s more maintainable, modular, efficient, and secure than the current frontend. The team (Jason Orendorff, Nicolas Pierron, Tooru Fujisawa, Yulia Startsev) is currently experimenting with a parser generator that generates a custom LR parser.

javascript

This post is an overview of the major projects the Cargo team is interested in tackling in 2020.

It can be difficult to plan and predict around a volunteer-based open-source project with limited resources. Instead of trying to present a wish list, these are projects that already have a solid effort planned to push them forward. That doesn't mean that we are not interested in other projects. We have compiled a more detailed wish list at https://github.com/rust-lang/cargo/projects/1 that gives an outline of things we would like to see, but are unlikely to have significant progress this year.

If you are interested in helping, please let us know! We may not have time to shepherd additional projects, but we may have time to give some amount of feedback and review, particularly for well-motivated people who can do the legwork of design and gathering a consensus.

Hi all! I wanted to give a quick update about the lang team. We're starting something new this year: a regular design meeting. The idea of the design meeting is that it's a time for us to have in-depth discussions on some particular topic. This might be a burning problem that we've discovered, an update on some existing design work, or a forward looking proposal.

sourmash 3 was released last week, finally landing the Rust backend. But, what changes when developing new features in sourmash? I was thinking about how to best document this process, and since PR #826 is a short example touching all the layers I decided to do a small walkthrough.

python

My $dayjob is working in the cozy realms of Java, coddled by a GC and the amenities of a fat runtime. Common wisdom is to use a GC if you can afford it. I sometimes even write python if performance is a non-issue. Why would I then long for Rust every now and then?

In part 1 of this article, we saw how we could use traits to make a function work with various built-in types, how matching traits to types works, and what happens when we violate the coherence rules. In this part we’ll look at some more types, letting us work with concurrency and different techniques for routing input and output. We’ll also look at const generics, an upcoming feature that lets us make better use of types that are parameterised over sizes.
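
A small const generics illustration (mine, not the article's; the feature has since stabilized): the function is parameterised over the array length N.

// N is a const parameter, so the function works for arrays of any fixed length.
fn first_and_last<const N: usize>(values: [i32; N]) -> (i32, i32) {
    (values[0], values[N - 1])
}

fn main() {
    assert_eq!(first_and_last([1, 2, 3]), (1, 3));
    assert_eq!(first_and_last([7, 8, 9, 10]), (7, 10));
}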

Last month, I followed the Advent of Code series of puzzles that have been published each December for the last few years. I typically write my solutions in Ruby, but there were a few puzzles in this batch that I decided to solve in Rust. These problems ended up teaching me a lot about how one can use traits to make a Rust API feel like something you’d do in an untyped language, including expanding the functionality of built-in types in a safe, controlled way.

This year’s story was based around a virtual machine; a software simulation of a physical computer. This particular machine has a very small set of features. It works like a bytecode interpreter, where a program is represented as an array of integers – signed 64-bit integers, rather than just bytes. Once this program is loaded into the machine’s memory, it can modify any value within that memory region, but there are no other storage abstractions – no stack, no heap, no registers, and so on.

The Rust-loving team at Immunant has been hard at work on C2Rust, a migration framework that takes the drudgery out of migrating to Rust. Our goal is to make safety improvements to the translated Rust automatically where we can, and help the programmer do the same where we cannot. First, however, we have to build a rock-solid translator that gets people up and running in Rust. Testing on small CLI programs gets old eventually, so we decided to try translating Quake 3 into Rust. After a couple of days, we were likely the first people to ever play Quake3 in Rust!

c2rust

In this series so far, we've taken a C program and converted it into a faster, smaller, and reasonably robust Rust program. The Rust program is a recognizable descendant of the C program, and that was deliberate: my goal was to compare and contrast the two languages for optimized code.

In this bonus section, I'll walk through how we'd write the program from scratch in Rust. In particular, I'm going to rely on compiler auto-vectorization to produce a program that is shorter, simpler, portable, and significantly faster... and without any unsafe.

Can it be?

unsafe

The Pernosco debugger engine is written in Rust and makes extensive use of async code. We had been using futures-preview 0.2; sooner or later we had to update to "new futures" 0.3, and I thought the sooner we did it the easier it would be, so we just did it. The changes were quite extensive:

103 files changed, 3610 insertions(+), 3865 deletions(-)

That took about five days of actual work. The changes were not just mechanical; here are a few thoughts about the process.

Understanding atomics and the memory ordering options when dealing with them can help us better understand multithreaded programming and why Rust helps us write safe and performant multithreaded code.

Trying to understand atomics by just reading random articles and the documentation in Rust (or C++ for that matter) feels like trying to learn physics by reverse engineering E=MC^2.

I'll give it my best try to explain this for myself and for you in this article. If I succeed the ratio should be WTF?/AHA! < 1. Let me know in the issue tracker of the repository for this article how we did.
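
As a warm-up in that spirit (my example, not the article's): an atomic counter shared across threads, with a note on why Relaxed ordering is enough here.

use std::sync::atomic::{AtomicUsize, Ordering};
use std::sync::Arc;
use std::thread;

fn main() {
    let counter = Arc::new(AtomicUsize::new(0));
    let handles: Vec<_> = (0..4)
        .map(|_| {
            let counter = Arc::clone(&counter);
            thread::spawn(move || {
                for _ in 0..1000 {
                    // Relaxed suffices for a pure counter: we only need the
                    // increment itself to be atomic, not to order other memory accesses.
                    counter.fetch_add(1, Ordering::Relaxed);
                }
            })
        })
        .collect();
    for handle in handles {
        handle.join().unwrap();
    }
    // join() synchronizes with the finished threads, so the final load sees every increment.
    assert_eq!(counter.load(Ordering::Relaxed), 4000);
}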

The Rust team regrets to announce that Rust 1.41.0 (to be released on January 30th, 2020) will be the last release with the current level of support for 32-bit Apple targets. Starting from Rust 1.42.0, those targets will be demoted to Tier 3.

The decision was made on RFC 2837, and was accepted by the compiler and release teams. This post explains what the change means, why we did it, and how your project is affected.

apple

In the previous post Interior mutability patterns we looked at four different conventions that can be used to safely use interior mutability. We didn’t look further than the basic API, that was already plenty of material to cover.

Yet there are some creative methods that push the limit of those abstractions. Let’s explore them!

If this isn't your first time visiting my blog, you may recall that I've spent the past several years building an elaborate microcontroller graphics demo using C++.

Over the past few months, I've been rewriting it — in Rust.

This is an interesting test case for Rust, because we're very much in C/C++'s home court here: the demo runs on the bare metal, without an operating system, and is very sensitive to both CPU timing and memory usage.

The results so far? The Rust implementation is simpler, shorter (in lines of code), faster, and smaller (in bytes of Flash) than my heavily-optimized C++ version — and because it's almost entirely safe code, several types of bugs that I fought regularly, such as race conditions and dangling pointers, are now caught by the compiler.

It's fantastic. Read on for my notes on the process.

stm32

2019 was a great year for clippy. It’s available on stable, even installed by default in the Rust distribution and selectable as a rustup component. We have more than 300 lints, and the upwards trend is unbroken. The lints that we have also see a steady stream of improvements.

By the end of the year I found myself mulling a question that I thought should have a definite answer, but so far it seems to have eluded us: Should clippy lint code expanded from macros? And if so, also from macros defined outside of the current crate?

clippy

I’ve recently learned a new piece of Rust syntax related to specifying lifetimes with types that don’t have an explicit lifetime defined. To save time for anyone who might already know about this “trick” here’s the resulting function definition:

pub fn events(&self) -> impl Iterator<Item = Event> + '_;

The rest of this article is about explaining the + '_ part, why it’s needed and what it solves. Also, VSCode + RLS currently does not highlight this properly, which tells me this is a pretty niche feature. Before I begin explaining what the issue was and how it’s solved, I think it’s best to show the overall problem I was trying to “code my way around”.
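
Here is a simplified, hypothetical example of the same shape (the EventLog and Event types are made up, not the article's): the returned iterator borrows self, and + '_ ties the opaque return type to that borrow.

struct EventLog {
    raw: Vec<u8>,
}

#[derive(Debug)]
struct Event(u8);

impl EventLog {
    // Without `+ '_`, the compiler rejects this: the hidden iterator type
    // captures the borrow of `self`, which must appear in the bounds.
    fn events(&self) -> impl Iterator<Item = Event> + '_ {
        self.raw.iter().map(|&b| Event(b))
    }
}

fn main() {
    let log = EventLog { raw: vec![1, 2, 3] };
    for event in log.events() {
        println!("{:?}", event);
    }
}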

lifetimes

This article is part 6 of the series Making our own ping. Our ping API is simple, but it’s also very limited. It doesn’t allow specifying the TTL (time to live) of packets, it doesn’t allow specifying the timeout, it doesn’t let one specify the data to send along, and it doesn’t give us any kind of information on the reply. Let’s change that now.

ping

Rust's type system requires that there is only ever one mutable reference to a value, or one or more shared references. What happens when you need multiple references to some value, but also need to mutate through them? We use a trick called interior mutability: to the outside world you act like a value is immutable, so multiple references are allowed. But internally the type is actually mutable.

All types that provide interior mutability have an UnsafeCell at their core. UnsafeCell is the only primitive that allows multiple mutable pointers to its interior, without violating aliasing rules. The only way to use it safely is to only mutate the wrapped value when there are no other readers. No, the guarantee has to be even stronger: we can not mutate it and can not create a mutable reference to the wrapped value while there are shared references to its value.

What are some patterns that have been developed to use interior mutability safely? How do multi-threaded synchronization primitives that provide interior mutability follow similar principles?
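
A minimal illustration of the idea (mine, not from the post), using RefCell: shared references on the outside, dynamically checked mutation on the inside.

use std::cell::RefCell;

struct Counter {
    hits: RefCell<u32>,
}

impl Counter {
    // Note: &self, not &mut self — callers only need a shared reference.
    fn record(&self) {
        *self.hits.borrow_mut() += 1;
    }
}

fn main() {
    let counter = Counter { hits: RefCell::new(0) };
    let a = &counter;
    let b = &counter; // multiple shared references are fine
    a.record();
    b.record();
    assert_eq!(*counter.hits.borrow(), 2);
}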

When writing a struct with the intention of it being reused, it’s important not to use boxed trait objects to represent interior data.

Namely, this is because turning an object into a Box<dyn Trait> loses a lot of type information about the object which is difficult to get back should the developer consuming your struct need it.

When programming in Rust, it's not always straightforward to directly translate idioms you know. One such category is tree-like data structures. These are traditionally built out of Node structs that refer to other Nodes in the tree. To traverse through your structure, you'd use these references, and changing the structure of the tree means changing which nodes are referred to inside each one.

Rust hates that. Quickly you'll start running into problems - for instance, when iterating through nodes, you'll generally need to borrow the structure. After doing so, you'll have a bad time doing anything else with the structure inside.

Trees are a fact of life, though, and very much a useful one at that. It doesn't have to hurt! We can use region-based memory management to pretty much forget about it.
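
A hedged sketch of the region/arena idea (mine, not the post's code): nodes live in one Vec and refer to each other by index, so no node ever borrows another.

struct Node {
    value: i32,
    children: Vec<usize>, // indices into the arena, not references
}

struct Tree {
    arena: Vec<Node>,
}

impl Tree {
    fn add_node(&mut self, value: i32, parent: Option<usize>) -> usize {
        let id = self.arena.len();
        self.arena.push(Node { value, children: Vec::new() });
        if let Some(p) = parent {
            self.arena[p].children.push(id);
        }
        id
    }
}

fn main() {
    let mut tree = Tree { arena: Vec::new() };
    let root = tree.add_node(1, None);
    let child = tree.add_node(2, Some(root));
    assert_eq!(tree.arena[root].children, vec![child]);
}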

memory-management

It's been about 6 months since Catherine West's excellent Using Rust for Game Development talk sent me down the Entity-Component-System (ECS) rabbit hole, and I thought I'd share some of my findings.

I've been meaning to write about this for quite a while now but it took a while to put my thoughts into a cohesive article without throwing massive walls of code at you.

ecs

GHC Haskell supports a feature called asynchronous (or async) exceptions. Normal, synchronous exceptions are generated by the currently running code from doing something like trying to read a file that doesn't exist. Asynchronous exceptions are generated from a different thread of execution, either another Haskell green thread, or the runtime system itself.

Rust does not have exceptions at all, much less async exceptions. (Yes, panics behave fairly similarly to synchronous exceptions, but we'll ignore those in this context. They aren't relevant.) Rust also doesn't have a green thread-based runtime like Haskell does. There's basically no direct way to compare this async exception concept from Haskell into Rust.

Or, at least, there wasn't. With Tokio, async/.await, executor, tasks, and futures, the story is quite different. A Haskell green thread looks quite a bit like a Rust task. Suddenly there's a timeout function in Tokio. This post is going to compare the Haskell async exception mechanism to whatever powers Tokio's timeout. It's going to look at various trade-offs of the two different approaches. And I'll end with my own personal analysis.
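
For readers who haven't seen it, Tokio's timeout looks roughly like this (a hedged example assuming a recent tokio rather than the release the post discusses): when the deadline passes, the slow future is simply dropped.

use std::time::Duration;
use tokio::time::{sleep, timeout};

#[tokio::main]
async fn main() {
    let slow = sleep(Duration::from_secs(10));
    match timeout(Duration::from_secs(2), slow).await {
        Ok(()) => println!("finished in time"),
        // On timeout the inner future is dropped; nothing else runs it.
        Err(_elapsed) => println!("timed out"),
    }
}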

async haskell

The Rust async story is extremely good and many people (like me) have already converted their networked applications to async. In this post, I document some async/await pitfalls and show you how to avoid them. Finally, I show how to debug an async program.

async

Strings of text seem to always be a complicated topic when it comes to programming. This counts double for low-level languages which expose the programmer to the full complexity of memory management and allocation.

Rust is, obviously, one of those languages. Strings in Rust are therefore represented using two distinct types: str (the string slice) and String (the owned/allocated string). Learning how to juggle those types is something you need to do very early if you want to be productive in the language.

But even after you’ve programmed in Rust for some time, you may still trip on some more subtle issues with string handling. In this post, I will concentrate on just one common task: writing a function that takes a string argument. We’ll see that even there, we can encounter a fair number of gotchas.
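
As a preview of the trade-offs (a small example of mine, not the post's code): two common signatures for a function that takes a string argument.

fn print_len(s: &str) {
    // &str accepts string literals and &String alike via deref coercion.
    println!("{} chars", s.len());
}

fn print_len_generic(s: impl AsRef<str>) {
    // AsRef<str> also accepts owned Strings directly, at the cost of monomorphization.
    println!("{} chars", s.as_ref().len());
}

fn main() {
    let owned = String::from("hello");
    print_len("hello");
    print_len(&owned);
    print_len_generic(owned);
}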

strings

When creating a Rust crate that aims to be no_std compatible, it can happen that you accidentally break that promise without noticing. To demonstrate this, let's create an example project.

no-std

This article provides an overview of the compilation process and structure of the Rust compiler. The main focus will be on the Mid-level Intermediate Representation (MIR) and understanding it.

rustc

For my game, I decided to store almost every entity in a big chunk of memory allocated only once when the program boots. I am using this technique for three reasons. First, I want full and precise control over how memory is managed in the game; second, I want better data locality in order to increase CPU cache hits; and finally, asking the operating system to allocate more memory at runtime is slow.

memory-management

Producing readable, idiomatic Rust code is a major goal of C2Rust, our project to accelerate migration of C code into Rust. One hurdle we faced is the mismatch between C headers and the Rust module system. C and Rust are similar in many ways: they're both performance oriented languages with explicit memory management and full control over every aspect of the system. Rust's module system is a huge improvement over C header files. Modules declare an interface of type declarations and functions for other modules to use. In contrast, C has no such modern conveniences, so declarations must be duplicated in each source file. If C2Rust is going to produce reasonable Rust code, we have to bridge that gap by de-duplicating and merging declarations across modules.

c2rust

Like many people who have backgrounds in higher level languages like JavaScript and Ruby, one thing that really attracted me to Rust was the ability to get “closer to the metal”. While Rust offers plenty of high level abstractions, it certainly makes you think a bit more about lower level concerns like memory allocation than JavaScript or Ruby do. But of course, you can always go deeper, and learning more about the abstraction layer underneath Rust can be a really great way to understand what makes Rust tick.

In this series, we'll explore the world of assembly language from the perspective of a Rust developer. We'll treat the compiler as a black box and see what kind of assembly instructions get produced from standard, run-of-the-mill Rust code. Doing this should get us a bit closer to understanding what's actually happening on our machine (though, of course, the stack goes even deeper than the assembly language abstraction layer).

asm

Hello! For the latest async interview, I spoke with Carl Lerche (carllerche). Among many other crates, Carl is perhaps best known as one of the key authors behind tokio and mio. These two crates are quite widely used through the async ecosystem. Carl and I spoke on December 3rd.

async

In our last post about Async Rust we looked at Futures concurrency, and before that we looked at Rust streams. In this post we bring the two together, and will take a closer look at concurrency with Rust streams.

async

LRtDW is a series of articles putting Rust features in context for low-level C programmers who maybe don't have a formal CS background — the sort of people who work on firmware, game engines, OS kernels, and the like. Basically, people like me.

unsafe

The highlights of Rust 1.40.0 include #[non_exhaustive] and improvements to macros!() and #[attribute]s. Finally, borrow-check migration warnings have become hard errors in Rust 2015. See the detailed release notes for additional information.

Malaysians have an interesting queuing mechanism: instead of having one line for all counters, there is one line for each counter. Each counter moves at a different speed, so people queue at multiple counters at once and see which one reaches the front first. It’s a classic race for resources. You have just 3 workers instead of yourself, hedging your bets to avoid the worst-case scenarios.

My role at $work these days is to help guide a big company's investment in Rust toward success. This essay covers a slice of my experience as it pertains to unsafe code, and especially bugs in unsafe code.

unsafe

It’s refactor time!

Our complete program is now about a hundred lines, counting blank lines (see the end of part 3 for a complete listing).

While this is pretty good for a zero-dependency project (save for pretty-hex), we can do better.

First off, concerns are mixed up. Second, a lot of low-level details are exposed that really shouldn’t be. I don’t think main() really ought to be using transmute at all. We should build safe abstractions above all of this.

windows

This blog post is continuing my conversation with cramertj. This will be the last post.

In the first post, I covered what we said about Fuchsia, interoperability, and the organization of the futures crate.

In the second post, I covered cramertj’s take on the Stream, AsyncRead, and AsyncWrite traits. We also discussed the idea of attached streams and the importance of GATs for modeling those.

In this post, we’ll talk about async closures.

async

This blog post is continuing my conversation with cramertj.

In the first post, I covered what we said about Fuchsia, interoperability, and the organization of the futures crate. This post covers cramertj’s take on the Stream trait as well as the AsyncRead and AsyncWrite traits.

async

Today we're announcing a brand new team: The Docs.rs Team!

For the second async interview, I spoke with Taylor Cramer – or cramertj, as I’ll refer to him. cramertj is a member of the compiler and lang teams and was – until recently – working on Fuchsia at Google. They’ve been a key player in Rust’s Async I/O design and in the discussions around it. They were also responsible for a lot of the implementation work to make async fn a reality.

async

Earlier last year, we announced OnePush, our notification delivery system written in Rust. In this post, we will cover improvements in our delivery capabilities since then, an interactive tour of OnePush’s subsystems, and reflections on our experience shipping production Rust code. We hope you'll find it insightful!

How exactly does the cfg_if! macro do its thing?
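
For readers who haven't used it, this is the kind of thing cfg_if does (a hedged usage example, not the post's content), assuming the cfg-if crate as a dependency:

use cfg_if::cfg_if;

// Exactly one of these function definitions is compiled, chosen by cfg.
cfg_if! {
    if #[cfg(unix)] {
        fn platform() -> &'static str { "unix" }
    } else if #[cfg(windows)] {
        fn platform() -> &'static str { "windows" }
    } else {
        fn platform() -> &'static str { "other" }
    }
}

fn main() {
    println!("running on {}", platform());
}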

Hi everyone, I haven’t blogged in a while so it feels good to be back. First things first — here’s some quick news. After two years of work on Crossbeam, in 2019 I’ve shifted my main focus onto asynchronous programming to research the craft of building runtimes (think of async-std and tokio). In particular, I want to make async runtimes more efficient and robust, while at the same time also simpler.

In this blog post, I’d like to talk a bit about an interesting problem all runtimes are facing: calling blocking functions from async code.

async

A few weeks ago, dtolnay introduced the idea of autoref-based specialization, which makes it possible to use specialization-like behavior on stable Rust. While this approach has some fundamental limitations, some other limitations of the technique’s initial description can be overcome. This post describes an adapted version of autoref-based specialization – called autoderef-based specialization – which, by introducing two key changes, is more general than the original one and can be used in a wider range of situations.

In the previous lesson in the crash course, we covered the new async/.await syntax stabilized in Rust 1.39, and the Future trait which lives underneath it. This information greatly supersedes the now-defunct lesson 7 from last year, which covered the older Future approach.

Now it’s time to update the second half of lesson 7, and teach the hot-off-the-presses Tokio 0.2 release.

async tokio tutorial

In the last compiler/IDE team meeting we've discussed the overall direction for IDE support in Rust.

ide

It's that time again! Time for us to take a look at how the Rust project is doing, and what we should plan for the future. The Rust Community Team is pleased to announce our 2019 State of Rust Survey! Whether or not you use Rust today, we want to know your opinions. Your responses will help the project understand its strengths and weaknesses and establish development priorities for the future.

It’s about a year since I wrote the last installment in the Rust Crash Course series. That last post was a doozy, diving into async, futures, and tokio. All in one post. That was a bit sadistic, and I’m a bit proud of myself on that front.

Much has happened since then, however. Importantly: the Future trait has moved into the standard library itself and absorbed a few modifications. And then to tie that up in a nicer bow, there’s a new async/.await syntax. It’s hard for me to overstate just how big a quality of life difference this is when writing asynchronous code in Rust.

async

I'm pleased to announce that the Mid-level IR (MIR) constant propagation pass has been switched on by default on Rust nightly which will eventually become Rust 1.41!

I've been playing around trying to asyncify a little web scraper of mine recently. Ran into a few problems doing so, and I noted the lack of beginner materials in this area so I thought I would write up some very simple examples using async-std.

The final example shows how to download multiple URLs simultaneously. I hope this is useful, and if anybody has any suggestions or corrections I look forward to hearing them.

In a previous article we’ve talked about how you can avoid rewriting a library in Rust when you don’t need to. But what about the times when you really do need to?

In most languages you’d need to rewrite the entire library from the ground up, waiting until the port is almost finished before you can start seeing results. However, Rust has a killer feature when it comes to this sort of thing. It can call into C code with no overhead (i.e. the runtime doesn’t need to inject automatic marshalling like C#’s P/Invoke) and it can expose functions which can be consumed by C just like any other C function. This opens the door for an alternative approach:

Port the library to Rust one function at a time.

Essentially, the borrow checker is a compile-time feature which ensures that objects live long enough, and at the same time prevents unsafe concurrent access to variables. This is implemented by following a couple of simple rules: If an object drops out of scope, it is destroyed. An object must outlive all references to it. An object can only have either multiple immutable references, or a single mutable reference. In this blog post we are mostly concerned about the first rule – if an object drops out of scope, it is destroyed.
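
A tiny illustration of that first rule (mine, not the post's code):

struct Noisy(&'static str);

impl Drop for Noisy {
    fn drop(&mut self) {
        println!("dropping {}", self.0);
    }
}

fn main() {
    let _outer = Noisy("outer");
    {
        let _inner = Noisy("inner");
    } // "dropping inner" prints here, at the end of the inner scope
    println!("inner is gone, outer still alive");
} // "dropping outer" prints here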

Hello from Iceland! (I’m on vacation.) I’ve just uploaded the first of the Async Interviews to YouTube. It is a conversation with Alex Crichton (alexcrichton) and Nick Fitzgerald (fitzgen) about how WebAssembly and Rust’s Async I/O system interact.

async

What exactly happens when you panic!()? I recently spent a lot of time looking at the parts of the standard library concerned with this, and it turns out the answer is quite complicated! I have not been able to find docs explaining the high-level picture of panicking in Rust, so this feels worth writing down.

Lately, I’ve been seeing some common misconceptions about how Rust’s futures and async/await work (“blockers”, haha). There’s an influx of new users excited for the major improvements that async/await brings, but stymied by basic questions. Concurrency is hard, even with async/await. Documentation is still being fleshed out, and the interaction between blocking/non-blocking can be tricky. Hopefully this article will help.

async

One of Rust’s biggest selling points is how well it can interoperate with C. It’s able to call into C libraries and produce APIs that C can call into with very little fuss. However, when dealing with sufficiently complex APIs, mismatches between language concepts can become a problem. In this post we’re going to look at how to handle callback functions when working with C from Rust.

if and match are now usable in constants on the latest nightly.
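
A small example of the kind of code this enables: `if` and `match` evaluated in a const context (nightly-only at the time of the post, stable in later releases).

```rust
// A const fn using `if`.
const fn abs_diff(a: i32, b: i32) -> i32 {
    if a > b { a - b } else { b - a }
}

const DIFF: i32 = abs_diff(3, 10);

// A constant computed with `match`.
const LABEL: &str = match DIFF {
    0 => "equal",
    1..=9 => "close",
    _ => "far apart",
};

fn main() {
    println!("{} ({})", DIFF, LABEL);
}
```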

Traditionally, parsing of programming languages is split up into two phases: the lexing stage and the parsing stage. In truth, both of these stages are parsers: they both take an input list of symbols and produce a higher level of structure. It's just that the lexer's output is used as the parser's input.

While developing redismodule-rs, the Rust API for writing Redis modules, I encountered the need to set up a custom memory allocator.

Normally, when a Rust program needs to allocate some memory, such as when creating a String or Vec instance, it uses the global allocator defined in the program. Since Redis modules are built as shared libraries to be loaded into Redis, Rust will use the System allocator, which is the default provided by the OS (using the libc malloc(3) function).

This behavior is problematic for several reasons.
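
For illustration only, here is a minimal sketch of swapping in a custom global allocator; a real Redis module would forward to the Redis allocation functions instead of counting allocations and delegating to `System`.

```rust
use std::alloc::{GlobalAlloc, Layout, System};
use std::sync::atomic::{AtomicUsize, Ordering};

// A toy allocator that forwards to the system allocator while counting allocations.
struct Counting;

static ALLOCATIONS: AtomicUsize = AtomicUsize::new(0);

unsafe impl GlobalAlloc for Counting {
    unsafe fn alloc(&self, layout: Layout) -> *mut u8 {
        ALLOCATIONS.fetch_add(1, Ordering::Relaxed);
        System.alloc(layout)
    }
    unsafe fn dealloc(&self, ptr: *mut u8, layout: Layout) {
        System.dealloc(ptr, layout)
    }
}

// Every String, Vec, Box, ... in the program now allocates through GLOBAL.
#[global_allocator]
static GLOBAL: Counting = Counting;

fn main() {
    let _v = vec![1, 2, 3];
    println!("allocations so far: {}", ALLOCATIONS.load(Ordering::Relaxed));
}
```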

A couple of years ago, I created a programming language. It was an attempt to make an extremely minimal logic programming system, with almost no built-in functionality other than the ability to do first-order logic, for the purposes of following the formalism in some more maths-driven textbooks. Recently, having learned Rust, I thought it would be a good exercise to port this language, originally written in Ruby, as I believe building it in Rust will result in significant performance wins and help make it more usable.

Hello all! I’m going to be trying something new, which I call the “Async Interviews”. These interviews are going to be a series of recorded video calls with various “luminaries” from Rust’s Async I/O effort. In each one, I’m going to be asking roughly the same question: Now that the async-await MVP is stable, what should we be doing next? After each call, I’ll post the recording from the interview, along with a blog post that leaves a brief summary.

async

The following are examples of rendering a fractal image into an Android bitmap with Rust.

android

In a previous post we built a CLI app using Rust. This app used both an HTTP API via the asynchronous hyper library and Git via the synchronous git2.

In this post, we’ll port the whole app to the new async/await syntax, in the hope that the interplay between asynchronous and synchronous program flow becomes easier to handle, that the overall complexity of the app is reduced somewhat, and that readability improves as a side benefit.

While I was working some particularly nasty bugs recently, I put a lot of effort into bug minimization: the process of taking a large test case and finding ways to remove code from the test while preserving the test’s ability to expose the same bug.

As I worked, I realized that I was following a somewhat mechanical process. I was using regular patterns for code transformation. At some point, I said “you know, I’m not sure if everyone is aware of these patterns. I only remember some of them while I am in the middle of a bug minimization session.”

Since the Rust compiler team always wants to help newcomers learn ways they can contribute to the project, I realized this could be the ideal subject for a blog post.

In which we explore Rust's newly stabilized async/.await language feature by creating a simple, asynchronous application. We look at what you need to do asynchronous programming in Rust and how it differs from other languages. And we talk a little bit about Pokémon!

async tutorial

If you’ve ever done much embedded programming in Rust, you’ve most probably run across the arrayvec crate before. It’s awesome. The main purpose of the crate is to provide the ArrayVec type, which is essentially like Vec<T> from the standard library, but backed by an array instead of some memory on the heap.

One of the problems I ran into while writing the Motion Planning chapter of my Adventures in Motion Control was deciding how far ahead my motion planner should plan.

Types—in the static-typing sense—are useful because they help people, not computers. Oh sure, we use them, in part, to subdue the compiler or meet some need peculiarly arising from our computer. But types are valuable because they are a way of communicating.

Type systems are a way of communicating. Type systems are a way of announcing what you understand, expect, or intend. Good type systems let you do so at the level of abstraction you choose.

A programming language’s solution to error handling significantly influences the robustness, brevity, readability and – to an extent – the runtime performance of your code. Consequently, the error handling story is an important part of PL design. So it should not come as a surprise that the Rust community constantly discusses this topic. Given some recent discussions and the emergence of more and more error handling crates, this article shares some of my thoughts (not solutions!) on this.

error-handling

One of the big sources of difficulty on the async ecosystem is spawning tasks. Because there is no API in std for spawning tasks, library authors who want their library to spawn tasks have to depend on one of the multiple executors in the ecosystem to spawn a task, coupling the library to that executor in undesirable ways.
Ideally, many of these library authors would not need to spawn tasks at all.

async

Last month, rust-analyzer gained an exciting new feature: find usages. It was implemented by @viorina in #1892.

Now that async/await has been released, attention has drifted back to refining stackless coroutines (the unstable language feature that makes async/await possible). Alas, the latest RFC has shown that there is still a lot of disagreement on what exactly coroutines in Rust should look like beyond async/await. I felt it would be useful to flesh out what coroutines could be so we can better discuss what they should be.

This book is targeted towards experienced programmers that already feel somewhat comfortable with vanilla Rust (you definitely do not need to be an "expert" though, I certainly am not) and would like to dip their toes into its async ecosystem.

As the title indicates, this is not so much a book about how to use async Rust as much as it is about trying to build a solid understanding of how it all works under the hood. From there, efficient usage should come naturally.

async tutorial book

For those who don't follow Swift's development, ABI stability has been one of its most ambitious projects and possibly its defining feature, and it finally shipped in Swift 5. The result is something I find endlessly fascinating, because I think Swift has pushed the notion of ABI stability farther than any other language without much compromise.

So I decided to write up a bunch of the interesting high-level details of Swift's ABI. This is not a complete reference for Swift's ABI, but rather an abstract look at its implementation strategy. If you really want to know exactly how it allocates registers or mangles names, look somewhere else.

Also for context on why I'm writing this, I'm just naturally inclined to compare the design of Swift to Rust, because those are the two languages I have helped develop. Also some folks like to complain that Rust doesn't bother with ABI stability, and I think looking at how Swift does it helps elucidate why that is.

On Thursday, November 7, async-await syntax hit stable Rust, as part of the 1.39.0 release. This work has been a long time in development -- the key ideas for zero-cost futures, for example, were first proposed by Aaron Turon and Alex Crichton in 2016! -- and we are very proud of the end result. We believe that Async I/O is going to be an increasingly important part of Rust's story.

async

The Rust team is happy to announce a new version of Rust, 1.39.0. The highlights of Rust 1.39.0 include async/.await, shared references to by-move bindings in match guards, and attributes on function parameters.

I found myself in a situation where I had a number of CSV files that all shared some key data, and all had to be put together into a larger dataset. I figured that the easiest way to do this would be to deserialize the files, then stitch them together using a portion of their data as a key.

I decided to try my hand at writing a macro to solve the issue, and I ended up with two of them; one for one-to-one relations, and one for one-to-many.

I’ve spent quite a lot of time extolling the virtues of message-passing in concurrent Rust. However, there are times when shared-state is the right approach, sometimes just because it’s the historical approach chosen in a module and you want to add something to it without refactoring the whole thing. So today we’re going to talk about using shared-state, with the help of condvars and locks.
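
As a refresher on the building blocks, here is a minimal, standard-library-only sketch of a mutex paired with a condition variable (essentially the pattern from the `Condvar` documentation):

```rust
use std::sync::{Arc, Condvar, Mutex};
use std::thread;

fn main() {
    // Shared state: a flag protected by a Mutex, paired with a Condvar.
    let pair = Arc::new((Mutex::new(false), Condvar::new()));
    let pair2 = Arc::clone(&pair);

    thread::spawn(move || {
        let (lock, cvar) = &*pair2;
        *lock.lock().unwrap() = true;
        // Wake up the waiting thread now that the state has changed.
        cvar.notify_one();
    });

    let (lock, cvar) = &*pair;
    let mut ready = lock.lock().unwrap();
    while !*ready {
        // wait() releases the lock while blocked and re-acquires it on wake-up.
        ready = cvar.wait(ready).unwrap();
    }
    println!("ready!");
}
```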

Rust is well-known for its helpful error messages, good tooling, and generally empathic compiler interface. If something goes wrong, Rust tries hard to help you get back on track. In this post I'd like to talk about the runtime aspect of debugging errors.

I'm finally happy to announce that I've "finished" the 1.29 branch of mrustc (for those who don't know, mrustc is my attempt at making a Rust compiler, primarily for breaking the bootstrap chain).

This version is capable of compiling both rustc 1.29 (and packaged cargo) AND rustc 1.19, both of which can compile their successors (1.20 and 1.30 - and 1.30 produces binary-identical output).

mrustc

For most of 2018, we've been issuing warnings about various bugs in the borrow checker that we plan to fix -- about two months ago, in the current Rust nightly, those warnings became hard errors. In about two weeks, when the nightly branches to become beta, those hard errors will be in the beta build, and they will eventually hit stable on December 19th, as part of Rust 1.40.0. If you're testing with Nightly, you should be all set -- but otherwise, you may want to go and check to make sure your code still builds. If not, we have advice for fixing common problems below.

rustc

I recently finished my first Rust project - a command line utility called “what” that displays network utilization information. As a newcomer to Rust, this project offered quite a few challenges for me. This post is a write up of one of them, going into detail on the parts that I personally found most difficult to understand.

In this first post, I’d like to talk about implementing a job queue to resolve IPs into their hostnames by querying a remote DNS server.

What is a ket? How do traits work? Implementing a quantum computer simulator in Rust.

The Learning Working Group, formed in April 2019, is focused on making the compiler easier to learn by ensuring that rustc-guide and API docs are "complete". It is one of the many efforts by the Rust Compiler team to decrease the barrier of contributing to the compiler.

rustc

In our last adventure we looked at C++ exceptions in WebAssembly with the emscripten compiler. Now we’re taking a look at the main error handling system for another language targeting WebAssembly, Rust. Rust has a “panic”/”unwind” system similar to C++ exceptions, but it’s generally recommended against catching panics.

error-handling wasm

How to build a timer in Rust in five easy evolutionary steps.

In my spare time I’m an emergency services volunteer, and one of the tasks our unit has is to run the radio network and keep track of what’s happening. This can be a pretty stressful job, especially when there’s lots of radio traffic, and it’s not unusual to miss words or entire transmissions.

To help with a personal project that could make the job easier I’d like to implement a basic component of audio processing, the Noise Gate.

audio

What if I told you that you could use the same very performant code in Android, iOS or even in Flutter. In this article, we’ll see how to achieve this with Rust.

ios android

After reading boat’s excellent post on asynchronous destructors, I thought it might be a good idea to write some about async fn in traits. Support for async fn in traits is probably the single most common feature request that I hear about. It’s also one of the more complex topics. So I thought it’d be nice to do a blog post kind of giving the “lay of the land” on that feature – what makes it complicated? What questions remain open?

async

"Specialization" refers to permitting overlapping impls in Rust's trait system so long as for every possible type, one of the applicable impls is "more specific" than the others for some intuitive but precisely defined notion of specific. Discussions about a specialization language feature have been ongoing for 4.5 years (RFC 1210, rust-lang/rust#31844). Today the feature is partially implemented in rustc but is not yet sound when mixed with lifetimes (rust-lang/rust#40582) and requires more language design work and compiler work before it could be stabilized.

This page covers a stable, safe, generalizable technique for solving some of the use cases that would otherwise be blocked on specialization. The technique was originally developed for use by macros in the Anyhow crate.

I am building yet another order book tool in Rust, which requires me to parse the L3 (full) market data feed from NASDAQ. In the words of one experienced trading systems developer, writing a feed handler is boooooring, and it does not build an order book by itself, so I searched to see if someone else had implemented it already. Thankfully, adwhit has implemented a library to parse ITCH 5.0 feeds from files, which is one line away from importing into my Cargo.toml.

Today I'm announcing a new experiment in the compiler team, the LLVM ICE-breaker group. If you're familiar with LLVM and would like to contribute to rustc -- but without taking on a large commitment -- then the LLVM ICE-breaker group might well be for you!

Once you get past the growing pains of the Borrow Checker and realise Rust gives you the power to do things which would be unheard of (or just plain dangerous) in other languages, the temptation to Rewrite it in Rust can be quite strong. However, at best the temptation to RiiR is unproductive (unnecessary duplication of effort), and at worst it can promote the creation of buggy software (why would you be better equipped to write a library for some domain-specific purpose than the original author?).

In this post, we’ll talk about the financial side of the rust-analyzer project. The goal is to find out how much rust-analyzer costs now, formulate financial goals for speeding up development, and document the Open Collective expenses policy.

The first version of async/await syntax is in the beta release, set to be shipped to stable in 1.39 on November 7, next month. There are a wide variety of additional features we could add to async/await in Rust beyond what we’re shipping in that release, but speaking for myself I know that I’d like to pump the brakes on pushing forward big ticket items in this space. Let’s let the ecosystem develop around what we have now before we start sprinting toward more big additions to the language.

The highlights of this release are profiles support, the ability to get the latest available nightly with all the components you need, and improvements to the rustup doc command. You can also check out the changelog for a list of all the changes included in this release.

rustup

The aim of this article is to demonstrate one of many ways to detect application uniqueness and establish unilateral interprocess communication in Rust on the Linux platform. Note that the APIs used in this article are not portable to Windows and other *nix systems. Also note that the methods used in this article may not be the best fit for all use cases.

My favorite Rust function is std::mem::drop, which is used to free or deallocate a value, similar to free() in C.
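
A tiny example of what `drop` buys you: the value is freed immediately, and the compiler then refuses any further use of it.

```rust
fn main() {
    let v = vec![1, 2, 3];
    // Explicitly drop the vector; its heap buffer is freed here rather than
    // at the end of the scope.
    drop(v);
    // `v` can no longer be used: the compiler rejects any further access.
    // println!("{:?}", v); // error[E0382]: borrow of moved value: `v`
}
```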

After originally researching the history and discussions around Rust's async story, I realized I needed a better understanding of async basics, and the result is this book. I've published it as a gitbook to make this journey easier for the next person (hopefully).

async

The Rust compiler has problems creating Bitcode that's compatible with recent versions of Xcode. Ditto uses a custom toolchain that stays in sync with Apple—and you can too.

derive_more is a crate which has many proc macros, amongst which is a macro for deriving From for structs, enums, and newtypes. From is the basic mechanism for using ? ergonomically in a function which returns Result<T, Error>. Almost everything I write has the derive_more crate as a dependency, and the following pattern for handling errors.
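
A minimal sketch of that pattern, assuming `derive_more` as a dependency (the error variants here are just examples, not the post's):

```rust
use derive_more::From;

// `#[derive(From)]` generates `From<std::io::Error>` and `From<ParseIntError>`
// for the single-field variants below.
#[derive(Debug, From)]
enum Error {
    Io(std::io::Error),
    Parse(std::num::ParseIntError),
}

fn read_number(path: &str) -> Result<i64, Error> {
    // `?` uses the derived From impls to convert either error type into Error.
    let text = std::fs::read_to_string(path)?;
    Ok(text.trim().parse()?)
}

fn main() {
    println!("{:?}", read_number("number.txt"));
}
```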

error-handling

I interned with Microsoft as a Software Engineering Intern in the MSRC UK team in Cheltenham this past summer. I worked in the Safe Systems Programming Language (SSPL) group, which explores safe programming languages as a proactive measure against memory-safety related vulnerabilities.

This blog post describes the project that I have been working on under the mentorship of the SSPL team. Hopefully, this provides additional insight into the work Microsoft interns do! My goal was to build an open-sourced Rust library that will allow developers to both consume and produce in-process Component Object Model (COM) components in an idiomatic manner.

windows

Over the past few years, we’ve heard over and over again about an exodus from Mac to Windows in the creative community. Here at Astropad, we’ve kept a close eye on this shift, knowing that Windows would be a big part of our company’s future. Our flagship products — Astropad Studio and Luna Display — primarily serve the creative pro market. Both products run on our low-latency, high-fidelity video streaming technology called Liquid that was designed to meet the demands of professional illustrators, animators, and photographers.

When we were first building our products, we used the tools we were most comfortable with, like Objective-C and the Cocoa APIs. This allowed us to move quickly, launch Astropad 1.0, and establish product-market fit in a relatively short period of time. But as we grew, we made the mistake of doubling down on Objective-C, and we pushed off the Windows effort because it created a catch-22 situation of engineering hurdles. Our Liquid engine was tightly wrapped around the Apple ecosystem, and the thought of unraveling ourselves was hard to imagine.

There's a common pattern in Rust APIs: returning a relatively complex data type which provides a trait implementation we want to work with. One of the first places many Rust newcomers encounter this is with iterators.
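
A small example of the pattern: the concrete return type is a mouthful of nested adapters, so the function names only the trait it implements.

```rust
// The real return type is something like Filter<Map<Range<u32>, ...>, ...>,
// so we expose it as `impl Iterator` instead of spelling it out.
fn even_squares(limit: u32) -> impl Iterator<Item = u32> {
    (0..limit).map(|n| n * n).filter(|n| n % 2 == 0)
}

fn main() {
    for n in even_squares(10) {
        println!("{}", n);
    }
}
```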

As you've perhaps heard, recently the async-await feature landed on the Rust beta branch. This marks a big turning point in the usability story for Async Rust. But there's still a lot of work to do. As we mentioned in the main post, the focus for the Async Foundations WG in the immediate term is going to be polish, polish and (ahem) more polish.

In particular, we want to take aim at a backlog of strange diagnostics, suboptimal performance, and the occasional inexplicable type-check failure. This is a shift: whereas before, we could have laser focus on things that truly blocked stabilization, we've now got a large set of bugs, often without a clear prioritization between them. This requires us to mix up how the Async Foundations WG is operating.

async

This is the "Inside Rust" blog. This blog is aimed at those who wish to follow along with Rust development. The various Rust teams and working groups use this blog to post status updates, calls for help, and other similar announcements.

Tarpaulin (or cargo-tarpaulin) is a code coverage tool for Rust, and anyone who’s used it might know that until recently it had an issue with code that used futures.

Azure IoT Edge is an open source, cross platform software project from the Azure IoT team at Microsoft that seeks to solve the problem of managing distribution of compute to the edge of your on-premise network from the cloud. This post explains some of the rationale behind our choice of Rust as the implementation programming language for the Security Daemon component in the product.

iot

Imagine you are implementing a calculator application and want users to be able to extend the application with their own functionality. For example, imagine a user wants to provide a random() function that generates true random numbers using random.org instead of the pseudo-random numbers that a crate like rand would provide.

The Rust language gives you a lot of really powerful tools for adding flexibility and extensibility to your applications (e.g. traits, enums, macros), but all of these happen at compile time. Unfortunately, to get the flexibility that we’re looking for, we’ll need to be able to add new functionality at runtime. This can be achieved using a technique called Dynamic Loading.
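
As a hedged sketch of what dynamic loading can look like, using the `libloading` crate (the `./plugin.so` path and `random` symbol are hypothetical and would come from a separately built `cdylib`):

```rust
use libloading::{Library, Symbol};

fn main() -> Result<(), Box<dyn std::error::Error>> {
    // Load a plugin at runtime and look up an exported function by name.
    // The plugin is assumed to expose: #[no_mangle] pub extern "C" fn random() -> f64
    unsafe {
        let lib = Library::new("./plugin.so")?;
        let random: Symbol<unsafe extern "C" fn() -> f64> = lib.get(b"random")?;
        println!("plugin says: {}", random());
    }
    Ok(())
}
```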

Big news! As of this writing, syntactic support for async-await is available in the Rust beta channel! It will be available in the 1.39 release, which is expected to be released on November 7th, 2019. Once async-await hits stable, that will mark the culmination of a multi-year effort to enable efficient and ergonomic asynchronous I/O in Rust. It will not, however, mark the end of the road: there is still more work to do, both in terms of polish (some of the error messages we get today are, um, not great) and in terms of feature set (async fn in traits, anyone?).

async

Sometimes, I get this nudging feeling that something is not exactly right and that I have to go out and save the world and fix it (even though it’s usually something minor or doesn’t need fixing at all). I guess everyone has days like these. It’s part of what drives me to invest my free time in writing software.

This is about some dead ends when trying to fix the problem of Rust’s async networking fragmentation. I haven’t been successful, but I can at least share what I tried and discovered; maybe someone else has the same bugging feeling and won’t have to repeat these dead ends. Or just maybe some of the approaches would work for some other problems. And because we have a bunch of success stories out there, having some failure stories to balance it doesn’t hurt.

async

The highlight of this release is pipelined compilation. The release also includes linting of some incorrect uses of mem::{uninitialized, zeroed}, #[deprecated] macros, std::any::type_name, and more.

Rust's orphan rule prevents us from implementing a foreign trait on a foreign type. While this may appear limiting at first, it is actually a good thing and one of the ways the Rust compiler can prove at compile time that our code works the way we intended.

This blog post is a follow-up on one that I already wrote some time ago. In this one, we will go more in-depth into the "local wrapper type" idea and rebrand it as "generic newtypes".

Last month we introduced Surf, an async cross-platform streaming HTTP client for Rust. It was met with a great reception, and people generally seem to be really enjoying it. A common piece of feedback we've gotten is how much people enjoy the interface, in particular how little code it requires to create HTTP requests. In this post we'll cover a pattern at the heart of Surf's ergonomics that stjepang came up with: the "async finalizer".

async

It feels like an eternity since I’ve started using Rust, and yet I remember vividly what it felt like to bang my head against the borrow checker for the first few times.

I’m definitely not alone in that, and there have been quite a few articles on the subject! But I want to take some time to present the borrow checker from the perspective of its benefits, rather than as an opponent to contend with.

memory-management

Our goal in the C2Rust project is to translate any valid C99 program into equivalent Rust code. Naturally, this means we need to properly support translating C variadic functions into Rust. For a long time, the Rust-C FFI only allowed one-way calls to C variadic functions: Rust code could call C variadic functions, but not the other way around. Rust RFC 2137 proposed an interface for Rust code to provide C-compatible variadic functions, which was later implemented as a series of patches by Dan Robertson that have been merged into nightly Rust from November 2018 to February 2019.

c2rust

But It’s Better that "🤦🏼‍♂️".len() == 17 and Rather Useless that len("🤦🏼‍♂️") == 5

From time to time, someone shows that in JavaScript the .length of a string containing an emoji results in a number greater than 1 (typically 2) and then proceeds to the conclusion that haha JavaScript is so broken—and is rewarded with many likes. In this post, I will try to convince you that ridiculing JavaScript for this is less insightful than it first appears and that Swift’s approach to string length isn’t unambiguously the best one. Python 3’s approach is unambiguously the worst one, though.

unicode

In a previous post we've looked at Rust streams. In this post we're going to discuss another problem in the async space: futures concurrency combinators. We're going to cover the different forms of concurrency that can be expressed with Futures, and cover both fallible and infallible variants.
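
As a small taste of what such combinators look like (a sketch using the `futures` 0.3 crate; the two async functions are invented placeholders):

```rust
use futures::{executor::block_on, join};

async fn fetch_user() -> &'static str {
    "alice"
}

async fn fetch_posts() -> Vec<&'static str> {
    vec!["hello", "world"]
}

fn main() {
    // join! drives both futures concurrently and returns once the slower one finishes.
    let (user, posts) = block_on(async { join!(fetch_user(), fetch_posts()) });
    println!("{}: {:?}", user, posts);
}
```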

async

In Part 1, we covered how async fns in Rust are compiled to state machines. We saw that the internal compiler implementation uses generators and the yield statement to facilitate this transformation. In this post, we'll go over some subtleties that the compiler implementation must consider when optimizing generators. We'll look at two different kinds of analysis, liveness analysis and storage conflict detection.

async

How do you write a parser using Rust and the lalrpop crate? In this article, we are going to walk through the implementation of parsing a string like this:

This is a {dog, cat}.

And we want to expand this string into the following two strings:

This is a dog.
This is a cat.

parsers

Why don’t I implement a nice monadic parser combinator library in Rust? That was my thought after implementing a low-level mock HTTP server in MIO, when I actually needed to parse the bytes received by the server. What I wanted was a declarative way to define sequences of strings to be matched and/or extracted.

parsers

One neat result of Rust’s futures and async/await design is that all of the async callers are on the stack below the async callees. In most other languages, only the youngest async callee is on the stack, and none of the async callers. Because the youngest frame is most often not where a bug’s root cause lies, this extra context makes debugging async code easier in Rust.

async

I have a bit of a history of killing SSDs - probably because I do a bit too much compiling and management of thousands of tiny files. Plenty of developers have this problem! So while thinking one evening, I was curious if I could set up a ramdisk on my Mac for my cargo work to output to.

Last December, I decided to solve the Advent of Code (AoC) programming puzzles in Rust. This website is an advent calendar proposing a new algorithmic problem of varying difficulty every day throughout December, so it’s a great way to learn and practice a programming language. There is also a leaderboard for the fastest people to solve them, but the past problems remain available forever, so don’t worry if you’re too slow or busy in December, you can try them any time of the year - as this overdue blog post shows!

I already have some experience in Rust programming, but I felt that I could learn more about idiomatic Rust. Given the short time to solve each problem, it was a perfect opportunity to avoid reinventing the wheel but use the standard library as much as possible.

Here are the most useful Rust features that I learned and used during this challenge.

In this article, we will explore how to wrap those functions and make them safe for normal use. We’ll go over how to define a wrapper struct that handles initialization and cleanup, and describe some traits that govern how application developers can safely use your library with threads. We’ll also talk a bit about how to turn a function’s random integer return into an ergonomic, type-checked Result, how to translate strings and arrays to and from the world of C, and how to turn raw pointers returned from C into scoped objects with inherited lifetimes.

The overall goal of this step is to dig into the C library’s documentation and make each function’s internal assumptions explicit.
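
A minimal sketch of that wrapper shape, assuming a hypothetical C library with `lib_init`/`lib_run`/`lib_free` functions (names invented for the example):

```rust
use std::os::raw::c_int;

// Opaque handle type owned by the C library.
#[allow(non_camel_case_types)]
#[repr(C)]
struct lib_handle {
    _private: [u8; 0],
}

extern "C" {
    fn lib_init() -> *mut lib_handle;
    fn lib_run(h: *mut lib_handle) -> c_int;
    fn lib_free(h: *mut lib_handle);
}

pub struct Handle(*mut lib_handle);

impl Handle {
    // Initialization failure (a null pointer) becomes an Option instead of a footgun.
    pub fn new() -> Option<Handle> {
        let raw = unsafe { lib_init() };
        if raw.is_null() { None } else { Some(Handle(raw)) }
    }

    // Make the C "non-zero means error" convention explicit in the type.
    pub fn run(&mut self) -> Result<(), c_int> {
        match unsafe { lib_run(self.0) } {
            0 => Ok(()),
            err => Err(err),
        }
    }
}

impl Drop for Handle {
    fn drop(&mut self) {
        // Cleanup happens exactly once, when the wrapper goes out of scope.
        unsafe { lib_free(self.0) };
    }
}
```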

unsafe ffi

A companion to the RustConf 2019 talk with the same name; an introduction to making your first contribution to the Rust compiler.

This is a note on how to make multithreaded programs more robust. It’s not really specific to Rust, but I get to advertise my new jod-thread micro-crate :)

I’m about to accept a PR that will increase druid’s compile time about 3x and its executable size almost 2x. In this case, I think the tradeoff is worth it (without localization, a GUI toolkit is strictly a toy), but the bloat makes me unhappy and I think there is room for improvement in the Rust ecosystem.

Coverage reports are widely used to visualize the lines of code which are covered by test cases. Often this is used in CI to block merge requests which lower test coverage by some metric. But coverage reports don’t have to be test coverage reports. In general, the idea of “run some code and see which lines are executed” can be applied to anything, not just the test cases.

Rust is one of the major programming languages that’s been getting popular in recent years. It has many advanced high level language features like Scala. This made me interested to learn Rust. So in this next series of blogs I will share my experience with Rust from a Scala developer’s point of view. I would like to explore how these two languages approach things, and to explore their similarities and differences. This is the seventh post in the series. In this post, I will be talking about type classes. You can find all the other posts in the series here.

A home for compiler team planning documents, meeting minutes, and other such things. If you’re interested in learning about how rustc works – as well as advice on building the compiler, preparing a PR, and other similar topics – check out the rustc-guide.

Today I want to dig into one of the difficulties we ran into while trying to rewrite our IoT Python code in Rust: specifically FFI, or the “Foreign Function Interface” — the bit that allows Rust to interact with other languages. When I tried to write Rust code to integrate with C libraries a year ago, the existing documents and guides often gave conflicting advice, and I had to stumble through the process on my own. This guide is intended to help future Rustaceans work through the process of porting C libraries to Rust, and familiarize the reader with the most common problems we encountered while doing the same.

ffi

This time we dive into std::pin, which has dense documentation.

If you're familiar with promises in JavaScript and followed the last blog post you may have been confused about where the familiar combinators (then, catch, and finally) were in the previous post. You will find their equivalents in this post, and, by the end, the following code will compile. You will also gain an understanding of the types, traits, and underlying concepts that make futures work.

The highlights of Rust 1.37.0 include referring to enum variants through type aliases, built-in cargo vendor, unnamed const items, profile-guided optimization, a default-run key in Cargo, and #[repr(align(N))] on enums. Read on for a few highlights, or see the detailed release notes for additional information.

We’re pleased to announce the release of the first Tokio alpha with async & await support. This includes updating all of the Tokio crates to use std::future instead of futures 0.1. It also includes adding async fn versions of the APIs.

A lot of programs need to read some kind of configuration at startup. But the challenge doesn’t end here. Some programs ‒ certainly not all, but some ‒ are long running. For these, restarting them to change configuration isn’t something you’d want to do. The unix daemon convention is to send a SIGHUP signal to the process.
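
For illustration, here is a minimal sketch of reacting to SIGHUP using the `signal-hook` crate (version 0.3 assumed; `reload_config` is a placeholder for the real reload logic):

```rust
use signal_hook::{consts::SIGHUP, iterator::Signals};

fn reload_config() {
    // Placeholder: re-read the configuration file here.
    println!("re-reading configuration...");
}

fn main() -> Result<(), Box<dyn std::error::Error>> {
    // Register interest in SIGHUP and block on incoming signals.
    let mut signals = Signals::new(&[SIGHUP])?;
    for _signal in signals.forever() {
        reload_config();
    }
    Ok(())
}
```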

A Rust String is a vector of bytes containing a UTF-8 string, which is an uneasy combination. You can’t simply index into a String: the compiler forbids it, because you’re far too likely to get one byte of a multi-byte UTF-8 char. Instead you need to use a Chars iterator to parse out the string character by character.
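
A short example of the difference between byte indexing and character iteration:

```rust
fn main() {
    let s = String::from("héllo");
    // s[1] does not compile; iterate over chars instead.
    for (i, c) in s.chars().enumerate() {
        println!("char {}: {}", i, c);
    }
    // Byte length and character count differ for non-ASCII text.
    println!("{} bytes, {} chars", s.len(), s.chars().count());
}
```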

First off, thanks for all the comments and kind words on the original writeup; I've been meaning to follow up on some of the suggestions and write about the different ways to represent monads (and functors, HKTs, etc) that now exist, but a month of being busy has kind of gotten in the way (mainly with three new kittens!).

And for sure, I do not expect (nor do I want) this to become the norm for production-level Rust: rather, I hope that this can contribute to the foundations of programming with higher-level abstractions in Rust, somewhat like how early template metaprogramming in C++ and typeclass-constraint-unification metaprogramming in Haskell have contributed, perhaps indirectly, to later innovations in their respective languages and ecosystems that were much more reasoned, sound and usable.

Recently I have been retooling some core Rust libraries at $work to play nicely with native async/await syntax. This note covers my thoughts on why this feature is so important to our async codebase if it's "just" syntax sugar for a job that could just be done using raw Futures instead.

I’ve used C++ professionally in games and simulations for over 10 years, and in the past few years I’ve also used C# to build distributed backend systems. Lately, I’ve been exploring Rust.

I'm feeling really positive about Rust's prospects of popularity and wide-spread adoption in the future. I've been a part of the Rust community for three years now, and it feels like the stars are starting to align in order to let Rust jump into a position of dominance in the programming language world. There are several different, wide-spread, and mostly unrelated trends that I've noticed are all coming together with positive implications for Rust.

Recently, there’s been a lot of talk about unsafe in the Rust world, and how to deal with it. Let’s recap: Rust has a subset called “safe Rust”, with a few very neat guarantees, such as memory safety and freedom from data races. The superset that completes the language is called “unsafe Rust”, and it still has a number of cool safeguards, but it also has an escape hatch to allow bending a few of them in order to let us write safe abstractions on top in much the same language.

There is a fair amount of confusion about what unsafe means in Rust, as well as debate about how one should think about it. Recently I’ve seen several blog posts like What is Rust’s unsafe?, The Temptation of Unsafe and Unsafe as a Human-Assisted Type System. I’m not really attempting to explain what is considered unsafe in Rust, which is explained by the reference. Nor am I going to try to answer the question of precisely when unsafe should be used and how often. My basic suggestion: we can think of unsafe in terms of mathematical axioms and theorems. This understanding is somewhere in between actual mathematical rigour and an analogy.

In a previous post I introduced the MNIST dataset and the problem of classifying handwritten digits. In this post I’ll be using the code I wrote in that post to port a simple neural network implementation to Rust. My goal is to explore performance and ergonomics for data science workflows in Rust.

python neural-network

This is a short note about yet another way to look at Rust’s unsafe. Today, an interesting bug was found in rustc, which made me aware just how useful unsafe is for making code maintainable. The story begins a couple of months ago, when I was casually browsing through recent pull requests for rust-lang/rust. I was probably waiting for my code to compile at that moment :] Anyway, a pull request caught my attention, and, while I was reading the diff, I noticed a usage of unsafe.

The recent 1.36.0 release of Rust has brought a mem::MaybeUninit union that allows safer handling of possibly uninitialized data. MaybeUninit is a replacement for mem::uninitialized. Why? Because with mem::uninitialized it is damn easy to shoot yourself in the foot.
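
A minimal example of the safer workflow: the value is only read after it has been explicitly written and `assume_init` has been called.

```rust
use std::mem::MaybeUninit;

fn main() {
    // Reserve space without initializing it; reading it now would be undefined
    // behavior, and the MaybeUninit wrapper makes that hard to do by accident.
    let mut slot: MaybeUninit<u32> = MaybeUninit::uninit();
    unsafe {
        slot.as_mut_ptr().write(42);
    }
    // Only after writing do we assert that the value is initialized.
    let value = unsafe { slot.assume_init() };
    println!("{}", value);
}
```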

Turning functions into first-class citizens in our programming languages is one of the major changes of the decade. Well, kind of. The concept, also known as lambda, is far from new. Functional programming languages had it from the very start, during the late ’50s. Even some of the object-oriented languages like Python had it quite early, back in 1994. However it became an official part of C++ only in 2011, and Java brought it even later in 2014. And with those two languages and many others, it became the norm, even for non-functional programming. As first-class citizens, functions can be saved in variables or passed as arguments to other functions easily.

Recently, another round of discussion concerning the use of Rust’s unsafe features in the Actix web framework happened, or rather erupted, on Reddit, even more heated and acrimonious than the first time around. (I am not linking to any of the threads, as I believe that they don’t need any more exposure. Use your favorite search engine.) This proves, if more proof is needed, that people hold passionate beliefs about the matter.

One of my favorite blog posts about Rust is Things Rust Shipped Without by Graydon Hoare. To me, footguns that don’t exist in a language are usually more important than expressiveness. In this slightly philosophical essay, I want to tell about a missing Rust feature I especially like: constructors.

This post is about uninitialized memory, but also about the semantics of highly optimized “low-level” languages in general. I will try to convince you that reasoning by “what the hardware does” is inherently flawed when talking about languages such as Rust, C or C++. These are not low-level languages. I have made this point before in the context of pointers; this time it is going to be about uninitialized memory.

I’ve seen a lot of misconceptions around what the unsafe keyword means for the utility and validity of Rust and its marketing as a “safe systems language”. The truth is a lot more complicated than a single pithy tweet can possibly sum up, unfortunately; here it is as I see it.

Basically, the unsafe keyword does not turn off the advanced type system that keeps Rust code honest. It only allows a few select “superpowers”, like dereferencing raw pointers. It is used to implement safe abstractions over a fundamentally unsafe world so that the majority of Rust code can use those abstractions and avoid memory unsafety.

I consider myself an advanced beginner in Rust. There is still much I’m wrapping my head around–and I still get caught off guard by the “move” and “mutability” rules Rust enforces. However, in keeping with my personal emphasis, I’ve devoted my efforts to learning how to create automated tests in Rust. The below guidelines are not exhaustive, but represent my learning so far. Feedback is welcome!
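
For anyone following along, here is the basic shape of Rust's built-in test support, which `cargo test` picks up automatically:

```rust
// Unit tests conventionally live next to the code in a #[cfg(test)] module.
pub fn add(a: i32, b: i32) -> i32 {
    a + b
}

#[cfg(test)]
mod tests {
    use super::*;

    #[test]
    fn adds_two_numbers() {
        assert_eq!(add(2, 2), 4);
    }

    #[test]
    #[should_panic]
    fn panics_on_purpose() {
        panic!("demonstrates #[should_panic]");
    }
}
```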

I have been thinking about how language feature development works in Rust. I wanted to write a post about what I see as one of the key problems: too much concurrency in our design process, without any kind of “back-pressure” to help keep the number of “open efforts” under control. This setup does enable us to get a lot of things done sometimes, but I believe it also leads to a number of problems.

Although I don’t make any proposals in this post, I am basically advocating for changes to our process that can help us to stay focused on a few active things at a time. Basically, incorporating a notion of capacity such that, if we want to start something new, we either have to finish up with something or else find a way to grow our capacity.

I wanted to give an update on the status of the “async-await foundations” working group. This post aims to cover three things: the “async await MVP” that we are currently targeting; how that fits into the bigger picture; and how you can help, if you’re so inclined.

I've been fiddling about with an idea lately, looking at how higher-kinded types can be represented in such a way that we can reason with them in Rust here and now, without having to wait a couple years for what would be a significant change to the language and compiler.

There have been multiple discussions on introducing higher-ranked polymorphism into Rust, using Haskell-style Higher-Kinded Types (HKTs) or Scala-looking Generalised Associated Types (GATs). The benefit of higher-ranked polymorphism is to allow higher-level, richer abstractions and pattern expression than just the rank-1 polymorphism we have today.

I first learned Rust back in 2014, before it was stable. Rust is definitely a very interesting language so I have decided to revisit it by programming a simple neural network. For comparison, I also implemented the network in C++, the language I'm looking to replace.

GitHub repository: https://github.com/JasonShin/functional-programming-jargon.rs
Functional programming (FP) provides many advantages, and its popula...

Rust 1.36 was released on the 4th of July and includes a bunch of new stuff. This blog post is about one newly stable feature in Cargo: --offline.

Let’s get deep into std::alloc! The very basic need for any program to compile and execute is having access to either physical or virtual memory. An allocator is responsible for providing such access. You can think of an allocator as a service: it takes some sort of request and either gives back a pointer to a block of memory or an error. In Rust, a request is a Layout, i.e. some metadata about how much space the memory we want is supposed to take up.
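
A small example of talking to the allocator directly through that request/response model (normally you would let `Vec`, `Box`, and friends do this for you):

```rust
use std::alloc::{alloc, dealloc, Layout};

fn main() {
    // The "request": 16 bytes, aligned to 8.
    let layout = Layout::from_size_align(16, 8).expect("invalid layout");
    unsafe {
        let ptr = alloc(layout);
        if ptr.is_null() {
            // Allocation failed; a real program might call handle_alloc_error(layout).
            return;
        }
        // ... use the memory ...
        dealloc(ptr, layout);
    }
}
```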

This release brings many changes, including the stabilization of the Future trait, the alloc crate, the MaybeUninit<T> type, NLL for Rust 2015, a new HashMap<K, V> implementation, and --offline support in Cargo. Read on for a few highlights, or see the detailed release notes for additional information.

rust-analyzer is an experimental compiler frontend for the Rust programming language. The ultimate goal for this project is to provide the perfect IDE experience for Rust, with all IDE features working flawlessly while editing code. This post talks about what happened to rust-analyzer in between the all-hands and today, discusses future plans, and also announces the rust-analyzer Open Collective.

In 2018, the Mercurial project decided to use Rust to improve performance and maintainability of previous high-performance code. We have faced some interesting challenges when bridging the Python implementation with the new Rust code, and this is one that I have not found any literature about.

python

This is a subjective, primarily developer-ergonomics-based comparison of the three languages from the perspective of a Python developer, but you can skip the prose and go to the code samples, the performance comparison if you want some hard numbers, the takeaway for the tl;dr, or the Python, Go, and Rust diffimg implementations.

go python

A while back, I asked on Twitter what people found confusing in Rust, and one of the top topics was “how the module system maps to files”. I remember struggling with that a lot when I first started Rust, so I’ll try to explain it in a way that makes sense to me.

The two languages that I spent most of my time daydreaming about writing code in are Rust and Zig. Would the lack of features in Zig make me more or less productive than with Rust’s feature overload? Which language is more enjoyable to use for writing a small, self-contained computer graphics project? To find out, I decided to implement the same simple project in both languages: a small ray tracer, following the book Ray Tracing in One Weekend.

It has been literally years since I last posted to this blog. I have been doing a bunch of Rust compiler work. One big feature has been deployed: Non-Lexical Lifetimes (hereafter denoted “NLL”).

The motivation for this blog post: The next version of Rust, 1.36, is going to have NLL turned on for the 2015 edition. Going forward, all editions of Rust will now use NLL.

Summary: Closures are a combination of a function pointer (fn) and a context. A closure with no context is just a function pointer. A closure which has an immutable context belongs to Fn. A closure which has a mutable context belongs to FnMut. A closure that owns its context belongs to FnOnce.
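
A compact example of all three cases:

```rust
fn call_fn(f: impl Fn()) { f() }
fn call_fn_mut(mut f: impl FnMut()) { f() }
fn call_fn_once(f: impl FnOnce()) { f() }

fn main() {
    let name = String::from("world");

    // Immutable context: only reads `name`, so it implements Fn.
    call_fn(|| println!("hello, {}", name));

    // Mutable context: mutates `count`, so it is FnMut (but not Fn).
    let mut count = 0;
    call_fn_mut(|| { count += 1; });

    // Owns its context: moves `name` in, so it can only be called once (FnOnce).
    call_fn_once(move || drop(name));

    println!("count = {}", count);
}
```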

Disclaimer: If you are learning Rust, take this with a grain of salt. I’m learning Rust too and I may be utterly wrong in my guesses. The more I read and write Rust, the more I realize that Rust consists of two (three, if macros count) languages.

Over the past month we've been hard at work to add time support to the Runtime crate. One of the things we've had to think about has been examples. Which means we've had a chance to become intimately familiar with the good and less good parts of the std::time API.

In this post we'll look at the std::time API, and some of the proposed changes to smooth things out a bit. Also disclaimer: I've been involved with these proposals, hehe.
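
For context, here is the current std::time API at its most basic: measuring elapsed time with `Instant` and `Duration`.

```rust
use std::thread;
use std::time::{Duration, Instant};

fn main() {
    // Measure elapsed wall-clock time.
    let start = Instant::now();
    thread::sleep(Duration::from_millis(250));
    let elapsed = start.elapsed();
    println!("slept for {:?} ({} ms)", elapsed, elapsed.as_millis());
}
```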

We initially began replacing a small component of our stack using Rust, but it quickly became clear that a larger effort would allow a great reduction in complexity. In the end, all the C, C++, and Python components of the service were rebuilt, with Rust used from task loading through to dispatching GPU operations.

Sometimes you mean it. Other times you really don't. It can be a bit of a headscratcher, but it is not a particularly complicated situation, just easy to stumble into on a tired afternoon. In the end it all comes down to ensuring you're being purposeful about what you're iterating over. I'll take a relatively brief dive into what can cause this, and how you can get back to iterating over what you want to iterate over.

PingCAP is creating a series of training courses on writing distributed systems in Rust and Go. These courses consist of:

Practical Networked Applications in Rust. A series of projects that incrementally develop a single Rust project from the ground up into a high-performance, networked, parallel and asynchronous key/value store. Along the way various real-world and practical Rust development subject matter are explored and discussed.

Distributed Systems in Rust. Adapted from the MIT 6.824 distributed systems coursework, this course focuses on implementing important distributed algorithms, including the Raft consensus algorithm, and the Percolator distributed transaction protocol.

As Rust's async story is evolving, so is Rust's streaming story. In this post we'll take a look at how Rust's streaming model works, how to use it effectively, and where things are heading in the future.

Continuing the standard library study, it’s time for Cell<T>!
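
A small example of what `Cell` enables: mutation through a shared reference, for cheap `Copy` values.

```rust
use std::cell::Cell;

struct Counter {
    // Cell allows mutation without requiring &mut self.
    hits: Cell<u32>,
}

impl Counter {
    fn record(&self) {
        // Note: &self, not &mut self.
        self.hits.set(self.hits.get() + 1);
    }
}

fn main() {
    let c = Counter { hits: Cell::new(0) };
    c.record();
    c.record();
    println!("hits = {}", c.hits.get());
}
```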

We’re approaching the 9th anniversary of the day Graydon Hoare (and numerous contributors) first revealed to the world the newly-designed Rust programming language. So we thought it’d be a good time to assess our current landscape.

Hoare graciously agreed, sharing his thoughts on everything from the state of systems programming, to the difficulty of defining safety on ever-more complex systems — and whether we’re truly more secure today, or confronting an inherited software mess that will take decades to clean up.

We were experimenting with streams and I wanted to play around with them as well. There are some tokio implementations of async file-reading futures, but Linux filesystems before kernel 5.1 do not really support non-blocking file operations, so I thought let's have fun breaking things ourselves. As I mentioned, this is not really non-blocking I/O, especially since there are two ways to view futures in their current state.

I present a straight-forward design of a plugin interface using the Rust FFI.

In this blog article, I want to explore a problem I’ve been facing from time to time in luminance. The manual dispatch problem. The idea is simple: you are writing a crate and want to expose an API to people. You want them to know which type they can use with a given operation (let’s call it update). However, the actual implementation of this update function is not performed directly by your API but is deferred to a backend implementation. Some people usually like to do that with several crates; in my case, I really don’t care and let’s think in terms of types / modules instead.

I wanted to write an article about one aspect of Rust I really put off for a long while — lifetimes. They are one of the hardest parts about Rust to wrap one’s brain around. Many of us are simply not used to a compiler with a paradigm around memory ownership where such things are needed.

Lifetimes help the compiler make your code safer (i.e. less prone to crashing by using unexpected places in memory). Even if we don’t write them in our code, the compiler is smart enough to figure out your lifetimes for you under the covers. They are oftentimes your secret allies, so let's learn a bit about them.
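
A classic minimal example of where an explicit lifetime annotation is required: the compiler needs to know that the returned reference borrows from one of the inputs.

```rust
// 'a ties the returned reference to the lifetime of both arguments.
fn longest<'a>(a: &'a str, b: &'a str) -> &'a str {
    if a.len() >= b.len() { a } else { b }
}

fn main() {
    let x = String::from("longer string");
    {
        let y = String::from("short");
        let result = longest(&x, &y);
        println!("longest inside the block: {}", result);
        // Letting `result` escape this block would not compile, because it may
        // borrow from `y`, which is dropped at the end of the block.
    }
}
```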

ownership-and-borrowing lifetimes

For work-related reasons, I had to recently get up to speed on programming in Haskell. Before that, I had very little actual experience with the language, clocking in at probably less than a thousand lines of working code over a couple of years. The one thing that really enabled me to become (somewhat) productive was not even related to Haskell at all: it was Rust.

Here is a straightforward port of some easy code. randtable.c has a lookup table with seemingly-random numbers. This table is used by the following macros in bzlib_private.h

Or, more generally, how do you implement any trait that is outside of your crate, for a type that is also outside of your crate? Let's create a micro app that helps us explore the problem. We’ll create a simple struct, implement Display for that, then try to implement Display for a Vec of that struct. Once we understand the problem we’ll discuss a simple solution and how to make that solution more idiomatic.
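
Here is a minimal sketch of the newtype solution described above (the `Temperature`/`Readings` names are just examples):

```rust
use std::fmt;

struct Temperature(f64);

impl fmt::Display for Temperature {
    fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
        write!(f, "{:.1}°C", self.0)
    }
}

// We can't implement the foreign trait Display directly on the foreign type Vec,
// so we wrap the Vec in a local newtype and implement Display on that.
struct Readings(Vec<Temperature>);

impl fmt::Display for Readings {
    fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
        for t in &self.0 {
            writeln!(f, "{}", t)?;
        }
        Ok(())
    }
}

fn main() {
    let readings = Readings(vec![Temperature(21.5), Temperature(19.0)]);
    print!("{}", readings);
}
```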

Let’s explore a topic that has been quite foreign to me for a long time: macros.

In our regular hacksession, the current season ;), we are focusing on threading. Concurrency/multithreading is a really hard topic; it has a lot of very specific nomenclature, and there are different 'levels' of concurrency, one might say. I will start with the nomenclature, from the programmer's / OS perspective.

Earlier today, I tooted out a Rust question: how would you write a function to determine whether a Vec of integers contains all the same values or not? Anyway, the Fediverse is wonderful and full of helpful Rust friends – I ended up getting about a dozen solutions (none exactly the same, I don’t think?)
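
One possible answer among many (this particular sketch compares adjacent elements; it is not necessarily any of the solutions from the thread):

```rust
// True if every adjacent pair is equal; vacuously true for 0 or 1 elements.
fn all_same(values: &[i32]) -> bool {
    values.windows(2).all(|pair| pair[0] == pair[1])
}

fn main() {
    assert!(all_same(&[7, 7, 7]));
    assert!(!all_same(&[7, 7, 8]));
    assert!(all_same(&[])); // vacuously true for an empty slice
    println!("ok");
}
```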

I’m getting more and more used to thinking about Rust code in an idiomatic way, but I don’t think I’m comfortable enough to call myself a rustacean yet. To further my goal of oxidizing my brain with Rust knowledge, I decided to start working through Project Euler problems sequentially. I’ve recently finished the first 20 problems and I thought I’d share the highlights of what I learned about Rust along the way.

Rust doesn’t allow multiple impls of a trait on the same type. This rule keeps resolution transparent and reliable. It also has an ugly side effect: for every trait there can be only one blanket impl. The compiler is completely distrustful here. What if somebody somewhere created a structure that implemented both ToString and Clone? Should such a combination suddenly be forbidden? What about String and u32? This rule prevents the type hierarchy from sliding into a minefield of odd rules and breakages on every other dependency update.

The typestate pattern is an API design pattern that encodes information about an object's run-time state in its compile-time type. This pattern is so easy in Rust that it's almost obvious, to the point that you may have already written code that uses it, perhaps without realizing it. I haven't seen a detailed examination of the nuances of this pattern, so here's my contribution.
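
A minimal sketch of the pattern: the connection's state lives in its type, so calling `send` on a closed connection simply does not compile (the `Connection` API here is invented for illustration):

```rust
use std::marker::PhantomData;

struct Open;
struct Closed;

// The state is a type parameter; it costs nothing at run time.
struct Connection<State> {
    _state: PhantomData<State>,
}

impl Connection<Closed> {
    fn new() -> Connection<Closed> {
        Connection { _state: PhantomData }
    }
    // Opening consumes the closed connection and returns an open one.
    fn open(self) -> Connection<Open> {
        Connection { _state: PhantomData }
    }
}

impl Connection<Open> {
    fn send(&self, msg: &str) {
        println!("sending: {}", msg);
    }
    fn close(self) -> Connection<Closed> {
        Connection { _state: PhantomData }
    }
}

fn main() {
    let conn = Connection::new().open();
    conn.send("hello");
    let _closed = conn.close();
    // _closed.send("again"); // does not compile: no `send` on Connection<Closed>
}
```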

In this post, I want to describe lifetimes in a different way than what I learned from the RFC. Audience: you may already have read the Rust Book. Nice if you took a compiler course.

I have started a little experiment in porting bits of the widely-used bzip2/bzlib to Rust. I hope this can serve to refresh bzip2, which had its last release in 2010 and has been nominally unmaintained for years.

I hope to make several posts detailing how this port is done. In this post, I'll talk about setting up a Rust infrastructure for bzip2 and my experiments in replacing the C code that does CRC32 computations.

This blog post is a manifestation of a problem that has been floating around in my head for quite a while now. It is about the seemingly incompatible idea of fully embracing Rust's custom derive system in an application that puts a strong focus on a hexagonal architecture.

To discuss this problem, I am going to first write about both concepts individually. Feel free to skip over those sections if you are already familiar with the topics. The blog post finishes off with some ideas on how Rust could be extended to better support these kinds of use cases.

In my previous post I said that the lang team would be making our final decision about the syntax of the await operator in the May 23 meeting. That was last Thursday, and we did reach a decision. In brief, we decided to go forward with the preliminary proposal I outlined earlier: a postfix dot syntax, future.await. For more background, in addition to the previous post on my blog, you can read this write up about some of the trade offs from April.

A brief run-down of how to wrap a Go library in a CGO FFI to enable its functions to be called by Rust.

In this post I'll show you some code I wrote for paginating over a Vec collection in Rust. I needed this for a CLI tool I wrote which was meant to display all the vector entries retrieved from a remote server. In most cases, I expected to receive a lot of results, so to display them in a terminal efficiently, I couldn't reasonably render them all. I decided I would page the results.
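
One simple way to express pagination over a `Vec` with the standard library (not necessarily the post's implementation) is the `chunks` method:

```rust
fn main() {
    let entries: Vec<u32> = (1..=10).collect();
    let page_size = 3;

    // chunks() yields non-overlapping pages; the last one may be shorter.
    for (page, items) in entries.chunks(page_size).enumerate() {
        println!("page {}: {:?}", page + 1, items);
    }
}
```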

The highlight of this release is the implementation of the FnOnce, FnMut, and Fn closure traits for Box<dyn FnOnce>, Box<dyn FnMut>, and Box<dyn Fn> respectively. Additionally, closures may now be coerced to unsafe function pointers. The dbg! macro introduced in Rust 1.32.0 can now also be called without arguments. Moreover, there were a number of standard library stabilizations. Read on for a few highlights, or see the detailed release notes for additional information.

This repository showcases some examples of tricky Rust code that I have encountered during my years working with a variety of advanced macro libraries in Rust (my own and others').

Less than a month ago, I announced Stacked Borrows 2. In particular, I hoped that that version would bring us closer to proper support for two-phase borrows. Turns out I was a bit too optimistic! Last week, @Manishearth asked on Zulip why Miri rejected a certain program, and it turned out that the issue was related to two-phase borrows: in combination with interior mutability, behavior wasn’t always what we wanted it to be. So, I went back to the drawing board and tried to adjust Stacked Borrows.

In the end, I decided to give up on “proper” support for two-phase borrows for now, which I explained here. But I also made some tweaks to Stacked Borrows that affect all accesses (not just two-phase borrows), and that’s what this post is about. I am referring to this as “Stacked Borrows 2.1”.

Rust's infamous mem::uninitialized method has been deprecated in today's nightly build. Its replacement, MaybeUninit, has been stabilized. If you are using the former, you should migrate to using the latter as soon as possible (probably when it hits stable in 6 weeks). This was done because it was determined that mem::uninitialized was fundamentally broken, and could not be made to work.
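
A minimal sketch of what the migration looks like on a current toolchain (MaybeUninit::write was added later than this announcement):

```rust
use std::mem::MaybeUninit;

fn main() {
    // Instead of mem::uninitialized(), reserve the space explicitly...
    let mut value: MaybeUninit<u32> = MaybeUninit::uninit();
    // ...initialize it...
    value.write(42);
    // ...and only then assert that it really is initialized.
    let value = unsafe { value.assume_init() };
    assert_eq!(value, 42);
}
```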

An exploration of how I wrote a C++ binding API for my Rust library.

When I joined Avast about a year and a half ago, I did it because of two things: I wanted to work on interesting problems and I wanted to share the Wisdom of Rust with few more people.

Originally, I was hired because of my experience with writing software for „bigger embedded“ (think a home router or Raspberry Pi ‒ it runs a Linux kernel, has a shell, but the file system is a bad joke, your libc has a bunch of weird “features” that are really bugs, you really need to think twice not to waste RAM needlessly and you have to cross-compile), low-level networking knowledge and C++.

But I don’t enjoy writing C++ (not speaking about the libc features). And I have other skills I like to practice too. So I would drop an occasional comment about how this or that would be better done in Rust. I’ve done internal courses and workshops about Rust for whoever was interested, in the hope more people would start asking to be allowed to do stuff in Rust and I could participate in such projects.

Recently landed in nightly is the ability for Cargo to execute rustc in a "pipelined" fashion, which has the promise of faster build times across the ecosystem. This support is turned off by default, and the Cargo team is interested in gathering more data and information about this feature, and that's where you come in! If you're interested in faster compiles, we're interested in getting your feedback on this feature!

The FlatBuffers project is an extremely efficient schema-versioned serialization library. In this tutorial, you’ll learn how to use it in Rust.

If your application needs to iterate over a bunch of items from different sources or arrays, someone with a C/C++ background might copy all the items into a single vector and iterate over that vector. This strategy incurs the cost of allocating heap memory for the consecutive vector buffer. Instead, keep the data where it is and chain it together to form an iterator over a virtual array. The following Rust code demonstrates chaining multiple arrays into a single iterator, without any additional allocation of a vector buffer on the heap.
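
The post's own code isn't included in this excerpt; a comparable sketch:

```rust
fn main() {
    let measurements_a = [1, 2, 3];
    let measurements_b = vec![4, 5];
    let measurements_c = [6];

    // chain() glues the three sources into one iterator; no new buffer
    // is allocated and the data stays where it already lives.
    let total: i32 = measurements_a
        .iter()
        .chain(measurements_b.iter())
        .chain(measurements_c.iter())
        .sum();

    assert_eq!(total, 21);
}
```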

A few years ago, mainly due to performance reasons, we started rewriting specific back-end services from Python to Rust, with great success. Now, for the sake of ease of development and testing, we are exploring the idea of moving parts of our C/C++ code base to Rust, too.

In order to do so, instead of re-writing everything in one swoop, we decided to try integrating Rust into our existing code base.

Following is a summary of our experiments, and a skeleton for writing a Rust library and calling it from a C/C++ application.
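
A minimal skeleton of that shape, with illustrative names rather than the ones from the post:

```rust
// src/lib.rs of a crate built as a C-compatible library; the crate would
// set crate-type = ["staticlib"] or ["cdylib"] under [lib] in Cargo.toml.
// The function name and signature here are illustrative only.
#[no_mangle]
pub extern "C" fn add_numbers(a: i32, b: i32) -> i32 {
    a + b
}

// A C or C++ caller would declare it roughly as:
//   int32_t add_numbers(int32_t a, int32_t b);
// and link against the generated library.
```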

The idea of a zero cost abstraction is very important to certain programming languages, like Rust and C++, which intend to enable users to write programs with excellent performance profiles with relatively little effort. Since this idea is fundamental to the design of Rust and my work, I want to investigate, for a moment, what exactly a zero cost abstraction even is.

On May 15th, 2015, Rust was released to the world! After 5 years of open development (and a couple of years of sketching before that), we finally hit the button on making the attempt to create a new systems programming language a serious effort!

A quick tour through my 4+ years with Rust.

Rust is an amazing language with an even better ecosystem. Many design decisions of Rust make it a great fit to add new functionality to existing C/C++ systems or gradually replace parts of those systems!

When I tried to make a C++ API for a Rust library, I found that binding from C/C++ to Rust is better documented and has a smoother experience than binding from Rust to C/C++.

This post is about compiling Rust code, the resulting executables, and the handling of the corresponding debug symbols and core files. It highlights the importance of debug symbols for debugging and how to split them off the binary before shipping to customers.

The Error::type_id method was recently stabilized as part of Rust 1.34.0. This point release destabilizes it, preventing any code on the stable and beta channels from implementing or using it, awaiting future plans that will be discussed in issue #60784.

Many languages feature “optional” parameters to function arguments: if you provide a value, it will be used, but if you don’t, a default value will be used instead. How to do that in Rust? Well, in Rust you have to provide all the parameters a function requests. You can, however, use Option to do two things: make passing a value not mandatory, and provide a default value when it is omitted.
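
A small example of that approach (names are illustrative):

```rust
// Passing None falls back to a default; passing Some(value) overrides it.
fn greet(name: &str, greeting: Option<&str>) -> String {
    let greeting = greeting.unwrap_or("Hello");
    format!("{greeting}, {name}!")
}

fn main() {
    assert_eq!(greet("Ferris", None), "Hello, Ferris!");
    assert_eq!(greet("Ferris", Some("Howdy")), "Howdy, Ferris!");
}
```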

This post will be an extension of the debates found in the Rust forums, specifically here and here. A lot is being said in those threads, and there's a certain number of duplicated posts that are drowning out potentially valuable information and perspectives. I've written several comments in those debates.

I’ve been looking for this blog post everywhere, but it doesn’t exist, so I guess it’s my turn to write about Some Fun with Rust. Let’s say you have a recursive, acyclic data structure. Now let’s say you want to iterate over the values of the root node and all its children, recursively, so that you get the sequence [1, 2, 3, 4, 5, 6, 7].

We live in a great era for language design. Within the last 5-10 years, several innovative languages have come out and won over the hearts of many developers with a newfound focus on memory safety (Rust), runtime interoperability (JVM: Kotlin, V8: Typescript, BEAM: Elixir), first class concurrency (Go, Pony), dependent types (Idris), Language oriented Programming (Racket) and many more inspired features. In this spirit, I have decided to throw my hat into the ring as well and create my own language for fun.

The Rust compiler dynamically links the executable against the glibc available on the system. Hence, if you compile your software against a newer version of glibc (say 2.19) than the one available where you run the executable (say the host only has 2.14), it may not work.

The cleanest option is not to dynamically link against glibc at all: it is possible to compile a Rust binary that statically links musl. To do so it is sufficient to compile against the correct target, usually with cargo build --target x86_64-unknown-linux-musl.

Another possibility is to compile in an environment with an “old enough” version of glibc. This is usually done using Docker, and indeed there is a whole project that aims to provide a set of “zero setup” Docker images.

As I’ve been writing Rust code more, I’ve noticed how few boolean types I’m using in my code. Instead, I’m using Rust’s powerful enums in 90% of cases where I would have reached for a boolean in another language.
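
For instance, a minimal sketch of the substitution:

```rust
// Instead of `is_admin: bool`, an enum names both states explicitly and
// leaves room for more variants later.
enum Role {
    Admin,
    Member,
}

fn can_delete_posts(role: &Role) -> bool {
    matches!(role, Role::Admin)
}

fn main() {
    assert!(can_delete_posts(&Role::Admin));
    assert!(!can_delete_posts(&Role::Member));
}
```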

Explains in which way the planned `await` resembles a function call and how apparent contradictions in this model can be dispelled.

This is an announcement regarding the resolution of the syntax for the await operator in Rust. This is one of the last major unresolved questions blocking the stabilization of the async/await feature, a feature which will enable many more people to write non-blocking network services in Rust. This post contains information about the timeline for the final decision, a proposal from the language team which is the most likely syntax to be adopted, and the justification for this decision.

Motivation: The Nintendo 3DS uses an ARM standard peripheral, the CoreLink DMA engine, for copying memory among DRAM and memory-mapped peripherals.

This DMA engine, unlike most other IO devices on the 3DS, actually has its own instruction set where the CPU merely uploads a stream of instructions for the peripheral to execute (other examples of this, on the 3DS, are the DSP audio processor and the PICA graphics chip).

I’d like to compile and run DMA instructions in Rust, in a hopefully ergonomic manner, without needing to use any dynamic memory allocation. This imposes a particular constraint that I need to know the number of instruction bytes at compile time so I can use an appropriately-sized array.

Many years ago, Peter Norvig wrote a beautiful article about creating a lisp interpreter in Python. It’s the most fun tutorial I’ve seen, not just because it teaches you about my favorite language family (Lisp), but because it cuts through to the essence of interpreters, is fun to follow and quick to finish.

Recently, I had some time and wanted to learn Rust. It’s a beautiful systems language, and I’ve seen some great work come out from those who adopt it. I thought, what better way to learn Rust, than to create a lisp interpreter in it?

Hence, Risp — a lisp in rust — was born. In this essay you and I will follow along with Norvig’s Lispy, but instead of Python, we’ll do it in Rust 🙂.

XV is a terminal hex viewer that I am working on. It is the first “real” Rust project that I am working on, coming from a Java background.

Java has exceptions. Both checked exceptions, identified by having the Exception class as a parent class, and unchecked exceptions, which have RuntimeException as a parent class.

Rust does not have exceptions. Rust has panics, which, depending on build-time configuration, are either catchable when they unwind the stack, or only produce a backtrace, or just immediately abort the process. This is controlled by the “panic” setting in the “profile” sections of your Cargo.toml file.
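
A small example of the unwinding case (assuming the default panic = "unwind" profile setting):

```rust
use std::panic;

fn main() {
    // With panic = "unwind", a panic can be caught at a boundary like
    // this one; with panic = "abort" this call would never return.
    let result = panic::catch_unwind(|| {
        panic!("boom");
    });
    assert!(result.is_err());
}
```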

I recently published a post detailing a vision for the next few years (hah! Not so recently now, this took a lot longer than expected). Here I'll get into more detail about 2019.

Python is a great programming language, but sometimes it can be a bit of a slowcoach when it comes to performing certain tasks. That’s why developers have been building C/C++ extensions and integrating them with Python to speed up performance. However, writing these extensions is a bit difficult because these low-level languages are not type-safe and so don’t guarantee defined behavior, which tends to introduce bugs around memory management. Rust ensures memory safety and hence can easily prevent these kinds of bugs.

python

If you're interested in the possibilities that hosting your own private or internal crates brings, then this is incredibly good news for you: Cloudsmith are proud to provide the World's first commercially available public and private Cargo registry hosting, with ultra-fast and secure delivery of your Rust packages, alongside all of the usual Enterprise-grade features that we provide.

Recently, I have significantly updated Stacked Borrows in order to fix some issues with the handling of shared references that were uncovered in the previous version. In this post, I will describe what the new version looks like and how it differs from Stacked Borrows 1. I assume some familiarity with the prior version and will not explain everything from scratch.

For me, Rust takes a stroll down the memory lane above, and picks up and drives home the best experiences from all those languages.

During my final term at UWaterloo I took the CS444 compilers class, with a project to write a compiler from a substantial subset of Java to x86, in teams of up to three people with a language of the group’s choice. This was a rare opportunity to compare implementations of large programs that all did the same thing, written by friends I knew were highly competent, and a fairly pure opportunity to see what difference design and language choices could make. I gained a lot of useful insights from this. It’s rare to encounter such a controlled comparison of languages; it’s not perfect, but it’s much better than most anecdotes people use as the basis for their opinions on programming languages.

I’ve been wanting to play around with the cool spinning Pikachu demo everyone was talking about. Sadly, it used termion to do its magic, which meant that unfortunately it wouldn’t work for me. Termion has been a boon for Rust, with lots of folks using it to create terminal applications. Unfortunately, as a Windows user, I know there’s a good chance that if the crate depends on termion that’s the end of the line for me, as termion apps just don’t work in Windows. Surely, I thought, there must be a better way, but I never managed to find one. Enter crossterm.

This patch release fixes two false positives and a panic when checking macros in Clippy. Clippy is a tool which provides a collection of lints to catch common mistakes and improve your Rust code.

Every once in a while I'll be involved in a conversation about dependency management and versions, often at work, in which the subject of “dependency hell” will come up. If you're not familiar with the term, then I encourage you to look it up. A brief summary might be: "The frustration that comes from dealing with application dependency versions and dependency conflicts". With that in mind, let's get a little technical about dependency resolution.

Each year the Rust community comes together to set out a roadmap. This year, in addition to the survey, we put out a call for blog posts in December, which resulted in 73 blog posts written over the span of a few weeks. The end result is the recently-merged 2019 roadmap RFC. To get all of the details, please give it a read, but this post lays out some of the highlights.

After casting around for a new platform to learn recently, I’ve decided to dive into Rust. Being mostly familiar with untyped languages like Ruby and JavaScript, it’s interesting to learn a statically typed language and see how it changes how one writes programs. There’s a common misconception amongst dynamic typing fans that static typing means you write the same programs, they’re just more verbose and come with more restrictions. And while there is certainly a cost to only being allowed to write type-safe programs, a good type system actually lets you write programs you cannot write in dynamic languages. In Rust, generic return values are a good example of this.
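
A hedged example of the idea, using a hypothetical helper rather than anything from the post:

```rust
use std::str::FromStr;

// The *caller* picks the return type; the compiler then infers which
// FromStr implementation to use. A dynamic language has no direct
// equivalent of this kind of return-type-driven dispatch.
fn parse_field<T: FromStr>(raw: &str) -> Option<T> {
    raw.trim().parse().ok()
}

fn main() {
    let port: Option<u16> = parse_field("8080");
    let enabled: Option<bool> = parse_field("true");
    println!("port = {port:?}, enabled = {enabled:?}");
}
```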

A collection of software engineering techniques for effectively expressing intent with Rust.

I’ve already talked about how I like how enums are used in Rust. They make it easy to express multiple states and the state’s related data. One place this is excellently utilized is error handling.
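
A minimal sketch of what that looks like (illustrative names, not from the post):

```rust
use std::fmt;

// One enum can represent every way an operation might fail, with each
// variant carrying its own data.
#[derive(Debug)]
enum ConfigError {
    Missing(String),
    Invalid { key: String, value: String },
}

impl fmt::Display for ConfigError {
    fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
        match self {
            ConfigError::Missing(key) => write!(f, "missing key {key}"),
            ConfigError::Invalid { key, value } => write!(f, "invalid value {value} for {key}"),
        }
    }
}

fn lookup(key: &str) -> Result<u32, ConfigError> {
    match key {
        "retries" => Ok(3),
        _ => Err(ConfigError::Missing(key.to_string())),
    }
}

fn main() {
    match lookup("timeout") {
        Ok(v) => println!("value: {v}"),
        Err(e) => println!("error: {e}"),
    }
}
```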

The majority of my async programming experience is on iOS and let me tell you, life is good. You can easily dispatch work to background threads. You can bring work back to the main thread. You can mark your classes as delegates and when you need to handle some event the OS will use a magic pre-existing thread pool to invoke your method and you can do whatever you like. It works perfectly almost all the time, except for when it doesn’t because of race conditions or it crashes due to concurrency. Life is good.

Rust is less tolerant about the crashing part. While I agree that crashing is bad in principle, avoiding it has significant ramifications for how you can write async code at all. Recently I’ve been finding out what the differences are. Obviously this means I’m more of a noob than an expert, but I’m currently in a good position to point out what the confusing parts are and what the Rust solutions seem to be. (But I’m a noob so take it with a grain of salt.)

I’ve been diving into Rust for the last couple of months, after my colleague started talking about it. I’ve been wanting to learn a lower level language, but C++ or something of the like have always seemed too daunting for me to even start.

I’d heard of Rust before, and great things too, but hadn’t set apart time to look into it. I finally took the dive. And boy, am I glad I did.

A lot of people talk about the borrowing system of Rust, or how fast it is, or the strict type system. All of which are great things, but that’s not what I’m going to write about here. I’m excited about enums.

For the fourth World’s Simplest Bytecode Interpreter, Kazanov goes into a bit of esoterica without discussing why: he has “authentic bytecode”; he’s doing C programming and he dense-packs his instructions into a single 16-bit instruction. That’s fine if you’re emulating a real chunk of hardware, but for our purposes… I really don’t care.

This article teaches the fundamentals of parser combinators to people who are already Rust programmers. It assumes no other knowledge, and will explain everything that isn't directly related to Rust, as well as a few of the more unexpected aspects of using Rust for this purpose. It will not teach you Rust if you don't already know it, and, if so, it probably also won't teach you parser combinators very well.

Kazanov’s fifth World’s Simplest Bytecode Interpreter (see the Fourth Simplest) isn’t quite so simple: it is an implementation of Thompson’s most basic regular expression table-driven DFA handler. It handles a complete set of Kleene’s Regular Expressions without the closure operator (no * operator), and it does so by exploiting the host language stack to handle nested recursion of alternatives.

This is a much different beast from the previous four, and it requires a little thinking to grok why it’s so different. For one thing, we’re going to be jumping back and forth inside the program: this is our first bytecode interpreter with a Jmp instruction. More than that, we may want to jump backwards, so we can’t use ‘usize’ for that; we’ll need to use ‘isize’. There are times we want to add a negative offset, after all!

The scalar multiplication in a vector space is written kv in math, where k is a scalar value (e.g. a number) and v is a vector. It would be nice to write k * v in programming languages, to stay close to the familiar notation. Object-oriented languages typically only support calling methods on the first argument. But the scalar normally doesn't know about vectors, so it can't easily do that.
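
In Rust this works out because the trait implementation can live on the scalar side; a minimal sketch:

```rust
use std::ops::Mul;

#[derive(Debug, PartialEq, Clone, Copy)]
struct Vec2 { x: f64, y: f64 }

// Implementing Mul<Vec2> for f64 makes `k * v` compile even though f64
// knows nothing about vectors in general; the local Vec2 type anchors it.
impl Mul<Vec2> for f64 {
    type Output = Vec2;
    fn mul(self, v: Vec2) -> Vec2 {
        Vec2 { x: self * v.x, y: self * v.y }
    }
}

fn main() {
    let v = Vec2 { x: 1.0, y: 2.0 };
    assert_eq!(2.0 * v, Vec2 { x: 2.0, y: 4.0 });
}
```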

Associated types in Rust are similar to generic types; however, associated types limit what a user of the trait can choose, which in turn makes the code easier to manage. Where a trait's generic parameters can vary per use, a type that depends on the implementing type itself can be expressed with the associated type syntax. By comparing associated and generic types, you can get a better understanding of associated types.
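
A small illustrative comparison point (not from the post):

```rust
// With a generic parameter a type could implement Stack<u32>, Stack<String>,
// and so on; with an associated type there is exactly one Item per
// implementation, which callers can still name as <S as Stack>::Item.
trait Stack {
    type Item;
    fn push(&mut self, item: Self::Item);
    fn pop(&mut self) -> Option<Self::Item>;
}

struct VecStack(Vec<u32>);

impl Stack for VecStack {
    type Item = u32;
    fn push(&mut self, item: u32) { self.0.push(item); }
    fn pop(&mut self) -> Option<u32> { self.0.pop() }
}

fn main() {
    let mut s = VecStack(Vec::new());
    s.push(1);
    assert_eq!(s.pop(), Some(1));
}
```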

This post will be different from the previous ones, since I’m going to present some of the early results of my work as a PhD student at the Prosecco team in ...

Lately, the compiler team has been changing up the way that we work. Our goal is to make it easier for people to track what we are doing and – hopefully – get involved. This is an ongoing effort, but one thing that has become clear immediately is this: the compiler team needs more than coders.

The biggest unresolved question regarding the async/await syntax is the final syntax for the await operator. There’s been an enormous amount of discussion on this question so far; a summary of the present status of that discussion and the positions within the language team is coming soon. Right now I want to separately focus on one question which impacts that decision but hasn’t been considered very much yet: for loops which process streams.

Recently I've been using Rust to build a server for the new 7-piece Syzygy endgame tablebases. Using Rust was quite enjoyable and I plan to use it for many future projects. This series is intended to order and share my thoughts, and as a primer to discuss some open questions I have.

Recently Rust has introduced a couple of new features, and the one that caught my eye in particular was std::iter::from_fn, which lets you make an iterator from a function. That is most of what that macro was trying to do, so I thought I would try to convert the various places I was using the macro to use the new function instead…
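
A minimal example of std::iter::from_fn itself (not the author's macro conversion):

```rust
fn main() {
    // from_fn builds an iterator out of a closure that yields Some(item)
    // until it returns None.
    let mut count = 0;
    let counter = std::iter::from_fn(move || {
        count += 1;
        if count <= 5 { Some(count) } else { None }
    });
    assert_eq!(counter.collect::<Vec<_>>(), vec![1, 2, 3, 4, 5]);
}
```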

I’m currently working on a (private in 2019, public in July 2019) project which is a NoSQL database written in Rust. To help us manage the correctness and lifecycle of database entries, I have been using advice from the Rust Embedded Group’s Book. As I have mentioned in the past, state machines are a great way to design code, so let’s plot out the state machine we have for Entries.

This is the second part in a two part series on writing a pub/sub server in Rust using Sonr. We will jump straight in building the publisher. This is the biggest piece of code so far in this project.

The largest feature in this release is the introduction of alternative cargo registries. The release also includes support for ? in documentation tests, some improvements for #[attribute(..)]s, as well as the stabilization of TryFrom. Read on for a few highlights, or see the detailed release notes for additional information.
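
A tiny illustration of the TryFrom part:

```rust
use std::convert::TryFrom;

fn main() {
    // TryFrom is for conversions that can fail: the value might not fit.
    assert_eq!(u8::try_from(42u16).unwrap(), 42u8);
    assert!(u8::try_from(300u16).is_err());
}
```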

LLD is generally much faster than the GNU ld.bfd and ld.gold linkers, so you would think it has been pretty well optimised. You might then be surprised to discover that a 36-line patch dramatically speeds up linking of Rust debug builds, while also shrinking the generated binaries dramatically, both in simple examples and large real-world projects.

This is the first part in a two part series where we explore Sonr by writing a pubsub server in Rust using Sonr.

It’s no secret to people who know me that I’m a huge fan of the Rust programming language. I could talk for hours about the brilliance of the ownership system, my irrational longing for natively compiled languages without garbage collection, or the welcoming community that finally moved me to take a more active part in open source projects. But for a start, I just want to highlight one of my favourite features: Macros.

This is the second part of my series on writing a JavaScript evaluator. I’m going to talk about my project developing a JavaScript evaluator in Rust. This post briefly introduces parsing, which is built on top of the results from the lexer in the first post. Then I will cover elements of evaluating the abstract syntax tree (AST).

This is a continuation of the Making Arc more atomic post. In short, ArcSwap is a place where you can atomically store and load an Arc, similar to RwLock<Arc<T>> but without the locking. It’s a good tool if you have some data that is very frequently read but infrequently modified, like configuration or an in-memory database that answers millions of queries per second, but is replaced only every 5 minutes. The canonical example for this is routing tables ‒ you want to read them with every passing packet, but you change them only when routing changes.
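
A minimal sketch of the usage, assuming the arc-swap crate as a dependency (not code from the post):

```rust
use std::sync::Arc;
use arc_swap::ArcSwap;

fn main() {
    // The current configuration, shareable between many reader threads.
    let config = ArcSwap::new(Arc::new(String::from("v1")));

    // Readers take a cheap snapshot without taking a lock.
    assert_eq!(config.load().as_str(), "v1");

    // A writer atomically swaps in a new Arc; snapshots taken earlier
    // keep pointing at the old value until they are dropped.
    config.store(Arc::new(String::from("v2")));
    assert_eq!(config.load().as_str(), "v2");
}
```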

Making a Ferris the Rustacean hat.

When we shipped Seq 5.0 back in November, our new storage engine was compiled against Rust's unstable nightly channel. As of Seq 5.1, we can instead use the supported stable channel. That feels like a bit of a milestone so I'd like to share a few details about our journey from nightly to stable, and celebrate the progress the community has made on the language, libraries, and tooling over the last twelve months that made that journey painless for us.

There are a few good resources on the internet about using the Rust FFI to expose functions written in Rust to other languages. However, I found little information about passing data types between languages. To help remedy this situation, I describe in this post a simple Rust library that I wrote to explore how to pass complex data types from Rust to C.
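
A small, illustrative example of the kind of thing involved (not the post's actual types): a C-compatible struct and a function C code can call.

```rust
// #[repr(C)] guarantees a field layout C understands.
#[repr(C)]
pub struct Point {
    pub x: f64,
    pub y: f64,
}

#[no_mangle]
pub extern "C" fn point_length(p: Point) -> f64 {
    (p.x * p.x + p.y * p.y).sqrt()
}

// The matching C declarations would look roughly like:
//   typedef struct { double x; double y; } Point;
//   double point_length(Point p);
```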

The Firefox Application Services engineering team made the decision to use Rust to build cross-platform components for Firefox Sync, powering Firefox Accounts across many devices. They are implementing core business logic using Rust and wrapping it in a thin platform-native layer, such as Kotlin for Android and Swift for iOS.

In this post I will describe my latest findings from writing my own Javascript lexer in Rust-lang. I will start by briefly describing what lexing is. Then, I will continue explaining how to implement state machines in Rust-lang. Next, I talk about how to use state machines for Javascript lexing. Last but not least, I cover further performance optimizations of my lexer.

An interesting thing about Gleam is that its compiler is written in Rust. I think that Rust is a sort of ML + C language. I like C since the developer is in the driver’s seat, driving with a manual transmission. I can’t explain it very well, but I have always seen C as a simple and powerful language, and I have always disliked C++. Knowing that I like ML and C, you might understand why I find Rust an interesting language. To sum up, we (Juan Bono and I) decided to do this interview with Louis Pilfold not only because of what Gleam is, but also because it is implemented in Rust.

While exploring Rust's standard libraries, I came across a beautiful feature of Rust - compile_error.
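
A minimal example of the macro:

```rust
// compile_error! turns a condition detectable at compile time into a
// build failure with a custom message, often guarded by cfg.
#[cfg(not(any(target_os = "linux", target_os = "macos", target_os = "windows")))]
compile_error!("this crate only supports Linux, macOS, and Windows");

fn main() {}
```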

When writing automated unit tests for your application you will probably need to use mocks at some point. Classical object-oriented programming languages such as PHP solve this with reflection where mock object types are created during test runtime. The code under test expects a certain interface or class and the test code passes mock objects that implement the interface or are a subclass.

I'm working in a team developing a big Rust project recently. The project has some features depending on time. We, the developers, want to be able to mock the time in test. In this post, I'll talk about the problems we have met, mostly related to Cargo.

Introducing Rust in an Enterprise Environment...

Running your unsafe code test suite in Miri has just gotten even easier: Miri is now available as a rustup component!

Futures make async programming in Rust easy and readable. Learn how to use futures by building them from scratch.

I posted the idea of a database WG on Twitter recently and it was met with a lot of excitement. There was also a post on Reddit recently that proposed the same idea, drawing on examples of where using Rust with databases is currently a painful experience. As part of this I would also want to work out a base charter to start the WG, as well as to set up when and how to have regular meetings to discuss roadmaps and the projects currently being worked on.

Recently on Twitter, someone asked for a practical explainer for PhantomData, and while I don't have that, I did want to share one place I have found PhantomData to be useful. This blog post is an overview of how I ended up using PhantomData in my builder patterns that require a generic type argument.
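
A hedged sketch of that kind of builder (hypothetical names, not the post's code):

```rust
use std::marker::PhantomData;

// The builder is generic over the response type it will eventually
// produce; nothing in the struct actually stores a T, so PhantomData
// records the parameter for the type system.
struct RequestBuilder<T> {
    url: String,
    _response_type: PhantomData<T>,
}

impl<T> RequestBuilder<T> {
    fn new(url: &str) -> Self {
        RequestBuilder { url: url.to_string(), _response_type: PhantomData }
    }

    fn url(&self) -> &str {
        &self.url
    }
}

struct UserList;

fn main() {
    let builder: RequestBuilder<UserList> = RequestBuilder::new("https://example.com/users");
    println!("{}", builder.url());
}
```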

For the last 15 years as a professional programmer I have worked mostly with dynamic languages. First Perl, then Python, and for the last 10 years or so, Ruby. I’ve also been writing Rust on the side for personal projects for nearly four years. Recently I started a new job and for the first time I’m writing Rust professionally. Rust represents quite a shift in language features, development process and tooling. I thought it would be interesting to reflect on that experience so far.

The most commonly used kinds of containers are arrays and maps. Pretty much any other container type can be built using those two, so that’s what we’ll build today! Of course, just like for Unq, we won’t be making simple replacements; instead we’ll be making the most minimal containers necessary for now and adding features later as needed, but we will make them allocator aware.

I'm giving a talk next month at our Rust Meetup about using Rust in production. I've been reflecting on my last few months using Rust after learning the language about a year ago. One of my most frustrating experiences tends to always be around the futures ecosystem, as that's where I often labour fruitlessly for hours before giving up on what I'm doing.

I do data engineering and software development work professionally, and these 2 areas are where I often find a lot of pain with using the language.

A few weeks ago I wanted to write something that takes csv files and writes them to a database. I used Apache Arrow's Rust library (which I've started contributing to this year) to do that. The idea was simple, Arrow has a CSV reader that can infer schema, so I map the schema's data types to a database's types, and then I sequentially write records in batches to the database.

I found the exercise quite painful, so I'd like to talk about databases and Rust.

A few weeks ago, I had the pleasure of attending the second annual Rust All Hands meeting, hosted by Mozilla at their Berlin office. The attendees were a mix of volunteers and corporate employees covering the full range of Rust development, including the compiler, language, libraries, docs, tools, operations, and community. Although I’m sure there will be an official summary of the meeting (like last year’s), in this article, I’ll cover a few things I was directly involved in. First, I’ll look at a feature many developers have wanted for a long time…

I fairly frequently get asked how to implement a linked list in Rust. The answer honestly depends on what your requirements are, and it's obviously not super easy to answer the question on the spot. As such I've decided to write this book to comprehensively answer the question once and for all. In this series I will teach you basic and advanced Rust programming entirely by having you implement 6 linked lists.

It’s a common pattern in the Rust ecosystem to have a function return self at the end in order to enable method chaining. This approach is often used in combination with the builder pattern, though it can also be applied to a wide variety of other situations. The example demonstrates the most straightforward of these cases (i.e. initializing and modifying an object in a single statement), but, as I’m going to demonstrate, this approach quickly breaks down when applied to a wider variety of use cases.
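
A minimal sketch of the pattern as described (illustrative names):

```rust
// Each setter consumes and returns self, so calls can be chained in a
// single statement.
#[derive(Default)]
struct Request {
    url: String,
    retries: u32,
}

impl Request {
    fn url(mut self, url: &str) -> Self {
        self.url = url.to_string();
        self
    }
    fn retries(mut self, n: u32) -> Self {
        self.retries = n;
        self
    }
}

fn main() {
    let req = Request::default().url("https://example.com").retries(3);
    assert_eq!(req.retries, 3);
}
```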

I recently finished a detailed review of hashbrown, which will likely become the new implementation for rust's std::collections::HashMap. One of the most surprising things I found was in the implementation of insert. It was doing something that was so offensive to people who care about collection performance that we had designed an entire API to help people avoid it: it did two lookups in the map. However, after some more discussion and review, I concluded that this implementation was reasonable. This post will try to cover why that is.

Recently I made a presentation about subtyping and variance in Rust for our local Vancouver Rust meetup, but I still think intuition was rather lost in the formalism, so here’s my shot at explaining it as intuitively as I can.

How to pick a function and make it a macro with added superpowers.

I've been using the proposed await! and Future features in nightly Rust, and overall, I really like the design. But I did run into one surprise: await! may never return, and this has consequences I didn't fully understand. Let's take a look.

Lately, I've been working on several real-world systems using Rust's async and tokio. As you can see on the areweasyncyet.rs site, this requires using nightly Rust and the experimental tokio-async-await library. I hope to talk more about these experiences soon! But today, I want to talk about channel APIs in Rust. A question was raised by @matklad on GitHub, "I've migrated rust-analyzer to crossbeam-channel 0.3, and the thing I've noticed is that every .send is followed by .unwrap. Perhaps we should make this unwrapping behavior the default, and introduce a separate checked_send which returns a Result?".

We often get the question how productive working with Rust is. “We know that it is awesome, but isn’t it hard to learn? Don’t you struggle with the borrow checker?”. Well, we put it to the test in Google’s Hash Code 2019 programming competition.

A few days ago, I was exploring Rust’s Unstable Book and found pretty much the same feature in Rust, which is const_fn. I started exploring this feature more after the recent Rust release, 1.33.0, in which the Rust team announced major improvements to const fn. The idea of using const fn is to compute the result at compile time so that time can be saved when the code is run.
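
A tiny example of the idea:

```rust
// A const fn can run at compile time when used in a const context.
const fn square(n: u32) -> u32 {
    n * n
}

const AREA: u32 = square(12);

fn main() {
    assert_eq!(AREA, 144);
}
```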

In this blog post, I’m proposing we also replace the guts of mpsc with crossbeam-channel for some more performance wins. However, unlike with mutexes and hash maps, this change will also enable oft-requested new features that make it tempting to deprecate mpsc altogether and introduce better channels designed from scratch.

I wanted to post a quick update on the status of the async-await effort. The short version is that we’re in the home stretch for some kind of stabilization, but there remain some significant questions to overcome.

In this post, I would like to share how we can implement a Rust application that has a user interface written in JavaFX.

The Rust team is happy to announce a new version of Rust, 1.33.0. The two largest features in this release are significant improvements to const fns, and the stabilization of a new concept: "pinning."

One of the pain points in trying to make the Meson build system work with Rust and Cargo is Cargo's use of build scripts, i.e. the build.rs that many Rust programs use for doing things before the main build. This post is about my exploration of what build.rs does.

Learn how npm uses Rust

Kcov is a code coverage tool for binaries, shell scripts and Python scripts. It generates an HTML report for most of those languages, but here we will focus mainly on the Rust language.

Now that the Rust 2018 edition has shipped, the language design team has been thinking a lot about what to do in 2019 and over the next few years. I think we’ve got a lot of exciting stuff on the horizon, and I wanted to write about it.

Just a quick update: You may have noticed that, in the last month or so, a number of Rust core team members have changed their jobs and/or their roles in the project.

You can easily document your Rust items like functions by putting three slashes ///. However, if you want to document each separate invocation of your amazing! macro, it is not that straightforward.

In our crusade to oxidize platform after platform, I've been working to bring Rust to yet another target: MS-DOS. I don't know if this has been done before, but I couldn't find any information about it on the web, so I had to rely on information about using GCC to compile MS-DOS programs (not all of which carried over), and it took quite a bit of fiddling with the target specification to get things just right. In the end, I've managed to produce COM executables that can call DOS interrupts and interface with hardware such as the PC speaker, and presumably the rest of the hardware, given the right code.

A recap of the 2019 Rust All-Hands from a rustdoc perspective; and the 2019 roadmap for the Rustdoc Team.

This is my second post on the design of generators. In the first post, I outlined what an MVP of the feature would look like. In this post, I want to take a look at the first design issue for the feature: how it integrates with the ? operator.

It’s hard for me to believe but it’s already been over a year since I seriously committed to learning Rust and I have now reached my initial goal of 100 open source contributions to the Rust ecosystem. You can see the full list here. I want to use this blog post to review the work I’ve done, talk about the challenges I’ve come across and how I’ve tried to deal with them. I’m afraid I haven’t blogged in a long time and this is a bit longer than usual.

In a previous post I mentioned that the Rust compiler allows you to output interesting intermediate languages/formats in a number of different ways: HIR, MIR and even flowgraphs! In this post I will give a brief overview of the flowgraph format along with instructions on how to generate images from your code.

I’ve recently been working on a Rust project at work which requires compiling for Linux (GNU), Linux (musl - for Alpine Linux) and macOS. I use Linux Mint nearly all the time, so building for macOS targets has required asking very nicely to borrow a spare Macbook Air. This is naturally a bit crap, so I set out to find a Linux-only solution to cross compile for macOS using osxcross. A weekend of pain later, and I have the following post. Hopefully it spares you a weekend of your own pain.

The memory models of Rust and C can often cause a lot of friction. This guide is born out of my own personal struggles writing transmission-sys, a wrapper for the Transmission BitTorrent client. In this guide, though, we will go over the much simpler example of writing a wrapper for libevent-sys.

Multithreading allows programs to do more faster, but adds synchronization bugs and attacks. From a security standpoint, why do we care about thread safety?

How we migrated our Tier 1 service from Ruby to Rust and didn’t break production.

What I wanted was to start the GTK application with an already generated image of the prime numbers spiral (contained in a gtk::Image widget) and then be able to re-generate the image when the user changed something. It could be a "Generate" click action, for instance, to show the image in a different resolution or color. The problem with the button closure was that when I added the GTK image to the box_vert container, the next time the button was pressed the code was supposed to remove the existing image and add a new one, but it didn't.

A recent blog article discussed the fact that 70% of all security bugs in Microsoft products are due to memory safety vulnerabilities. A lot of the comments I’ve seen on social media boil down to “The problem isn’t the use of a memory unsafe language, but that the programmers who wrote this code are bad.”

In this article, I’m going to look at a recent bug that was caught by the Rust compiler, which I think shows that not only is this assertion unreasonable but virtually impossible for reasons I haven’t seen discussed. While the example I’m going to give is about thread safety rather than memory safety, the arguments I’m going to present can be applied to both.

One of the things that baffled me for quite a long time are Rust’s “trait objects”: they felt like an odd part of the language and I was never quite sure whether I was using them or not, even when I wanted to be. Since I’ve recently had cause to look into them in more detail, I thought it might be helpful to write a few things down, in case anyone else finds my explanation useful. The first part of this blog post covers the basics and the second part takes a look at the performance implications of the way trait objects are implemented in Rust.
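
A minimal sketch of a trait object in use (not from the post):

```rust
// A trait object erases the concrete type behind a pointer plus a vtable;
// the call to area() is dispatched at run time.
trait Shape {
    fn area(&self) -> f64;
}

struct Circle { r: f64 }
struct Square { s: f64 }

impl Shape for Circle { fn area(&self) -> f64 { std::f64::consts::PI * self.r * self.r } }
impl Shape for Square { fn area(&self) -> f64 { self.s * self.s } }

fn main() {
    let shapes: Vec<Box<dyn Shape>> = vec![Box::new(Circle { r: 1.0 }), Box::new(Square { s: 2.0 })];
    let total: f64 = shapes.iter().map(|s| s.area()).sum();
    println!("total area: {total}");
}
```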

Last week, I was in Berlin at the Rust All-Hands 2019. It was great! I will miss nerding out in discussions about type theory and having every question answered by just going to the person who’s the expert in that area, and asking them. In this post, I am summarizing the progress we made in my main areas of interest and the discussions I was involved in—this is obviously just a small slice of all the things that happened.

In this blog post, I will explain the new debugging macro dbg!, added in Rust 1.32.0. This is a macro for quick and dirty debugging with which you can inspect the value of a given expression.
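
For example:

```rust
fn main() {
    let n = 5;
    // dbg! prints the file, line, expression, and value to stderr and
    // returns the value, so it can be dropped into the middle of an
    // expression.
    let doubled = dbg!(n * 2);
    assert_eq!(doubled, 10);
}
```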

We’re still not finished with the design of async/await, but it’s already become clear that it’s time to get the next phases of the feature into the pipeline. There are two extensions to the minimal async/await feature we’ve currently got that seem like the clear high priority:
Async methods: allowing async fn to be used in traits. Generators: allowing imperative control flow to create Iterators and Streams the same way async fn allows imperative control flow to create a Future.

In the Rust standard library, Box is a RAII wrapper for an object on the heap. It’s actually a special type that’s not implemented purely in the library, but also uses special compiler features called lang items. It uses the global allocator to allocate its memory. We want a similar type that also has an allocator associated with it. We’ll call it Unq, which mirrors C++’s unique_ptr.

Welcome to Handmade Rust, a series (hopefully) where I will be developing a Vulkan rendering engine in Rust the Handmade way. By this, I mean using no external libraries, not even the Rust standard library, only the core lib. I am doing this mainly for my own enjoyment but also because I want to get better at writing Rust, and sometimes the best way to really understand something is to just do it yourself. The project will be available on GitHub at stevenlr/HandmadeRust.

The first step will be to build a foundation library for memory allocation, containers, and other utilities that are not provided by the core lib.

A very common question that comes up on IRC or elsewhere by people trying to use the gtk-rs GTK bindings in Rust is how to modify UI state, or more specifically GTK widgets, from another thread. I’ll take this opportunity to also explain why it’s not so trivial in Rust first and also explain another solution.

While there’s a lot of interesting detail captured in this series, it’s often helpful to have a document that answers some “yes/no” questions. You may not care about what an Iterator looks like in assembly, you just need to know whether it allocates an object on the heap or not. And while Rust will prioritize the fastest behavior it can, here are the rules for each memory type

Rust 1.26 introduced the ability to return a Result from the main method, which was a great ergonomics improvement especially for small CLI applications. If your application returns an Ok, Rust reports a success exit status code to the operating system. Likewise if your application returns an Err, Rust reports an error exit status code.

But what if you want to return a custom exit status error code for each possible error type in your application, to provide some additional feedback to your user? This leads into an exploration of the Termination and Try traits, and is the topic of this post.
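
A small example of the baseline behaviour described above (the custom exit codes via the Termination and Try traits are the post's actual topic):

```rust
use std::num::ParseIntError;

// Returning Result from main: Ok exits with a success status, while Err
// prints the error and exits with a failure status.
fn main() -> Result<(), ParseIntError> {
    let n: i32 = "42".parse()?;
    println!("parsed {n}");
    Ok(())
}
```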

Rust provides a lot of language constructs to enable and empower the user to write memory-safe and correct code. But what happens behind these constructs? In this post I will outline ways of exploring Rust and its compiler.

Last weekend I went to FOSDEM 2019, where I had the chance to attend a talk given by Matthias Endler. In his talk he explained how Rust has a lot of syntactic sugar to help programmers write safe and correct code; part of his talk was about using cargo-inspect to analyse this syntax and see what’s happening behind the scenes. This inspired me to dig a bit deeper and try out other tools.

Throughout the series so far, we’ve put a handicap on the code. In the name of consistent and understandable results, we’ve asked the compiler to pretty please leave the training wheels on. Now is the time where we throw out all the rules and take off the kid gloves. As it turns out, both the Rust compiler and the LLVM optimizers are incredibly sophisticated, and we’ll step back and let them do their job.

Similar to “What Has My Compiler Done For Me Lately?”, we’re focusing on interesting things the Rust language (and LLVM!) can do with memory management.

For the very first coding blog, I think it is appropriate to start with building objects. This post is about the Builder Pattern in Rust, and how it taught me I couldn’t write everything the way I want. Yes, strong typing prevents you from common pitfalls, and C++ can go quite far in this direction (as many JS/Python enthusiasts will gladly testify). It is often easy to forget how it sometimes prevents you from writing completely legal and safe code, due to rules being too “protective”. And as Rust takes code safety to a whole new level, sometimes a trivial piece of code can’t be written, and without the proper knowledge, it might seem entirely arbitrary. It was a subtle restriction in the builder pattern that first took me by surprise.

Managing dynamic memory is hard. Some languages assume users will do it themselves (C, C++), and some languages go to extreme lengths to protect users from themselves (Java, Python). In Rust, how the language uses dynamic memory (also referred to as the heap) is a system called ownership. And as the docs mention, ownership is Rust’s most unique feature.

This is a position paper that I originally circulated inside the firmware community at X. I've gotten requests for a public link, so I've cleaned it up and posted it here. This is, obviously, my personal opinion. Please read the whole thing before sending me angry emails.

tl;dr: C/C++ have enough design flaws, and the alternative tools are in good enough shape, that I do not recommend using C/C++ for new development except in extenuating circumstances. In situations where you actually need the power of C/C++, use Rust instead. In other situations, you shouldn't have been using C/C++ anyway — use nearly anything else.

In which I try to explain the reasoning behind Rust’s memory-safety mechanisms.

const and static are perfectly fine, but it’s relatively rare that we know at compile-time about either values or references that will be the same for the duration of our program. Put another way, it’s not often the case that either you or your compiler knows how much memory your entire program will ever need.

The first memory type we’ll look at is pretty special: when Rust can prove that a value is fixed for the life of a program (const), and when a reference is unique for the life of a program (static as a declaration, not 'static as a lifetime), we can make use of global memory. This special section of data is embedded directly in the program binary so that variables are ready to go once the program loads; no additional computation is necessary.
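
A tiny illustration of the two declarations:

```rust
// const values are inlined wherever they are used; static declares a
// single location in the program's global memory.
const MAX_RETRIES: u32 = 3;
static GREETING: &str = "hello";

fn main() {
    println!("{GREETING}, retrying up to {MAX_RETRIES} times");
}
```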

There’s an alchemy of distilling complex technical topics into articles and videos that change the way programmers see the tools they interact with on a regular basis. I knew what a linker was, but there’s a staggering amount of complexity in between the OS and main(). Rust programmers use the Box type all the time, but there’s a rich history of the Rust language itself wrapped up in how special it is.

In a similar vein, this series attempts to look at code and understand how memory is used; the complex choreography of operating system, compiler, and program that frees you to focus on functionality far-flung from frivolous book-keeping. The Rust compiler relieves a great deal of the cognitive burden associated with memory management, but we’re going to step into its world for a while.

Let’s learn a bit about memory in Rust.

There’s been a lot of talk about improving Rust’s governance model lately. As we decompress from last year’s hectic edition work, we’re slowly starting to look at all the bits of debt we accumulated, and organizational debt is high on that list.

I’ve been talking in private with people about a bunch of these things for quite a while now, and I felt it worthwhile to write down as much of my thoughts as I can before the Rust All Hands in Berlin this week.

A step-by-step guide for debugging Rust with Visual Studio Code.

Rust offers the promise of “fearless concurrency”, and delivers on it through memory safety. Yet this safety doesn’t guarantee code that is easy to maintain. If one is not “fearful” of complexity, concurrency can easily become a story of regrets. Can we get a “regret-less” kind of concurrency?

The Cargo team have been thinking about and discussing long-term plans for Cargo. In this post I'll talk about what we hope Cargo will look like around the time of the next edition (assuming there is another edition and that it happens in about three years, neither of which is confirmed). There will be another post soon on more concrete plans for this year, including some kind of roadmap.

It has been more than 3 years now since the MIR initiative has been accepted. Currently rustc has a number of MIR optimisations: a simple inliner, basic constant and copy propagation, a single instruction combination rule, a few graph simplification and clean up passes… The pattern here is clear – most of the optimisations we currently have are basic and limited in their potency. Given the pace at which we managed to bring up MIR in the first place, one would be right to expect… something more.

As somebody who has made an attempt and failed to implement a number of dataflow-based optimisations (among other things), I consider myself fairly qualified to hazard a guess as to what is the reason for the current state we are at. Here it goes.

A collection of snippets to avoid unnecessary calls to unwrap() in Rust.

So for the last couple of months or so, I’ve been hacking in my spare time on this library named salsa, along with a number of awesome other folks. Salsa basically extracts the incremental recompilation techniques that we built for rustc into a general-purpose framework that can be used by other programs. Salsa is developing quickly: with the publishing of v0.10.0, we saw a big step up in the overall ergonomics, and I think the current interface is starting to feel very nice.

The Rust programming language introduced many leading concepts in the programming language design landscape. The most famous features are the borrow checker, the ownership management and the trait system.

However, the fantastic expressiveness of closures is generally underestimated. Yes, from the day JavaScript introduced closures to mainstream programming languages, closures have become one of the basic features of almost all modern languages. However, Rust’s ownership rules lead to some brand-new observations about closures and their position in programming. Let’s start the journey now.

If you’re an iOS developer you may be asking yourself how and why you would make use of Rust on iOS. This article will mostly cover the how. As to why, the most compelling reason for us at Visly is that it enables us to share code between Android and iOS in a performant and safe manner, in a language much easier to work with than C++.

Recently I travelled all the way to Waterloo from Boston for Starcon. With a 9 hour drive I had a lot of time to think, and I spent a good majority of it thinking about OSS governance and sustainability. What I came up with are more concrete solutions to the problems I brought up in my Rust 2019 post. With the Rust All Hands in Berlin only a few weeks away, I wanted to get my thoughts in order by writing out some of the solutions to specific problems I came up with. Now, this doesn't mean they'll be accepted! We might even find better solutions! I just felt a need to articulate them, both as a reference point and to make sure I've thought through them well. I'll be splitting them into a few posts so I can publish them faster, rather than writing one long post that won't be published in time. With that in mind, let's begin!

In my previous post about Polonius and subregion obligations, I mentioned that there needs to be a follow-up to deal with higher-ranked subregions. This post digs a bit more into what the problem is in the first place and sketches out the general solution I have in mind, but doesn’t give any concrete algorithms for it.

I got a bit tangled up while experimenting with threads and channels in Rust. The compiler prevented any undefined behavior or memory corruption, but it can only do so much. My problems came from a shaky understanding of the language’s fundamentals and the inherent complexity of parallel programming. Or, in my case, attempted parallel programming.

Closures seem like magical functions. They can do magic like capture their environment, which normal functions can’t do. How does this work?
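
A rough sketch of the usual mental model (hand-written, not actual compiler output):

```rust
// Conceptually, a closure is a compiler-generated struct holding the
// captured variables plus an implementation of one of the Fn traits.
fn main() {
    let step = 3;
    let add_step = |x: i32| x + step; // captures `step` from the environment

    // Roughly equivalent hand-written version:
    struct AddStep<'a> { step: &'a i32 }
    impl<'a> AddStep<'a> {
        fn call(&self, x: i32) -> i32 { x + self.step }
    }
    let manual = AddStep { step: &step };

    assert_eq!(add_step(10), manual.call(10));
}
```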

While Haskell generally has better safety guarantees than Rust, there are some cases where Rust is safer than Haskell. This post explores when Rust is safe to use.

Now that NLL has been shipped, I’ve been doing some work revisiting the Polonius project. Polonius is the project that implements the “alias-based formulation” described in my older blogpost. Polonius has come a long way since that post; it’s now quite fast and also experimentally integrated into rustc, where it passes the full test suite.

Rust 1.32.0 has a few quality of life improvements, switches the default allocator, and makes additional functions const.

Note: This post is about how I arrange the code I write in Rust. If you wanted to “order” Rust code in the “hire someone to write code” sense, you should still keep on reading, as this is excellent material for a job interview. (Not the opinion I present, but having an opinion on the topic.)

Rust is one of the major programming languages that’s been getting popular in recent years. It has many advanced high-level language features, like Scala. This made me interested in learning Rust. So in this series of blog posts I will share my experience with Rust from a Scala developer’s point of view. I would like to explore how these two languages approach things, and to explore their similarities and differences.

Row-oriented storage and column-oriented storage are two major ways of laying out data in memory. In Rust, there is a simple way to think of this: row-oriented storage is like an array of structs, whereas column-oriented storage is like a struct of arrays. It’s easy to use row-oriented storage in Rust, so this post is going to explore column-oriented storage.
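
A minimal sketch of the two layouts (illustrative types, not from the post):

```rust
// Row-oriented storage: an array of structs.
struct PointRow { x: f64, y: f64 }

// Column-oriented storage: a struct of arrays.
struct PointColumns { xs: Vec<f64>, ys: Vec<f64> }

fn main() {
    let rows = vec![PointRow { x: 1.0, y: 2.0 }, PointRow { x: 3.0, y: 4.0 }];
    let columns = PointColumns { xs: vec![1.0, 3.0], ys: vec![2.0, 4.0] };

    // Same data, different layout: summing one field walks every struct in
    // the row form, but a single contiguous vector in the column form.
    let row_sum: f64 = rows.iter().map(|p| p.x).sum();
    let col_sum: f64 = columns.xs.iter().sum();
    assert_eq!(row_sum, col_sum);
    assert_eq!(rows.iter().map(|p| p.y).sum::<f64>(), columns.ys.iter().sum::<f64>());
}
```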

If you’ve ever written code in a compiled language (C, C++, Java, …), you are probably used to compiler error messages, and you may think they are only there to prevent you from making mistakes. Well, sometimes you can also use compiler error messages to design and implement new features. Let me show you with a simple command-line program written in Rust.

On the 1st of December 2018, I decided to give Advent of Code a try. AoC is, basically, a programming challenge website where you get two puzzles unlocked every day of December from the 1st to the 25th – hence the name. It has a ranking system whose scores are based on the absolute time you took to solve the puzzles – i.e. time spent as soon as the puzzles got unlocked. As a French person living in Paris, I find this a bit unfair (we get puzzles unlocked at around 5:00 AM!), and so I realized I could just do the puzzles for fun.

This blog post sums up what I did with AoC#18, my thoughts about the puzzles and even a meta-discussion about programming challenges. I used Haskell for most challenges and switched to Rust for no specific reason. Just fun.

Rust permits a limited form of compile-time function execution in the form of const and const fn. While, initially, const may seem like a reasonably straightforward feature, it turns out to raise a wealth of interesting and complex design questions. In this post, we’re going to look at a particular design question that has been under discussion for some time and propose a design that is natural and expressive. This is motivated both from a syntactic perspective and a theoretic perspective.

Since a few days ago, librsvg's library implementation is almost 100% Rust code. Paolo Borelli's and Carlos Martín Nieto's latest commits made it possible. What does "almost 100% Rust code" mean here?

Recently cessen asked people to write their thoughts on Rust community norms for unsafe code. So here it is.

No, seriously, this time for real.

I recently released Ropey 1.0, a text rope library for Rust. Ropey uses unsafe code internally, and its use of unsafe unsurprisingly came up in the 1.0 release thread on Reddit.

The ensuing discussion (especially thanks to Shnatsel) helped me significantly reduce the amount of unsafe code in Ropey with minimal (though not non-existent) performance degradation. But the whole thing nevertheless got me thinking about unsafe code and community norms around it, and I figured writing some of those thoughts down might be useful.

My day-to-day work involves writing a fair bit of JavaScript but, lately, I've gotten really interested in Rust. The other day, I decided to take a slightly different approach: I decided to take a simple linked list program (the type I can and do ask my students to implement in JavaScript in ~20 minutes) and re-implement it in Rust. Specifically, I decided to build a queue implemented with a singly linked list.

When writing an interpreter or a compiler for any language, you usually need to start with a lexer and a parser. Boa is no different here; our first task will be to do the same. But what do these actually do?

Procedural macros in Rust are a really compelling feature that I didn’t understand until recently. There are a few gotchas, but they make it super easy to implement custom #[derive()] expansions for implementing traits with a single line of code. Let’s dive in.

Placement new is a feature currently being discussed for the Rust programming language. It gives programmer control of memory allocation and memory placement, where current memory allocation implementations are hidden behind compiler internals via the Box::new interface. This is Rust’s answer to C++ placement new, allowing one to control not only when and how memory is freed, but also where it is allocated and freed from.

For the first time, I took part in the Advent of Code this year. If you haven't heard of it, it's a daily programming challenge that can be solved in any programming language. Rust was very present in the Advent of Code community with people contributing a ton of Rust-related content. In the daily solutions thread on the /r/aoc subreddit, there were always several Rust solutions posted. Advent of Code really helps show off the things that make Rust shine, demonstrating the power and utility of many community-created crates as well as the language itself.

I’ve begun to seriously dig into the Rust programming language. The learning curve is real, but I already appreciate the work they’ve put into ergonomics. I’m writing a simple photo thumbnail endpoint using the Rocket web framework (v0.4) and Image library (v0.20.1). My first pass used a lot of unwrapping to ignore potential errors. A lot can go wrong, even in this “simple” case. Rocket catches any panics thrown by route handlers, so this is about as robust as a naive equivalent in most other languages. However, Rust at least forces us to be explicit and purposeful about when we want to be sloppy. This is great for a first quick and dirty pass, but we can do much better.

2019 is approaching. The Rust team is keeping its promise about asynchronous IO: async has been introduced as a keyword, and Pin, Future, Poll and the await! macro have been introduced into the standard library. I had never used Rust for asynchronous IO programming before, so I knew almost nothing about it. However, I needed it for a project recently and couldn't find many documents that are remarkably helpful for newcomers to Rust asynchronous programming. My purpose in writing this blog is to review and summarize, and I will be happy if it can help someone who is interested in Rust asynchronous programming.

My internship (“research assistantship”) with Mozilla ended several weeks ago, and this post is a report on the most recent tweaks I made to Miri and Stacked Borrows. Neither project is by any means “done”, of course. However, both have reached a fairly reasonable state, so I felt some kind of closing report made sense. Also, if I ever want to finish my PhD, I’ll have to seriously scale down the amount of time I work on Rust – so at least from my side, things will move more slowly from now on.

In particular, installing Miri and running your test suite in it is now just a single command away! Scroll all the way down if you are not interested in the rest.

Today we're going to take a look at the 'pipe' function my friend has written, and why lifetimes suddenly become important, especially when using references.

Sometimes you just have a bunch of example data lying around and you want to make sure your code works with all of it. Some of the examples are probably short and sweet and could live happily as doctests, which are amazing btw. But some of them are more awkward to present in that form because of, for example, their size or number. Typically, when you have an example of how the program should behave, you write an example-based unit test. Ideally, each of them would represent an isolated example and they should fail independently. But converting your source data files into unit tests one by one, manually, can be a bit tedious. Rust build scripts to the rescue!
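
A minimal sketch of the idea (the testdata/ directory and the check_example helper are hypothetical): a build script writes one #[test] per data file into OUT_DIR, and the test crate pulls them in with include!.

// build.rs – generates one test function per file in testdata/
use std::{env, fs, path::Path};

fn main() {
    let out = Path::new(&env::var("OUT_DIR").unwrap()).join("generated_tests.rs");
    let mut code = String::new();
    for entry in fs::read_dir("testdata").unwrap() {
        let path = fs::canonicalize(entry.unwrap().path()).unwrap();
        let name = path.file_stem().unwrap().to_string_lossy().replace('-', "_");
        code.push_str(&format!(
            "#[test]\nfn example_{}() {{ check_example(include_str!({:?})); }}\n",
            name, path
        ));
    }
    fs::write(&out, code).unwrap();
    println!("cargo:rerun-if-changed=testdata");
}

// tests/generated.rs – pulls in the generated functions
fn check_example(input: &str) {
    assert!(!input.is_empty()); // stand-in for the real assertions
}

include!(concat!(env!("OUT_DIR"), "/generated_tests.rs"));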

Back in 2013, I started a series of posts on programming languages I found interesting. One of the languages I wanted to write about at that time was Rust. As often happens, life got in the way, and it’s only now, in the twilight of 2018, that I’m coming round to a long-overdue post.

Visualizing Rust's growing ecosystem through crates.io, Rust's central package repository.

Arrays in Rust are fixed size, and Rust requires that every element in an array is initialized to a valid value when the array is initialized. The result of these requirements is that array initialization in Rust is a much deeper topic than it would seem.
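
A small illustration of why (not from the post): the easy [value; N] form relies on the element being Copy or a constant, so non-Copy elements need another route.

fn main() {
    // Fine: u8 is Copy, so one value can be repeated N times.
    let zeros = [0u8; 32];

    // Fine: the repeated expression is evaluated once and then copied.
    let ones = [1.0f64; 4];

    // A Vec is not Copy, so `[Vec::new(); 3]` does not compile.
    // One workaround is Default, which arrays of Default elements implement.
    let buckets: [Vec<u8>; 3] = Default::default();

    assert_eq!(zeros.len() + ones.len() + buckets.len(), 39);
}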

Perhaps my favorite feature in the Rust 2018 edition is procedural macros. Procedural macros have had a long and storied history in Rust (and will continue to have a storied future!), and now is perhaps one of the best times to get involved with them, because the 2018 edition has so dramatically improved the experience of both defining and using them.

Here I'd like to explore what procedural macros are, what they're capable of, notable new features, and some fun use cases of procedural macros. I might even convince you that this is Rust 2018's best feature as well!

This past year was… intense. Rust 1.31 was basically Rust 2.0, at least in the marketing sense. I burned myself out getting the first edition of the book together for Rust 1.0, and I burned myself out getting the edition shipped.

Let’s talk about the bad and the good. Bad first so we end on the high notes.

I am working these days on the development of offst's Index server. I needed to implement a basic directed graph structure, allowing me to run the BFS algorithm to find routes with a certain amount of capacity. During the work on the Index server I ran into the problem of wanting to return an empty iterator in an early flow of a function.
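
The snag, roughly, is that both branches of a function returning impl Iterator must be the same concrete type, so an early return of std::iter::empty() will not unify with the real iterator. One common workaround (a sketch, not necessarily the post's solution) is to box the iterator:

fn neighbours(graph: &[Vec<u32>], node: usize) -> Box<dyn Iterator<Item = u32> + '_> {
    // Early flow: no such node, return an empty iterator of the same boxed type.
    if node >= graph.len() {
        return Box::new(std::iter::empty());
    }
    Box::new(graph[node].iter().copied())
}

fn main() {
    let graph = vec![vec![1, 2], vec![2], vec![]];
    assert_eq!(neighbours(&graph, 0).count(), 2);
    assert_eq!(neighbours(&graph, 9).count(), 0);
}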

I will not be really original here, but I really hope to see the following features land on stable in 2019: const generics, async/await, GATs, inherent traits, and a minimum supported Rust version.

The 2018 year brought a lot of incredible new features and I'm impressed with all the work that was put in to make the 2018 edition happen. I want to join the other users asking for 2019 to be a year we adjust to all the new changes and focus on cleaning and polishing, not entirely new projects*. For the upcoming year I'd like to see progress on compilation speed and maintenance attention in the library ecosystem.

It has been a long-standing tradition to develop a language far enough to be able to write the language's compiler in the same language, and Rust does the same: Rust is nowadays written in Rust. We've tracked down the earlier Rust versions, which were written in OCaml, and were planning to use these to bootstrap Rust. But in parallel, John Hodge (Mutabah) developed a Rust compiler, called "mrustc", written in C++. mrustc is now good enough to compile Rust 1.19.0. Using mrustc, we were able to build Rust entirely from source with a bootstrap chain.

The Rust project I am working on is a caching layer, currently backed by Redis, and it came to a point where I needed to leverage pipelining. On its own, pipelining is straightforward as the redis crate implements it already. However all notions of a cache in our code are abstracted out behind a trait so we can have alternative implementations, such as an in-memory HashMap-backed implementation.

The problem arises with representing the pipeline in code. It would force any implementation of our cache to adopt Redis’s notion of a pipeline. Not only would this make it difficult to introspect during testing, but it would also be nonsensical for our HashMap-backed cache. My usual answer to this in languages that support higher-kinded types is to use tagless-final algebras, but Rust’s type system currently doesn’t support higher-kinded types. Fortunately, there is a pretty good alternative that Rust does support: existential types.

In 2019, there are three areas where I would like to see the Rust community focus its efforts: Improved compile times, A community effort to review crates, More “80% solutions”.

As you likely know if you’re reading this post, Rust has an upcoming async/await feature being tested in nightly. Because of Rust’s unique features and positioning, fully understanding the implementation powering this syntax is very different from understanding other well-known implementations (C# and JavaScript’s being the ones I am familiar with). Instead of thinking of a CPS-like transform where an async function is split into a series of continuations that are chained together via a Future::then method, Rust instead uses a generator/coroutine transform to turn the function into a state machine (C# and probably most JavaScript implementations use a similar transform under the hood, but as far as I’m aware, because of the garbage collector these are indistinguishable from the naive CPS transform they are normally described as). For more detail on why Rust is taking this approach you should read eRFC 2033: Experimental Coroutines, which lays out the whys much better than I could here.

What I’m going to try and provide instead, is a look into how this actually works today. What steps the compiler takes to turn an async fn into a normal function returning a state machine that you could write if you wanted to (but you definitely don’t).

While developing some crates in rust, I ran into a few crashes in certain situations when using these crates from another application. In order to more easily reproduce the problem, and also minimize or eliminate future regressions, I decided to write some unit tests for these issues, and use them to more easily debug the problems… or so I thought!

Starting today, the Rust 2018 edition is in its first release. With this edition, we’ve focused on making Rust developers as productive as they can be. But beyond that, it can be hard to explain exactly what Rust 2018 is.

Starting today and running until January 15, we’d like to ask the community to write blog posts reflecting on Rust in 2018 and proposing goals and directions for Rust in 2019.

The Rust team is happy to announce a new version of Rust, 1.31.0, and "Rust 2018" as well. Rust is a programming language that empowers everyone to build reliable and efficient software.

In a few days the 2018 edition is going to roll out, and that will include some new framing around Rust's tooling. We've got a core set of developer tools which are stable and ready for widespread use. We're going to have a blog post all about that, but for

Unlike languages like Haskell, Erlang, and Go, Rust does not have a runtime system providing green threads and asynchronous I/O. However, for many real world use cases, async I/O is strongly desired, if not a hard requirement. The de facto standard library for handling this in Rust is tokio. This post is part of a series based on teaching Rust at FP Complete.

Since version 56, Firefox has had a new character encoding conversion library called encoding_rs. It is written in Rust and replaced the old C++ character encoding conversion library called uconv that dated from early 1999. Initially, all the callers of the character encoding conversion library were C++ code, so the new library, despite being written in Rust, needed to feel usable when used from C++ code. In fact, the library appears to C++ callers as a modern C++ library. Here are the patterns that I used to accomplish that.

While in the middle of converting librsvg's code that processes XML from C to Rust, I went into a digression that has to do with the way librsvg decides which files are allowed to be referenced from within an SVG. There was a central function rsvg_io_acquire_stream() which took a URL as a string. The code assumed that that URL had been first validated with a function called allow_load(url). Rust made it possible to actually make it impossible to acquire a disallowed URL.

Rust doesn’t have a language-level concept of generic mutability, which makes “method threading” (methods which take `self` by some handle and return it in the same way) hard to write. This article covers how to write that pattern in a less painful way.

Today, we’d like to announce a beta of the new rust-lang.org. If you go to https://beta.rust-lang.org, you’ll see a preview of the new site.

Test your Rust knowledge with tricky Rust questions.

This is the "Rust Language Cheat Sheet". It is for users who: are early Rust professionals (experienced programmers, intermediate Rust users), and prefer visual, example-driven content. Use cases, in order of priority: "identification guide" for unknown or symbolic constructs encountered in code. Provide further reading from easy to advanced (Book to Nomicon). Quick lookup for language related problems. Discover constructs in the language you might not know.

Another year means another Rust survey, and this year marks Rust’s third annual survey. This year, the survey launched for the first time in multiple languages. In total 14 languages, in addition to English, were covered. The responses from non-English languages totalled 25% of all responses and helped push the number of responses to a new record of 5991. Before we begin the analysis, we just want to give a big “thank you!” to all the people who took the time to respond and give us your thoughts. It’s because of your help that Rust will continue to improve year after year.

Learn more about how the Rust programming language shares many of the advantages offered by Haskell such as a strong type system, great tooling, polymorphism, immutability, concurrency, and great software testing methodologies. Rust is a good choice when you need to squeeze in extra performance.

Following on from my last post, I thought I would look at async/await support in Rust. The async/await support coming to Rust brings with it a much more ergonomic way to work with asynchronous computations. In this post I'll introduce std::future::Future, and run through how to make use of them, and how to interoperate with the current ecosystem which is built around version 0.1 of the futures package.
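
As a quick orientation before diving in (a minimal sketch, not code from the post): std::future::Future is just a trait with a single poll method, and you can implement it by hand. The Ready type below is hypothetical; run the async fn under any executor.

use std::future::Future;
use std::pin::Pin;
use std::task::{Context, Poll};

// A future that is immediately ready with its value.
struct Ready(Option<u32>);

impl Future for Ready {
    type Output = u32;

    fn poll(mut self: Pin<&mut Self>, _cx: &mut Context<'_>) -> Poll<Self::Output> {
        Poll::Ready(self.0.take().expect("polled after completion"))
    }
}

async fn double(x: u32) -> u32 {
    // `.await` drives any `Future`, including hand-written ones.
    Ready(Some(x)).await * 2
}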

Prior to this experience, I had thought that Futures, Sinks and Streams were the smallest building blocks in the world of Tokio, and so I went looking through the Tokio documentation for these things. Actually, all of the fundamental objects to read and write bytes to things implement one or both of AsyncRead and AsyncWrite, but not the Future, Sink or Stream traits. In fact, there are lots of poll_x methods dotted around, so I realised I needed to figure out how to make use of them.

Lately, I have been converting the code in librsvg that handles XML from C to Rust. For many technical reasons, the library still uses libxml2, GNOME's historic XML parsing library, but some of the callbacks to handle XML events like start_element, end_element, characters, are now implemented in Rust. This has meant that I'm running into all the cases where the original C code in librsvg failed to handle errors properly; Rust really makes it obvious when that happens.

In this post I want to talk a bit about propagating errors. You call a function, it returns an error, and then what?
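
Not from the post itself, but as a reminder of the basic machinery Rust gives you for this: the ? operator returns early with the error and converts it via From on the way out. A tiny sketch with a hypothetical read_port helper:

use std::fs;
use std::num::ParseIntError;

#[derive(Debug)]
enum Error {
    Io(std::io::Error),
    Parse(ParseIntError),
}

impl From<std::io::Error> for Error {
    fn from(e: std::io::Error) -> Self { Error::Io(e) }
}
impl From<ParseIntError> for Error {
    fn from(e: ParseIntError) -> Self { Error::Parse(e) }
}

// Each `?` either yields the Ok value or propagates the (converted) error.
fn read_port(path: &str) -> Result<u16, Error> {
    let text = fs::read_to_string(path)?;
    let port = text.trim().parse()?;
    Ok(port)
}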

Toggling feature flags when you compile for zero runtime cost

Resource allocation & the implementation of drop logic in Rust.

Let’s say you’re contributing to a system in Rust consisting of a bunch of different components, running in their own threads or processes, for example an engine to make the Web run.

When one of those components seemingly hangs on something, how can you find out what it is hanging on? Maybe a backtrace of what that component is doing at that time would be useful?

That’s easy, for that we have the backtrace-rs crate, right?

Well, there’s a catch: how do we call Backtrace::new() from a thread that is hanging?

Three months ago, I proposed Stacked Borrows as a model for defining what kinds of aliasing are allowed in Rust, and the idea of a validity invariant that has to be maintained by all code at all times. Since then I have been busy implementing both of these, and developed Stacked Borrows further in doing so. This post describes the latest version of Stacked Borrows, and reports my findings from the implementation phase: What worked, what did not, and what remains to be done. There will also be an opportunity for you to help the effort!

Learn how to integrate Rust Cargo package manager with the Nix package manager.

Program synthesis is the act of automatically constructing a program that fulfills a given specification. I recently stumbled across Adrian Sampson’s Program Synthesis is Possible blog post. Adrian describes and implements minisynth, a toy program synthesizer that generates constants for holes in a template program when given a specification. What fun! As a way to learn more about program synthesis myself, I ported minisynth to Rust.

A survey of things that Rust doesn’t let you do although arguably safe.

Rust allows for a very functional style of value “flow” without sacrificing the performance of a more traditionally imperative sequence. Furthermore, the functional flow may offer more clarity about value lifetimes and error handling that the imperative sequence might obscure.

I know it is claimed that Rust has zero-cost abstractions and that all these levels of abstraction will just go away in a release build. But there’s a difference between hearing the theory and seeing it really happen in practice. And I don’t appreciate it because I’d consider it magic, but rather because I understand how it is being done and it still looks cool.

Following on from last time, where we saw that, given parameterisation over traits (rather than just types), we could implement functors and monads in Rust that supported existing “monad-like” traits like Iterator and Future, I thought it would be interesting to tackle another one of the arguments against monads in Rust.

Continuing on with my “After NLL” series, I want to look at another common error that I see and its solution: today’s choice is about moves from borrowed data and the Sentinel Pattern that can be used to enable them.
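
The gist of the problem and the pattern (a minimal sketch of my own; the post has the full treatment): you cannot move a field out from behind a borrowed reference, but you can swap a sentinel value in with std::mem::replace and take the original.

use std::mem;

struct Parser {
    // Buffered tokens that the next step wants to consume by value.
    tokens: Vec<String>,
}

impl Parser {
    fn take_tokens(&mut self) -> Vec<String> {
        // Moving `self.tokens` out directly is rejected by the borrow checker;
        // instead we leave an empty Vec behind as the sentinel.
        mem::replace(&mut self.tokens, Vec::new())
    }
}

fn main() {
    let mut p = Parser { tokens: vec!["fn".into(), "main".into()] };
    let owned = p.take_tokens();
    assert_eq!(owned.len(), 2);
    assert!(p.tokens.is_empty()); // `p` is still fully usable afterwards
}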

This blog post will outline the creation of dynstack, a stack data structure that stores trait objects unboxed to minimize the number of heap allocations necessary.

Recently, the procedural macro interface was somewhat stabilized. OK, there’s still the unstable proc_macro_hygiene feature you have to activate, but at least the registrar and rustc_private are no longer needed.

When optimizing Rust code it’s sometimes useful to know how big a type is, i.e. how many bytes it takes up in memory. std::mem::size_of can tell you, but often you want to know the exact layout as well. For example, an enum might be surprisingly big, in which case you probably will want to know if, for example, there is one variant that is much bigger than the others.
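
A small, self-contained illustration of the kind of surprise meant here (sizes assume a typical 64-bit target):

use std::mem::size_of;

enum Message {
    Ping,                // carries nothing
    Payload([u8; 128]),  // one big variant inflates every Message
}

fn main() {
    // The enum must be able to hold its largest variant plus a discriminant:
    // 129 bytes here, even for Message::Ping.
    println!("Message: {} bytes", size_of::<Message>());
    // Boxing the large variant shrinks the whole enum to pointer size.
    println!("Boxed payload: {} bytes", size_of::<Box<[u8; 128]>>());
}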

One thing we’ve left as an unresolved question so far in the matter of async/await syntax is the exact final syntax for the await operation. In the current implementation, awaits are written using a compiler plugin:

async fn foo() {
    await!(bar());
}

This is not because of any technical limitation: the reason we have done this is that we have not decided on the precise, final syntax for the await operation.

I think I discovered Rust somewhere around the year 2012. With time I grew more and more fond of it. The language kept evolving in a direction that was my personal sweet spot: a modern C. And at some point I realized I'm in love with Rust. And I still am today, after a couple of years of using it. So let me tell you why Rust is my darling programming language.

Today we’re looking at the Rust borrow checker from a different perspective. As you may know, the borrow checker is designed to safely handle memory allocation and ownership, preventing accesses to invalid memory and ensuring data-race freedom. This is a form of resource management: the borrow checker is tracking who’s in charge of a chunk of memory, and who is currently allowed to read or write to it. In this post, we’ll see how these facilities can be used to enforce higher-level API constraints in your libraries and software. Once you’re familiar with these techniques, we’ll cover how the same principles apply to advanced memory management and handling of other, more abstract resources.
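
To give a flavour of what "enforcing higher-level API constraints" can mean (a generic sketch of my own, not the post's example): because methods can consume self, an API can make invalid call sequences fail to compile.

struct Disconnected;
struct Connection;

impl Disconnected {
    fn connect(self) -> Connection { Connection }
}

impl Connection {
    fn send(&mut self, msg: &str) { println!("sending: {}", msg); }
    fn close(self) -> Disconnected { Disconnected } // consumes the connection
}

fn main() {
    let mut conn = Disconnected.connect();
    conn.send("hello");
    let _idle = conn.close();
    // conn.send("again"); // compile error: `conn` was moved by `close`
}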

ownership-and-borrowing

Rust was my language of the year. You know, that thing where programmers set out to learn a new programming language every year. Usually not to be productive at it but to familiarize themselves with current trends in language design, implementation, and paradigms. I had heard a lot of good stuff about Rust and decided late last year to make it my 2018 language. I’m only a few days in but I’ve been smacked by some of what I consider the best ideas in programming I’ve encountered yet.

I’ve been writing a few code examples lately to add to the documentation of some crates of mine. I write a lot of code that creates new objects which need other objects in order to be built. Most of the APIs you see around tend to love the borrow principle – and so do I.

ownership-and-borrowing

When designing an API for your crate, one topic which can come up is how to handle optional arguments. Let’s explore our Options in Rust!
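
One of the simplest of those options (a sketch; the post surveys more): take an Option<T> and choose the default inside.

fn greet(name: &str, greeting: Option<&str>) -> String {
    // `None` means "use the default"; callers opt out explicitly.
    format!("{}, {}!", greeting.unwrap_or("Hello"), name)
}

fn main() {
    assert_eq!(greet("Rust", None), "Hello, Rust!");
    assert_eq!(greet("Rust", Some("Howdy")), "Howdy, Rust!");
}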

Rust is an imperative language but it provides many tools in the standard library which adhere to a more functional style, like the Iterator trait and its methods like map, for_each, and filter. This is a quick run-down of how to define your own higher-order functions in Rust which can both take closures as parameters and return closures in such a way that you can use the two together.
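
A compact sketch of the shape being described, with a hypothetical twice combinator that both takes and returns a closure:

// Takes any function/closure `f` and returns a new closure that applies it twice.
fn twice<F>(f: F) -> impl Fn(i32) -> i32
where
    F: Fn(i32) -> i32,
{
    move |x| f(f(x))
}

fn main() {
    let add_two = twice(|x| x + 1);
    assert_eq!(add_two(5), 7);

    // The returned closure composes with iterator adapters as usual.
    let quadrupled: Vec<i32> = (1..=3).map(twice(|x| x * 2)).collect();
    assert_eq!(quadrupled, vec![4, 8, 12]);
}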

Rust 2018 is almost out the door, but there is one big decision the language team has yet to make. It has to do with the modules and paths system, so of course it is a very easy decision that no one has a strong opinion about. ;-)
In Rust 2018, we’ll be making some big changes to how paths work to try to create a more consistent experience. The “lodestar” (if you will) of these changes is an idea we call “1path”: the idea that no matter where you are in your project, whether in a use statement or normal code, a path is interpreted the same way.

In my previous post on the status of NLL, I promised to talk about “What is next?” for ownership and borrowing in Rust. I want to lay out the various limitat...

Now that the final Rust 2018 Release Candidate has shipped, I thought it would be a good idea to do another update on the state of the MIR-based borrow check (aka NLL). Let’s get the highlights out of the way. Most importantly, Rust 2018 crates will use NLL by default. Once the Rust 2018 release candidate becomes stable, we plan to switch Rust 2015 crates to use NLL as well, but we’re holding off until we have some more experience with people using it in the wild.

This blog post is part of a series explaining how to send Rust beyond earth, into many different galaxies. The galaxy we will explore today is the PHP galaxy. This post will explain what PHP is, how to compile any Rust program to C and then to a PHP native extension.

I was doing some initial load testing of the next version our application, so that performance regressions can be tracked, when I noticed something. After only a few seconds of throwing wrk at it, our backend was using 1.3GB of memory, growing at around 50MB/s. Yikes.

After some practice with three of my Rust projects (fd, hyperfine and bat), my workflow has converged to something that works quite well and avoids many pitfalls that I have walked into in the past. My hope in writing this post is that this process can be useful for others as well. The following is my release checklist for fd, but I have very similar lists for other projects.

Two weeks ago, I wrote a blog post explaining some design decisions that I made for the ndarray-csv crate. Based on some excellent Reddit comments and GitHub issues from dtolnay, I have amended some of these decisions.

I know a few Rustaceans who are wary of macros. One privately admitted to hating them with a passion. They are right; macros can make code harder to understand (both for humans and computers, for example many clippy lints have an explicit check to only lint outside of macros), so they should be used with some caution.

We have 85K lines of Rust code implementing the backend of our Pernosco debugger. To impose some modularity constraints and to reduce build times, from the beginning we organized our code as a large set of crates in a single Cargo workspace in a single Gitlab repository. Currently we have 48 crates. This has mostly worked pretty well but as the number of our crates keeps increasing, we have hit some serious scalability problems.

The Rust team is happy to announce a new version of Rust, 1.30.0. Rust 1.30 is an exciting release with a number of features: Procedural Macros, Module system improvements, Raw Identifiers, and more.

The orphan trait rule in Rust is interesting and works impressively well for what it intends to do. While I'm often frustrated by the limitations it imposes, it absolutely succeeds at removing ambiguity in whether or not a trait will be implemented for a type.

SIMD is a powerful performance technique, and is especially valuable in signal and image processing applications. I will be using it very extensively in my synthesizer, and also it’s increasingly used in xi-editor to optimize string comparisons and similar primitives.

Rust is an imperative systems programming language. Why does it have so much attention from functional programming advocates? Is it hiding a functional nature?

In Rust, a type which takes type parameters (Rc<T>, Vec<T>, HashMap<K, V>, etc) is only a valid type when all type parameters are specified. In other words, Rc, Vec, and HashMap<K> are not types. You can’t have a variable of type Rc. You can’t pass Rc as a parameter to other types. The ability to have such things be actual types is a feature called higher kinded types (HKT).

With the minimal subset of const fn becoming stable soon (in the Rust version after next), I wanted to give const fns a try and test what is possible with them. We implemented a compile-time SUBLEQ interpreter which only uses const fns, which you can find on the playground. Let's walk through the process of building this abomination :)
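
If you have not met const fn before, the idea in miniature (a trivial sketch of my own, nowhere near a SUBLEQ interpreter): a const fn can be evaluated at compile time, for example to compute an array length.

const fn cells(width: usize, height: usize) -> usize {
    width * height
}

// Evaluated entirely at compile time.
const GRID: [u8; cells(4, 4)] = [0; cells(4, 4)];

fn main() {
    assert_eq!(GRID.len(), 16);
}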

Whatever project you work on, you should – no, must – document your code. There are several situations – let's call this the First Hypothesis.

3 weeks ago I set out to fix a crash in Clippy, this is what I learned along the way. I hope this blog post will be useful for other people diving into Clippy and maybe serve as motivation if things get difficult.

I’ve often seen people make statements like this one, from the Rust subreddit this morning, "Manual memory management requires more work than garbage collected. Its a trade off of course to be more performant or use lower resources. When and where should Rust be used or not used according to you?". While I don’t completely disagree with this sentiment, it’s also never quite sat right with me. Yes, Rust is a bit harder at the start, but once you get over a hump, I don’t generally find writing Rust to be significantly harder than using a GC’d language. I’ve been trying to figure out why that is.

What are the most important properties of programs, and how much do existing languages help? How is Rust different?

Over the years I've found myself with a weird amount of knowledge about how types and ABIs in Rust work, and I wanted to write it all down in one place so that... it's written down in one place. Much of this information can or should be found in the Rust Language Reference and the Rustonomicon.

So because it seemed like a good idea at the time, I decided to port the minimp3 library from C to Rust. I want a pure Rust MP3 decoder crate to exist under a permissive license, I wanted to learn a few things about the MP3 file format, and it seemed small enough to do in a single weekend. (In reality it was largely done in about a week.) I’m quite good at Rust, and I’m okay at C (but rusty; hah!), and I know nothing at all about MP3 decoding. So, it was a fun learning experience. It was very interesting seeing how C and Rust’s different feature set changed how the programs were written. minimp3 turned out to be a good choice for this, since it is standalone, pretty well-written C as far as I can tell, does nothing that needs to be unsafe, and small but not trivial. This article is an attempt to organize my thoughts, notes and observations as I went about the project, in the hopes that it will be useful or at least interesting to someone else.

Way back in August I announced that I was starting in on "a project to QuickCheck Rust’s standard library data structures", here. And I did! The project is called bughunt-rust and I've been poking at it on weekends since, adjusting my approach based on papers I've been reading, experience gained writing test code and the kind of results I've been getting. This post goes through what I've been up to, where I see the project heading in the near term.

Rust's Macros 2.0 are intuitive: demonstrate a pattern, and the compiler can insert the pattern into your program wherever you want it. Inspired by this syntax, I wondered: Could you “run a macro backwards”—use the same by-example language to describe patterns to search for?

In this article we will make a small Rust library that uses the reqwest http client library, and see what we can do to adequately test the business logic. We assume you have the Rust toolchain installed, and are at least passingly familiar with programming in Rust.

There’s been a persistent set of issues we’ve had with cbindgen that have not been solved. They all roughly result from the same problem; cbindgen is a standalone parser of rust code, not a rustc plugin. What this means is that cbindgen doesn’t understand your rust library like the compiler does. We’ve tried to minimize the differences here by making cbindgen smarter, but it’s not obvious that’s the best approach going forward.

When I talk to folks foreign to Rust, I often get asked the question: “Why doesn’t Rust have support for default arguments?” When I first started learning Rust I pondered the same question. Eventually I came to realize that it does, kind of. Rust just takes a different approach based on its unique design choices, one which I now wish other languages supported.

I had a question this morning: who authors the most popular crates on crates.io? First, we have to figure out what we mean by “most popular.” My first guess was “top 100 by recent downloads”, so I looked at crates.io. Once I got to 100, I found...

In the past, there has been recurring feedback that Tokio is hard to understand. I believe a lack of good documentation plays a significant part. It’s time to fix this problem.

And because Tokio is open source, it is on us (the community) to make this happen! 👏

After two tweets that I made last week, playing around with UEFI and Rust, some people asked me to publish a blog post explaining how to create a UEFI application fully written in Rust and demonstrating the whole testing environment.

There will be times where code will run slow and Erlang/Elixir optimizations will only go so far. BEAM has several ways to interface with foreign code, the fastest way being with a Native Implemented Function (NIF) whose API expects them to be written in C. But speaking frankly, the last time I worked with C involved a lengthy debugging session that boiled down to the lack of type safety, so I’d rather not have to repeat that experience. It’s for this reason that Rust is such a compelling language.

A security vulnerability was found in the standard library where if a large number was passed to str::repeat it could cause a buffer overflow after an integer overflow. If you do not call the str::repeat function you are not affected. This has been addressed by unconditionally panicking in str::repeat on integer overflow.

This repo is a place to collect examples of iOS/Android projects written entirely or mostly in Rust.

This is a report on the second “office hours”, in which we discussed how to setup a series of services or actors that communicate with one another. This is a classic kind of problem in Rust: how to deal with cyclic data. Usually, the answer is that the cycle is not necessary (as in this case).

This blog post is just going to be a quick summary of the basic workflow of using Rust with gdb on the command line. I’m assuming you are using Linux here, since I think otherwise you would prefer a different debugger. There are probably also nifty graphical tools you can use and maybe even IDE integrations, I’m not sure.

This blog post covers my adventure in fixing a bug in the Rust bindings for the Capstone C library, a disassembly library that supports several CPU architectures. The capstone-rs crate attempts to provide a Rusty, object-oriented interface. You do not necessarily need previous experience in C code or foreign function (FFI) bindings to understand this blog post. I will cover some of the steps I used to debug this problem. Hopefully, readers can learn from my mistakes.

In this blog entry, I want to explore a specific problem of orphans and how I decided to solve it in a crate of mine. The problem is the following: Given a crate that has a given responsibility, how can someone add an implementation of a given trait without having to use a type wrapper or augment the crate’s scope?

Seq is a log server that's built using a few programming languages; we have a storage engine called Flare written in Rust, and a server application written in C#. Our language stack is something I've talked about previously.

Between Rust and C# we have a foreign function interface (FFI) that lets us call out to Rust code from within the .NET runtime. In this post I'd like to explore our approach to FFI between Seq and its storage engine using the API for reading log events as a reference.

We are living in a Golden Age of software, one that will produce artifacts that will endure for generations. Of course, it can be hard to hold such heady thoughts when we seem to be up to our armpits in vendored flotsam, flooded by sloppy abstractions hastily implemented. Among current languages, only Rust seems to share this aspiration for permanence, with a perspective that is decidedly larger than itself.

This is part of a blog series on working towards an intuitive mental model for lifetimes in Rust. When I tried to sit myself down and really, really write down an in-depth example… I realized that there were no two ways about it. Before you can learn to appreciate why lifetimes exist, you must learn what life would be like without them. And in order to do that, well…

This is part of a blog series on a new way to look at lifetimes in Rust's type system. I hope to cover some advanced aspects of lifetimes that are seldom discussed in the open, and my goal is ultimately to help convey new intuitions about how to use them correctly.

It’s not immediately obvious that calling min(squares) modifies squares. If squares were a list or even a range, we would be able to call min and max on it with no problem. It would be nice if the language prevented us from trying to use something twice that can only be used once. Almost all modern languages, both statically and dynamically typed, will fail at runtime in these situations.

python ownership-and-borrowing

Still drunk with the power of function composition, I started to play around with the technique in Rust, a language I've been experimenting with. Rust is a low-level language with a strict compiler that saves you from doing dangerous things. Furthermore, Rust is a functional language. It has several concepts and features inspired by Haskell (read more) and Scala for example. The design of Rust makes it highly expressive and attractive.

Every once in a while, someone will talk about unsafe in Rust, and how it “turns off the borrow checker.” I think this framing leads to misconceptions about unsafe and how it interacts with safe code. Here’s some code that causes a borrow checker...

Associated Types in Rust are similar to Generic Types; however, Associated Types limit the types of things a user can do, which consequently facilitates code management. Among the Generic Types of traits, types that depend on the type of trait implementation can be expressed by using the Associated Type syntax. By comparing the Associated and Generic Types, you can get a better understanding of Associated Types.
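
A minimal side-by-side sketch (mine, not the post's): with a generic parameter a type could implement the trait for many type arguments at once, while an associated type commits each implementation to exactly one choice.

// Generic parameter: `Counter` could implement `GenericProducer<u32>`,
// `GenericProducer<String>`, and so on, all at the same time.
trait GenericProducer<T> {
    fn produce(&mut self) -> T;
}

// Associated type: each implementor commits to exactly one `Output`.
trait Producer {
    type Output;
    fn produce(&mut self) -> Self::Output;
}

struct Counter(u32);

impl Producer for Counter {
    type Output = u32;
    fn produce(&mut self) -> u32 {
        self.0 += 1;
        self.0
    }
}

fn main() {
    let mut c = Counter(0);
    assert_eq!(c.produce(), 1);
}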

The Rust team is happy to announce a new version of Rust, 1.29.0. The two most significant things in this release aren’t even language features: they’re new abilities that Cargo has grown, and they’re both about lints: cargo fix can automatically fix your code that has warnings. cargo clippy is a bunch of lints to catch common mistakes and improve your Rust code.

In my last posts I covered profiling and some tips for optimizing inner loops in Rust code while working on a multithreaded PNG encoder. Rust’s macro system is another powerful tool for simplifying your code, and sometimes awesomeizing your performance…

At Datalust we’ve been busy building Flare: a storage engine for our log server, Seq, written in the Rust programming language. This post is a point-in-time look at how we've approached building this fairly complex piece of software in Rust in 2018. I’d like to share a few

This blog post is part of a series explaining how to send Rust beyond earth, into many different galaxies. The galaxy we will explore today is the C galaxy. This post will explain what C is (briefly), how to compile any Rust program to C in theory, and how to do that in practice with our Rust parser, from both the Rust side and the C side. We will also see how to test such a binding.

I always enjoy reading blogs about patterns or tricks people have picked up writing Rust. I’ve seen this a few times but not read about it anywhere.

I’ve been doing class assignments from Operating Systems cs140e. I highly recommend this class if you know a bit of Rust and would like to try writing some lower level code. The class involves building bits of an OS for the raspberry pi.

In Rust, data types - primitives, structs, enums and any other ‘aggregate’ types like tuples and arrays - are dumb. They may have methods but that is just a convenience (they are just functions). Types have no relationship with each other.

Traits are the abstract mechanism for adding functionality to types and establishing relationships between them.

This post examines a particular, seemingly simple problem: given ownership of a Rc<Vec<u32>>, can we write a function that returns an impl Iterator<Item = u32>? It turns out that this is a bit harder than it might at first appear – and, as we’ll see, for good reason. I’ll dig into what’s going on, how you can fix it, and how we might extend the language in the future to try and get past this challenge.
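
For the impatient, the shape of one workaround looks roughly like this (a sketch; the post explains why the obvious borrowing version fails): iterate by index and move the Rc into the closure, so the iterator owns its data.

use std::rc::Rc;

fn iterate(data: Rc<Vec<u32>>) -> impl Iterator<Item = u32> {
    // `iter()` would borrow from `data`, which dies at the end of this
    // function; indexing through the moved Rc sidesteps that.
    (0..data.len()).map(move |i| data[i])
}

fn main() {
    let shared = Rc::new(vec![10, 20, 30]);
    let sum: u32 = iterate(shared.clone()).sum();
    assert_eq!(sum, 60);
}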

I’ve been playing around a lot with Rust recently and it’s quickly becoming my second-favourite programming language. One of the things I’ve been playing with is some Object Oriented design concepts as they might apply.

Read many, write exclusive locks – RwLock. Consider a situation where you have a resource that must be manipulated by only a single thread at a time, but is safe to be queried by many—that is, you have many readers and only one writer.

While you could protect this resource with a mutex, the trouble is that the mutex makes no distinction between its lockers; every thread will be forced to wait, no matter what their intentions.
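
The distinction in code (a quick sketch of std::sync::RwLock, not taken from the post): read guards can coexist, a write guard is exclusive.

use std::sync::RwLock;

fn main() {
    let config = RwLock::new(String::from("v1"));

    {
        // Any number of readers may hold the lock at the same time.
        let a = config.read().unwrap();
        let b = config.read().unwrap();
        assert_eq!(*a, *b);
    } // read guards dropped here

    {
        // A writer gets exclusive access.
        let mut w = config.write().unwrap();
        w.push_str("-patched");
    }

    assert_eq!(*config.read().unwrap(), "v1-patched");
}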

I wrote a really small Rust program a while back because I was curious. I was 100% convinced it couldn’t possibly run. And to my complete befuddlement, it compiled, ran, and produced a completely sensible output.

To panic or to return a Result: why libraries in Rust must weigh their options rather than accepting a never-panic mandate.

An alternate introduction to the APR book. This book aims to be a comprehensive, up-to-date guide on the async story in Rust, appropriate for beginners and old hands alike. We assume you already know Rust fairly well, including having done some multi-threaded programming. If any Rust terms in this guide are unfamiliar, you should check out the Rust book.

The bug that caused two brown-paper-bag releases in librsvg — because it was leaking all the SVG nodes — has been interesting. Memory leaks in Rust? Isn't it supposed to prevent that? Well, yeah, but the leaks were caused by the C side of things, and by unsafe code in Rust, which does not prevent leaks.

withoutboats, one of the Rust language design team, recently posted a thread on the infeasibility of monads as a useful abstraction technique in Rust, as a response to the persistence of some (usually from outside the Rust community) in claiming that “Rust is doing things incorrectly” by developing specific solutions to problems, rather than using a general category-theoretic framework for everything. The points demonstrate real difficulties with attempting to use a general framework for these problems, and to me they serve perfectly as a “the ball’s in your court now” to anyone claiming Rust is ignoring theory and coming up with unnecessary solutions to solved problems: if you think Rust could use monadic abstractions, you have to be able to address these counterarguments.

Recently I ran into a bug in my code; hey, it happens. The bug was that I had a struct which could serialize into JSON, but could not deserialize from its own JSON. The struct holds a value for a MAC address, which is a 48-bit integer (that I store in a u64), but it is serialized using the network interface name. For example, on my Mac I have a network interface named en1 with the MAC address 20:c9:d0:b0:a4:71.

I first started learning Rust about 6 months ago; I was looking for a new language to learn when I came across it. At first I thought Rust was only meant to be a low-level, systems programming language, but the more I learned, the more I realised the potential it has for high-level programming and web applications. Along the way I also learned many ways in which Rust prevents many of the typical bugs often found in applications written in other programming languages.

In case you haven’t heard, async / await is a big new feature that is being worked on for Rust. It aims to make asynchronous programming easy (well, at least a little bit easier than it is today). The work has been ongoing for a while and is already usable today on the Rust nightly channel.

I’m happy to announce that Tokio now has experimental async / await support! Let’s dig in a bit.

Servo is a huge project. I have counted the lines of code for you. There are almost a hundred thousand lines of code in the Servo project. To develop such a big project, knowing how to debug the right way is very important, since you want to find the bottleneck quickly and efficiently.

In this article, I will teach you some tips to use GDB developing and debugging your Rust code in the Servo project.

Recently, I found myself in the market for some quickcheck. However, there were custom types, which had no Arbitrary implementation. Wondering if someone had already written a procedural macro to derive it, I found panicbit’s quickcheck_derive crate. However, to my dismay, it was severely limited in that it could only derive Arbitrary for structs.

A couple of months ago, I created my first Rust program; a music manager called seiri. seiri is actually a rewrite of a previous, much buggier program that I used to organize my music that was written in C#. The tag library of choice was of course, taglib-sharp, a port of the C++ library TagLib to the .NET ecosystem. Since Rust unfortunately doesn’t have its own native port of TagLib, and any C bindings available didn’t expose the picture API, the most obvious thing to do was to use the C# library with Rust somehow, right?

Interior mutability is a concept known to anyone who has programmed in Rust for a while. And even though Rust's stdlib has several wrapper types allowing interior mutability, there is no trait unifying these types. Motivated by writing libraries suitable for no_std development that are fully safe to use with threads, this blog post will attempt to fill in one gap in the Rust stdlib.
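
For context, the stdlib wrapper types in question look like this in use (a quick sketch): each lets you mutate through a shared reference, with a different enforcement strategy.

use std::cell::{Cell, RefCell};
use std::sync::Mutex;

fn main() {
    // Cell: copy values in and out; no references to the inside.
    let counter = Cell::new(0u32);
    counter.set(counter.get() + 1);

    // RefCell: borrow rules checked at runtime, single-threaded only.
    let names = RefCell::new(Vec::new());
    names.borrow_mut().push("ferris");

    // Mutex: the thread-safe flavour.
    let shared = Mutex::new(0u32);
    *shared.lock().unwrap() += 1;

    assert_eq!(counter.get(), 1);
    assert_eq!(names.borrow().len(), 1);
    assert_eq!(*shared.lock().unwrap(), 1);
}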

Last December I decided to give Rust a run: I spent some time porting the C++ bits of sourmash to Rust. The main advantage here is that it's a problem I know well, so I know what the code is supposed to do and can focus on figuring out syntax and the mental model for the language. I started digging into the symbolic codebase and understanding what they did, and tried to mirror or improve it for my use cases.

python ffi

In my last post, I announced a release candidate for the RLS 1.0. There has been a lot of feedback (and quite a lot of that was negative on the general idea), so I wanted to expand on what 1.0 means for the RLS, and why I think it is ready. I also want to share some of my vision for the future of the RLS, in particular changes that might warrant a major version release.

I recently continued with my exploration of Rust through Rookeries (my attempt at a static site generator/backing API server). This time I worked on switching over from using invoke and GNU make to using a nice build system called cargo-make. Overall I am quite happy with the result.

rustdoc is a great tool, but as of now there isn’t an official way to have its generated docs refresh as you make edits. Running cargo doc with the --open argument will open the generated docs in browser window. If you make changes to your source code, you’ll need to re-run cargo doc to have the changes reflected in your browser. By chaining together a few other Rust tools, we can pretty easily get the functionality of live-reloading docs.

When talking about the Rust type system in the context of unsafe code, the discussion often revolves around invariants: Properties that must always hold, because the language generally assumes that they do. In fact, an important part of the mission of the Unsafe Code Guidelines strike force is to deepen our understanding of what these invariants are.

However, in my view, there is also more than one invariant, matching the fact that there are (at least) two distinct parties relying on these invariants: The compiler, and (authors of) safely usable code. This came up often enough in recent discussions that I think it is worth writing it down properly once, so I can just link here in the future.

One thing has always nagged me about the API we have right now, though: the proliferation of different reference types that it implies. Today, the pin feature adds the PinMut and PinBox types, but in theory there ought to be a “pinned” version of every pointer in the standard library: PinRc and PinArc and so on. This is a very unfortunate consequence, but so far we have not found a good way to make pinning work compositionally – to have a single adapter that could be combined with any pointer.

Last night, a bit of inspiration struck me, and I realized that it is possible to make a compositional Pin type. This isn’t a fundamental change to the pinning model, just an API refactoring, but I’ve put a blocking concern on the proposal to stabilize Pin so that we can consider this possibility.

This series of posts is about those bindings, and explains how to send Rust beyond earth, into many different galaxies. Rust will land in: The WebAssembly galaxy, The ASM.js galaxy, The C galaxy, The PHP galaxy, and The NodeJS galaxy. The ship is currently flying into the Java galaxy, this series may continue if the ship does not crash or has enough resources to survive!

Say we have a struct, Foo, with multiple fields that we would like to partially initialize without resorting to using unsafe. We could write a procedural macro called PartialInit, for example, which would be invoked using derive.

Demystifying one of Rust’s most powerful features.

Rust closures are harder for three main reasons: The first is that Rust is both statically and strongly typed, so we’ll need to explicitly annotate these function types. Second, Lua functions are dynamically allocated (‘boxed’). Rust does not allocate silently because it prefers to be explicit and is a systems language designed for maximally efficient code. Third, closures share references with their environment. In the case of Lua, the garbage collector ensures that these references will live long enough. With Rust, the borrow checker needs to be able to track the lifetimes of these references.

As part of my overall change over in Rookeries, from Python to Rust, I rewrote a suite of integration tests for the server API. To celebrate my successful transition, I released version 0.11.0 of Rookeries, whose tests use pure Rust now!

For the first time in my life I tracked a real bug's root cause to incorrect usage of weak memory orderings. Until now weak memory bugs were something I knew about but had subconsciously felt were only relevant to wizards coding on big iron, partly because until recently I've spent most of my career using desktop x86 machines.

In my previous post, I described taking a simple enum and creating a custom type in diesel. This post will take that same enum and implement deserialize. I often get tripped up by the mechanics of deserializing so this simple enum makes for a good example. Again, this is to benefit anyone looking for more examples of Serde’s Deserialize as well as for myself, so I can remember next time I need to do this.

Rust is a language that utilizes the RAII idiom, resulting in different code depending on when the object is destroyed.

One of the long-standing issues that we’ve been wrestling with in Rust is how to integrate the concept of an “uninhabited type” – that is, a type which has no values at all. Uninhabited types are useful to represent the “result” of some computation you know will never execute – for example, if you have to define an error type for some computation, but this particular computation can never fail, you might use an uninhabited type.
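
A tiny example of the pattern being described (my sketch): an empty enum has no values, so a Result that uses it as the error type can never actually be Err.

// No variants, so no value of this type can ever exist.
enum NeverFails {}

fn always_works(x: u32) -> Result<u32, NeverFails> {
    Ok(x + 1)
}

fn main() {
    match always_works(1) {
        Ok(n) => println!("got {}", n),
        // This arm can never run; an empty match on the uninhabited type
        // is enough to satisfy exhaustiveness.
        Err(e) => match e {},
    }
}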

Over the past two months, I worked on a feature for Project Fluent. My feature was needed in the Rust implementation of Fluent and was published as a Rust crate, making my code available to the entire Rust community. Completing this project brought me a great sense of satisfaction, and having contributed a fundamental internationalization crate to the Rust ecosystem is possibly the biggest milestone in my career as a developer.

Traits are a core part of the Rust programming language, and understanding traits, particularly those which are part of the standard library, is necessary in order to write idiomatic Rust. In this post I’ll write several FizzBuzz implementations, each demonstrating the use of a different trait from the Rust standard library.
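
To give a flavour of the exercise (my own sketch, not necessarily one of the post's implementations), here is FizzBuzz leaning on the standard Display trait:

use std::fmt;

struct FizzBuzz(u32);

impl fmt::Display for FizzBuzz {
    fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
        match (self.0 % 3, self.0 % 5) {
            (0, 0) => write!(f, "FizzBuzz"),
            (0, _) => write!(f, "Fizz"),
            (_, 0) => write!(f, "Buzz"),
            _ => write!(f, "{}", self.0),
        }
    }
}

fn main() {
    for i in 1..=15 {
        println!("{}", FizzBuzz(i));
    }
}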

Refactoring boilerplate code is always easy in dynamically-typed languages, but sometimes takes a bit more effort when constrained by strong typing. This is something I was puzzling over recently, when the penny dropped for me about how Rust's macros can be used to bridge the gap.

In many things, Rust is very much like C++. Its memory management strategy is mostly the same, the threading models are copied wholesale, both compile to native code and apply roughly the same optimisations along the way, and traits and templates have a lot in common too. Both tend to be rather feature-rich languages with quite a lot to learn. While Rust is definitely a better teacher (I’m looking at you, C++ error messages!) and has many more “safety covers” over the dangerous moving parts inside the engine, the design of the engine is more of an evolution from C++ than a completely new thing.

But I’ve noticed one rather subtle difference in the philosophy of the languages I’d like to describe here. To make it somewhat more complete, I’ll also throw what some other languages do in this area in.

A few days ago, I wrote about two pain points of using Rust at work. One of these points was the long compile times. In this post, I want to share a few tips that can help alleviate that pain.

In which we explore how cargo and rustdoc make it possible to write documentation and unit tests at once, resulting in code that is explained and tested from the POV of a public API.
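
The mechanism in brief (a minimal sketch; my_crate stands in for whatever the crate is actually called): examples in /// doc comments are compiled and executed by cargo test, so the documented usage doubles as a test.

/// Adds one to the given number.
///
/// ```
/// assert_eq!(my_crate::plus_one(41), 42);
/// ```
pub fn plus_one(x: i32) -> i32 {
    x + 1
}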

I am Peter Hrvola (retep007). During my Google Summer of Code (GSoC) project, I have been working on investigating the monolithic nature of Servo’s script crate and prototyping its separation into smaller crates. My goal was to improve the use of resources during compilation. The current debug build consumes over 5 GB of memory and takes 347 s.

The Rust community recently approved a Custom Test Frameworks eRFC which lays out a series of goals and possible directions of exploration for implementing custom test frameworks. In this post, I present my own proposed fulfillment of the RFC with rationale.

It’s that time again! Time for us to take a look at how the Rust project is doing, and what we should plan for the future. The Rust Community Team is pleased to announce our 2018 State of Rust Survey! Whether or not you use Rust today, we want to know your opinions. Your responses will help the project understand its strengths and weaknesses and establish development priorities for the future.

This year, volunteers have also translated the survey into 14 languages!

Let’s put ourselves to the challenge of having an ‘infinite’ generator, which will have to be told to stop generating by the consumer…
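
On stable Rust the closest analogue to such a generator is an infinite Iterator, where "being told to stop by the consumer" falls out of laziness (a sketch, not the post's code):

fn main() {
    // An 'infinite' generator of square numbers.
    let squares = (1u64..).map(|n| n * n);

    // The consumer decides when to stop pulling values.
    let below_1000: Vec<u64> = squares.take_while(|&n| n < 1_000).collect();

    assert_eq!(below_1000.last(), Some(&961));
}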

In this post, I am proposing “Stacked Borrows”: A set of rules defining which kinds of aliasing are allowed in Rust. This is intended to answer the question which pointer may be used when to perform which kinds of memory accesses.

Recently, I was trying out clippy — a linting and static analysis tool for Rust, when I ran into a lint warning that wasn’t immediately clear to me: warning: casting u8 to u16 may become silently lossy if types change.
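
For reference, the change the lint is nudging you towards is the infallible From conversion, which stops compiling if the source type is ever widened (a small sketch):

fn main() {
    let byte: u8 = 200;

    // Lossy-looking cast: fine today, but it would silently truncate if
    // `byte` ever became a wider type.
    let a = byte as u16;

    // Lossless conversion: only compiles while no data can be lost.
    let b = u16::from(byte);

    assert_eq!(a, b);
}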

When I started learning Rust, the module system did not at first seem to be a shining beacon of intuitive design. The Rust documentation is phenomenal, but there are definitely some areas that I found difficult to follow; this being one such topic. So I thought I might take a stab at writing up a guide that I think would have helped me through the awkward growing pains a bit quicker.

Over in this issue we are discussing how to add debug logging for librsvg. A popular way to add logging to Rust code is to use the log crate. However, the log crate is just a facade, and by default the messages do not get emitted anywhere. The calling code has to set up a logger. Crates like env_logger let one set up a logger, during program initialization, that gets configured through an environment variable. This is a problem for librsvg: we are not the program's initialization! Librsvg is a library; it doesn't have a main() function. And since most of the calling code is not Rust, we can't assume that it can call code that initializes the logging framework.

A couple of days ago I landed my second pull request in the Rust Programming Language repository. This is the story of how that went. This post is inspired by other posts about improving the Rust compiler.

Today I want to talk about two Rust PRs I recently wrote. The PRs in question are #52942 and #52997. Both are relatively small changes to Rust’s internally used data structures that improve performance and readability. Both have some basic benchmarks (the first one already had them and I wrote them for the second one), although it’s rather hard to gauge whether they really impacted compile times (as perf.rust-lang.org puts all changes of the specific day together). But that’s not the point I want to make right now.

Generic Associated Types (GATs for short) are a long awaited extension to Rust’s type system. They offer a way to work with higher-kinded types – a necessity in a couple of situations. A common example is the streaming iterator: an iterator able to return items borrowing from self (the iterator itself). Unfortunately, GATs haven’t even landed in nightly yet. So while we are waiting, we can try tackling the streaming iterator problem without GATs. In this post we explore three possible workarounds for situations where an associated type depends on the lifetime of a &self receiver.
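
For orientation, this is roughly what the GAT-based trait would look like; it compiles on today's Rust but not on the Rust the post was written against (my sketch, not the post's code):

trait StreamingIterator {
    // The returned item is allowed to borrow from `self` for `'a`.
    type Item<'a> where Self: 'a;

    fn next(&mut self) -> Option<Self::Item<'_>>;
}

// A streaming iterator handing out overlapping windows into its own buffer.
struct Windows { buf: Vec<u8>, pos: usize }

impl StreamingIterator for Windows {
    type Item<'a> = &'a [u8] where Self: 'a;

    fn next(&mut self) -> Option<Self::Item<'_>> {
        if self.pos + 2 > self.buf.len() {
            return None;
        }
        let window = &self.buf[self.pos..self.pos + 2];
        self.pos += 1;
        Some(window)
    }
}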

The Rust team is happy to announce a new version of Rust. This release includes the global_allocator attribute to customise the allocator, improved error messages for format strings, and a number of number-related stabilisations.

An investigation into getting Haskell-like error handling ergonomics into a Rust application dealing with streaming UTF-8 encoding and decoding.

Imagine you want to write a timestamping repository of some sorts, that will associate the timestamp of when the storage operation was invoked with the stored value. How to write it in Rust ? And more importantly - how to test it ? I would like to share a solution I found and talk a bit about how it works.

Today, ajyne posted a thread on users.rust-lang.org asking: What have been the drawbacks of static typing for you? Kornel was quick to reply with a variety of points, but this one in particular stands out to me, "With powerful type systems there’s no end to how far you can go to guarantee things about your program, but you might create a complex monster". As I see it, there is no truer answer. The type system can be a seductive beast, often promising correctness and performance at the low-low, one-time cost of your soul. I personally can name a number of examples from my own code base where I tried to abstract over something too big and failed. I call these my wasted weekends.

There seems to be demand for a “Rust concurrent pipeline” guide à la https://blog.golang.org/pipelines, so let’s give it a try.

Talking about a language’s popularity is traditionally a tricky topic. How do you measure popularity? How do you compare one language to another when they’re focused on different styles and different audiences? So, rather than having one or two charts, I’m going to look at a number of “slices” into Rust’s growth to see it from different angles.

The cycle of development we’re most familiar with is: write code, compile your code, then run this code on the same machine you were writing it on. On most desktop OSes, you pick up a compiler by downloading one from your package manager. Xcode and Visual Studio are toolchains (actually IDEs) that leverage being platform-specific, each including tools tailored around the platform your code will run on and heavily showcasing the parent OS’s design language.

The release of Rust 1.31.0 on December 6th will be the first release of “Rust 2018.” This marks a culmination of the last three years of Rust’s development, and brings it together in one neat package. For example, there will be a 2018 edition of the book that incorporates features stabilized since the print edition was considered finalized.

When there are multiple ways to resolve dependencies, Cargo generally chooses the newest possible version. The goal of this post is to explain why Cargo works this way, and how that rationale relates to several recent discussions, including:

Recently, I wrote a little side project to sign git commits without gpg. When I did this, I decided to use the Rust 2018 edition. I also transitioned an existing library from Rust 2015 to Rust 2018 to see how that tooling worked. I thought I’d write a blog post about my experience using the Rust 2018 preview and the state of things right now.

This summer, I am again working on Rust full-time, and again I will work (amongst other things) on a “memory model” for Rust/MIR. However, before I can talk about the ideas I have for this year, I have to finally take the time and dispel the myth that “pointers are simple: they are just integers”. Both parts of this statement are false, at least in languages with unsafe features like Rust or C: Pointers are neither simple nor (just) integers.

I also want to define a piece of the memory model that has to be fixed before we can even talk about some of the more complex parts: Just what is the data that is stored in memory? It is organized in bytes, the minimal addressable unit and the smallest piece that can be accessed (at least on most platforms), but what are the possible values of a byte? Again, it turns out “it’s just an 8-bit integer” does not actually work as the answer.

I hope that by the end of this post, you will agree with me on both of these statements. :)

One of the biggest challenges in software testing is defining the input for code under test in a way that is expressive and powerful enough to test complex situations but doesn’t distract from the intent of the test or clutter the test code to a degree that makes it difficult to read.

Many dynamic languages have testing APIs which take advantage of their looser and later type checking to provide easy mocking and stubbing, but strict, statically typed languages can make it difficult to build up suitable instances of the types needed in the test.

Rust has this cool feature called the impl block. An impl block is just a scope that introduces a way to augment a type with methods – do not confuse impl blocks with trait impls, which are used to implement a given trait.
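
A minimal sketch of the distinction being described, with an inherent impl block attaching methods to a hypothetical Counter type:

```rust
// An inherent impl block attaches methods to a type; implementing a trait is
// the separate `impl Trait for Type` form.
struct Counter {
    count: u32,
}

impl Counter {
    fn new() -> Self {
        Counter { count: 0 }
    }

    fn bump(&mut self) -> u32 {
        self.count += 1;
        self.count
    }
}

fn main() {
    let mut c = Counter::new();
    c.bump();
    assert_eq!(c.bump(), 2);
}
```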

Doing concurrency in ‘share by communicating’ style has been popularized by the Go community. It’s a valuable approach to concurrency in Rust too, however, one has to be aware of the different semantics of Rust vs Go channels when doing so.

The Rust team is happy to announce a new version of Rust, 1.27.2.

Welcome to the inaugural post of the new futures-rs blog!

After several months of work, we’re happy to announce an alpha release of the new edition of futures-rs, version 0.3. The immediate goal of this work is to support async/await notation (with borrowing) in Rust itself, which has entailed significant changes to the futures crate.

Following the actix-web incident (which is fixed now, at least mostly) I decided to poke other popular Rust libraries and see what comes of it.

The good news is I’ve poked at 6 popular crates now and haven’t found a single actually exploitable vulnerability. I am impressed. When I poked popular C libraries a few years ago it quickly ended in tears. The bad news is I’ve found one instance that was not a security vulnerability by sheer luck, plus a whole slew of denial-of-service bugs. And I can’t fix all of them by myself. Read on to find out how I did it, and how you can help!

For some time now (since the 1.26 release, to be precise), Rust has a very powerful machinery for CTFE, or compile-time function evaluation. Since then, there have been various discussions about which operations should be allowed during CTFE, which checks the compiler should do, how this all relates to promotion and which kinds of guarantees we should be able to expect around CTFE. This post is my take on those topics, and it should not be surprising that I am going to take a very type-system centric view. Expect something like a structured brain dump, so there are some unanswered questions towards the end as well.

In-band lifetimes is a limited change, and does not feel like it greatly enhances code. However, it also doesn’t hurt much and feels slightly better in many cases.

However, there are numerous edge cases and slight pain points, many having to do with a lack of known standard ways to do things. As such, many of the edge cases are likely to fall away as we develop after stabilization and come up with standard methods to work with the new feature.

The primary work to migrate is essentially just deleting ~all lifetime headers (<'a, 'b, 'c>) across impls and functions. More intensive migration would involve replacing untied/single-use lifetimes with '_ in all cases. This is quite hard to do by hand (though the compiler can likely do it fairly easily).

Lately, I’ve been working implementing the Custom Test Frameworks eRFC for Rust. While exploring the compiler codebase, I’ve learned about the internals of testing in Rust and figured it would be interesting to share.

An edition brings together the features that have landed into a clear package, with fully updated documentation and tooling. By the end of the year we are planning to release the 2018 edition, our first since the Rust 1.0 release. You can currently opt-in to a preview of the 2018 edition to try it out and help test it.

In fact, we really need help testing it out! Once you’ve turned it on and seen its wonderful new features, what then? Here we’ve got some specific things we’d like you to test.

Content-o-Tron is a project to help amplify the lesser heard voices in the Rust community.

We are able to do this by providing editorial assistance and technical reviews of draft blog posts.

Once your blog post is ready to publish, we will ensure it is disseminated through various channels such as Read Rust, MozHacks, social networks and of course the Rust Community’s own blog on community.rs.

It's funny how the universe aligns things: just a few days ago I stumbled upon Rust koans. Koans are an already familiar way of learning and exercising, popularized by Ruby programmers, where you correct tests and make them work. The whole method of learning is similar to reading 'The Little Schemer', a fairly popular book among fellow Lispers. So I'll use this blog post to summarize a few early impressions about Rust, and let me tell you straight away, I am loving it so far!

Welcome to the Rust Review’s bonus post, which I had promised from the very beginning. I’m here to cover the big elephant in the room: Rust vs. Go. Which one is better?

There is no good answer to this question because this comparison is unfounded. I think people tend to bundle the two languages together because they were released at about the same time and the release of Rust felt like a response to the release of Go. Moreover, both languages are supposed to focus on systems software. But they are vastly different, and even as they both target systems software, they target different kinds of such software.

While I was busy doing Rust-unrelated research, RustBelt continues to move and recently found another bug (after a missing impl !Sync that we found previously): It turns out that Arc::get_mut did not perform sufficient synchronization, leading to a data race.

This article will attempt to help you avoid the debate entirely, at least in your Rust projects, by explaining how you can use the rustfmt tool to enforce style guidelines using CI. We’ll start with a brief introduction to rustfmt, so feel free to skip the next section if you are already familiar.

I spent much of my free time over the past year learning Rust, and while it’s been a difficult language to fully grasp (it’s still a work in progress), I find it incredibly rewarding to write in. I also have had many conversations with people who don’t know much about Rust and are curious about the problems it solves.

This is my take on why Rust is important, and why I have fallen in love with the language.

While working with Rust, you will often come across r#"something like this"#, especially when working with JSON and TOML files. It defines a raw string literal. When would you use a raw string literal and what makes a valid raw string literal?
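
A minimal sketch of both forms, using made-up path and JSON values purely for illustration:

```rust
// Raw string literals skip escape processing; the r#"..."# form allows quotes
// inside the literal, which is handy for inline JSON or TOML.
fn main() {
    let windows_path = r"C:\Users\rustacean"; // backslashes need no escaping
    let json = r#"{"name": "ferris", "kind": "crab"}"#; // embedded quotes are fine
    assert!(json.contains(r#""name""#));
    println!("{} {}", windows_path, json);
}
```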

With the recent addition of Rust 1.27.0 in the HaikuPorts repository, I thought it would be good to do a short, public write-up of the current state of Rust on Haiku, and some insight into the future.

This week I decided to do a little hacking on Rust. I thought I’d write down my first impressions of the language.

Programming is hard. Not because our hardware is complex, but simply because we’re all humans. Our attention span is limited, our memory is volatile — in other words, we tend to make mistakes.

Programmers think dynamic languages like Python are easier to use than static ones, but why? I look at uniquely dynamic programming idioms and their static alternatives, identifying a few broad trends that impact language usability.

Recent nightly releases provide an opt-in llvm-tools rustup component which you can install using the command: rustup component add llvm-tools. This component contains the following LLVM tools: llvm-nm, llvm-objcopy, llvm-objdump, llvm-profdata, and llvm-size. Most of these tools are LLVM alternatives to GNU binutils. The main advantage of these LLVM tools is that they support all the architectures that the Rust compiler supports.

One thing I have come to appreciate over time in the design of Servo is the concurrency story. Basically, it’s pretty much all done using channels (and their multi-process counterpart).

What is so great about channels vs shared mutable state? One thing is, it makes it easier to reason about how various threads will synchronize their behavior as they go on about their business.

The way it’s done in Servo is by combining event-loops with multi-threading/processing. What does that mean?

I recently found a bug in mutagen: The “exchange arguments” mutation was actually ineffective. I was in the process of refactoring the code to pull coverage reporting into the mutagen calls (to reduce the amount of code generated), so the report_coverage call was to go away anyway. Except this bug masked another, more insidious one: when I refactored, I found that one of the tests would no longer compile; methods with self arguments ran into error E0424 (self keyword used in static method). Consider me confused.

As promised in the previous article (thanks for all the valuable feedback ‒ I didn’t have the time to act on it yet, but I will), this talks about Unix signal handling.

Long story short, I wasn’t happy about the signal handling story in Rust and this is my attempt at improving it.

The release delivers native Eclipse IDE experiences for Rust and C# through Language Server based plugins. The Language Server Protocol (LSP) ecosystem delivers editing support for popular and emerging programming languages. Combined with the move to a quarterly rolling release cadence, the LSP focus demonstrates a commitment to keeping pace with evolving developer and commercial needs.

Many of the candidates we interview for a position at PassFort are intrigued by the fact that we use Rust, a language which is only three years old (since its 1.0 release).

Despite its relatively young age, Rust has been voted the “most loved” language in the StackOverflow developer survey every one of those three years - an impressive feat!

However, it’s not enough for a language to be well liked: the programming ecosystem changes rapidly, and many of these developers are rightly afraid to jump blindly onto the latest bandwagon. We chose Rust not because it is popular, but because we believe it is the best tool for the job we have to do, and I hope to explain that reasoning now.

Channels are a useful concurrency primitive that enable separate processes to safely communicate without the need for explicit synchronization. The term processes is used here to loosely describe independent threads of execution within a program. This can be an OS level thread or a runtime level thread. Channels can be seen as a pipe to connect these processes and allow them to share memory with one another. For example a program could spawn any number of processes along with a channel to transmit results that it gathers.
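
A minimal sketch of that pattern using the standard library's mpsc channel (illustrative only, not the post's code):

```rust
// A sketch of the channel-as-pipe idea: workers send results back over an
// mpsc channel instead of mutating shared state.
use std::sync::mpsc;
use std::thread;

fn main() {
    let (tx, rx) = mpsc::channel();

    for id in 0..4 {
        let tx = tx.clone();
        thread::spawn(move || {
            tx.send(id * id).expect("receiver should still be alive");
        });
    }
    drop(tx); // close the last sender so the receiver knows when to stop

    let mut results: Vec<i32> = rx.iter().collect();
    results.sort();
    assert_eq!(results, vec![0, 1, 4, 9]);
}
```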

Much has been written about fuzzing compilers already, but there is not a lot that I could find about fuzzing compilers using more modern fuzzing techniques where coverage information is fed back into the fuzzer to find more bugs.

If you know me at all, you know I'll throw anything I can get my hands on at AFL. So I tried gcc. (And clang, and rustc -- but more about Rust in a later post.)

This is a story of a tiny feature I was missing in Rust… so I created it (partly because I like the feature, because it felt wrong for Rust not to have it, but mostly for the practice and fun of beating a hard and interesting problem). You can read the story if you are interested in the behind-the-scenes, the feature itself, how to use it, or just for fun ☺.

The Rust teams have been working hard to implement features of the 2018 edition. Today we have reached an important milestone: we are announcing that we have an alpha-quality preview of the 2018 edition ready for testing and feedback.

The preview presents a great opportunity for those of you using the stable channel to switch to nightly and try out how it feels to code in the new edition, both to help us fix bugs and to provide feedback – positive and negative – on features. Unfortunately, today’s nightly doesn’t work due to infrastructure issues, so you’ll need to run rustup install nightly-2018-06-20 in order to get a nightly that’ll work. If you’re already on the nightly channel, it’s likely that there’s no need to update the compiler.

In this part of the review, I would like to focus on Rust’s ecosystem: in other words, how Rust plays with other parts of a functioning system and how Rust’s standard library vs. external libraries interact with each other. There are a lot of pieces to cover in these areas and they have left me with mixed feelings. Let’s look at some.

This release has two big language features that people have been waiting for: SIMD, and dyn Trait. Additionally there is support for searching the Rust books, and a new book about rustc.

I consider Rust’s RFC process one of our great accomplishments, but it’s no secret that it has a few flaws. At its best, the RFC offers an opportunity for collaborative design that is really exciting to be a part of. At its worst, it can devolve into bickering without any real motion towards consensus. If you’ve not done so already, I strongly recommend reading aturon’s excellent blog posts on this topic.

The RFC process has also evolved somewhat organically over time. What began as “just open a pull request on GitHub” has moved into a process with a number of formal and informal stages (described below). I think it’s a good time for us to take a step back and see if we can refine those stages into something that works better for everyone.

“The Rust Programming Language” is one of the free books that the community has put together to teach the language. The book does a good job in general, but there are some things that could be better. Let’s cover these, but first, some background.

It is very straightforward to get Rust projects to build within a CI environment. This post is going to take that build process one small step further, we’re going to build a Rust project that uses the Diesel ORM. This adds a step of complexity since to compile a Diesel project you need to have a postgresql database accessible if you’re using the infer_schema!() macro.

TL;DR I decided to learn Rust on my nth attempt. Writing small programs helped me get stuff done. I converted a Java gRPC service into Rust for comparison. I'm super-impressed with Rust's low CPU and memory footprint.

In Rust, traits are a powerful tool to use polymorphism, both static and dynamic. I’m going to skip the basics about the traits and just link to another blog post with a good explanation about static and dynamic dispatch in Rust: Traits and Trait Objects in Rust.

Instead, I would like to do an experiment of making dynamic dispatch even more dynamic! Like in Java.

A commonly-acclaimed feature of Rust is its match keyword: a “conditional on steroids”. match lets you take the value of an expression and compare it against a bunch of values—or, more generally, patterns.

As you write and read Rust, you will notice that this keyword is used everywhere because it’s the way to access certain types, like Option values or error codes.
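
As a small illustration (not taken from the post), matching on an Option both selects the variant and binds its contents:

```rust
// `match` tests the shape of a value and binds its contents in one step.
fn describe(value: Option<i32>) -> String {
    match value {
        Some(0) => "zero".to_string(),
        Some(n) if n < 0 => format!("negative: {}", n),
        Some(n) => format!("positive: {}", n),
        None => "nothing".to_string(),
    }
}

fn main() {
    assert_eq!(describe(Some(-3)), "negative: -3");
    assert_eq!(describe(Some(5)), "positive: 5");
    assert_eq!(describe(None), "nothing");
}
```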

I’ve been getting a lot of questions about the status of “Non-lexical lifetimes” (NLL) – or, as I prefer to call it these days, the MIR-based borrow checker – so I wanted to post a status update.

The single most important fact is that the MIR-based borrow check is feature complete and available on nightly. What this means is that the behavior of #![feature(nll)] is roughly what we intend to ship for “version 1”, except that (a) the performance needs work and (b) we are still improving the diagnostics.

Last week I tweeted "What do you think are the most interesting/exciting projects using Rust? (No self-promotion :-) )". The response was awesome! Jonathan Turner suggested I write up the responses as a blog post, and here we are.

Rust resembles a functional language in many ways although it does not claim to be one. In fact, I have been thinking of Rust as a “pragmatic Haskell” or as a “well-balanced mixture between C++ and Haskell”.

One of the ways the functional aspects show up is via expressions and how pretty much any construct in Rust can be treated as an expression. But before we begin, a little warning: the examples below are, by no means, idiomatic Rust—I just hope they are simple enough to illustrate what I want to show.
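
A minimal sketch of that expression orientation, with deliberately simple examples in the same spirit:

```rust
// Blocks, `if`, and `match` are all expressions, so they can produce the
// right-hand side of a binding directly.
fn main() {
    let x = 7;

    let parity = if x % 2 == 0 { "even" } else { "odd" };

    let magnitude = match x {
        0 => "zero",
        1..=9 => "single digit",
        _ => "something bigger",
    };

    let doubled = {
        let y = x * 2;
        y // the last expression of a block is its value
    };

    println!("{} is {}, {}; doubled = {}", x, parity, magnitude, doubled);
}
```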

One of Go's big selling points for me was its novel approach to JSON encoding. Learning about Rust's encoding has made me even more excited. In this post, we'll start with Go's JSON encoder, and then see how Rust does encoding. And we'll even throw in some YAML!

I’ve been really confused lately about Rust’s trait objects. Specifically when it comes to questions about the difference between &Trait, Box<Trait>, impl Trait, and dyn Trait.

I briefly demonstrate how to use procedural macros to automatically perform type coercion in Rust, mimicking the behavior of dynamic languages.

Last week, I wrote a post in which I discussed some of the things that I learned about Rust concurrency. One of the things that I pointed out was that when you spawn a thread within another thread, they both have the main process as their parent.

Recently I needed to run a simple SQL query on a Postgres database and produce a one-off report. I could have done this in 5 minutes using Ruby and ActiveRecord. Instead, I decided to use Rust and Diesel – a language and a tool I hadn’t used before. Instead of 5 minutes it took several hours, but I learned something new. I’ve written up the steps I took here today. Get your mind’s exercise for today and read on to learn how to execute a SQL statement using Rust.

C is almost 50 years old, and C++ is almost 40 years old. While age is usually indicative of mature implementations with decades of optimization under their belts, it also means that the language's feature set is mostly devoid of modern advancements in programming language design. For that reason, you see a great deal of encouragement nowadays to move to newer languages - they're designed with contemporary platforms in mind, rather than working within the limitations of platforms like the PDP-11. Among said "new languages" are Zig, Myrddin, Go, Nim, D, Rust.. even languages like Java and Elixir that run on a virtual machine are occasionally suggested as alternatives to the AOT-compiled C and C++.

I have plans to look into the characteristics that distinguish each and every one of these new programming languages, learning them and documenting my first impressions in the form of blog posts. This post is the beginning of that adventure: my first impressions of Rust.

Writing Rust code is not restricted to programming gurus—but there is no denying that the learning curve is steeper than that of other languages. Or is it? In this post, I'll try to convince you that the curve does feel steep, but it isn't when taken into perspective.
Let's first start by stating that learning a language is not the same as learning its syntax. Learning a language involves learning the syntax, of course, but it also involves familiarizing oneself with its common idioms and gaining a good sense of what the standard libraries provide.

How to get Rust to execute OCaml code and libraries with zero-cost by sharing C libraries. Call OCaml functions from Rust through ffi, fast!

The one thing that blew my mind about Rust is its approach to data sharing in concurrent situations.

I had always thought of mutexes as something that is easy to get wrong and was convinced that using a RAII pattern to ensure lock leaks never happen (like with Abseil’s MutexLock) was the panacea. (I’m a fan of RAII in C++ by the way, in case you haven’t noticed.)

We’re not allowed to have a type parameter that goes unused. If we want to have a type that looks like the one above we have to add a marker to it like so: struct Tagged<T>(usize, PhantomData<T>);
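
A minimal self-contained sketch of that marker pattern; the User and Order tag types are hypothetical, added only to show why the unused parameter is worth keeping:

```rust
// PhantomData lets the otherwise unused parameter T act as a compile-time tag.
use std::marker::PhantomData;

struct Tagged<T>(usize, PhantomData<T>);

struct User;
struct Order;

fn main() {
    let user_id: Tagged<User> = Tagged(1, PhantomData);
    let order_id: Tagged<Order> = Tagged(1, PhantomData);
    // The two ids share a representation but are distinct types, so they
    // cannot be mixed up in function signatures that expect one or the other.
    println!("{} {}", user_id.0, order_id.0);
}
```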

This patch release fixes a bug in the borrow checker verification of match expressions. This bug was introduced in 1.26.0 with the stabilization of match ergonomics. Specifically, it permitted code which took two mutable borrows of the bar path at the same time.

We’ve recently been making lots of progress on future plans for clippy and I thought I’d post an update.

Last week, I started learning Rust, and published a post about the “ownership” system. One of the places where Rust’s ownership system really shines is in threading and concurrency. Kevin and I decided to dig into this more on Friday, and did some work on the dining philosophers problem.

In this post I’ll be covering what we learned, and how the Rust compiler saves you from some scary concurrency issues.

Last time, we introduced the idea of async methods, and talked about how they would be implemented: as a kind of anonymous associated type on the trait that declares the method, which corresponds to a different, anonymous future type for each implementation of that method. Starting this week we’re going to look at some of the implications of that. The first one we’re going to look at is object safety.

Similarly to the previous post, we will once again add types to the Rust code which works perfectly fine without them. This time, we’ll try to improve the pervasive pattern of using indexes to manage cyclic data structures.
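
A minimal sketch of the index-based pattern being referred to (a hypothetical NodeId newtype, not the post's code):

```rust
// Nodes live in a Vec; edges are typed indices, so cycles need no Rc/RefCell.
#[derive(Clone, Copy, PartialEq, Eq, Debug)]
struct NodeId(usize);

struct Node {
    value: i32,
    next: Option<NodeId>,
}

struct Graph {
    nodes: Vec<Node>,
}

impl Graph {
    fn add(&mut self, value: i32) -> NodeId {
        self.nodes.push(Node { value, next: None });
        NodeId(self.nodes.len() - 1)
    }

    fn link(&mut self, from: NodeId, to: NodeId) {
        self.nodes[from.0].next = Some(to);
    }
}

fn main() {
    let mut g = Graph { nodes: Vec::new() };
    let a = g.add(1);
    let b = g.add(2);
    g.link(a, b);
    g.link(b, a); // a cycle, expressed with plain indices
    assert_eq!(g.nodes[a.0].value, 1);
    assert_eq!(g.nodes[b.0].next, Some(a));
}
```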

A lot of people at RustFest Paris mentioned Cows – which may be surprising if you’ve never seen std::borrow::Cow!

Cow in this context stands for “Clone on Write” and is a type that allows you to reuse data if it is not modified. Somehow, these bovine super powers of Rust’s standard library appear to be a well-kept secret even though they are not new. This post will dig into this very useful pointer type by explaining why in systems programming languages you need such fine control, explain Cows in detail, and compare them to other ways of organizing your data.
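
A minimal sketch of the reuse-unless-modified behaviour described above, assuming nothing beyond std::borrow::Cow:

```rust
// Borrow when nothing changes, allocate only when a modification is needed.
use std::borrow::Cow;

fn remove_spaces(input: &str) -> Cow<'_, str> {
    if input.contains(' ') {
        Cow::Owned(input.replace(' ', ""))
    } else {
        Cow::Borrowed(input) // no allocation, the caller's data is reused
    }
}

fn main() {
    assert!(matches!(remove_spaces("nospace"), Cow::Borrowed(_)));
    assert!(matches!(remove_spaces("has space"), Cow::Owned(_)));
}
```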

Aaaah, the borrow checker: the dreaded enemy lurking within the Rust compiler, ready to make its move to bring pain to your life by preventing your code from compiling. Or that’s what everyone seems to say, which is one of the reasons I put off learning Rust for so long. In reality… the borrow checker is a blessing, but it is true that getting past its gates is difficult at first.

What happens when a data collection is copied and then the new copy is changed? Does the original remain the same, or does it change too?

If you think of copying as creating a completely new object, of course you expect that any change to the new copy does not affect the original object. But if you think of copying as creating a new name for the same, single object, then you expect that any change to the object through the new name appears also when you access the same object through the old name.

Let's see how Python, Javascript, Java, C++, and Rust behave with respect to the assignment operator ("=") between collection variables.

python javascript java cplusplus
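
On the Rust side of that comparison, a minimal sketch of how assignment moves a collection rather than aliasing it (illustrative, not from the post):

```rust
// Assigning a Vec in Rust moves it: there is no second name silently aliasing
// the same collection, and deep copies have to be requested with clone().
fn main() {
    let original = vec![1, 2, 3];
    let copy = original.clone(); // explicit, independent copy
    let moved = original;        // `original` can no longer be used after this

    // println!("{:?}", original); // error[E0382]: borrow of moved value

    println!("{:?} {:?}", copy, moved);
}
```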

I first came across Rust back in 2010 or 2011, and it was a very different language than the one it is today, both syntactically and semantically. I remember at the time that newcomers would often complain loudly about the terse keywords—like the fact that the return keyword had been shortened to ret—and the omnipresent tildes scattered throughout the language like fallen leaves in autumn. My programming background was in functional languages—specifically in Scheme and Haskell—and I found this language fascinating, sitting in an interesting and unexplored place in the spectrum of programming languages and bringing something genuinely new to the table.

Is it possible to find something in a hashmap if the key you are looking for is not exactly the same as the one you put into that hashmap? At first glance, this might not make any sense at all. The whole purpose of a hashmap is to store something under some key and then look it up using the same key. Right?
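
One mechanism that makes this possible in std (which may or may not be the one the post settles on) is the Borrow bound on HashMap::get, sketched minimally here:

```rust
// HashMap::get accepts any Q where the key type implements Borrow<Q>, so a map
// keyed by String can be queried with a plain &str.
use std::collections::HashMap;

fn main() {
    let mut scores: HashMap<String, u32> = HashMap::new();
    scores.insert("ferris".to_string(), 100);

    assert_eq!(scores.get("ferris"), Some(&100)); // &str lookup, String keys
    assert_eq!(scores.get("unknown"), None);
}
```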

I spent pretty much the whole day banging my head against the wall trying to figure out how ownership and borrowing work in Rust, and finally have a grasp on what’s going on.

In this post I’m going to demonstrate how these concepts work through some examples of code that break Rust’s rules, and explain why they’re problematic. I assume very little knowledge of the Rust programming language. I’ve also added comments to all of the code blocks that indicate whether the code is valid Rust or not.

ownership-and-borrowing

Async/await continues to move along swimmingly. We’ve accepted an RFC describing how the async/await syntax will work in Rust, and work is underway on implementing support for it in the compiler. We’re hopeful that users will be able to start experimenting with the syntax on nightly by early July.

The RFC for async/await didn’t address one important thing: async methods. It is very important for people defining libraries to be able to define traits that contain async functions, like this:

I just failed to implement what looked to be a relatively simple opportunistic replacement so that the compiler would accept the mutated code. But I’m getting ahead of myself.

Let’s start the deep dive by looking into a powerful feature of Rust: all variables and references are immutable by default unless qualified with mut.
To understand why this is important, let’s cover some context first. One of my pet peeves when reviewing C++ code is to ask authors to sprinkle the const qualifier everywhere: if something ain’t mutated, say so explicitly. This includes marking local variables, function arguments, function return values, class attributes, etc.
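
A minimal sketch of that default, assuming nothing beyond the standard toolchain:

```rust
// Bindings are immutable by default; mutation is an explicit opt-in via `mut`.
fn main() {
    let frozen = 1;
    // frozen = 2; // error[E0384]: cannot assign twice to immutable variable

    let mut counter = 1;
    counter += 1;
    println!("{} {}", frozen, counter);
}
```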

When building some very basic tool programs I'd probably not even think about threading in C, but here it is so easy that I've been quick to drop a typical 30ms loop down to 3.5ms. One of the things I've been somewhat missing is easy access to SIMD intrinsics, but this brings me to something else I've been enjoying this year: Rust is evolving.

A couple of issues were found in 1.26.0 which were deemed sufficient for a patch release.

A quick summary of the changes:

RLS no longer interferes with command line builds
Rustfmt stopped badly formatting text in some cases
Returning from main via impl Trait where the Trait is not Termination is no longer permitted
::<> (turbofish) no longer works for method arguments whose type is impl Trait
NaN > NaN no longer returns true in const contexts
rustup should no longer fail due to missing documentation on some platforms

Beware that at any point the code here may stop compiling, segfault, and otherwise behave in weird ways, some of which involve Velociraptors.

Now that that’s out of the way, what is a fat pointer anyway? All pointers are the same right? Just a number indicating an address in memory. Well, yes and no.
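
A minimal sketch of the difference, using size_of to show the extra metadata carried by slice and trait-object references:

```rust
// References to slices and trait objects carry metadata (length or vtable
// pointer) next to the address, making them twice the size of a thin pointer.
use std::mem::size_of;

fn main() {
    assert_eq!(size_of::<&u8>(), size_of::<usize>());
    assert_eq!(size_of::<&[u8]>(), 2 * size_of::<usize>());
    assert_eq!(size_of::<&dyn std::fmt::Debug>(), 2 * size_of::<usize>());
    println!("thin and fat pointer sizes check out");
}
```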

As the co-author of Go in Practice, I have felt a certain obligation to Go. But I'm ready for a change. Rust topped the satisfaction survey in Stack Overflow's survey of languages (screenshot above). I've decided to give it a try. While Go and Rust are often compared, they are remarkably different languages.

Coming from a Go background, there are things about Rust that feel very natural, and things (like memory management) that feel utterly foreign. And so as I learn Rust, I am cataloging how it feels for a Go programmer. And rather than leading others to "dive in at the deep end" as I did (when I tried to write a full web service), I decided to approach Rust by starting with similarities and working toward differences.

I had been meaning to learn Rust since I first toyed with Go a couple of years ago. During this period, I’ve written a non-trivial amount of Go code both inside and outside Google, but never found the chance to sit back and learn Rust.

This changed a month ago during my yearly family trip to Korea. This time around, I decided upfront that I would not work on any personal or work projects for the 2-week long vacation. Instead, I would focus all spare time in reading. And I would read “The Rust Programming Language”, second edition. The plan worked: getting through the book took the two weeks and I barely wrote any code.

In this post, I go through how I added the first automated fuzz test for my hobby project Hat — a snapshotting backup system written in Rust. I’ll briefly go through what a fuzz test is and how it works. In a follow-up post, I will share how I made the test more effective by running it through Seasoned Software.

In this post, I’ll talk about a pattern for extracting values from a weakly typed map. This pattern applies to all statically typed languages, and even to dynamically typed ones, but the post is rather Rust-specific.

In Rust 1.26 a new feature called impl Trait was stabilized. How does it work? Instead of specifying an exact type, you can say that your function either returns or takes something that implements a trait.
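
A minimal sketch of impl Trait in argument and return position (illustrative, not the post's example):

```rust
// impl Trait in argument position accepts "anything implementing the trait";
// in return position it hides the concrete iterator type from the caller.
fn print_all(items: impl IntoIterator<Item = i32>) {
    for item in items {
        println!("{}", item);
    }
}

fn evens_up_to(n: i32) -> impl Iterator<Item = i32> {
    (0..=n).filter(|x| x % 2 == 0)
}

fn main() {
    print_all(vec![1, 2, 3]);
    let evens: Vec<i32> = evens_up_to(10).collect();
    assert_eq!(evens, vec![0, 2, 4, 6, 8, 10]);
}
```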

Rust doesn’t allow you to move out of a value whose type implements Drop, and this is quite logical. When Foo::take returns, because self goes out of scope, it must call its Drop::drop implementation. If you have moved out of it – both the a: A and b: B fields – the Drop::drop implementation is now complete UB. So Rust is right here and doesn’t allow you to do this.

But imagine that we have to do this. For instance, we need to hand over both the scarce resources a and b to another struct (in our case, a (A, B), but you could easily imagine a better type for this).

There’s a way to, still, implement Foo::take with Foo implementing Drop. Here’s how:
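
One common workaround, which may or may not be the exact technique the post shows, is to wrap the fields in Option so they can be taken out while Drop still runs; a minimal sketch:

```rust
// The fields live in Options, so `take` can move them out while the Drop impl
// still runs and simply finds nothing left to release.
struct A;
struct B;

struct Foo {
    a: Option<A>,
    b: Option<B>,
}

impl Foo {
    fn take(mut self) -> (A, B) {
        (self.a.take().unwrap(), self.b.take().unwrap())
    }
}

impl Drop for Foo {
    fn drop(&mut self) {
        if self.a.take().is_some() { /* release a here */ }
        if self.b.take().is_some() { /* release b here */ }
    }
}

fn main() {
    let foo = Foo { a: Some(A), b: Some(B) };
    let (_a, _b) = foo.take(); // both resources handed over before Foo drops
}
```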

I built a little tool in Rust to convert an Evernote export file to Markdown. It was impressively easy.

I've been wanting to write a simple Android game for my daughter, and decided to use it as an excuse to learn Rust. Thus began an odyssey.

I'll ignore the game itself in this post in favour of describing how to get a simple Rust on Android game environment. For my game I didn't want anything fancy - I wanted to load some jpg files and blit rectangles from those textures to the screen. But I don't know OpenGL, and I don't really feel the need to learn for this project - if I hit the need to use a shader, then I backtracked and tried another approach. The plan was to get a simple, high-level graphics API for Rust running on Android.

Earlier this month I told you about my pet project in Rust.

As a reminder, it’s a tool named rusync which contains some of the functionality offered by the rsync command-line tool.

Today I’d like to talk about a feature I’ve added recently, and take this opportunity to show you a few principles of good design along the way.

SQL injection vulnerabilities have been a plague ever since such databases have been combined with user facing applications. Such vulnerabilities arise when a SQL query string is naively combined with data that is controlled by an attacker.

To mitigate, people should make use of placeholders and prepared statements provided by SQL client libraries. This separates the variable data from the actual query, ensuring that these two never mix. Pretty much all modern SQL client libraries offer this functionality, but of course, it’s still possible to mix variable data and SQL by means of string concatenation.
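
A minimal sketch of the placeholder approach, assuming the rusqlite crate as one example of such a client library (the exact params API varies between versions):

```rust
// A sketch with rusqlite (one client library offering placeholders); the data
// travels separately from the SQL text, so it can never be parsed as SQL.
use rusqlite::{params, Connection, Result};

fn main() -> Result<()> {
    let conn = Connection::open_in_memory()?;
    conn.execute(
        "CREATE TABLE user (id INTEGER PRIMARY KEY, name TEXT NOT NULL)",
        params![],
    )?;

    // Attacker-controlled input stays plain data thanks to the ?1 placeholder.
    let name = "Robert'); DROP TABLE user; --";
    conn.execute("INSERT INTO user (name) VALUES (?1)", params![name])?;

    let count: i64 = conn.query_row("SELECT COUNT(*) FROM user", params![], |row| row.get(0))?;
    assert_eq!(count, 1);
    Ok(())
}
```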

I have been editing my FizzBuzz repository since 2014. After four years, I was finally able to switch from nightly to stable due to the 1.26 release. Let’s back up a little bit and appreciate the changes since the first revision.

Three years ago today, the Rust community released Rust 1.0 to the world, with our initial vision of fearless systems programming. As per tradition, we’ll celebrate Rust’s birthday by taking stock of the people and the product, and especially of what’s happened in the last year.

The past few releases have had a steady stream of relatively minor additions. We’ve been working on a lot of stuff, however, and it’s all starting to land in stable. 1.26 is possibly the most feature-packed release since Rust 1.0.

The last few weeks have been interesting for warmy, a crate I’ve been writing for several weeks / months now that enables you to hot-load and reload scarce resources – e.g. textures, meshes, configuration, JSON parameters, dependency nodes, whatever. warmy received several interesting features.

I practice Barbarian Leadership. Standing on the back lines with hand in pocket giving orders is missing the fun. More importantly, knowledge is created on the cutting edge of action. People you work with know this. They value modern, non-hierarchical organizations where a leader dives into the fray, sword in hand, and gets to know intimately the problems and tools for solving them. So I dove into the fight despite knowing that it is “hard language”.

This is a post about an annoying Rust pattern and an annoying workaround, without a good solution :)

One of my issues when building a project from scratch is the number of manual steps required to be able to run a simple project. This is not so much a concern when you are a single developer, but the larger the team, the more obvious it becomes.

I am currently working on a refactor of the Rust implementation of Apache Arrow to change the way that arrays are represented. This is a relatively large change even though this is a tiny codebase so far and I thought it would be good to write up this blog post to explain why I think this is needed. I think this information will also be interesting for any Rust developer who is struggling with making the right choice between (or using the right combination of) enums, structs, generics and traits. I was inspired to write this up after reading this blog post that was posted to Reddit just a few days ago.

Procedural macros are a really powerful language feature in Rust and something I haven’t seen in many other languages.

There are a heap of tutorials out there for procedural macros, including in The Rust Reference, and the first edition of the Rust Book. One of the more entertaining (and useful) posts is by Zach Mitchell where you get to “learn Rust procedural macros with Nic Cage”.

I won’t go into depth about what procedural macros are and why they’re so powerful.

How Mozilla’s new language dramatically improved our server-side performance.

Recently I gave a talk at our Rust Meetup about mutagen, and I also showed how our opportunistic mutations work (I however left out that gnarly thing about shifts, but in my defense I was short on time). That got me thinking whether we always do the right thing elsewhere.

18 months ago I wrote about some work I did to speed up the Rust compiler (rustc). I’ve recently taken this work up again. Also, in the meantime rustc’s build system has been replaced and its benchmark suite has been overhauled. So it’s a good time for an update.

I wanted to use Rust on an offline Linux system, but it seemed like there isn’t a nice guide to install Rust and some popular packages all in one go (like Anaconda, though what I describe here is much more ghetto), so I decided to summarize the procedure to install the Rust toolchain and some popular libraries on a system with no internet access.

This contains shorthand URLs for navigating to Rust documentation.

Ever since the Rust All Hands, I’ve been experimenting with an alternative formulation of the Rust borrow checker. The goal is to find a formulation that overcomes some shortcomings of the current proposal while hopefully also being faster to compute. I have implemented a prototype for this analysis. It passes the full NLL test suite and also handles a few cases – such as #47680 – that the current NLL analysis cannot handle. However, the performance has a long way to go (it is currently slower than existing analysis). That said, I haven’t even begun to optimize yet, and I know I am doing some naive and inefficient things that can definitely be done better; so I am still optimistic we’ll be able to make big strides there.

A tale of my time in Rust-land

A month ago, I wrote about how I was frustrated with my progress in Rust. These days, I’m still no expert, but I’ve made progress.

From team structure and annual surveys to RFCs and the release process, a staff research engineer on Mozilla’s Rust team shares what it takes.

The networking working group is pushing hard on async/await notation for Rust, and @withoutboats in particular wrote a fantastic blog series working through the design space. I wanted to talk a little bit about some of the implications of async/await, which may not have been entirely clear. In particular, async/await is not just about avoiding combinators; it completely changes the game for borrowing.

ownership-and-borrowing async

Chucklefish, an independent game studio based in London, publishes hit video games like Stardew Valley and Starbound. Now, the company is developing its next game, code-named Witchbrook, using the Rust programming language instead of C++. Why the switch? Two main reasons: to get better performance on multiprocessor hardware and to have fewer crashes during game play.

With the latest GIT version of the Rust bindings for GLib, GTK, etc it is now possible to make use of the Rust futures infrastructure for GIO async operations and various other functions. This should make writing of GNOME, and in general GLib-using, applications in Rust quite a bit more convenient.

The Rust programming language provides powerful guarantees around memory and thread safety. It also exposes all the knobs required for implementing custom rules, enabling a project to make additional guarantees and enforce opinions on best practice. Embedded standards are very opinionated about software practices—like using floating point values as loop counters or the number of possible exit points of a function—and Rust’s defaults don’t prevent every runtime panic (for example, recursion that goes too deep and overflows the stack).

For PolySync, a runtime panic means the potential for an unsafe situation on the road, and with that in mind, we’ve explored ways to restrict that potential. Of course, we aren’t the only ones thinking about ways to improve the quality of code at compile time by enforcing the right rules for the job. Active projects like rust-clippy are working to do that too by providing lints to supplement the rustc defaults.

In this post we’ll explore how to enforce a rule by prohibiting a practice we’ve formed an opinion about, the indexing of a vector or an array.
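
One way to enforce that kind of rule today (not necessarily the post's own tooling) is clippy's indexing_slicing restriction lint combined with the fallible get accessor; a minimal sketch:

```rust
// Denying clippy's indexing_slicing restriction lint (checked when the crate
// is built with clippy) and reaching for the fallible get() accessor instead.
#![deny(clippy::indexing_slicing)]

fn main() {
    let values = vec![10, 20, 30];

    // values[10] would trip the lint, and would panic at runtime anyway.
    match values.get(10) {
        Some(v) => println!("found {}", v),
        None => println!("index out of range, handled without panicking"),
    }
}
```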

This post is about the process of transforming something you would write as a one-off script in Python (or any other scripting language) into a library including error handling.

This is a bit late (how is it the middle of April already?!), but the dev-tools team has lots of exciting plans for 2018 and I want to talk about them! Here's a summary of our goals for the year. The first: ship it! We want to ship…

In this post, we will implement multiprocessing.pool.ThreadPool from Python in Rust. It represents a thread-oriented version of multiprocessing.Pool, which offers a convenient means of parallelizing the execution of a function across multiple input values by distributing the input data across processes. We will use an existing thread-pool implementation and focus on adjusting its interface to match that of multiprocessing.pool.ThreadPool.
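
A minimal sketch of the general shape, assuming the threadpool crate as the existing implementation and a hypothetical map helper that is not the post's actual interface:

```rust
// A sketch only: the `threadpool` crate provides the pool, and this hypothetical
// `map` helper mimics Python's ThreadPool.map by collecting results over a channel.
use std::sync::mpsc;
use threadpool::ThreadPool;

fn map<F, T, R>(workers: usize, f: F, inputs: Vec<T>) -> Vec<R>
where
    F: Fn(T) -> R + Copy + Send + 'static,
    T: Send + 'static,
    R: Send + 'static,
{
    let pool = ThreadPool::new(workers);
    let (tx, rx) = mpsc::channel();
    let n = inputs.len();

    for (i, item) in inputs.into_iter().enumerate() {
        let tx = tx.clone();
        pool.execute(move || {
            // Tag each result with its index so the output keeps the input order.
            tx.send((i, f(item))).expect("receiver should be alive");
        });
    }

    let mut results: Vec<(usize, R)> = rx.iter().take(n).collect();
    results.sort_by_key(|&(i, _)| i);
    results.into_iter().map(|(_, r)| r).collect()
}

fn main() {
    let squares = map(4, |x: i64| x * x, (0..8).collect());
    assert_eq!(squares, vec![0, 1, 4, 9, 16, 25, 36, 49]);
}
```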

I’ve decided to do a little series of posts about Rust compiler errors. Each one will talk about a particular error that I got recently and try to explain (a) why I am getting it and (b) how I fixed it. The purpose of this series of posts is partly to explain Rust, but partly just to gain data for myself. I may also write posts about errors I’m not getting – basically places where I anticipated an error, and used a pattern to avoid it. I hope that after writing enough of these posts, I or others will be able to synthesize some of these facts to make intermediate Rust material, or perhaps to improve the language itself.

Surprisingly few know about the built-in pretty-printer. In the book, there is only a short passage that mentions {:#?} in passing. It aligns structs and enums based on nested positions and is automatically derived with Debug.
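
A minimal sketch of the difference between {:?} and {:#?}:

```rust
// {:?} prints a Debug value on one line; {:#?} pretty-prints it with
// indentation that follows the nesting of the structure.
#[derive(Debug)]
struct Point {
    x: i32,
    y: i32,
}

fn main() {
    let p = Point { x: 1, y: 2 };
    println!("{:?}", p);  // Point { x: 1, y: 2 }
    println!("{:#?}", p); // spread across multiple indented lines
}
```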

As an open source distributed NewSQL Hybrid Transactional/Analytical Processing (HTAP) database, TiDB contains the most important asset of our customers--their data. One of the fundamental and foremost requirements of our system is to be fault-tolerant. But how do you ensure fault tolerance in a distributed database? This article covers the top fault injection tools and techniques in Chaos Engineering, as well as how to execute Chaos practices in TiDB.

Over the month of March 2018, we've been accepting responses to the Rust CLI Survey. This survey was designed to give us some areas of focus, according to the community, for the CLI Working Group (CLI-WG).

One of the goals of Rust 2018 is to make writing command line applications in Rust as frictionless (and fun!) as possible. And we are super excited to say: we've received 1,045 responses! The results, while varied, paint a pretty clear picture for tangible goals.

Last week I fell down a rather interesting rabbit hole in Rust, which was basically me discovering a series of quirks of the Rust compiler/language, each one leading to the next when I asked “why?”

Software errors in safety-critical systems can have severe consequences: property-loss, environmental devastation, injury, or death. Despite the severity of these risks, software continues to be written for safety-critical applications in languages that permit common classes of failures, such as undefined behavior, state corruption, and unexpected termination. One such language is C. Language standards that define allowable subsets (e.g. MISRA) and static analysis tools are often used in an attempt to ameliorate these failures by detecting them in the program code before they result in a critical issue at runtime. These traditional methods are ultimately insufficient when it comes to providing ahead-of-time assurances about safe runtime behavior for safety-critical applications. Alternative approaches must be considered.

Rust has some special syntax for ‘diverging functions’, which are functions that do not return.
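
A minimal sketch of that syntax, the ! return type, and of why it is useful (the never type coerces into any other type):

```rust
// The `!` ("never") type marks a function that does not return to its caller.
fn fail(msg: &str) -> ! {
    panic!("{}", msg)
}

fn main() {
    // Because `!` coerces to any type, a diverging call can fill in either
    // branch of an expression that otherwise must produce an i32.
    let n: i32 = if std::env::args().count() > 0 {
        1
    } else {
        fail("no arguments at all?")
    };
    println!("{}", n);
}
```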

Last week (sigh, the week before last now) we held an 'all-hands' event in Berlin. It was a great event - fantastic to meet so many Rust people in real life and really energising to see how much is being planned and implemented. In this post I want to summarise some of the important dev-tools stuff that happened. Our planning and notes from some meetings are in the dev-tools team repo.

When I finally implemented opportunistic mutations in mutagen, everything seemed fine until my co-maintainer gnieto found a problem. Code failed to compile with the mutagen plugin, something that should never happen as long as the code in question compiles without the plugin. We not only broke the code – we broke the build.

A document describing how (in my opinion) C++’s and Rust’s definitions of object instance differ.

Despite having experience with a wide range of computer languages, including C++ and Haskell (both strong influences on Rust's design), I found Rust hard to learn. Sometimes I grind my teeth about something the compiler doesn't let me do. Despite that, I didn't put ergonomics as a wish in any poll. In fact, if I were to take a poll right now, I'd probably be against further ergonomics initiatives.

I’m really excited to announce the culmination of much of our work over the last four months: a pair of RFCs for supporting async & await notation in Rust. This will be very impactful for Rust in the network services space. The change is proposed as two RFCs:
RFC #2394: which adds async & await notation to the language. RFC #2395: which moves a part of the futures library into std to support that syntax.

Another topic of discussion at the Berlin Rust All Hands was the long-term story around Cargo, Xargo, and Rustup. The latter two tools are both involved in managing your Rust toolchain, with Xargo allowing you to build custom stds and Rustup managing pre-built artifacts for mainstream targets. Xargo is most commonly used for cross-compiling to less common platforms, but can also be used to customize the standard library on mainstream platforms.

Last week we held an “All Hands” event in Berlin, which drew more than 50 people involved in 15 different Rust Teams or Working Groups, with a majority being volunteer contributors. This was the first such event, and its location reflects the current concentration of team members in Europe. The week was a smashing success which we plan to repeat on at least an annual basis.

Prior to adding Python performance monitoring, we'd written monitoring agents for Ruby and Elixir. Our Ruby and Elixir agents had duplicated much of their code between them, and we didn't want to add a third copy of the agent-plumbing code. The overlapping code included things like JSON payload format, SQL statement parsing, temporary data storage and compaction, and a number of internal business logic components.

This plumbing code is about 80% of the agent code! Only 20% is the actual instrumentation of application code.

So, starting with Python, our goal became "how do we prevent more duplication". In order to do that, we decided to split the agent into two components. A language agent and a core agent. The language agent is the Python component, and the core agent is a standalone executable that contains most of the shared logic.

python

Specialization holds the dubious honor of being among the oldest post-1.0 features remaining in unstable limbo. That’s for good reason, though: until recently, we did not know how to make it sound.

One of the big requests from the Domain Working Groups for Rust 2018 is a richer feature set for framework- or domain-specific workflows in Cargo. At the simplest level, that might look like project templates – the ability to direct cargo new to start with a custom template defined in crates.io. That’s already enough to get you cooking with frameworks like QuiCLI, which today involve a fixed set of initial scaffolding that you can fill in.

cargo

I’ve been spending some time thinking about garbage collection in rust. I know, shame on me, it’s a systems language, we hate garbage collection, but… even in a systems programming language, garbage collection is still pretty damn useful.

Recently, a new API for “pinned references” has landed as a new unstable feature in the standard library. The purpose of these references is to express that the data at the memory it points to will not, ever, be moved elsewhere. Others have written about why this is important in the context of async IO. The purpose of this post is to take a closer, more formal look at that API: We are going to take a stab at extending the RustBelt model of types with support for pinning.

This introduction is written for people who are programmers but don't know Rust or are at the very beginning of learning it. It's easier to understand for readers who know C, C++ or another language with manually managed memory, as well as one with a garbage collector. It's a high-level introduction intended to present core Rust concepts and encourage further learning. It's not a tutorial; there is no Hello Rust at the end.

I recently got into a discussion with another very knowledgeable Rustacean, who (I paraphrase) claimed that Rust is about adding just enough roadblocks to keep you from cutting corners. This is a nice metaphor because it explains a lot: Rust may feel more cumbersome, because it won’t let you cut corners. On the other hand, once it compiles, many classes of errors will already have been taken care of, so your code will usually work as expected (or if you’re new to Rust, unexpectedly well).

Considering how the state of our art is ever changing, I re-evaluate which tools belong in my box of gizmos each year as well. In the past, I’ve employed nginx as a high-performance cache and proxy, but it has been largely edged out by Envoy, which touts a hybrid non-blocking event model and has become wildly successful after being released in 2016. That very same principle, event-driven I/O, is the same reason I chose Node.js for most of the APIs I’ve developed since 2011. Even if practices change, we retain successful engineering models.

Beginning late last year, as I sketched our founding mission and initial product offerings, I also decided to select a new primary language that could handle most of our primary development tasks. After writing mostly JavaScript and compile-to-JS languages for half a decade, I longed for something more.

I started learning Rust 2 weeks back (yay!!) whenever I got free time. And all the time that I spent learning it has been worthwhile. This is not going to be a deep technical post, but just my impressions about Rust from where I come from (C++).

The last year has been fun because I could build a lot of really nice stuff for Sentry in Rust, and for the first time the development experience was without bigger roadblocks. While we had been using Rust before, it now feels different because the ecosystem is so much more stable and we ran into fewer language or tooling issues.

However, talking to people new to Rust (and even brainstorming APIs with coworkers), it's hard to get rid of the feeling that Rust can be a mind-bending adventure and that the best way to have a stress-free experience is knowing upfront what you cannot (or should not attempt to) do. Knowing that certain things just cannot be done helps put your mind back on the right track.

So here are things not to do in Rust and what to do instead which I think should be better known.

As part of my final year in university I have had to undertake a project and then write a twenty page paper on it. I ended up being assigned one on a type of machine learning algorithm called boosting. This wasn't my first choice unfortunately, so I decided I'd try to make it interesting for myself by implementing it in Rust. Rust was, and still is, quite immature when it comes to machine learning - as Are We Learning Yet? confirms. I thought it would be an interesting challenge to write some machine learning algorithms in a language that has yet to be used too much for this field.

I’ve decided to learn some Rust recently while working on Stanford’s experimental course on operating systems. Here’s a list of things that I think are great about it.

The Rust team is happy to announce a new version of Rust, 1.25.0. The last few releases have been relatively minor, but Rust 1.25 contains a bunch of stuff!

I started writing mob, an multi-echo server using mio, in 2015. I coded mob into a mostly working state and then left it mostly alone, only updating it to work with the latest stable mio. Recently, I started looking at the code again and had the urge to improve it. In a previous post, I talked about managing the state of connections in mob. In this post, I will walk through what I did to remove the need to track connection state. I wanted to remove the state because the implementation required an O(n) operation every tick of the mio event loop. It also added a fair amount of complexity to the code.

I have been working with @alexcrichton to improve the resolver in Cargo.

I’m attempting to build a dataframe in Rust. I implemented a pattern using traits, generics, and enums in conjunction to deal with columns of different datatypes while allowing runtime reflection for accessing the data stored in a column.

This is the first article in a series on techniques I’ve found useful for making my projects more reliable. These techniques are used in the distributed systems, database, automotive, embedded, and aerospace fields, but if you build services, user interfaces, or generally anything stateful, I think you will find something useful along the way.

Any mutation testing tool worth its salt uses coverage to restrict the number of tests to run. mutagen is no exception, of course, so once we had a test runner, we wanted to extend it with coverage-based testing.

Closures are an interesting CS concept and one that will frequently come up in interviews. I know I've been asked, and have asked, questions about closures for frontend (Javascript) positions numerous times. And in all honesty they're a difficult concept to define, especially when you're under the scrutiny of an interviewer. In this post I'd like to show how Rust leverages the concept of closures and why they might be used. But first, we need to discuss the concept of scope because it is so important for the full understanding of closures.
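
A minimal sketch of closures capturing their enclosing scope in Rust (illustrative, not from the post):

```rust
// Closures capture their environment: `make_counter` moves `count` into the
// closure it returns, while `shout` merely borrows `greeting` from its scope.
fn make_counter() -> impl FnMut() -> u32 {
    let mut count = 0;
    move || {
        count += 1;
        count
    }
}

fn main() {
    let mut next = make_counter();
    assert_eq!(next(), 1);
    assert_eq!(next(), 2);

    let greeting = String::from("hello");
    let shout = || format!("{}!", greeting);
    assert_eq!(shout(), "hello!");
    println!("{}", greeting); // still usable: the closure only borrowed it
}
```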

Taming multiple threads is a mess. Not only can many things happen all at once, but what you wrote in the code isn’t exactly what happens in the CPU. To gain some more performance, the compiler cheats if it thinks nobody is watching. It can reorder instructions or throw some of them out if they look useless. The same happens in the hardware. Furthermore, there isn’t just one RAM: each memory location can live in different caches at any given time, and some of them are private to each CPU. It would not do to publish all the local changes from one’s cache right away.
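A small illustration of the tools Rust offers to control this (a standard publish-a-flag sketch with std::sync::atomic, not code from the post):

```rust
use std::sync::atomic::{AtomicBool, AtomicUsize, Ordering};
use std::sync::Arc;
use std::thread;

fn main() {
    let data = Arc::new(AtomicUsize::new(0));
    let ready = Arc::new(AtomicBool::new(false));

    let writer = {
        let (data, ready) = (Arc::clone(&data), Arc::clone(&ready));
        thread::spawn(move || {
            data.store(42, Ordering::Relaxed);
            // Release: everything written before this store is published with it.
            ready.store(true, Ordering::Release);
        })
    };

    // Acquire pairs with the Release above: once we observe the flag,
    // we are guaranteed to observe the data written before it too.
    while !ready.load(Ordering::Acquire) {}
    assert_eq!(data.load(Ordering::Relaxed), 42);

    writer.join().unwrap();
}
```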

To say my first foray into Rust was a frustrating struggle would be an understatement. I picked a terrible first project that left me neck deep in Rust’s trickiest areas right off the bat. I was excited to try again. A few years ago I wrote Sumoshell, a CLI App for log analysis. I’d wanted to improve it for a while, so porting it to Rust seemed like a nice way to kill two birds with one stone.

I have started porting the code in librsvg that parses SVG's CSS properties from C to Rust. Many properties have symbolic values. StrokeLinejoin is the first property that I ported. First I had to write a bit of machinery to allow CSS properties to be kept in Rust-space instead of the main C structure that holds them (upcoming blog post about that). But for now, I just want to show how this boiled down to a macro after refactoring.
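Roughly the kind of macro this could boil down to (a hedged reconstruction of my own, not the actual librsvg code): generate the enum and its string parser in one declaration:

```rust
// Hypothetical sketch: generate an enum and its string parser in one go.
macro_rules! symbolic_property {
    ($name:ident { $($text:literal => $variant:ident),+ $(,)? }) => {
        #[derive(Debug, PartialEq)]
        enum $name {
            $($variant),+
        }

        impl $name {
            fn parse(s: &str) -> Option<Self> {
                match s {
                    $($text => Some($name::$variant),)+
                    _ => None,
                }
            }
        }
    };
}

symbolic_property!(StrokeLinejoin {
    "miter" => Miter,
    "round" => Round,
    "bevel" => Bevel,
});

fn main() {
    assert_eq!(StrokeLinejoin::parse("round"), Some(StrokeLinejoin::Round));
    println!("{:?}", StrokeLinejoin::parse("bevel"));
}
```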

I’ve been going through a period of programming language wanderlust over the past couple months. Recently, I’ve been quite interested in Rust. Coming from Python, I’ve found a lot of Rust’s language features to be quite powerful.

And what it is used for in Exonum, a framework for private blockchains. Tokio is a framework for developing scalable networked applications in Rust,...

When looking for a new backend language, I naturally went from Python to the new cool kid: Go. But after only one week of Go, I realised that Go was only half a step forward. Better suited to my needs than Python, but too far away from the developer experience I had been enjoying with Elm on the frontend. So I gave Rust a try.

It’s hard to believe it’s been almost 6 weeks since the last post I made about async/await in Rust. So much has happened that these last several weeks have flown by. We’ve made exceptionally good progress on solving the problem laid out in the first post of this series, and I want to document it all for everyone.
Future and the pinning API Last month I wrote an RFC called “Standard library API for immovable types”.

We have a problem: the average queue of ready-to-test PRs to the main Rust repo has been steadily growing for a year. And at the same time, the likelihood of merge conflicts is also growing, as we include more submodules and Cargo dependencies that require updates to Cargo.lock.

I explore how to use Rust compiler internals to metaprogram Rust using information from the typechecker, e.g. to automatically insert garbage-collection into Rust code, and discuss the benefits and drawbacks of this approach.

The Rust compiler implements tagged unions, which prevent you from crashing your program by initializing a union with one variant and accessing it with another. Rust uses enum to improve on both C enums and C unions at the same time.
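A standard illustration of the idea (my example, not the post's): each variant carries its own data, and match forces you to handle the variant you actually have:

```rust
// A tagged union: each variant may carry different data.
enum Shape {
    Circle { radius: f64 },
    Rectangle { width: f64, height: f64 },
}

fn area(shape: &Shape) -> f64 {
    // The compiler checks that every variant is handled; you cannot read
    // `radius` out of a `Rectangle` by accident.
    match shape {
        Shape::Circle { radius } => std::f64::consts::PI * radius * radius,
        Shape::Rectangle { width, height } => width * height,
    }
}

fn main() {
    println!("{}", area(&Shape::Circle { radius: 1.0 }));
}
```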

mutagen until recently suffered from a bug that rendered both the return-input and the interchange-arguments mutations inapplicable.

To explain: the former mutation compares each input type with the return type and allows the code to return any input of the same type, while the latter compares the input arguments’ types and exchanges two equally-typed inputs.

A simple approach to using external crates with our macros in Rust.

This article will focus on a comparison between Erlang and Rust, detailing their similarities and differences. It may be interesting to both Erlang developers looking into Rust and Rust developers looking into Erlang. A final section will detail more about each of the language capabilities and shortcomings and argue for the possibility of leveraging both languages' strengths in the same project.

At the moment, mutagen only considers top-level idents in function arguments (e.g. foo(x: X, y: Y)), but function arguments are actually patterns, so we could have foo((x, y): (X, Y)) or bar(Bar { bla, bazz }: Bar). For now, this means we have no type information for either of those examples.

Each year the Rust community comes together to set out a roadmap. This year, in addition to the survey, we put out a call for blog posts in December, which resulted in 100 blog posts written over the span of a few weeks. The end result is the recently-merged 2018 roadmap RFC.

This post is two stories. One is about accepting and recognising personal failure, reflecting and growing from it; the other is about an incredibly and seemingly endlessly powerful programming language, called Rust.

Oftentimes, I see a variant of this question posted or asked somewhere. In general, most of the time I think the answer is „Yes“, but maybe for reasons other than you’d think at first.

Rustdoc has a pretty powerful feature that feels pretty unknown. It doesn’t help that it’s currently restricted by a nightly feature gate, but it’s still cool enough that I want to talk about it.

Overloading is the ability to create multiple functions of the same name with different implementations.

Rust has no traditional overloading: you cannot define two methods with the same name. The compiler will complain that you have a duplicate definition, regardless of the different argument types.
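The usual workaround (a sketch of the common trait-based technique, which may or may not be the one the post lands on) is to let a single generic function accept any type that implements a trait:

```rust
// One trait, implemented once per argument type we want to accept.
trait Describe {
    fn describe(&self) -> String;
}

impl Describe for i32 {
    fn describe(&self) -> String {
        format!("the integer {}", self)
    }
}

impl Describe for &str {
    fn describe(&self) -> String {
        format!("the string {:?}", self)
    }
}

// A single generic function stands in for the "overload set".
fn print_description<T: Describe>(value: T) {
    println!("{}", value.describe());
}

fn main() {
    print_description(42);
    print_description("hello");
}
```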

I recently got the chance to redo the error handling in two different crates I help maintain. For liquid, I decided to write the error types by hand rather than use something like error-chain. In the case of assert_cli, I decided to finally give failure a try.

To me one of the initial shocks of learning Rust was figuring out lifetimes. As a frontend-by-day developer I don't come face-to-face with the 'double free' and 'use after free' problems all that often. Actually, it could easily be argued that my backend brethren don't really either, or, for that matter, anyone who's typically dealing with a garbage collected language. I'm looking over at you, JS, Java, and Ruby devs. I'd bet most neckbea.. *cough*, excuse me, C developers are comfortable with these issues, but alas, I am not. As such, lifetimes were kind of difficult to wrap my head around, but I think I get them a little better now, so let me try to explain.

One of the value propositions most frequently lauded by Rust developers is its freedom from data races. The compiler will literally not allow you to build code that could ever produce a situation where two threads mutate the same data without synchronization.
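If two threads genuinely need to mutate the same data, the compiler makes you state how that access is synchronized, for example (a standard illustration, not from the post) with Arc<Mutex<_>>:

```rust
use std::sync::{Arc, Mutex};
use std::thread;

fn main() {
    let counter = Arc::new(Mutex::new(0));

    let handles: Vec<_> = (0..4)
        .map(|_| {
            let counter = Arc::clone(&counter);
            thread::spawn(move || {
                // The lock is the only way to reach the shared data.
                *counter.lock().unwrap() += 1;
            })
        })
        .collect();

    for handle in handles {
        handle.join().unwrap();
    }

    println!("{}", *counter.lock().unwrap());
}
```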

Recently we have been having discussions about how Rust and Meson should work together, especially for mixed language projects. One thing which multiple people have told me (over a time span of several years, actually) is that Rust is Special in that everyone uses crates for everything. Thus there is no point in having any sort of Rust support, the only true way is to blindly call Cargo and have it do everything exactly the way it wants to.

This seems like a reasonable recommendation so I did what every reasonable person would do and accepted this as is.

But then curiosity takes hold of you and you start to wonder. Is that really the case?

Recently I’ve been writing Rust code to work with a third-party data source in TOML format. In other languages I’d just load the data with some standard TOML library and have my program rummage through it, but I’ve been hearing lovely things about the Rust serialization library serde, so I figured I’d try it out.
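A minimal sketch of that workflow (assuming the serde crate with its derive feature plus the toml crate; the field names here are invented):

```rust
use serde::Deserialize;

// The struct mirrors the shape of the TOML document.
#[derive(Debug, Deserialize)]
struct Config {
    name: String,
    port: u16,
}

fn main() -> Result<(), toml::de::Error> {
    let text = r#"
        name = "example"
        port = 8080
    "#;

    // serde deserializes straight into the typed struct.
    let config: Config = toml::from_str(text)?;
    println!("{:?}", config);
    Ok(())
}
```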

In the new 0.7.x series of cernan we stumbled on a neat, cheap approach for making internal metrics available inside a rust codebase, an approach that has legs in other projects, I'd say. This is going to be a quick note describing what cernan is, what we were doing before and how our current approach works.

When I recently told a coworker that Rust has macros, his first reaction was that this was bad. Previously I would have had the same reaction, but a part of what learning Rust has taught me is that macros don’t need to be bad. This post exists to help explain why that is, by diving into what problems macros solve, with a brief look at their downsides as well. In other words, this post is not a technical deep dive on how macros work, but focuses on the use cases for macros, and doesn’t require much knowledge about Rust to follow.

Let’s make a tokenizer and code generator to understand the basics behind tiny compilers.
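A toy tokenizer in that spirit (my own sketch, not the post's code), splitting an arithmetic expression into numbers and operators:

```rust
#[derive(Debug, PartialEq)]
enum Token {
    Number(i64),
    Plus,
    Minus,
}

fn tokenize(input: &str) -> Vec<Token> {
    let mut tokens = Vec::new();
    let mut chars = input.chars().peekable();

    while let Some(&c) = chars.peek() {
        match c {
            '0'..='9' => {
                // Consume a run of digits into a single Number token.
                let mut n = 0i64;
                while let Some(&d) = chars.peek() {
                    if let Some(digit) = d.to_digit(10) {
                        n = n * 10 + digit as i64;
                        chars.next();
                    } else {
                        break;
                    }
                }
                tokens.push(Token::Number(n));
            }
            '+' => { chars.next(); tokens.push(Token::Plus); }
            '-' => { chars.next(); tokens.push(Token::Minus); }
            _ => { chars.next(); } // skip whitespace and anything else
        }
    }
    tokens
}

fn main() {
    println!("{:?}", tokenize("12 + 3 - 4"));
}
```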

As you may know, my current mutagen project deals with mutation testing in Rust. However, as I remarked, Rust’s famed flexibility leaves us little room to do mutations while keeping the type checker happy. For example, other mutation testing frameworks can mutate x + y to x - y.

This is an interesting mutation, because it’s so easy to do in languages like Java, which have full type information available at the bytecode level, and so hard to do in Rust, because the std::ops traits make everything so hecking flexible.
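To make the obstacle concrete (my illustration, not mutagen's code): in generic code the bound may only promise +, so the mutated program wouldn't even type-check:

```rust
use std::ops::Add;

// The bound only says `T` supports `+`.
fn sum<T: Add<Output = T>>(x: T, y: T) -> T {
    x + y
    // Mutating this to `x - y` fails to compile: nothing says `T: Sub`.
}

fn main() {
    println!("{}", sum(2, 3));
}
```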

Today we will take a very simple intrusive linked list written in Rust and make it safe. Kind of, anyway.

Before we start making something safe we need an unsafe thing to make safe. Let’s not pretend that what we are doing here is the least bit useful, let us instead do it just for the fun of it. (What we are doing actually is useful, the explanation of which this margin is too narrow to contain.)

This is a small post about a specific pattern for cancellation in the Rust programming language. The pattern is simple and elegant, but it’s rather difficult to come up with it by yourself.
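One common shape such a pattern can take (my own hedged sketch, not necessarily the one the post describes) signals cancellation by dropping the receiving end of a channel; the worker notices when its send fails:

```rust
use std::sync::mpsc;
use std::thread;
use std::time::Duration;

fn main() {
    let (tx, rx) = mpsc::channel::<u32>();

    let worker = thread::spawn(move || {
        for i in 0.. {
            // When the receiver is dropped, send fails and we stop.
            if tx.send(i).is_err() {
                println!("cancelled after {} items", i);
                return;
            }
            thread::sleep(Duration::from_millis(10));
        }
    });

    // Consume a few items, then drop the receiver to cancel the worker.
    for value in rx.iter().take(3) {
        println!("got {}", value);
    }
    drop(rx);

    worker.join().unwrap();
}
```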

The Paris Container Day took place on June 13, 2017. They unveiled a new Docker feature: multi-stage builds. That's the subject of this article.

Presently, I’m busy writing a capture the flag (CTF) scoreboard; it requires rather complex structures and relationships with other internal objects. Being a security event, I’d also like to maintain explicit control over user data. While serialization in Rust has come a significant way, leveraging auto-generation presents some issues.

When you’re writing a library for other programs to depend on, it is paramount to think how the developers are going to use it in their code.

The best way to ensure they have a pleasant experience is to put yourself in their shoes. Forget the internal details of your package, and consider only its outward interface. Then, come up with a realistic use case and just implement it.

In other words, you should create complete, end-to-end, and (somewhat) usable example applications.

A program is memory-safe if, in any possible execution of the program, all expressions e in the program that refer to an object of type T resolve to an object of type T that has been initialized and not yet deallocated.

There are different ways to guarantee memory safety for all programs. One is to restrict the programming language and disallow pointers. But, this forces most programs to make unnecessary copies of data. Another strategy, called garbage collection, embeds a garbage collector with every program. The garbage collector periodically looks for objects in memory that cannot be accessed from the program and reclaims this memory. The drawbacks of this are the overhead of garbage collection and that deallocation of memory is no longer under the control of the programmer.

Let's deploy small Docker images for Rust.

Rust is a modern programming language which is marketed primarily on the basis of its very nice type system, and I’d like to tell you about how you can use this type system to reason about your programs in interesting ways. Most of the time when its type system is discussed, the focus is on its guarantee of data race freedom and ability to enable so-called fearless concurrency (and rightfully so—this is a place where Rust truly shines!). Today, I have a different focus in mind, characterized perhaps most succinctly as follows:

This is a response to the recently submitted blog post titled Why Writing a Linked List in (safe) Rust is So Damned Hard. The post on Reddit was even more dramatic: Why Writing a Linked List in Rust is Basically Impossible.

I see exaggerated claims like these very often - and strongly disagree. Writing a doubly linked list in Rust is not hard - in fact, it's fairly easy! The best strategy, in my opinion, is to create a vector for allocating nodes and use indices instead of pointers. This strategy is often overlooked, getting an 'honourable mention' at best.
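A minimal sketch of that index-based strategy (my own code, not the post's):

```rust
// Nodes live in a Vec; "pointers" are just indices into it.
struct Node<T> {
    value: T,
    prev: Option<usize>,
    next: Option<usize>,
}

struct List<T> {
    nodes: Vec<Node<T>>,
    head: Option<usize>,
    tail: Option<usize>,
}

impl<T> List<T> {
    fn new() -> Self {
        List { nodes: Vec::new(), head: None, tail: None }
    }

    // Append at the back; returns the new node's index.
    fn push_back(&mut self, value: T) -> usize {
        let index = self.nodes.len();
        self.nodes.push(Node { value, prev: self.tail, next: None });
        if let Some(old_tail) = self.tail {
            self.nodes[old_tail].next = Some(index);
        } else {
            self.head = Some(index);
        }
        self.tail = Some(index);
        index
    }
}

fn main() {
    let mut list = List::new();
    list.push_back("a");
    let b = list.push_back("b");
    println!("head value: {}", list.nodes[list.head.unwrap()].value);
    println!("b's prev index: {:?}", list.nodes[b].prev);
}
```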

One of rustdoc’s greatest features is the ability to take code samples within your documentation and run them like tests. This ensures that all your samples stay up to date with your library’s API changes. However, there are some steps that need to happen to massage these “doctests” into something that can be compiled and run like a regular program.

Some of these suggestions are not entirely new and have already appeared as posts or comments on /r/rust and in GitHub threads. But I believe it is better to list them all in one place, because now is the right time, even if I am a bit late.

A long time ago, Rust was a language with typestate. Officially, typestates were dropped long before Rust 1.0. In this entry, I’ll let you in on the worst-kept secret of the Rust community: Rust still has typestates.
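The usual encoding of typestates today (a generic sketch, not lifted from the post) uses zero-sized marker types as type parameters, so invalid state transitions fail to compile:

```rust
use std::marker::PhantomData;

// Marker types for the two states.
struct Closed;
struct Open;

struct Door<State> {
    _state: PhantomData<State>,
}

impl Door<Closed> {
    fn new() -> Self {
        Door { _state: PhantomData }
    }
    // Only a closed door can be opened; the state change lives in the type.
    fn open(self) -> Door<Open> {
        Door { _state: PhantomData }
    }
}

impl Door<Open> {
    fn close(self) -> Door<Closed> {
        Door { _state: PhantomData }
    }
}

fn main() {
    let door: Door<Closed> = Door::new();
    let door = door.open();  // Door<Open>
    let door = door.close(); // Door<Closed> again
    // `door.close()` again would not compile: Door<Closed> has no `close`.
    let _ = door;
}
```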

The team at Paris-based Snips has created a voice assistant that can be embedded in a single device or used in a home network to control lights, thermostat, music, and more. You can build a home hub on a Raspberry Pi and ask it for a weather report, to play your favorite song, or to brew up a double espresso. Manufacturers like Keecker are adding Snips’ technology to products like multimedia home robots. And Snips works closely with leaders across the value chain, like NVIDIA, EBV, and Analog Devices, in order to voice-enable an increasingly wider range of device types, from speakers to home automation systems to cars.

We have been building libpasta as a simple, usable solution to password hashing and migration. The goal for libpasta is to be a cross-platform, cross-language system library. libpasta is written in Rust, exports a C-style API, and builds to a static/shared library. Most languages support calling external libraries through foreign function interfaces (FFIs), and the end result can be seen in the documentation where each language has access to the libpasta functionality.

Before I start this post, let me preface it by saying that I’m not an experienced Rustacean by any means. Errata and corrections are appreciated. This post is aimed at helping other fledgling Rust learners avoid my mistake. First, by helping Rust learners pick good introductory projects that will fit naturally into idiomatic Rust. Second, by helping Rust learners start building Rust-friendly design intuition. I’d heard about Rust and its inscrutable borrow checker for years, but after reading a few blog posts about compiler error improvements, I figured it might be user-friendly enough to give it a try.

I've been trying to get the hang of some of the more advanced, and weird, concepts of Rust. With any new language it's a little difficult to know where to begin. How do you throw yourself into the deep end of something without knowing where the deep end is?

Librsvg feels like it is reaching a tipping point, where suddenly it seems like it would be easier to just port some major parts from C to Rust than to keep adding accessors for them. Also, more and more of the meat of the library is in Rust now. I'm switching back and forth a lot between C and Rust these days, and C feels very, very primitive in comparison.

We were recently able to finally make the docs for the integer primitive types much more accurate (thanks to @antoyo!). Now, the code examples match the type for which they're written. No more i32 examples for i128 (I think you get the idea at this point)! Now, I think a few people might be interested in the method we used to achieve such a result, so let's talk about it.

Sorting is an invaluable skill and is often covered early in a computer science curriculum. Have you ever tried to look up a friend’s phone number in an unsorted list!? You’d have to look at every single entry. Sorting creates all sorts of ways to access data more quickly.

Ownership and borrowing are the fundamentals of data structures in Rust. However, both taking ownership of a value (moving it) and taking a reference to it can only happen after the value has been created. This ordering seems to prevent having any cycle in a data structure, even though cycles are sometimes useful or necessary. For example, in a web page’s content tree, from any DOM node one can easily access its first and last child, previous and next sibling (so the children of a node form a doubly-linked list), and its parent, if any. Some other applications might need to manipulate arbitrary graphs in their full generality.

ownership-and-borrowing
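One common way to express such back-pointers without an ownership cycle (a sketch of the standard Rc/Weak idiom, not the post's exact code): children are owned via Rc, and the parent link is a Weak reference:

```rust
use std::cell::RefCell;
use std::rc::{Rc, Weak};

struct Node {
    name: String,
    parent: RefCell<Weak<Node>>,         // weak back-pointer: no ownership cycle
    children: RefCell<Vec<Rc<Node>>>,    // strong ownership of children
}

fn main() {
    let parent = Rc::new(Node {
        name: "parent".into(),
        parent: RefCell::new(Weak::new()),
        children: RefCell::new(Vec::new()),
    });

    let child = Rc::new(Node {
        name: "child".into(),
        parent: RefCell::new(Rc::downgrade(&parent)),
        children: RefCell::new(Vec::new()),
    });

    parent.children.borrow_mut().push(Rc::clone(&child));

    // Walking back up requires upgrading the weak reference.
    if let Some(p) = child.parent.borrow().upgrade() {
        println!("{}'s parent is {}", child.name, p.name);
    }
}
```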

Caveat lector: the primary purpose of this article is to show a reader proficient in one of the popular object-oriented languages how not to program in Rust. While each feature of the language will be briefly introduced where it is used, no great effort will be made to explain the features in detail. Links to the Rust book should provide that.

In October of last year, I wrote a post, “The Expressive C++17 Coding Challenge (in Rust)”. For various reasons, it got brought up again in the D world, and seb has written a new post. It’s good, you should check it out! However, it links to my gist, not my blog post. As I said back then: I held myself to the same constraints as the original contest; no external packages is a bit painful in Rust, but it’s not too bad. Mostly it would let me eliminate boilerplate while also improving correctness, and making the code a bit shorter. So, that got me thinking: What would this look like if I could use external packages? I took about an hour, and knocked it out. I have two versions to show you today, one where I pay no attention to allocations, and one where it’s zero-allocation.

So aturon wrote this beautiful post about what a good week it has been. In there, they wrote: "Breakthrough #2: @nikomatsakis had a eureka moment and figured out a path to make specialization sound, while still supporting its most important use cases (blog post forthcoming!). Again, this suddenly puts specialization on the map for Rust Epoch 2018". Sheesh I wish they hadn’t written that! Now the pressure is on. Well, here goes nothing =).

I had an itch: I was pretty-printing the BERT-encoded terms that we use in a production system at work and it was very slow. The Erlang shell took more than two minutes to dump the largest file. (It took about 0.1 second to read and parse the file; the rest was spent in io:format.) I decided to scratch that itch: I wrote ppbert, a command-line utility that reads BERT-encoded values and pretty-prints them. I’ve worked sporadically on ppbert for almost a year now, I use it daily at work, I’m happy with it, and I want to write about some of the things I learned during that journey.

This week has been so amazing that I just had to write about it. Here’s a quick list of some of what went down in one week:

Two posts ago I proposed a particular interface for shipping self-referential generators this year. Immediately after that, eddyb showed me a better interface, which I described in the next post. Now, to tie everything together, it’s time to talk about how we can integrate this into the futures ecosystem. Starting point: the generator API. To begin, I want to document the generator API I’ll be using in this post, which is roughly what followed from my previous post:

I did not plan to write this blog post. I thought that the fourth post in my series would explain how we could go from the generator API in my previous post to a futures API in which you don’t have to heap allocate every async call. But eddyb surprised me, and now I have to revisit the API in the previous post, because we can implement everything we need from immovability with a safe interface after all.

TL;DR: This post proposes to deprecate the std facade, instead having a unified std that uses target- and capability-based cfgs to control API availability. Leave comments on internals!

As a newcomer to Rust, I heard the phrase “procedural macro” thrown around a lot without really understanding what it meant. I figured that I would learn about them if I ever needed them. Well, I’m working on the guts of relm, and a large chunk of it is procedural macros. I’ve learned enough about procedural macros to be dangerous, so I thought I would pass on some knowledge.

In the first post, we looked at the relationship between generators and a more general notion of self-references. In the second post, we narrowed down exactly what problem we need to solve to make generators work, and talked about some solutions that we’ve considered but don’t feel like we could ship in the near future.
In the original post, I promised that I would have a near term solution by the end of this series.

Lifetimes are an interesting subject: a lot of people seem to gain a day-to-day familiarity with them without fully understanding what they are. Maybe they are truly Rust's monads. Let's talk about what they are, where you encounter them, and then how to get competent with them.
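As a quick taste of where lifetimes show up (a textbook example, not from the post), the annotation in this signature ties the returned reference to the inputs it borrows from:

```rust
// The returned reference lives no longer than the shorter of the two inputs.
fn longest<'a>(x: &'a str, y: &'a str) -> &'a str {
    if x.len() > y.len() { x } else { y }
}

fn main() {
    let a = String::from("longer string");
    let b = String::from("short");
    println!("{}", longest(&a, &b));
}
```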

I knew that there are a lot of different “pointers” but I found all the descriptions of them lacking or confusing. Specifically, Rust calls itself a systems programming language, yet I found no clear description of how the different pointers map to C—the systems programming language. Eventually, I stumbled across The Periodic Table of Rust Types, which made things a bit clearer, but I still didn’t feel like I truly understood.

During my weekend expedition to Rust land, I think I’ve grokked things enough to write this explanation of how Rust does things. As always, feedback is welcomed.
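Roughly, the mapping being hinted at looks something like this (my own hedged summary, expressed as comments; the C analogies are approximate):

```rust
fn main() {
    // &T        ~ roughly a const T* that is guaranteed valid and non-null
    let x = 5;
    let shared: &i32 = &x;

    // &mut T    ~ roughly a T* with exclusive access for its lifetime
    let mut y = 6;
    let exclusive: &mut i32 = &mut y;

    // Box<T>    ~ roughly a malloc'd T* that is freed automatically on drop
    let owned: Box<i32> = Box::new(7);

    // *const T  ~ a raw C pointer; dereferencing it requires `unsafe`
    let raw: *const i32 = shared;

    println!("{} {} {} {}", shared, exclusive, owned, unsafe { *raw });
}
```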