A few weeks ago, I had the pleasure of attending the second annual Rust All Hands meeting, hosted by Mozilla at their Berlin office. The attendees were a mix of volunteers and corporate employees covering the full range of Rust development, including the compiler, language, libraries, docs, tools, operations, and community. Although I’m sure there will be an official summary of the meeting (like last year’s), in this article, I'll cover a few things I was directly involved in. First, I'll look at a feature many developers have wanted for a long time...
Array Iterators
Idiomatic Rust code makes heavy use of the Iterator trait, but unfortunately this doesn't work for arrays. The trait is not just a library feature; it is also baked into the compiler, since even a plain for loop uses IntoIterator to convert its input and then calls Iterator::next() on each cycle of the loop. However, [T; N] arrays do not implement IntoIterator by value, so the best you can do is iterate them as a slice by reference (iter() and iter_mut()), or move the values into a Vec<T> on the heap and iterate that by value instead. It's also possible to iterate an array by value using a third-party wrapper like ArrayVec, but this functionality really ought to be native in the standard library.
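To make that concrete, here is a minimal sketch of roughly how the compiler expands a for loop in terms of those two traits; the variable names are invented, and the real desugaring also handles labels and break values.

fn main() {
    let values = vec![1, 2, 3];

    // `for x in values { println!("{}", x); }` expands to approximately:
    let mut iter = IntoIterator::into_iter(values);
    while let Some(x) = Iterator::next(&mut iter) {
        println!("{}", x);
    }
}

With that in mind, here is where arrays currently fall short: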
for s in ["abc", "foo", "xyz"] {} // Nope, can't iterate a simple array by value for &s in &["abc", "foo", "xyz"] {} // It's OK by reference instead. let words: Vec<_> = my_string.split_whitespace().flat_map(|word| { // Can't return a temporary array for iteration at all [word.to_lower(), word.to_upper()] }).collect();
I first tried to implement IntoIterator for arrays in PR32871. This wasn't too hard technically, but the lack of const generics made it awkward. We don't have a way to write one implementation covering [T; N] for every length N, so at the moment the standard library only implements traits for array lengths 0 through 32. The more unfortunate part is that this limitation showed up in the iterator's type signature too. While we would like it to be something like array::IntoIter<T, N>, without const generics I could only make it IntoIter<T, A>, with a constraint that A must implement an internal FixedSizeArray trait. The libs team really didn't want to commit to this, so that PR was closed.
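For illustration, here is a self-contained sketch of the shape that forces; this is not the code from PR32871, and the FixedSizeArray trait below is a simplified stand-in for the internal one.

use std::marker::PhantomData;

// Stand-in for the internal trait; every array length needs its own impl,
// which is why the standard library stops at 32.
trait FixedSizeArray<T> {
    fn as_slice(&self) -> &[T];
}

impl<T> FixedSizeArray<T> for [T; 1] { fn as_slice(&self) -> &[T] { self } }
impl<T> FixedSizeArray<T> for [T; 2] { fn as_slice(&self) -> &[T] { self } }
// ... and so on, one impl per supported length.

// The whole array type A leaks into the iterator's signature,
// instead of the array::IntoIter<T, N> we would prefer.
struct IntoIter<T, A: FixedSizeArray<T>> {
    array: A,
    next: usize,
    _marker: PhantomData<T>,
}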
Later, Rust did approve RFC2000 for const generics, so I opened a new PR49000 for array iterators. (What nice round numbers!) We still didn't have const generics implemented, but I argued for a path forward: add the IntoIterator implementation without stabilizing the array::IntoIter type, and clean that type up in the near future once const generics arrived. This seemed acceptable except for one more sticky issue: it would break method resolution for existing .into_iter() calls. Currently, because arrays don't have that method directly, such calls auto-reference and resolve to a slice iterator; adding the array IntoIterator would change them to a by-value iterator. Clippy PR3344 added a lint for code that would be broken here, but we still want to proceed very carefully because these cases appear to be common. Clippy even had to change a few calls to .iter() in its own code in that PR!
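As a hypothetical example of the breakage, code like this compiles today because the call auto-references into a slice iterator, but it would stop compiling once arrays iterate by value.

fn main() {
    let numbers = [1, 2, 3];
    // Today this resolves to the slice iterator, so `x` is a `&i32`.
    for x in numbers.into_iter() {
        // With a by-value IntoIterator for arrays, `x` becomes an `i32`,
        // and this dereference no longer compiles.
        println!("{}", *x);
    }
}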
That's much ado to bring us to the All Hands, where I came into the libs meeting with a hopeful solution. The idea we had toward the end of PR49000 was to add a new inherent method to arrays, preserving the current behavior for method resolution. The compiler prioritizes inherent methods over trait methods, but we would still get the benefit of the new by-value IntoIterator for arrays in for loops, chain, flat_map, and so on, where the trait is used directly rather than resolved as a method call on the array. We could also put a deprecation message on the inherent method to suggest using slice iter() instead. Although adding this inherent method would need some help from the compiler and language teams, the libs team was enthusiastic about the approach, and we announced this success at the All Hands end-of-day summary.
// Here's the inherent method. (demonstrating with full const generics)
impl<T, const N: usize> [T; N] {
    #[deprecated(note = "use slice `iter()` to iterate by reference")]
    fn into_iter(&self) -> slice::Iter<'_, T> {
        self.iter()
    }
}

// Now the new trait impl can hide behind the inherent method.
impl<T, const N: usize> IntoIterator for [T; N] {
    type Item = T;
    type IntoIter = IntoIter<T, N>;

    fn into_iter(self) -> Self::IntoIter {
        // ...
    }
}
If you're more familiar with Rust's method resolution, you may already see the problem with this approach, which I realized the next day. We had experimented during the meeting with a placeholder struct A type to confirm the use of the inherent method, but we hadn't properly covered all of the variations. When I expanded the example later, I found that no, an inherent into_iter(&self) does not actually take precedence over IntoIterator::into_iter(self) when called directly on an array value. The latter is found first because the method receiver is a direct self, whereas the former requires auto-referencing to be called on &self. We must take a reference to borrow the array for slice::Iter, so it seems we don't actually have a way to preempt IntoIterator here.
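Here is a small self-contained demonstration of the effect with a placeholder type, essentially the variation we missed at the meeting; the trait and method names are made up for illustration.

struct A;

impl A {
    // Inherent method taking `&self`, like the proposed array `into_iter`.
    fn get(&self) -> &'static str {
        "inherent &self"
    }
}

trait ByValue {
    fn get(self) -> &'static str;
}

impl ByValue for A {
    // Trait method taking `self`, like `IntoIterator::into_iter`.
    fn get(self) -> &'static str {
        "trait self"
    }
}

fn main() {
    let a = A;
    // The by-value trait method wins, because calling the inherent method
    // would require an extra auto-reference step.
    println!("{}", a.get()); // prints "trait self"
}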
Sadly, we had to revoke our previous announcement of success. Maybe we can still find a way to override method resolution in the compiler for this issue, if we're willing to accept some language impurity here. Otherwise, we're back to finding ways to lessen the pain of adding this new trait implementation. The first step is probably to promote that Clippy lint into the main compiler to give it more visibility, but even then we'll probably want to leave that warning in place for a while before daring to make the actual change. If you're using into_iter() on an array to get a slice iterator, please switch to iter() ASAP, and if you have ideas to ease this transition, please speak up!
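For a hypothetical before-and-after, the two loops below behave identically today, but only the second keeps its meaning if arrays gain IntoIterator by value.

fn main() {
    let names = ["abc", "foo", "xyz"];

    // Before: this only reaches the slice iterator through auto-reference,
    // so its meaning would change with the new trait implementation.
    for name in names.into_iter() {
        println!("{}", name);
    }

    // After: this asks for iteration by reference explicitly,
    // so it keeps working unchanged.
    for name in names.iter() {
        println!("{}", name);
    }
}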
More at All Hands
Another useful meeting was with the WebAssembly (Wasm) working group, where I represented Rayon, a crate providing a work-stealing threadpool and parallel iterators. We would love to get Rayon working with Wasm to enable that easy parallelism under JavaScript, but threading there is still in an early state. Most of the std::thread implementation for Wasm is just stubbed out, so Rayon hits errors as soon as it tries to start a threadpool. We talked about the limitations faced by threads that want to wait on JavaScript promises, but decided it would still be useful to have Rayon available for CPU-bound workloads. We planned to add a custom way to start Rayon's threadpool in which the caller controls how the threads are spawned, and I've since opened rayon PR636 for experimentation, as sketched below.
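A rough sketch of that kind of hook follows; the spawn_handler name and signature are my assumption about where the experimentation is heading rather than a description of PR636, and this example spawns ordinary OS threads, whereas a Wasm embedding would hand the work to something like Web Workers instead.

use rayon::prelude::*;

fn main() -> Result<(), rayon::ThreadPoolBuildError> {
    let pool = rayon::ThreadPoolBuilder::new()
        .num_threads(2)
        // The caller decides how each worker actually starts running;
        // on Wasm this is where a Web Worker (or similar) would be created.
        .spawn_handler(|worker| {
            std::thread::spawn(move || worker.run());
            Ok(())
        })
        .build()?;

    // Use the custom pool for a CPU-bound parallel computation.
    let sum: i32 = pool.install(|| (1..=100).into_par_iter().sum());
    println!("sum = {}", sum);
    Ok(())
}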
Between several meetings on the build system and with the infrastructure and release teams, we made plans to improve the experience for Rust contributors. In rustbuild, the bespoke tool for bootstrapping rustc itself, we discussed a few different contributor workflows and how we might reduce their burden. The plan is to cache build artifacts more aggressively when they won't affect a given contributor's goal, and to make it easier to choose the build options for that path. The infrastructure team discussed the burden of CI in landing a pull request, because Rust requires a clean CI run before merging any change. A full run takes about 2.5 hours, and we talked about ways to tweak the current CI runs to reduce redundant testing while still ensuring sufficient coverage. Finally, in the release team, we talked about adding more automation to aid issue triage and pull request management, such as a new bot to let anyone apply issue labels and one to automatically manage PR rollups that aggregate CI runs.
Beyond the meetings in which I was actively involved, the Rust All Hands was also a great opportunity to learn more about what other teams are working on. Now that the 2018 edition has shipped in Rust 1.31, everyone can take a breath from the release crunch and start planning for the next few years. The compiler team discussed some potentially major refactoring that could enable better IDE integration and made plans to experiment toward that end. The lang team laid out a HUGE list of potential new features, from things that are almost ready to stabilize with a little polishing to pipe dreams that may have no real future in Rust at all. And, with several meeting rooms running simultaneously, I couldn't possibly catch it all. I hope other attendees will blog about their results and share more information.