Yet more Rust Async Writing, and some Google Rants
Welcome to another issue! I’ve been spending most of my free time hacking on Ergo, but hopefully you’ll find the links below interesting nevertheless.
Recommended Reading and Videos
Steve Yegge’s Google Platforms Rant
This is an archived Google+ post from ten years ago, but it’s an interesting look at Amazon vs. Google from someone who has worked at both, focused specifically on how teams interact with each other. Steve harbors no love for Jeff Bezos, his management techniques, or almost anything else about how Amazon is run, but he admits that the extreme commitment to service-oriented interfaces between teams makes up for almost all of that when competing with Google.
And if you enjoyed this, it’s also worth reading Steve’s 2018 article about his move from Google to Grab.
Brainstorming Async Rust’s Shiny Future
This post from the official Rust blog covers the next steps in a new outreach process to elicit current pain points and future desires for async Rust. Async programming in Rust has become a lot easier in the past couple of years, but there are still some things that should be much easier to do than they are, so it’s great to see the team tackling these problems and looking to the community for input.
Asynchronous streams in Rust (part 1) - Futures, buffering and mysterious compilation error messages
When learning asynchronous Rust programming, you’ll eventually encounter Streams. A stream is roughly the asynchronous counterpart to an iterator, where each value has to be awaited, but the semantics of actually using one are very different (at least for now). I haven’t finished working through this post yet, but it covers a lot of common use cases for Rust streams and how to implement them. Part 2 has also been published.
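To make that concrete, here’s a minimal sketch (my own, not taken from the post) of the kind of pattern streams enable: mapping items to futures and polling a bounded number of them concurrently with buffer_unordered. It assumes the futures and tokio crates, and fetch is just a stand-in for real async work.

```rust
use futures::stream::{self, StreamExt};

// Placeholder for real async work, e.g. an HTTP request.
async fn fetch(id: u32) -> String {
    format!("result for {id}")
}

#[tokio::main]
async fn main() {
    // Turn an iterator into a Stream, map each item to a future,
    // and run up to 8 of those futures at a time.
    let results: Vec<String> = stream::iter(1..=20)
        .map(fetch)
        .buffer_unordered(8)
        .collect()
        .await;

    println!("fetched {} results", results.len());
}
```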
What I’m Working On
The Ergo job queue code is all done and performs well! Lua is a bit of an odd language, but it works well enough for simple things like writing Redis scripts. With this task out of the way, the parts for the initial “event to action” backend are just about in place.
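For a flavor of what that looks like, here’s a rough sketch (not Ergo’s actual code) of running a Lua script against Redis from Rust using the redis crate’s async support; the script, key names, and argument are made up for illustration.

```rust
use redis::Script;

#[tokio::main]
async fn main() -> redis::RedisResult<()> {
    let client = redis::Client::open("redis://127.0.0.1/")?;
    let mut conn = client.get_async_connection().await?;

    // Hypothetical script: pop a job id from a pending list and
    // record which worker claimed it, all in one atomic step.
    let script = Script::new(
        r#"
        local job = redis.call('LPOP', KEYS[1])
        if job then
            redis.call('HSET', KEYS[2], job, ARGV[1])
        end
        return job
        "#,
    );

    let job: Option<String> = script
        .key("queue:pending")
        .key("queue:in_progress")
        .arg("worker-1")
        .invoke_async(&mut conn)
        .await?;

    println!("claimed job: {job:?}");
    Ok(())
}
```

Because Redis runs the entire script atomically, the pop-and-mark step can’t race with other workers, which is a big part of why Lua scripts are attractive for queue operations like this.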
Here’s a screenshot from the stress test tool running a million jobs with 64 job producer tasks and 64 worker tasks. Starting a job requires two Redis script calls, and so there’s probably some room for performance improvement there, but I’m pretty happy with 9000 jobs per second on a single queue.
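(For scale: at 9,000 jobs per second the million-job run finishes in a bit under two minutes, and with two script calls per job start that’s at least 18,000 Redis script invocations per second.)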
Now that I have the job queue code done, expect a blog post or three covering it as well as various async Rust topics in the next few weeks.
If you enjoyed this, I’d love it if you shared it with a friend (sign up here) or just replied to this email with your thoughts. Thanks for reading!