Related Post: 📖 If you haven't read it yet, check out the previous blog: Fearless Concurrency in Rust: A Guide to Threads
As our applications grow more connected—handling network calls, file operations, and real-time data streams—asynchronous programming becomes crucial. It allows us to handle potentially long-running tasks without blocking execution, improving both efficiency and responsiveness.
In Rust, async is powered by the async and await keywords. These, combined with runtimes like Tokio or async-std, enable writing efficient, non-blocking code. In this guide, we’ll explore:
What async and await do in Rust
How futures work
How to leverage Tokio runtimes for real-world applications
When to use async vs. traditional threading
By the end, you’ll have a grasp of asynchronous programming in Rust and know how to use it in your own projects. 🚀
Before diving into Rust’s async ecosystem, let’s clarify two related concepts:
Parallelism: Multiple tasks truly run simultaneously on different CPU cores.
Concurrency: Multiple tasks make progress by interleaving their execution, not necessarily in parallel; from an external perspective, they appear to run at the same time.
Rust’s async model primarily addresses concurrency: scheduling multiple tasks so none block each other unnecessarily. Under the hood, async runtimes can still leverage parallelism across multiple threads or cores, but the fundamental abstraction is concurrent scheduling of tasks that yield when idle (e.g., waiting for I/O).
The key pieces of Rust’s asynchronous programming story are futures and the async / await keywords.
A future in Rust represents a computation that might not have completed yet but will become ready at some point in the future. It’s defined by the Future trait (sketched below), which has:
An Output type (the result once the future finishes).
A poll method, which the runtime calls to see if the future is ready or if it needs more time.
Lazy Execution: Unlike some languages, futures in Rust are “lazy.” Creating a future doesn’t immediately run anything. You need an executor (part of an async runtime like Tokio) to drive the future to completion. The executor ensures that when a future is awaited, it gets scheduled and resumes execution once it is ready.
Pinning: Certain futures, especially self-referential ones, rely on being pinned in memory. This prevents them from moving around so references inside them remain valid. Often, you won’t manually handle pinning unless you’re storing futures in data structures. Helpers like Box::pin can be used when needed.
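For reference, the Future trait itself lives in std::future and looks like this (simplified):
use std::pin::Pin;
use std::task::{Context, Poll};

pub trait Future {
    type Output;
    // The runtime calls poll repeatedly: Pending means "not ready yet,
    // wake me later"; Ready carries the finished value.
    fn poll(self: Pin<&mut Self>, cx: &mut Context<'_>) -> Poll<Self::Output>;
}
Poll::Ready(value) means the future has finished with value, while Poll::Pending tells the executor to try again once the future signals, via the waker in Context, that it can make progress.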
You can declare an asynchronous function with async fn:
async fn fetch_data() -> String {
"Hello, Async Rust!".to_string()
}
fn main() {
let future = fetch_data(); // Returns a future, does not run yet
println!("Future created but not executed!");
}
Key Point: Just calling fetch_data() doesn’t run it; you only get a future that describes the pending work.
Rust’s standard library does not include an async runtime, so you typically choose from popular crates:
| Runtime | Best For |
| --- | --- |
| Tokio | High-performance apps, servers, etc. |
| async-std | Simpler API, std-lib-like experience |
In this blog, we will be using Tokio as our async runtime. To use Tokio, add the following to your Cargo.toml:
[dependencies]
tokio = { version = "1", features = ["full"] }
To actually run the future and get its result, you can await it within another async context. Typically, you’ll do this inside an async runtime, such as Tokio:
#[tokio::main] // Provides the async executor
async fn main() {
let result = fetch_data().await;
println!("Received: {}", result);
}
Here, fetch_data().await means: wait until fetch_data() completes, then resume. While waiting, the runtime can schedule other tasks, leading to more efficient concurrency.
In Rust, you can use async in two ways:
async fn (Async Function Signature)
A function declared with async fn automatically returns a Future.
The function body is not executed immediately; calling the function returns a lazy future that must be .awaited or passed to an executor.
Example:
async fn fetch_data() -> String {
"Hello, Async Rust!".to_string()
}
fn main() {
let future = fetch_data(); // Doesn't execute yet, just returns a Future
}
async {} (Async Block)
An async {} block is an expression that creates a future right where it’s written (like any future, it won’t run until awaited or otherwise driven by an executor).
Useful for defining an async operation in place without creating a full function.
Example:
use tokio::runtime::Runtime;
fn main() {
let runtime = Runtime::new().unwrap();
let future = async {
"Hello from an async block!".to_string()
};
let result = runtime.block_on(future);
println!("{}", result);
}
Async Rust offers various ways to run multiple asynchronous tasks together.
join!
If you know the exact number of futures you want to run concurrently, you can use the join! macro:
use tokio::join;
async fn fetch_data_1() -> String {
"Data from fetch_data_1".to_string()
}
async fn fetch_data_2() -> String {
"Data from fetch_data_2".to_string()
}
#[tokio::main]
async fn main() {
let (resp1, resp2) = join!(fetch_data_1(), fetch_data_2());
println!("Resp1: {}, Resp2: {}", resp1, resp2);
}
Both fetch_data_1() and fetch_data_2() run concurrently, and join! waits until both complete.
join_all for Dynamic Collections
If you have a variable number of futures, you can use join_all:
use futures::future::join_all;
use std::future::Future;
use std::pin::Pin;
#[tokio::main]
async fn main() {
let futures: Vec<Pin<Box<dyn Future<Output = ()>>>> = vec![
Box::pin(async { println!("Task 1"); }),
Box::pin(async { println!("Task 2"); }),
Box::pin(async { println!("Task 3"); }),
];
join_all(futures).await;
}
Here, each async block is converted into a Box<dyn Future<Output = ()>>. We pin them on the heap and run them all to completion.
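Since join_all also collects each future’s output in order, a variant like the following (with a hypothetical fetch_item helper) gathers results from a dynamic number of futures:
use futures::future::join_all;

// Hypothetical helper; any async fn returning the same type works here.
async fn fetch_item(id: u32) -> String {
    format!("Item {}", id)
}

#[tokio::main]
async fn main() {
    // join_all accepts any iterator of futures and returns a Vec of
    // their outputs, preserving order.
    let results: Vec<String> = join_all((1..=3).map(fetch_item)).await;
    println!("{:?}", results);
}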
Sometimes, you only need the first completed future and want to discard the rest. Rust offers the select! macro (or race! in some crates) that waits for whichever future finishes first:
use tokio::time::{sleep, Duration};
use tokio::select;
#[tokio::main]
async fn main() {
let slow = async { sleep(Duration::from_millis(100)).await; println!("Slow task done"); };
let fast = async { sleep(Duration::from_millis(50)).await; println!("Fast task done"); };
select! {
_ = slow => println!("Slow won!"),
_ = fast => println!("Fast won!"),
}
}
Whichever async block finishes first “wins,” and you can decide how to handle the other futures from there.
If you have more than two futures and need to race them, you can use futures::future::select_all, which returns the first completed future from a list.
Example: select_all with Dynamic Futures
use futures::future::select_all;
use std::pin::Pin;
use tokio::time::{sleep, Duration};
use std::future::Future;
#[tokio::main]
async fn main() {
let futures: Vec<Pin<Box<dyn Future<Output = &str>>>> = vec![
Box::pin(async {
sleep(Duration::from_secs(3)).await;
"Task 1 done"
}),
Box::pin(async {
sleep(Duration::from_secs(2)).await;
"Task 2 done"
}),
Box::pin(async {
sleep(Duration::from_secs(1)).await;
"Task 3 done"
}),
];
let (result, _index, _remaining) = select_all(futures).await;
println!("Winner: {}", result);
}
The Either Enum
When using futures::future::select, the result is returned as an Either enum, indicating which future completed first.
Example: Using Either for Two Futures
use tokio::time::{sleep, Duration};
use futures::future::{self, Either};
#[tokio::main]
async fn main() {
let future1 = Box::pin(async {
sleep(Duration::from_secs(3)).await;
"Future 1 completed"
});
let future2 = Box::pin(async {
sleep(Duration::from_secs(1)).await;
"Future 2 completed"
});
match future::select(future1, future2).await {
Either::Left((result, _)) => println!("Winner: {}", result),
Either::Right((result, _)) => println!("Winner: {}", result),
}
}
Rust's async model relies on cooperative multitasking, meaning that tasks must explicitly yield control to the runtime. Without yielding, a single async task could monopolize the executor, preventing other tasks from running efficiently.
❌ Problem: Blocking the Executor
#[tokio::main]
async fn main() {
loop {
println!("Running..."); // Never gives up control!
}
}
This loop never awaits anything, so it completely blocks the async runtime.
Other tasks will never get a chance to run.
✅ Solution: Yielding Control
Use tokio::task::yield_now().await to allow other tasks to run:
use tokio::task::yield_now;
#[tokio::main]
async fn main() {
loop {
println!("Working...");
yield_now().await; // Allows other tasks to run
}
}
This helps ensure fair scheduling in cases where a single task might hog the executor.
🛠 When to Yield?
✔ Inside long-running async loops to prevent blocking other tasks.
✔ Before awaiting on a long operation to let other tasks run first.
✔ In CPU-bound async tasks to avoid starving other async operations (see the sketch below).
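For example, a CPU-heavy async task might yield every so often so it doesn’t starve its neighbors. A minimal sketch (the chunk size of 10,000 iterations is arbitrary):
use tokio::task::yield_now;

async fn sum_slowly(n: u64) -> u64 {
    let mut total = 0;
    for i in 0..n {
        total += i;
        // Hand control back to the runtime periodically so other tasks
        // on the same worker thread can make progress.
        if i % 10_000 == 0 {
            yield_now().await;
        }
    }
    total
}

#[tokio::main]
async fn main() {
    println!("Sum: {}", sum_slowly(1_000_000).await);
}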
In Rust, you can chain async computations using methods like .map(), .then(), and .and_then() from the futures crate’s FutureExt and TryFutureExt extension traits.
.then() to Chain Futures
The .then() method allows chaining another async function after a future completes.
use tokio::time::{sleep, Duration};
use futures::future::FutureExt;
async fn fetch_data() -> String {
sleep(Duration::from_secs(2)).await;
"Fetched data".to_string()
}
#[tokio::main]
async fn main() {
let future = fetch_data()
.then(|data| async move {
println!("Processing: {}", data);
format!("Processed: {}", data)
});
let result = future.await;
println!("Final result: {}", result);
}
.map() for Synchronous Transformations
If the transformation does not require async operations, .map() can be used.
use futures::future::FutureExt;
#[tokio::main]
async fn main() {
let future = async { 5 }
.map(|x| x * 2);
let result = future.await;
println!("{}", result); // 10
}
.and_then() for Async Operations Returning Results
The .and_then() method is useful when working with Result-returning async functions.
use futures::TryFutureExt;
async fn fetch_data() -> Result<String, &'static str> {
Ok("Fetched data".to_string())
}
#[tokio::main]
async fn main() {
let future = fetch_data()
.and_then(|data| async move {
Ok(format!("Processed: {}", data))
});
let result = future.await;
println!("{:?}", result); // Ok("Processed: Fetched data")
}
These methods allow for structured and readable async workflows, making complex async logic easier to manage.
Rust’s iterators provide a powerful way to lazily produce values one at a time. An iterator implements the Iterator trait and its required next() method, which returns values one by one.
Example: Basic Iterator
struct Counter {
count: u32,
}
impl Iterator for Counter {
type Item = u32;
fn next(&mut self) -> Option<Self::Item> {
if self.count < 5 {
self.count += 1;
Some(self.count)
} else {
None
}
}
}
fn main() {
let mut counter = Counter { count: 0 };
while let Some(value) = counter.next() {
println!("Iterated value: {}", value);
}
}
Iterators allow efficient, lazy evaluation, only computing values when requested.
In async Rust, iterators are replaced by streams, which allow producing multiple values asynchronously over time. Unlike regular iterators, streams require .await to get the next value.
Example: Using an Async Stream
[dependencies]
tokio-stream = "0.1"
use tokio_stream::{StreamExt, iter};
use tokio::time::{sleep, Duration};
#[tokio::main]
async fn main() {
let mut stream = iter(vec![1, 2, 3, 4, 5]);
while let Some(value) = stream.next().await {
println!("Received: {}", value);
sleep(Duration::from_secs(1)).await; // Simulate async work
}
}
How This Works
iter(vec![1, 2, 3, 4, 5]) creates a stream from a vector.
.next().await waits for the next item in the stream.
The loop processes one item at a time.
You can create a custom async stream using the async_stream::stream! macro.
[dependencies]
async-stream = "0.3"
use async_stream::stream;
use tokio_stream::StreamExt;
use tokio::time::{sleep, Duration};
#[tokio::main]
async fn main() {
let my_stream = stream! {
for i in 1..=5 {
sleep(Duration::from_secs(1)).await;
yield i; // Similar to returning a value in an iterator
}
};
tokio::pin!(my_stream); // Pin the stream before using it
while let Some(value) = my_stream.next().await {
println!("Received: {}", value);
}
}
Key Takeaways
Iterators produce values synchronously, while streams yield values asynchronously.
Unlike futures, which resolve once, streams can yield multiple times.
Use .next().await to consume stream values one at a time.
Use stream! for custom async streams.
Deciding between async tasks and traditional threads usually comes down to the nature of your workload:
Choose async when:
You’re dealing with I/O-bound tasks (network requests, file reads/writes, database queries).
You have a large number of concurrent tasks but don’t want the overhead of one thread per task (see the task-spawning sketch below).
You want non-blocking execution and efficient scheduling.
Example (I/O-bound, using the reqwest crate):
async fn fetch_data() -> Result<String, reqwest::Error> {
let response = reqwest::get("https://example.com").await?;
let body = response.text().await?;
Ok(body)
}
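To illustrate the “large number of concurrent tasks” point above, here is a rough sketch that spawns hundreds of lightweight tasks on one runtime (the task count and sleep duration are purely illustrative):
use tokio::time::{sleep, Duration};

#[tokio::main]
async fn main() {
    let mut handles = Vec::new();

    // Spawning async tasks is far cheaper than spawning OS threads,
    // so hundreds of concurrent waits are not a problem.
    for i in 0..500 {
        handles.push(tokio::spawn(async move {
            sleep(Duration::from_millis(10)).await; // simulated I/O wait
            i
        }));
    }

    for handle in handles {
        handle.await.unwrap();
    }
    println!("All tasks finished");
}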
Choose threads when:
You have CPU-bound tasks (heavy computation, data processing, image manipulation).
You need direct parallelism across multiple cores, or tasks are truly independent and won’t benefit from interleaving.
Example (CPU-bound):
use std::thread;
use std::time::Duration;
fn heavy_computation() {
// Simulate intense work
thread::sleep(Duration::from_secs(3));
println!("Computation complete");
}
fn main() {
let handle = thread::spawn(heavy_computation);
handle.join().unwrap();
}
A good rule of thumb:
Async: large-scale concurrency with I/O
Threads: parallel processing for CPU-heavy workloads
Error handling in async Rust looks almost the same as in synchronous Rust. You can return a Result<T, E> and use ? inside async functions:
async fn fetch_data() -> Result<String, Box<dyn std::error::Error>> {
Ok("Data fetched successfully".to_string())
}
#[tokio::main]
async fn main() -> Result<(), Box<dyn std::error::Error>> {
let data = fetch_data().await?;
println!("{}", data);
Ok(())
}
If you prefer more tailored error messages, define a custom error type or use crates like thiserror (a thiserror-based version is sketched after this example):
#[derive(Debug)]
enum MyError {
    FetchError(String),
    ProcessError(String),
}
async fn complex_operation() -> Result<(), MyError> {
    // Convert the underlying error into our own variant with map_err.
    let _data = fetch_data()
        .await
        .map_err(|e| MyError::FetchError(e.to_string()))?;
    Ok(())
}
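With the thiserror crate, the same error type can be written roughly like this, which also gives each variant a Display message (assuming thiserror is added to Cargo.toml):
[dependencies]
thiserror = "1"
use thiserror::Error;

#[derive(Error, Debug)]
enum MyError {
    #[error("failed to fetch data: {0}")]
    FetchError(String),
    #[error("failed to process data: {0}")]
    ProcessError(String),
}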
Time-sensitive operations can be canceled using tokio::time::timeout:
use tokio::time::{timeout, Duration};
async fn long_running_task() -> Result<(), &'static str> {
tokio::time::sleep(Duration::from_secs(10)).await;
Ok(())
}
#[tokio::main]
async fn main() -> Result<(), &'static str> {
match timeout(Duration::from_secs(5), long_running_task()).await {
Ok(result) => result,
Err(_) => Err("Task timed out"),
}
}
async move
Besides simply .await-ing functions in main(), you can run tasks in the background using tokio::spawn. This spawns a new concurrent task:
#[tokio::main]
async fn main() {
let handle = tokio::spawn(async move {
// The `move` keyword here means we capture any used variables by value
println!("Hello from a background task!");
});
// Do other work concurrently...
// Then wait for the background task
handle.await.unwrap();
}
async move ensures variables captured by the async block are moved into it. This is helpful when you need to pass owned data into background tasks.
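Here is a small sketch of why that matters: a spawned task may outlive the scope that created it, so it must own its data (the string value is just illustrative):
#[tokio::main]
async fn main() {
    let message = String::from("report.csv");

    // Without `move`, the block would only borrow `message`, and that borrow
    // could not be guaranteed to live long enough for a spawned task.
    // `async move` transfers ownership into the task instead.
    let handle = tokio::spawn(async move {
        println!("Processing {} in the background", message);
    });

    handle.await.unwrap();
}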
Rust’s async/await feature is a powerful tool for handling I/O-bound workloads and building highly scalable applications. By leveraging futures, non-blocking runtimes like Tokio, and Rust’s safety guarantees, you can write code that remains readable, efficient, and concurrency-friendly.
This blog covers all the fundamental concepts of async programming in Rust, from futures and async/await syntax to streams and concurrency handling. However, to fully master async Rust, you may need to explore more advanced topics such as:
Executor internals: How async runtimes schedule tasks efficiently.
Pinning and Unpin in depth: Understanding memory safety in self-referential futures.
Async error handling patterns: How to gracefully handle failures in async workflows.
Combining multiple streams and handling backpressure: Techniques for managing large-scale async data pipelines.
Official Async Book – in-depth coverage of Rust’s async model.
Tokio Documentation – guides, best practices, and advanced features.