Why I Switched from Node.js to Rust for AWS Lambda
How AWS Lambda's new billing for initialization time made switching from Node.js to Rust a cost-effective decision for high-throughput workloads.
What Used to Be Latency Overhead Is Now Cost Overhead
If you’ve built with AWS Lambda before, you’re familiar with cold starts: that latency spike when Lambda needs to spin up a new runtime container. In Node.js, this often means an additional 100ms–1s (or more), depending on memory configuration and code size.
As of August 2025, AWS charges for Lambda initialization time. This means cold starts aren’t just a user experience issue. For high-throughput or latency-sensitive workloads, this can represent a measurable percentage of your Lambda spend.
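To see why billed initialization matters at scale, here's a back-of-envelope sketch. The GB-second rate below is AWS's published x86 duration price at the time of writing; the cold-start volume and init durations are illustrative assumptions, not measurements:

```rust
// Rough cost of billed INIT time. The GB-second rate is AWS's published
// x86 duration price; cold-start volume and init times are assumptions.
fn init_cost_usd(cold_starts: u64, init_ms: f64, memory_gb: f64) -> f64 {
    const RATE_PER_GB_SECOND: f64 = 0.0000166667;
    cold_starts as f64 * (init_ms / 1000.0) * memory_gb * RATE_PER_GB_SECOND
}

fn main() {
    // 1M cold starts/month: 300 ms Node.js init at 512 MB vs 70 ms Rust at 256 MB
    let node = init_cost_usd(1_000_000, 300.0, 0.5);
    let rust = init_cost_usd(1_000_000, 70.0, 0.25);
    println!("node: ${node:.2}/mo, rust: ${rust:.2}/mo");
}
```

Init billing alone is small in absolute terms; the larger savings in practice come from shorter execution duration at a smaller memory size, compounded across every invocation.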
That’s when I decided to move critical workloads to Rust.
Rust’s Performance Advantage on Lambda
The performance delta between Node.js and Rust on Lambda is significant and consistent across workloads. According to Max Day's Lambda runtime benchmarks:
- 50–80% lower cold start times for Rust
- 2–3x lower memory usage, which lets you provision smaller memory sizes and pay less per GB-second
- Tighter latency distributions under load
In high-concurrency environments, Node.js often shows p99 latencies that spike unpredictably. With Rust, latency tails are much more predictable, especially when paired with the Tokio runtime and Axum’s lightweight abstractions.
The economic implications are straightforward: fewer milliseconds, fewer megabytes, lower cost.
A Practical Rust Stack for Lambda
To prove out the benefits, I built a REST API using Axum (built on Tokio and Tower) and deployed it to Lambda using AWS CDK and the @cdklabs/aws-lambda-rust construct.
Dual-Mode Execution: Local + Lambda
I wanted a dev experience that didn’t compromise on productivity or fidelity. Here's how conditional compilation makes this possible:
```rust
use axum::Router;
use tokio::net::TcpListener;
use tower::ServiceBuilder;

#[tokio::main]
async fn main() -> Result<(), lambda_http::Error> {
    let app = Router::new(); // routes elided

    #[cfg(debug_assertions)]
    {
        // Local development server
        let listener = TcpListener::bind("127.0.0.1:3000").await?;
        axum::serve(listener, app).await?;
    }

    #[cfg(not(debug_assertions))]
    {
        // AWS Lambda runtime
        let app = ServiceBuilder::new()
            .layer(axum_aws_lambda::LambdaLayer::default())
            .service(app);
        lambda_http::run(app).await?;
    }

    Ok(())
}
```
This pattern gives me a single codebase that behaves like a traditional HTTP server locally but compiles down to a Lambda-compatible handler in production. No need for mocks, proxies, or duplicated code paths.
Key Features of the Stack
- Axum for composable HTTP routing
- Tower Middleware for CORS, compression, request tracing
- Structured JSON logging with tracing_subscriber for CloudWatch compatibility
- Integration with AWS X-Ray for end-to-end request tracing
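As a sketch of how those Tower layers compose on an Axum router (the route and the permissive CORS policy here are illustrative, not the exact production configuration):

```rust
use axum::{routing::get, Router};
use tower_http::{
    compression::CompressionLayer,
    cors::CorsLayer,
    trace::TraceLayer,
};

// Hypothetical router demonstrating the middleware stack described above.
fn build_app() -> Router {
    Router::new()
        .route("/health", get(|| async { "ok" }))
        .layer(TraceLayer::new_for_http()) // per-request tracing spans
        .layer(CompressionLayer::new())    // gzip/br response compression
        .layer(CorsLayer::permissive())    // demo only; lock down in production
}
```

Each `.layer()` wraps the routes below it, so ordering matters: tracing is applied last in this chain and therefore observes the request first.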
Dependencies of Note
| Crate | Purpose |
|---|---|
| axum | HTTP routing and state management |
| tower-http | Middleware: CORS, compression, tracing |
| lambda_http | Event conversion between AWS and Axum |
| axum-aws-lambda | Transparent Lambda runtime integration |
| tracing | Structured, async-aware logging |
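A minimal `[dependencies]` section wiring these together might look like the following; the version numbers are illustrative, so check crates.io for current releases and feature flags:

```toml
[dependencies]
axum = "0.7"
tokio = { version = "1", features = ["macros", "rt-multi-thread"] }
tower = "0.4"
tower-http = { version = "0.5", features = ["cors", "compression-gzip", "trace"] }
lambda_http = "0.13"
axum-aws-lambda = "0.8"
tracing = "0.1"
tracing-subscriber = { version = "0.3", features = ["json", "env-filter"] }
```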
Deployment with CDK and Lambda Rust Construct
Thanks to @cdklabs/aws-lambda-rust, building and deploying the Rust binary was seamless:
```typescript
import { join } from "path";
import { RustFunction } from "@cdklabs/aws-lambda-rust";

const fn = new RustFunction(this, "api-handler", {
  entry: join(__dirname, "..", "..", "api"),
  binaryName: "api-handler",
});
```
Unlike Node.js, where your source and node_modules are zipped as-is, the Rust deployment artifact is a single, statically compiled binary. This dramatically improves both startup time and deploy reliability.
Local Development and DX
Rust doesn’t have to mean poor ergonomics. My local workflow looks like this:
```sh
# Run locally with live reload
cargo watch -x run
```
And for deployment:
```sh
cd cdk
npx cdk deploy
```
The feedback loop is tight. With cargo-watch, you get hot reload behavior similar to nodemon, and test iteration is fast thanks to Rust’s compile-time guarantees.
Observability and Tracing
One of Rust’s superpowers in Lambda is predictable, structured observability.
I use the tracing ecosystem with the following setup:
- tracing_subscriber to emit structured JSON logs
- AWS Lambda’s native log forwarding to CloudWatch
- Optional tracing-opentelemetry for full distributed traces via AWS X-Ray
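A minimal initialization for that setup could look like this (it assumes the json and env-filter features of tracing_subscriber; the default "info" filter is a project choice, not a requirement):

```rust
use tracing_subscriber::EnvFilter;

// Call once at startup, before handling any requests.
fn init_tracing() {
    tracing_subscriber::fmt()
        .json() // structured JSON lines, queryable via CloudWatch Logs Insights
        .with_env_filter(
            // Honor RUST_LOG if set, otherwise default to "info".
            EnvFilter::try_from_default_env().unwrap_or_else(|_| EnvFilter::new("info")),
        )
        .init();
}
```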
This gives me a full picture of function execution times, HTTP request lifecycles, and custom spans, all without adding runtime cost or dependencies like Datadog.
Results in Production
After migrating key Node.js endpoints to Rust:
| Metric | Node.js | Rust |
|---|---|---|
| Cold Start (P50) | ~300ms | ~70ms |
| Execution Duration (Avg) | ~100ms | ~25ms |
| Memory Allocation | 512MB | 256MB |
| p99 Latency Under Load | 500ms+ | <100ms |
These are real improvements tracked with CloudWatch and X-Ray. The cost reduction alone for these functions was 30–40% monthly, and latency dropped enough to remove retries from upstream services.
Migration Advice
Thinking of making the jump? Here’s my playbook:
- Start small: Migrate a simple REST endpoint first.
- Isolate logic from I/O: Adopt the functional core/imperative shell pattern to keep your Rust code modular and testable.
- Benchmark everything: Use CloudWatch dashboards to compare before/after.
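The functional core/imperative shell point deserves a concrete illustration. Everything here (the discount rule, the handler shape) is invented for the example, but the split is the real takeaway: pure logic you can unit-test without any AWS setup, wrapped by a thin shell that owns the I/O:

```rust
// Functional core: pure and synchronous, so it unit-tests without any AWS setup.
fn apply_discount(total_cents: u64, percent: u64) -> u64 {
    total_cents - (total_cents * percent) / 100
}

// Imperative shell: owns the I/O. In a real handler this would parse the
// request body and read the discount from config; both are stubbed here.
fn handle_checkout(total_cents: u64) -> u64 {
    let percent = 10; // stand-in for a config/DB lookup
    apply_discount(total_cents, percent)
}

fn main() {
    assert_eq!(apply_discount(10_000, 10), 9_000);
    println!("{}", handle_checkout(10_000)); // prints 9000
}
```

Keeping the core free of async and AWS types also means your fastest test loop is plain `cargo test`, with no Lambda emulation involved.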
Final Thoughts
AWS Lambda billing has evolved. So should your tech stack.
Rust brings unmatched performance, lower memory usage, and best-in-class reliability to Lambda functions. With the tooling now mature, the DX gap with Node.js has largely closed.
If you're building latency-sensitive APIs, real-time pipelines, or simply looking to optimize your AWS bill, Rust isn’t just an alternative—it’s an upgrade.
- Explore the example repo: github.com/hollanddd/axum-lambda-api
- Need help migrating? My DMs are open.