When building backend logic, it's pretty common for developers to reach for the tried and tested "database + async logic" pattern. But what if we modeled data like entities in a game engine — things that have components and move through discrete states, modified by systems?

Motivated by my past experience writing credit union software, I began to wonder whether there's a way to handle business logic that better encapsulates how things work in the real world.

In previous projects, the backend logic often ended up scattered and hard to reason about. We had multiple places where state was stored, heavy locking, and no single source of truth for our transactions. It worked… but it wasn’t elegant, and making changes or reasoning about the system was painful.

By using ECS, we get a centralized world of entities, where every piece of data lives in one place, and systems operate deterministically on that data. Locks are localized, pipelines are explicit, and the overall flow becomes much easier to understand, test, and extend.

Why ECS for backend logic?

ECS is really just a dataflow architecture: Entities are composed of data (components), and systems process entities that match specific component queries.

Everything in a backend is data anyway — jobs, users, transactions, sessions. ECS takes that idea and formalizes it. Instead of thinking in terms of function calls and database mutations, we think in terms of data transformations over time.

The Setup

use bevy_ecs::prelude::*;
use std::sync::{Arc, Mutex};

#[derive(Clone)]
struct AppState {
    world: Arc<Mutex<World>>,
}

We use a shared World wrapped in a Mutex so both our HTTP handler and background ECS scheduler can access and mutate it safely.

The World is where all our entities live — each transaction is just another entry in that space.

Defining our components

Each transaction is an entity with several components — source, target, amount, and a status.

#[derive(Component)]
struct Source(pub String);

#[derive(Component)]
struct Target(pub String);

#[derive(Component)]
struct Amount(pub f32);

#[derive(Clone, Copy, Debug, PartialEq, Eq)]
enum TransactionStatus {
    Pending,
    Posted,
    Declined,
}

// Wrap the enum in a component so systems can query it as `&Status`
// and mutate it in place via `status.0`.
#[derive(Component)]
struct Status(pub TransactionStatus);

Each piece of data lives in its own type-safe component. This lets systems later express very precise queries — like “give me all entities that have an Amount and a Pending status.”
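
As a quick illustration, here's a hypothetical system that only looks at pending amounts. It uses a runtime check on the Status component; with a marker component per status, this could instead become a compile-time With<> filter:

fn report_pending(query: Query<(&Amount, &Status)>) {
    for (amount, status) in query.iter() {
        // Skip anything already posted or declined.
        if status.0 == TransactionStatus::Pending {
            println!("pending amount: {}", amount.0);
        }
    }
}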

Spawning entities via Axum

As a quick demo, I've set my project up to create a new entity inside the ECS world every time a request hits our API.

async fn create_transaction(
    State(state): State<AppState>,
    Json(payload): Json<TransactionInput>,
) -> Response {
    let mut world = state.world.lock().unwrap();
    // Every transaction enters the world as Pending; the validation
    // system decides whether it gets Posted or Declined.
    world.spawn((
        Source(payload.source),
        Target(payload.target),
        Amount(payload.amount),
        Status(TransactionStatus::Pending),
    ));
    StatusCode::CREATED.into_response()
}
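
For completeness, here's a minimal sketch of the request payload and the axum imports the handler leans on (the field names are assumptions for this demo):

use axum::{
    extract::{Json, State},
    http::StatusCode,
    response::{IntoResponse, Response},
};
use serde::Deserialize;

// Hypothetical request body for the demo.
#[derive(Deserialize)]
struct TransactionInput {
    source: String,
    target: String,
    amount: f32,
}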

The important thing is that we never run the transaction logic directly in the HTTP handler. Instead, the handler spawns data into the ECS world and returns immediately — like a producer dropping work into a queue.

The ECS Systems

Systems are just functions that query for specific component sets and modify them:

fn modify(mut query: Query<&mut Amount>) {
    for mut amount in query.iter_mut() {
        amount.0 = 666.0;
    }
}

This trivial example overwrites every transaction’s amount with an arbitrary fixed value. In a real system, this could be a simple transformation stage — e.g., applying fees, rounding, or normalizing currency.
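
For instance, a fee-applying stage might look like this (the flat 1% fee is a made-up number for the sketch):

// Hypothetical transformation stage: deduct a 1% fee from every
// transaction amount before validation runs.
fn apply_fee(mut query: Query<&mut Amount>) {
    for mut amount in query.iter_mut() {
        amount.0 *= 0.99;
    }
}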

The next system prints each entity and then despawns it:

fn print_and_remove(
    mut commands: Commands,
    query: Query<(Entity, &Source, &Target, &Amount, &Status)>,
) {
    for (entity, source, target, amount, status) in query.iter() {
        println!(
            "Entity {:?}\n  from: {}\n  to: {}\n  amount: {}\n  status: {:?}\n",
            entity, source.0, target.0, amount.0, status.0
        );
        // Despawning through Commands is deferred, which is what we
        // want while we're still iterating over the query.
        commands.entity(entity).despawn();
    }
}

Finally, we have a system to validate an incoming transaction (e.g., Amount > 0 and Target != Source):

fn validate_transaction(mut query: Query<(&Source, &Target, &Amount, &mut Status)>) {
    for (source, target, amount, mut status) in query.iter_mut() {
        if amount.0 <= 0.0 {
            println!(
                "Declining transaction from {} to {} (invalid amount: {})",
                source.0, target.0, amount.0
            );
            status.0 = TransactionStatus::Declined;
        } else if source.0 == target.0 {
            println!("Declining transaction (source == target: {})", source.0);
            status.0 = TransactionStatus::Declined;
        } else {
            println!(
                "Posting transaction from {} to {} (amount: {})",
                source.0, target.0, amount.0
            );
            status.0 = TransactionStatus::Posted;
        }
    }
}

Each system is pure data logic — no async, and no side effects beyond ECS mutations and a bit of logging.
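
That purity pays off in tests. Here's a minimal sketch of exercising validate_transaction against a throwaway World:

#[cfg(test)]
mod tests {
    use super::*;

    #[test]
    fn declines_zero_amount() {
        // Build an isolated world with a single bad transaction.
        let mut world = World::new();
        let entity = world
            .spawn((
                Source("alice".into()),
                Target("bob".into()),
                Amount(0.0),
                Status(TransactionStatus::Pending),
            ))
            .id();

        // Run just the system under test for one tick.
        let mut schedule = Schedule::default();
        schedule.add_systems(validate_transaction);
        schedule.run(&mut world);

        assert_eq!(
            world.get::<Status>(entity).unwrap().0,
            TransactionStatus::Declined
        );
    }
}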

What makes this pattern feel cleaner is that everything revolves around data — not control flow. We’re no longer chaining async futures or passing mutable state around; we’re simply defining what kind of data a system cares about, and letting the scheduler handle the rest. It’s a declarative way of describing the backend’s behavior: “when a thing has these components, run this logic.”

Other systems that could be added might include:

  • Retry / Timeout handling: Track pending transactions and automatically retry or expire them after a number of ticks.
  • Notifications / Events: Generate emails, push notifications, or messages when transactions are posted or declined.
  • Logging & Analytics: Record transactions for audit purposes or metrics collection.
  • Fraud Detection: Run checks before posting; flag suspicious transactions.

The point is, we're able to select entities composed in a very specific way and do very specific things with them. Each system is concerned only with the data it needs.
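
As a sketch of the first idea, a timeout system might count ticks on pending transactions and expire the stale ones (the TicksPending component and the 50-tick cutoff are hypothetical):

// Hypothetical component that some earlier system attaches to
// pending transactions when they enter the world.
#[derive(Component)]
struct TicksPending(pub u32);

fn expire_stale(mut query: Query<(&mut TicksPending, &mut Status)>) {
    for (mut ticks, mut status) in query.iter_mut() {
        if status.0 == TransactionStatus::Pending {
            ticks.0 += 1;
            // At a 100 ms tick rate, 50 ticks is roughly five seconds.
            if ticks.0 > 50 {
                status.0 = TransactionStatus::Declined;
            }
        }
    }
}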

The ECS Schedule Loop

Because we're not running inside Bevy proper — we're just using bevy_ecs — we have to manually "tick" the world, similar to how a game engine handles frames:

task::spawn(async move {
    let mut schedule = Schedule::default();
    schedule.add_systems((modify, validate_transaction, print_and_remove).chain());
    loop {
        {
            // Hold the lock only for the duration of the tick so the
            // Axum handler can spawn new entities between runs.
            let mut world = shared_state.world.lock().unwrap();
            schedule.run(&mut world);
        }
        tokio::time::sleep(std::time::Duration::from_millis(100)).await;
    }
});

This is our “world runner.” It repeatedly locks the world, runs all registered systems, and releases it back for the next tick. The Axum server and ECS loop run side-by-side — the web server feeds new data in, and the ECS processes it continuously.
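
Tying it together, a minimal main might look something like this (assuming axum 0.7; the route and port are arbitrary choices for the demo):

use tokio::task;

#[tokio::main]
async fn main() {
    let state = AppState {
        world: Arc::new(Mutex::new(World::new())),
    };

    // The world runner gets its own handle to the shared world.
    let shared_state = state.clone();
    task::spawn(async move {
        // ... the schedule loop from above ...
    });

    let app = axum::Router::new()
        .route("/transactions", axum::routing::post(create_transaction))
        .with_state(state);

    let listener = tokio::net::TcpListener::bind("0.0.0.0:3000").await.unwrap();
    axum::serve(listener, app).await.unwrap();
}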

Why this works

Bevy ECS is fast, deterministic, and embarrassingly parallel. It scales naturally because systems only touch the data they need, and the scheduler can safely run many of them at once.

Compared to traditional async backends:

  • Explicit data flow instead of hidden control flow.
  • Pipelines you can visualize directly as chains of systems.
  • Transformations that are testable in isolation.

You can even extend this model to multi-step transaction flows: validation → authorization → posting → settlement, each as its own system, running on entities that satisfy specific component filters.
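
Here's a sketch of one such stage, using hypothetical marker components to gate what each system sees (the 10,000 cutoff is invented for the example):

// Hypothetical stage markers: a system consumes entities that have
// reached one stage and tags them for the next.
#[derive(Component)]
struct Validated;

#[derive(Component)]
struct Authorized;

fn authorize(
    mut commands: Commands,
    query: Query<(Entity, &Amount), (With<Validated>, Without<Authorized>)>,
) {
    for (entity, amount) in query.iter() {
        // Stand-in authorization rule for the sketch.
        if amount.0 < 10_000.0 {
            commands.entity(entity).insert(Authorized);
        }
    }
}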

Closing thoughts

This tiny experiment shows how ECS can reshape backend thinking. By centralizing all data in a single world of entities, we reduce scattered state and heavy locking, making pipelines more explicit and deterministic. Each system only touches the data it cares about, which improves testability and clarity.

That said, there are trade-offs:

Persistence: ECS excels at in-memory pipelines, but persistent storage still requires a database. This means some of the speed and simplicity may be offset when integrating durable storage. Further, this approach might not be applicable for structs that are complex enough that they can't easily (or realistically) be serialized with serde.
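
One way to bridge that gap is to drain finished transactions into a channel and let an async task own the database writes. A sketch, with all names hypothetical:

use tokio::sync::mpsc::UnboundedSender;

// Plain record handed off to the async database writer.
struct PostedRecord {
    source: String,
    target: String,
    amount: f32,
}

// Hypothetical resource wrapping the sender half of the channel.
#[derive(Resource)]
struct DbSender(UnboundedSender<PostedRecord>);

// Drain posted transactions out of the world; the ECS tick itself
// never blocks on the database.
fn persist_posted(
    mut commands: Commands,
    sender: Res<DbSender>,
    query: Query<(Entity, &Source, &Target, &Amount, &Status)>,
) {
    for (entity, source, target, amount, status) in query.iter() {
        if status.0 == TransactionStatus::Posted {
            let _ = sender.0.send(PostedRecord {
                source: source.0.clone(),
                target: target.0.clone(),
                amount: amount.0,
            });
            commands.entity(entity).despawn();
        }
    }
}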

Concurrency: Locking the shared world can become a bottleneck at high scale. Solutions could include sharding the ECS world, using message queues, or orchestrating work in other ways.

Learning curve: This approach is unconventional, and most backend developers won't have ECS intuitions to lean on, so expect some ramp-up time.