flow-arch / backend · exploration
The backend should be as honest as the frontend. Data transformation is just a function. Business logic is just a pure expression. Infrastructure is just plumbing around clean data models.
// 01 — Philosophy
Most backend code is a tangle of mutable state, hidden side effects, and implicit dependencies. Debugging it means holding an entire runtime in your head. Flow-Arch backend proposes a different approach.
A backend that processes data through pure functions is deterministic, testable, and auditable — by construction. Not by discipline. By design.
Every business rule is a pure function. Request in, response out. No hidden reads. No unexpected writes.
The schema is the real backend. Serverless functions are thin wrappers. Infrastructure is just plumbing.
DB reads, API calls, and logging live at the boundary. Everything between input and output is pure.
Complex pipelines are built by composing simple functions. No middleware forests. No plugin systems.
A strong type system documents intent at the function signature. If it compiles, the contract is met.
Serverless and edge functions eliminate the operational burden of maintaining traditional servers. Focus on logic, not infrastructure.
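The principles above can be sketched in a few lines of TypeScript. This is a minimal illustration, not an API from this project: `Cart`, `applyDiscount`, and the handler shape are all hypothetical names chosen for the example.

```typescript
// Pure core: a business rule with no hidden reads or unexpected writes.
// Cart and applyDiscount are illustrative names, not part of any real API.
type Cart = Readonly<{ subtotal: number; coupon?: string }>

const applyDiscount = (cart: Cart): number =>
  cart.coupon === "SAVE10" ? cart.subtotal * 0.9 : cart.subtotal

// Boundary: the only place where I/O (DB reads, logging) is allowed.
const handler = async (req: { body: Cart }) => {
  const total = applyDiscount(req.body)  // pure, deterministic, trivially testable
  console.log("computed total", total)   // side effect stays at the edge
  return { statusCode: 200, body: { total } }
}
```

Because `applyDiscount` touches nothing outside its arguments, it can be tested without mocking a database or a request object.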
// 02 — Relationship to Frontend
vanilla-flow and the pure backend are not separate projects. They are the same idea applied to two different layers. The same mental model runs through both.
Frontend (vanilla-flow)
Web Component + Shadow DOM
State → Reducer → View
Pure functions. Zero deps.
Backend (pure-backend)
Serverless + Edge functions
Request → Transform → Response
Pure functions. Data model first.
// Frontend (vanilla-flow)
State  → view(state)            → HTML
Action → reducer(state, action) → newState

// Backend (pure-backend)
Request → validate(req)      → Input
Input   → transform(input)   → Result
Result  → respond(result)    → Response

// Same idea. Different runtime.
// Data flows in one direction.
// Every step is a pure function.
// 03 — Languages
Each language in this exploration offers a different perspective on pure functional data processing — from pragmatic JS to mathematically rigorous Haskell. The goal is not to pick a winner, but to understand what each one teaches.
JS can write pure functions — but enforces nothing. The exploration: how far can discipline and convention take you before you need a type system?
TypeScript's readonly, discriminated unions, and branded types push JS toward a more honest functional style. Not Haskell — but a real step up.
React's useReducer and server components show that pure functional style works at scale. A reference point for what the browser can do.
Elm proves pure functional frontend is possible in production. Its architecture directly inspired vanilla-flow. Learning Elm is learning the foundation.
The reference implementation of pure functional programming. IO Monad, type classes, lazy evaluation. Understanding Haskell means understanding why the rules exist.
Scala bridges OOP and functional. Cats Effect and ZIO bring principled effect systems to the JVM. Industrial-scale pure functional data processing.
Erlang's VM with a modern syntax. Immutable data, pattern matching, and actor model concurrency. The proof that functional scales to telecom reliability.
// JavaScript — pure by convention
const getActiveEmails = (users) =>
  users
    .filter(u => u.active)
    .map(u => u.email)

// TypeScript — pure + type-safe
type User = Readonly<{ email: string; active: boolean }>

const getActiveEmails = (users: ReadonlyArray<User>): ReadonlyArray<string> =>
  users
    .filter(u => u.active)
    .map(u => u.email)

-- Haskell — purity enforced by the compiler
getActiveEmails :: [User] -> [String]
getActiveEmails users = map email $ filter active users

# Elixir — pure + pattern matching
def get_active_emails(users) do
  users
  |> Enum.filter(& &1.active)
  |> Enum.map(& &1.email)
end
// 04 — AI + Pure Functions
AI code generation works best when the target is pure and declarative. A function with a clear type signature and no side effects is precisely the kind of code AI can generate, verify, and compose reliably.
AI generates code. Pure functional code is verifiable, composable, and auditable. Strong types catch AI errors at compile time. Declarative DSLs constrain the space of what AI can generate — which makes generation more reliable, not less powerful.
AI is excellent at generating pure data transformation functions. The type signature is the spec. The compiler is the verifier. Human reviews the intent, not the implementation detail.
A library of small pure functions becomes a palette. AI composes them into complex pipelines. Each primitive is human-verified. Compositions are AI-generated.
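A sketch of what such a palette might look like, assuming hypothetical primitives (`trim`, `lower`, `nonEmpty`) and a generic `pipe` combinator; none of these names come from the project itself.

```typescript
// Human-verified primitives: each one small, pure, and obvious.
const trim = (s: string): string => s.trim()
const lower = (s: string): string => s.toLowerCase()
const nonEmpty = (s: string): boolean => s.length > 0

// Generic left-to-right composition over same-typed functions.
const pipe = <T>(...fns: Array<(x: T) => T>) => (x: T): T =>
  fns.reduce((acc, fn) => fn(acc), x)

// An AI-composed pipeline built only from the verified primitives above.
const normalise = pipe(trim, lower)
const cleanTags = (tags: ReadonlyArray<string>): ReadonlyArray<string> =>
  tags.map(normalise).filter(nonEmpty)
```

Because every primitive is pure, verifying the composition reduces to verifying the wiring, not re-auditing each step.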
Define the types. Let AI implement the functions. The type system rejects incorrect implementations automatically. Types become the contract between human intent and AI execution.
A well-designed DSL limits what AI can express — and that is the point. Constrained generation is more reliable. SQL is an example: AI writes SQL well because SQL is declarative.
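One way to see why constraint helps: a tiny, hypothetical filter DSL modelled as a discriminated union. The type defines the entire space of expressible queries, so anything AI generates either fits the grammar or fails to compile. (`Filter` and `toSql` are illustrative, not part of this project.)

```typescript
// The DSL: a closed grammar of filter expressions.
type Filter =
  | { kind: "eq"; field: string; value: string }
  | { kind: "and"; left: Filter; right: Filter }

// A pure interpreter: every well-typed Filter renders; nothing else compiles.
const toSql = (f: Filter): string =>
  f.kind === "eq"
    ? `${f.field} = '${f.value}'`
    : `(${toSql(f.left)} AND ${toSql(f.right)})`
```

The generator cannot invent an unsupported operator, because no such variant exists in the union.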
// Step 1: Human defines types (the contract)
type Order = Readonly<{
  id: string
  items: ReadonlyArray<LineItem>
  discount: number
}>

type OrderSummary = Readonly<{
  total: number
  tax: number
  payable: number
}>

// Step 2: Human writes the signature only
declare const summariseOrder: (order: Order) => OrderSummary

// Step 3: AI implements the body
// Step 4: TypeScript verifies the types match
// Step 5: Tests verify the logic

// The type is the spec.
// The compiler is the first reviewer.
// Purity means the function is trivially testable.
// 05 — Learning Roadmap
This is a long-term, public record of learning. Each phase builds on the last. No deadlines — only direction.