flow-arch / backend · exploration

Pure functions.
Declarative data. No side effects.

The backend should be as honest as the frontend. Data transformation is just a function. Business logic is just a pure expression. Infrastructure is just plumbing around clean data models.

f(data) → result
g(result) → output
(g ∘ f)(data) → output

no mutation.
no hidden state.
no surprises.

Why the backend
should be pure too.

Most backend code is a tangle of mutable state, hidden side effects, and implicit dependencies. Debugging it means holding an entire runtime in your head. Flow-Arch backend proposes a different approach.

Core Claim

A backend that processes data through pure functions is deterministic, testable, and auditable — by construction. Not by discipline. By design.

PRINCIPLE_01
Data In, Data Out

Every business rule is a pure function. Request in, response out. No hidden reads. No unexpected writes.
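As a sketch of the principle, in TypeScript; the pricing rule and its types are invented for illustration, not part of flow-arch:

```typescript
// A business rule as a pure function: request in, response out.
// No database reads, no clock, no globals; everything it needs is a parameter.
type PriceRequest  = Readonly<{ unitPrice: number; quantity: number; memberDiscount: number }>
type PriceResponse = Readonly<{ subtotal: number; discount: number; total: number }>

const priceOrder = (req: PriceRequest): PriceResponse => {
  const subtotal = req.unitPrice * req.quantity
  const discount = subtotal * req.memberDiscount
  return { subtotal, discount, total: subtotal - discount }
}
```

Same input, same output, every time; testing it is a table of cases, not a mock setup.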

PRINCIPLE_02
Data Model is the Core

The schema is the real backend. Serverless functions are thin wrappers. Infrastructure is just plumbing.

PRINCIPLE_03
Side Effects at the Edge

DB reads, API calls, and logging live at the boundary. Everything between input and output is pure.
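A minimal sketch of the boundary split; the `db` parameter is a hypothetical storage interface standing in for whatever the runtime provides, not a real API:

```typescript
// Pure core: renameUser never touches the database.
type User = Readonly<{ id: string; name: string }>

const renameUser = (user: User, name: string): User => ({ ...user, name })

// Impure shell: effects live here, at the boundary.
const handleRename = async (
  db: { get(id: string): Promise<User>; put(u: User): Promise<void> },
  id: string,
  name: string
): Promise<User> => {
  const user = await db.get(id)          // effect: read at the edge
  const updated = renameUser(user, name) // pure: trivially testable
  await db.put(updated)                  // effect: write at the edge
  return updated
}
```

The core can be unit-tested with plain values; only the thin shell needs a fake database.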

PRINCIPLE_04
Composition Over Configuration

Complex pipelines are built by composing simple functions. No middleware forests. No plugin systems.
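The idea can be sketched with a two-function `pipe` combinator; the combinator and the string helpers are illustrative:

```typescript
// pipe(f, g) is g ∘ f: the output of one step feeds the next.
const pipe = <A, B, C>(f: (a: A) => B, g: (b: B) => C) => (a: A): C => g(f(a))

const trim  = (s: string) => s.trim()
const lower = (s: string) => s.toLowerCase()
const slug  = (s: string) => s.replace(/\s+/g, "-")

// No middleware registry, no plugin hooks: just composition.
const normalise = pipe(pipe(trim, lower), slug)
```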

PRINCIPLE_05
Types as Documentation

A strong type system documents intent at the function signature. If it compiles, the contract is met.
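One common shape of this, sketched with an invented `parseAge` example: a discriminated-union result makes the signature state both outcomes, with no exceptions and no nulls:

```typescript
// The signature is the documentation: parsing either succeeds with a
// value or fails with a reason, and callers must handle both cases.
type Parsed  = Readonly<{ ok: true; value: number }>
type Failed  = Readonly<{ ok: false; reason: string }>
type ParseResult = Parsed | Failed

const parseAge = (raw: string): ParseResult => {
  const n = Number(raw)
  return Number.isInteger(n) && n >= 0
    ? { ok: true, value: n }
    : { ok: false, reason: `not a valid age: ${raw}` }
}
```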

PRINCIPLE_06
Abandon Server Complexity

Serverless and edge functions eliminate the compatibility debt of traditional servers. Focus on logic, not infrastructure.

Two halves of
one philosophy.

vanilla-flow and the pure backend are not separate projects. They are the same idea applied to two different layers. The same mental model runs through both.

Frontend
vanilla-flow

Web Component + Shadow DOM
State → Reducer → View
Pure functions. Zero deps.

Backend
pure-backend

Serverless + Edge functions
Request → Transform → Response
Pure functions. Data model first.

Shared across both layers
pure functions · declarative style · immutable data · no hidden state · side effects at boundary · composition over inheritance · zero framework lock-in · data model first
CONCEPT · the same loop — frontend and backend
// Frontend (vanilla-flow)
State  → view(state)            → HTML
Action → reducer(state, action) → newState

// Backend (pure-backend)
Request → validate(req)      → Input
Input   → transform(input)   → Result
Result  → respond(result)    → Response

// Same idea. Different runtime.
// Data flows in one direction.
// Every step is a pure function.

Seven languages.
One philosophy.

Each language in this exploration offers a different perspective on pure functional data processing — from pragmatic JS to mathematically rigorous Haskell. The goal is not to pick a winner, but to understand what each one teaches.

JavaScript
Exploring Now
pure style by discipline

JS can write pure functions — but enforces nothing. The exploration: how far can discipline and convention take you before you need a type system?

map/filter/reduce · immutability by convention · closures · no enforcement
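A sketch of what that discipline looks like in practice; `Object.freeze` gives shallow runtime protection, and with the annotations dropped this is plain JS:

```typescript
// Purity by convention: nothing in the language stops mutation,
// so discipline supplies it. Object.freeze is the closest plain JS
// gets to enforcement without a type system.
const makeUser = (name: string, active: boolean) => Object.freeze({ name, active })

// "Update" by building a fresh frozen object; the original is untouched.
const deactivate = (user: { name: string; active: boolean }) =>
  Object.freeze({ ...user, active: false })
```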
TypeScript
Exploring Now
types as a safety net

TypeScript's readonly, discriminated unions, and branded types push JS toward a more honest functional style. Not Haskell — but a real step up.

readonly types · discriminated unions · type narrowing · branded types
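A minimal branded-type sketch; the `Email` brand and `asEmail` guard are invented for illustration:

```typescript
// Structurally a string, but the brand stops arbitrary strings from
// flowing in where an Email is required.
type Email = string & { readonly __brand: "Email" }

// The only way to mint an Email is through the guard.
const asEmail = (raw: string): Email | null =>
  raw.includes("@") ? (raw as Email) : null

const sendWelcome = (to: Email): string => `welcome mail queued for ${to}`
// sendWelcome("not-an-email")  // rejected by the compiler
```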
React
Exploring Now
declarative UI as proof of concept

React's useReducer and server components show that pure functional style works at scale. A reference point for what the browser can do.

useReducer · server components · pure render fn · unidirectional
Elm
Learning Next
enforced purity, zero runtime errors

Elm proves pure functional frontend is possible in production. Its architecture directly inspired vanilla-flow. Learning Elm is learning the foundation.

compiler-enforced pure · no runtime exceptions · The Elm Architecture · immutable by default
Haskell
Learning Next
pure functional orthodoxy

The reference implementation of pure functional programming. IO Monad, type classes, lazy evaluation. Understanding Haskell means understanding why the rules exist.

IO Monad · type classes · lazy evaluation · pattern matching
Scala
Future
functional on the JVM

Scala bridges OOP and functional. Cats Effect and ZIO bring principled effect systems to the JVM. Industrial-scale pure functional data processing.

Cats Effect · ZIO · case classes · for-comprehensions
Elixir
Future
functional + distributed + fault-tolerant

Erlang's VM with a modern syntax. Immutable data, pattern matching, and actor model concurrency. The proof that functional scales to telecom reliability.

immutable data · pattern matching · actor model · fault tolerance

Same task — pure data transformation — in each language

filter active users, extract emails · pure function · JS → TS → Haskell → Elixir
// JavaScript — pure by convention
const getActiveEmails = (users) =>
  users
    .filter(u => u.active)
    .map(u => u.email)


// TypeScript — pure + type-safe
type User = Readonly<{ email: string; active: boolean }>

const getActiveEmails = (users: ReadonlyArray<User>): ReadonlyArray<string> =>
  users
    .filter(u => u.active)
    .map(u => u.email)


-- Haskell — pure enforced by compiler
getActiveEmails :: [User] -> [String]
getActiveEmails users =
  map email $ filter active users


# Elixir — pure + pattern matching
def get_active_emails(users) do
  users
  |> Enum.filter(& &1.active)
  |> Enum.map(& &1.email)
end

The future is
AI + strong types + declarative DSL.

AI code generation works best when the target is pure and declarative. A function with a clear type signature and no side effects is precisely the kind of code AI can generate, verify, and compose reliably.

The Thesis

AI generates code. Pure functional code is verifiable, composable, and auditable. Strong types catch AI errors at compile time. Declarative DSLs constrain the space of what AI can generate — which makes generation more reliable, not less powerful.

AI_ROLE_01

Generate pure transformations

AI is excellent at generating pure data transformation functions. The type signature is the spec. The compiler is the verifier. Human reviews the intent, not the implementation detail.

AI_ROLE_02

Compose pipelines from primitives

A library of small pure functions becomes a palette. AI composes them into complex pipelines. Each primitive is human-verified. Compositions are AI-generated.
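A sketch of the palette idea: human-verified primitives, plus a pipeline runner that any composition, human- or AI-written, can use. The names here are illustrative:

```typescript
// The palette: small pure primitives, each verified once by a human.
type Step = (xs: ReadonlyArray<number>) => ReadonlyArray<number>

const evens: Step   = xs => xs.filter(x => x % 2 === 0)
const doubled: Step = xs => xs.map(x => x * 2)
const sorted: Step  = xs => [...xs].sort((a, b) => a - b)

// The runner: a composition is just a list of steps, folded left to right.
const pipeline = (...steps: Step[]): Step =>
  xs => steps.reduce((acc, step) => step(acc), xs)

// An AI-generated composition is only ever a reordering of trusted parts.
const prepare = pipeline(evens, doubled, sorted)
```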

AI_ROLE_03

Type-driven development

Define the types. Let AI implement the functions. The type system rejects incorrect implementations automatically. Types become the contract between human intent and AI execution.

AI_ROLE_04

Declarative DSLs as constraint

A well-designed DSL limits what AI can express — and that is the point. Constrained generation is more reliable. SQL is an example: AI writes SQL well because SQL is declarative.
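A toy illustration of the constraint: a two-operation query DSL where the union type is the grammar, so anything outside it fails to typecheck. The DSL is invented for this sketch:

```typescript
type Row = Readonly<Record<string, string | number>>

// The union type is the whole grammar: two operations, nothing else.
type Query =
  | { op: "filter"; field: string; equals: string | number }
  | { op: "select"; fields: ReadonlyArray<string> }

// A pure interpreter: rows in, rows out.
const run = (rows: ReadonlyArray<Row>, q: Query): ReadonlyArray<Row> =>
  q.op === "filter"
    ? rows.filter(r => r[q.field] === q.equals)
    : rows.map(r => {
        const picked: Record<string, string | number> = {}
        for (const f of q.fields) picked[f] = r[f]
        return picked
      })
```

A generator that emits `Query` values cannot express a delete, a join, or a side effect; the type rules them out before any code runs.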

type-driven AI workflow · TS
// Step 1: Human defines types (the contract)
// LineItem kept minimal so the example typechecks
type LineItem = Readonly<{ sku: string; price: number }>

type Order = Readonly<{
  id:       string
  items:    ReadonlyArray<LineItem>
  discount: number
}>

type OrderSummary = Readonly<{
  total:    number
  tax:      number
  payable:  number
}>

// Step 2: Human writes the signature only
declare const summariseOrder: (order: Order) => OrderSummary

// Step 3: AI implements the body
// Step 4: TypeScript verifies the types match
// Step 5: Tests verify the logic

// The type is the spec.
// The compiler is the first reviewer.
// Purity means the function is trivially testable.

The exploration
plan.

This is a long-term, public record of learning. Each phase builds on the last. No deadlines — only direction.

Phase 1 · Now
  • vanilla-flow frontend core
  • Pure function methodology documented
  • JS pure backend patterns
  • TypeScript readonly + discriminated unions
  • React useReducer + server components
  • Cloudflare Workers pure API demos
  • Data model design patterns
Phase 2 · Next
  • Elm — The Elm Architecture deep dive
  • Elm — production frontend demo
  • Haskell — IO Monad and effect isolation
  • Haskell — pure API server (Servant)
  • AI + types workflow experiments
  • Type-driven development demos
Phase 3 · Future
  • Scala — Cats Effect and ZIO
  • Elixir — functional pipelines at scale
  • Erlang — fault tolerance patterns
  • Declarative DSL design
  • AI-assisted pure function generation
  • Cross-language pure data pipeline