Real-Time State in Next.js: How Supabase Realtime and Zustand Give You Live Updates Without the Mess

Somewhere right now, a frontend team is wiring up Supabase Realtime subscriptions inside a useEffect, watching their component re-render 47 times per second, and wondering why the browser tab just ate 1.2 GB of RAM. The subscription leaks are subtle. The re-render storms are not.
Supabase Realtime — encompassing Broadcast, Presence, and Postgres Changes — is genuinely production-ready infrastructure. The underlying system supports millions of concurrent connections and message throughput that can handle serious workloads. But the React integration patterns? Those are where teams bleed time and ship bugs. The official docs show you how to subscribe. They do not show you how to build a maintainable, performant application that survives longer than a demo.
This article covers the architecture that actually works: Supabase Realtime channels feeding into Zustand stores via custom hooks, with the Zustand store as the single source of truth. Every component simply listens to the store and adapts to updates in real time — no direct channel management, no subscription juggling. It is opinionated. It is specific to Next.js. And it comes from building a live order dashboard for THAMARAI Restaurant where subscription cleanup, auth token refresh, and connection state management are not theoretical concerns — they are the reason the kitchen gets orders on time.
From GraphQL Subscriptions to Supabase: Why We Switched
Before Supabase Realtime, the standard answer for real-time data in a React application was GraphQL subscriptions. And they worked — technically. You could wire up a subscription to an order table and get live updates pushed to the client. The problem was never the capability. The problem was the sheer weight of the setup.
Getting GraphQL subscriptions production-ready meant running a WebSocket-capable GraphQL server (Hasura, or a custom Apollo Server setup with subscription transport), managing subscription lifecycle on the client, dealing with the reconnection logic when connections dropped, and handling authentication tokens across the WebSocket handshake. For a team that just wanted "when a new order comes in, show it on the dashboard," the infrastructure overhead was absurd. You were maintaining an entire GraphQL subscription layer — schema definitions, resolvers, transport configuration — for what amounted to "tell me when a row changes."
In October 2025, when we built the real-time order dashboard for THAMARAI Restaurant, we were already using the Supabase stack. Authentication, database, storage — all Supabase. When the requirement came in for real-time order processing (the restaurant team needs to see incoming online orders instantly, and customers need immediate feedback when their food is ready), the Supabase Realtime API was the natural extension. No additional infrastructure. No separate WebSocket server. No GraphQL schema to maintain. The Realtime API plugs directly into the same Postgres database you are already using, respects the same Row Level Security policies, and integrates with the same client library.
The improvement is not just convenience. Supabase Realtime gives you fine-grained access permissions at the database level — the same RLS policies that protect your REST queries protect your Realtime subscriptions. With GraphQL subscriptions, you had to implement authorization logic separately in your resolvers. With Supabase, it is one security model across reads, writes, and real-time events.
And critically: there is no obligation to run Supabase on a paid plan. For THAMARAI, we set up a self-hosted Supabase instance. Full control over the infrastructure, data stays exactly where you want it, and the Realtime features work identically to the hosted version. For clients in Switzerland and Germany — where data protection laws (the Swiss Federal Act on Data Protection and Germany's BDSG, both operating within the GDPR framework) make data residency a real concern — self-hosting is not a workaround. It is the architecture.
The Three Realtime Primitives and When to Use Each
Supabase Realtime gives you three distinct primitives, and conflating them is the first mistake teams make:
- Postgres Changes — listens to your actual database via logical replication. You subscribe to INSERT, UPDATE, DELETE events on specific tables, optionally filtered by column values. The payload includes old and new row data (up to 1 MB; fields exceeding 64 bytes get truncated when you hit the limit). This is what most people reach for first, and it is often the wrong default choice for high-frequency updates.
- Broadcast — low-latency pub/sub between clients, routed through Supabase's Realtime servers. Messages bypass the database entirely (unless you explicitly use `realtime.send()` from a database function). Payloads up to 3 MB on paid plans. This is what you want for cursor tracking, typing indicators, live notifications, and any ephemeral state that does not need persistence.
- Presence — tracks and synchronizes client state across connections. Who is online, who is viewing this document, who is editing this row. Built on top of Broadcast with CRDT-based conflict resolution.
The critical insight: Postgres Changes has a round-trip through your database's WAL (Write-Ahead Log). Every change fires a logical replication event, which Supabase's Realtime server picks up and fans out to subscribers. This is elegant for data that is already being written to Postgres. It is absurd for ephemeral UI state like cursor positions — you would be writing cursor coordinates to a database table just to get them broadcast to other clients.
The architecture recommendation: use Broadcast for ephemeral, high-frequency state. Use Postgres Changes for authoritative data mutations. Use Presence for user status. Many teams try to do everything through Postgres Changes because it feels "cleaner" to have one pattern. It is not cleaner. It is slower, more expensive, and hammers your database for no reason.
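To make the Broadcast side of this split concrete: ephemeral, high-frequency state like cursor positions should be throttled on the client before it ever reaches a channel. The sketch below uses an invented `ChannelLike` interface and `makeCursorBroadcaster` helper (not Supabase APIs); in real code you would pass a `RealtimeChannel` from the Supabase client and let its `send` method do the fanout.

```typescript
// `ChannelLike` stands in for a Supabase RealtimeChannel — only the
// `send` method is assumed here.
interface ChannelLike {
  send(msg: { type: 'broadcast'; event: string; payload: unknown }): void
}

// Emit at most one cursor update per `intervalMs`, dropping intermediate
// positions. Broadcast is fire-and-forget, so the latest position wins
// and nothing is lost that matters.
function makeCursorBroadcaster(
  channel: ChannelLike,
  intervalMs = 50,
  now: () => number = Date.now // injectable clock for testing
) {
  let lastSent = -Infinity
  return (x: number, y: number): boolean => {
    const t = now()
    if (t - lastSent < intervalMs) return false // dropped; a newer position will follow
    lastSent = t
    channel.send({ type: 'broadcast', event: 'cursor', payload: { x, y } })
    return true
  }
}
```

At 50 ms that is at most 20 messages per second per client, which keeps a room of collaborators comfortably inside any message-rate budget — exactly the kind of traffic that should never touch the WAL.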
For the THAMARAI order dashboard, Postgres Changes was the correct choice. Orders are authoritative data — they are written to the database, they need persistence, they have a lifecycle (placed → confirmed → preparing → ready → picked up). Every state transition is a database UPDATE, and Postgres Changes delivers those transitions to the dashboard in real time. Broadcast would have been wrong here: you do not want ephemeral order notifications that vanish if the kitchen staff refreshes their browser.
The Naive Integration and Why It Breaks
Here is what every tutorial shows you, and what will cause problems at scale:
```tsx
// ❌ The tutorial pattern — do not ship this
function LiveOrders() {
  const [orders, setOrders] = useState([])

  useEffect(() => {
    // Initial fetch
    supabase.from('orders').select('*').then(({ data }) => setOrders(data))

    // Subscribe to changes
    const channel = supabase
      .channel('orders-changes')
      .on('postgres_changes',
        { event: '*', schema: 'public', table: 'orders' },
        (payload) => {
          // This is where it falls apart
          setOrders(prev => /* ...merge logic here... */)
        }
      )
      .subscribe()

    return () => { supabase.removeChannel(channel) }
  }, [])

  return <OrderList items={orders} />
}
```

This pattern has at least four problems that will bite you in production:
- Race condition between fetch and subscribe. If an order is placed after the SELECT but before the subscription is active, you miss it. There is no "subscribe from timestamp" — you get events from the moment the WebSocket confirms the subscription, not from the moment you called `.subscribe()`. In a restaurant during dinner rush, a missed order is not a bug report — it is a hungry customer.
- The merge logic is deceptively complex. Handling INSERT is easy — append. But UPDATE requires finding and replacing the right item. DELETE requires filtering. And if your list is sorted or paginated, every mutation needs to respect that ordering. This is state management logic masquerading as a simple callback.
- Re-render storms. Every Postgres change event calls `setOrders`, which triggers a re-render of the entire component tree. With 50 updates per second on a busy evening, you are re-rendering 50 times per second. React's reconciliation is fast. It is not that fast.
- Subscription cleanup is fragile. `supabase.removeChannel(channel)` in the useEffect cleanup looks correct, but if the component unmounts before the subscription has finished connecting, you can leak the channel. The Supabase client will attempt to reconnect it.
The Architecture: Zustand as the Single Source of Truth
The fix is to move realtime state management outside of React's render cycle. Zustand is the right tool here — not because it is trendy, but because its stores exist independently of the component tree. A Zustand store can receive WebSocket events, update its internal state, and only notify subscribed components of the specific slices that changed.
This is the core principle of the architecture we use: the Zustand store is the single source of truth. Supabase Realtime does not drive the UI directly — it feeds the store. Custom hooks create and clean up the listeners that update the store. Every component in the application simply subscribes to the store and reacts to changes. The Realtime API is an input mechanism, not a state container.
Here is the architecture in layers:
- Supabase Client — singleton, handles WebSocket connection, auth, and channel management
- Custom Hooks — create Realtime subscriptions, wire events to Zustand actions, clean up on unmount
- Zustand Store — owns the realtime state, processes channel events, exposes selectors
- React Components — subscribe to Zustand selectors, never touch channels directly
```ts
// store/realtime-orders.ts
import { create } from 'zustand'
import { subscribeWithSelector } from 'zustand/middleware'

export interface Order {
  id: string
  customer_name: string
  items: OrderItem[]
  status: 'placed' | 'confirmed' | 'preparing' | 'ready' | 'picked_up'
  created_at: string
  updated_at: string
}

interface OrderStore {
  orders: Map<string, Order>
  connectionState: 'connecting' | 'connected' | 'disconnected' | 'error'
  lastEventAt: number | null

  // Actions
  setInitialData: (orders: Order[]) => void
  handleInsert: (order: Order) => void
  handleUpdate: (order: Order) => void
  handleDelete: (id: string) => void
  setConnectionState: (state: OrderStore['connectionState']) => void
}

export const useOrderStore = create<OrderStore>()(
  subscribeWithSelector((set) => ({
    orders: new Map(),
    connectionState: 'disconnected',
    lastEventAt: null,

    setInitialData: (orders) =>
      set({
        orders: new Map(orders.map(o => [o.id, o])),
        lastEventAt: Date.now(),
      }),

    handleInsert: (order) =>
      set(state => {
        const next = new Map(state.orders)
        next.set(order.id, order)
        return { orders: next, lastEventAt: Date.now() }
      }),

    handleUpdate: (order) =>
      set(state => {
        const next = new Map(state.orders)
        const existing = next.get(order.id)
        // Ignore stale updates
        if (existing && existing.updated_at >= order.updated_at) return state
        next.set(order.id, order)
        return { orders: next, lastEventAt: Date.now() }
      }),

    handleDelete: (id) =>
      set(state => {
        const next = new Map(state.orders)
        next.delete(id)
        return { orders: next, lastEventAt: Date.now() }
      }),

    setConnectionState: (connectionState) => set({ connectionState }),
  }))
)
```

A few things to notice:
- Map instead of array. Lookup by ID is O(1) instead of O(n). When you are processing rapid order status updates during a Friday dinner rush, this matters.
- Stale update guard. The `handleUpdate` method compares `updated_at` timestamps and ignores events that are older than what the store already has. This handles the race condition where Postgres Changes delivers events out of order.
- `subscribeWithSelector` middleware. This is what prevents re-render storms. Components can subscribe to `state.orders.get('specific-id')` and only re-render when that specific order changes — not when any order in the map changes.
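The selector point is worth a small sketch. A per-order selector factory lets each order card subscribe to exactly one entry of the Map; the `OrderLite`/`OrderState` types below are trimmed stand-ins for the store shape above, and the `useOrderStore` usage in the comment is illustrative:

```typescript
// Trimmed stand-ins for the store types (assumption: only the fields
// an order card actually needs)
interface OrderLite { id: string; status: string }
interface OrderState { orders: Map<string, OrderLite> }

// Selector factory: each OrderCard subscribes to exactly one order
const selectOrder =
  (id: string) =>
  (state: OrderState): OrderLite | undefined =>
    state.orders.get(id)

// Zustand compares the selected value by reference: as long as order '1'
// is the same object across state snapshots, the component subscribed to
// it skips re-rendering, even while other orders in the Map change.
// Hypothetical usage inside a component:
//   const order = useOrderStore(selectOrder(props.orderId))
```

Because `handleUpdate` builds a new Map but reuses the untouched `Order` objects, a status change on order A leaves order B's selected value reference-equal, and B's card does not re-render.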
Custom Hooks: The Glue Between Realtime and the Store
The custom hook pattern is the key to keeping this architecture clean. Each hook is responsible for one thing: creating a Realtime subscription, wiring its events to the appropriate Zustand store actions, and cleaning up when the component unmounts.
```ts
// hooks/use-realtime-orders.ts
import { useEffect, useRef } from 'react'
import { supabase } from '@/lib/supabase-client'
import { useOrderStore, type Order } from '@/store/realtime-orders'
import type { RealtimeChannel } from '@supabase/supabase-js'

export function useRealtimeOrders() {
  const channelRef = useRef<RealtimeChannel | null>(null)

  useEffect(() => {
    // Avoid duplicate subscriptions
    if (channelRef.current) return

    useOrderStore.getState().setConnectionState('connecting')

    const channel = supabase
      .channel('orders-realtime')
      .on(
        'postgres_changes',
        { event: 'INSERT', schema: 'public', table: 'orders' },
        (payload) => useOrderStore.getState().handleInsert(payload.new as Order)
      )
      .on(
        'postgres_changes',
        { event: 'UPDATE', schema: 'public', table: 'orders' },
        (payload) => useOrderStore.getState().handleUpdate(payload.new as Order)
      )
      .on(
        'postgres_changes',
        { event: 'DELETE', schema: 'public', table: 'orders' },
        (payload) => useOrderStore.getState().handleDelete((payload.old as { id: string }).id)
      )
      .subscribe((status, err) => {
        if (status === 'SUBSCRIBED') {
          useOrderStore.getState().setConnectionState('connected')
        } else if (status === 'CHANNEL_ERROR') {
          useOrderStore.getState().setConnectionState('error')
          console.error('Realtime channel error:', err)
        } else if (status === 'CLOSED') {
          useOrderStore.getState().setConnectionState('disconnected')
        }
      })

    channelRef.current = channel

    return () => {
      if (channelRef.current) {
        supabase.removeChannel(channelRef.current)
        channelRef.current = null
        useOrderStore.getState().setConnectionState('disconnected')
      }
    }
  }, [])
}
```

Key detail: `useOrderStore.getState()` is called inside each callback, not captured in a closure. This ensures you always write to the current store state, not a stale snapshot from when the subscription was created.
The hook is called once, at the layout level:
```tsx
// app/dashboard/layout.tsx (Client Component)
'use client'

import { useRealtimeOrders } from '@/hooks/use-realtime-orders'

export default function DashboardLayout({ children }: { children: React.ReactNode }) {
  useRealtimeOrders() // Subscribe once, feeds the store
  return <>{children}</>
}
```

Every child component — the order list, the order detail panel, the status counter, the kitchen view — simply reads from the Zustand store. None of them know that Supabase Realtime exists. None of them manage subscriptions. They are pure consumers of state.
```tsx
// components/order-queue.tsx
'use client'

import { useOrderStore } from '@/store/realtime-orders'
import { useShallow } from 'zustand/react/shallow'

export function OrderQueue() {
  const activeOrders = useOrderStore(
    useShallow(state =>
      Array.from(state.orders.values())
        .filter(o => o.status !== 'picked_up')
        .sort((a, b) => new Date(a.created_at).getTime() - new Date(b.created_at).getTime())
    )
  )

  return (
    <div className="order-queue">
      {activeOrders.map(order => (
        <OrderCard key={order.id} order={order} />
      ))}
    </div>
  )
}
```

When the kitchen marks an order as "ready," the Postgres UPDATE fires a Realtime event, the custom hook routes it to `handleUpdate` on the store, the store updates the Map, and every component subscribed to the relevant slice re-renders. The customer's order tracker updates simultaneously. The entire round-trip — database write to UI update on both screens — happens in milliseconds.
Solving the Fetch-Subscribe Race Condition
The race between initial data fetch and subscription activation is the most common source of missed events. The solution is to subscribe first, buffer events, then fetch, then replay the buffer:
```ts
// hooks/use-realtime-sync.ts
import { useEffect, useRef } from 'react'
import { supabase } from '@/lib/supabase-client'
import { useOrderStore, type Order } from '@/store/realtime-orders'

export function useRealtimeOrdersSync() {
  const initialized = useRef(false)

  useEffect(() => {
    if (initialized.current) return
    initialized.current = true

    const store = useOrderStore.getState()
    const eventBuffer: Array<{
      eventType: string
      new?: Order
      old?: { id: string }
    }> = []
    let isBuffering = true

    // 1. Subscribe first — buffer events until initial fetch completes
    const channel = supabase
      .channel('orders-sync')
      .on(
        'postgres_changes',
        { event: '*', schema: 'public', table: 'orders' },
        (payload) => {
          if (isBuffering) {
            eventBuffer.push(payload)
            return
          }
          // Normal processing after buffer is flushed
          const s = useOrderStore.getState()
          if (payload.eventType === 'INSERT') s.handleInsert(payload.new as Order)
          if (payload.eventType === 'UPDATE') s.handleUpdate(payload.new as Order)
          if (payload.eventType === 'DELETE') s.handleDelete((payload.old as { id: string }).id)
        }
      )
      .subscribe(async (status) => {
        if (status !== 'SUBSCRIBED') return

        // 2. Fetch current state AFTER subscription is confirmed
        const { data } = await supabase
          .from('orders')
          .select('*')
          .order('created_at', { ascending: true })

        if (data) store.setInitialData(data)

        // 3. Replay buffered events (deduplicating against fetched data).
        // Buffered INSERTs also go through handleUpdate, so the stale guard
        // drops any copy that is older than the row we just fetched.
        isBuffering = false
        for (const event of eventBuffer) {
          const s = useOrderStore.getState()
          if (event.eventType === 'INSERT') s.handleUpdate(event.new as Order)
          if (event.eventType === 'UPDATE') s.handleUpdate(event.new as Order)
          if (event.eventType === 'DELETE') s.handleDelete(event.old!.id)
        }
        eventBuffer.length = 0

        store.setConnectionState('connected')
      })

    return () => {
      supabase.removeChannel(channel)
      useOrderStore.getState().setConnectionState('disconnected')
    }
  }, [])
}
```

The stale update guard in `handleUpdate` is what makes the replay safe. If the fetched data already includes an update that was also buffered, the timestamp comparison silently ignores the duplicate. No special deduplication logic needed — the store handles it.
Auth Token Refresh and Connection Recovery
Supabase Realtime uses the same JWT token as the REST client. When the token refreshes (which happens automatically via onAuthStateChange), the WebSocket connection needs to be updated. The Supabase client handles this internally — but only if you are using the same client instance for both auth and Realtime.
This is another reason the singleton pattern matters:
```ts
// lib/supabase-client.ts
import { createBrowserClient } from '@supabase/ssr'

export const supabase = createBrowserClient(
  process.env.NEXT_PUBLIC_SUPABASE_URL!,
  process.env.NEXT_PUBLIC_SUPABASE_ANON_KEY!
)
```

One client. Used for auth, for database queries, and for Realtime subscriptions. Token refresh propagates to the WebSocket automatically.
But connection recovery is not just about tokens. WebSocket connections drop — network switches, laptop sleep/wake, mobile backgrounding. The Supabase client has built-in reconnection with exponential backoff, but your Zustand store needs to know about the connection state so the UI can respond appropriately:
```tsx
// components/connection-status.tsx
'use client'

import { useOrderStore } from '@/store/realtime-orders'

export function ConnectionStatus() {
  const connectionState = useOrderStore(state => state.connectionState)

  if (connectionState === 'connected') return null

  return (
    <div className={`connection-banner ${connectionState}`}>
      {connectionState === 'connecting' && 'Reconnecting to live updates...'}
      {connectionState === 'error' && 'Live updates unavailable — showing last known state'}
      {connectionState === 'disconnected' && 'Offline — updates paused'}
    </div>
  )
}
```

For the THAMARAI kitchen dashboard, this connection status banner was essential. The restaurant's Wi-Fi occasionally dropped, and the kitchen staff needed to know immediately whether the order list they were looking at was live or stale. A silent disconnection — where the UI looks normal but is not receiving updates — is worse than an obvious error. The customer's order might already be ready, but the dashboard still shows "preparing."
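The connection state also lends itself to instrumentation outside React, for example to measure how long each outage lasted. The watcher below is plain logic with invented names (`makeStaleWatcher` is not a library API); it would be wired up via the `subscribeWithSelector` middleware, roughly `useOrderStore.subscribe(s => s.connectionState, watcher.onTransition)`:

```typescript
// Connection states, mirroring the store's union type from earlier
type ConnState = 'connecting' | 'connected' | 'disconnected' | 'error'

// Track how long the dashboard was stale between disconnect and reconnect.
// The clock is injectable so the logic can be tested deterministically.
function makeStaleWatcher(now: () => number = Date.now) {
  let wentStaleAt: number | null = null
  let lastOutage = 0
  return {
    onTransition(next: ConnState) {
      if (next === 'connected') {
        // Reconnected: record the outage duration, if there was one
        if (wentStaleAt !== null) lastOutage = now() - wentStaleAt
        wentStaleAt = null
      } else if (wentStaleAt === null) {
        // First non-connected state starts the outage clock;
        // 'disconnected' → 'connecting' does NOT reset it
        wentStaleAt = now()
      }
    },
    // Duration of the last outage, e.g. for a "reconnected after Xs" toast
    lastOutageMs: () => lastOutage,
  }
}
```

Feeding this into your logging or a "live again, reconciling" toast tells you whether those Wi-Fi drops were two-second blips or two-minute gaps — which changes whether you need the full fetch-and-replay resync on reconnect.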
Scaling Beyond One Table: The Multi-Store Pattern
A real application has more than one realtime entity. The THAMARAI dashboard subscribed to orders, but also to table assignments and to menu item availability (the kitchen can mark a dish as sold out, and the ordering website reflects it immediately). Each entity gets its own Zustand store and its own custom hook:
```tsx
// hooks/use-realtime-menu.ts
export function useRealtimeMenu() { /* same pattern, targets 'menu_items' table */ }

// hooks/use-realtime-tables.ts
export function useRealtimeTables() { /* same pattern, targets 'table_assignments' table */ }

// app/dashboard/layout.tsx
export default function DashboardLayout({ children }) {
  useRealtimeOrders()
  useRealtimeMenu()
  useRealtimeTables()
  return <>{children}</>
}
```

Each hook manages its own channel. Each store manages its own state. Components can compose data from multiple stores without any store knowing about the others. This scales cleanly — adding a new realtime entity is a new store file and a new hook, not a modification to existing code.
Postgres Changes Limits and the Hybrid Approach
Postgres Changes has three limits that matter in practice:
- Payload size: Maximum 1 MB per event. Row data exceeding this is truncated. If you store large JSON columns, the Realtime payload may silently arrive without the complete data, and nothing warns you at write time.
- Throughput: Tied to your plan's messages-per-second limit. On Pro, that is 500/s across all channels. A busy table with frequent writes can exhaust this budget, starving other subscriptions. On a self-hosted instance, you control these limits — but your server hardware is the constraint instead.
- WAL dependency: Postgres Changes uses logical replication. High write volumes increase WAL size, which can affect database performance. This is the same replication slot that powers Supabase's other features.
For the THAMARAI dashboard, Postgres Changes throughput was never an issue — a busy restaurant might process a few hundred orders per evening, which is well within any reasonable limit. But for high-throughput scenarios — think IoT dashboards, trading screens, or monitoring systems — the hybrid approach works better:
- Write data to Postgres as normal (for persistence and querying)
- Use `realtime.broadcast_changes()` or Broadcast via the REST API from a database trigger or backend service to push updates to clients
- Use Postgres Changes only for low-frequency, high-importance mutations (user settings, configuration changes, new entity creation)
This decouples your realtime fanout from your WAL, gives you the higher Broadcast payload limits (3 MB vs 1 MB), and lets you shape the payload — sending only the fields clients actually need rather than the entire row.
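On the receiving side, a shaped payload arrives as a Broadcast message rather than a Postgres Changes event, so the merge logic lives in the client. A sketch, where the `MetricUpdate` shape mirrors a trigger that broadcasts `metric_id`/`value`/`timestamp` fields, and the channel wiring in the comment assumes the singleton client and a hypothetical `useMetricsStore`:

```typescript
// Shape of the broadcast payload (assumption: what the trigger emits)
interface MetricUpdate { metric_id: string; value: number; timestamp: string }

// Pure merge: keep only the latest value per metric, ignoring out-of-order
// events — Broadcast gives no ordering guarantee across reconnects.
function applyMetricUpdate(
  metrics: Map<string, MetricUpdate>,
  update: MetricUpdate
): Map<string, MetricUpdate> {
  const existing = metrics.get(update.metric_id)
  if (existing && existing.timestamp >= update.timestamp) return metrics
  const next = new Map(metrics)
  next.set(update.metric_id, update)
  return next
}

// Wiring sketch (hedged — 'metrics-live' and useMetricsStore are assumed):
//   supabase.channel('metrics-live')
//     .on('broadcast', { event: 'metric_update' }, ({ payload }) =>
//       useMetricsStore.getState().apply(payload as MetricUpdate))
//     .subscribe()
```

Note it is the same timestamp-guarded Map merge as the order store: the architecture does not change when the transport switches from Postgres Changes to Broadcast, only the input side of the store does.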
```sql
-- Database function that broadcasts a shaped payload
CREATE OR REPLACE FUNCTION broadcast_metric_update()
RETURNS trigger AS $$
BEGIN
  PERFORM realtime.send(
    jsonb_build_object(
      'metric_id', NEW.id,
      'value', NEW.value,
      'timestamp', NEW.recorded_at
    ),
    'metric_update',  -- event name
    'metrics-live',   -- topic/channel
    false             -- private flag: false = public; set true to require auth
  );
  RETURN NEW;
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER on_metric_insert
  AFTER INSERT ON metrics
  FOR EACH ROW EXECUTE FUNCTION broadcast_metric_update();
```

Presence: Online Status Done Right
Presence is the simplest of the three primitives to integrate, and the easiest to misuse. The common mistake: tracking too much state in the presence object.
Supabase Presence limits you to 10 keys per presence object. This is not arbitrary — presence state is synced to every connected client on every change. Stuffing a user's full profile, permissions, and preferences into their presence object means broadcasting that payload to every participant on every status change.
```ts
// ✅ Minimal presence — just what other clients need to render
const channel = supabase.channel('dashboard-room')
channel.subscribe(async (status) => {
  if (status !== 'SUBSCRIBED') return

  await channel.track({
    user_id: session.user.id,
    display_name: session.user.user_metadata.name,
    avatar_url: session.user.user_metadata.avatar_url,
    current_view: 'orders', // what tab/page they're on
    online_at: new Date().toISOString(),
  })
})

// Zustand store for presence — same pattern
interface PresenceStore {
  users: Map<string, PresenceUser>
  syncPresence: (state: any) => void
}

export const usePresenceStore = create<PresenceStore>((set) => ({
  users: new Map(),
  syncPresence: (presenceState) => {
    const users = new Map<string, PresenceUser>()
    for (const presences of Object.values(presenceState)) {
      // Each user can have multiple presences (multiple tabs)
      // Take the most recent one
      const latest = (presences as any[]).sort(
        (a, b) => new Date(b.online_at).getTime() - new Date(a.online_at).getTime()
      )[0]
      if (latest) users.set(latest.user_id, latest)
    }
    set({ users })
  },
}))
```

Wire the presence sync events to the store via a custom hook, and components get a clean `Map<string, PresenceUser>` to render. No duplicate users from multiple tabs, no stale entries.
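One refinement worth sketching: Supabase Presence also emits `join` and `leave` events, but once the store is the single source of truth, it is often simpler to diff consecutive snapshots of the store's Map, e.g. to drive "Anna is now viewing the dashboard" toasts. `diffPresence` below is an illustrative helper, not part of any library, and `PresenceUserLite` assumes only the `user_id` field:

```typescript
// Minimal stand-in for the store's PresenceUser (only user_id assumed)
interface PresenceUserLite { user_id: string }

// Compare two deduplicated presence snapshots and report who appeared
// and who disappeared between them.
function diffPresence(
  prev: Map<string, PresenceUserLite>,
  next: Map<string, PresenceUserLite>
): { joined: string[]; left: string[] } {
  const joined = [...next.keys()].filter(id => !prev.has(id))
  const left = [...prev.keys()].filter(id => !next.has(id))
  return { joined, left }
}
```

Diffing after the dedup step has a nice property: closing one of two tabs fires a Presence `leave` event, but the user is still in the deduplicated Map, so no false "left the dashboard" toast is shown.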
Next.js-Specific Concerns
Server Components and Realtime
Realtime subscriptions are inherently client-side — they require a WebSocket connection from the browser. Server Components cannot subscribe to Realtime channels. The architecture is:
- Server Components fetch initial data (via the Supabase server client with `cookies()`)
- Pass initial data as props to a Client Component boundary
- Client Component hydrates the Zustand store and activates Realtime subscriptions via the custom hook
This is a feature, not a limitation. Server-rendered initial data means the page is immediately useful. Realtime subscriptions progressively enhance it with live updates. For the THAMARAI dashboard, the kitchen staff see the current order queue the instant the page loads — no loading spinner, no waiting for the WebSocket to connect. Live updates layer on top seamlessly.
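One subtlety in the hydration step: by the time the Client Component runs, the Realtime hook may already have written fresher events into the store, so a naive `setInitialData` from server props could overwrite them. A merge that respects `updated_at` avoids this; `mergeServerSnapshot` and the `OrdersHydrator` usage in the comment are hypothetical names, reusing the stale-guard idea from the store:

```typescript
// Minimal row shape: any entity with an id and an updated_at timestamp
interface Row { id: string; updated_at: string }

// Merge a server-rendered snapshot into live state, keeping whichever
// copy of each row is newer.
function mergeServerSnapshot<T extends Row>(
  live: Map<string, T>,
  snapshot: T[]
): Map<string, T> {
  const next = new Map(live)
  for (const row of snapshot) {
    const existing = next.get(row.id)
    if (!existing || existing.updated_at < row.updated_at) next.set(row.id, row)
  }
  return next
}

// Hypothetical Client Component usage (assumes the store/hook from earlier):
//   'use client'
//   function OrdersHydrator({ initialOrders }: { initialOrders: Order[] }) {
//     const hydrated = useRef(false)
//     if (!hydrated.current) {
//       useOrderStore.setState(s => ({ orders: mergeServerSnapshot(s.orders, initialOrders) }))
//       hydrated.current = true
//     }
//     useRealtimeOrders()
//     return null
//   }
```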
App Router and Parallel Routes
If your dashboard uses parallel routes (the @slot convention), be aware that each slot is a separate React tree. A Zustand store is shared across all slots (it is a module singleton), but each slot's layout has its own effect lifecycle. Centralize your channel subscriptions in the parent layout, not in individual slots — this is why the custom hook approach works so well. One hook call in the parent layout, all slots read from the same store.
Edge Runtime Compatibility
Supabase's JavaScript client works in Edge Runtime (middleware, Edge API routes), but Realtime subscriptions do not. The WebSocket API in Edge Runtime is limited. Keep all Realtime logic in standard client-side code. If you need server-initiated realtime events, use Supabase's Broadcast REST API from your API routes:
```ts
// app/api/notify/route.ts
import { createClient } from '@supabase/supabase-js'

const supabaseAdmin = createClient(
  process.env.SUPABASE_URL!,
  process.env.SUPABASE_SERVICE_ROLE_KEY!
)

export async function POST(req: Request) {
  const { message, channel } = await req.json()

  // Broadcast from server — no WebSocket needed
  await supabaseAdmin.channel(channel).send({
    type: 'broadcast',
    event: 'server-notification',
    payload: { message },
  })

  return Response.json({ sent: true })
}
```

Testing Realtime Integrations
Testing real-time features is notoriously painful. Two approaches that actually work:
Integration Tests with Supabase Local Dev
The Supabase CLI (`supabase start`) runs a full local stack including Realtime — whether you use the hosted platform or self-host in production. Your integration tests can:
- Start Supabase locally
- Create a subscription via the Supabase client
- Insert a row via the admin client
- Assert the subscription callback fires with the correct payload
This tests the full pipeline — Postgres trigger → WAL → Realtime server → WebSocket → your callback → Zustand store update.
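The assertion step is the fragile part: the event arrives asynchronously, and a fixed `sleep` is either too slow or too flaky. A small polling helper makes it deterministic; `waitFor` is a generic utility sketch, not a Supabase API, and the usage comment assumes the store and an admin client from the surrounding test:

```typescript
// Poll until `predicate` holds, or fail after `timeoutMs`.
async function waitFor(
  predicate: () => boolean,
  { timeoutMs = 5000, stepMs = 50 } = {}
): Promise<void> {
  const deadline = Date.now() + timeoutMs
  while (!predicate()) {
    if (Date.now() > deadline) throw new Error('waitFor: condition not met in time')
    await new Promise(resolve => setTimeout(resolve, stepMs))
  }
}

// Hypothetical usage in the integration test:
//   await admin.from('orders').insert({ id: newId, /* ... */ })
//   await waitFor(() => useOrderStore.getState().orders.has(newId))
```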
Unit Tests with a Mock Channel
For unit testing the Zustand store logic, mock the channel entirely. The store is pure state management — no WebSocket dependency:
```ts
// __tests__/realtime-orders.test.ts
import { useOrderStore } from '@/store/realtime-orders'

beforeEach(() => {
  useOrderStore.setState({
    orders: new Map(),
    connectionState: 'disconnected',
    lastEventAt: null,
  })
})

test('handleUpdate ignores stale events', () => {
  const store = useOrderStore.getState()

  store.handleInsert({
    id: '1', customer_name: 'Test', items: [],
    status: 'placed', created_at: '2025-10-15T18:00:00Z',
    updated_at: '2025-10-15T18:00:00Z',
  })

  // Stale update (earlier timestamp)
  store.handleUpdate({
    id: '1', customer_name: 'Test', items: [],
    status: 'confirmed', created_at: '2025-10-15T18:00:00Z',
    updated_at: '2025-10-15T17:55:00Z',
  })

  expect(useOrderStore.getState().orders.get('1')?.status).toBe('placed')
})

test('order status transitions update correctly', () => {
  const store = useOrderStore.getState()

  store.handleInsert({
    id: '1', customer_name: 'Test', items: [],
    status: 'placed', created_at: '2025-10-15T18:00:00Z',
    updated_at: '2025-10-15T18:00:00Z',
  })

  store.handleUpdate({
    id: '1', customer_name: 'Test', items: [],
    status: 'ready', created_at: '2025-10-15T18:00:00Z',
    updated_at: '2025-10-15T18:15:00Z',
  })

  expect(useOrderStore.getState().orders.get('1')?.status).toBe('ready')
})
```

This separation — store logic tested without WebSocket infrastructure — is one of the biggest wins of the Zustand architecture. You can write dozens of edge-case tests for order state transitions without ever touching a real Supabase instance.
Supabase vs Firebase: The Enterprise Calculus
Swiss and German enterprise teams evaluating Supabase Realtime as a Firebase alternative should know the trade-offs clearly:
- Data residency and compliance: Supabase lets you choose your Postgres region (including EU — Frankfurt, specifically), and can be fully self-hosted. Firebase's Realtime Database has limited region control and cannot be self-hosted. For workloads subject to the Swiss Federal Act on Data Protection (FADP), Germany's Bundesdatenschutzgesetz (BDSG), and the overarching GDPR, this is often the deciding factor. Self-hosting Supabase means your data never leaves infrastructure you control.
- SQL vs NoSQL: Supabase is Postgres. Your data model is relational, your queries are SQL, your auth integrates with Row Level Security at the database level. Firebase's Realtime Database is a JSON tree. For complex dashboards with joins, aggregations, and reporting needs, Postgres wins decisively.
- Self-hosting: Supabase can be self-hosted. Firebase cannot. For enterprise clients with on-premises requirements — common in Swiss banking, German automotive, and regulated industries across both countries — self-hosting is not optional, it is a prerequisite.
- Realtime maturity: Firebase's Realtime Database has a decade of production hardening. Supabase Realtime is younger but improving rapidly. The Broadcast and Presence features are solid; Postgres Changes can lag under extreme write volumes due to the WAL dependency.
- Pricing model: Supabase charges based on connections and messages per second (with generous Pro plan limits). Firebase charges per connection and per data transferred. For high-connection, low-data-volume apps (many idle dashboards), Supabase tends to be cheaper. For low-connection, high-throughput apps, run the numbers carefully. Or self-host and pay only for your server infrastructure.
The Complete Architecture, Summarised
```
┌─────────────────────────────────────────────────┐
│              React Components                   │
│        (subscribe to Zustand selectors)         │
└───────────────────┬─────────────────────────────┘
                    │ useShallow selectors
┌───────────────────▼─────────────────────────────┐
│               Zustand Stores                    │
│  ┌──────────┐  ┌──────────┐  ┌────────────┐     │
│  │  Orders  │  │   Menu   │  │  Presence  │     │
│  │  Store   │  │  Store   │  │   Store    │     │
│  └────▲─────┘  └────▲─────┘  └─────▲──────┘     │
└───────┼─────────────┼──────────────┼────────────┘
        │             │              │
┌───────┼─────────────┼──────────────┼────────────┐
│  Custom Hooks (create + clean up listeners)     │
│       │             │              │            │
│  postgres_changes  postgres_changes  presence   │
└───────┼─────────────┼──────────────┼────────────┘
        │             │              │
┌───────▼─────────────▼──────────────▼────────────┐
│         Supabase Client (singleton)             │
│         WebSocket Connection                    │
│         Auth Token Management                   │
└─────────────────────────────────────────────────┘
```

Each layer has a single responsibility. Custom hooks manage subscription lifecycle. Channels never leak into component code. Components never manage subscriptions. The Zustand store — the single source of truth — handles event deduplication and ordering. Auth refresh reconnects channels transparently.
This is not the simplest architecture. The tutorial pattern with useEffect and useState is simpler. It is also the architecture that breaks at exactly the moment the restaurant owner is showing the new system to their staff during a packed Friday evening. Real-time features have a peculiar quality: they work perfectly in development, mostly work in staging, and fail in the specific ways you did not test in production.
Build the boring, layered architecture. Your future self — the one debugging a WebSocket issue at 22:00 on a Friday — will thank you.
