The app I’ve been working on is used in places where internet connectivity ranges from “slow” to “nonexistent” — clinics in rural regions, field operations, locations where the team deliberately brings a laptop to act as a local hub rather than relying on any external network. Designing for these constraints forced a level of architectural thinking I hadn’t had to do before.

This post is an exploration of the design rather than a step-by-step how-to. Parts of this are still being built.


The Core Insight: Tiers, Not a Binary

The first instinct when approaching “offline mode” is to treat it as a binary: either you have internet or you don’t. But the actual problem has more texture than that.

There’s a middle state worth designing for explicitly: a local server, running on the same WiFi network, reachable over LAN even without internet. Think of a laptop running a full copy of the application stack, acting as a hub for a team of devices. That’s not “online” in the cloud sense, but it’s not fully offline either. It supports multiple users and real-time sync — the full feature set of the application.

So the architecture ended up with three tiers:

Tier 1 — Cloud      App talks to cloud Postgres. Normal operation.
Tier 2 — Local Hub  App talks to a local server over WiFi. Multi-device, real-time.
Tier 3 — Solo       App talks to nobody. IndexedDB only. One device.

These aren’t parallel alternatives — they’re a degraded-connectivity hierarchy. Tier 2 is a fallback from Tier 1; Tier 3 is a fallback from Tier 2. Critically, Tier 3 only activates when no server at all — cloud or local — can be reached.


Detection

The browser determines its tier using a waterfall:

  1. Is the page loaded from a known hub hostname or IP? → Tier 2. No further checks needed.
  2. Can I reach the cloud? (A HEAD request to a health endpoint; two consecutive failures = offline.) If yes → Tier 1.
  3. Is there cached data? → auto-enter Tier 3 with a banner. No cache? → block with an error screen.

Detection runs continuously on a heartbeat ping every ~15 seconds. Tier transitions happen gracefully, with user prompts where action is needed.
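In code, the waterfall might look something like the minimal sketch below. The hub addresses, the health-check plumbing, and the two-failure threshold are placeholders, not the app's real values; in the app, `hostname` would come from window.location and `cloudProbeOk` from an awaited HEAD request, but passing them in keeps the decision logic easy to exercise in isolation.

```typescript
type Tier = "cloud" | "hub" | "solo" | "blocked";

const HUB_HOSTS = ["hub.local", "192.168.4.1"]; // hypothetical hub addresses

function detectTier(
  hostname: string,
  cloudProbeOk: boolean,       // did the latest health check succeed?
  hasCachedData: boolean,
  state: { failures: number }  // consecutive failed cloud probes
): Tier {
  // 1. Page served by the hub? Then the hub is reachable by definition.
  if (HUB_HOSTS.includes(hostname)) return "hub";

  // 2. Cloud health check: two consecutive failures = declared offline.
  if (cloudProbeOk) {
    state.failures = 0;
    return "cloud";
  }
  state.failures += 1;
  if (state.failures < 2) return "cloud"; // one failed ping isn't offline yet

  // 3. No server reachable: solo mode if cached data exists, else block.
  return hasCachedData ? "solo" : "blocked";
}
```

On each heartbeat the result is compared to the current tier, and a transition prompt or banner is raised only when they differ.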

The hub hostname check is the simplest and most reliable signal: if the page loaded from the hub’s address, the hub is obviously reachable. No ping necessary. The tricky part is that .local hostnames rely on mDNS (Bonjour/Avahi), which works reliably on iOS and macOS, inconsistently on Android, and requires Bonjour to be installed on bare Windows. The practical answer: show the hub’s IP address prominently alongside the .local hostname, and check both in the detection logic. A QR code on the hub dashboard pointing to the IP-based URL is the reliable fallback.


Two Queues, Two Very Different Problems

Solo offline mode (Tier 3) and hub mode (Tier 2) can look similar on the surface — both involve locally written data eventually syncing to a server. But the queue implementations are fundamentally different in storage, trigger, and failure mode.

The solo offline queue lives in IndexedDB on the device. It’s populated when a user creates or modifies a record while disconnected, and flushed when connectivity returns. It survives browser restarts because IndexedDB is persistent. The hardest problem here is ID reconciliation: a record created offline gets a temporary local ID (a nanoid or similar). When it syncs, the server assigns a real ID. Any other records created offline that reference the first now have stale foreign keys. If a parent record and a child record were both created offline, syncing requires careful dependency ordering — parent first, then update the child’s foreign key reference before its own sync.
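The dependency-ordering problem can be sketched as a small topological flush with temp-ID remapping. The record shape and `syncOne` are illustrative, not the app's actual API; in the real queue `syncOne` would be an awaited POST and the records would be read from IndexedDB, but the ordering logic is the same.

```typescript
interface QueuedRecord {
  tempId: string;        // local nanoid assigned while offline
  parentTempId?: string; // reference to another offline-created record
}

// `syncOne` sends one record (with its parent's real ID substituted in)
// and returns the server-assigned ID.
function flushQueue(
  queue: QueuedRecord[],
  syncOne: (r: QueuedRecord, realParentId?: string) => string
): Map<string, string> {
  const idMap = new Map<string, string>(); // tempId -> server-assigned ID
  const pending = [...queue];

  while (pending.length > 0) {
    // Only records whose parent (if any) already has a real ID are ready.
    const ready = pending.filter(
      (r) => !r.parentTempId || idMap.has(r.parentTempId)
    );
    if (ready.length === 0) {
      throw new Error("cycle or missing parent in offline queue");
    }
    for (const r of ready) {
      const realParent = r.parentTempId ? idMap.get(r.parentTempId) : undefined;
      idMap.set(r.tempId, syncOne(r, realParent));
      pending.splice(pending.indexOf(r), 1);
    }
  }
  return idMap;
}
```

The returned map is what lets any remaining local references — including ones held in component state — be rewritten from temp IDs to server IDs after the flush.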

The hub’s sync queue is different in kind. It lives in Postgres on the hub itself, not in the browser. It’s populated as the hub handles writes from multiple connected devices, and drained to the cloud when the hub next detects internet. No ID reconciliation problem in the same way — it’s a proper database doing proper relational inserts.
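The drain worker reduces to a simple loop over an outbox-style table. This sketch uses in-memory stand-ins — `rows` for a SELECT over the hub's Postgres queue table, `pushToCloud` for an HTTP POST — and the names and shapes are assumptions for illustration.

```typescript
interface OutboxRow {
  id: number;
  payload: string;
  synced: boolean;
}

function drainOutbox(
  rows: OutboxRow[],
  pushToCloud: (payload: string) => boolean // false = cloud unreachable
): number {
  let drained = 0;
  // Insertion order doubles as replay order: because the hub did proper
  // relational inserts, rows can be pushed to the cloud as-is.
  for (const row of rows) {
    if (row.synced) continue;
    if (!pushToCloud(row.payload)) break; // stop; retry on next heartbeat
    row.synced = true;                    // stand-in for UPDATE ... SET synced
    drained += 1;
  }
  return drained;
}
```

Because rows are marked as they go, a drain interrupted by the cloud dropping out simply resumes where it left off on the next heartbeat.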

These queues never merge. When a device transitions from solo offline to hub mode — the hub just arrived on the local WiFi, or the user just joined its network — the device’s local IndexedDB queue flushes to the hub via the hub’s standard API routes. The hub’s queue then handles the cloud sync. After that, the device queue is empty and the hub owns the sync responsibility going forward.


Auth at the Edge

A frequently underestimated problem is authentication. The app uses a third-party JWT-based auth provider. In solo offline mode, that provider is unreachable. The solution is to cache the user’s decoded identity and token locally and validate sessions against that cache. The pattern itself is well understood; the main edge case is token expiry: a session that expires while the device is offline needs to be handled gracefully. Either extend the offline grace period deliberately, or prompt the user to reconnect briefly before entering offline mode.
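The expiry handling can be made explicit as a three-state check. The 72-hour grace window and the field names here are assumptions for illustration, not values from the app or the auth provider.

```typescript
interface CachedSession {
  userId: string;
  expiresAt: number; // token expiry, ms since epoch
}

const OFFLINE_GRACE_MS = 72 * 60 * 60 * 1000; // hypothetical grace window

type SessionState = "valid" | "grace" | "reconnect-required";

function checkOfflineSession(s: CachedSession, now: number): SessionState {
  if (now < s.expiresAt) return "valid";
  // Expired while offline: deliberately extend rather than lock the user
  // out, then require a brief reconnection once the window closes.
  if (now < s.expiresAt + OFFLINE_GRACE_MS) return "grace";
  return "reconnect-required";
}
```

The "grace" state is also the natural place to surface a banner so expiry never comes as a surprise mid-shift.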


What Doesn’t Change Across Tiers

One design constraint I’ve tried to hold firmly: application code at the call site shouldn’t know which tier it’s in. A data-fetching hook looks identical regardless of whether the app is talking to the cloud, a local hub, or IndexedDB. The tier logic lives only in the hooks themselves and the offline infrastructure layer underneath — not scattered throughout components.
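The shape of that constraint is one interface, three backends, and a single dispatch point. The names below are illustrative stand-ins; the real hooks wrap a cloud API, the hub's API, and IndexedDB respectively.

```typescript
type Tier = "cloud" | "hub" | "solo";

interface DataSource {
  getRecord(id: string): string | undefined;
}

// Stand-in implementations; each tag shows which backend answered.
const sources: Record<Tier, DataSource> = {
  cloud: { getRecord: (id) => `cloud:${id}` },
  hub:   { getRecord: (id) => `hub:${id}` },
  solo:  { getRecord: (id) => `solo:${id}` },
};

// The only place that knows about tiers. Call sites just ask for data.
function useRecord(tier: Tier, id: string): string | undefined {
  return sources[tier].getRecord(id);
}
```

A component calls `useRecord` and never branches on connectivity; the current tier is resolved once, in the infrastructure layer.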

The database schema is identical across all tiers. The API routes are identical — the hub runs the same application as the cloud, just pointed at a local database. This constraint has been a useful forcing function. When the tier boundary is well-defined, the rest of the code stays clean.


Build Order

The recommended order: build hub mode first. It’s almost entirely infrastructure work — containers, WiFi access point configuration, a sync worker — with essentially no application code changes. Each device connected to the hub operates normally and gets the full multi-user experience. Ship that first. It solves the most common field deployment problem immediately and tests the sync patterns before adding the complexity of solo offline.

Then build solo offline mode with hub awareness baked in from the start: tier detection, hub hostname checking, and the queue flush-to-hub transition. Finally, wire the two together — the offline-to-hub transition, the hub-to-offline fallback, and the hub dashboard showing pending sync state.

Each phase ships something independently useful. That’s a design goal as much as an engineering one.