Stripe Systems
Frontend Development · February 4, 2026 · 23 min read

Building Offline-First PWAs with Next.js, Service Workers, and IndexedDB

Stripe Systems Engineering

Most web applications treat offline support as an afterthought — a "no internet" screen with a sad dinosaur. Offline-first flips this: the app is designed to work without a network connection, and connectivity is treated as a progressive enhancement. Data lives locally first, syncs when possible, and the user never has to wonder whether their work was saved.

This post walks through building an offline-first Progressive Web App with Next.js, covering the full stack: service workers, caching strategies, IndexedDB persistence, background sync, push notifications, and the hard problems like conflict resolution and sync queue reliability. The code examples are production-oriented, not toy demos.

PWA vs Native App: When PWA Is the Right Call

PWAs are not a universal replacement for native apps. They solve a specific set of problems well and fall short in others. Choosing between them requires understanding the constraints of both.

PWAs make sense when:

  • Your target audience uses low-end or mid-range devices where storage is precious. A PWA installs at 2–5MB versus 40–100MB for a typical native app.
  • App store distribution is unnecessary or undesirable. You control the release cycle — no review queues, no 24–72 hour waits for approval.
  • The app is content-heavy or data-entry-focused rather than graphically intensive. Forms, dashboards, reports, and CRUD interfaces are PWA sweet spots.
  • You need a single codebase across platforms. PWAs run on any device with a modern browser, including desktop.
  • Your users have unreliable connectivity. Service workers give you fine-grained control over caching and offline behavior that native apps typically handle with less flexibility.

PWAs are the wrong choice when:

  • You need deep OS integration: Bluetooth LE scanning, NFC tag writing on iOS, contact book access, or system-level background tasks. The Web Bluetooth and Web NFC APIs exist but have limited iOS support and inconsistent behavior across Android vendors.
  • The app requires sustained, heavy computation — real-time video processing, 3D rendering, or physics simulations. WebGL and WebAssembly narrow this gap but don't close it for the most demanding workloads.
  • Background processing beyond what the web platform permits is required. Service workers are terminated after a short idle period. You cannot run persistent background tasks the way a native app can.
  • Your app is a game or media-heavy experience where frame-level performance and hardware-accelerated rendering are non-negotiable.

The decision comes down to a pragmatic question: does the web platform provide the APIs your app needs? If yes, a PWA saves you from maintaining two native codebases and dealing with app store friction. If no, build native.

Service Worker Lifecycle

Service workers are the backbone of any PWA. They sit between your app and the network, intercepting every fetch request and deciding how to handle it. Understanding their lifecycle is essential because bugs in service worker registration or cache management lead to stale content, broken updates, and frustrated users.

Install Phase

The install event fires when the browser detects a new or updated service worker file. This is where you precache your app shell — the minimum set of assets needed to render the application.

const CACHE_NAME = 'app-shell-v1';
const PRECACHE_ASSETS = [
  '/',
  '/index.html',
  '/offline.html',
  '/styles/main.css',
  '/scripts/app.js',
  '/icons/icon-192.png',
  '/icons/icon-512.png',
];

self.addEventListener('install', (event) => {
  event.waitUntil(
    caches.open(CACHE_NAME).then((cache) => {
      return cache.addAll(PRECACHE_ASSETS);
    })
  );
  // Skip waiting to activate immediately instead of waiting
  // for all tabs to close. Use with caution in production —
  // this can cause version mismatches between cached assets
  // and the running page if not handled carefully.
  self.skipWaiting();
});

event.waitUntil() tells the browser not to consider the install complete until the promise resolves. If any asset in the precache list fails to fetch, the entire install fails, and the old service worker remains active. This is intentional — a partial precache would leave the app in a broken state.

self.skipWaiting() forces the new service worker to activate immediately rather than waiting for all tabs running the old version to close. This is useful during development and acceptable in production if your caching strategy handles version transitions cleanly.
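If you do ship skipWaiting() in production, the page should reload once the new worker takes control so the running markup and the freshly cached assets stay in step. A minimal sketch — the helper name is ours, and the container is injected so the logic can be exercised outside a browser; in the app you would pass navigator.serviceWorker and window.location.reload:

```javascript
// Reload the page exactly once when a new service worker takes control.
function reloadOnControllerChange(swContainer, reload) {
  let refreshing = false; // guard: controllerchange can fire more than once
  swContainer.addEventListener('controllerchange', () => {
    if (refreshing) return;
    refreshing = true;
    reload();
  });
}

// In the app:
// reloadOnControllerChange(navigator.serviceWorker, () => window.location.reload());
```

The `refreshing` flag matters: without it, a second controllerchange during the reload can trap the page in a reload loop.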

Activate Phase

The activate event fires after install completes and the service worker takes control. This is where you clean up old caches to prevent storage from growing unbounded.

self.addEventListener('activate', (event) => {
  event.waitUntil(
    caches.keys().then((cacheNames) => {
      return Promise.all(
        cacheNames
          .filter((name) => name !== CACHE_NAME)
          .map((name) => caches.delete(name))
      );
    })
  );
  // Take control of all open tabs immediately
  self.clients.claim();
});

self.clients.claim() allows the newly activated service worker to take control of all open pages within its scope immediately, rather than waiting for the next navigation. Combined with skipWaiting(), this means a new service worker can go from installed to controlling all pages in a single page load.

Fetch Phase

The fetch event is where the real work happens. Every network request from a controlled page passes through this handler, and you decide whether to serve from cache, go to the network, or combine both.

self.addEventListener('fetch', (event) => {
  const { request } = event;
  const url = new URL(request.url);

  // Skip non-GET requests (POST, PUT, DELETE go straight to network)
  if (request.method !== 'GET') return;

  // Apply different strategies based on request type
  if (url.pathname.startsWith('/api/')) {
    event.respondWith(networkFirst(request));
  } else if (request.destination === 'image' || 
             request.destination === 'font' ||
             request.destination === 'style') {
    event.respondWith(cacheFirst(request));
  } else {
    event.respondWith(staleWhileRevalidate(request));
  }
});

Next.js Static Export and Service Workers

Next.js with output: 'export' in next.config.js generates a fully static site — HTML, CSS, JS, and assets — into an out/ directory. There is no Node.js server at runtime. This is ideal for PWAs because every asset is a static file that can be precached by a service worker and served from a CDN.

The key consideration is that Next.js generates hashed filenames for JS and CSS chunks (e.g., _next/static/chunks/pages/index-a1b2c3.js). Your precache manifest must be regenerated on every build to include the correct filenames. Workbox handles this automatically, which we cover in the Workbox integration section.

Caching Strategies

Choosing the right caching strategy per resource type is what separates a PWA that feels fast and reliable from one that serves stale data or breaks offline.

Cache-First

Serve from cache if available; fall back to network only on a cache miss. Best for assets that rarely change: images, fonts, compiled CSS, and versioned JS bundles.

async function cacheFirst(request) {
  const cached = await caches.match(request);
  if (cached) return cached;

  try {
    const response = await fetch(request);
    if (response.ok) {
      const cache = await caches.open(CACHE_NAME);
      cache.put(request, response.clone());
    }
    return response;
  } catch (error) {
    // If both cache and network fail, return offline fallback
    return caches.match('/offline.html');
  }
}

Cache-first is aggressive — once an asset is cached, it is never re-fetched until the cache is explicitly invalidated. This is safe for versioned assets (where the filename changes when the content changes) but dangerous for assets with stable URLs that may be updated.

Network-First

Try the network; fall back to cache if the network is unavailable or too slow. Best for API responses where freshness matters but offline access is still needed.

async function networkFirst(request, timeoutMs = 3000) {
  const cache = await caches.open(CACHE_NAME);

  try {
    const response = await Promise.race([
      fetch(request),
      new Promise((_, reject) =>
        setTimeout(() => reject(new Error('timeout')), timeoutMs)
      ),
    ]);

    if (response.ok) {
      cache.put(request, response.clone());
    }
    return response;
  } catch (error) {
    const cached = await cache.match(request);
    if (cached) return cached;

    return new Response(
      JSON.stringify({ error: 'offline', cached: false }),
      { status: 503, headers: { 'Content-Type': 'application/json' } }
    );
  }
}

The timeout is critical. On a slow 2G connection, waiting 30 seconds for a response is worse than serving slightly stale cached data. The 3-second timeout is a reasonable default — adjust based on your users' network conditions.

Stale-While-Revalidate

Return the cached version immediately for instant perceived performance, then fetch a fresh copy from the network in the background and update the cache. The user sees stale data on the current request but fresh data on the next one.

async function staleWhileRevalidate(request) {
  const cache = await caches.open(CACHE_NAME);
  const cached = await cache.match(request);

  const networkFetch = fetch(request).then((response) => {
    if (response.ok) {
      cache.put(request, response.clone());
    }
    return response;
  }).catch(() => null);

  return cached || (await networkFetch) || caches.match('/offline.html');
}

This strategy works well for semi-dynamic content: blog posts, product listings, user profiles — data that changes occasionally but where showing the previous version for a few seconds is acceptable.

Network-Only and Cache-Only

Network-Only is the default browser behavior — no caching involved. Use this for POST requests, authenticated API calls where stale data could cause security issues, and any request where you explicitly do not want caching (e.g., analytics pings, payment processing).

Cache-Only serves exclusively from the cache and never touches the network. This is narrow in application — primarily used for the offline fallback page and for assets that were precached during the install phase and should never be re-fetched.
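Both strategies are trivial to express alongside the handlers above. A sketch — the cache name default and the 504 fallback are our choices, not a prescribed convention:

```javascript
// Network-only: pass straight through, never touch the cache.
async function networkOnly(request) {
  return fetch(request);
}

// Cache-only: serve from the precache or fail — never hit the network.
async function cacheOnly(request, cacheName = 'app-shell-v1') {
  const cache = await caches.open(cacheName);
  const cached = await cache.match(request);
  // 504 signals "offline and not precached" to the caller
  return cached || new Response('Not precached', { status: 504 });
}
```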

Strategy Decision Matrix

Resource Type             | Strategy                    | Reasoning
--------------------------|-----------------------------|-------------------------------------
App shell HTML            | Stale-While-Revalidate      | Fast load, background update
Hashed JS/CSS bundles     | Cache-First                 | Filename changes on update
Images, fonts             | Cache-First                 | Rarely change, large payloads
API: list endpoints       | Network-First (3s timeout)  | Freshness matters, offline fallback
API: mutations (POST/PUT) | Network-Only + Sync Queue   | Must reach server eventually
Offline fallback page     | Cache-Only                  | Precached during install

IndexedDB for Offline Data

localStorage is synchronous, string-only, and limited to ~5MB. For offline-first apps that store structured data — form submissions, work orders, images — IndexedDB is the only viable option. It supports structured records, binary blobs, indexes for efficient querying, transactions, and storage quotas that scale with available disk space, typically hundreds of megabytes or more.

The raw IndexedDB API is notoriously verbose. Dexie.js provides a clean Promise-based wrapper without hiding the underlying capabilities.

Schema Definition and Versioning

import Dexie from 'dexie';

const db = new Dexie('FieldServiceDB');

// Version 1: initial schema
db.version(1).stores({
  workOrders: 'id, status, assignedTo, scheduledDate',
  formSubmissions: '++id, workOrderId, submittedAt, syncStatus',
  photoAttachments: '++id, formSubmissionId, mimeType, syncStatus',
  syncQueue: '++id, entityType, entityId, operation, createdAt',
});

// Version 2: add priority field, new index
db.version(2).stores({
  workOrders: 'id, status, assignedTo, scheduledDate, priority',
  formSubmissions: '++id, workOrderId, submittedAt, syncStatus',
  photoAttachments: '++id, formSubmissionId, mimeType, syncStatus',
  syncQueue: '++id, entityType, entityId, operation, createdAt',
}).upgrade((tx) => {
  return tx.table('workOrders').toCollection().modify((order) => {
    order.priority = order.priority || 'normal';
  });
});

The ++id prefix creates an auto-incrementing primary key. Comma-separated fields after the primary key are indexed for fast queries. Only indexed fields appear in the schema definition, but you can store any additional fields on each record.

CRUD Operations

// Create
async function saveFormSubmission(workOrderId, formData) {
  const id = await db.formSubmissions.add({
    workOrderId,
    data: formData,
    submittedAt: new Date().toISOString(),
    syncStatus: 'pending',
  });

  // Queue for background sync
  await db.syncQueue.add({
    entityType: 'formSubmission',
    entityId: id,
    operation: 'create',
    createdAt: new Date().toISOString(),
  });

  return id;
}

// Read with index
async function getPendingSubmissions() {
  return db.formSubmissions
    .where('syncStatus')
    .equals('pending')
    .toArray();
}

// Update
async function markSynced(submissionId, serverTimestamp) {
  await db.formSubmissions.update(submissionId, {
    syncStatus: 'synced',
    serverTimestamp,
  });
}

// Delete synced data older than 30 days (storage hygiene)
async function pruneOldData() {
  const cutoff = new Date();
  cutoff.setDate(cutoff.getDate() - 30);

  await db.formSubmissions
    .where('syncStatus')
    .equals('synced')
    .and((item) => new Date(item.submittedAt) < cutoff)
    .delete();
}

Schema Migration Strategy

Dexie handles schema migrations through version numbers. Each call to db.version(n) defines the schema at that version, and the optional .upgrade() callback runs data transformations. Migrations run automatically when the database is opened if the stored version is lower than the declared version.

A few hard-learned rules: never skip version numbers, never remove a version declaration from your code (Dexie needs the full version history to migrate from any starting point), and always test migrations with real data — especially when adding indexes to existing tables with thousands of records.

Conflict Resolution

When data is modified both locally and on the server before a sync occurs, you have a conflict. There is no single correct resolution strategy — it depends on your data semantics.

Last-Write-Wins (LWW): The mutation with the latest timestamp overwrites the other. Simple to implement, reasonable for single-user data where the user's most recent action is almost always the correct one. Breaks down in multi-user scenarios where independent edits should be merged rather than overwritten.

Vector Clocks: Each client maintains a logical clock that increments on every mutation. Conflicts are detected by comparing vector clocks — if neither clock is a strict ancestor of the other, you have a true conflict that requires manual resolution. This is appropriate for collaborative editing scenarios but adds significant complexity.

Operational Transform (OT) / CRDTs: Instead of storing document state, store the operations that produced the state. Merge by transforming operations against each other. This is what Google Docs uses. It is the correct approach for fine-grained collaborative editing but overkill for most CRUD applications.

For most offline-first apps — field data collection, inspections, inventory management — LWW with server-timestamp authority and user notification of conflicts is the right tradeoff. It is simple, predictable, and handles the 99% case correctly.
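A last-write-wins resolver fits in a few lines. A sketch, assuming each record carries an updatedAt ISO timestamp and `base` is the last version both sides agreed on (field names are our assumptions):

```javascript
// Last-write-wins with conflict detection. A true conflict means both
// sides changed since the last synced version ("base") — LWW still
// picks a winner, but these are the cases to surface to the user.
// Ties go to the remote copy, matching server-timestamp authority.
function resolveLWW(base, local, remote) {
  const localChanged = local.updatedAt !== base.updatedAt;
  const remoteChanged = remote.updatedAt !== base.updatedAt;
  const winner =
    Date.parse(local.updatedAt) > Date.parse(remote.updatedAt) ? local : remote;
  return { winner, conflict: localChanged && remoteChanged };
}
```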

Background Sync

The Background Sync API lets you defer actions until the user has a stable connection. Instead of failing a POST request when offline, you queue the mutation and let the service worker replay it later. Note that the API is currently supported only in Chromium-based browsers, so treat it as an enhancement and keep a manual retry path (for example, replaying the queue on the online event) for Safari and Firefox.

Building the Sync Flow

Step 1: Queue mutations from the application code.

async function submitForm(workOrderId, formData, photos) {
  // Store form data locally
  const submissionId = await db.formSubmissions.add({
    workOrderId,
    data: formData,
    submittedAt: new Date().toISOString(),
    syncStatus: 'pending',
  });

  // Store photo blobs
  for (const photo of photos) {
    await db.photoAttachments.add({
      formSubmissionId: submissionId,
      blob: photo.blob,
      fileName: photo.name,
      mimeType: photo.type,
      syncStatus: 'pending',
    });
  }

  // Queue sync
  await db.syncQueue.add({
    entityType: 'formSubmission',
    entityId: submissionId,
    operation: 'create',
    createdAt: new Date().toISOString(),
    retryCount: 0,
  });

  // Register background sync (feature-detect: Chromium-only API)
  const registration = await navigator.serviceWorker.ready;
  if ('sync' in registration) {
    await registration.sync.register('sync-queue');
  }

  return submissionId;
}

Step 2: Handle the sync event in the service worker.

self.addEventListener('sync', (event) => {
  if (event.tag === 'sync-queue') {
    event.waitUntil(processSyncQueue());
  }
});

async function processSyncQueue() {
  const db = await openDB(); // Open Dexie instance in SW context
  const pendingItems = await db.syncQueue
    .orderBy('createdAt')
    .toArray();

  for (const item of pendingItems) {
    try {
      await syncItem(item, db);
      await db.syncQueue.delete(item.id);
    } catch (error) {
      if (error.status >= 400 && error.status < 500) {
        // Client error — retrying won't help, mark as failed
        await db.syncQueue.update(item.id, {
          syncStatus: 'failed',
          error: error.message,
        });
      } else {
        // Server or network error — increment retry, apply backoff
        const retryCount = (item.retryCount || 0) + 1;
        if (retryCount >= 5) {
          await db.syncQueue.update(item.id, {
            syncStatus: 'failed',
            error: `Max retries exceeded: ${error.message}`,
          });
        } else {
          await db.syncQueue.update(item.id, { retryCount });
          // Throwing stops processing; the browser will retry
          // the sync event with exponential backoff
          throw error;
        }
      }
    }
  }
}

async function syncItem(item, db) {
  if (item.entityType === 'formSubmission') {
    const submission = await db.formSubmissions.get(item.entityId);
    const photos = await db.photoAttachments
      .where('formSubmissionId')
      .equals(item.entityId)
      .toArray();

    // Upload photos first, collect server URLs
    const photoUrls = [];
    for (const photo of photos) {
      const formData = new FormData();
      formData.append('file', photo.blob, photo.fileName);

      const res = await fetch('/api/uploads', {
        method: 'POST',
        body: formData,
      });

      if (!res.ok) {
        const err = new Error('Photo upload failed');
        err.status = res.status;
        throw err;
      }

      const { url } = await res.json();
      photoUrls.push(url);

      await db.photoAttachments.update(photo.id, {
        syncStatus: 'synced',
        remoteUrl: url,
      });
    }

    // Submit form with photo URLs
    const res = await fetch('/api/submissions', {
      method: 'POST',
      headers: { 'Content-Type': 'application/json' },
      body: JSON.stringify({
        ...submission.data,
        workOrderId: submission.workOrderId,
        photoUrls,
        clientTimestamp: submission.submittedAt,
      }),
    });

    if (!res.ok) {
      const err = new Error('Form submission failed');
      err.status = res.status;
      throw err;
    }

    const { serverTimestamp } = await res.json();
    await db.formSubmissions.update(item.entityId, {
      syncStatus: 'synced',
      serverTimestamp,
    });
  }
}

Idempotency

Sync replays are inherently at-least-once. The network request might succeed but the response might fail to reach the client, causing a retry of an already-processed mutation. Every sync endpoint must be idempotent.

The standard approach: generate a unique mutationId (UUID) client-side when the mutation is created and include it in every request. The server checks whether that mutationId has already been processed before applying the mutation. This turns a "might-duplicate" problem into a safe retry.
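A sketch of both halves — the function names and the in-memory Set standing in for server storage are illustrative assumptions, not a prescribed API:

```javascript
// Client side: the mutationId is generated once, when the mutation is
// queued — NOT at send time — so every retry carries the same id.
// crypto.randomUUID is available in modern browsers and recent Node.
function createMutation(payload, id = crypto.randomUUID()) {
  return { mutationId: id, payload };
}

// Server side: check-then-apply. In production, `processed` would be a
// unique-indexed database column, not an in-memory Set, so the check
// and insert happen atomically.
function applyMutation(processed, mutation, apply) {
  if (processed.has(mutation.mutationId)) return false; // duplicate replay: no-op
  apply(mutation.payload);
  processed.add(mutation.mutationId);
  return true;
}
```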

Web App Manifest

The manifest tells the browser how to install your app and how it should look when launched from the home screen.

{
  "name": "Field Service Manager",
  "short_name": "FieldSvc",
  "description": "Offline-first field service management",
  "start_url": "/",
  "scope": "/",
  "display": "standalone",
  "background_color": "#ffffff",
  "theme_color": "#1a1a2e",
  "orientation": "portrait-primary",
  "icons": [
    {
      "src": "/icons/icon-192.png",
      "sizes": "192x192",
      "type": "image/png",
      "purpose": "any maskable"
    },
    {
      "src": "/icons/icon-512.png",
      "sizes": "512x512",
      "type": "image/png",
      "purpose": "any maskable"
    }
  ]
}

display modes: standalone hides the browser chrome entirely — the app looks and feels like a native app. minimal-ui keeps a small navigation bar (back button, URL display), which is useful if your app doesn't implement its own back-navigation. fullscreen is for kiosk or immersive apps. For most PWAs, standalone is the right choice.

start_url defines what URL opens when the user launches the installed app. Set it to / or /dashboard — wherever your app's main entry point is. Some developers append a query param (e.g., /?source=pwa) to track installs in analytics.

scope restricts which URLs the service worker controls. If your app lives at /app/, set scope to /app/ so that navigating to /marketing/ opens in the browser instead of the installed PWA.

Handling the Install Prompt

Browsers show an install banner automatically when PWA criteria are met, but you can intercept and defer this to a more appropriate moment:

let deferredPrompt = null;

window.addEventListener('beforeinstallprompt', (event) => {
  event.preventDefault();
  deferredPrompt = event;
  // Show your custom install button
  document.getElementById('install-btn').style.display = 'block';
});

document.getElementById('install-btn').addEventListener('click', async () => {
  if (!deferredPrompt) return;
  deferredPrompt.prompt();
  const { outcome } = await deferredPrompt.userChoice;
  console.log(`Install prompt outcome: ${outcome}`);
  deferredPrompt = null;
});

Deferring the prompt lets you show it at a contextually appropriate time — after the user has used the app a few times, or when they try to use an offline feature.

Precaching Next.js Static Assets with Workbox

Manually maintaining a list of precache URLs breaks the moment your build output changes. Workbox generates the precache manifest automatically from your build output.

Workbox Configuration with next-pwa

The next-pwa package integrates Workbox into the Next.js build pipeline:

// next.config.js
const withPWA = require('next-pwa')({
  dest: 'public',
  register: true,
  skipWaiting: true,
  disable: process.env.NODE_ENV === 'development',
  runtimeCaching: [
    {
      urlPattern: /^https:\/\/api\.example\.com\/.*$/,
      handler: 'NetworkFirst',
      options: {
        cacheName: 'api-cache',
        expiration: {
          maxEntries: 200,
          maxAgeSeconds: 60 * 60 * 24, // 24 hours
        },
        networkTimeoutSeconds: 3,
      },
    },
    {
      urlPattern: /\.(?:png|jpg|jpeg|svg|gif|webp)$/,
      handler: 'CacheFirst',
      options: {
        cacheName: 'image-cache',
        expiration: {
          maxEntries: 100,
          maxAgeSeconds: 60 * 60 * 24 * 30, // 30 days
        },
      },
    },
    {
      urlPattern: /\.(?:js|css)$/,
      handler: 'StaleWhileRevalidate',
      options: {
        cacheName: 'static-resources',
      },
    },
  ],
});

module.exports = withPWA({
  output: 'export',
  // ... other Next.js config
});

Custom Workbox Integration

If you need more control than next-pwa provides — for example, to add custom sync handlers or notification logic — you can use Workbox directly:

// service-worker.js (custom)
import { precacheAndRoute } from 'workbox-precaching';
import { registerRoute } from 'workbox-routing';
import { NetworkFirst, CacheFirst, StaleWhileRevalidate } from 'workbox-strategies';
import { ExpirationPlugin } from 'workbox-expiration';

// Precache manifest is injected by Workbox build step
precacheAndRoute(self.__WB_MANIFEST);

// API routes: network-first with 3s timeout
registerRoute(
  ({ url }) => url.pathname.startsWith('/api/'),
  new NetworkFirst({
    cacheName: 'api-responses',
    networkTimeoutSeconds: 3,
    plugins: [
      new ExpirationPlugin({ maxEntries: 200, maxAgeSeconds: 86400 }),
    ],
  })
);

// Images: cache-first
registerRoute(
  ({ request }) => request.destination === 'image',
  new CacheFirst({
    cacheName: 'images',
    plugins: [
      new ExpirationPlugin({ maxEntries: 100, maxAgeSeconds: 2592000 }),
    ],
  })
);

The build step that injects the precache manifest runs as a postbuild script:

{
  "scripts": {
    "build": "next build",
    "postbuild": "workbox injectManifest workbox-config.js"
  }
}
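The workbox-config.js it references looks roughly like this — a sketch for a static export, where the glob patterns, paths, and size limit are assumptions to adapt to your build output:

```javascript
// workbox-config.js — injectManifest mode: Workbox scans globDirectory,
// builds the precache manifest, and injects it where swSrc references
// self.__WB_MANIFEST.
module.exports = {
  globDirectory: 'out/',                 // Next.js static export output
  globPatterns: ['**/*.{html,js,css,png,svg,woff2}'],
  swSrc: 'service-worker.js',            // your handwritten service worker
  swDest: 'out/sw.js',                   // injected result, served to browsers
  maximumFileSizeToCacheInBytes: 5 * 1024 * 1024, // skip oversized assets
};
```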

Push Notifications

Push notifications let you re-engage users even when the app isn't open. The implementation spans three components: the client (subscribes), the service worker (receives and displays), and the server (sends).

VAPID Key Setup

VAPID (Voluntary Application Server Identification) keys authenticate your server with the push service. Generate them once:

npx web-push generate-vapid-keys

This outputs a public key (shared with the client) and a private key (kept on the server). Store the private key in environment variables, never in client-side code.

Client: Requesting Permission and Subscribing

async function subscribeToPush() {
  // Don't ask on first visit — wait until the user has
  // engaged with a feature that benefits from notifications
  const permission = await Notification.requestPermission();
  if (permission !== 'granted') return null;

  const registration = await navigator.serviceWorker.ready;
  const subscription = await registration.pushManager.subscribe({
    userVisibleOnly: true, // Required by Chrome: every push must show a notification
    applicationServerKey: urlBase64ToUint8Array(VAPID_PUBLIC_KEY),
  });

  // Send subscription to your server
  await fetch('/api/push/subscribe', {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify(subscription),
  });

  return subscription;
}

// Helper: convert VAPID key from base64 to Uint8Array
function urlBase64ToUint8Array(base64String) {
  const padding = '='.repeat((4 - (base64String.length % 4)) % 4);
  const base64 = (base64String + padding)
    .replace(/-/g, '+')
    .replace(/_/g, '/');
  const rawData = atob(base64);
  return Uint8Array.from([...rawData].map((char) => char.charCodeAt(0)));
}

UX guidance on permission timing: Asking for notification permission on page load is hostile — users reflexively deny it. Instead, tie the prompt to a specific user action: "Enable notifications for work order updates?" after they've viewed their work order list. This results in significantly higher opt-in rates because the user understands the value proposition.

Service Worker: Receiving and Displaying

self.addEventListener('push', (event) => {
  let data = { title: 'New Notification', body: '' };
  
  if (event.data) {
    try {
      data = event.data.json();
    } catch (e) {
      data.body = event.data.text();
    }
  }

  const options = {
    body: data.body,
    icon: '/icons/icon-192.png',
    badge: '/icons/badge-72.png',
    data: { url: data.url || '/' },
    actions: data.actions || [],
    vibrate: [100, 50, 100],
    tag: data.tag || 'default', // Prevents duplicate notifications
    renotify: true,
  };

  event.waitUntil(
    self.registration.showNotification(data.title, options)
  );
});

self.addEventListener('notificationclick', (event) => {
  event.notification.close();

  // Resolve against the SW origin: data.url is often a relative path,
  // but client.url below is always an absolute URL
  const targetUrl = new URL(event.notification.data.url, self.location.origin).href;

  event.waitUntil(
    clients.matchAll({ type: 'window', includeUncontrolled: true })
      .then((clientList) => {
        // If the app is already open, focus it and navigate
        for (const client of clientList) {
          if (client.url === targetUrl && 'focus' in client) {
            return client.focus();
          }
        }
        // Otherwise open a new window
        return clients.openWindow(targetUrl);
      })
  );
});

Server: Sending Notifications

const webpush = require('web-push');

webpush.setVapidDetails(
  'mailto:[email protected]',
  process.env.VAPID_PUBLIC_KEY,
  process.env.VAPID_PRIVATE_KEY
);

async function sendPushNotification(subscription, payload) {
  try {
    await webpush.sendNotification(
      subscription,
      JSON.stringify({
        title: payload.title,
        body: payload.body,
        url: payload.url,
        tag: payload.tag,
      })
    );
  } catch (error) {
    if (error.statusCode === 410 || error.statusCode === 404) {
      // Subscription expired or invalid — remove from database
      await removeSubscription(subscription.endpoint);
    }
  }
}

The tag field is important: notifications with the same tag replace each other rather than stacking. Use this for things like "new work order assigned" — the user should see the latest one, not five consecutive notifications as assignments come in.

Handling Offline/Online Transitions in the UI

Connectivity Detection

navigator.onLine is widely misunderstood. It returns true if the device has a network interface up — which could mean connected to WiFi with no internet access. It is not a reliable indicator of actual internet connectivity.

import { useState, useEffect, useCallback } from 'react';

function useConnectivity() {
  const [isOnline, setIsOnline] = useState(
    typeof navigator !== 'undefined' ? navigator.onLine : true
  );
  const [isActuallyConnected, setIsActuallyConnected] = useState(true);

  const checkRealConnectivity = useCallback(async () => {
    try {
      const response = await fetch('/api/health', {
        method: 'HEAD',
        cache: 'no-store',
      });
      setIsActuallyConnected(response.ok);
    } catch {
      setIsActuallyConnected(false);
    }
  }, []);

  useEffect(() => {
    const handleOnline = () => {
      setIsOnline(true);
      checkRealConnectivity();
    };
    const handleOffline = () => {
      setIsOnline(false);
      setIsActuallyConnected(false);
    };

    window.addEventListener('online', handleOnline);
    window.addEventListener('offline', handleOffline);

    // Periodic heartbeat check
    const interval = setInterval(checkRealConnectivity, 30000);

    return () => {
      window.removeEventListener('online', handleOnline);
      window.removeEventListener('offline', handleOffline);
      clearInterval(interval);
    };
  }, [checkRealConnectivity]);

  return { isOnline, isActuallyConnected };
}

Connectivity Status Component

function ConnectivityBanner() {
  const { isActuallyConnected } = useConnectivity();
  const [pendingCount, setPendingCount] = useState(0);

  useEffect(() => {
    const updatePending = async () => {
      const count = await db.syncQueue.count();
      setPendingCount(count);
    };
    updatePending();
    const interval = setInterval(updatePending, 5000);
    return () => clearInterval(interval);
  }, []);

  if (isActuallyConnected && pendingCount === 0) return null;

  return (
    <div role="status" className="connectivity-banner">
      {!isActuallyConnected && (
        <span>📡 You are offline. Changes will sync when connected.</span>
      )}
      {pendingCount > 0 && (
        <span>🔄 {pendingCount} item{pendingCount > 1 ? 's' : ''} waiting to sync</span>
      )}
    </div>
  );
}

Graceful Degradation

When the app is offline, some features should be disabled rather than allowed to fail silently:

  • Disable: Features that require real-time server state — user search, live dashboards, payment processing.
  • Enable with queuing: Data entry, form submissions, photo capture — anything that can be stored locally and synced later.
  • Enable fully: Reading cached data, browsing previously loaded content, viewing saved forms.

The rule of thumb: if the action can be idempotently replayed later, let the user do it offline and queue it. If it requires immediate server confirmation (a payment, a reservation), disable it and explain why.
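
One way to encode this rule is a small policy map consulted before rendering each action. A minimal sketch — the action names and the map itself are illustrative, not from the production app:

```javascript
// Illustrative offline policy: 'disable' needs the server now,
// 'queue' can be stored locally and replayed later, 'allow' is read-only.
const OFFLINE_POLICY = {
  processPayment: 'disable',
  searchUsers: 'disable',
  submitInspection: 'queue',
  capturePhoto: 'queue',
  viewWorkOrders: 'allow',
};

// Decide what the UI should do for an action given connectivity.
function offlineBehavior(action, isConnected) {
  if (isConnected) return 'allow';
  return OFFLINE_POLICY[action] ?? 'disable'; // unknown actions fail closed
}
```

Failing closed on unknown actions is deliberate: a new feature that nobody classified should be disabled offline rather than silently queued.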

Case Study: Field Service Management in Rural India

A facilities management company operating across rural districts in Rajasthan and Madhya Pradesh needed to digitize their inspection workflow. Their 200+ field technicians conducted equipment inspections at cell tower sites, many located in areas with intermittent 2G/3G connectivity and frequent dropouts lasting minutes to hours.

The existing workflow was paper forms photographed and WhatsApp'd to a coordinator — slow, error-prone, and impossible to audit. A previous native Android app (built in Java) weighed 45MB, crashed on budget phones with 2GB RAM, and required APK sideloading for updates because many technicians didn't have Google Play Store access.

Requirements

  • Technicians receive work orders on their phones and must see assigned jobs even without connectivity.
  • On-site inspections require filling structured forms (checklists, measurements, text fields) with photo evidence.
  • Completed inspections must sync to the server when connectivity returns, preserving submission order.
  • The app must run smoothly on budget Android devices (2GB RAM, Android 9+, 16GB storage).
  • Updates must be instant — no APK distribution, no sideloading.

Architecture

Stripe Systems built this as a statically exported Next.js application deployed to a CDN, with a service worker handling all offline logic.

Service worker strategy:

  • App shell (HTML, CSS, JS): Cache-first with stale-while-revalidate for HTML pages.
  • Work order API: Network-first with a 3-second timeout and fallback to cached data.
  • Photo uploads: Network-only, handled exclusively through the sync queue.
  • Static assets (icons, fonts): Cache-first with 30-day expiration.
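
The work-order strategy — network-first with a 3-second timeout and cache fallback — can be sketched as a small helper. This is a sketch under assumptions: inside the real service worker, `fetchFn` would be `fetch` and `cache` a `Cache` object from `caches.open()`; they are injected here so the logic stands alone.

```javascript
// Race a promise against a timeout; resolves with the promise's value
// or rejects with a timeout error after `ms` milliseconds.
function withTimeout(promise, ms) {
  let timer;
  const timeout = new Promise((_, reject) => {
    timer = setTimeout(() => reject(new Error('timeout')), ms);
  });
  return Promise.race([promise, timeout]).finally(() => clearTimeout(timer));
}

// Network-first with timeout, falling back to the cache.
async function networkFirst(request, { fetchFn, cache, timeoutMs = 3000 }) {
  try {
    const response = await withTimeout(fetchFn(request), timeoutMs);
    await cache.put(request, response.clone()); // keep the cache warm
    return response;
  } catch {
    const cached = await cache.match(request);
    if (cached) return cached;
    throw new Error('offline and not cached');
  }
}
```

On a 2G link where a request can hang for tens of seconds, the timeout is what makes the cache fallback feel instant rather than eventual.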

IndexedDB schema:

db.version(1).stores({
  workOrders: 'id, status, assignedTo, scheduledDate, siteId',
  formSubmissions: '++id, workOrderId, submittedAt, syncStatus',
  photoAttachments: '++id, formSubmissionId, mimeType, syncStatus, sizeBytes',
  syncQueue: '++id, entityType, entityId, operation, createdAt, retryCount',
  appState: 'key',
});

Photo compression: Raw photos from phone cameras are 3–8MB each. Storing and syncing these over 2G is impractical. Before storage, every photo is resized and compressed client-side using the Canvas API:

async function compressPhoto(file, maxWidth = 1280, quality = 0.7) {
  const bitmap = await createImageBitmap(file);
  const scale = Math.min(1, maxWidth / bitmap.width);
  const width = Math.round(bitmap.width * scale);
  const height = Math.round(bitmap.height * scale);

  const canvas = new OffscreenCanvas(width, height);
  const ctx = canvas.getContext('2d');
  ctx.drawImage(bitmap, 0, 0, width, height);
  bitmap.close(); // release decoded image memory — matters on 2GB-RAM devices

  const blob = await canvas.convertToBlob({
    type: 'image/jpeg',
    quality,
  });

  // Ensure we stay under 500KB
  if (blob.size > 500 * 1024 && quality > 0.3) {
    return compressPhoto(file, maxWidth, quality - 0.1);
  }

  return blob;
}

This recursive compression ensures photos stay under 500KB while maintaining enough detail for inspection documentation. Average compressed size: 180KB.

Sync queue architecture:

  1. Technician fills inspection form and attaches photos. Everything is saved to IndexedDB immediately with syncStatus: 'pending'. The UI confirms the save — the technician can move on to the next site.
  2. registration.sync.register('sync-queue') is called. If the device is online, sync begins immediately. If offline, the browser will fire the sync event when connectivity returns.
  3. The sync worker processes the queue in chronological order. Photos are uploaded first via multipart form POST. Each successful upload returns a server URL.
  4. Once all photos for a submission have URLs, the form data is POST'd with the photo URLs included.
  5. The server validates the submission, stores it, and returns a confirmation with a server timestamp.
  6. The sync worker updates the local record to syncStatus: 'synced' and removes the queue entry.
  7. If a conflict is detected (e.g., a supervisor modified the work order on the server while the technician was offline), the server returns a 409 Conflict with the server's version. The local record is flagged, and the technician is shown a notification to review the conflict on their next sync.
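
Steps 2–3 hinge on ordered processing with retries. A minimal sketch of that logic — `entries` and `sendFn` are stand-ins for the IndexedDB queue and the upload call, and the base/cap delays are illustrative choices, not the production values:

```javascript
// Exponential backoff with a cap, used between retries of a failed entry.
function backoffDelayMs(retryCount, baseMs = 2000, maxMs = 5 * 60 * 1000) {
  return Math.min(maxMs, baseMs * 2 ** retryCount);
}

// Process queue entries in creation order. Stops at the first failure
// so later entries can never jump ahead of an earlier one.
async function processQueue(entries, sendFn) {
  const ordered = [...entries].sort((a, b) => a.createdAt - b.createdAt);
  for (let i = 0; i < ordered.length; i++) {
    try {
      await sendFn(ordered[i]);
    } catch {
      ordered[i].retryCount = (ordered[i].retryCount ?? 0) + 1;
      return ordered.slice(i); // still pending for the next sync pass
    }
  }
  return []; // everything synced
}
```

Stopping at the first failure is what preserves submission order (requirement above): a transient failure on one inspection must not let a later one reach the server first.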

Conflict resolution: The Stripe Systems team chose server-timestamp-wins for this application. The reasoning: inspection data is append-only in practice — technicians create submissions, they don't collaboratively edit them. The only conflict scenario is a supervisor reassigning or closing a work order while the technician is offline. In that case, the server state is authoritative, but the technician's submitted data is never discarded — it is preserved as a separate record for audit purposes.

Results

After three months of production use across 200+ technicians:

  • Full offline capability for up to 72 hours of data collection. Technicians in the most remote sites would collect two to three days of inspections before reaching a town with reliable connectivity.
  • Average sync time: 30 seconds for a full day's worth of work orders (typically 8–12 inspections with 3–5 photos each). Compressed photos and batched API calls kept bandwidth usage low.
  • Installed app size: 2.8MB versus 45MB for the previous native app. This was significant — technicians' phones typically had 1–3GB of free storage.
  • 60% reduction in data usage compared to the native app. The native app re-downloaded the full work order list on every launch. The PWA's network-first strategy with cache fallback meant repeated launches consumed zero data until a sync was needed.
  • 95% technician preference for the PWA over the previous native app in a post-deployment survey. The primary reasons cited: faster launch time, not needing to sideload updates, and the offline reliability indicator that showed them exactly how many items were pending sync.

Lessons Learned

Test on real devices in real conditions. Chrome DevTools network throttling is not a substitute for a Rs. 7,000 (~$85) Android phone on Airtel 2G in a village outside Udaipur. The team kept a set of budget test devices and periodically tested in low-connectivity areas.

IndexedDB storage limits vary wildly. Chrome on Android allocates a percentage of available disk space, but the exact amount depends on the device and OS version. The team implemented a storage quota check on app launch and warned users when available storage dropped below 50MB.
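
The launch-time check can be kept as a pure predicate over the result of `navigator.storage.estimate()`. The 50MB threshold is the one described above; the `{ quota, usage }` shape is what the API resolves with:

```javascript
const LOW_STORAGE_BYTES = 50 * 1024 * 1024;

// True when the remaining quota is below the warning threshold.
// Missing fields default to 0, so an absent estimate fails safe (warns).
function isStorageLow({ quota = 0, usage = 0 } = {}, thresholdBytes = LOW_STORAGE_BYTES) {
  return quota - usage < thresholdBytes;
}

// In the app this would run on launch (browser-only sketch):
// if (navigator.storage && navigator.storage.estimate) {
//   const estimate = await navigator.storage.estimate();
//   if (isStorageLow(estimate)) showLowStorageWarning();
// }
```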

Service worker updates need a clear UX. When a new service worker is deployed, the old one continues running until all tabs are closed. The team implemented a "new version available" banner with a refresh button that posts a message to the waiting service worker — which calls skipWaiting() in response, since only the worker itself can — and then reloads the page.

Background sync is not guaranteed. The browser decides when to fire the sync event based on network quality and battery state. On some Android devices with aggressive battery optimization, the sync event is significantly delayed. The team added a manual "Sync Now" button as a fallback that triggers the same sync logic directly.
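
That fallback can be a single entry point that feature-detects Background Sync and otherwise runs the sync logic directly. A sketch — `runSyncNow` stands in for the same queue processing the sync event handler invokes, and `registration` is the service worker registration:

```javascript
// Schedule a background sync where supported; otherwise (e.g., iOS
// Safari, which lacks the Background Sync API) run the sync directly.
async function requestSync(registration, runSyncNow) {
  if (registration && 'sync' in registration) {
    try {
      await registration.sync.register('sync-queue');
      return 'scheduled';
    } catch {
      // Registration can fail (permissions, browser policy); fall through.
    }
  }
  await runSyncNow();
  return 'ran-directly';
}
```

The "Sync Now" button simply calls this with the current registration, so the manual path and the background path share one code path into the queue.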

Wrapping Up

Building an offline-first PWA requires thinking differently about data flow. The network is not a dependency — it is an optional enhancement. Data lives locally first, UI feedback is immediate, and synchronization happens in the background when conditions allow.

The technical foundation is straightforward: service workers for caching, IndexedDB for structured storage, background sync for deferred mutations, and push notifications for re-engagement. The hard part is the decisions around each: which caching strategy for which resource, how to handle conflicts, when to retry, and how to communicate sync state to the user.

Start with the basics — a precached app shell and a network-first strategy for API data — and layer in complexity as your application demands it. Not every app needs vector clocks or operational transforms. Most need reliable offline reads, queued offline writes, and a clear status indicator. Get those right and you've built something that works for users regardless of their network conditions.
