Stripe Systems
Frontend Development · March 2, 2026 · 19 min read

Micro-Frontend Architecture at Scale: Module Federation with React and Webpack 5

Stripe Systems Engineering

The pitch for micro-frontends is compelling: split a monolithic frontend into independently deployable units owned by autonomous teams. The reality is more nuanced. Module Federation, introduced in Webpack 5, is the most practical tool for achieving this — but it introduces genuine complexity that you need to understand before committing to it.

This post covers the mechanics of Module Federation, the architectural decisions you'll face, and a real case study of an enterprise platform built on this pattern. The goal is to give you enough depth to make informed decisions, not to sell you on the architecture.

What Micro-Frontends Solve vs. What They Complicate

Before discussing implementation, it's worth being honest about the tradeoffs.

What you gain

Independent deployability. Each team ships on its own cadence. The Claims team doesn't wait for the Policy team's sprint to finish. This is the primary benefit, and it's significant — especially when you have more than three or four teams contributing to the same frontend.

Team autonomy. Teams own their vertical end-to-end: from the database schema to the UI. They choose their own state management, testing strategy, and release schedule. This reduces cross-team coordination overhead.

Technology heterogeneity. One team can use React 18 with Zustand while another uses React with Redux Toolkit. In theory, you could mix frameworks entirely (React + Vue), though in practice this creates more problems than it solves.

Isolated failure domains. A runtime error in the Reporting micro-frontend doesn't crash the entire application. You can wrap each remote in an error boundary and degrade gracefully.

What you complicate

Runtime complexity. You now have multiple independently built JavaScript bundles that need to cooperate at runtime. Shared dependencies must be negotiated on the fly. This is a fundamentally harder problem than a single build.

Shared dependency management. Getting React to load exactly once across five micro-frontends is non-trivial. Version mismatches cause subtle bugs — duplicate React instances break hooks, context, and reconciliation.

Inconsistent UX. Without strong design system governance, each team drifts. Buttons look slightly different. Spacing is inconsistent. Loading states vary. This is an organizational problem, not a technical one, but the architecture makes the drift easier.

Increased infrastructure overhead. Each micro-frontend needs its own build pipeline, CDN path, health checks, and monitoring. You're trading frontend complexity for infrastructure complexity.

Harder debugging across boundaries. When a bug spans two micro-frontends, you're debugging across separately built bundles with separate source maps. Stack traces cross module boundaries in non-obvious ways.

The rule of thumb: if you have fewer than three teams contributing to the frontend, a well-organized monolith with code ownership (via CODEOWNERS) is almost certainly better. Micro-frontends solve organizational scaling problems, not technical ones.

Module Federation Internals

Module Federation is a Webpack 5 feature that allows JavaScript applications to dynamically load code from other independently built applications at runtime. Understanding how it works under the hood is essential for debugging the problems you'll inevitably encounter.

The remoteEntry.js manifest

Every micro-frontend configured as a "remote" produces a remoteEntry.js file during its build. This file is the entry point — a manifest that exposes the remote's modules and declares its shared dependencies.

When the host application loads remoteEntry.js, it doesn't download the remote's entire bundle. It gets a lightweight manifest that describes what modules are available and how to fetch them on demand.

The container interface

Each remote exposes a container with two methods:

  • init(sharedScope) — Called by the host to pass the shared dependency scope to the remote. This is how the host says "here's the React instance I'm using; use this one instead of bundling your own."
  • get(moduleName) — Returns a factory function for the requested module. The host calls this to lazily load a specific component or module from the remote.
In code, the handshake looks roughly like this:

// Simplified view of what happens at runtime
const remoteContainer = window.claimsApp;

// Step 1: Initialize with shared scope
await remoteContainer.init(__webpack_share_scopes__.default);

// Step 2: Get a specific module
const factory = await remoteContainer.get('./ClaimsDashboard');
const Module = factory(); // Module.default is the exposed component

Shared scope negotiation

This is where Module Federation gets interesting. During init(), the host and remote negotiate which shared dependencies to use. The process works roughly like this:

  1. The host registers its shared dependencies in the shared scope (e.g., React 18.2.0).
  2. Each remote's init() checks the shared scope for compatible versions of its own dependencies.
  3. If a compatible version exists, the remote uses it. If not, it falls back to its own bundled copy.

This negotiation happens at runtime, not build time. That's powerful — it means remotes can be built independently — but it also means version conflicts manifest as runtime errors, not build errors.
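To make the mechanics concrete, here is a hypothetical sketch of that negotiation in plain JavaScript — not Webpack's actual implementation, and the caret-range check is deliberately minimal:

```javascript
// Returns true if `version` satisfies a caret range like "^18.2.0".
// (A minimal stand-in for full semver matching.)
function satisfiesCaret(range, version) {
  const [reqMajor, reqMinor, reqPatch] = range.replace('^', '').split('.').map(Number);
  const [major, minor, patch] = version.split('.').map(Number);
  if (major !== reqMajor) return false;
  if (minor !== reqMinor) return minor > reqMinor;
  return patch >= reqPatch;
}

// The host registers what it provides; each remote asks for what it needs.
function resolveShared(sharedScope, name, requiredRange, bundledFallback) {
  const provided = sharedScope[name];
  if (provided && satisfiesCaret(requiredRange, provided.version)) {
    return provided.get(); // use the host's copy
  }
  return bundledFallback(); // fall back to the remote's own bundle
}

// Example: the shell provides React 18.2.0 in the shared scope.
const scope = { react: { version: '18.2.0', get: () => 'host-react' } };

resolveShared(scope, 'react', '^18.2.0', () => 'bundled-react'); // 'host-react'
resolveShared(scope, 'react', '^19.0.0', () => 'bundled-react'); // 'bundled-react'
```

Webpack's real logic handles full semver ranges, multiple share scopes, and eager versus lazy loading, but the shape of the decision is the same.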

Async loading boundary

Remotes are loaded asynchronously. The host app must treat remote modules as async imports, which means wrapping them in React.lazy() or a similar async boundary:

const ClaimsDashboard = React.lazy(() => import('claimsApp/ClaimsDashboard'));

function App() {
  return (
    <React.Suspense fallback={<LoadingSkeleton />}>
      <ErrorBoundary fallback={<RemoteLoadError name="Claims" />}>
        <ClaimsDashboard />
      </ErrorBoundary>
    </React.Suspense>
  );
}

The ErrorBoundary is non-negotiable. If the remote's CDN is down or returns a broken bundle, you need graceful degradation, not a white screen.

Runtime Integration vs. Build-Time Integration

Module Federation is a runtime integration strategy, but it's not the only option. Understanding the spectrum helps you pick the right approach.

Build-time integration

The simplest form: publish each micro-frontend as an npm package, import it in the shell, and build everything together.

{
  "dependencies": {
    "@org/claims-ui": "^2.1.0",
    "@org/underwriting-ui": "^1.8.0",
    "@org/policy-ui": "^3.0.0"
  }
}

Pros: Single build output, no runtime negotiation, standard tooling, easy to debug. Cons: Every change requires the shell to rebuild and redeploy. Teams aren't truly independent — you've just made a monolith with extra steps.

This works well for shared component libraries (design systems, utility packages) but defeats the purpose for micro-frontends that need independent deployment.

Runtime integration

Module Federation: Loads separately built bundles at runtime with shared dependency negotiation. Best balance of independence and integration.

iframes: Complete isolation. Each micro-frontend runs in its own browsing context. No shared state, no CSS conflicts, no dependency negotiation. But communication is limited to postMessage, performance is worse (each iframe is a separate document), and deep linking is painful.

Web Components: Wrap each micro-frontend in a custom element. Framework-agnostic by design. But passing complex data through attributes is clunky, event propagation across shadow DOM boundaries is unintuitive, and SSR support is limited.

Hybrid approaches

In practice, most teams end up with a hybrid:

  • Module Federation for runtime-loaded micro-frontends (the main application verticals).
  • npm packages for shared libraries that change infrequently (design system, auth utilities, API clients).
  • URL-based routing to separate unrelated applications entirely (admin portal vs. customer portal).

The decision tree: if it needs independent deployment and shares the same shell, use Module Federation. If it's a stable utility, publish it as a package. If it's a completely separate application, give it its own domain.
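That decision tree reads naturally as code. A sketch — the flag names are hypothetical:

```javascript
// Hypothetical encoding of the integration decision tree.
function chooseIntegration({ needsIndependentDeploy, sharesShell, isStableUtility }) {
  if (isStableUtility) return 'npm-package';      // stable utility: publish it
  if (!sharesShell) return 'separate-domain';     // separate application: own domain
  if (needsIndependentDeploy) return 'module-federation';
  return 'build-time-integration';                // default: keep it simple
}
```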

Shared Dependency Strategy

This is where most teams get burned. Misconfigured shared dependencies cause duplicate React instances, broken hooks, inconsistent styling, and bundle bloat. Here's the configuration that actually works, with explanations for every option.

Host application config

// webpack.config.js — Shell (Host) Application
const { ModuleFederationPlugin } = require('webpack').container;

module.exports = {
  plugins: [
    new ModuleFederationPlugin({
      name: 'shell',
      remotes: {
        claimsApp: 'claimsApp@https://cdn.example.com/claims/remoteEntry.js',
        underwritingApp: 'underwritingApp@https://cdn.example.com/underwriting/remoteEntry.js',
        policyApp: 'policyApp@https://cdn.example.com/policy/remoteEntry.js',
      },
      shared: {
        react: {
          singleton: true,    // CRITICAL: React must be a single instance.
                              // Duplicate React breaks hooks, context, and reconciliation.
          requiredVersion: '^18.2.0',
          eager: true,        // Shell loads React immediately, not lazily.
                              // Remotes will use the shell's instance.
        },
        'react-dom': {
          singleton: true,
          requiredVersion: '^18.2.0',
          eager: true,
        },
        'react-router-dom': {
          singleton: true,    // Router context must be shared or nested
                              // routers will conflict.
          requiredVersion: '^6.20.0',
        },
        '@org/design-system': {
          singleton: true,
          requiredVersion: '^4.0.0',
          strictVersion: true, // Fail loudly if a remote bundles
                               // an incompatible version. Better to catch
                               // this in staging than in production.
        },
      },
    }),
  ],
};

Remote application config

// webpack.config.js — Claims Remote Application
const { ModuleFederationPlugin } = require('webpack').container;

module.exports = {
  plugins: [
    new ModuleFederationPlugin({
      name: 'claimsApp',
      filename: 'remoteEntry.js',
      exposes: {
        './ClaimsDashboard': './src/ClaimsDashboard',
        './ClaimDetail': './src/ClaimDetail',
        './NewClaim': './src/NewClaim',
      },
      shared: {
        react: {
          singleton: true,
          requiredVersion: '^18.2.0',
          // eager is false (default) for remotes.
          // They'll use whatever React instance
          // the shell already loaded.
        },
        'react-dom': {
          singleton: true,
          requiredVersion: '^18.2.0',
        },
        'react-router-dom': {
          singleton: true,
          requiredVersion: '^6.20.0',
        },
        '@org/design-system': {
          singleton: true,
          requiredVersion: '^4.0.0',
          strictVersion: true,
        },
        // Non-singleton shared deps: multiple versions can coexist
        'lodash-es': {
          requiredVersion: '^4.17.0',
          // No singleton: true. If the shell uses 4.17.21
          // and this remote uses 4.17.20, both are compatible
          // and Webpack will deduplicate. If not, each loads its own.
        },
      },
    }),
  ],
};

Key decisions explained

singleton: true — Forces a single instance across all micro-frontends. React, ReactDOM, and any context-dependent library (router, theme providers) must be singletons. Without this, each micro-frontend bundles its own React, and hooks fail silently.

eager: true — Only set this on the host. It means the shared dependency is included in the initial bundle, not lazy-loaded. The shell needs React immediately to render the chrome (header, nav, footer). Remotes should leave this as false so they defer to the shell's instance.

strictVersion: true — Makes version mismatches a hard error instead of a warning. Use this for your design system and other libraries where version divergence causes visible inconsistency.

requiredVersion — Declares the version range this application expects. If the shared scope has a compatible version, it's used. If not, the application falls back to its own bundled copy (unless strictVersion is set, in which case it throws).

What happens when versions conflict

Suppose the shell provides React 18.2.0 and a remote was built with React 18.3.0 but declares requiredVersion: '^18.2.0'. The check is whether the provided version satisfies the remote's required range: 18.2.0 satisfies '^18.2.0', so the remote uses the shell's copy instead of its own. The caret range is permissive: ^18.2.0 matches >=18.2.0 <19.0.0.

If a remote declares requiredVersion: '~18.3.0' (tilde range: >=18.3.0 <18.4.0) and the shell provides 18.2.0, the requirement isn't satisfied. Since singleton: true, the remote can't load its own copy — it must use the singleton. If strictVersion: true, this throws an error. If strictVersion: false, Webpack logs a warning and uses the singleton anyway, which may or may not work.

This is why aligning on version ranges across teams matters. Document your shared dependency contracts.
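One lightweight way to enforce that contract is a CI check that compares each package's declared ranges against a single agreed source of truth. A sketch — the contract object and its shape are assumptions:

```javascript
// Hypothetical shared-dependency contract: the exact range strings every
// host and remote must declare. Checked in CI against each package.json.
const SHARED_CONTRACT = {
  react: '^18.2.0',
  'react-dom': '^18.2.0',
  'react-router-dom': '^6.20.0',
};

// Returns a list of mismatches between a package's declared dependencies
// and the agreed contract. Empty array means the package is compliant.
function checkContract(pkg) {
  const deps = { ...pkg.peerDependencies, ...pkg.dependencies };
  return Object.entries(SHARED_CONTRACT)
    .filter(([name, range]) => deps[name] !== undefined && deps[name] !== range)
    .map(([name, range]) => `${name}: expected ${range}, found ${deps[name]}`);
}
```

Run it in each remote's CI against its package.json and fail the build on any mismatch.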

Routing Strategies

Routing in a micro-frontend architecture is more nuanced than in a single-page app. The fundamental question is: who owns the router?

App shell routing (recommended)

The shell application owns the top-level router and delegates rendering to the appropriate micro-frontend based on the URL path. Each micro-frontend receives a base path and handles its own sub-routes internally.

// Shell application — src/App.jsx
import React, { Suspense } from 'react';
import { BrowserRouter, Routes, Route, Navigate } from 'react-router-dom';
import { AppShell } from './components/AppShell';
import { LoadingSkeleton } from './components/LoadingSkeleton';
import { RemoteErrorBoundary } from './components/RemoteErrorBoundary';

const ClaimsDashboard = React.lazy(() => import('claimsApp/ClaimsDashboard'));
const UnderwritingHub = React.lazy(() => import('underwritingApp/UnderwritingHub'));
const PolicyAdmin = React.lazy(() => import('policyApp/PolicyAdmin'));

export default function App() {
  return (
    <BrowserRouter>
      <AppShell>
        <Suspense fallback={<LoadingSkeleton />}>
          <Routes>
            <Route
              path="/claims/*"
              element={
                <RemoteErrorBoundary name="Claims">
                  <ClaimsDashboard />
                </RemoteErrorBoundary>
              }
            />
            <Route
              path="/underwriting/*"
              element={
                <RemoteErrorBoundary name="Underwriting">
                  <UnderwritingHub />
                </RemoteErrorBoundary>
              }
            />
            <Route
              path="/policies/*"
              element={
                <RemoteErrorBoundary name="Policy Admin">
                  <PolicyAdmin />
                </RemoteErrorBoundary>
              }
            />
            <Route path="/" element={<Navigate to="/claims" replace />} />
          </Routes>
        </Suspense>
      </AppShell>
    </BrowserRouter>
  );
}

// Claims remote — src/ClaimsDashboard.jsx
import { Routes, Route } from 'react-router-dom';
import { ClaimsList } from './ClaimsList';
import { ClaimDetail } from './ClaimDetail';
import { NewClaim } from './NewClaim';

// This component receives the /claims/* wildcard.
// Internal routes are relative to /claims/.
export default function ClaimsDashboard() {
  return (
    <Routes>
      <Route index element={<ClaimsList />} />
      <Route path=":claimId" element={<ClaimDetail />} />
      <Route path="new" element={<NewClaim />} />
    </Routes>
  );
}

The wildcard /* in the shell route is critical. It tells React Router to pass the remaining path segments to the child, allowing the remote to define its own sub-routes without the shell knowing about them.

Decentralized routing

Each micro-frontend registers its own routes with a shared router. This gives teams more autonomy but makes it harder to reason about the overall URL space. Route conflicts become a real risk.

URL-based integration

The simplest approach: each micro-frontend is a separate SPA at a different path prefix, served by the same reverse proxy. /claims/ serves one SPA, /underwriting/ serves another. No shared runtime at all. You lose smooth client-side transitions between verticals, but you gain complete isolation. This is appropriate when the verticals share minimal UI chrome.

State Sharing Across Micro-Frontends

The default answer should be: don't share state. Each micro-frontend should own its data and fetch what it needs. But sometimes you need lightweight coordination — the authenticated user, feature flags, or cross-cutting notifications.

Custom event bus

The simplest pattern that works. No library dependencies, no shared stores, no coupling.

// shared/eventBus.ts — Published as @org/event-bus npm package
type EventHandler<T = unknown> = (payload: T) => void;

class EventBus {
  private handlers = new Map<string, Set<EventHandler>>();

  on<T>(event: string, handler: EventHandler<T>): () => void {
    if (!this.handlers.has(event)) {
      this.handlers.set(event, new Set());
    }
    this.handlers.get(event)!.add(handler as EventHandler);

    // Return unsubscribe function for cleanup
    return () => {
      this.handlers.get(event)?.delete(handler as EventHandler);
    };
  }

  emit<T>(event: string, payload: T): void {
    this.handlers.get(event)?.forEach((handler) => {
      try {
        handler(payload);
      } catch (err) {
        console.error(`Event handler error for "${event}":`, err);
      }
    });
  }

  // Remove all handlers for an event (useful for cleanup)
  off(event: string): void {
    this.handlers.delete(event);
  }
}

// Singleton instance attached to window to survive module boundaries
const GLOBAL_KEY = '__APP_EVENT_BUS__';

export function getEventBus(): EventBus {
  if (!(window as any)[GLOBAL_KEY]) {
    (window as any)[GLOBAL_KEY] = new EventBus();
  }
  return (window as any)[GLOBAL_KEY];
}

// Usage in Claims micro-frontend
import { useEffect } from 'react';
import { getEventBus } from '@org/event-bus';

export function ClaimDetail({ claimId }: { claimId: string }) {
  useEffect(() => {
    // Notify other micro-frontends that user is viewing a claim
    getEventBus().emit('claim:viewed', { claimId });

    // Listen for external events
    const unsub = getEventBus().on('user:logout', () => {
      // Clean up claim-specific state
    });

    return unsub; // Cleanup on unmount
  }, [claimId]);

  // ... component rendering
}

Shared store via Module Federation

You can share a Zustand or Redux store through Module Federation's shared scope. The host creates the store and exposes it; remotes import it. This works but creates tight coupling — the store's shape becomes a contract that all consumers must agree on. Use this sparingly, typically only for truly global state like the authenticated user session.

URL state

Query parameters and hash fragments are inherently shared across micro-frontends. They're excellent for filter state, pagination, and cross-cutting selections (e.g., ?dateRange=last30d applied to all dashboards). The URL is the most debuggable, shareable, and bookmarkable form of state.
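A small helper keeps query-parameter updates from clobbering parameters owned by other micro-frontends. A sketch using the standard URLSearchParams API (parameter names are illustrative):

```javascript
// Read one query parameter from a search string.
function getParam(search, key) {
  return new URLSearchParams(search).get(key);
}

// Return a new search string with one parameter set (or removed when
// value is null), preserving every other parameter untouched.
function withParam(search, key, value) {
  const params = new URLSearchParams(search);
  if (value === null) {
    params.delete(key);
  } else {
    params.set(key, value);
  }
  return `?${params.toString()}`;
}

// In the browser you would apply the result via history.pushState or your
// router's navigate(), e.g.:
//   navigate({ search: withParam(location.search, 'dateRange', 'last30d') });
```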

When to avoid shared state

If two micro-frontends need the same data, consider whether they should each fetch it independently. Fetching the current user's profile from an API in each micro-frontend is simpler and more resilient than wiring up a shared store. The small overhead of duplicate API calls is usually worth the decoupling.

Testing Micro-Frontends

Testing strategy must account for the fact that each micro-frontend is both a standalone application and a component within a larger system.

Unit testing in isolation

Standard practice. Each micro-frontend has its own test suite running in CI. Use React Testing Library, mock external dependencies, and test components in isolation. Nothing changes here from a monolith.

Contract testing

This is where micro-frontends require new discipline. The contract between host and remote is:

  1. The remote exposes specific modules (e.g., ./ClaimsDashboard).
  2. Those modules accept specific props (or no props).
  3. The remote expects certain shared dependencies at specific versions.

Write contract tests that validate these expectations:

// claims-remote/src/__tests__/contract.test.ts
import { describe, it, expect } from 'vitest';

describe('Claims remote contract', () => {
  it('exports ClaimsDashboard as default export', async () => {
    const mod = await import('../ClaimsDashboard');
    expect(mod.default).toBeDefined();
    expect(typeof mod.default).toBe('function'); // React component
  });

  it('exports ClaimDetail with expected props interface', async () => {
    const mod = await import('../ClaimDetail');
    expect(mod.default).toBeDefined();
  });

  it('declares compatible shared dependency versions', () => {
    const pkg = require('../../package.json');
    // Optional chaining: either section may be absent in package.json
    const react = pkg.dependencies?.['react'] ?? pkg.peerDependencies?.['react'];

    // Ensure this remote's React version is within the agreed range
    expect(react).toMatch(/^\^18\./);
  });
});

These tests are lightweight but they catch the most common integration failures: renamed exports, changed prop interfaces, and version drift.

Integration testing with remotes

Run the shell and all remotes together in a staging environment. Use Playwright or Cypress to verify that navigation between micro-frontends works, shared state propagates correctly, and error boundaries activate when a remote fails.

// e2e/navigation.spec.ts
import { test, expect } from '@playwright/test';

test('navigating between micro-frontends preserves shell state', async ({ page }) => {
  await page.goto('/claims');
  await expect(page.locator('[data-testid="claims-list"]')).toBeVisible();

  // Navigate to a different micro-frontend
  await page.click('[data-testid="nav-underwriting"]');
  await expect(page.locator('[data-testid="underwriting-hub"]')).toBeVisible();

  // Shell header should still show the authenticated user
  await expect(page.locator('[data-testid="user-menu"]')).toContainText('Jane Doe');
});

test('remote failure shows error boundary, not white screen', async ({ page, context }) => {
  // Block the remote's entry point to simulate CDN failure
  await context.route('**/underwriting/remoteEntry.js', (route) =>
    route.fulfill({ status: 500 })
  );

  await page.goto('/underwriting');
  await expect(page.locator('[data-testid="remote-error"]')).toContainText(
    'Underwriting is temporarily unavailable'
  );

  // Other micro-frontends should still work
  await page.click('[data-testid="nav-claims"]');
  await expect(page.locator('[data-testid="claims-list"]')).toBeVisible();
});

E2E testing at scale

Full E2E suites across all micro-frontends are expensive to run and maintain. Adopt a testing pyramid: heavy unit tests per remote, medium contract tests, light integration tests for critical cross-boundary flows, and minimal E2E for smoke tests.

Deployment Independence

The whole point of micro-frontends is independent deployment. Here's how to set up the infrastructure.

Separate CI/CD pipelines

Each micro-frontend has its own repository (or monorepo directory with isolated CI triggers) and its own pipeline:

# .github/workflows/claims-deploy.yml
name: Deploy Claims Micro-Frontend

on:
  push:
    branches: [main]
    paths:
      - 'packages/claims/**'

jobs:
  build-and-deploy:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4

      - name: Install and build
        working-directory: packages/claims
        run: |
          npm ci
          npm run build
          npm run test

      - name: Deploy to CDN
        run: |
          aws s3 sync packages/claims/dist/ \
            s3://frontend-assets/claims/ \
            --cache-control "public, max-age=31536000, immutable"

          # remoteEntry.js gets a short cache TTL — it's the manifest
          # that points to content-hashed chunks.
          aws s3 cp packages/claims/dist/remoteEntry.js \
            s3://frontend-assets/claims/remoteEntry.js \
            --cache-control "public, max-age=60"

      - name: Invalidate CDN cache for remoteEntry.js
        run: |
          aws cloudfront create-invalidation \
            --distribution-id ${{ secrets.CF_DISTRIBUTION_ID }} \
            --paths "/claims/remoteEntry.js"

The critical detail: remoteEntry.js must have a short cache TTL (or be invalidated on deploy) because it's the manifest that the shell fetches to discover the latest chunks. The chunks themselves are content-hashed and can be cached aggressively.

Versioned remoteEntry.js

Some teams version their remote entry points to enable pinning:

https://cdn.example.com/claims/v2.3.1/remoteEntry.js
https://cdn.example.com/claims/latest/remoteEntry.js

The shell can point to latest for continuous deployment or pin to a specific version during incidents. This gives you a manual rollback lever without redeploying the shell.
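The pinning logic in the shell can be trivial. A sketch — the CDN base URL and config source are hypothetical:

```javascript
// Sketch: build a remote entry URL from an optional version pin.
// Pins would come from runtime config (e.g. a flag service), so flipping
// one during an incident doesn't require redeploying the shell.
const CDN_BASE = 'https://cdn.example.com';

function remoteEntryUrl(name, pin) {
  return `${CDN_BASE}/${name}/${pin ?? 'latest'}/remoteEntry.js`;
}
```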

Blue-green deployment

Each micro-frontend can implement blue-green deploys independently. Deploy the new version to a new CDN path, update remoteEntry.js to point to the new chunks, and keep the old chunks available for in-flight users. Since the shell loads remoteEntry.js on each page load (or on navigation), users pick up new versions naturally without hard refreshes.

Rollback

Rollback is redeploying the previous remoteEntry.js. Because old content-hashed chunks are still on the CDN (don't delete them immediately after deploy), reverting the manifest is instantaneous. Automate this with a health check: if error rates spike after deploy, revert remoteEntry.js to the previous version.
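The trigger itself can be a simple threshold rule evaluated against error-boundary telemetry. A sketch — the thresholds are illustrative, not prescriptive:

```javascript
// Sketch: decide whether to auto-revert remoteEntry.js after a deploy.
// Thresholds are assumptions, not a recommendation.
function shouldRollback({ errorCount, pageLoads, minutesSinceDeploy }) {
  const WINDOW_MINUTES = 5;          // only watch the post-deploy window
  const ERROR_RATE_THRESHOLD = 0.02; // 2% of page loads
  if (minutesSinceDeploy > WINDOW_MINUTES) return false;
  if (pageLoads === 0) return false; // no traffic yet, no signal
  return errorCount / pageLoads > ERROR_RATE_THRESHOLD;
}
```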

Case Study: Enterprise Insurance Platform

To make this concrete, here's how these patterns come together in a real system.

Context

A large insurance carrier needed to modernize their agent-facing platform. Six product teams each owned a vertical:

| Team | Vertical | Complexity |
| --- | --- | --- |
| Team A | Claims Processing | High — complex forms, document upload, adjudication workflows |
| Team B | Underwriting | High — risk models, third-party integrations, decision trees |
| Team C | Policy Administration | Medium — CRUD-heavy, lifecycle management |
| Team D | Agent Portal (Shell) | Medium — auth, navigation, notifications, global search |
| Team E | Customer Portal | Low–Medium — read-heavy, self-service |
| Team F | Reporting & Analytics | Medium — data visualization, export, scheduled reports |

Teams A through C and F were contributing to the same single-page application. Deployments required coordinating across all four teams. A merge conflict in a shared utility file could block three teams simultaneously. Stripe Systems was brought in to architect the migration to micro-frontends using Module Federation.

Webpack configuration

The shell application (Agent Portal) was configured as the host:

// packages/shell/webpack.config.js
const { ModuleFederationPlugin } = require('webpack').container;

module.exports = {
  output: {
    publicPath: 'https://cdn.insurance-platform.com/shell/',
    uniqueName: 'shell',
  },
  plugins: [
    new ModuleFederationPlugin({
      name: 'shell',
      remotes: {
        claimsApp: 'claimsApp@https://cdn.insurance-platform.com/claims/remoteEntry.js',
        underwritingApp: 'underwritingApp@https://cdn.insurance-platform.com/underwriting/remoteEntry.js',
        policyApp: 'policyApp@https://cdn.insurance-platform.com/policy/remoteEntry.js',
        reportingApp: 'reportingApp@https://cdn.insurance-platform.com/reporting/remoteEntry.js',
      },
      shared: {
        react: { singleton: true, requiredVersion: '^18.2.0', eager: true },
        'react-dom': { singleton: true, requiredVersion: '^18.2.0', eager: true },
        'react-router-dom': { singleton: true, requiredVersion: '^6.20.0', eager: true },
        '@insurance/design-system': {
          singleton: true,
          requiredVersion: '^4.0.0',
          strictVersion: true,
        },
        '@insurance/auth-client': {
          singleton: true,
          requiredVersion: '^2.0.0',
        },
        'date-fns': { requiredVersion: '^3.0.0' },
      },
    }),
  ],
};

The Claims remote:

// packages/claims/webpack.config.js
const { ModuleFederationPlugin } = require('webpack').container;

module.exports = {
  output: {
    publicPath: 'https://cdn.insurance-platform.com/claims/',
    uniqueName: 'claimsApp',
  },
  plugins: [
    new ModuleFederationPlugin({
      name: 'claimsApp',
      filename: 'remoteEntry.js',
      exposes: {
        './ClaimsDashboard': './src/pages/ClaimsDashboard',
        './ClaimDetail': './src/pages/ClaimDetail',
        './NewClaim': './src/pages/NewClaim',
      },
      shared: {
        react: { singleton: true, requiredVersion: '^18.2.0' },
        'react-dom': { singleton: true, requiredVersion: '^18.2.0' },
        'react-router-dom': { singleton: true, requiredVersion: '^6.20.0' },
        '@insurance/design-system': {
          singleton: true,
          requiredVersion: '^4.0.0',
          strictVersion: true,
        },
        '@insurance/auth-client': {
          singleton: true,
          requiredVersion: '^2.0.0',
        },
        // React Query is shared but NOT singleton.
        // Claims uses v5, Underwriting is still on v4.
        // Both can coexist at runtime.
        '@tanstack/react-query': {
          requiredVersion: '>=4.0.0',
          singleton: false,
        },
        'date-fns': { requiredVersion: '^3.0.0' },
      },
    }),
  ],
};

The React Query version problem

This is a real issue that surfaces in almost every Module Federation project. The Claims team upgraded to @tanstack/react-query v5 for its improved mutation handling and streaming support. The Underwriting team was mid-sprint and couldn't upgrade — their custom hooks relied on v4's onSuccess callback in useQuery, which was removed in v5.

The solution: declare @tanstack/react-query as a shared dependency with singleton: false. This allows Webpack to deduplicate when versions are compatible but load separate instances when they aren't. Claims loads v5, Underwriting loads v4, and they don't interfere with each other because React Query's state is scoped to its own QueryClient instance within each micro-frontend.

The tradeoff is bundle size — users who navigate from Claims to Underwriting load both versions. The team accepted this because the combined overhead was roughly 45 KB gzipped, and the alternative (blocking the Claims upgrade until all teams could coordinate) would have stalled development for weeks.

This is the right call in most cases. Singleton enforcement should be reserved for libraries that genuinely break when duplicated (React, ReactDOM, routers, theme providers). Everything else should default to singleton: false and let Webpack deduplicate opportunistically.

Deployment pipeline

Each team deployed independently through GitHub Actions. The pipeline for each remote followed a consistent pattern:

  1. Build — npm ci && npm run build in the remote's package directory.
  2. Test — Unit tests + contract tests. Contract tests validated that exposed modules still matched the expected interface.
  3. Deploy chunks — Upload content-hashed files to S3 with immutable cache headers.
  4. Deploy remoteEntry.js — Upload the manifest with a 60-second cache TTL.
  5. CDN invalidation — Invalidate the remote entry path in CloudFront.
  6. Smoke test — A Playwright test loaded the staging shell, navigated to the deployed remote, and verified basic rendering.

The shell itself deployed independently as well but changed infrequently — the navigation, auth flow, and global layout were stable. Shell deploys required a broader smoke test across all verticals.

Stripe Systems set up automated rollback triggers: if the error rate for a specific remote (tracked via error boundary telemetry) exceeded 2% of page loads within 5 minutes of a deploy, the pipeline automatically reverted remoteEntry.js to the previous version and paged the owning team.

Production metrics

After three months in production, the team tracked these metrics:

Page load time by vertical (P75, measured via Navigation Timing API):

| Vertical | Initial Load | Subsequent Navigation |
| --- | --- | --- |
| Shell + Claims | 1.8s | 420ms |
| Shell + Underwriting | 2.1s | 380ms |
| Shell + Policy | 1.4s | 290ms |
| Shell + Reporting | 2.4s | 510ms |

Initial load includes fetching the shell, remoteEntry.js, and the remote's chunks. Subsequent navigation (e.g., Claims → Underwriting) is faster because the shell, React, and shared dependencies are already loaded — only the new remote's chunks are fetched.

Reporting had the highest load time due to its charting library (Recharts + D3 subset) adding ~180 KB gzipped. The team later addressed this by code-splitting the charting components so they loaded only when the user opened a specific report.

Error isolation in practice: In week six, the Underwriting team deployed a build that threw a runtime error when a specific third-party risk API returned an unexpected response shape. The error boundary caught it, displayed a fallback UI, and the other five verticals continued operating normally. The Underwriting team's automated rollback triggered within three minutes. In the old monolith, this error would have crashed the entire application for all users.

Deploy frequency: Before the migration, the monolith deployed 2–3 times per week with coordinated release windows. After migration, individual teams deployed 4–8 times per week each. The Claims team, which had the most active feature development, deployed 32 times in the first month — an average of more than once per working day.

Closing Thoughts

Module Federation is the most practical tool available for micro-frontends in Webpack-based React applications. It solves the deployment coupling problem without requiring iframes or separate SPAs.

But it's infrastructure for organizational scaling, not a performance optimization or a code quality improvement. A well-structured monolith with clear module boundaries is easier to build, test, debug, and operate. Adopt micro-frontends when you have the organizational pain that justifies the technical complexity — multiple teams blocked by shared deployments, divergent release cadences, or genuinely different technology requirements.

If you do adopt it, invest heavily in three things: shared dependency contracts (document and enforce version ranges), error boundaries everywhere (remotes will fail), and per-remote observability (you need to know which micro-frontend is causing errors, not just that errors exist). Get those right, and the architecture delivers on its promise of independent, autonomous teams shipping at their own pace.
