Modern web apps in 2026 are expected to work even when the network is unreliable. Users start actions on trains, in low-signal offices, and across flaky mobile hotspots. If your app drops writes or forces users to retry manually, trust erodes fast. In this practical guide, you will build a production-ready background sync queue in JavaScript using Service Workers, IndexedDB, and the Web Locks API, so writes are durable, ordered, and automatically retried when connectivity returns.
Why this architecture works in production
A robust client-side write pipeline needs four things: persistence, retry control, deduplication, and safe concurrency. Here is the stack:
- IndexedDB for durable queued jobs and metadata.
- Service Worker to process sync in the background.
- Web Locks API to prevent parallel tabs from processing the same queue simultaneously.
- Idempotency keys so server writes are safe to retry.
This pattern works for forms, comments, analytics events, inventory updates, or any user action that must not be lost.
System design
Queue lifecycle
- User action creates a job with payload + idempotency key.
- Job is stored in IndexedDB with status=pending.
- App asks the Service Worker to sync now (or via periodic/background sync).
- Worker acquires a lock and processes pending jobs in order.
- Server responds; job is marked done or rescheduled with backoff.
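The lifecycle above can be sketched as a pure state transition over a job record, using the same status values ('pending' | 'done' | 'failed') and attempt budget the rest of this guide uses. This is an illustrative helper, not part of the queue modules below:

```javascript
// Sketch only: the queue lifecycle as a pure reducer. MAX_ATTEMPTS matches
// the retry budget used in Step 4.
const MAX_ATTEMPTS = 8;

function step(job, event, rescheduleAt = Date.now()) {
  switch (event) {
    case 'succeeded':
      return { ...job, status: 'done' };
    case 'transient-failure':
      // Reschedule with backoff until the attempt budget is exhausted.
      return job.attempts < MAX_ATTEMPTS
        ? { ...job, attempts: job.attempts + 1, runAt: rescheduleAt }
        : { ...job, status: 'failed' };
    case 'permanent-failure':
      return { ...job, status: 'failed' };
  }
}
```

Keeping the transition pure makes it trivial to unit-test the retry policy without touching IndexedDB or the network.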
Step 1: IndexedDB queue layer
Use a tiny storage module. Raw IndexedDB works, but the idb library keeps the code clean.
import { openDB } from 'idb';
const dbPromise = openDB('sync-queue-v1', 1, {
upgrade(db) {
const jobs = db.createObjectStore('jobs', { keyPath: 'id' });
jobs.createIndex('byStatus', 'status');
jobs.createIndex('byRunAt', 'runAt');
}
});
export async function enqueueJob({ type, payload }) {
const db = await dbPromise;
const now = Date.now();
const id = crypto.randomUUID();
const job = {
id,
type,
payload,
idempotencyKey: id,
status: 'pending',
attempts: 0,
runAt: now,
createdAt: now,
updatedAt: now
};
await db.put('jobs', job);
return job;
}
export async function dueJobs(limit = 20) {
const db = await dbPromise;
const tx = db.transaction('jobs', 'readonly');
const idx = tx.store.index('byRunAt');
const now = Date.now();
const jobs = [];
let cursor = await idx.openCursor(IDBKeyRange.upperBound(now));
while (cursor && jobs.length < limit) {
const value = cursor.value;
if (value.status === 'pending') jobs.push(value); // runAt <= now guaranteed by the key range
cursor = await cursor.continue();
}
await tx.done;
return jobs;
}
export async function updateJob(job) {
const db = await dbPromise;
job.updatedAt = Date.now();
await db.put('jobs', job);
}
export async function removeJob(id) {
const db = await dbPromise;
await db.delete('jobs', id);
}
Step 2: Register Service Worker and trigger sync
From your app entry point, register the worker and request a sync after each enqueue.
if ('serviceWorker' in navigator) {
await navigator.serviceWorker.register('/sw.js', { type: 'module' });
}
export async function saveComment(postId, text) {
await enqueueJob({ type: 'comment.create', payload: { postId, text } });
const reg = await navigator.serviceWorker.ready;
if ('sync' in reg) {
await reg.sync.register('process-sync-queue');
} else {
// Fallback for browsers without Background Sync. Note: controller is null
// until the worker controls the page (e.g. on first load), hence the optional chain.
navigator.serviceWorker.controller?.postMessage({ kind: 'PROCESS_QUEUE' });
}
}
Step 3: Single-run queue processor with Web Locks
Multiple tabs can race. The lock guarantees one processor at a time.
// sw.js
import { dueJobs, updateJob, removeJob } from './queue-db.js';
self.addEventListener('sync', (event) => {
if (event.tag === 'process-sync-queue') {
event.waitUntil(processQueue());
}
});
self.addEventListener('message', (event) => {
if (event.data?.kind === 'PROCESS_QUEUE') {
event.waitUntil(processQueue());
}
});
async function processQueue() {
if (!('locks' in self.navigator)) {
return processQueueUnlocked();
}
return self.navigator.locks.request('sync-queue-lock', async () => {
await processQueueUnlocked();
});
}
async function processQueueUnlocked() {
const jobs = await dueJobs(25);
for (const job of jobs) {
await processJob(job);
}
}
Step 4: Retry with exponential backoff + jitter
Retry only transient failures (timeouts, 429, 5xx). Permanent failures should be marked and surfaced to the UI.
function nextRunAt(attempts) {
const base = 1000; // 1s
const cap = 5 * 60 * 1000; // 5m
const exp = Math.min(cap, base * (2 ** attempts));
const jitter = Math.floor(Math.random() * 300);
return Date.now() + exp + jitter;
}
function isTransient(status) {
return status === 408 || status === 425 || status === 429 || (status >= 500 && status <= 599);
}
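To see the curve nextRunAt produces, here is the same backoff math isolated as a pure delay calculation, with the jitter omitted so the numbers are easy to read:

```javascript
// Same exponential backoff as nextRunAt, returned as a raw delay in ms
// (jitter omitted for clarity).
function backoffDelay(attempts, base = 1000, cap = 5 * 60 * 1000) {
  return Math.min(cap, base * 2 ** attempts);
}

// attempts 1..10: 2s, 4s, 8s, 16s, 32s, 64s, 128s, 256s, then capped at 300s
const schedule = Array.from({ length: 10 }, (_, a) => backoffDelay(a + 1));
```

The cap matters: without it, attempt 9 would already wait over eight minutes, and the jitter spreads retries so many clients coming back online at once do not stampede the server.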
async function processJob(job) {
try {
const res = await fetch('/api/comments', {
method: 'POST',
headers: {
'Content-Type': 'application/json',
'Idempotency-Key': job.idempotencyKey
},
body: JSON.stringify(job.payload)
});
if (res.ok) {
await removeJob(job.id);
return;
}
if (isTransient(res.status) && job.attempts < 8) {
job.attempts += 1;
job.runAt = nextRunAt(job.attempts);
await updateJob(job);
return;
}
job.status = 'failed';
await updateJob(job);
} catch {
if (job.attempts < 8) {
job.attempts += 1;
job.runAt = nextRunAt(job.attempts);
await updateJob(job);
} else {
job.status = 'failed';
await updateJob(job);
}
}
}
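processJob above posts every job to a single endpoint; in a real queue with several job types you would route by job.type. A minimal sketch of that dispatch table — the types and endpoint paths here are illustrative assumptions, not a fixed API:

```javascript
// Sketch: map each job type to a request descriptor. Types and URLs are
// examples only; extend to match your own write paths.
const ROUTES = {
  'comment.create': { method: 'POST', url: '/api/comments' },
  'comment.delete': { method: 'DELETE', url: '/api/comments' },
  'analytics.event': { method: 'POST', url: '/api/events' }
};

function routeFor(job) {
  const route = ROUTES[job.type];
  if (!route) throw new Error(`unknown job type: ${job.type}`);
  return route;
}
```

processJob would then call fetch(route.url, { method: route.method, ... }) instead of the hardcoded endpoint; throwing on an unknown type surfaces stale jobs left behind by old app versions.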
Server-side idempotency (critical)
Client retries are safe only if the server de-duplicates requests. Store each idempotency key with a TTL and return the previous result for duplicates.
// Express-style pseudo code
app.post('/api/comments', async (req, res) => {
const key = req.get('Idempotency-Key');
if (!key) return res.status(400).json({ error: 'missing key' });
const existing = await redis.get(`idem:${key}`);
if (existing) return res.status(200).json(JSON.parse(existing));
const created = await createComment(req.body);
await redis.set(`idem:${key}`, JSON.stringify(created), { EX: 86400 });
return res.status(201).json(created);
});
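One caveat: the get-then-set above is not atomic, so two concurrent retries can both miss the cache and both create the resource. Reserving the key first closes the gap (with Redis, a single SET with the NX flag). Here is a minimal in-memory sketch of that reserve-first shape — the Map stands in for Redis and is for illustration only:

```javascript
// In-memory stand-in for the Redis key space, for illustration only.
const store = new Map();

// Reserve a key; returns true only for the first caller. With Redis this
// would be one atomic command: SET key value NX EX 86400.
function reserve(key) {
  if (store.has(key)) return false; // duplicate request
  store.set(key, { state: 'inflight' });
  return true;
}

function complete(key, result) {
  store.set(key, { state: 'done', result });
}

function lookup(key) {
  return store.get(key);
}
```

A duplicate that arrives while the first request is still in flight sees state 'inflight' and can be answered with a retry-later response rather than a second write.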
Operational tips for 2026 deployments
- Telemetry: Track queue depth, retries, age of oldest pending job, and permanent failure count.
- UX: Show optimistic updates with “Syncing…” state and a “Retry now” action.
- Schema versioning: Version IndexedDB stores and migrate carefully during app upgrades.
- Data limits: Enforce payload caps and prune stale failed jobs.
- Security: Encrypt sensitive-at-rest payloads where required and avoid storing secrets client-side.
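The pruning rule from the data-limits tip can be expressed as a small predicate; the 7-day cutoff below is an arbitrary example, not a recommendation:

```javascript
// Sketch: decide whether a job is stale enough to prune. The default cutoff
// is illustrative; tune it per product.
function shouldPrune(job, now = Date.now(), maxAgeMs = 7 * 24 * 60 * 60 * 1000) {
  return job.status === 'failed' && now - job.updatedAt > maxAgeMs;
}
```

A periodic pass over the jobs store applying this predicate (and calling removeJob for matches) keeps IndexedDB from growing without bound.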
Common pitfalls
1) Processing queue on every tab without lock
This causes duplicates and race conditions. Always gate queue processing with Web Locks or leader election fallback.
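Where Web Locks is unavailable, a lease record works as the leader-election fallback: a tab may process only if the stored lease is absent, expired, or its own. A sketch of just the decision logic — the lease record shape ({ holder, expiresAt }) is an assumption, and the record itself would live in shared storage such as IndexedDB:

```javascript
// Sketch: lease-based leader election fallback. Only the decision is shown;
// persisting and renewing the lease is left to the caller.
function mayLead(lease, tabId, now = Date.now()) {
  if (!lease) return true;                 // no leader yet
  if (lease.holder === tabId) return true; // renewing our own lease
  return lease.expiresAt <= now;           // previous leader expired or went away
}
```

Keep leases short (a few seconds) and renew them while processing, so a crashed tab releases leadership quickly.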
2) Infinite retries on validation errors
Do not retry 400/401/403/404 blindly. Mark as failed and surface remediation to users.
3) No kill switch
Feature-flag queue processing so you can disable it quickly if backend incidents occur.
Conclusion
Reliable offline-to-online write sync is now table stakes for serious web apps. With IndexedDB for durability, Service Workers for background processing, Web Locks for concurrency safety, and server-side idempotency for exactly-once behavior in practice, you can ship apps that stay trustworthy under real network conditions. Start with one write path, instrument it, and expand gradually across your product.
