The same business logic is copy-pasted 4 times across:
| Target | Path | Language | Storage |
|---|---|---|---|
| pb-node (reference) | packages/pb-node/src/ | JS | Kysely + SQLite |
| Cloudflare Workers | deploy/cloudflare/src/ | JS | D1 (raw SQL) |
| Supabase Edge | deploy/supabase/.../commit-reveal/ | TS | Kysely + Postgres |
| Google Cloud Run | deploy/google-cloud/src/ | JS | Firestore |
| File | Shared Logic? | Duplicated 4x? | Notes |
|---|---|---|---|
| beacon.js | 100% identical | Yes | Drand fetching, retry, caching, computeSelection() |
| signing.js | ~98% identical | Yes | Only WASM import path differs |
| routes/commitments.js | ~95% identical | Yes | Same validation, same response shape. Only diff: how config is accessed (c.get('config') vs module import) |
| routes/reveals.js | ~95% identical | Yes | Same validation, same data verification. Same config access diff |
| routes/server-info.js | ~98% identical | Yes | Trivial endpoint, same config access diff |
| config.js | ~70% similar | Yes | Same shape, different defaults (port, operator, storage-specific fields) |
| app.js | ~80% similar | Yes | Same route wiring, differs in middleware (config injection, storage creation) |
| storage/* | Different | No | Each target has a unique storage adapter (D1, Firestore, Kysely/SQLite, Kysely/Postgres) |
| Entry point | Different | No | serve() vs export default { fetch } vs Deno.serve() |
| WASM loading | Different | No | npm import vs ESM bundle vs base64-inlined |
### beacon.js — Move entirely to shared code

Currently identical in all 4 targets. Zero platform-specific code.
Functions to share:

- fetchDrandRound(round) — pure fetch + JSON parse
- getBeaconOutput(storage, beacon, registeredAt, maxWaitMs) — uses the storage interface only
- getArrivalBeacon(storage, beacon, timestamp) — uses the storage interface only
- computeSelection(commitment, beaconOutput) — pure WASM calls

Dependencies: storage.getCachedBeacon(), storage.cacheBeacon(), and the WASM functions — all already abstracted behind the same interface in every target.
### signing.js — Move entirely to shared code

Functions to share:

- initServerKey(storage) — lazy-init pattern, uses storage.getServerIdentity() / storage.setServerIdentity()
- getServerKey() — returns the cached DID key
- serverSign(obj) — signs JSON with the server key
- signResponse(obj) — wraps an object with server_key + server_signature

Only diff today: the WASM import path (import * as wasm from 'pb-wasm' vs from './wasm/pb_wasm.js'). Fix by having the shared module accept WASM as a parameter, or by standardizing the import.
### routes/commitments.js — Move business logic to shared code

The handler body is ~95% identical. Two minor diffs:
| Diff | Current | Fix |
|---|---|---|
| Config access | c.get('config') (CF/GC) vs import { config } (pb-node) | Always use c.get('config') — pb-node middleware already sets storage, just also set config |
| WASM import | Different paths | Pass WASM via param or standardize import |
Everything else — validation, signature check, duplicate check, beacon fetch, selection compute, signer activity, response signing — is identical.
### routes/reveals.js — Move business logic to shared code

Same situation as commitments. The data verification logic (fetch URLs, hash, compare) is identical across all 4 targets, with the same two minor diffs (config access, WASM import).
### routes/server-info.js — Move entirely to shared code

A trivial 10-line handler. The only diff is the config access pattern.
Each target has a fundamentally different storage backend:
| Target | Adapter | Why it's unique |
|---|---|---|
| pb-node | pb-storage (Kysely + SQLite) | Uses Kysely ORM, better-sqlite3 driver |
| Cloudflare | D1Storage class | D1 API: .prepare().bind().first(), .batch(), raw SQL |
| Supabase | KyselyStorage (inlined) | Kysely + pg driver, JSONB columns, Postgres-specific upsert syntax |
| Google Cloud | FirestoreStorage class | Firestore NoSQL: .doc().get(), .where(), .create(), gRPC error codes |
However, all adapters implement the same interface. This is already the right pattern — it just needs to be formalized as a documented interface/type.
Each runtime has a different startup mechanism:

- pb-node: @hono/node-server → serve({ fetch: app.fetch, port })
- Cloudflare: export default { fetch: app.fetch }
- Supabase: Deno.serve(app.fetch)
- Google Cloud: @hono/node-server (same as pb-node but different config)

WASM loading also differs per target:

- pb-node: import * as wasm from 'pb-wasm' (npm workspace link)
- Cloudflare: ./wasm/pb_wasm.js (ESM bundle)
- Supabase: await initWasm() at startup (base64-inlined)
- Google Cloud: ./wasm/pb_wasm.js (bundled)

Config differs only in defaults: port, operator name, and storage-specific fields (dbPath, projectId, firestoreDatabase) vary per target, but the config shape is the same.
```
packages/
  pb-server-core/              # NEW shared package
    src/
      beacon.js                # Drand fetching + selection (moved from all 4)
      signing.js               # Server key mgmt + response signing (moved from all 4)
      routes/
        commitments.js         # Shared route handler logic
        reveals.js             # Shared route handler logic
        server-info.js         # Shared route handler logic
      types.d.ts               # StorageInterface definition
      index.js                 # Re-exports everything
  pb-node/                     # EXISTING — becomes thin wrapper
    src/
      server.js                # Entry point (unchanged)
      app.js                   # Wires shared routes + SQLite storage
      config.js                # Node-specific defaults
  pb-storage/                  # EXISTING — unchanged (Kysely adapters)
deploy/
  cloudflare/
    src/
      index.js                 # Entry point (unchanged)
      app.js                   # Wires shared routes + D1 storage
      config.js                # CF-specific defaults
      storage/d1.js            # D1 adapter (unchanged)
      wasm/                    # CF-specific WASM bundle
  supabase/
    ...functions/commit-reveal/
      index.ts                 # Entry point (unchanged)
      app.ts                   # Wires shared routes + Postgres storage
      config.ts                # Supabase-specific defaults
      storage/postgres.ts      # Postgres adapter (unchanged)
      wasm/                    # Base64-inlined WASM
  google-cloud/
    src/
      server.js                # Entry point (unchanged)
      app.js                   # Wires shared routes + Firestore storage
      config.js                # GC-specific defaults
      storage/firestore.js     # Firestore adapter (unchanged)
      wasm/                    # Bundled WASM
```
Make pb-node set config on context like the others already do. This eliminates the only diff in route handlers.
Files changed: packages/pb-node/src/app.js (add c.set('config', config) to middleware)
### pb-server-core package

Move these files from packages/pb-node/src/ into packages/pb-server-core/src/:

- beacon.js
- signing.js
- routes/commitments.js
- routes/reveals.js
- routes/server-info.js

Adjust imports: WASM functions should be injected or imported from a common path.
**Option A — Dependency injection:** Each target passes the WASM module to shared functions at init time:

```js
import * as wasm from 'pb-wasm'; // or './wasm/pb_wasm.js'
import { initCore } from 'pb-server-core';

initCore({ wasm });
```
**Option B — Re-export from each target:** Each target has a wasm.js that re-exports from whatever source, and shared code imports from a relative path that each target provides.

**Recommendation:** Option A — cleanest, works across all runtimes.
Formalize the storage contract that all adapters already implement:
```ts
interface StorageInterface {
  getServerIdentity(): Promise<Keypair | null>;
  setServerIdentity(keypair: Keypair): Promise<void>;
  putCommitment(id: string, commitment: object, registeredAt: string): Promise<void>;
  getCommitment(id: string): Promise<{ commitment: object, registered_at: string } | null>;
  putSelection(hash: string, selection: object, round: number): Promise<void>;
  getSelection(hash: string): Promise<object | null>;
  putReveal(id: string, hash: string, reveal: object, registeredAt: string): Promise<void>;
  getReveals(hash: string): Promise<object[]>;
  getSignerActivity(key: string): Promise<{ commitments_past_hour: number, commitments_past_month: number }>;
  getCachedBeacon(type: string, round: number): Promise<object | null>;
  cacheBeacon(type: string, round: number, output: object): Promise<void>;
  // Optional (not all targets implement):
  putMirror?(id: string, hash: string, mirror: object, registeredAt: string): Promise<void>;
  getMirrors?(hash: string): Promise<object[]>;
  mirrorExists?(hash: string, url: string): Promise<boolean>;
}
```
Each target's app.js becomes a thin composition:
```js
import { createRoutes } from 'pb-server-core';
import { D1Storage } from './storage/d1.js';
import * as wasm from './wasm/pb_wasm.js';

const routes = createRoutes({ wasm });
// ... wire into Hono app with platform storage
```
Remove from each deploy target:
- beacon.js (~140 lines × 3 = 420 lines deleted)
- signing.js (~56 lines × 3 = 168 lines deleted)
- routes/commitments.js (~190 lines × 3 = 570 lines deleted)
- routes/reveals.js (~180 lines × 3 = 540 lines deleted)
- routes/server-info.js (~24 lines × 3 = 72 lines deleted)

Total: ~1,770 lines of duplicated code eliminated.
- **Supabase is TypeScript** — the shared package needs to be JS with .d.ts types, or TS compiled to JS; Supabase can then import the compiled JS. Alternatively, write the shared code in TS and compile it for the other targets.
- **Cloudflare bundling** — Workers use wrangler, which bundles via esbuild. Importing from a workspace package should work fine ("pb-server-core": "file:../../packages/pb-server-core").
- **Supabase Deno imports** — Supabase Edge Functions run on Deno, which uses URL imports or import maps. This may need an import map entry for the shared package, or vendoring the compiled output.
- **Mirrors route** — currently only in pb-node. When shared, make it optional (mounted only if storage implements putMirror/getMirrors).
- **Testing** — after the refactor, run all existing tests against all targets to verify no regressions. The pb-node test suite (with SERVER_URL remote mode) can validate any target.
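For the Deno import-map route mentioned above, the entry could look like the following. The target path is hypothetical; it depends on where the compiled pb-server-core output ends up being vendored relative to the function:

```json
{
  "imports": {
    "pb-server-core": "../_shared/pb-server-core/index.js"
  }
}
```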