folksbase uses a handful of well-known design patterns consistently across the backend, frontend, and shared packages. This page documents each pattern, where it appears, and why it was chosen.

Backend Patterns

Repository Pattern

Every database query lives in a repository file. Repositories are the only layer that imports from @folksbase/db — no other code constructs SQL.
// repositories/contacts.repository.ts — SQL only, no business logic
export async function findMany(
  workspaceId: string,
  params: ContactListParams,
): Promise<PaginatedResponse<Contact>> {
  const limit = params.limit ?? 50; // assumed default page size
  const conditions = [eq(contacts.workspace_id, workspaceId)];

  if (params.cursor) {
    conditions.push(gt(contacts.id, params.cursor));
  }

  const rows = await db
    .select()
    .from(contacts)
    .where(and(...conditions))
    .orderBy(asc(contacts.id))
    .limit(limit);

  const total = await countByWorkspaceId(workspaceId); // hypothetical cached count helper (see Cache-Aside below)

  return { data: rows, nextCursor: rows[rows.length - 1]?.id ?? null, total };
}
This keeps queries isolated and testable. When a background job needs the same data as a route handler, both call the same repository method instead of duplicating SQL. Repositories in the codebase: contacts, imports, exports, tags, stats, settings, workspaces.

Service Layer

Services sit between routes and repositories. They contain business logic — validation, orchestration across multiple repositories, external API calls — but never touch HTTP concerns.
// services/contacts.service.ts — business logic, no HTTP
export async function fetchGravatarUrl(email: string): Promise<string | null> {
  const hash = md5(normalizeEmail(email));
  const url = `https://www.gravatar.com/avatar/${hash}?d=404`;

  try {
    const response = await fetch(url, {
      method: "HEAD",
      signal: AbortSignal.timeout(2000),
    });
    return response.ok ? url : null;
  } catch (err) {
    logger.warn("Gravatar fetch failed", { email, error: err instanceof Error ? err.message : "Unknown error" });
    return null;
  }
}
The key constraint: services return data or throw errors. They never call c.json() or set HTTP headers. This makes them reusable in background jobs, which have no HTTP context.

Middleware Pipeline

Hono middleware forms a chain that every request passes through in order. Each middleware handles one concern and calls next() to pass control to the next layer.
Error Handler → CORS → Logger → Rate Limiter → Route Handler
The error handler wraps the entire chain — if anything downstream throws, it catches the error and returns a consistent response shape. Auth middleware is applied per-route rather than globally, because some endpoints (health check, webhooks, OpenAPI spec) don’t need authentication.
// middleware/rate-limit.ts — one concern: rate limiting
export const rateLimiter = createMiddleware(async (c, next) => {
  const ip = c.req.header("x-forwarded-for")?.split(",")[0]?.trim() ?? "unknown";
  const { success, limit, remaining, reset } = await ratelimit.limit(ip);

  c.header("X-RateLimit-Limit", limit.toString());
  c.header("X-RateLimit-Remaining", remaining.toString());

  if (!success) {
    return c.json({ code: "RATE_LIMITED", message: "Too many requests" }, 429);
  }

  await next();
});
Two rate limiters exist: a general one (100 requests per 60 seconds) and a stricter upload limiter (5 per 10 minutes). The upload limiter uses the authenticated userId when available, falling back to IP.

Facade Pattern

The email service exposes a clean interface for sending emails, hiding the complexity of template rendering, Resend API calls, and error handling behind simple async functions.
// services/email.service.ts — unified interface
export const emailService = {
  sendWelcome,
  sendImportComplete,
  sendImportFailed,
  sendImportErrorReport,
  sendExportComplete,
  sendWeeklyDigest,
  sendOneOff,
};
Every function in the facade follows the same shape: accept typed params, render a React Email template, send via Resend, return a { success, error? } result. Callers never deal with Resend directly.
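That common shape can be factored into one small helper. The sketch below is hypothetical, not the codebase's actual implementation: the render and deliver callbacks stand in for React Email rendering and the Resend SDK call.

```typescript
// Hypothetical sketch: the shared { success, error? } shape behind each facade function.
type SendResult = { success: boolean; error?: string };

export async function sendEmail(
  render: () => string, // stands in for rendering a React Email template
  deliver: (html: string) => Promise<void>, // stands in for the Resend API call
): Promise<SendResult> {
  try {
    const html = render();
    await deliver(html);
    return { success: true };
  } catch (err) {
    // Failures surface as data, never as thrown exceptions to the caller.
    return { success: false, error: err instanceof Error ? err.message : "Unknown error" };
  }
}
```

Each concrete function (sendWelcome, sendExportComplete, ...) would then just supply its own template and params.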

Graceful Degradation

All external API calls — Anthropic AI, Resend email, Gravatar — follow the same pattern: try the call, cache the result, and fall back silently on failure. AI failures never break the CSV import flow. Email failures are logged but don’t prevent the operation from completing.
// csv-ai.service.ts — AI call with fallback
try {
  const result = await anthropic.messages.create({ ... });
  await redis.setex(cacheKey, 3600, JSON.stringify(parsed));
  return parsed;
} catch (error) {
  logger.error("AI mapping failed, using fallback", { error });
  return fallback(headers); // returns { header, field: null, confidence: "low" }
}
This pattern appears in three places: AI column mapping (csv-ai.service.ts), AI import summary generation (process-csv.ts), and Gravatar URL fetching (contacts.service.ts).

Cache-Aside (Redis)

Data is fetched from the database, then cached in Redis with a TTL. Subsequent reads hit the cache. Writes invalidate the cache so the next read fetches fresh data.
// repositories/contacts.repository.ts
const cached = await redis.get(countCacheKey(workspaceId));
if (cached) return Number.parseInt(cached, 10);

const [result] = await db.select({ count: count() }).from(contacts).where(...);
await redis.setex(countCacheKey(workspaceId), 300, result.count.toString());
return result.count;
Every redis.set() call includes a TTL — no exceptions. Contact counts cache for 5 minutes, AI column mapping results for 1 hour, and CSV chunk data for 1 hour.

Event-Driven Jobs (Step Orchestration)

Background jobs use Inngest’s event-driven model. Route handlers emit events, and Inngest functions subscribe to them. Each logical unit of work is wrapped in step.run() for isolated retries.
// jobs/process-export.ts — each step retries independently
const { customFieldKeys, totalRows } = await step.run("resolve-export-metadata", async () => {
  const [keys, rowCount] = await Promise.all([
    contactsRepo.getDistinctCustomFieldKeys(workspaceId, filteredIds),
    contactsRepo.countForExport(workspaceId, filteredIds),
  ]);
  return { customFieldKeys: keys, totalRows: rowCount };
});

const { blobUrl } = await step.run("stream-export", async () => {
  const { stream } = createCsvStream({ fetchBatch, batchSize: BATCH_SIZE, customFieldKeys, totalRows });
  const blob = await put(blobPath, stream, { access: "private", ... });
  return { totalRows, blobUrl: blob.url };
});
Jobs orchestrate, services execute. The job file coordinates the steps, but the actual business logic lives in service and repository methods called from within step.run().

Idempotent Upserts

CSV imports can contain duplicate emails, and imports can be retried. The onConflictDoUpdate pattern ensures that inserting the same contact twice updates the existing record instead of failing.
// repositories/contacts.repository.ts
await db.insert(contacts)
  .values(batch)
  .onConflictDoUpdate({
    target: [contacts.email, contacts.workspace_id],
    set: { updatedAt: new Date(), firstName: sql`excluded.first_name` },
  });
A guard exists for the edge case where there are no fields to update — onConflictDoNothing is used instead, because Drizzle throws if set receives an empty object. The settings repository demonstrates this pattern.

Streaming (Constant Memory)

Large file operations use streaming to avoid buffering entire files in memory. Uploads split the stream with tee(), exports fetch contacts in cursor-based batches and pipe through csv-stringify, and downloads proxy the blob stream directly to the HTTP response. This pattern is covered in depth on the Streaming Architecture page.
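As a minimal, self-contained illustration of the tee() split (using the Web Streams API that Node 18+ exposes globally; makeCsvStream and collect are hypothetical helpers, not the codebase's own):

```typescript
// Hypothetical sketch: tee() duplicates a stream so one branch can be consumed
// (e.g. for validation or row counting) while the other is uploaded,
// without buffering the whole file.
export function makeCsvStream(rows: string[]): ReadableStream<string> {
  return new ReadableStream<string>({
    start(controller) {
      for (const row of rows) controller.enqueue(row + "\n");
      controller.close();
    },
  });
}

export async function collect(stream: ReadableStream<string>): Promise<string> {
  const reader = stream.getReader();
  let out = "";
  for (;;) {
    const { done, value } = await reader.read();
    if (done) return out;
    out += value ?? "";
  }
}
```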

Frontend Patterns

Server Components by Default

Every component is a React Server Component unless it genuinely needs client-side interactivity. When both data fetching and interactivity are needed, the component is split into two: a Server Component wrapper that fetches data, and a Client Component leaf that handles interaction.
// Server Component — fetches data
app/contacts/page.tsx
  └── fetches contacts via apiServer
  └── renders <ContactsPageContent initialContacts={data} />

// Client Component — handles interaction
components/contacts/contacts-page-content.tsx ('use client')
  └── receives initial data as props
  └── manages search, filters, pagination, modals
'use client' is only added when required: event handlers, browser APIs (localStorage, window), SWR hooks, or state that changes without navigation.

Stale-While-Revalidate (SWR)

Client-side data fetching uses SWR for automatic caching, revalidation, and optimistic updates. Server-fetched data is passed as fallback to SWRConfig, so the first render uses server data and subsequent interactions fetch fresh data from the API.
// contacts-page-content.tsx — server data as SWR fallback
const fallback = useMemo(() => {
  const map: Record<string, unknown> = {};
  if (initialTags) map["/tags"] = initialTags;
  if (initialContacts) {
    map[unstable_serialize(["/contacts", { limit: 20 }])] = initialContacts;
  }
  return map;
}, [initialContacts, initialTags]);

return (
  <SWRConfig value={{ fallback }}>
    <ContactsPageInner />
  </SWRConfig>
);
This gives the best of both worlds: fast initial render from the server, and real-time updates on the client.

Custom Hooks for Domain Logic

Reusable client-side logic is extracted into custom hooks that encapsulate state management, API calls, and derived values. Components stay focused on rendering.
| Hook | What it encapsulates |
| --- | --- |
| useContacts() | Cursor-based pagination, search, tag filtering, page size preference |
| useSession() | Auth state, token refresh |
| useDebouncedCallback() | Debounced search input (300ms) |
| useStats() | Dashboard stats fetching |
| useLogout() | Logout flow |
// hooks/use-contacts.ts — pagination + filtering in one hook
const { contacts, total, loading, hasMore, hasPrev, nextPage, prevPage, setSearch, setTagFilter } = useContacts();
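The debouncing inside useDebouncedCallback() boils down to a plain helper like the sketch below (a generic debounce, assuming the hook wraps something equivalent in a ref):

```typescript
// Generic debounce: only the last call within the delay window actually fires.
export function debounce<A extends unknown[]>(
  fn: (...args: A) => void,
  delayMs: number,
): (...args: A) => void {
  let timer: ReturnType<typeof setTimeout> | undefined;
  return (...args: A) => {
    if (timer !== undefined) clearTimeout(timer); // cancel the pending call
    timer = setTimeout(() => fn(...args), delayMs); // reschedule with the latest args
  };
}
```

With a 300ms delay, typing "abc" quickly issues one search for "abc" instead of three requests.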

Retry with Backoff

The frontend API client retries failed mutation requests (POST, PUT, DELETE) once with a 1-second delay when the server returns 502, 503, or 504. GET requests are not retried — SWR handles revalidation for reads.
// lib/api.ts — retry logic for transient failures
for (let attempt = 0; attempt <= MAX_RETRIES; attempt++) {
  try {
    const response = await fetch(url, mergedOptions);
    if (!response.ok) {
      if (attempt < MAX_RETRIES && isRetryable(method, null, response.status)) {
        await delay(RETRY_DELAY_MS);
        continue;
      }
      const error = await response.json().catch(() => null); // body uses the shared error shape
      throw new Error(error?.message ?? "API request failed");
    }
    return await response.json();
    return await response.json();
  } catch (err) {
    if (attempt < MAX_RETRIES && isRetryable(method, err)) {
      await delay(RETRY_DELAY_MS);
      continue;
    }
    throw err;
  }
}
Network errors (TypeError from fetch) are also retried, since they typically indicate a transient connectivity issue.
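A plausible sketch of the isRetryable predicate the loop relies on, matching the rules stated above (the actual implementation in lib/api.ts may differ):

```typescript
// Hypothetical sketch: retry mutations on 502/503/504 or on network-level
// TypeErrors; never retry GETs, since SWR handles revalidation for reads.
const RETRYABLE_STATUSES = new Set([502, 503, 504]);
const RETRYABLE_METHODS = new Set(["POST", "PUT", "DELETE"]);

export function isRetryable(method: string, err: unknown, status?: number): boolean {
  if (!RETRYABLE_METHODS.has(method.toUpperCase())) return false; // reads are SWR's job
  if (status !== undefined) return RETRYABLE_STATUSES.has(status); // transient server errors
  return err instanceof TypeError; // fetch throws TypeError on network failure
}
```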

Cross-Cutting Patterns

Schema-First Validation (Zod)

Zod schemas are the single source of truth for data shapes. TypeScript types are derived from schemas via z.infer<>, request validation uses zValidator(), and OpenAPI documentation uses resolver() — all from the same schema definition.
// lib/openapi-schemas.ts — one schema, three uses
export const contactSchema = z.object({
  id: z.string().uuid(),
  email: z.string(),
  first_name: z.string(),
  // ...
});

// 1. TypeScript type
type Contact = z.infer<typeof contactSchema>;

// 2. Request validation
zValidator("param", z.object({ id: z.string().uuid() }))

// 3. OpenAPI documentation
responses: {
  200: {
    content: { "application/json": { schema: resolver(contactSchema) } },
  },
}
This eliminates drift between validation, types, and documentation. Change the schema, and all three update automatically.

Shared Type Derivation

Types in @folksbase/types are derived from the Drizzle database schema, not defined manually. This means the TypeScript types always match the database columns.
// packages/types/src/index.ts
import type { contacts } from "@folksbase/db";

export type Contact = typeof contacts.$inferSelect;
export type NewContact = typeof contacts.$inferInsert;
export type ContactWithTags = Contact & { tags?: Tag[] };
API-specific types (pagination, error responses, status types) are also defined here so both the frontend and backend share the same shapes.

Structured Logging

All logging goes through a structured logger that outputs JSON with consistent fields. No console.log anywhere in production code.
import { logger } from '@/lib/logger.js';

// ✅ Structured — searchable, parseable
logger.info("Export completed", { exportId, totalRows, blobUrl });
logger.error("AI mapping failed", { error, headers });

// ❌ Forbidden
console.log("export done", exportId);
Log entries include the level, message, timestamp, and an optional context object with IDs, counts, durations, or error details. This makes logs searchable in production monitoring tools.
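A minimal sketch of such a logger (the shape shown matches the fields described above; anything beyond level, message, timestamp, and context is an assumption about lib/logger):

```typescript
// Hypothetical sketch of a structured JSON logger.
type Level = "info" | "warn" | "error";

export function formatLog(
  level: Level,
  message: string,
  context?: Record<string, unknown>,
): string {
  // One JSON object per line: searchable and parseable by log tooling.
  return JSON.stringify({
    level,
    message,
    timestamp: new Date().toISOString(),
    ...(context ? { context } : {}),
  });
}

export const logger = {
  info: (msg: string, ctx?: Record<string, unknown>) => console.log(formatLog("info", msg, ctx)),
  warn: (msg: string, ctx?: Record<string, unknown>) => console.warn(formatLog("warn", msg, ctx)),
  error: (msg: string, ctx?: Record<string, unknown>) => console.error(formatLog("error", msg, ctx)),
};
```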

Consistent Error Shape

Every error response from the API follows the same structure, whether it’s a validation error, an auth failure, or an unhandled exception:
{
  code: "VALIDATION_ERROR" | "UNAUTHORIZED" | "NOT_FOUND" | "RATE_LIMITED" | "INTERNAL_ERROR",
  message: "Human-readable description",
  details?: unknown  // Zod issues for validation errors
}
The global error handler middleware catches all unhandled errors and maps them to this shape. Route handlers don’t need their own try/catch — they let errors propagate to the middleware.
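The mapping itself can be sketched as a pure function. ApiError and toErrorResponse are hypothetical names, illustrating how known errors keep their code and status while everything else collapses to INTERNAL_ERROR:

```typescript
// Hypothetical sketch of the error-to-response mapping inside the global handler.
type ErrorBody = { code: string; message: string; details?: unknown };

export class ApiError extends Error {
  constructor(
    public code: "VALIDATION_ERROR" | "UNAUTHORIZED" | "NOT_FOUND" | "RATE_LIMITED",
    message: string,
    public status: number,
    public details?: unknown,
  ) {
    super(message);
  }
}

export function toErrorResponse(err: unknown): { status: number; body: ErrorBody } {
  if (err instanceof ApiError) {
    return {
      status: err.status,
      body: {
        code: err.code,
        message: err.message,
        ...(err.details !== undefined ? { details: err.details } : {}),
      },
    };
  }
  // Anything unexpected becomes a 500 with no internals leaked to the client.
  return { status: 500, body: { code: "INTERNAL_ERROR", message: "Something went wrong" } };
}
```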

Encryption at Rest

Sensitive data (like Resend API keys) is encrypted before storage using AES-256-GCM. The settings repository transparently encrypts on write and decrypts on read — callers never deal with ciphertext.
// repositories/settings.repository.ts
export async function upsert(workspaceId: string, updates: Partial<...>) {
  if (updates.resend_api_key && typeof updates.resend_api_key === "string") {
    updates.resend_api_key = encrypt(updates.resend_api_key);
  }
  // ... insert/update ...
  if (settings.resend_api_key) {
    settings.resend_api_key = decrypt(settings.resend_api_key);
  }
  return settings;
}
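A runnable sketch of the AES-256-GCM round trip using node:crypto (illustrative only; the real encrypt/decrypt helpers may pack the IV, auth tag, and ciphertext differently):

```typescript
import { randomBytes, createCipheriv, createDecipheriv } from "node:crypto";

const ALGO = "aes-256-gcm"; // authenticated encryption: confidentiality + integrity

export function encrypt(plaintext: string, key: Buffer): string {
  const iv = randomBytes(12); // fresh IV per message, required for GCM safety
  const cipher = createCipheriv(ALGO, key, iv);
  const ciphertext = Buffer.concat([cipher.update(plaintext, "utf8"), cipher.final()]);
  const tag = cipher.getAuthTag(); // 16-byte integrity tag
  return Buffer.concat([iv, tag, ciphertext]).toString("base64"); // layout: iv | tag | ct
}

export function decrypt(payload: string, key: Buffer): string {
  const buf = Buffer.from(payload, "base64");
  const iv = buf.subarray(0, 12);
  const tag = buf.subarray(12, 28);
  const ciphertext = buf.subarray(28);
  const decipher = createDecipheriv(ALGO, key, iv);
  decipher.setAuthTag(tag); // decryption throws if the data was tampered with
  return Buffer.concat([decipher.update(ciphertext), decipher.final()]).toString("utf8");
}
```

The 32-byte key would come from configuration, never from the database alongside the ciphertext.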

Cursor-Based Pagination

Every paginated query uses cursor-based pagination. OFFSET is never used — it causes performance degradation at scale and inconsistent results during concurrent writes.
// ❌ Forbidden — gets slower as offset grows
db.select().from(contacts).limit(50).offset(page * 50)

// ✅ Required — consistently fast via primary key index
db.select().from(contacts)
  .where(cursor ? gt(contacts.id, cursor) : undefined)
  .orderBy(asc(contacts.id))
  .limit(50)
On the frontend, the useContacts() hook maintains a cursor stack for back/forward navigation without OFFSET.
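That cursor stack can be sketched as a small class (hypothetical; the real hook keeps this in React state):

```typescript
// Hypothetical sketch of the cursor stack behind useContacts(): push the
// current cursor before moving forward, pop it to go back. No OFFSET needed.
export class CursorStack {
  private stack: (string | null)[] = [];
  private current: string | null = null;

  next(nextCursor: string): void {
    this.stack.push(this.current); // remember where this page started
    this.current = nextCursor;
  }

  prev(): string | null {
    this.current = this.stack.pop() ?? null; // null means "first page"
    return this.current;
  }

  get cursor(): string | null {
    return this.current;
  }

  get hasPrev(): boolean {
    return this.stack.length > 0;
  }
}
```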

Pattern Summary

| Pattern | Where | Why |
| --- | --- | --- |
| Repository | repositories/*.ts | Isolate SQL, make queries reusable and testable |
| Service Layer | services/*.ts | Separate business logic from HTTP and SQL concerns |
| Middleware Pipeline | middleware/*.ts | Compose cross-cutting concerns (auth, rate limiting, error handling) |
| Facade | email.service.ts | Hide Resend complexity behind a clean interface |
| Graceful Degradation | AI and email calls | External failures never break core functionality |
| Cache-Aside | Redis + repositories | Fast reads with TTL-based expiry |
| Event-Driven Jobs | Inngest step.run() | Isolated retries, decoupled from HTTP handlers |
| Idempotent Upserts | CSV import | Safe retries, no duplicate contacts |
| Streaming | Import/export pipelines | Constant memory regardless of file size |
| RSC by Default | apps/web | Minimize client JavaScript, fast initial render |
| SWR Fallback | Client components | Server data for first render, live updates after |
| Schema-First (Zod) | Validation, types, OpenAPI | One source of truth, no drift |
| Shared Types | @folksbase/types | Frontend and backend always agree on data shapes |
| Cursor Pagination | All paginated queries | Consistent performance at any scale |

What’s Next?

Backend Architecture

The layered architecture in detail.

Streaming Architecture

How large files are handled without running out of memory.

Frontend Architecture

RSC-first approach and component structure.

Database Schema

Tables, relationships, and naming conventions.