folksbase uses a handful of well-known design patterns consistently across the backend, frontend, and shared packages. This page documents each pattern, where it appears, and why it was chosen.
Every database query lives in a repository file. Repositories are the only layer that imports from @folksbase/db — no other code constructs SQL.
```typescript
// repositories/contacts.repository.ts — SQL only, no business logic
export async function findMany(
  workspaceId: string,
  params: ContactListParams,
): Promise<PaginatedResponse<Contact>> {
  const limit = params.limit ?? 50;
  const conditions = [eq(contacts.workspace_id, workspaceId)];
  if (params.cursor) {
    conditions.push(gt(contacts.id, params.cursor));
  }
  const rows = await db
    .select()
    .from(contacts)
    .where(and(...conditions))
    .orderBy(asc(contacts.id))
    .limit(limit);
  const [{ total }] = await db
    .select({ total: count() })
    .from(contacts)
    .where(eq(contacts.workspace_id, workspaceId));
  return { data: rows, nextCursor: rows[rows.length - 1]?.id ?? null, total };
}
```
This keeps queries isolated and testable. When a background job needs the same data as a route handler, both call the same repository method instead of duplicating SQL.

Repositories in the codebase: contacts, imports, exports, tags, stats, settings, workspaces.
Services sit between routes and repositories. They contain business logic — validation, orchestration across multiple repositories, external API calls — but never touch HTTP concerns.
The key constraint: services return data or throw errors. They never call c.json() or set HTTP headers. This makes them reusable in background jobs, which have no HTTP context.
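The constraint above can be sketched in isolation. This is a hypothetical illustration, not the real folksbase code: `NotFoundError` and the inline repository stand-in are illustrative names.

```typescript
class NotFoundError extends Error {
  constructor(resource: string) {
    super(`${resource} not found`);
    this.name = "NotFoundError";
  }
}

type Contact = { id: string; email: string };

// Stand-in for the repository layer.
const contactsRepository = {
  findById: async (id: string): Promise<Contact | null> =>
    id === "c1" ? { id: "c1", email: "ada@example.com" } : null,
};

// The service returns data or throws. It never calls c.json() or sets
// headers, so a background job can invoke it with no HTTP context at all.
async function getContact(id: string): Promise<Contact> {
  const contact = await contactsRepository.findById(id);
  if (!contact) throw new NotFoundError("Contact");
  return contact;
}
```

A route handler would translate `NotFoundError` into a 404 at the middleware layer; a background job would simply let the error trigger a retry.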
Hono middleware forms a chain that every request passes through in order. Each middleware handles one concern and calls next() to pass control to the next layer.
The error handler wraps the entire chain — if anything downstream throws, it catches the error and returns a consistent response shape. Auth middleware is applied per-route rather than globally, because some endpoints (health check, webhooks, OpenAPI spec) don’t need authentication.
Two rate limiters exist: a general one (100 requests per 60 seconds) and a stricter upload limiter (5 per 10 minutes). The upload limiter uses the authenticated userId when available, falling back to IP.
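The two limiter configurations and the key fallback can be sketched with an in-memory fixed window. Production uses Redis; the `Map` stand-in below shows only the windowing and key-derivation logic, and all names are illustrative.

```typescript
type LimiterConfig = { max: number; windowMs: number };

const GENERAL: LimiterConfig = { max: 100, windowMs: 60_000 };  // 100 per 60 s
const UPLOAD: LimiterConfig = { max: 5, windowMs: 600_000 };    // 5 per 10 min

const windows = new Map<string, { count: number; resetAt: number }>();

function allow(key: string, cfg: LimiterConfig, now = Date.now()): boolean {
  const w = windows.get(key);
  if (!w || now >= w.resetAt) {
    // New window: first request always passes.
    windows.set(key, { count: 1, resetAt: now + cfg.windowMs });
    return true;
  }
  w.count += 1;
  return w.count <= cfg.max;
}

// The upload limiter keys on the authenticated userId, falling back to IP.
function uploadKey(userId: string | null, ip: string): string {
  return userId ? `upload:user:${userId}` : `upload:ip:${ip}`;
}
```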
The email service exposes a clean interface for sending emails, hiding the complexity of template rendering, Resend API calls, and error handling behind simple async functions.
Every function in the facade follows the same shape: accept typed params, render a React Email template, send via Resend, return a { success, error? } result. Callers never deal with Resend directly.
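A hedged sketch of that facade shape: `sendViaResend` is a stub standing in for the Resend SDK call, and the HTML string stands in for a rendered React Email template. The `{ success, error? }` result is the contract described above.

```typescript
type SendResult = { success: boolean; error?: string };

async function sendViaResend(to: string, html: string): Promise<void> {
  // The real code calls the Resend SDK here; this stub validates and succeeds.
  if (!to.includes("@")) throw new Error(`invalid recipient: ${to}`);
  void html;
}

async function sendWelcomeEmail(params: { to: string; name: string }): Promise<SendResult> {
  try {
    const html = `<p>Welcome, ${params.name}!</p>`; // stands in for template render
    await sendViaResend(params.to, html);
    return { success: true };
  } catch (err) {
    // Callers receive a result object, never a thrown Resend error.
    return { success: false, error: err instanceof Error ? err.message : String(err) };
  }
}
```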
All external API calls — Anthropic AI, Resend email, Gravatar — follow the same pattern: try the call, cache the result, and fall back silently on failure. AI failures never break the CSV import flow. Email failures are logged but don’t prevent the operation from completing.
This pattern appears in three places: AI column mapping (csv-ai.service.ts), AI import summary generation (process-csv.ts), and Gravatar URL fetching (contacts.service.ts).
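The common shape of all three call sites can be captured by a generic wrapper: try the external call, return a fallback on any failure so the surrounding flow never breaks. `withFallback` is a hypothetical helper, not a function from the codebase.

```typescript
async function withFallback<T>(call: () => Promise<T>, fallback: T): Promise<T> {
  try {
    return await call();
  } catch {
    // The real code logs the failure; it never propagates to the caller.
    return fallback;
  }
}
```

For example, AI column mapping could fall back to an empty mapping so the CSV import continues with heuristic matching instead of failing.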
Data is fetched from the database, then cached in Redis with a TTL. Subsequent reads hit the cache. Writes invalidate the cache so the next read fetches fresh data.
Every redis.set() call includes a TTL — no exceptions. Contact counts cache for 5 minutes, AI column mapping results for 1 hour, and CSV chunk data for 1 hour.
Background jobs use Inngest’s event-driven model. Route handlers emit events, and Inngest functions subscribe to them. Each logical unit of work is wrapped in step.run() for isolated retries.
Jobs orchestrate, services execute. The job file coordinates the steps, but the actual business logic lives in service and repository methods called from within step.run().
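The isolated-retry idea can be illustrated without the framework. This is not the Inngest API, just a minimal stand-in for `step.run()` showing why each unit of work is wrapped separately: only the failing step retries, and steps that already succeeded are not re-run.

```typescript
async function runStep<T>(name: string, fn: () => Promise<T>, retries = 2): Promise<T> {
  let lastError: unknown;
  for (let attempt = 0; attempt <= retries; attempt++) {
    try {
      return await fn();
    } catch (err) {
      lastError = err; // retry this step in isolation
    }
  }
  throw new Error(`step ${name} failed: ${String(lastError)}`);
}
```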
CSV imports can contain duplicate emails, and imports can be retried. The onConflictDoUpdate pattern ensures that inserting the same contact twice updates the existing record instead of failing.
A guard exists for the edge case where there are no fields to update — onConflictDoNothing is used instead, because Drizzle throws if set receives an empty object. The settings repository demonstrates this pattern.
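The guard reduces to a single branch on the update set. `chooseConflictStrategy` is an illustrative helper name, not code from the repository; it returns which Drizzle conflict clause would be used.

```typescript
// Drizzle throws if onConflictDoUpdate receives an empty `set` object,
// so the strategy is picked before building the query.
function chooseConflictStrategy(
  updates: Record<string, unknown>,
): "onConflictDoUpdate" | "onConflictDoNothing" {
  return Object.keys(updates).length > 0 ? "onConflictDoUpdate" : "onConflictDoNothing";
}
```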
Large file operations use streaming to avoid buffering entire files in memory. Uploads split the stream with tee(), exports fetch contacts in cursor-based batches and pipe through csv-stringify, and downloads proxy the blob stream directly to the HTTP response.

This pattern is covered in depth on the Streaming Architecture page.
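The upload split can be sketched with the standard Web Streams API: `tee()` duplicates the stream so one branch could go to blob storage while the other is parsed, without buffering the whole file. Both consumers here are illustrative byte counters, not the real storage or parsing code.

```typescript
async function countBytes(stream: ReadableStream<Uint8Array>): Promise<number> {
  let bytes = 0;
  const reader = stream.getReader();
  for (;;) {
    const { done, value } = await reader.read();
    if (done) return bytes;
    bytes += value.length;
  }
}

async function handleUpload(body: ReadableStream<Uint8Array>): Promise<[number, number]> {
  const [forStorage, forParsing] = body.tee();
  // Consume both branches concurrently; a branch left unread would force
  // tee() to queue chunks for it internally.
  return Promise.all([countBytes(forStorage), countBytes(forParsing)]);
}
```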
Every component is a React Server Component unless it genuinely needs client-side interactivity. When both data fetching and interactivity are needed, the component is split into two: a Server Component wrapper that fetches data, and a Client Component leaf that handles interaction.
```
// Server Component — fetches data
app/contacts/page.tsx
  └── fetches contacts via apiServer
  └── renders <ContactsPageContent initialContacts={data} />

// Client Component — handles interaction
components/contacts/contacts-page-content.tsx ('use client')
  └── receives initial data as props
  └── manages search, filters, pagination, modals
```
'use client' is only added when required: event handlers, browser APIs (localStorage, window), SWR hooks, or state that changes without navigation.
Client-side data fetching uses SWR for automatic caching, revalidation, and optimistic updates. Server-fetched data is passed as fallback to SWRConfig, so the first render uses server data and subsequent interactions fetch fresh data from the API.
```typescript
// contacts-page-content.tsx — server data as SWR fallback
const fallback = useMemo(() => {
  const map: Record<string, unknown> = {};
  if (initialTags) map["/tags"] = initialTags;
  if (initialContacts) {
    map[unstable_serialize(["/contacts", { limit: 20 }])] = initialContacts;
  }
  return map;
}, [initialContacts, initialTags]);

return (
  <SWRConfig value={{ fallback }}>
    <ContactsPageInner />
  </SWRConfig>
);
```
This gives the best of both worlds: fast initial render from the server, and real-time updates on the client.
Reusable client-side logic is extracted into custom hooks that encapsulate state management, API calls, and derived values. Components stay focused on rendering.
| Hook | What it encapsulates |
| --- | --- |
| `useContacts()` | Cursor-based pagination, search, tag filtering, page size preference |
The frontend API client retries failed mutation requests (POST, PUT, DELETE) once with a 1-second delay when the server returns 502, 503, or 504. GET requests are not retried — SWR handles revalidation for reads.
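That retry rule can be sketched directly: exactly one retry, after a delay, only for mutating methods and only on 502/503/504. `doFetch` is an injectable stand-in for the real fetch call, and the names are illustrative.

```typescript
const RETRYABLE_STATUSES = new Set([502, 503, 504]);
const MUTATING_METHODS = new Set(["POST", "PUT", "DELETE"]);

async function requestWithRetry(
  method: string,
  doFetch: () => Promise<{ status: number }>,
  delayMs = 1000,
): Promise<{ status: number }> {
  const res = await doFetch();
  if (MUTATING_METHODS.has(method) && RETRYABLE_STATUSES.has(res.status)) {
    await new Promise((resolve) => setTimeout(resolve, delayMs));
    return doFetch(); // exactly one retry
  }
  return res; // GETs are never retried; SWR revalidates reads instead
}
```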
Zod schemas are the single source of truth for data shapes. TypeScript types are derived from schemas via z.infer<>, request validation uses zValidator(), and OpenAPI documentation uses resolver() — all from the same schema definition.
Types in @folksbase/types are derived from the Drizzle database schema, not defined manually. This means the TypeScript types always match the database columns.
```typescript
// packages/types/src/index.ts
import type { contacts } from "@folksbase/db";

export type Contact = typeof contacts.$inferSelect;
export type NewContact = typeof contacts.$inferInsert;
export type ContactWithTags = Contact & { tags?: Tag[] };
```
API-specific types (pagination, error responses, status types) are also defined here so both the frontend and backend share the same shapes.
Log entries include the level, message, timestamp, and an optional context object with IDs, counts, durations, or error details. This makes logs searchable in production monitoring tools.
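A sketch of that entry shape as a JSON line. The helper name and any field beyond level/message/timestamp/context are assumptions for illustration.

```typescript
type LogLevel = "info" | "warn" | "error";
type LogContext = Record<string, unknown>;

// Serializes one structured log entry; context carries IDs, counts, durations.
function formatLogEntry(level: LogLevel, message: string, context?: LogContext): string {
  return JSON.stringify({
    level,
    message,
    timestamp: new Date().toISOString(),
    ...(context ? { context } : {}),
  });
}
```

For example: `formatLogEntry("info", "import finished", { importId: "imp_1", rows: 1200, durationMs: 3400 })`.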
The global error handler middleware catches all unhandled errors and maps them to this shape. Route handlers don’t need their own try/catch — they let errors propagate to the middleware.
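A sketch of the mapping such a handler might perform. The error classes and response shape here are illustrative, not the exact folksbase contract.

```typescript
class NotFoundError extends Error {}
class ValidationError extends Error {}

type ErrorBody = { error: { code: string; message: string } };

function toErrorResponse(err: unknown): { status: number; body: ErrorBody } {
  if (err instanceof NotFoundError)
    return { status: 404, body: { error: { code: "not_found", message: err.message } } };
  if (err instanceof ValidationError)
    return { status: 400, body: { error: { code: "validation_error", message: err.message } } };
  // Unknown errors get a generic message so internals never leak to clients.
  return { status: 500, body: { error: { code: "internal_error", message: "Internal server error" } } };
}
```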
Sensitive data (like Resend API keys) is encrypted before storage using AES-256-GCM. The settings repository transparently encrypts on write and decrypts on read — callers never deal with ciphertext.
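A sketch of transparent AES-256-GCM encryption using Node's built-in crypto module. The key would come from an environment secret in practice, and the `iv:tag:data` base64 encoding is an assumption, not necessarily the repository's actual storage format.

```typescript
import { createCipheriv, createDecipheriv, randomBytes } from "node:crypto";

function encrypt(plaintext: string, key: Buffer): string {
  const iv = randomBytes(12); // standard 96-bit GCM nonce, unique per message
  const cipher = createCipheriv("aes-256-gcm", key, iv);
  const data = Buffer.concat([cipher.update(plaintext, "utf8"), cipher.final()]);
  const tag = cipher.getAuthTag(); // authenticates ciphertext integrity
  return [iv, tag, data].map((b) => b.toString("base64")).join(":");
}

function decrypt(ciphertext: string, key: Buffer): string {
  const [iv, tag, data] = ciphertext.split(":").map((s) => Buffer.from(s, "base64"));
  const decipher = createDecipheriv("aes-256-gcm", key, iv);
  decipher.setAuthTag(tag); // final() throws if the data was tampered with
  return Buffer.concat([decipher.update(data), decipher.final()]).toString("utf8");
}
```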
Every paginated query uses cursor-based pagination. OFFSET is never used — it causes performance degradation at scale and inconsistent results during concurrent writes.
```typescript
// ❌ Forbidden — gets slower as offset grows
db.select().from(contacts).limit(50).offset(page * 50)

// ✅ Required — consistently fast via primary key index
db.select().from(contacts)
  .where(cursor ? gt(contacts.id, cursor) : undefined)
  .orderBy(asc(contacts.id))
  .limit(50)
```
On the frontend, the useContacts() hook maintains a cursor stack for back/forward navigation without OFFSET.
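The cursor-stack idea can be sketched as a small class: going forward pushes the cursor the current page was fetched with, and going back pops it. This illustrates the concept only; it is not the `useContacts()` internals.

```typescript
class CursorStack {
  private stack: (string | null)[] = [];

  // Called when navigating to the next page.
  pushCurrent(cursor: string | null): void {
    this.stack.push(cursor);
  }

  // Called when navigating back; returns the cursor to refetch with.
  pop(): string | null {
    return this.stack.pop() ?? null;
  }

  get canGoBack(): boolean {
    return this.stack.length > 0;
  }
}
```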