Background Jobs
Some operations in folksbase take too long for a synchronous HTTP response — importing a 100K-row CSV, exporting contacts to a file, or sending email notifications. These run as background jobs powered by Inngest.
Why Inngest?
Background jobs need to be reliable. If a job fails halfway through processing 50,000 contacts, you don’t want to start over from row 1. You want to retry just the part that failed.
Most job queues give you function-level retries: the entire job re-runs on failure. Inngest gives you step-level retries — each logical unit of work inside a job can retry independently. This is the key reason folksbase uses Inngest over simpler alternatives like BullMQ or a plain Redis queue.
Other benefits:
- Event-driven. Jobs trigger from named events (`import/csv.confirmed`, `export/csv.confirmed`), making the system loosely coupled.
- Built-in observability. The Inngest dashboard shows every step execution, retry, and failure — no custom logging infrastructure needed.
- Cron support. Scheduled jobs (like the weekly digest) use a simple cron expression instead of a separate scheduler.
How It Works
The Inngest client is configured in `apps/api/src/lib/inngest.ts`:

```ts
import { Inngest } from "inngest";
import { env } from "./env"; // validated environment variables (path assumed)

export const inngest = new Inngest({
  id: "folksbase",
  eventKey: env.INNGEST_EVENT_KEY,
});
```
Jobs are defined in apps/api/src/jobs/ and registered with the Inngest serve handler in the API. When an event is sent (e.g., from a route handler after a CSV upload), Inngest picks it up and runs the matching job function.
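The define-and-trigger flow described above can be sketched end to end. This is a hedged illustration, not the real folksbase job code: the payload fields are assumptions, and a tiny local stub stands in for the Inngest client (imported from `apps/api/src/lib/inngest.ts` in the app) so the snippet is self-contained while mirroring the `createFunction()` / `send()` shapes.

```ts
// Local stand-in for the Inngest client so this sketch runs on its own.
type StepTools = { run: <T>(id: string, fn: () => Promise<T>) => Promise<T> };
type JobHandler = (ctx: {
  event: { name: string; data: any };
  step: StepTools;
}) => Promise<unknown>;

const registry = new Map<string, JobHandler>();

const inngest = {
  // Mirrors inngest.createFunction({ id }, { event }, handler)
  createFunction(opts: { id: string }, trigger: { event: string }, handler: JobHandler) {
    registry.set(trigger.event, handler);
    return opts.id;
  },
  // Mirrors inngest.send({ name, data }); here it simply dispatches locally
  async send(evt: { name: string; data: any }): Promise<unknown> {
    const handler = registry.get(evt.name);
    if (!handler) throw new Error(`No job registered for ${evt.name}`);
    const step: StepTools = { run: (_id, fn) => fn() };
    return handler({ event: evt, step });
  },
};

// Job definition: runs when import/csv.confirmed is sent
inngest.createFunction(
  { id: "process-csv" },
  { event: "import/csv.confirmed" },
  async ({ event, step }) => {
    const rows = await step.run("parse", async () => 3); // placeholder work
    return { importId: event.data.importId, rows };
  },
);

// Route handler side: fire the event after a CSV upload is confirmed
inngest
  .send({ name: "import/csv.confirmed", data: { importId: "imp_1" } })
  .then((result) => console.log(result)); // → { importId: "imp_1", rows: 3 }
```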
The step.run() Pattern
This is the most important pattern in the jobs codebase. Every logical unit of work inside a job must be wrapped in `step.run()`.
Why?
`step.run()` creates a checkpoint. If the job fails after step 3, Inngest replays the function but skips steps 1–3 (their results are memoized) and retries from step 4. Without `step.run()`, a failure anywhere means the entire job re-runs from scratch.
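The replay behavior can be demonstrated with a toy model. This is only an illustration of the memoization idea, not Inngest's actual implementation: a `stepRun` stand-in caches completed results, so a second run after a failure skips the finished step and re-executes only the one that threw.

```ts
// Toy model of step.run's checkpointing (illustrative only).
const memo = new Map<string, unknown>();
const executed: string[] = [];

async function stepRun<T>(id: string, fn: () => Promise<T>): Promise<T> {
  if (memo.has(id)) return memo.get(id) as T; // checkpoint hit: skip re-execution
  executed.push(id);
  const result = await fn();
  memo.set(id, result); // checkpoint recorded only on success
  return result;
}

let attempts = 0;

// A two-step "job": step-2 fails on the first attempt, succeeds on the retry
async function job(): Promise<void> {
  await stepRun("step-1", async () => "done");
  await stepRun("step-2", async () => {
    attempts += 1;
    if (attempts === 1) throw new Error("transient failure");
    return "done";
  });
}

const replayDemo = (async () => {
  try {
    await job();
  } catch (_err) {
    await job(); // replay: step-1 is memoized, only step-2 re-runs
  }
  return executed;
})();

replayDemo.then((log) => console.log(log)); // step-1 appears once, step-2 twice
```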
The Rule
```ts
// ✅ Correct — each unit of work is a step
const importRecord = await step.run("fetch-import", async () => {
  return importsRepo.findById(workspaceId, importId);
});

const { chunkCount } = await step.run("download-and-chunk", async () => {
  return downloadAndChunk(blobUrl, importId);
});
```

```ts
// ❌ Wrong — raw async call with no retry isolation
const importRecord = await importsRepo.findById(workspaceId, importId);
```
What Makes a Good Step?
Each step should be a self-contained unit that either succeeds or fails cleanly:
| Good step | Why |
|---|---|
| Fetch a record from the database | Idempotent read — safe to retry |
| Process a batch of 500 CSV rows | Bounded work — retries only this batch |
| Send a notification email | Side effect isolated — won’t re-send previous emails on retry |
| Upload a file to blob storage | Single operation — clear success/failure |
Avoid putting multiple unrelated operations in one step. If step “process-and-email” fails on the email part, the processing work gets re-done unnecessarily.
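As a hedged sketch of that split (`stepRun` stands in for `step.run`; `processBatch` and `sendEmail` are hypothetical helpers, not folksbase functions), the combined step becomes two independent checkpoints:

```ts
// Stand-in for step.run, just enough to make the shape runnable here.
async function stepRun<T>(_id: string, fn: () => Promise<T>): Promise<T> {
  return fn();
}

async function processBatch(rows: number[]): Promise<number> {
  return rows.length; // placeholder for real batch processing
}

async function sendEmail(message: string): Promise<void> {
  console.log(message); // placeholder for a real email send
}

async function jobBody(rows: number[]): Promise<number> {
  // ❌ One step: if the email fails, the batch is reprocessed on retry
  // await stepRun("process-and-email", async () => {
  //   await processBatch(rows);
  //   await sendEmail("done");
  // });

  // ✅ Two steps: each succeeds or retries independently
  const processed = await stepRun("process-batch", () => processBatch(rows));
  await stepRun("send-email", () => sendEmail(`processed ${processed} rows`));
  return processed;
}

jobBody([1, 2, 3]).then((n) => console.log(n)); // prints "processed 3 rows" then 3
```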
Jobs Orchestrate, Services Execute
Jobs are orchestrators. They coordinate the sequence of steps but don't contain business logic themselves. The actual work happens in service and repository methods called from within `step.run()`.
```ts
// ✅ Job orchestrates, service executes
await step.run("send-notification", async () => {
  const settings = await settingsRepo.findByWorkspaceId(workspaceId);
  if (settings.notify_on_export_complete) {
    await emailService.sendExportComplete({ to: userEmail, ... });
  }
});
```

```ts
// ❌ Don't put raw SQL or complex logic directly in the job
await step.run("send-notification", async () => {
  const settings = await db.select().from(workspaceSettings).where(...);
  // ... 50 lines of business logic ...
});
```
This keeps jobs thin and testable. The same service methods used by jobs are also used by route handlers — one implementation, two entry points.
Event Data Convention
All jobs receive `userId` in their event data. This is used for Supabase user lookups (fetching the user's email for notifications, for example).
```ts
type ProcessCsvEvent = {
  data: {
    importId: string;
    workspaceId: string;
    userId: string; // Always present — used for getUserById()
  };
};
```
Use `userId` (not `workspaceId`) when calling `supabaseAdmin.auth.admin.getUserById()`. These are different identifiers — passing the wrong one returns no user.
Error Handling in Jobs
Jobs use a try/catch at the top level to handle failures gracefully:
- Mark the record as failed — update the import/export status in the database
- Send a failure notification — best-effort email to the user
- Re-throw the error — so Inngest can track the failure and apply its retry policy
```ts
try {
  // ... steps ...
} catch (error) {
  await importsRepo.updateStatus(importId, "failed", {
    error_log: { message: error.message },
  });

  // Best-effort notification
  try {
    await sendImportFailed({ to: userEmail, ... });
  } catch (emailError) {
    logger.error("Failed to send failure email", { importId });
  }

  throw error; // Let Inngest handle retries
}
```
For errors that should never be retried (e.g., the import record doesn't exist), use Inngest's `NonRetriableError`:

```ts
if (!record || record.status !== "processing") {
  throw new NonRetriableError(`Invalid import state: ${record?.status || "not found"}`);
}
```
Available Jobs
folksbase currently has four background jobs:
| Job | Event / Trigger | Purpose |
|---|---|---|
| `process-csv` | `import/csv.confirmed` | Parse and import CSV contacts in chunks |
| `process-export` | `export/csv.confirmed` | Stream contacts to a CSV file in blob storage |
| `send-welcome` | `user/signed.up` | Send welcome email to new users |
| `weekly-digest` | Cron: `0 8 * * 1` (Monday 8am UTC) | Send weekly activity summary email |
For detailed documentation on each job, see the Individual Jobs page.
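The cron-triggered variant from the table can be sketched as follows. `createFunction` is stubbed here so the snippet is self-contained; the trigger object mirrors Inngest's `{ cron: "..." }` form, and the job id and handler body are placeholders.

```ts
// Stub registry standing in for the Inngest client's createFunction.
type Trigger = { event: string } | { cron: string };
const registered: Array<{ id: string; trigger: Trigger }> = [];

function createFunction(
  opts: { id: string },
  trigger: Trigger,
  _handler: () => Promise<void>,
) {
  registered.push({ id: opts.id, trigger });
}

// Weekly digest: every Monday at 08:00 UTC, no triggering event required
createFunction({ id: "weekly-digest" }, { cron: "0 8 * * 1" }, async () => {
  // ... steps that build and send the digest email ...
});

console.log(registered[0].trigger); // → { cron: "0 8 * * 1" }
```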
What’s Next?