Some operations in folksbase take too long for a synchronous HTTP response — importing a 100K-row CSV, exporting contacts to a file, or sending email notifications. These run as background jobs powered by Inngest.

## Documentation Index

Fetch the complete documentation index at: https://docs.folksbase.joselito.dev/llms.txt

Use this file to discover all available pages before exploring further.
## Why Inngest?
Background jobs need to be reliable. If a job fails halfway through processing 50,000 contacts, you don’t want to start over from row 1. You want to retry just the part that failed. Most job queues give you function-level retries: the entire job re-runs on failure. Inngest gives you step-level retries — each logical unit of work inside a job can retry independently. This is the key reason folksbase uses Inngest over simpler alternatives like BullMQ or a plain Redis queue.

Other benefits:

- Event-driven. Jobs trigger from named events (`import/csv.confirmed`, `export/csv.confirmed`), making the system loosely coupled.
- Built-in observability. The Inngest dashboard shows every step execution, retry, and failure — no custom logging infrastructure needed.
- Cron support. Scheduled jobs (like the weekly digest) use a simple cron expression instead of a separate scheduler.
## How It Works
The Inngest client is configured in `apps/api/src/lib/inngest.ts`. Job functions live in `apps/api/src/jobs/` and are registered with the Inngest serve handler in the API. When an event is sent (e.g., from a route handler after a CSV upload), Inngest picks it up and runs the matching job function.
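A minimal sketch of that wiring. The shapes below mirror Inngest’s `createFunction(options, trigger, handler)` API, but the types and the `createFunction` helper here are local stand-ins so the sketch runs without the SDK; real code imports `Inngest` from the `inngest` package in `apps/api/src/lib/inngest.ts`, and the step names and data fields are illustrative.

```typescript
// Stand-ins for the Inngest SDK so this sketch is self-contained.
// Real code: `import { Inngest } from "inngest";` in apps/api/src/lib/inngest.ts.
type StepTools = { run<T>(name: string, fn: () => Promise<T> | T): Promise<T> };
type Handler = (ctx: {
  event: { data: Record<string, unknown> };
  step: StepTools;
}) => Promise<unknown>;

function createFunction(opts: { id: string }, trigger: { event: string }, handler: Handler) {
  return { ...opts, trigger, handler };
}

// apps/api/src/jobs/process-csv.ts (shape only): triggered by the event
// the upload route sends after the user confirms the import.
const processCsv = createFunction(
  { id: "process-csv" },
  { event: "import/csv.confirmed" },
  async ({ event, step }) => {
    // Each unit of work is wrapped in step.run().
    const rows = await step.run("load-csv", async () => ["row1", "row2"]);
    return { imported: rows.length, userId: event.data.userId };
  },
);
```

When a route handler sends `{ name: "import/csv.confirmed", data: { ... } }`, Inngest matches it to this function by its trigger and invokes the handler.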
## The `step.run()` Pattern
This is the most important pattern in the jobs codebase. Every logical unit of work inside a job must be wrapped in `step.run()`.
### Why?

`step.run()` creates a checkpoint. If the job fails after step 3, Inngest replays the function but skips steps 1–3 (their results are memoized) and retries from step 4. Without `step.run()`, a failure anywhere means the entire job re-runs from scratch.
### The Rule
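The rule can be demonstrated with a toy step runner (an illustration of the memoization behavior, not the real Inngest runtime): each step’s result is cached by name, so when the function is replayed after a failure, completed steps return their cached results instead of re-executing.

```typescript
// Toy step runner: results are memoized by step name across replays,
// which is what makes step-level retries possible.
function makeRunner() {
  const memo = new Map<string, unknown>();
  return {
    executed: [] as string[], // which step bodies actually ran
    async run<T>(name: string, fn: () => Promise<T> | T): Promise<T> {
      if (memo.has(name)) return memo.get(name) as T; // replay: skip completed work
      this.executed.push(name);
      const result = await fn();
      memo.set(name, result); // checkpoint only on success
      return result;
    },
  };
}

// A job whose third step fails on the first attempt only.
async function job(step: ReturnType<typeof makeRunner>, state: { failed: boolean }) {
  await step.run("fetch-import", () => "import-record");
  await step.run("process-batch-1", () => 500);
  await step.run("send-email", () => {
    if (!state.failed) {
      state.failed = true;
      throw new Error("SMTP timeout"); // transient failure on attempt 1
    }
    return "sent";
  });
}
```

Running `job` once (it fails at `send-email`) and then replaying it re-executes only the failed step; `fetch-import` and `process-batch-1` are served from the memo.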
### What Makes a Good Step?

Each step should be a self-contained unit that either succeeds or fails cleanly:

| Good step | Why |
|---|---|
| Fetch a record from the database | Idempotent read — safe to retry |
| Process a batch of 500 CSV rows | Bounded work — retries only this batch |
| Send a notification email | Side effect isolated — won’t re-send previous emails on retry |
| Upload a file to blob storage | Single operation — clear success/failure |
## Jobs Orchestrate, Services Execute

Jobs are orchestrators. They coordinate the sequence of steps but don’t contain business logic themselves. The actual work happens in service and repository methods called from within `step.run()`.
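A sketch of that division of labor. The service objects and the `processCsvJob` name below are hypothetical, not folksbase’s actual modules; the point is the shape: each step body is a single service call, and the job only sequences them.

```typescript
// Hypothetical service layer: the real business logic lives here, not in the job.
const contactService = {
  async importBatch(rows: string[][]): Promise<number> {
    return rows.length; // pretend each row became a contact
  },
};
const emailService = {
  async notifyDone(userId: string, count: number): Promise<void> {},
};

type Step = { run<T>(name: string, fn: () => Promise<T>): Promise<T> };

// The job orchestrates: one service call per step, no business logic inline.
async function processCsvJob(step: Step, userId: string, batches: string[][][]) {
  let imported = 0;
  for (const [i, batch] of batches.entries()) {
    // One step per batch: a retry re-runs only the failed batch.
    imported += await step.run(`import-batch-${i}`, () => contactService.importBatch(batch));
  }
  await step.run("notify", () => emailService.notifyDone(userId, imported));
  return imported;
}
```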
## Event Data Convention

All jobs receive `userId` in their event data. This is used for Supabase user lookups (fetching the user’s email for notifications, for example).
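A sketch of what that convention looks like as a payload type. Only `userId` is the documented convention; the event name comes from the docs above, and the other field is hypothetical.

```typescript
// Illustrative payload shape for one event; only `userId` is guaranteed
// by the convention, `importId` is a hypothetical job-specific field.
type CsvConfirmedEvent = {
  name: "import/csv.confirmed";
  data: {
    userId: string;    // always present: used for Supabase user lookups
    importId?: string; // hypothetical
  };
};

const example: CsvConfirmedEvent = {
  name: "import/csv.confirmed",
  data: { userId: "user_123", importId: "imp_456" },
};
```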
## Error Handling in Jobs

Jobs use a try/catch at the top level to handle failures gracefully:

- Mark the record as failed — update the import/export status in the database
- Send a failure notification — best-effort email to the user
- Re-throw the error — so Inngest can track the failure and apply its retry policy
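A sketch of that top-level shape. The repository and mailer objects and the `runImport` name are hypothetical stand-ins for folksbase’s real modules; the three numbered comments correspond to the bullets above.

```typescript
type Step = { run<T>(name: string, fn: () => Promise<T>): Promise<T> };

// Hypothetical collaborators standing in for the real repository and mailer.
const importRepo = { async markFailed(id: string, reason: string): Promise<void> {} };
const mailer = { async sendFailureNotice(userId: string): Promise<void> {} };

async function runImport(
  step: Step,
  data: { importId: string; userId: string },
  work: () => Promise<void>, // the actual import, done by services
): Promise<void> {
  try {
    await step.run("import", work);
  } catch (err) {
    // 1. Mark the record as failed in the database.
    await step.run("mark-failed", () =>
      importRepo.markFailed(data.importId, (err as Error).message),
    );
    // 2. Send a failure notification (best-effort: its own errors are ignored).
    await step.run("notify-failure", () => mailer.sendFailureNotice(data.userId)).catch(() => {});
    // 3. Re-throw so Inngest records the failure and applies its retry policy.
    throw err;
  }
}
```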
For failures that retrying cannot fix (a malformed file, for example), jobs throw Inngest’s `NonRetriableError` instead, so the job fails immediately rather than burning through its retry policy.
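For example (the class below is a local stand-in so the sketch is self-contained; real code imports `NonRetriableError` from the `inngest` package, and the header-validation function is hypothetical):

```typescript
// Stand-in for `import { NonRetriableError } from "inngest";`.
class NonRetriableError extends Error {}

// A malformed CSV will be malformed on every attempt, so retrying is pointless.
function parseHeader(line: string): string[] {
  const cols = line.split(",").map((c) => c.trim());
  if (!cols.includes("email")) {
    throw new NonRetriableError("CSV is missing the required 'email' column");
  }
  return cols;
}
```

Thrown from inside a step, a non-retriable error skips further attempts of that step and fails the run.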
## Available Jobs

folksbase currently has four background jobs:

| Job | Event / Trigger | Purpose |
|---|---|---|
| `process-csv` | `import/csv.confirmed` | Parse and import CSV contacts in chunks |
| `process-export` | `export/csv.confirmed` | Stream contacts to a CSV file in blob storage |
| `send-welcome` | `user/signed.up` | Send welcome email to new users |
| `weekly-digest` | Cron: `0 8 * * 1` (Monday 8am UTC) | Send weekly activity summary email |
## What’s Next?

- Individual Jobs: detailed documentation for each background job.
- Streaming Architecture: how large file operations use streaming to avoid memory exhaustion.