Under the Hood: OpenAI Integration & Prompt Engine
Shipped in v1.0.11
With v1.0.11, NurtureHub's AI generation layer is fully operational. This post explains what was built, how it works, and what it means for agents using the platform.
Why This Matters
NurtureHub's core promise is that an agent assigns a contact to a category and immediately gets a personalised, ready-to-approve three-email nurture sequence — no workflow building, no copywriting. v1.0.11 is the release that makes that promise technically real. Every component described below runs silently in the background every time a sequence is generated.
The OpenAI Client
All calls to OpenAI run through a purpose-built client wrapper that adds three things the bare SDK doesn't provide out of the box:
Retry logic. Transient API errors (rate limits, network timeouts) are retried automatically with exponential back-off before the error is surfaced to the application. Agents should rarely, if ever, see a generation failure caused by a temporary OpenAI hiccup.
Token counting. Token usage is calculated for every prompt before dispatch and recorded against the completion. This feeds directly into the usage tracking system (see below) and ensures no request exceeds model context limits.
Model selection. The default model is gpt-4o. The wrapper is designed to allow model selection per request, which gives the platform flexibility to route shorter or simpler generations to lighter models in future without changing calling code.
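The retry behaviour described above can be sketched as a small generic helper. This is an illustrative sketch only, not NurtureHub's actual implementation; the function names, default retry count, and delays are assumptions.

```typescript
// Hypothetical sketch of the wrapper's retry layer (names and defaults are
// illustrative, not the production values).
async function callWithRetry<T>(
  fn: () => Promise<T>,
  maxRetries = 3,
  baseDelayMs = 500
): Promise<T> {
  let lastError: unknown;
  for (let attempt = 0; attempt <= maxRetries; attempt++) {
    try {
      return await fn();
    } catch (err) {
      lastError = err;
      if (attempt === maxRetries) break; // retries exhausted — surface the error
      // Exponential back-off: 500ms, 1s, 2s, ... before the next attempt.
      const delayMs = baseDelayMs * 2 ** attempt;
      await new Promise((resolve) => setTimeout(resolve, delayMs));
    }
  }
  throw lastError;
}
```

Because the helper is generic over the async call, the same logic wraps any OpenAI request without the calling code knowing about retries, and per-request model selection stays a plain argument on the wrapped call.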
Graceful Fallback
If OpenAI is unreachable or returns a non-recoverable error after retries are exhausted, the platform fails gracefully — the agent sees a clear, actionable error state and no partial data is written. Generation can be retried manually once service is restored.
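One common way to guarantee "clear error state, no partial data" is a discriminated-union result, so callers must handle failure explicitly before anything is persisted. A minimal sketch, assuming hypothetical field names:

```typescript
// Illustrative only: callers receive either a complete sequence or a typed
// error — there is no shape in which partial emails can leak through.
type GenerationResult =
  | { status: "ok"; emails: string[] }
  | { status: "error"; message: string; retryable: boolean };

function handleFailure(err: Error): GenerationResult {
  // Reached only after retries are exhausted; nothing has been written.
  return {
    status: "error",
    message: `Generation failed: ${err.message}. Please try again shortly.`,
    retryable: true,
  };
}
```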
The Prompt Template Engine
12 Category-Specific Templates
NurtureHub supports twelve lead categories. Each category has its own dedicated prompt template, written and tuned specifically for:
- The property context relevant to that lead type (e.g. a Landlord prompt understands rental yield and void periods; a Seller prompt understands valuation, market appraisal, and chain dynamics).
- The intent signals typical of that category at different stages of the journey.
- The appropriate agent–contact relationship and the tone that relationship calls for.
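A per-category template set like this is naturally a lookup keyed by category. The sketch below is hypothetical — the post names only Landlord and Seller, so the template text and any other keys are placeholders:

```typescript
// Hypothetical template registry; entries shown are illustrative stubs.
const categoryTemplates: Record<string, string> = {
  landlord:
    "You write to landlords. You understand rental yield and void periods.",
  seller:
    "You write to prospective sellers. You understand valuation, market appraisal, and chain dynamics.",
  // ...one tuned entry per supported lead category, twelve in total
};

function templateFor(category: string): string {
  const template = categoryTemplates[category.toLowerCase()];
  if (!template) {
    throw new Error(`No prompt template for category: ${category}`);
  }
  return template;
}
```

Failing loudly on an unknown category keeps a misconfigured contact from silently falling back to a generic prompt.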
This means the AI is never starting from a blank, generic instruction — it already knows the domain before agency-specific context is added.
System Prompt Construction
For every generation, a system prompt is assembled dynamically by layering three contextual blocks on top of the category template:
| Layer | What it contains |
|---|---|
| Brand voice profile | Tone descriptors, vocabulary preferences, formality level, and sign-off style as configured by the agency in their settings. |
| Agency context | Agency name, geographic market, and any positioning notes that should colour the copy. |
| Contact details | Contact name, assigned category, and any property or tenancy data available from the linked CRM record. |
The result is a prompt that produces copy that reads as if it were written by someone at that specific agency, for that specific contact, in that specific market — rather than generic AI output.
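The layering step can be sketched as simple string assembly over the three blocks. Field names here are assumptions for illustration, not the actual settings schema:

```typescript
// Hypothetical shapes for the three context layers.
interface BrandVoice { tone: string; formality: string; signOff: string; }
interface AgencyContext { name: string; market: string; positioning?: string; }
interface ContactDetails { name: string; category: string; propertyNotes?: string; }

function buildSystemPrompt(
  categoryTemplate: string,
  voice: BrandVoice,
  agency: AgencyContext,
  contact: ContactDetails
): string {
  return [
    // The tuned category template is always the base layer.
    categoryTemplate,
    `Brand voice: ${voice.tone}, ${voice.formality}. Sign off emails with "${voice.signOff}".`,
    `Agency: ${agency.name}, serving ${agency.market}.` +
      (agency.positioning ? ` Positioning: ${agency.positioning}.` : ""),
    `Contact: ${contact.name} (${contact.category}).` +
      (contact.propertyNotes ? ` Property context: ${contact.propertyNotes}.` : ""),
  ].join("\n\n");
}
```

Keeping each layer a separate block means an agency can update its brand voice once and every subsequent generation picks it up without touching the category templates.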
Usage Tracking
Every successful generation writes a record to the `usageEvents` table. Each record captures:
- Tokens consumed — split into prompt tokens and completion tokens.
- Estimated cost — calculated in GBP based on the model's published per-token pricing at generation time.
- Model used — so historical records remain accurate if the default model changes.
- Contact category — enabling breakdowns by lead type.
- Agency identifier — enabling per-agency reporting.
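A record with those fields, plus the GBP cost estimate, might look like the sketch below. The per-token prices are placeholders, not OpenAI's actual published rates, and the field names are assumptions:

```typescript
// Illustrative usageEvents record shape.
interface UsageEvent {
  promptTokens: number;
  completionTokens: number;
  estimatedCostGbp: number;
  model: string;          // snapshotted so history survives a default change
  contactCategory: string;
  agencyId: string;
}

// Placeholder GBP prices per 1,000 tokens — NOT real pricing.
const pricingGbpPer1k: Record<string, { prompt: number; completion: number }> = {
  "gpt-4o": { prompt: 0.002, completion: 0.008 },
};

function buildUsageEvent(
  model: string,
  promptTokens: number,
  completionTokens: number,
  contactCategory: string,
  agencyId: string
): UsageEvent {
  const price = pricingGbpPer1k[model];
  if (!price) throw new Error(`No pricing configured for model: ${model}`);
  const estimatedCostGbp =
    (promptTokens / 1000) * price.prompt +
    (completionTokens / 1000) * price.completion;
  return { promptTokens, completionTokens, estimatedCostGbp, model, contactCategory, agencyId };
}
```

Pricing the event at generation time, against the model actually used, is what keeps historical spend figures accurate even after rates or the default model change.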
This data underpins future usage dashboards and consumption-based billing features. For now, it provides full observability into AI spend at the platform level.
What This Enables Next
With the prompt engine and AI client in place, the next pieces — sequence generation UI, review-and-approval flow, and send scheduling — have a stable, well-instrumented foundation to build on.