This guide builds a support bot that handles the questions you answer on repeat: customer inquiries, product questions, onboarding help, FAQ responses. The bot runs inside OpenClaw, uses your actual knowledge base, and never makes up answers. No monthly SaaS fee. No external service to configure. You own it completely.
TL;DR
A support bot is an OpenClaw cron job that watches an input source (email, Discord, Telegram, a file), matches questions against a knowledge base you write, drafts replies, and either sends them automatically or queues them for your review. Setup takes about two hours. Ongoing cost: effectively zero with local models, and a few cents per day if you route complex replies to a paid API.
What a support bot actually is
The phrase “support bot” gets used for everything from a billion-dollar Salesforce deployment to a WordPress plugin with canned responses. This guide is about something specific: an agent that reads an incoming question, searches a knowledge base you wrote, and produces a reply in your voice that actually answers the question.
That distinction matters. Most canned-response systems match keywords and return pre-written text. That works for narrow FAQs but breaks immediately when the question is phrased differently than the preset. An OpenClaw support bot reads the meaning of the question and constructs a reply from the knowledge base. It handles phrasing variations, follow-up questions, and combinations of topics that no keyword system could anticipate.
The knowledge base is yours. You write it. The bot’s answers are only as good as what you put in. This is not a weakness. It’s a feature. You control exactly what the bot knows and says. It cannot hallucinate an answer that contradicts your actual policy because your actual policy is what it reads.
Before you build: what you need
The bot needs three things before setup begins:
- An input source: where questions come from. Options: a monitored Discord channel, a Telegram thread, an email inbox the agent can read, or a file that another process writes to (e.g., a contact form that appends to a CSV).
- A knowledge base: a markdown file containing your actual answers to the questions you receive. More on this below.
- A review decision: do replies go out automatically, or do they queue for your approval? For most operators, automatic for known questions and queued for anything novel is the right starting point.
Before I set up the support bot, tell me: (1) Where do support questions currently come from? List every channel: email, Discord, Telegram, a contact form, direct messages, etc. (2) What are the 10 most common questions I get? (3) What channels do I currently use to respond? I’ll use this to configure the input source and output routing correctly.
Building the knowledge base
The knowledge base is a markdown file the bot reads before drafting any reply. It should contain your real answers, written in your voice, to every question the bot will handle.
Bad knowledge base entries look like this:
- Q: What are your prices?
A: Our pricing is competitive and designed to meet your needs. Contact us for a custom quote.
That entry is useless. The bot will produce a useless reply. Garbage in, garbage out.
Good knowledge base entries look like this:
- Q: What does the starter plan cost?
A: $29/month. Includes up to 3 users, 10GB storage, and email support. No setup fee. Cancel anytime. Annual option is $290 ($24/month effective rate).
Specifics. Numbers. The actual answer. Not marketing language.
How to build the knowledge base fast
The fastest way to build a useful knowledge base is to export your past support conversations and extract the patterns from them:
I’m going to paste 20 real support conversations below. For each one, extract: (1) the core question being asked, normalized so similar questions collapse to one, (2) my actual answer from the conversation. Format as Q: / A: pairs. Group similar questions together. After extracting, tell me which question types appear most often and which answers were inconsistent across conversations.
Paste your actual support conversations after that prompt. The agent will do the extraction and tell you where your answers have been inconsistent, which is exactly where the bot will produce bad replies if you don’t standardize first.
Knowledge base structure
Save the knowledge base at a fixed path. Recommended:
Create a file at /home/node/.openclaw/workspace/support/KNOWLEDGE.md. Structure it with these sections: ## Pricing, ## Product Features, ## Getting Started, ## Common Problems, ## Policies (refunds, cancellations, terms), ## What We Don’t Support. Under each section, write Q: and A: pairs. Leave a ## Unknown section at the bottom. This is where I’ll note questions the bot couldn’t answer.
The “Unknown” section at the bottom is important. When the bot encounters a question it can’t answer from the knowledge base, it should add it there and escalate to you. Over time, the Unknown section tells you exactly what to add to improve coverage.
Building the decision framework
The decision framework is the instruction that determines what the bot does with each question. It has four parts:
- Answerable from knowledge base: draft and send (or queue for review)
- Partially answerable: draft what can be answered, flag what can’t, escalate
- Not in knowledge base: send a holding reply, log the question, notify me
- Hostile or spam: log and discard
Create a file at /home/node/.openclaw/workspace/support/DECISION-FRAMEWORK.md. Write decision rules for incoming support questions: (1) If the question is fully answerable from KNOWLEDGE.md, draft a reply and append it to /support/QUEUE.md with status DRAFT. (2) If partially answerable, draft what you can, mark the gap explicitly, append to QUEUE.md with status NEEDS-REVIEW. (3) If not in the knowledge base, append to KNOWLEDGE.md under ## Unknown and append a holding reply to QUEUE.md with status ESCALATE. (4) If spam or hostile, append to /support/SPAM.md and take no further action. Every entry in QUEUE.md must include: original question, source channel, timestamp, and draft reply.
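The four-way routing in that framework reduces to a small piece of logic. Here is a minimal sketch; the `kb_coverage` score and `is_spam` flag are hypothetical inputs that the agent would produce while applying KNOWLEDGE.md, not part of any OpenClaw API:

```python
# Sketch of the four-way decision framework. The statuses mirror
# DECISION-FRAMEWORK.md: DRAFT, NEEDS-REVIEW, ESCALATE, SPAM.

def classify(question: str, kb_coverage: float, is_spam: bool) -> str:
    """Map a question to the status it gets in QUEUE.md.

    kb_coverage is a hypothetical 0.0-1.0 score for how much of the
    question the knowledge base answers.
    """
    if is_spam:
        return "SPAM"          # append to SPAM.md, no further action
    if kb_coverage >= 1.0:
        return "DRAFT"         # fully answerable: draft and queue
    if kb_coverage > 0.0:
        return "NEEDS-REVIEW"  # partially answerable: draft, flag the gap
    return "ESCALATE"          # not in knowledge base: holding reply, log
```

The point of writing it this way, even as pseudocode, is that every incoming question must land in exactly one of the four buckets; there is no fifth path where the bot improvises.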
Setting up the cron job
The cron job is what makes the bot run continuously. It checks the input source, processes any new questions, and updates the queue. The exact setup depends on your input source.
Discord channel input
For a Discord channel where customers post questions:
Create a cron job that runs every 10 minutes. Task: Read the last 20 messages from Discord channel [channel-name]. Identify any messages that are questions (not bot messages, not my own messages). For each new question (not already in /support/PROCESSED.md): read KNOWLEDGE.md, apply DECISION-FRAMEWORK.md, and process the question. After processing each question, append its message ID to PROCESSED.md so it’s not processed again. Use ollama/phi4:latest. No Telegram notification needed unless a question gets ESCALATE status.
Telegram thread input
For a Telegram group or thread:
Create a cron job that runs every 10 minutes. Task: Check my Telegram support thread for new messages. For each new message that is a question (not already in /support/PROCESSED.md): read KNOWLEDGE.md, apply DECISION-FRAMEWORK.md, process the question, and append the message ID to PROCESSED.md. For ESCALATE items, send me a Telegram notification with the original question and the reason it couldn’t be answered. Use ollama/phi4:latest for processing.
File-based input (contact forms)
For a contact form that writes to a CSV or JSON file:
Create a cron job that runs every 15 minutes. Task: Read /support/incoming/new-submissions.csv. For each row not already in /support/PROCESSED.md (match by submission ID): extract the question field, read KNOWLEDGE.md, apply DECISION-FRAMEWORK.md, process the question, and log the submission ID to PROCESSED.md. For DRAFT replies, append to QUEUE.md. For ESCALATE replies, send Telegram notification. Use ollama/phi4:latest.
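The deduplication step in that cron job is the part most worth getting right: the submission ID is the only thing preventing double replies. A minimal sketch of the logic, written as pure functions over file contents (the column name `id` is an assumption about your form's CSV schema):

```python
import csv
import io

def unprocessed_rows(csv_text: str, processed_text: str) -> list:
    """Return submissions whose ID is not yet listed in PROCESSED.md.

    processed_text is the raw contents of PROCESSED.md, one ID per line.
    """
    processed = {line.strip() for line in processed_text.splitlines()
                 if line.strip()}
    return [row for row in csv.DictReader(io.StringIO(csv_text))
            if row["id"] not in processed]

def record_processed(processed_text: str, submission_id: str) -> str:
    """Return updated PROCESSED.md contents with the new ID appended."""
    return processed_text + submission_id + "\n"
```

If the cron run crashes between drafting a reply and recording the ID, the next run reprocesses the submission; since drafts land in QUEUE.md rather than being sent, a duplicate draft is a minor annoyance, not a duplicate customer email.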
The reply queue and review workflow
The QUEUE.md file is the handoff between the bot and you. Everything the bot drafts lands here. Your review workflow determines what happens next.
Recommended starting point: review everything for the first week. This tells you where the bot is getting things right, where it’s drifting from your voice, and where the knowledge base has gaps. After a week, you’ll know which question categories are safe to auto-send and which need eyes.
Every morning at 8 AM, read /support/QUEUE.md. Count entries by status: DRAFT, NEEDS-REVIEW, ESCALATE. Send me a Telegram message: “Support queue: [X] drafts ready to send, [Y] need review, [Z] escalated. Oldest pending: [timestamp of oldest entry].” Use ollama/llama3.1:8b for this summary. It just counts and formats.
Auto-send for trusted categories
Once you’ve validated that the bot handles certain question types correctly, you can auto-send for those categories:
When processing a support question, after drafting the reply, check whether the question category is in /support/AUTO-SEND.md. If yes, send the reply immediately via [channel] and log it to /support/SENT.md. If no, append to QUEUE.md with status DRAFT and wait for my review. The AUTO-SEND.md file contains the question categories I’ve approved for automatic replies.
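The auto-send check itself is a simple membership test. A sketch, assuming AUTO-SEND.md lists one approved category per line (the exact file layout is your choice):

```python
def should_auto_send(category: str, auto_send_text: str) -> bool:
    """True if the question's category appears in AUTO-SEND.md.

    Matching is case-insensitive so 'Pricing' and 'pricing' agree.
    """
    approved = {line.strip().lower() for line in auto_send_text.splitlines()
                if line.strip()}
    return category.lower() in approved
```

Keeping the approved list in a plain file rather than in the prompt means widening or narrowing the auto-send surface is a one-line edit, with a visible history if the file is under version control.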
Maintaining the knowledge base over time
The support bot gets better as the knowledge base gets better. The Unknown section in KNOWLEDGE.md is your maintenance queue. Review it weekly and convert the unknowns into proper Q&A entries.
Every Sunday at 9 AM, read the ## Unknown section of /support/KNOWLEDGE.md. Count how many unanswered questions have accumulated. Send me a Telegram message: “Knowledge base gap report: [X] unanswered questions logged this week. Top 3 most-asked: [list]. These are good candidates to add to the knowledge base.” Use ollama/phi4:latest.
The gap report turns the Unknown section into an action item instead of a graveyard. Questions that appear more than once are the highest-priority additions.
Voice drift detection
Over time, the bot can drift from your actual voice, especially as you add new knowledge base entries written at different times in different moods. A monthly voice audit catches this:
On the first of each month, read the last 30 entries in /support/SENT.md and compare them against my voice document at /support/VOICE.md. Flag any replies that use different vocabulary, sentence length, or tone than the voice document specifies. List the flagged replies and the specific drift you detected. Do not fix them. Just report. I’ll decide which to update.
The voice document
The voice document is a short file that tells the bot how you communicate. It’s distinct from the knowledge base. The knowledge base covers what to say; the voice document covers how to say it.
Create /support/VOICE.md. Include: (1) 3-5 words that describe my communication style (e.g., direct, warm, no corporate jargon). (2) Sentence length preference: short, medium, or varied. (3) Greeting format I use. (4) Sign-off format I use. (5) Three phrases I never use (the bot should avoid these). (6) Three example replies I’ve written that represent my voice at its best. Paste actual examples, not descriptions.
The last item, three actual example replies, is the most important. The bot learns more from seeing your real words than from any description of your style.
What this costs to run
A support bot running on ollama/phi4:latest costs nothing in API fees. The model runs locally on your server. At typical small-business support volumes (20-50 questions per day), the cron job runs 144 times per day and processes a question only when one exists. Most runs are a quick “no new questions” check that takes under a second.
If you route to deepseek-chat for complex or high-stakes replies, cost is approximately $0.001-0.003 per question at typical reply length. At 50 questions per day, that’s $0.05-0.15 per day, well under $5/month.
Cost breakdown (March 2026 pricing)
- Monitoring cron (ollama/phi4:latest): $0.00, runs locally
- Knowledge base lookup + reply drafting (phi4:latest): $0.00, runs locally
- Complex/escalated replies (deepseek-chat): ~$0.001-0.003 per question
- Daily summary (ollama/llama3.1:8b): $0.00, runs locally
- Monthly gap report (phi4:latest): $0.00, runs locally
- Total for 50 questions/day: Under $3/month if routing complex replies to deepseek-chat. Under $0.50/month if using phi4 for everything.
Handling edge cases
Questions that combine topics
A question like “How do I upgrade my plan and what happens to my existing data?” spans two knowledge base sections. The bot handles this well if the knowledge base entries are specific:
When a question spans multiple knowledge base sections, address each part separately in the reply. Do not combine them into one vague answer. Structure the reply: first answer part one, then answer part two. If one part is answerable and the other is not, answer what you can and flag the unanswered part explicitly with “I don’t have that information. Flagging for follow-up.”
Angry or frustrated customers
A frustrated customer needs to feel heard before they need an answer. The voice document should specify how to handle this:
Add to VOICE.md: Tone detection rules. If a message contains frustration signals (words like “still”, “again”, “unacceptable”, “terrible”, “fix this”, or multiple question marks), acknowledge the frustration in the first sentence before providing the answer. Do not be defensive. Do not apologize excessively. One sentence of acknowledgment, then the answer. These replies should always go to NEEDS-REVIEW status, not DRAFT. I want to check them before they send.
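The frustration signals in that rule can also be checked deterministically before the model ever sees the message, as a cheap pre-filter. A sketch using the trigger list above (the word-boundary matching and the two-question-mark threshold are implementation choices, not part of the framework):

```python
import re

# Trigger patterns from the VOICE.md tone rules; \b avoids false
# positives like "distill" matching "still".
FRUSTRATION_PATTERNS = [r"\bstill\b", r"\bagain\b", r"\bunacceptable\b",
                        r"\bterrible\b", r"\bfix this\b"]

def is_frustrated(message: str) -> bool:
    """Heuristic pre-filter; the model still makes the final tone call."""
    lower = message.lower()
    if lower.count("?") >= 2:   # "multiple question marks" signal
        return True
    return any(re.search(p, lower) for p in FRUSTRATION_PATTERNS)
```

A keyword check like this will miss sarcasm and politeness-wrapped anger, which is exactly why these replies go to NEEDS-REVIEW rather than auto-send.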
Questions that require account-specific information
Some questions can’t be answered from a general knowledge base because they require knowing something specific about the customer’s account. The bot needs to recognize these and respond appropriately:
Add to DECISION-FRAMEWORK.md: Account-specific questions. If a question requires account-specific data to answer (e.g., “why was I charged X”, “why can’t I access Y”, “what’s my current usage”), do not attempt to answer from the knowledge base. Instead, draft a reply that acknowledges the question, asks for the account identifier (email or ID), and explains that a human will follow up. Route to ESCALATE status.
Advanced: tiered routing by question type
Once the basic bot is running and you’ve built trust in the auto-send categories, you can add tiered routing based on question complexity levels:
When processing a support question, classify it first: Tier 1 (simple FAQ, single-topic, clearly covered in knowledge base), Tier 2 (multi-topic or partially covered), Tier 3 (account-specific, emotionally charged, or novel). Route Tier 1 to ollama/phi4:latest. Route Tier 2 to deepseek/deepseek-chat. Route Tier 3 to anthropic/claude-sonnet-4-6 and always set status NEEDS-REVIEW. Log the tier classification alongside every QUEUE.md entry so I can audit the routing.
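The tier-to-model mapping is worth pinning down as data rather than prose, so changing a routing target never requires rewording the prompt. A sketch (tier 2's default status is left as DRAFT here; in practice the decision framework may push partially covered questions to NEEDS-REVIEW):

```python
# Tier routing table from the instruction above.
TIER_MODEL = {
    1: "ollama/phi4:latest",          # simple FAQ, local, free
    2: "deepseek/deepseek-chat",      # multi-topic, cheap API
    3: "anthropic/claude-sonnet-4-6", # high-stakes, always reviewed
}

def route(tier: int) -> dict:
    """Pick the model and default queue status for a classified question."""
    return {
        "model": TIER_MODEL[tier],
        "status": "NEEDS-REVIEW" if tier == 3 else "DRAFT",
    }
```

Logging the tier alongside each QUEUE.md entry, as the instruction requires, is what makes this auditable: if Tier 1 answers keep needing edits, the classifier is mislabeling, not the model underperforming.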
Tier 3 using Claude Sonnet costs more per question but it’s the right call for high-stakes interactions. A frustrated customer who receives a perfect reply from the bot is indistinguishable from one who received a perfect reply from you.
What to do when the bot gets it wrong
The bot will make mistakes. The failure modes are predictable:
- Wrong answer from the knowledge base: the knowledge base entry was ambiguous or incorrect. Fix the entry, not the bot.
- Right answer, wrong tone: the voice document doesn’t cover this scenario. Add the example to VOICE.md.
- Question classified incorrectly: the decision framework boundaries need tightening. Add the example to DECISION-FRAMEWORK.md as an explicit case.
- Answered something it shouldn’t have: the knowledge base was too broad. Narrow the relevant entry or add an explicit “do not answer” rule to DECISION-FRAMEWORK.md.
Every mistake is a training example for the knowledge base or decision framework. Keep a short log of failures and what you changed. After 30 days you’ll have a bot that handles your specific support volume reliably.
SOTA model recommendations for support bots (March 2026)
ollama/phi4:latest
Best for the majority of support queries. At 14.7B parameters running locally, it handles single-topic FAQ questions, pricing queries, and how-to questions reliably when the knowledge base is well-written. Zero API cost. Slight weakness on multi-topic questions. It tends to blend answers when topics overlap. Use for Tier 1. On typical VPS hardware (4 vCPU, 8GB RAM), phi4 processes a support question and drafts a reply in 8-15 seconds, which is fast enough for a 10-minute cron interval without queue buildup.
deepseek/deepseek-chat
Best cost-to-quality ratio for Tier 2 questions. Multi-topic handling is significantly better than phi4. At approximately $0.001 per typical support reply, it’s the right call for anything moderately complex. Use for Tier 2.
anthropic/claude-sonnet-4-6
Best for tone matching on sensitive or emotionally charged messages. When a customer is frustrated, the difference in quality between phi4 and Sonnet is significant. Reserve for Tier 3 and high-stakes account-specific escalations. At roughly $0.01-0.03 per reply, justified for the 5-10% of support volume that needs it. One well-crafted Sonnet reply that retains an upset customer is worth more than the API cost by several orders of magnitude. The question is not whether to use it, but whether to use it for every message or only the ones that need it. The tiered routing framework above answers that.
Frequently asked questions
Can I use this for internal support (employee questions)?
Yes, and it’s often the better first use case. Internal support questions are more predictable, the knowledge base is easier to build (it’s your own company documentation), and the stakes for a wrong answer are lower. Build the internal version first, test it for two weeks, then adapt it for customer-facing use. The architecture is identical, just different knowledge bases and routing targets.
What if customers figure out they’re talking to a bot?
The right approach is transparency, not disguise. Add a note to your support channel description or auto-reply that says replies are drafted by an AI assistant and reviewed before sending (or that auto-replies come from an AI). Most customers care about getting a fast, accurate answer. They don’t care whether the first draft was written by a person or a model. What they care about is whether someone is actually accountable, and you still are, because you control the knowledge base and the review process.
How do I handle languages other than English?
phi4 and deepseek-chat both handle multilingual support well. Add a language detection step at the start of the processing instruction: “If the question is not in English, identify the language, answer in the same language, and flag the reply for my review regardless of category.” Until you’ve validated the bot’s performance in each language you receive, review all non-English replies manually.
Can the bot send replies automatically without my review?
Yes, once you’ve validated the relevant categories. Start with review-only for at least a week. After validation, enable auto-send for specific question types by listing them in AUTO-SEND.md. Never auto-send for account-specific questions, billing questions, or anything involving money. Those always need human eyes.
What’s the biggest mistake people make building support bots?
Putting the bot live before the knowledge base is good. A support bot with a thin or vague knowledge base produces thin or vague answers, which are often worse than no answer at all because they create a false impression that the question was addressed. Build the knowledge base first. Get it to where every Q has a specific, accurate A. Then build the bot. The order matters.
How do I handle support questions that come with attachments (screenshots, logs)?
For image attachments: the agent’s vision capability (if enabled) can analyze screenshots. Add to the processing instruction: “If the question includes an image, describe what you see in the image and include that description in your analysis before drafting the reply.” For log files: the agent can read text attachments. Add: “If the question includes a log file or error output, read it and identify the specific error before drafting the reply.”
My support volume is very low (5-10 questions per week). Is this worth building?
Yes, for a different reason than speed. At low volume, the value isn’t throughput. It’s consistency. Every question gets the same quality of answer regardless of what else is happening. The bot doesn’t write a worse reply on a busy day or forget to mention the refund policy when it’s late. For one-person operations, that consistency is worth more than the time savings.
How do I handle support questions that come in overnight when I’m not reviewing the queue?
Two options. First: set the auto-send threshold lower than you would during business hours. If the bot has been reliable on a category for two weeks, let it auto-send overnight for that category and flag everything else for morning review. Second: add a holding reply for NEEDS-REVIEW items that arrive outside your review window. The holding reply acknowledges the question, sets an expectation (“I’ll follow up by 10am ET”), and sends immediately. Your actual reviewed reply goes out in the morning. This keeps response times honest without requiring you to review at 3am. Add the holding reply template to VOICE.md and instruct the cron job to send it for any NEEDS-REVIEW item older than 2 hours.
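The "older than 2 hours" check for the overnight holding reply is a simple age comparison the cron job applies to each NEEDS-REVIEW entry's timestamp. A sketch:

```python
from datetime import datetime, timedelta

def needs_holding_reply(entry_time: datetime, now: datetime,
                        max_age: timedelta = timedelta(hours=2)) -> bool:
    """True if a NEEDS-REVIEW item has waited long enough that the
    holding reply template from VOICE.md should go out."""
    return now - entry_time > max_age
```

One detail worth handling in the real instruction: once the holding reply is sent, mark the entry so the next cron run doesn't send it again.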
What happens if the bot sends a wrong answer before I’ve set up auto-send review?
If you’re in review-only mode (everything goes to QUEUE.md and nothing sends without your approval), a wrong draft costs nothing. You catch it before it goes out and use it to improve the knowledge base. The only risk of a wrong answer reaching a customer is if you’ve enabled auto-send for a category before validating it. This is why the testing sequence matters: run Step 1 coverage testing, validate manually for a week, then enable auto-send only for categories where the drafts required zero or minimal edits across at least 10 examples.
Running a multi-channel support bot
Most operators receive questions through more than one channel. The same person might send an email, then follow up in Discord, then ask the same question again via Telegram. Without coordination, the bot processes these as three separate conversations and may send three slightly different replies.
The fix is a single QUEUE.md that all channels write to, with the channel source recorded on every entry:
Update the processing instruction for all support cron jobs: every entry written to QUEUE.md must include a “source” field: email, discord, telegram, or form. When I review QUEUE.md, I should be able to see at a glance which channel each question came from and whether the same person has asked the same question across multiple channels. For cross-channel duplicates (same question from same person within 48 hours), process only once and mark the others as DUPLICATE with a reference to the original.
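The 48-hour duplicate check reduces to comparing a normalized question against recent entries from the same sender. A minimal sketch; it assumes exact-match after whitespace and case normalization, whereas the agent itself can match paraphrased duplicates semantically:

```python
from datetime import datetime, timedelta

def find_duplicate(entries: list, sender: str, question: str,
                   now: datetime,
                   window: timedelta = timedelta(hours=48)):
    """Return the original QUEUE.md entry if the same sender asked the
    same (normalized) question within the window, else None.

    Each entry is a dict with 'sender', 'question', and 'time' keys;
    this shape is an assumption, not a fixed OpenClaw format.
    """
    norm = " ".join(question.lower().split())
    for e in entries:
        same_q = " ".join(e["question"].lower().split()) == norm
        if e["sender"] == sender and same_q and now - e["time"] <= window:
            return e
    return None
```

Note that "same person" across channels only works once CONTACTS.md maps a Discord username and an email address to one sender identity; without that mapping, this check catches repeats within a channel but not across them.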
Cross-channel deduplication requires a CONTACTS.md file that maps identifiers across channels. This is worth building early even if you’re starting with one channel:
Create /support/CONTACTS.md. Structure: one entry per known contact, with their identifiers across channels (email, Discord username, Telegram ID, form submission email). When processing a question, check whether the sender is already in CONTACTS.md. If yes, note any previous interactions in the reply context. If no, create a minimal entry with the channel and identifier. Never store sensitive personal information beyond what’s needed to correlate contacts across channels.
Channel-specific reply formatting
A good reply on Telegram reads differently from a good reply on email. Email allows longer prose. Telegram expects brevity. Discord supports markdown. The voice document should capture these differences:
Add to VOICE.md: Channel formatting rules. Email: full sentences, greeting and sign-off, up to 150 words for complex answers. Telegram: direct, no greeting, under 80 words, no bullet lists longer than 3 items. Discord: markdown formatting allowed, use bold for key terms, under 120 words. Form replies: formal tone, include contact information at the bottom. Apply the appropriate format based on the source channel in each QUEUE.md entry.
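Those per-channel rules are easiest to audit as a table. A sketch that encodes the limits above and flags drafts that blow the word budget (the rule values are the ones from the VOICE.md instruction; adjust to taste):

```python
# Per-channel formatting rules from VOICE.md.
CHANNEL_RULES = {
    "email":    {"max_words": 150, "greeting": True,  "markdown": False},
    "telegram": {"max_words": 80,  "greeting": False, "markdown": False},
    "discord":  {"max_words": 120, "greeting": False, "markdown": True},
    "form":     {"max_words": 150, "greeting": True,  "markdown": False},
}

def violates_rules(reply: str, channel: str) -> bool:
    """Flag drafts that exceed the channel's word budget."""
    return len(reply.split()) > CHANNEL_RULES[channel]["max_words"]
```

A deterministic length check like this pairs well with the monthly voice audit: the model handles tone, the table handles the things a table can enforce.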
Integrating with ticketing systems
If you use a ticketing system (Linear, Notion databases, or a simple CSV), the support bot can write directly to it instead of a local QUEUE.md file:
Notion database integration
After drafting a reply for a support question, create a new row in my Notion database: POST https://api.notion.com/v1/pages with Authorization: Bearer [NOTION_API_KEY] and a Notion-Version header. Set these properties: Question (title), Source (select: email/discord/telegram), Status (select: Draft/Needs Review/Escalate), DraftReply (rich text), Timestamp (date), ContactID (text). Save the Notion page ID to PROCESSED.md alongside the question source ID so the entry is not processed again.
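The request body for that Notion call follows the API's property-value shapes (`title`, `select`, `rich_text`). A sketch of the payload builder; the property names must match your database schema exactly, and the names here are the ones assumed by the instruction above:

```python
def notion_ticket_payload(database_id: str, question: str, source: str,
                          status: str, draft: str) -> dict:
    """Build the JSON body for POST https://api.notion.com/v1/pages.

    Property names (Question, Source, Status, DraftReply) must match
    the Notion database's schema exactly or the API rejects the call.
    """
    return {
        "parent": {"database_id": database_id},
        "properties": {
            "Question": {"title": [{"text": {"content": question}}]},
            "Source": {"select": {"name": source}},
            "Status": {"select": {"name": status}},
            "DraftReply": {"rich_text": [{"text": {"content": draft}}]},
        },
    }
```

The request also needs `Authorization: Bearer` and `Notion-Version` headers; the agent handles the HTTP call itself, so the payload shape is the part worth pinning down.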
Linear integration
After classifying a support question as ESCALATE, create a Linear issue: POST https://api.linear.app/graphql with Authorization: Bearer [LINEAR_API_KEY]. GraphQL mutation: issueCreate with title set to the question summary, description set to the full question plus bot analysis, teamId set to [SUPPORT_TEAM_ID], priority set to 3 (normal on Linear's 0-4 scale) unless the message contains frustration signals (set priority 1, urgent). Save the Linear issue ID to PROCESSED.md.
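Linear's GraphQL endpoint takes the mutation and its variables in one JSON body. A sketch of the payload builder; note that on Linear's priority scale 1 is urgent and 3 is normal, and the mutation field is `issueCreate`:

```python
def linear_issue_payload(team_id: str, title: str, description: str,
                         urgent: bool) -> dict:
    """Build the JSON body for POST https://api.linear.app/graphql."""
    mutation = """
    mutation CreateIssue($input: IssueCreateInput!) {
      issueCreate(input: $input) { success issue { id } }
    }"""
    return {
        "query": mutation,
        "variables": {"input": {
            "teamId": team_id,
            "title": title,
            "description": description,
            # Linear priority: 0 none, 1 urgent, 2 high, 3 normal, 4 low.
            "priority": 1 if urgent else 3,
        }},
    }
```

Saving the returned issue ID to PROCESSED.md, as the instruction says, is what prevents the same escalation from creating a second ticket on the next cron run.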
Simple CSV ticketing
For operators who don’t use external ticketing systems, a CSV is sufficient and portable:
Instead of writing to QUEUE.md, append to /support/tickets.csv with these columns: id (auto-increment), timestamp, source_channel, sender_id, question, category, status, draft_reply, reviewed_by, sent_at. The CSV format makes it easy to open in any spreadsheet for weekly review. At the end of each week, archive the resolved rows to /support/archive/tickets-YYYY-WW.csv.
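One detail that breaks naive CSV appending: questions and drafts routinely contain commas and newlines, so fields need proper quoting. A sketch that serializes one ticket with the column order above, letting the `csv` module handle escaping:

```python
import csv
import io

# Column order from the ticketing instruction above.
TICKET_COLUMNS = ["id", "timestamp", "source_channel", "sender_id",
                  "question", "category", "status", "draft_reply",
                  "reviewed_by", "sent_at"]

def ticket_row(ticket: dict) -> str:
    """Serialize one ticket as a CSV line in the fixed column order.

    csv.writer quotes fields containing commas or newlines, so a
    question like "a, b" survives a spreadsheet round-trip intact.
    """
    buf = io.StringIO()
    csv.writer(buf).writerow([ticket.get(col, "") for col in TICKET_COLUMNS])
    return buf.getvalue()
```

Appending `ticket_row(...)` output to /support/tickets.csv keeps the file openable in any spreadsheet, which is the whole point of choosing CSV over a database.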
Measuring support bot performance
After two weeks of running, measure these metrics to understand where the bot is helping and where it’s not:
- Answer rate: what percentage of questions were answered from the knowledge base vs. escalated
- Review rate: what percentage of drafts required your edits before sending
- Edit distance: how much you changed the drafts on average (high edit distance means the voice or knowledge base needs work)
- Unknown growth rate: how many new unknowns are being logged per week (declining rate means the knowledge base is maturing)
- Response time: average time from question received to reply sent
On the last day of each month, generate a support bot performance report. Read /support/SENT.md, QUEUE.md, SPAM.md, and the ## Unknown section of KNOWLEDGE.md. Calculate: total questions received, answer rate (DRAFT + sent / total), escalation rate (ESCALATE / total), questions still pending in queue, unknown questions logged this month. Write the report to /support/reports/YYYY-MM.md and send me a Telegram summary with the headline numbers.
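The headline rates in that report are straightforward ratios over the month's statuses. A sketch, assuming each processed question contributes one status string (SENT here stands in for DRAFT entries that were actually sent):

```python
def monthly_metrics(statuses: list) -> dict:
    """Compute answer and escalation rates from a month of statuses."""
    total = len(statuses)
    answered = sum(1 for s in statuses if s in ("DRAFT", "SENT"))
    escalated = sum(1 for s in statuses if s == "ESCALATE")
    return {
        "total": total,
        "answer_rate": answered / total if total else 0.0,
        "escalation_rate": escalated / total if total else 0.0,
    }
```

Defining the denominators once, in one place, matters more than it looks: if the answer rate counts spam in one month's report and excludes it in the next, the trend line is meaningless.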
The 80% rule
A well-built support bot should answer 80% of questions from the knowledge base within 60 days of launch. If you’re below 70% after 60 days, the knowledge base has gaps. If you’re above 90%, consider whether you’re over-reaching into territory that should stay with humans.
Below 50% after 30 days is a signal the knowledge base was built from the wrong source material, or that the bot is receiving questions outside its designed scope. Audit the Unknown section to understand what’s not being covered.
The most common reason operators stay stuck below 50%: they built the knowledge base from their product documentation rather than from actual support conversations. Documentation tells you what the product does. Support conversations tell you what users don’t understand about what the product does. These are different sources with different answers. If you built from documentation, export your actual support history and rebuild from that instead.
Testing before going live
Never put a support bot live without testing it against real historical questions first. The testing sequence:
Step 1: Knowledge base coverage test
Here are 20 real support questions I’ve received in the past 90 days. [paste questions] For each one: (1) search KNOWLEDGE.md and tell me whether you found a relevant entry, (2) if yes, show me the entry and the draft reply you would produce, (3) if no, classify it as Unknown. At the end, tell me the coverage percentage and which question types are missing from the knowledge base.
Step 2: Voice consistency test
Here are 5 real replies I’ve sent to customers in the past. [paste replies] Now here are 5 questions the bot would answer. Draft replies using KNOWLEDGE.md and VOICE.md. After you’ve drafted all 5, compare them against my real replies: are the sentence length, vocabulary, and tone consistent? Note any differences.
Step 3: Edge case test
Test these edge cases and show me how the bot handles each: (1) An angry message with no clear question. (2) A question about something not in the knowledge base. (3) A question that combines two topics from different sections. (4) A question in a language other than English. (5) A message that is clearly spam. For each, show me the classification, the draft if any, and the status it would receive in QUEUE.md.
If all three tests pass, run the cron job in monitor-only mode for 48 hours (processing questions but writing to QUEUE.md without sending anything) before enabling auto-send.
When not to use a support bot
A support bot is the right tool for high-volume, repeatable questions. It’s the wrong tool in specific situations:
- Crisis situations: service outages, data breaches, anything where customers are actively losing access or money. These need a human response immediately, not a bot generating a calm FAQ reply while the situation escalates.
- Early-stage customer discovery: if you’re still figuring out what your product is, the questions you receive are valuable signal. A bot that answers them efficiently also removes you from the conversation. Read every support question yourself until the pattern is clear.
- Small, high-value customer bases: if you have 20 enterprise customers each paying $10,000/year, bot-handling their questions is a relationship risk. Those customers expect to talk to a person. A bot that answers their FAQ correctly but impersonally is a net negative.
- Legal or compliance questions: any question involving legal liability, regulatory compliance, or contractual terms should always go to a human. The bot should recognize these and escalate immediately without drafting a reply.
Add to DECISION-FRAMEWORK.md: Hard escalation triggers. If a question contains any of these terms or patterns, classify as ESCALATE immediately without drafting a reply, and send me a Telegram notification marked URGENT: legal, lawsuit, attorney, compliance, GDPR, data breach, fraud, charge dispute, refund demand over $X, any mention of contacting regulators or media. These questions must be answered by a human.
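Because these triggers are a hard rule rather than a judgment call, they can be checked deterministically before any drafting happens. A sketch using the term list above; the "refund demand over $X" threshold is left out because it needs a numeric check against a dollar amount you set:

```python
import re

# Hard triggers from the DECISION-FRAMEWORK.md rule above.
HARD_TRIGGERS = ["legal", "lawsuit", "attorney", "compliance", "gdpr",
                 "data breach", "fraud", "charge dispute"]

def hard_escalate(message: str) -> bool:
    """True if the message hits any hard trigger: classify ESCALATE
    immediately, skip drafting, and notify via Telegram marked URGENT."""
    lower = message.lower()
    return any(re.search(r"\b" + re.escape(t) + r"\b", lower)
               for t in HARD_TRIGGERS)
```

Running this check first also saves a model call on exactly the messages where a generated draft would be the most dangerous thing to produce.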