This system is running live
Don't take our word for it. Try it.
We rebuilt a miniature version of this engagement and it's running right now. Click below — you are not looking at a screenshot, you are looking at the actual software.
▶ Open the live demo →

E-commerce · Client work · 5 weeks
Support Ticket Triage Classifier
Auto-routing inbound customer messages by intent + urgency
- Mis-routes: 12% → 3%
- CSAT change: +8 points
- First response time: 4h → 18min
The problem
A growing e-commerce business was drowning in support tickets. Every ticket landed in a single queue, and the support team manually triaged them — reading each one, deciding whether it was a refund request, a delivery question, a complaint, or a billing issue, and assigning it to the right person.
What we built
A two-step classifier:
1. **Intent classification** — what is the customer actually asking? (one of 18 categories)
2. **Urgency scoring** — is this a calm question or someone about to leave a 1-star review?
Both run on Gemini Flash with a tightly-scoped prompt and structured output. Each ticket gets a tag, a routing destination, and a suggested first reply that the agent can edit and send.
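To give a feel for the shape of that pipeline, here is a minimal sketch of the structured output and the routing step that follows it. The intent names, the 1-to-5 urgency scale, and the routing table are illustrative assumptions, not the client's actual schema or prompt.

```python
from dataclasses import dataclass

# Hypothetical subset of the 18 intent categories and where they route.
ROUTING_TABLE = {
    "refund_request": "billing-team",
    "delivery_question": "logistics-team",
    "complaint": "escalations-team",
    "billing_issue": "billing-team",
}

@dataclass
class TicketClassification:
    """Structured output we would ask the model to return for each ticket."""
    intent: str          # one of the intent categories
    urgency: int         # assumed scale: 1 (calm) to 5 (about to churn)
    suggested_reply: str # draft first reply for the agent to edit

def route(classification: TicketClassification) -> dict:
    """Turn a classification into a tag, a queue, and a draft reply."""
    destination = ROUTING_TABLE.get(classification.intent, "general-queue")
    # High-urgency tickets jump the queue regardless of intent.
    priority = "urgent" if classification.urgency >= 4 else "normal"
    return {
        "tag": classification.intent,
        "queue": destination,
        "priority": priority,
        "draft_reply": classification.suggested_reply,
    }

result = route(TicketClassification("refund_request", 5, "Hi, sorry about that order."))
```

The key design point is that the model only ever emits the structured classification; the deterministic routing logic stays in plain code where it can be tested and changed without touching the prompt.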
What we deliberately did not do
- We did not auto-reply. Every reply still goes through a human. The model is a force multiplier, not a replacement.
- We did not classify sentiment as a separate axis. Sentiment correlates so heavily with urgency in this dataset that the second model was wasted complexity.
Outcome
- Median first-response time: 4 hours → 18 minutes
- Misroutes (ticket sent to the wrong team): from ~12% to ~3%
- Customer satisfaction (CSAT) increased 8 points
Tech stack
Gemini Flash structured output, Python + FastAPI, a webhook into the existing Zendesk instance, no custom model training.
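For flavor, the webhook side can be as small as pulling the ticket fields out of the inbound payload before handing the text to the classifier. The payload shape below is a simplified assumption for illustration; the real body depends on how the Zendesk webhook trigger is configured.

```python
import json

def parse_ticket_webhook(body: bytes) -> dict:
    """Extract the fields the classifier needs from an inbound webhook payload.

    Assumes a simplified shape:
    {"ticket": {"id": ..., "subject": ..., "description": ...}}
    """
    payload = json.loads(body)
    ticket = payload["ticket"]
    return {
        "id": ticket["id"],
        # Subject plus description is what we'd feed to the intent classifier.
        "text": f"{ticket['subject']}\n\n{ticket['description']}",
    }

parsed = parse_ticket_webhook(
    b'{"ticket": {"id": 42, "subject": "Where is my order?",'
    b' "description": "Ordered last week."}}'
)
```

Because there is no custom model training, the whole integration is just this parse step, one model call, and a write back to Zendesk.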