Ticket response standards: initial ticket responses
Have some standards!

As we settle into this brave new world of 2024, I’d like to spend the next couple of weeks talking about ticket response standards. Not style, or content, or the other stuff I talked about early in 2023, but something more oriented to the fourth dimension: time. In brief, how long should your customers expect to wait for a response? This week we’ll talk about the critical initial ticket response.
The concept
Your Slack pings, or you get a notification email, or a pager app beeps, or whatever you’re using to notify your team of new messages. The timer has started: how long will that new issue wait before you let the customer know you’ve seen and are investigating their report? Obviously it’s going to vary from case to case: you may already be busy, it may be outside business hours, or if you’ve implemented tiered support, perhaps that customer’s issues are going to automatically take a higher (or lower) priority than others coming in around the same time. But even with this expected variance, it’s important to set an expectation across the team: new issues get a first response within X minutes/hours.
Why is this so important? Well, for one, because support folks are pulled in so many different directions that it can be easy to get deeply involved in side projects, neglecting the core function of a technical support team: responding to customer issues. An established standard for initial ticket responses can help focus engineer attention on that core function.
Another great reason to set internal ticket response standards is to establish a high bar with the team in terms of customer responsiveness. If customers can reasonably expect that their questions will be answered in a timely manner, they’re more likely to write in for help. If they don’t think that opening a support issue is going to help them quickly, they’re more prone to give up, or complain online, or silently look for a competing product. I’ll say more about this in a bit, but for the moment: accurate answers to customer questions are great, but if it takes too long to get those answers, customers will eventually stop asking.
Response or meaningful response?
On one team, we made a distinction between initial customer response (any response from a real person to a customer’s question or problem) and initial meaningful customer response (a response that contains something substantial). Consider the difference between these two messages:
Hi, sorry to hear you’re having issues! I’m taking a look now and will get back to you as soon as I know more.
Hi, sorry to hear you’re having issues! It looks like this is due to a misconfiguration of your SMTP settings—be sure to set the server port as well as hostname. There’s some documentation about that configuration setting here: XXXXXXXX
To a customer, the first response contains just one piece of information: that someone has seen your message. They may or may not be actively investigating, and you don’t know when you’ll hear more. It’s reassuring, perhaps, that your message didn’t fall into a void, but you still have a problem with no solution. The second message, on the other hand, is more meaningful: it has a specific diagnosis and a solution, and now you can move forward in resolving your issue.
For more complicated issues, a meaningful response might include things like:
Troubleshooting steps
Requests for more information
A suggestion to join a live screenshare
Any of these is going to be more useful to the customer than a simple acknowledgement, and so the time to first meaningful response (TTFMR) metric is well worth tracking. While a fast initial response is always welcome, it’s the TTFMR that really helps establish how quickly customer issues are being handled.
Unfortunately, as we discovered on that team, TTFMR is difficult to track programmatically. How can you write a filter that determines whether a response is meaningful or not? We never cracked that nut, and instead resorted to a sampling approach: every month, we’d randomly select about 20% of support issues, manually look them over, calculate a time for each, and generate a monthly average. It wasn’t perfect, but even in this crude form the information was enlightening. It was clearly visible when we gained or lost team members. As the team grew more and more effective overall, we could watch TTFMR creep downward and know we were doing something right. And if it suddenly started going up, we’d know it was time to find more capacity, either by increasing our efficiency or by growing the team.
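As a rough illustration, here’s a minimal Python sketch of that sampling approach, assuming each sampled ticket’s first meaningful response has already been identified and timestamped by hand; the ticket fields, 20% sample rate, and function name are illustrative, not tied to any particular helpdesk’s API:

```python
import random
import statistics
from datetime import datetime

# Hypothetical ticket records: creation time plus the timestamp of the
# first response a human reviewer judged "meaningful".
tickets = [
    {"id": 101, "created_at": datetime(2024, 1, 3, 9, 15),
     "first_meaningful_at": datetime(2024, 1, 3, 10, 40)},
    {"id": 102, "created_at": datetime(2024, 1, 5, 14, 2),
     "first_meaningful_at": datetime(2024, 1, 5, 14, 30)},
    # ... the rest of the month's tickets
]

def monthly_ttfmr_sample(tickets, sample_fraction=0.2, seed=None):
    """Randomly sample a fraction of the month's tickets and return the
    average time-to-first-meaningful-response (TTFMR), in hours."""
    rng = random.Random(seed)
    sample_size = max(1, round(len(tickets) * sample_fraction))
    sample = rng.sample(tickets, sample_size)
    hours = [
        (t["first_meaningful_at"] - t["created_at"]).total_seconds() / 3600
        for t in sample
    ]
    return statistics.mean(hours)

print(f"Sampled TTFMR: {monthly_ttfmr_sample(tickets, seed=42):.2f} hours")
```

The manual review is still the hard part: a human has to decide which response in each sampled thread actually counts as “meaningful” before the averaging is worth anything.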
Sidebar: Email vs chat vs phone
Depending on the modalities your support team operates with, the expectations for a response may vary widely. Customers who call in are expecting a response immediately, of course. If you have a chat widget on your site and a customer asks a question, they’re expecting someone to be watching and responding quickly, whether it’s a human or otherwise. And tickets submitted via email or web form might wait for hours (or even days) for a response, depending on your team’s current load and the severity of the issue reported. Whatever standards you and your team settle on, make sure they’re appropriate to the communications methods you’re using.
Figuring out a standard for your team
So we’re in agreement by now, I hope, that having an internal standard for first responses is important, both for setting a bar for team expectations and for tracking how quickly your customers are receiving responses to their support issues. Now how do you figure out where to actually set that bar? It’s going to depend on a lot of things, and will likely evolve over time, but these are the considerations to keep in mind as you’re establishing your own first response standards.
Business hours: there’s no point in establishing a global 1-hour response time if you’re only providing support from 9-5 EST, Monday through Friday. Make sure any initial response standards take off-hours (and weekends, and holidays…) into account; there’s a small sketch of business-hours-aware measurement after this list.
Team size: it’s a fool’s errand to set an aggressive first response standard that your team of two can’t possibly live up to. Be realistic about what your current team is able to achieve, and aim to drive that response time down over time.
Support load: If your team is running at full bore addressing your existing ticket load, that gives you valuable data about how long it’s already taking for initial responses. Keep that existing history in mind when setting a performance bar going forward.
Other commitments: related to the previous point, if your team is handling all kinds of other projects, that is going to prevent them from taking every new support issue the moment it comes in. Build that understanding into your expected initial response time if you want your team to be able to actually achieve that standard.
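To make that business-hours point concrete, here’s a minimal sketch of how you might count only in-hours time when measuring how long a ticket waited, assuming a 9-5, Monday-through-Friday window; the hours, dates, and function name are purely illustrative, and holidays and timezones are left out for brevity:

```python
from datetime import datetime, time, timedelta

# Illustrative support window: 9:00-17:00 local time, Monday-Friday.
# Holidays and timezone handling are omitted to keep the sketch short.
BUSINESS_OPEN = time(9, 0)
BUSINESS_CLOSE = time(17, 0)

def business_hours_elapsed(start: datetime, end: datetime) -> float:
    """Hours between two timestamps, counting only the business-hours window."""
    total = timedelta()
    day = start.date()
    while day <= end.date():
        if day.weekday() < 5:  # Monday=0 .. Friday=4
            open_dt = datetime.combine(day, BUSINESS_OPEN)
            close_dt = datetime.combine(day, BUSINESS_CLOSE)
            overlap_start = max(start, open_dt)
            overlap_end = min(end, close_dt)
            if overlap_end > overlap_start:
                total += overlap_end - overlap_start
        day += timedelta(days=1)
    return total.total_seconds() / 3600

# A ticket filed Friday at 4:30 PM and first answered Monday at 9:45 AM
# waited 1.25 business hours, not ~65 wall-clock hours.
created = datetime(2024, 1, 5, 16, 30)
first_response = datetime(2024, 1, 8, 9, 45)
print(f"{business_hours_elapsed(created, first_response):.2f} business hours")
```

Whether you report wall-clock time, business-hours time, or both is itself a choice worth making deliberately, since customers experience the wall-clock version.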
Above all, keep in mind that this initial response time is not meant to be set in stone: it’s a goal, and an achievable metric, but it’s always going to be shifting. As your team size changes, as your individual engineers’ skills improve, and as the support load itself fluctuates, you’ll need to revisit this regularly. Over time, however, you should be aiming to drive that initial response time down, and demonstrate that your team is improving the technical support experience as your customer base grows.
To commit or not to commit?
The final consideration to discuss today is simple on its face: do you communicate this internal standard to your customers and prospects? Actually figuring out an answer to this question can quickly get very complicated and will depend (there’s that word again) on internal politics, team capacity, and any number of other factors. For now, I’ll briefly look at the pros and the cons, as well as one common situation where you don’t have a choice: contractual commitments to a first response time.
Pro
Fast, accurate support can be a tremendous competitive differentiator.
Laying out your response standards ahead of time helps build customer trust up front, rather than waiting for customers to experience your responsiveness a few times before they believe in it.
Setting out standards can paradoxically improve customer patience: if customers know they’ll have to wait up to X hours, or until business hours, they’re less likely to get impatient in the meantime and start contacting people directly.
Con
Publishing standards, then failing to meet them, is terrible for customer trust. If you’re not 100% positive you can meet your response standard, why give yourself the opportunity to disappoint your customers? Better to keep it to yourself.
If your team always has to keep capacity in reserve to be confident of meeting a response time standard, that is capacity that could otherwise be used on other team or company projects.
Contractual commitments
All of the above considerations may be moot if you have support service level agreements (SLAs) with one or more of your customers. It doesn’t matter what you want to do with team capacity—if you have guaranteed a customer that their support issues will receive an initial response within X hours, that takes higher priority than pretty much anything else your team might be responsible for. While you as a support leader are not always in control of what is and is not included in terms of support SLAs, my advice is this: unless your team is accustomed to response time standards, and capable of meeting them, do not commit to response SLAs. On the other hand, once your team has demonstrated it is able to meet a fairly aggressive response time, that lets you confidently commit to a less-aggressive SLA for customers who are willing to pay for the privilege. I’ll discuss this more in a support tiering/paid support post coming … at some point. (I know I can’t put it off forever, but that won’t stop me from trying.)
Next week: standards for updating open support issues!