The Cost Center Trap
Customer support is often treated as a cost center. It's seen as a necessary expense: something you have to do because customers have problems, but not something that drives value for the business. The goal is usually to reduce costs: shorter call times, fewer tickets, lower headcount.
Anyone who's worked in support or operations can see the frustration that comes from this framing. Support teams are under pressure to handle more volume with fewer resources, to resolve tickets faster, to reduce escalation rates. The metrics that matter are efficiency metrics: average handle time, tickets per agent, cost per ticket.
But this view misses something critical. Support isn't just about resolving problems. It's also a source of insight about where your product is failing, where your processes are breaking, and where customer pain is accumulating. Every ticket represents a moment where something went wrong, or at least where something could have been better.
The problem with treating support as a pure cost center is that it creates misaligned incentives. If the only goal is to reduce costs, you might optimize for fast resolution over actual problem-solving. You might create workarounds instead of fixing root causes. You might reduce ticket volume by making it harder for customers to reach you, rather than by making your product better.
But support can be a lever for driving improvements: to the product itself, or to customer retention by lowering effort, reducing pain, and creating easier support experiences. The companies that figure this out don't just have lower support costs. They have better products, happier customers, and higher retention.
The EAT Framework
The EAT framework (Eliminate, Automate, Too Late) is a way to think about how to handle support requests. It's a simple mental model for prioritizing what to fix:
Eliminate
Fix the product so the problem doesn't happen in the first place. This is the highest-leverage intervention. If customers keep calling about billing disputes, maybe you should build better billing transparency. If they're confused about cancellation policies, maybe you should simplify the policy or make it clearer in the product. Eliminate means changing the product, the process, or the policy so that the support request never needs to happen.
Automate
Build self-service or automation so customers can resolve it themselves. Not every problem can be eliminated, but many can be automated. Self-service portals, automated refunds, password reset flows: these all reduce support volume by letting customers help themselves. Automation is still a cost, but it's a lower cost than human support, and it often provides a better experience (instant resolution vs. waiting on hold).
Too Late
Accept that this problem will require human intervention, and optimize for it. Some problems are just too complex, too nuanced, or too rare to eliminate or automate. When that's the case, the goal is to make the human support experience as good as possible. Give agents the tools they need, reduce wait times, make resolution as smooth as possible. Too Late isn't a failure; it's an acknowledgment that some things will always need a human touch.
The key insight is that most support requests should be in the Eliminate or Automate categories. If you're spending a lot of time on Too Late problems, it's a sign that you're not using support data effectively to drive product improvements. Every ticket in the Too Late category is a missed opportunity to make your product better.
Support as a Strategic Lever
When you shift from thinking about support as a cost center to thinking about it as a lever for product improvement, a few things change:
Support Data Becomes Product Intelligence
Every ticket tells you something about your product. Patterns in support tickets reveal where your product is confusing, where your processes are breaking, and where customer expectations aren't being met. But most teams aren't systematically analyzing this data. They're reactive, fixing issues as they come up but not looking for patterns that could inform product strategy.
Support Experience Drives Retention
Support isn't just about solving problems. It's also about how customers feel after they interact with you. A great support experience can turn a frustrated customer into a loyal one. A bad support experience can turn a minor issue into a cancellation. When you optimize for customer effort and satisfaction, not just ticket resolution, you're investing in retention rather than just cost reduction.
Product and Support Should Be Partners
Any product team should have a close relationship with support. Support teams see problems first, understand customer pain directly, and have a sense of what's urgent vs. what's just noise. Product teams need this input to build the right things. But too often, the relationship is transactional (support escalates issues, product fixes them) rather than strategic.
This is even more true now that classifying customer pain requires far less investment, thanks to LLMs and AI tooling. You can now classify thousands of tickets automatically, surface patterns at scale, and identify the highest-leverage product improvements without manually reading through every ticket.
The shift in mindset is subtle but important. Instead of asking "how do we reduce support costs?", you ask "how do we use support to make our product better and our customers happier?" The first question leads to cost-cutting. The second leads to product improvements that pay for themselves through lower support volume, higher retention, and better customer satisfaction.
Building the Tool: A Practical Walkthrough
To make this concrete, I built a tool that uses BBB complaints for Comcast as a stand-in for actual internal customer service data. The idea is to show how you can take unstructured support data, classify it systematically, and use those classifications to inform product prioritization.
The process has three main steps:
- Normalize the data: Extract structured information from unstructured text (titles, categories, dates, locations, complaint bodies).
- Classify the pain: Use classification logic (or LLMs) to categorize each complaint by intent, product area, and failure mode.
- Prioritize impact: Group similar complaints and use business logic (or Key Results) to identify the highest-leverage product improvements.
Obviously, each business has its own Key Results, customer segments, and product priorities. This exercise is somewhat naive in that it uses simple frequency-based prioritization. But it also demonstrates how quickly a team can get up and running with this kind of analysis, and how much value it can provide even at a basic level.
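Before walking through the steps, here's a rough sketch of the data shapes the rest of the walkthrough assumes. The names (Ticket, ClassificationResult, ClassifiedTicket) and their fields are illustrative choices for this demo, not a fixed schema; your own version will depend on your ticketing system and taxonomy.

// Illustrative types for the pipeline; names and fields are assumptions for this demo.
interface Ticket {
  id: string;
  title: string;
  category: string | null;
  date: string | null;     // ISO date, if one could be parsed
  location: string | null; // "City, ST", if present in the text
  excerpt: string;         // first ~300 characters of the complaint body
  url: string;
}

interface ClassificationResult {
  intentLabel: string;  // e.g. "billing_charge_dispute"
  productArea: string;  // e.g. "billing", "service_reliability"
  failureMode: string;  // e.g. "confusing_ux", "missing_self_service", "unknown"
  eliminateRecommendation: {
    type: string;       // e.g. "self_serve", "product_change"; see the EAT mapping in step 2
    summary: string;
  };
}

// A ticket plus its classification, used for grouping in step 3
type ClassifiedTicket = Ticket & ClassificationResult;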
Step 1: Normalize the Data
The first step is getting the data into a structured format. Support tickets, complaints, and feedback all come in as unstructured text. Before you can classify it, you need to extract the basic structure: what the complaint is about, when it happened, where the customer is located, and the full text of the complaint.
For the BBB tool, this means scraping complaint listings and extracting titles, dates, categories, locations, and complaint bodies. The goal is to get everything into a consistent format so you can process it programmatically.
// Extract structured data from unstructured complaints
function extractComplaintData($el: Cheerio<Element>, index: number, baseUrl: string): Ticket | null {
  const title = $el.find('.bpr-complaint-title').text().trim();
  const dateText = $el.find('.bpr-complaint-date').text().trim();
  const category = $el.find('.bpr-complaint-type').text().trim();
  const complaintBody = $el.find('.bpr-complaint-body').text().trim();
  // Skip listings without a recognizable complaint title
  if (!title) return null;
  // Extract "City, ST" location patterns from the complaint body
  const locationMatch = complaintBody.match(/([A-Z][a-z]+,\s*[A-Z]{2})/);
  return {
    id: `ticket-${index + 1}`,
    title: title.substring(0, 200),
    category: category || null,
    date: extractDate(dateText), // helper: parse the listing's date text into an ISO date
    location: locationMatch?.[0] || null,
    excerpt: complaintBody.substring(0, 300),
    url: extractUrl($el, baseUrl), // helper: resolve the complaint's relative link against the base URL
  };
}

In a real product, this step might mean pulling data from your support ticketing system (Zendesk, Intercom, etc.), extracting relevant fields, and normalizing them into a consistent schema. The key is getting clean, structured data that you can analyze at scale.
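For comparison, here's a hedged sketch of what the same normalization step might look like against Zendesk's standard tickets endpoint. The endpoint and auth scheme follow Zendesk's documented REST API, but the field mapping into the Ticket shape is an assumption, not a complete integration.

// Minimal sketch: pull tickets from Zendesk and normalize them into the same Ticket shape.
// The field mapping below is an assumption; adjust it to your own Zendesk configuration.
async function fetchZendeskTickets(subdomain: string, email: string, apiToken: string): Promise<Ticket[]> {
  const auth = Buffer.from(`${email}/token:${apiToken}`).toString('base64');
  const res = await fetch(`https://${subdomain}.zendesk.com/api/v2/tickets.json`, {
    headers: { Authorization: `Basic ${auth}` },
  });
  if (!res.ok) throw new Error(`Zendesk request failed: ${res.status}`);
  const data = await res.json();
  return data.tickets.map((t: any, index: number) => ({
    id: `ticket-${t.id ?? index + 1}`,
    title: (t.subject ?? '').substring(0, 200),
    category: t.type ?? null, // or a custom field, depending on your setup
    date: t.created_at ?? null,
    location: null,           // not exposed directly by Zendesk; derive it if you need it
    excerpt: (t.description ?? '').substring(0, 300),
    url: t.url ?? '',
  }));
}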
Step 2: Classify the Pain
Once you have structured data, the next step is classification. For each complaint, you want to understand:
- Intent: What is the customer trying to do? (e.g., dispute a charge, cancel service, return equipment)
- Product Area: Where in your product did this happen? (e.g., billing, account, service reliability)
- Failure Mode: Why did it fail? (e.g., bug, confusing UX, policy friction, missing self-service)
The BBB tool uses keyword-based classification (a simplified version of what you'd do in production). It looks for patterns in the complaint text to determine intent, product area, and failure mode. This works for demonstration purposes, but in production, you'd likely use an LLM for more nuanced classification.
function classifyComplaint(text: string): ClassificationResult {
  const lowerText = text.toLowerCase();
  // Intent classification (what the customer wants)
  const intentScores = {
    billing_charge_dispute: scoreIntent(lowerText, BILLING_KEYWORDS),
    service_outage: scoreIntent(lowerText, OUTAGE_KEYWORDS),
    cancellation_retention: scoreIntent(lowerText, CANCELLATION_KEYWORDS),
    // ... more intent categories
  };
  const selectedIntent = getHighestScore(intentScores);
  // Product area classification (where in the product it happened)
  const productArea = determineProductArea(selectedIntent, lowerText);
  // Failure mode classification (why it failed)
  const failureMode = determineFailureMode(selectedIntent, lowerText);
  // Generate a recommendation based on intent + failure mode
  const recommendation = generateRecommendation(
    selectedIntent,
    productArea,
    failureMode
  );
  return {
    intentLabel: selectedIntent,
    productArea,
    failureMode,
    eliminateRecommendation: recommendation,
  };
}

LLMs make this classification much more accessible than it used to be. You can now classify thousands of tickets automatically with good accuracy, without needing to build complex rule-based systems. The key is having clear classification schemas and prompt engineering that captures the nuance you care about.
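As a rough sketch of what that could look like, here's a hedged example that sends each complaint to an OpenAI-style chat completions endpoint and asks for the same three labels as JSON. The model name, prompt wording, and label lists are assumptions you'd tune for your own taxonomy.

// Hedged sketch: LLM-based classification via an OpenAI-style chat completions API.
// Model, prompt, and label sets are assumptions, not a fixed recipe.
async function classifyWithLLM(text: string): Promise<ClassificationResult> {
  const prompt = `Classify this customer complaint. Respond with JSON only, using these keys:
- intentLabel: one of billing_charge_dispute, service_outage, cancellation_retention, account_access, equipment_return, other
- productArea: one of billing, service_reliability, account, equipment, other
- failureMode: one of bug, confusing_ux, policy_friction, missing_self_service, agent_tooling_gap, unknown

Complaint:
${text}`;

  const res = await fetch('https://api.openai.com/v1/chat/completions', {
    method: 'POST',
    headers: {
      Authorization: `Bearer ${process.env.OPENAI_API_KEY}`,
      'Content-Type': 'application/json',
    },
    body: JSON.stringify({
      model: 'gpt-4o-mini',
      messages: [{ role: 'user', content: prompt }],
      response_format: { type: 'json_object' }, // ask for a JSON object back
    }),
  });
  const data = await res.json();
  const parsed = JSON.parse(data.choices[0].message.content);
  return {
    ...parsed,
    // Recommendation generation can stay rule-based, keyed off the LLM's labels
    eliminateRecommendation: generateRecommendation(parsed.intentLabel, parsed.productArea, parsed.failureMode),
  };
}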
Each classification should also generate a recommendation: what should we build or change to address this category of complaints? The recommendation type maps to the EAT framework: Eliminate (product_change, policy_change), Automate (self_serve, automation), or Too Late (agent_tooling, comms).
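A small sketch of that mapping, using the recommendation types just listed (the bucket assignments follow the description above, but treat the exact strings as illustrative):

// Map each recommendation type to its EAT bucket.
type RecommendationType = 'product_change' | 'policy_change' | 'self_serve' | 'automation' | 'agent_tooling' | 'comms';
type EatBucket = 'eliminate' | 'automate' | 'too_late';

const EAT_BUCKET: Record<RecommendationType, EatBucket> = {
  product_change: 'eliminate',
  policy_change: 'eliminate',
  self_serve: 'automate',
  automation: 'automate',
  agent_tooling: 'too_late',
  comms: 'too_late',
};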
Step 3: Prioritize Impact
Once you've classified all the complaints, the next step is to group similar ones and prioritize which improvements will have the most impact. This is where business logic comes in β different businesses will prioritize differently based on their Key Results, customer segments, and strategic goals.
The BBB tool groups complaints by recommendation type and product area, then sorts by frequency. This is a naive approach, but it demonstrates the concept. In a real product, you'd want to consider:
- Volume: How many complaints does this address?
- Severity: How painful is this for customers? (Some complaints are minor annoyances; others are cancellation-level problems.)
- Customer Value: Who is complaining? (A complaint from a high-value customer might be worth more than a complaint from a low-value one.)
- Strategic Fit: Does this align with your product strategy and Key Results?
- Effort: How hard is this to build or fix?
// Group by recommendation type and product area
const opportunityGroups = new Map<string, ClassifiedTicket[]>();
tickets.forEach((ticket) => {
const key = `${ticket.eliminateRecommendation.type}::${ticket.productArea}`;
if (!opportunityGroups.has(key)) {
opportunityGroups.set(key, []);
}
opportunityGroups.get(key)!.push(ticket);
});
// Turn each group into an opportunity with a frequency count
const opportunities = Array.from(opportunityGroups.entries()).map(
  ([key, group]) => ({ key, tickets: group, frequency: group.length })
);
// Sort by frequency (simplified - real KRs would weight differently)
opportunities.sort((a, b) => b.frequency - a.frequency);

The output should be a prioritized list of roadmap opportunities: specific product improvements that address patterns in support data. Each opportunity should include:
- What the problem is (based on complaint patterns)
- Why fixing it will eliminate support volume
- How to implement it (implementation direction)
- Expected impact (based on complaint frequency and severity)
This prioritization becomes the bridge between support data and product roadmap. Instead of product teams guessing what to build next, they have data-driven recommendations based on actual customer pain.
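As a rough illustration, a single opportunity record coming out of this step might look something like the sketch below; the field names and example values are assumptions for this demo, not output from any particular system.

// Illustrative shape for one roadmap opportunity produced by the grouping step.
interface RoadmapOpportunity {
  problem: string;     // what the complaint pattern is
  rationale: string;   // why fixing it eliminates support volume
  direction: string;   // implementation direction
  frequency: number;   // how many complaints it addresses
  eatBucket: 'eliminate' | 'automate' | 'too_late';
}

// Hypothetical example record (values are illustrative)
const example: RoadmapOpportunity = {
  problem: 'Customers contact support to dispute charges, request receipts, and manage promotions',
  rationale: 'Self-service billing tools remove the need for these contacts entirely',
  direction: 'Add dispute, receipt, and promotion-management flows to the billing portal',
  frequency: 31,
  eatBucket: 'automate',
};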
Example Output: What the Analysis Reveals
When we ran the BBB tool on 100 Comcast complaints (admittedly a very small sample, used only for the purposes of this demo), it generated 17 distinct roadmap opportunities. Here's what the analysis surfaced:
Summary Statistics
The top intent categories were billing charge disputes (34%), service outages (16%), and account access issues (10%). The most affected product areas were Billing (37%), Service Reliability (16%), and Account Management (10%). Failure modes were harder to detect with simple keyword matching; most fell into "unknown" (71%), but agent tooling gaps (22%) and missing self-service (2%) were clearly identifiable.
Top Roadmap Opportunities
The highest-frequency opportunity was enabling self-service for billing operations, which addressed 31% of all complaints. Customers were calling support for things they should be able to handle themselves: disputing charges, requesting receipts, managing promotions. This is a classic Eliminate/Automate opportunity: by building self-service capabilities, you remove the need for these support interactions entirely.
The second-highest opportunity was automating service reliability operations, addressing 12% of complaints. These were complaints about service transfers, payment arrangements, and account changes that required manual processing and led to errors or delays. Automation would eliminate the manual errors that cause these complaints.
The third opportunity was enhancing agent tools for general operations, addressing 9% of complaints. These were complex issues that couldn't be easily automated but where agents lacked the tools to resolve them efficiently. This is a Too Late optimization: accept that these will require human intervention, but give agents better tools to handle them.
Other significant opportunities included self-service for equipment returns (8%), improvements to account management product experience (7%), and better agent tooling for billing (6%). The full analysis identified 17 distinct roadmap opportunities, each with specific implementation directions and expected impact.
What This Tells Us
Even this simplified analysis reveals clear patterns. A large share of complaints (43%) falls into the Eliminate or Automate categories: things that could be fixed in the product or automated in processes. Only a small fraction (2%) was explicitly flagged as missing self-service, suggesting that most self-service opportunities sit in billing, where customers are already trying to use the existing tools but hitting friction.
The high percentage of "unknown" failure modes (71%) highlights a limitation of keyword-based classification. In production, you'd use an LLM to get more nuanced classification, which would improve both the accuracy of failure mode detection and the quality of recommendations. But even with basic keyword matching, the analysis surfaces actionable insights.
Next Steps: From Analysis to Prioritization
Once you have this analysis, the next step is to prioritize based on your business context. Frequency is a starting point, but you'd also want to consider:
- Customer impact: Are these complaints from high-value customers? Are they cancellation-level problems or minor annoyances?
- Strategic alignment: Do these improvements align with your product strategy and Key Results?
- Effort vs. impact: How hard is it to build self-service billing features vs. automating service transfers?
- Revenue impact: Will reducing billing disputes improve retention? Will automating transfers reduce churn?
In a real product, you'd overlay this support data with other signals (customer value data, retention analysis, product strategy priorities) to create a prioritized roadmap. But support data gives you a baseline that's grounded in actual customer pain, not assumptions.
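One hedged way to fold those signals in is a simple weighted score per opportunity. The weights and the 1-5 scales below are placeholder assumptions; in practice you'd derive severity from complaint content, customer value from your own segmentation, and effort from engineering estimates.

// Hedged sketch: weight frequency by severity, customer value, and effort.
// All scales and the scoring formula are placeholder assumptions, not tuned values.
interface ScoredOpportunity {
  frequency: number;      // complaints addressed
  severity: number;       // 1 (minor annoyance) to 5 (cancellation-level)
  customerValue: number;  // 1 (low-value segment) to 5 (high-value segment)
  effort: number;         // 1 (small fix) to 5 (major build)
}

function priorityScore(o: ScoredOpportunity): number {
  const impact = o.frequency * o.severity * o.customerValue;
  return impact / o.effort; // crude impact-over-effort ratio
}

// Usage: sort opportunities by the weighted score instead of raw frequency
// opportunities.sort((a, b) => priorityScore(b) - priorityScore(a));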
What This Unlocks
When product and support teams work together using this kind of analysis, a few things happen:
Product Improvements Are Data-Driven
Instead of guessing what customers want or reacting to the loudest voice, product teams can see patterns in support data that reveal where the product is actually failing. This grounds prioritization in real customer pain, not assumptions or opinions.
Support Becomes a Strategic Function
Support teams aren't just handling tickets; they're generating product intelligence. This changes the relationship between product and support from transactional (escalate and fix) to strategic (analyze and improve). Support teams feel more valued, and product teams get better input.
Cost Reduction Happens Through Improvement, Not Cutting
You don't reduce support costs by cutting headcount or optimizing for speed. You reduce them by fixing the product so fewer tickets come in, and by automating resolution so tickets that do come in are handled more efficiently. This is a virtuous cycle: better product → fewer tickets → lower costs → better product.
Retention Improves Through Better Experiences
When you eliminate and automate support requests, you're not just reducing costs. You're also improving customer experience. Customers who can resolve issues themselves feel more empowered. Customers whose problems are prevented entirely are happier. And happier customers stay longer.
The EAT framework helps structure this collaboration. For every category of support request, ask: can we Eliminate it? If not, can we Automate it? If neither works, optimize the Too Late experience. This creates a clear decision framework that aligns product and support teams around shared goals.
LLMs and AI tooling make this analysis more accessible than ever before. You can classify thousands of tickets automatically, surface patterns at scale, and generate actionable insights without manual analysis. The barrier to entry has never been lower, which means there's no excuse for treating support as just a cost center.
If you're trying to build this kind of analysis for your own product (connecting support data to product improvements, or setting up classification systems that help prioritize your roadmap), I can help. This is the kind of work I do with teams: bridging the gap between operations data and product strategy, building systems that turn support insights into actionable product improvements. Get in touch if you want to talk about how to apply this approach to your own product and support data.