Good Beginnings: Thinking in Metrics
Every project needs a north star. When we kick off a “Discovery Workshop,” the first step is to agree on what that star looks like. Getting there takes time. Our process takes the project team through critical assumptions: those things that, if true, would spell success or failure. After this step, the team works through written statements of the current problem and ends the session with a goal (see the blog article “Good Beginnings: Agreeing on the Problem”). The goal follows this pattern:
We believe that our [people/role/client] will have a better experience doing [name of the activity or journey] by making the following [improvement], [improvement], [improvement] within the next [timeframe]. We will accomplish this by doing [activity], [activity] and measuring progress with [metric], [metric].
Goal statements should end in metrics. Metrics are the data collected to prove hypotheses and measure business outcomes. It’s critical that we ask the project team: How are we going to measure our success? When will we know we are done? What data will we monitor and measure for incremental improvements? To answer these questions, we can focus on several well-known categories for gathering metrics:
- Cycle times (measuring a business process for operational efficiency based on time);
- Data quality (measuring the accuracy of data when it is one of the root causes of inefficiency);
- Closing a gap (measuring incremental improvements on key data fields that affect business value and surface inside key decisions such as pricing, commissions, detection strategies, and other common use cases);
- Work (re)distribution (measuring the sweet spot between straight-through processing and exception routing to teams using criteria based on human factors like skills, training, and geography); and,
- Sentiment (measuring the feelings of external/internal customers and team members directly responsible for delivering the outcomes).
In a recent workshop, we explored decision opportunities in a business process that largely flowed through call center operations and a CRM system. The customer had recently deployed a mobile application, hoping more self-service would lessen call volume into the call centers. At two critical points in the process, the mandatory use of third-party APIs required 2-3 days for processing. Unfortunately for the team, the APIs could not be avoided. As a result, the project team agreed to focus on the parts of the process they could control. In our discovery sessions, they easily targeted call-backs for missing information, status updates, and engagement restarts caused by slow customer responses.
Since the 5.4 release of irAuthor, instrumenting cycle-time improvement can be as simple as timestamping when processes begin and end. In irAuthor, make sure the entity has the timestamp fields and mark them as metrics. irServer will automatically persist metric data for later reporting and analysis.
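Once timestamp metrics are persisted, computing cycle times downstream is straightforward. Here is a minimal Python sketch of that analysis step; the record layout and field names are illustrative assumptions, not the irServer metric schema.

```python
from datetime import datetime

# Hypothetical records exported from metric storage; the field names
# ("case_id", "start", "end") are illustrative, not an irServer schema.
records = [
    {"case_id": "C-1001", "start": "2023-03-01T09:00:00", "end": "2023-03-03T15:30:00"},
    {"case_id": "C-1002", "start": "2023-03-02T10:15:00", "end": "2023-03-02T16:45:00"},
]

def cycle_time_hours(record):
    """Elapsed hours between the start and end timestamps of one case."""
    start = datetime.fromisoformat(record["start"])
    end = datetime.fromisoformat(record["end"])
    return (end - start).total_seconds() / 3600

times = [cycle_time_hours(r) for r in records]
average = sum(times) / len(times)  # baseline for measuring improvement
```

Run the same calculation before and after a process change and the delta in the average is your cycle-time improvement.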
Tip: Timestamp information is a useful criterion for notifications and reminders within other decisions that run as a batch.
Missing or inaccurate data causes friction within business processes. Friction triggers call-backs, people problem solving, and directly affects sentiment. No one likes taking time out of their day for calls that could be avoided. Incomplete addresses or conflicting delivery addresses between customer records is a common problem. On a more technical level, straight-through processing (a fully automated process that avoids human intervention) can’t be achieved because of data that’s miscoded or mismatched with other key criteria for the process to finish on its own. One approach to this problem is to flag the entity with a Boolean field (true/false) for known scenarios as an irAuthor metric.
Tip: Create an entity called “ProblemScenarios” and add Boolean fields named to correspond to each scenario. Mark the fields as a metric. When the scenario is caught in validation, make sure a rule action sets the field to True.
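The flag-per-scenario pattern can be sketched in plain Python. This is an illustration of the idea only; the scenario names and validation checks below are assumptions, not irAuthor rules or entities.

```python
# Sketch of the "one Boolean flag per known problem scenario" pattern.
# Field names and checks are hypothetical examples.
def flag_problem_scenarios(customer):
    """Return a dict of Boolean metric fields, one per known scenario."""
    return {
        "IncompleteAddress": not customer.get("postal_code"),
        "ConflictingDeliveryAddress": (
            customer.get("billing_address") is not None
            and customer.get("delivery_address") is not None
            and customer["billing_address"] != customer["delivery_address"]
        ),
    }

# A record that trips both scenarios: blank postal code, mismatched addresses.
customer = {"postal_code": "", "billing_address": "12 Oak St", "delivery_address": "99 Elm Ave"}
flags = flag_problem_scenarios(customer)
```

Counting how often each flag is set across a batch of records gives you the baseline the next paragraph describes.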
This sets a baseline for improvement. As problems are found and remediated, track your progress until it’s no longer worth reviewing. Explore the use of third-party APIs to improve validation, and look for problems that prompt human intervention. Finally, detection strategies often fall into this bucket; consider analytical models to drive down false positives.
Closing the Gap
Initial projects are often too ambitious for a first release; there is frequently more work than the project team can implement in the first production push. The overflow usually sits in the backlog. Marking key decision fields as metrics in irAuthor can help a project team set an initial baseline for improvements later. As you work through the backlog, test your hypotheses against historical data, and A/B test your outcomes.
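An A/B comparison of outcomes can be as simple as comparing rates between the baseline decision and the revised one. The sketch below uses made-up straight-through-processing counts to show the shape of the comparison; it is not a statistical significance test.

```python
# Minimal A/B comparison of straight-through-processing (STP) rates.
# Variant A is the baseline decision replayed on historical data;
# variant B is the revised decision on the same scenarios. Counts are invented.
def stp_rate(completed_without_touch, total):
    """Fraction of cases that finished with no human intervention."""
    return completed_without_touch / total

baseline = stp_rate(620, 1000)   # variant A
revised = stp_rate(705, 1000)    # variant B
lift = revised - baseline        # improvement attributable to the change
```

For production decisions, pair a rate comparison like this with a proper significance test before declaring a win.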
Tip: Use your time in the Discovery Workshop to strategize around actions that could improve the decision over time and drop them into the backlog.
Cases in CRM usually require assignments to people for long running business processes. Case assignments are a natural place for tracking metrics because they directly correlate people to their activities. Take advantage of your decision platform and explore multiple strategies for metrics. For example, skills-based case assignments will likely improve efficiency, service quality and customer sentiment. Use metrics to measure the impact of smarter assignments.
Tip: If you use CRM for claims and collections, consider assignments based on geography, amounts, days outstanding, and risk (excellent metric candidates). Group claims together for efficient outbound calling if the claims are for the same organization.
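A skills- and geography-based assignment strategy can be illustrated with a simple fit score. Everything below (the weights, the agent attributes, the case fields) is a hypothetical sketch, not a CRM or decision-platform API.

```python
# Sketch of skills/geography-based case assignment.
# Scoring weights and team attributes are illustrative assumptions.
def fit_score(case, agent):
    """Higher scores mean a better fit between a case and an agent."""
    score = 0
    if case["skill"] in agent["skills"]:
        score += 2   # skill match matters most
    if case["region"] == agent["region"]:
        score += 1   # same geography helps with hours and language
    return score

def assign(case, agents):
    """Pick the best-fitting agent for a case."""
    return max(agents, key=lambda a: fit_score(case, a))["name"]

agents = [
    {"name": "Ana", "skills": {"collections"}, "region": "EMEA"},
    {"name": "Ben", "skills": {"claims", "collections"}, "region": "AMER"},
]
case = {"skill": "claims", "region": "AMER", "amount": 4200}
owner = assign(case, agents)
```

Recording the winning score and the assignment criteria as metrics lets you later test whether smarter assignments actually moved efficiency and sentiment.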
Tracking a specific sentiment score as a metric might make sense if it’s used in a calculation or some other criteria in a decision. As a general rule, customer experiences improve when customers can self-serve and are aware of what’s taking place. Correlating a Net Promoter Score (NPS) to specific decision metrics connects the dots and underpins how larger strategic business goals can be supported.
Mapping Metrics to Key Performance Indicators (KPIs)
KPIs represent the larger strategic goals of the organization, and metrics are the underlying fields that influence KPIs. Metric data from a rule application easily aggregates with other sources to measure and improve business outcomes. For example, correlate drops in call center volume to key interactions across channels by adding metrics to automated decisions, and then aggregate data with other sources for reporting. We need metrics from all sources to analyze the impact.
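The aggregation step can be sketched as a simple join on a shared time key. The data shapes and numbers below are hypothetical; in practice this might live in a reporting tool or data warehouse rather than Python.

```python
# Hypothetical aggregation: join decision metrics with call-center volume
# by week, so drops in volume can be lined up with decision changes.
decision_metrics = {"2023-W10": 340, "2023-W11": 410}   # self-service completions
call_volume = {"2023-W10": 1250, "2023-W11": 1175}      # from a separate source

combined = {
    week: {
        "self_service": decision_metrics.get(week, 0),
        "calls": call_volume.get(week, 0),
    }
    for week in sorted(set(decision_metrics) | set(call_volume))
}
```

With the sources keyed the same way, a reporting tool can chart self-service completions against call volume and make the KPI impact visible.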
Tip: KPIs can direct activities in other ways. Once KPIs exist, they can drive conversations about what’s next on the decision roadmap. Pick a couple of candidate processes that are well-known, and explore how decision management can improve what’s taking place. During a Discovery Workshop, we map the process and expose details that are ripe for automation.
From another perspective, metrics fit within the broader set of verification activities during the decision lifecycle. Verification at every level objectively proves we are having the desired impact on the business. It’s not enough to prove that the criteria for a specific rule are correct: that rule interacts with others in the rule set, and the rule set behaves differently across a wide range of data scenarios.
In production, the rule set (in the form of a decision) directly impacts operations. Successful projects take this ripple effect into account. The project team verifies and measures at every level as they prepare production quality decisions.
Instrumenting decisions with metrics doesn’t have to be complicated. Start simple with descriptive statistics and a good reporting tool. Teams should allocate time to data management in their project planning and curate scenario data for A/B testing and “What If” analysis. Ideally, the same data used for testing your decisions can also be used to go after the more difficult detection scenarios based on probability. Detection of fraud and revenue recovery are both great candidates for incorporating statistical models, but both require investment in high quality data sets.
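“Start simple with descriptive statistics” can mean little more than the Python standard library. Here is a minimal sketch over a batch of cycle-time metrics; the values are illustrative.

```python
from statistics import mean, median, stdev

# Descriptive statistics over a batch of cycle-time metrics (hours).
# Values are illustrative placeholders for exported metric data.
cycle_times = [54.5, 6.5, 30.0, 48.25, 12.75]

summary = {
    "mean": mean(cycle_times),
    "median": median(cycle_times),
    "stdev": stdev(cycle_times),   # sample standard deviation
}
```

A mean, a median, and a spread per release is often enough to show whether the backlog work is moving the needle.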
Finally, it should be said that specific rules also have a part to play in metrics. In the process of tuning a decision, it is imperative to know which rules have the most impact on a given set of scenarios. Once you can light these up as a metric, it’s a lot easier to help others understand what’s taking place and why.
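Tallying rule impact can be as simple as counting which rules fired across a set of scenarios. The log format below is an assumption for illustration, not an InRule artifact.

```python
from collections import Counter

# Hypothetical (scenario, rule) pairs recorded during a test run.
fired = [
    ("scenario-1", "IncompleteAddressRule"),
    ("scenario-2", "IncompleteAddressRule"),
    ("scenario-2", "PricingThresholdRule"),
    ("scenario-3", "IncompleteAddressRule"),
]

# Count fires per rule to see which rules dominate the outcomes.
impact = Counter(rule for _, rule in fired)
top_rule, count = impact.most_common(1)[0]
```

Surfacing the top rules this way makes it much easier to explain to stakeholders why a decision behaves the way it does.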
Please give us your feedback; we love to hear what folks are thinking. Also, consider an InRule Discovery Workshop for your next project. Workshops can take your project team from discovery to a working proof of concept in four days (email@example.com).
Many thanks to my co-author Rob Eaman who is always up for an adventure.