Why “World Class” Performance Isn’t Measurable

Let’s say our organization needs to buy a fleet of vehicles and we have two procurement teams. We tell team 1 that we want quiet, blue, four-door, fuel-efficient cars. We tell team 2 that we want world-class, high-quality, great-value, high-performing cars. Then we give both teams a few weeks to find their vehicles. Guess which team will be able to produce measurable results? Team 1 will have the easier time, because it is clearer what their criteria mean. Team 2 will struggle because their criteria are too ambiguous. Without further clarification, “world-class” could be interpreted to mean a hot rod sports car, a luxury sedan, or even a nice SUV. And if the team cannot agree on the specific desired result, how can it measure success?

This example demonstrates an important principle of good measure design: before you can design a measure, you first must agree on what result you are trying to achieve. And not all results are created equal. Results written in abstract language are less measurable and harder to implement than those written in concrete language.

Abstract language refers to concepts or vague ideals. Examples of abstract words and phrases include sustainable, innovative, reliable, leadership, quality, effective, leverage, efficient, resilient, optimized, and responsive. Strategic plans are often littered with this type of language, as we aim to deliver best practices, thought leadership, or world-class performance. These “weasel words,” as they are often called, are notoriously hard to measure without first being translated into concrete terms.

Concrete language is sensory-specific, meaning it describes things you can see, hear, smell, taste, or feel. Because they are observable, concrete results are measurable. Team 1 will have no problem determining the percentage of cars procured that meet their specifications. Concrete results are also more memorable and easier to implement.
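Team 1’s advantage can be made tangible with a few lines of code. The following is a hypothetical sketch: the car data and the numeric thresholds for “quiet” and “fuel-efficient” are invented for illustration and do not come from the article, but they show how concrete criteria translate directly into a checkable measure.

```python
# Hypothetical fleet data; fields and thresholds are illustrative assumptions.
cars = [
    {"color": "blue", "doors": 4, "noise_db": 62, "mpg": 40},
    {"color": "red",  "doors": 4, "noise_db": 70, "mpg": 35},
    {"color": "blue", "doors": 2, "noise_db": 60, "mpg": 45},
]

def meets_spec(car):
    # Team 1's concrete criteria: quiet, blue, four-door, fuel-efficient.
    # Numeric cutoffs are assumed here purely for the example.
    return (car["color"] == "blue" and car["doors"] == 4
            and car["noise_db"] <= 65 and car["mpg"] >= 35)

# Percentage of procured cars meeting the specification.
pct = 100 * sum(meets_spec(c) for c in cars) / len(cars)
print(f"{pct:.0f}% of procured cars meet the spec")
```

Notice that no comparable function could be written for Team 2’s brief: there is no observable test for “world-class.”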
So if you are struggling to design measures for your organization, your first step should be to clarify what result you are trying to achieve, in concrete terms. To learn more about developing concrete results or related measures, please look into one of our KPI training or certification programs or visit kpi.org.

Types of KPIs: The Logic Model and Beyond

As part of the KPI Basics series of content we are developing for the launch of the KPI.org website, I thought I would introduce the different types of key performance indicators (KPIs). As I describe in the accompanying video, I like to use a framework called the Logic Model to describe the first four types. The Logic Model is helpful for differentiating what we produce from what we can only influence, and for separating elements that are more operational from those that are more strategic in nature.

For every key process, we spend resources like time, money, raw materials, and other inputs. Every process also has measurements that can be tied to the process itself. The outputs of my process are what we produce. Ultimately, though, I want to create an impact with my work; outcomes capture that impact.

Let’s look at some examples of these types of measurements in real life. If I am a coffee maker, my input measurements might focus on the coffee, the water, or my time invested. My process measures could involve anything about the process of making coffee, from efficiency to procedural consistency. The outputs of my process would be the coffee itself, and I could have a variety of measures around the quality of that output. Finally, my outcome measures would relate to things I can only influence, such as whether my audience enjoys or buys the coffee.

There is certainly more value in measuring impact than in measuring operations. If my customer enjoys the coffee, I am doing something right. But you really do need a mix of both to truly understand performance.

To fully understand all of the elements of strategy execution, I can then add a few other broad categories of measures to my story. Project measures monitor the progress of our improvement initiatives and projects, which can be designed to improve operations or strategic impact. These track things like scope, resources, deliverables, or project risk.
In my coffee example, I might have a new branding campaign to sell my coffee. Employee measures tell us whether employees are performing well or have the right skills and capabilities. I might measure my employees’ skill in making coffee, for instance. Finally, risk measures tell us if there has been an important change in a risk factor that could have a significant impact on our organization. For example, I might have a risk indicator that tells me if global coffee bean availability becomes a problem.

The information that these different types of measures provide can be used to inform decision making. Using a family of measures like this can broadly inform your entire strategy. To learn more about key performance indicator development and implementation, please look into one of our KPI training or certification programs or visit kpi.org.
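The full family of measures from the coffee example can be organized as a simple data structure. This is a hypothetical sketch, not an official taxonomy; the category names follow the article, and the example measures are paraphrased from the coffee scenario.

```python
# The four logic-model measure types, with illustrative coffee-maker examples.
logic_model = {
    "input":   ["coffee beans used (kg)", "water used (L)", "labor hours"],
    "process": ["brew time per batch (min)", "procedural consistency (%)"],
    "output":  ["cups of coffee produced", "coffee quality score"],
    "outcome": ["customer satisfaction", "repeat purchases"],
}

# Broader measure categories beyond the logic model itself.
other_measures = {
    "project":  ["branding campaign milestones completed (%)"],
    "employee": ["barista skill assessment score"],
    "risk":     ["global coffee bean availability index"],
}

# Print the whole family of measures, category by category.
for category, examples in {**logic_model, **other_measures}.items():
    print(f"{category}: {', '.join(examples)}")
```

Laying the measures out this way makes the mix visible at a glance: the first four categories trace a single process from resources to impact, while the last three cover the projects, people, and risks that surround it.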
What I Learned About KPIs from My Six-Year-Old

I arrived to pick up my daughter on the last day of art camp just in time for program evaluations. Since we at the Balanced Scorecard Institute (BSI) use evaluation data for course improvement, I was intrigued to watch a room full of six- to nine-year-olds randomly fill in bubbles, then quickly improve their scores when the teacher noted that if any of the scores were less than three they’d have to write an explanation.

In the car on the way home, I asked my daughter why she rated the beautiful facilities only a 3 out of 5. She said, “Well, it didn’t look like a porta-potty. And it didn’t look like a palace.” She also said she scored the snack low because she didn’t like the fish crackers and wished they’d had more pretzels.

As I giggled at the thought of some poor city program planner or instructional designer trying to make course redesign decisions based on the data, I reflected on the basic principles we try to follow that would have helped the city avoid some of these mistakes.

The first is to know your customer. Obviously, giving small children a subjective course evaluation standardized for adults was ill-advised. Better would have been to ask the students about their experience using their language: Did they have fun? Which activities were their favorites? Which did they not like as much? Further, the children aren’t really the customer in this scenario. Since it is the parents who are selecting (and paying for) the after-school education for their children, their perspective should have been the focus of the survey. Were they satisfied with the course curriculum? The price? The scheduling? Would they recommend the course to others?

Another important principle is to make sure that your measures provide objective evidence of improvement of a desired performance result.
My daughter’s teacher used descriptive scenarios (porta-potty versus palace) to help the young children understand the scoring scale, but those descriptions heavily influenced the results. And a child’s focus on pretzels versus crackers misses the mark in terms of the likely desired performance result.

Similarly, it is important not to be fooled by false precision. Between some participants superficially filling in bubbles and others changing their answers because they don’t want to do any extra work, the city is simply not collecting data that is verifiable enough to be meaningful.

These might seem like silly mistakes, but they are common problems. We have had education clients that wanted to measure the satisfaction of key stakeholders (politicians and unions) while ignoring their actual customers (parents and students). We see training departments that measure whether participants enjoyed the class but never ask whether their companies are seeing any application of the learning. And we see companies making important decisions based on trends they are only imagining, due to overly precise metrics and poor analysis practices.

Even the evaluations for BSI certification programs require an explanation for an answer of 3 or less. I wonder how many of our students ever gave us a 4 because they didn’t want to write an answer. I have also seen evaluations go south simply because of someone’s individual food tastes. At least I can take solace in the fact that no one ever compared our facilities to a porta-potty.