What I Learned About KPIs from My Six-Year-Old
I arrived to pick up my daughter on the last day of art camp just in time for program evaluations. Since we at the Balanced Scorecard Institute (BSI) use evaluation data for course improvement, I was intrigued to watch a room full of six- to nine-year-olds randomly fill in bubbles and then quickly improve their scores when the teacher noted that if any of the scores were less than three they’d have to write an explanation.
In the car on the way home, I asked my daughter why she rated the beautiful facilities only a 3 out of 5. She said, “Well, it didn’t look like a porta-potty. And it didn’t look like a palace.” She also said she scored the snack low because she didn’t like the fish crackers and wished they’d had more pretzels. As I giggled at the thought of some poor city program planner or instructional designer trying to make course redesign decisions based on that data, I reflected on the basic principles we try to follow at BSI that would have helped the city avoid some of these mistakes.
The first is to know your customer. Obviously, giving small children a subjective course evaluation standardized for adults was ill-advised. Better would have been to ask the students about their experience in their own language: Did they have fun? Which activities were their favorites? Which did they not like as much?
Further, the children aren’t really the customer in this scenario. Since it is the parents who select (and pay for) their children’s after-school education, their perspective should have been the focus of the survey. Were they satisfied with the course curriculum? The price? The scheduling? Would they recommend the course to others?
Another important principle is to make sure that your measures provide objective evidence of improvement in a desired performance result. My daughter’s teacher used descriptive scenarios (porta-potty versus palace) to help the young children understand the scoring scale, but those descriptions heavily influenced the results. And a child’s focus on pretzels versus crackers misses the mark in terms of the likely desired performance result.
Similarly, it is important not to get fooled by false precision. Between some participants superficially filling in bubbles and others changing their answers because they don’t want to do any extra work, the city is simply not collecting data that is verifiable enough to be meaningful.
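To see how much this kind of gaming can distort a metric, here is a minimal sketch (my own toy simulation, not a BSI tool or the camp’s actual data) that assumes respondents simply bump any score that would trigger the write-an-explanation rule up to a 4:

```python
import random

random.seed(1)

THRESHOLD = 3  # assumed rule: a score at or below this requires a written explanation

def survey_means(n=1000):
    """Compare honest scores to what gets reported when respondents
    avoid the extra work of writing an explanation."""
    honest = [random.randint(1, 5) for _ in range(n)]
    # Assumption: anyone who would have to explain just bumps the score to a 4.
    reported = [s if s > THRESHOLD else 4 for s in honest]
    return sum(honest) / n, sum(reported) / n

honest_mean, reported_mean = survey_means()
print(f"honest mean:   {honest_mean:.2f}")    # ~3.0
print(f"reported mean: {reported_mean:.2f}")  # ~4.2
```

On a uniform spread of honest opinions, the forced-explanation rule alone pushes the average from roughly 3.0 to 4.2, a jump that has nothing to do with actual course quality.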
These might seem like silly mistakes, but they are common problems. We have had education clients that wanted to measure the satisfaction of key stakeholders (politicians and unions) while ignoring their actual customers (parents and students). We see training departments that measure whether participants enjoyed the class but never ask whether their companies are seeing any application of the learning. And we see companies making important decisions based on trends they are only imagining due to overly precise metrics and poor analysis practices.
Even the evaluations for BSI certification programs require an explanation for an answer of 3 or less. I wonder how many of our students ever gave us a 4 because they didn’t want to write an answer. I have also seen evaluations go south simply because of someone’s individual food tastes.
At least I can take solace in the fact that no one ever compared our facilities to a porta-potty.