The need to measure the effectiveness of a product or service is almost universal in business, regardless of field of service or type of product. Getting feedback is not only how you know if what you are doing is working, but it’s also how you improve and grow. And — it can’t be denied — customer satisfaction data is an effective marketing tool.

But few things are as hard to do well as gathering feedback and measuring and reporting outcomes, for several reasons. For one, designing feedback programs that measure relevant information is complex. Also, measuring outcomes is rarely a single-objective undertaking; the data gathered serves many purposes, which means you have to be clear about what you are going to use the data for. In addition, getting people to respond with useful information is a well-known challenge, and using the information you have gathered, whether for product improvement, customer reporting, or marketing, requires analytical insight.

Globiana’s COO and Co-Founder, Steffen Henkel, has long experience measuring outcomes and analyzing and reporting data. When asked about his approach, he starts by talking about the importance of measuring the right thing (and how hard that can be), and of being able to identify which information is useful and which is not.

To illustrate what he means, he gives a simple example: trying to measure someone’s impression of a cross-cultural training session, something he knows well. The goal of the training on an individual level is to increase a person’s cultural awareness so that the transition into life in a new country is as smooth and efficient as possible. The training session is usually given shortly before someone relocates.

The goal for a company providing the training to an employee is typically multi-faceted and includes general cultural adaptation as well as business-specific skills, so that the employee can be effective at work. Sending an employee abroad to work or do business represents a big investment, and a failed assignment or botched negotiation is costly. The training therefore serves an important function in maximizing the outcome for the employee and the company alike.

So, what is useful information to measure after the training session is completed? And what is reported to whom? Steffen wants to know several different things:

  • The basics: was the trainer professional and knowledgeable? Were the facilities satisfactory?
  • What people thought about the course — was the format good, was the material engaging, and so on?
  • Was the information the trainer provided effective and useful? Did it translate into usable skills in “real life”?

Reporting considerations are both external and internal and typically include the following:

  • External reporting to the purchasing client
  • Feedback from/to trainer
  • Internal reporting for the purpose of improving content and practices
  • Marketing

On the surface, the basic questions, such as whether the trainer was professional and knowledgeable, should be pretty straightforward, as should finding out what people thought about the course. However, even these seemingly simple questions can produce answers that don’t actually measure what you intended to measure.

Steffen says: “Unless you have a carefully designed survey, what you may end up measuring is how a person is feeling after the class — are they tired, hungry, happy because it ended a little early and the snacks were good — rather than what they think about the content of the class. And that’s not really the information you’re after.”

According to Steffen, the best way to gather information in these cases is to make data collection as specific as possible: for example, by starting the session by asking “what are your three main objectives for this class?”, and then revisiting those objectives at the end to see whether and how they were met. This approach builds context into the information-gathering process.

Understanding the effectiveness of the information the trainer has provided is trickier. It can’t be measured at the conclusion of a training session; it has to wait until the assignee has been on location for a while, typically several months after the training took place. Gathering data months after an event comes with its own set of challenges. Because the questions revolve around whether the person has been able to apply what they learned in training to everyday life in the new location, the answers are more about “feeling” than raw data, meaning the outcome is not something that can be scored on a scale or quantified in a graph.

In this example, the ultimate indicator of training success for the employer may be whether the employee stays the course or not. Measuring that outcome requires a separate survey, targeting the hiring company.

Steffen’s “simple” example above highlights the complicated nature of gathering data and measuring outcomes, and it underscores the need for deliberate data collection and reporting to be part of the overall product or service offering.

General trends in measuring outcomes and reporting are influenced by the online tools available, many of which make it easy to collect data. However, the quality of that information varies, and it may not always be actionable. Clicking a happy or sad face emoji, for example, gives an indication of someone’s current state of mind but not much more.

As for a tool like Net Promoter Score (NPS), Steffen sees it as genuinely useful for marketing purposes, and it can function as an internal indicator of how you are doing. However, it may not yield much specific, actionable information, and it is not a particularly useful reporting tool for clients seeking data.
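
As a point of reference, NPS is derived from a single question, “how likely are you to recommend us?”, answered on a 0-10 scale: respondents scoring 9-10 count as promoters, 0-6 as detractors, and the score is the percentage of promoters minus the percentage of detractors. Here is a minimal sketch of that arithmetic in Python (purely illustrative; the function and the sample ratings are hypothetical, not part of any tool mentioned in this article):

    def nps(ratings):
        """Compute Net Promoter Score from 0-10 survey ratings."""
        promoters = sum(1 for r in ratings if r >= 9)   # scores of 9-10
        detractors = sum(1 for r in ratings if r <= 6)  # scores of 0-6
        return 100 * (promoters - detractors) / len(ratings)

    # 5 promoters, 3 passives (7-8), 2 detractors across 10 responses
    print(nps([10, 9, 9, 10, 9, 8, 7, 7, 6, 4]))  # -> 30.0

Note that passives (scores of 7-8) drop out of the calculation entirely, which helps explain why a single NPS number, on its own, carries little of the specific detail needed for improving content or reporting to clients.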

Lastly, when thinking about the power of collecting and reporting data, and how your findings help drive business decisions, consider this finding from a Helpscout study: 80% of companies say they deliver “superior” customer service, whereas only 8% of their customers agree with that assessment. While this particular study doesn’t indicate the level of dissatisfaction among customers, it does point to a big gap between companies’ perceived success and the lived experience of their customers, and that is not a good starting point for building a strong brand.

By: Felicia Shermis

Sources:

Harvard Business Review

Aircall
