This is the final post in a three-part series that explores soft skills, their importance, how to train people in soft skills, and the challenges of measuring such training.
We began this blog series by defining soft skills, ultimately distilling their essence to the things humans do that machines can’t (or can’t do as well). In part two, we discussed why soft skills training is urgently needed (despite the historically low unemployment rate) and how to deliver it (primarily through learning by doing). Now, in the final post of the series, we tackle how to measure the impact of soft skills training.
We sought answers to these questions: Is measuring the impact of soft skills training possible? Is it difficult? Does it call for an approach distinct from measuring hard skills?
The short answers, respectively, are yes, kind of, and not really. Let’s dive in.
Is it possible?
Many in the learning and development community have hit a wall when trying to gauge the effectiveness of their soft skills training. How, after all, can we objectively measure something that’s so subjective, something like leadership, creativity, or empathy? Such intangible qualities are different in everyone, and the form they take varies by the situation; in contrast to hard skills, soft skills aren’t something we can check off as complete or incomplete. Remember Gregory Lewis’ definition: soft skills pertain to how we do tasks, not whether we do them. So, if we can’t precisely measure the skill, can we ever measure the training?
In an effort to answer that question, we turned to some folks who definitively said, “Yes.”
Remember the 256 percent ROI metric we referenced in part one of this series? The researchers behind that stat — Achyuta Adhvaryu from the University of Michigan, Namrata Kala from Harvard University, and Anant Nyshadham from Boston College — claimed that workers who were trained in soft skills were 12 percent more productive than their counterparts, leading to a massive return on the soft skills training. What, we wondered, was their methodology for measuring that?
In essence, the authors set out to see if a soft skills training program delivered to women working in India’s garment industry would boost their job performance. The training covered skills such as communication, time management, financial literacy, problem solving, decision making, and legal literacy. Participants were randomly selected to partake in the training; those who were not chosen made up the control group.
The University of Michigan press release offers this summary of the results: “Nine months after the program ended, productivity gains, along with an increase in person-days due to retention changes, helped generate a whopping 256 percent net return on investment.”
The key words in that recap are “productivity gains” and “retention changes.” Those are what the researchers measured, not development in communication, time management, and so on. Because trainees showed greater gains in productivity and retention than the control group did, the researchers connect the training to those gains.
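As a rough illustration of the ROI arithmetic behind a figure like that, here is a minimal sketch. All dollar amounts are invented for illustration; they are not the study’s actual data.

```python
# Hypothetical figures for illustration only; not the study's actual data.
training_cost = 10_000          # total cost of the soft skills program
productivity_gain = 28_000      # value of extra output from trained workers
retention_gain = 7_600          # value of extra person-days from better retention

# Net ROI = (total benefit - cost) / cost
net_benefit = productivity_gain + retention_gain - training_cost
roi = net_benefit / training_cost
print(f"Net ROI: {roi:.0%}")    # with these assumed figures, prints "Net ROI: 256%"
```

The point is not the particular numbers but the structure: the inputs are bottom-line quantities (output value, person-days, program cost), not scores on a communication or empathy assessment.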
The takeaway is one we see across similar evaluations: to measure the impact of soft skills training (and of any training, really), monitor bottom-line metrics.
So, where does that lead us?
That revelation leads us to consider the fundamentals of evaluating training, like clearly defining measurable objectives and appropriately setting our own expectations. To do the latter, Kevin M. Yates, a self-described L&D data detective, recommends that we forecast how difficult a particular program will be to measure and survey learners immediately after the training to get a sense of its long-term effectiveness.
To assist in anticipating the difficulty, Yates offers an “effort scale.” He argues that the effort required for “collecting facts about a training’s impact” increases as the dynamic nature of the job increases.
The examples he gives for jobs suggest to us that the more dynamic a job, the more soft skills it requires; client engagement demands more soft skills than call center customer service, which in turn entails more soft skills than assembly line work. So, if we accept that measuring training for dynamic jobs is difficult, and dynamic jobs necessitate more soft skills, then we can surmise that assessing soft skills training will naturally be difficult.
The key is not to make evaluation harder than it already is. When dealing with soft skills training, measure the impact on the bottom line, not each learner’s capacity for the skill itself, an aim that’s far too abstract.
How do we measure bottom-line impact?
Again, this challenge boils down to training assessment fundamentals, which is why we say there isn’t a unique approach to measuring soft skills training compared to doing so for hard skills training. The most important thing in either effort is that we know exactly what we’re looking for.
To clearly identify measurable objectives, Yates suggests we ask these three questions:
- What’s happening in the organization?
- What is the organization’s goal?
- What performance requirements are needed to achieve that goal?
The first question essentially invokes a needs analysis, which learning professionals should have already done to confirm that training is the proper solution for the performance gap. Make sure there aren’t other, easily fixable barriers to performance before investing in training.
Yates’ second question is about the bottom-line metric, or metrics, the organization wants to improve. That may be employee retention, employee productivity, or something else. The answer to this question is what we ought to measure against the total cost of any soft skills training.
And finally, Yates asks us to consider what performance metrics are necessary for the organization to achieve its bottom line goal. For example, to increase the yearly productivity by 12 percent, perhaps each garment worker needed to increase her weekly finished garment count by two. Defining performance metrics enables us to set our sights on a more immediately available metric, one that will indicate whether the training is on track to meet the organization’s bottom line goal.
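To make that translation concrete, here is a minimal sketch of how an annual bottom-line goal can be converted into a weekly per-worker target. The baseline output figure is an assumption chosen for illustration, not a number from the study.

```python
# Assumed baseline for illustration: 17 finished garments per worker per week.
current_weekly_output = 17
target_increase = 0.12          # organization's goal: +12% productivity

# Convert the annual percentage goal into an immediate, observable target.
target_weekly_output = current_weekly_output * (1 + target_increase)
extra_per_week = target_weekly_output - current_weekly_output
print(f"Each worker needs about {extra_per_week:.0f} more garments per week")
```

With that baseline, the 12 percent goal works out to roughly two extra garments per worker per week, a metric a supervisor can check long before year-end productivity numbers arrive.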
What did this blog series teach us?
While there is always more to learn, our hope is that readers share our overall takeaways from writing this series: a firm understanding of the essence of soft skills, as well as a belief in their importance to businesses and workers alike; an inspiration to create scenario-based learning content to train people in soft skills; and a sigh of relief, knowing that we can measure the impact of soft skills training.
Need some soft skills training for your organization? Reach out today to see how Roundtable can help you deliver.