Call it bias, but I’ve always believed that NetApp University (NetAppU) does training and education better than many of our industry peers, and the feedback we’ve received so far supports that. But how do you quantify the value of training? At NetAppU, our primary mission is delivering training that is meaningful and valuable for our learners. While we have basic metrics on the success of our training programs, such as attendance rates and test scores, we wanted more. So, in the spirit of being data-driven, we decided to dive deeper.
Successful training is determined by the impact the learning has, whether on job performance, productivity, or the bottom line. For each audience, the definition of success is different. For example, sales reps might consider training valuable if it increases customer visits or leads to more bookings. For customer support engineers, effective training helps them resolve customer issues faster, or reduces the number of cases that are opened. Systems engineers might say a learning module is valuable if it helps them better architect a solution for customers that effectively leverages NetApp technology. Whatever the key metrics are, a particular piece of training has to have a tangible, measurable impact, or it is not useful to anybody.
To get this data, NetAppU is leaning on a Software-as-a-Service survey platform called Metrics that Matter (MTM). It’s built into every piece of training and education we produce, and it enables us to get real feedback that we can use to track the performance and value of each of our courses. MTM gathers data from NetAppU-delivered classes as well as training provided by our Authorized Learning Partners (ALPs).
We designed two surveys with MTM that are part of every course in our learning portfolio. The first survey is administered right after a learner completes a course. It asks learners several questions about the training they received, the skills they learned, the performance of the instructor, and so on. The second survey is delivered 60 days after the learner completes the course. Its purpose is to collect data on how the training and knowledge gained are being applied in the learner’s day-to-day job.
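To make the mechanics concrete, here is a minimal sketch of how results from the two surveys might be rolled up per course and compared against an industry benchmark. This is not MTM’s actual export format or API; the file name, column names, and benchmark value below are all illustrative assumptions.

```python
# Hypothetical roll-up of survey scores, split by survey stage
# (post-course vs. 60-day follow-up). The CSV layout, column names,
# and benchmark value are assumptions for illustration only.
import csv
from collections import defaultdict

INDUSTRY_BENCHMARK = 6.0  # hypothetical benchmark on a 7-point scale

def summarize(path):
    """Print average score and delta vs. benchmark per (course, stage)."""
    totals = defaultdict(lambda: [0.0, 0])  # (course, stage) -> [sum, count]
    with open(path, newline="") as f:
        for row in csv.DictReader(f):
            key = (row["course_id"], row["survey_stage"])
            totals[key][0] += float(row["score"])
            totals[key][1] += 1
    for (course, stage), (total, count) in sorted(totals.items()):
        avg = total / count
        delta = avg - INDUSTRY_BENCHMARK
        print(f"{course} ({stage}): avg {avg:.2f}, "
              f"{delta:+.2f} vs. benchmark ({count} responses)")

if __name__ == "__main__":
    summarize("mtm_responses.csv")  # hypothetical CSV export of responses
```

The two-stage split matters because the questions answer different things: the post-course numbers speak to the quality of the course and instructor, while the 60-day numbers speak to whether the learning actually changed on-the-job behavior.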
As we started to roll out this new tool, the data began pouring in. And what we found was even better than we had hoped.
The following metrics are from ~1,600 evaluations collected in Q3 FY19 (November 2018 – January 2019). Where applicable, we’ve noted the industry benchmark.