Evaluating long-term impact of executive learning

Impact Expert, Claire Masson, explains a new approach to behavioural assessment

Mar 08, 2019

On a recent trip home, my mother offered me an apple, but I had no interest in eating it. She then complained that she bought too many apples and hoped that I would eat one. I did not. Minutes later a sliced-up apple appeared by my side. It was delicious. 

This small domestic incident reminded me of what researchers at Cornell University discovered: children are 73% more likely to eat fruit if it is given to them in slices than in its overwhelming entirety.1 Ruefully conceding that the definition of 'child' could be extended to cover my case, I got to thinking about impact (the consumed apple) and how closely it is tied to delivery (the sliced apple).

My role at Financial Times | IE Business School Corporate Learning Alliance is in impact measurement. But unlike in our apple analogy, learning impact is measured after we deliver our programmes. We hope the application of the learning continues long after the programmes themselves are complete. But how do we know?

Evaluating the long-term impact of an executive learning programme is challenging. Unlike traditional educational settings, executive education is usually short (a day or a week) and targeted at a single theme (leadership, say, or innovation). Most of these programmes aren't simply about conveying new knowledge; they are also about making significant behavioural changes in order to capitalise on that new-found knowledge.

If learning and development programmes aim to create behaviour change, then how do we measure success? Those behavioural changes are rarely measured directly because there are so many confounding factors, so indirect measures have been developed to address impact questions.

A common behavioural measurement tool is 360 feedback,2 which can be quite effective if done well. Feedback is a complex and nuanced area, and the worst case is not zero behavioural change: delivered poorly, 360 feedback has been shown to decrease motivation or increase apathy. Done well, though, allowing an individual to seek feedback from those around them can be very powerful.

In my search for the ideal measurement tool, I evaluated and analysed many 360 feedback assessments. I discovered that there are a lot of variations on the 360. Some ask for feedback on what someone should ‘stop’ or ‘start’ doing. Others use a vast number of competency-based questions that may or may not be related to one’s job. What the individual does with the feedback is often left to them with little guidance or support. The act of giving and receiving feedback is treated as sufficient to create behaviour change.

In short, 360 feedback tools are rarely designed with learning or behaviour improvements in mind because they were designed for assessment. Typically, organisations use 360 feedback as an annual event tied to performance reviews. It tends to be something that is done to people, without their having chosen to receive feedback, which can be demotivating. Is it possible, I wondered, to embed good learning design into 360 feedback to support learning and long-term behaviour change? I changed my search parameters from an 'assessment tool' to one that would do just that: aid learning!

Daily working lives

I then discovered a unique and meaningful approach by The Honeycomb Works.3 Their Honeycomb is more than a digital tool: it's a method based on behavioural science, learning science and agile principles. Although grounded in good science, the Honeycomb puts people first, integrating learning and behavioural feedback into people's daily working lives.

Giving people options and choice is motivating, engaging them intrinsically. However, giving them too much choice can paralyse them. So, you need to give people enough options, but not too many … clear?!

Just as eating an apple is easier in small slices, the same is true for learning and feedback. By slicing up feedback into small areas of focus, it's more manageable to give and to receive. In this way, it's easier to make the necessary connections that will turn new knowledge into lasting behaviour change.

In the Honeycomb these slices are called cells. Each cell is one small topic on which an individual can get very specific feedback, access learning, and apply it in practice. I can even extend the learning moment by customising a cell with a Financial Times article on a topic such as 'learning from failure'.


Instead of asking for feedback on everything all at once - and risking overload - feedback is given on a small number of relevant cells at a time. This controlled approach allows individuals to focus deeply on learning and making changes before moving on to something else. Rather than a one-off event, feedback becomes a regular habit, focused on exactly the skills or behaviours that person needs to develop, when they need to develop them.

Objective feedback

Now that we've addressed the delivery of feedback, how do we ensure its quality? To make a change, you first need to know what to change. It sounds obvious, but it is common for feedback to appear helpful on the surface while lacking specificity about what to do differently. Feedback is also prone to being highly subjective: the way one person thinks something should be done isn't the same as the next. Unless reviewers are asked to give feedback on specific behaviours, there is a high likelihood they will be vague.

I chose The Honeycomb Works because they built a framework of observable behaviours. Each cell comprises just four to six such specific behaviours. This approach allows each person to get highly personalised, detailed feedback on the behaviours and skills shown to drive success in that area.

Although the Honeycomb method is designed for the individual, human resources isn't forgotten: HR teams receive invaluable data detailing strengths and weaknesses, aggregated across populations. These Honeycomb reports allow me to integrate their findings with my own client-facing impact reports to provide relevant, actionable insights.

We live in a data-rich age for impact measurement. Recognising the dangers of relying too heavily on things that are easy to count, and the limits of dashboard reporting, we seek out forward-thinking partners like The Honeycomb Works. In this way, impact reporting becomes part of the learning solution: measurement isn't merely an end result, but part of the individual's reflective experience, prompting even more growth.

References

1 Wansink, B., Just, D., Hanks, A. & Smith, L. (2013). Pre-Sliced Fruit in School Cafeterias: Children's Selection and Intake. American Journal of Preventive Medicine, 44, 477–480. doi:10.1016/j.amepre.2013.02.003

2 360-degree feedback (multi-rater feedback/assessment), Wikipedia: https://en.wikipedia.org/wiki/360-degree_feedback

3 The Honeycomb Works. https://www.thehoneycombworks.com/


Claire Masson

Digital Learning Specialist, Headspring