But it was only when I worked so closely on these two differing projects that the similarities and connections between them became apparent. The performance management programme focused on new managers and looked not only at the ways in which good performance could be encouraged, specified and subsequently rewarded, but also at how poor performance could be addressed, corrected and ultimately eliminated. The documentation of the process was driven by the individual. Their responsibility was to help the organisation recognise their special talents, understand their aspirations and ultimately place them into the appropriate leadership passage. The path to enlightenment could be accelerated through admission to the ranks of the ‘high potential’, whence their engagement with the intricacies of succession planning was assured.
So far, so corporate. Just about every large organisation has something similar, and who am I to criticise? In fact, I don’t. It has long been my mantra that “what gets measured, gets done”.
But by rigorously measuring some things (usually the stuff which is easy to measure, with numbers attached) and failing to measure others (the stuff which is difficult to quantify but vital to organisational success), we often unwittingly skew our team members’ efforts without really understanding why. Are we measuring what is important, or simply according importance to that which can easily be measured?
It was the community of practice discussions that got me thinking. While facilitating a group discussion on the subject, we looked at what often goes wrong. Clearly, there are problems with people not contributing to community of practice sites, even though everyone is generally happy to engage in meetings. And without some progress on information sharing between meetings – which in global organisations may be very infrequent – the meetings we quite like attending very often dry up in the face of perceived indifference. We tried to analyse why that indifference might exist. Why are people happy to ring someone up or email a question to an individual, but apparently less interested in consulting a dedicated intranet or internet site where collective resources can be found and searched, and discussions and conversations held?
One of the resources everyone wanted was case studies. We discussed what these case studies might include and what could be learned from them. Then we compared a wish list of these case studies with what was currently available. Bingo. We might have identified one of our issues.
What the community required of a case study was a warts-and-all exposé of what happened, what went wrong and which pitfalls to avoid. It was a classic case of wanting to learn from the mistakes of others rather than replicating the corporate cock-up. No cupboard should be opened without the skeletons inside being examined with the enthusiasm of a forensic scientist let loose in a pathology lab for the first time.
But when we looked at the available case studies of projects past and approaches of yesteryear, what we saw was not an honest analysis of the routes taken, dead ends followed and errors made which could have been avoided. What was available was part of the celebration. Mountains had been moved and overwhelming odds battled. Thanks were given and everyone basked – ever so slightly smugly – in the reflected glory of the team’s achievement.
And there’s a reason why these case studies were overwhelmingly positive. The performance management system in this organisation had everyone running around in the final few weeks of the financial year, desperate to ensure year-end goals were achieved before the managerial judgement of success or failure was made. Reports would be filed, evidence presented and targets checked off in time to secure bonus payments and the chance of advancement. By default, the very information regarded as an essential spark for the community of practice was extinguished by the performance management process.
How can this be different? Let’s go back to “what gets measured, gets done”.
Is there a way for the performance management system to focus on generating learning and knowledge sharing rather than celebrating success (and covering up the mistakes)? By establishing a case study framework that addresses the needs of the community of practice, can we set targets for project leaders and implementation teams to complete these frameworks as part of their deliverables? Can we create a new vision of success that puts reflection and learning at the centre of what good looks like?
I think we can, but it might mean a significant culture shift. Changing organisational opinion about what constitutes success and what project reviews involve will require all kinds of reassurance for those individuals who – for the first few times – may feel somewhat exposed when disclosing their mistakes so publicly.