Buckingham dealt less with the more formal aspects of performance management, but it is these that were covered extensively in the Harvard Business Review article.
This project at Deloitte started with a simple counting of hours - which for Deloitte added up to 2 million hours spent completing performance reviews and ratings. I know Adobe’s project started like this as well, but in general it’s unlikely to be that helpful for most organisations, as it’s only the time and cost of performance management which can be measured so objectively - you’ve still got no basis for comparing the benefits in the same way, so you might as well stick to a high-level subjective comparison.
The second input was a review of the research on the science of ratings. I agree this is useful, and I would extend it to the science of feedback, coaching etc - which is all part of the evidence-based approach HR needs to take on board (though I’ll also be commenting on ‘evidence-based HR’ shortly). The key piece of data for Buckingham and Deloitte was that 62% of the variance in ratings could be accounted for by individual raters’ peculiarities of perception, while actual performance accounts for only 21% of the variance. So traditional performance reviews are clearly very unlikely to work.
The third input was a carefully controlled study of their own organisation - this is the critical piece for me. And Deloitte did do what I recommended in my last post, which is to develop clear objectives for the project and their performance management process or practices. Their objectives were to be able to recognise (pay for) performance, to truly understand that performance, and to fuel or develop that performance (which they do through their check-in process). These are fine, and all organisations will, or at least should, have different objectives, but they do need to recognise that rewarding and developing for performance are largely irreconcilable and find a way around this, which I’m not sure they have. In fact the case study notes this, suggesting that Deloitte wanted to tell people what they’d been rated (to help development) but couldn’t do so, as this would inflate the ratings (and hence reduce the ability to make good decisions about pay).
I’d also argue that, even if you’re not going to separate the assessment and development sides of performance management, reward shouldn’t be the top-priority objective. Organisations can generate much more impact on performance by developing their people to perform than they can from the potential but complex effects on motivation that may or may not follow from bonuses, incentives and salary increases. Putting reward first is putting the cart before the horse.
The most interesting part of the case study is the way Deloitte has tried to neutralise the idiosyncratic rater effect by having raters rate their own actions, rather than the qualities or behaviours of the ratee (it’s also interesting that Deloitte still uses the terms rater and ratee even though it wants to get away from the importance of the rating!):
“People may rate other people’s skills inconsistently, but they are highly consistent when rating their own feelings and intentions. To see performance at the individual level, then, we will ask team leaders not about the skills of each team member but about their own future actions with respect to that person.”
So in their annual performance snapshots, they ask managers to provide four ratings - covering pay, talent identification, poor performance and readiness for promotion.
Doing this may be better than providing just one rating, but I’d be interested in the correlations between them, i.e. whether they’re all measuring the same thing, or at least whether the halo effect means they all end up with the same level of assessment. In addition, my own experience is that the higher the number of assessment scores, the higher the potential for disagreement and conflict, pulling people down into debate over the numbers rather than enabling good conversation about the real things the numbers represent. It’s why I’m also not overly in favour of Deloitte’s desire to use big data to provide some of these ratings in future:
“And these conversations are best served not by a single data point but by many. If we want to do our best to tell you where you stand, we must capture as much of your diversity as we can and then talk about it.
We haven’t resolved this issue yet, but here’s what we’re asking ourselves and testing: What’s the most detailed view of you that we can gather and share? How does that data support a conversation about your performance? How can we equip our leaders to have insightful conversations? Our question now is not What is the simplest view of you? but What is the richest?
We want our organizations to know us, and we want to know ourselves at work, and that can’t be compressed into a single number. We now have the technology to go from a small data version of our people to a big data version of them. As we scale up our new approach across Deloitte, that’s the issue we want to solve next.”
Please note I’m absolutely not against using big data to inform the conversation between the manager or team leader and the employee, as this helps reduce the inherent bias in the process, but that’s different from arguing that the overall assessment should be based upon big data. The main problem with the latter is that, by its nature, the information contained in big data is likely to be very transactional - highly reliable but with low validity and meaning. So it’s not what you want your performance assessments to be based upon.
I’d also question whether Deloitte really needs its four ratings. Wouldn’t it be easier just to discuss each individual and, if someone needs a higher salary, pay them; if they need to go on a talent programme, develop them; if they’re not performing, exit them; and if they need a promotion, promote them? In his SHRM presentation Buckingham argued that big, complex organisations will always need ratings, as otherwise how does the HRD sitting in HQ make decisions about all their talent? That’s a pretty easy question to answer - they shouldn’t. At least for everyone who isn’t in some form of corporate interest group, all of these decisions should be taken locally, where managers know the people they’re discussing. No form of rating is ever going to make the HRD doing this centrally, across all their locations, a valid way of managing talent.
More importantly, I don’t see how Deloitte thinks assessing the raters’ actions, rather than the ratees themselves, will be an improvement. Managers may assess their own future actions objectively, but those actions will still be based on their inconsistent and biased interpretations of their people. Deloitte tries to get round this by suggesting that team leaders are closest to the performance of ratees and, by virtue of their roles, must exercise subjective judgment - so Deloitte is interested in what these subjective judgements will be. As Buckingham explained: “We want to know this. It’s called judgment. So how do we measure their inherently subjective judgments about one another?”
But the change hasn’t really shifted the dynamics of the way people are paid, promoted and so on; it’s just got rid of the single performance rating as a stepping stone towards these ends. The whole process remains as riddled with bias as a more traditional approach.
My final criticism of the approach is that Buckingham is at pains to stress that leaders need to act in ways which suit their strengths (“there is no perfect profile there is only practice which fits your profile”), yet he recommends check-ins as a single, uniform approach within an organisation. I suspect that even when businesses have developed the right best-fit approach for their company, it will still need to be tailored and adapted by different teams, and potentially by individuals, within the organisation.
More about me:
@joningham, http://linkedin.com/in/joningham
info@joningham.com, +44 7904 185134
Top 100 HR Tech Influencer - Human Resources Executive
HRD Thought Leader - HRD Connect
Mover and Shaker - HR magazine
Also develop your Strategic HCM capabilities at my new Strategic HR Academy