Welcome back! Yesterday, we talked about trying to raise money after the concerts are done. Today I want to focus on accountability, performance reviews, and tactical planning.
One topic I hear a lot about in the orchestra world is accountability. In a very obvious way, musicians are accountable every time the orchestra plays. They are held to extremely high standards; when they make a mistake, it’s often out there for everyone to hear.
In my experience, musicians often complain that others in the organization are not held to comparable standards of accountability. When you probe, this sometimes really means “I gave him a good idea, and he never followed through with it.” Or “she’s missed her goal two years in a row; why hasn’t she been fired?” Or even “something obviously went wrong, and nobody ever explained what or why, so I can only assume it is due to the staff being malevolent and/or lazy and/or incompetent.”
I think that it is important that staff (and board for that matter, but that is a different discussion) be held accountable to the same high performance standards that we expect our musicians to achieve week in and week out on stage. And to anyone who might say “orchestras don’t pay staff enough to be able to expect the highest levels of professionalism,” I would point out that we often don’t pay our musicians enough either, but that doesn’t stop them striving for and delivering exceptional performance.
What you can hold staff accountable for will largely depend on how you define goals. Wherever possible, I like to define goals around achieving net results rather than around following a process. Success is not “I sent out a mailing.” Success is “I reached my budgeted goal.” But how do you hold someone accountable for achieving results that they never thought were realistic in the first place (also known as “fantasy budgeting”)? Or for results that were made unrealistic by circumstances outside their control?
If you are going to hold staff accountable for results, I believe you need to give them a fair amount of autonomy and freedom from constant second-guessing. I have seen managers who want to approve every decision made by every staff member; the result is organizational paralysis. I also know that some managers believe in constant performance feedback. I don’t find this practical.
As we go through the year we have obvious successes and disappointments, and a manager needs to congratulate staff on the successes and help them learn from the disappointments. But when everyone feels overwhelmed by cascading urgent deadlines, the practical reality is that we often say “we survived that one without a disaster” and move on to averting the next potential disaster. It’s very easy to lose sight of key organizational and departmental goals, to lose focus on the big picture. That’s why annual reviews are important.
To me this process is about reminding ourselves how what we do fits into the big picture. I’ve seen review systems that look like my first-grade son’s report card: you get so many points for keeping your desk clean, so many points for playing well with others, extra credit for clear penmanship, and so on. For me, that is not what a performance review should be about. A performance review is a chance to remember what we said a year ago were the most important things to focus on for the next twelve months, and to look at how we did on them. It’s a chance, after spending months treating symptoms, to diagnose the systemic causes and come up with a game plan to make the symptoms go away. Getting an employee to take the time to conduct an honest and thorough self-assessment often produces more insight than anything I have to say. And I find that, if anything, the people who work for me are generally harder on themselves than I am.
A key part of the review process is agreeing on priorities for the year that is just about to start, and agreeing on priorities leads quite naturally into tactical planning. I believe that orchestras often fail to achieve the results they need not because they don’t know what to do, but because they don’t execute. I’m exaggerating a bit, but you can only sit through so many random meetings that degenerate into brainstorming about marketing ideas before you want to run screaming from the room. We already have plenty of ideas…our challenge is to prioritize, focus, and execute.
I can’t tell you how many times I’ve heard “we tried that and it didn’t work,” only to find on further review that what really happened was “we tried it in a half-assed way at the last minute, expecting it to fail, and it did.”
Entering the season with well-conceived, detailed tactical plans is a critical element of success. Each department needs a tactical plan that says we’re going to do this set of activities, which we reasonably expect (based on past experience) to produce that set of results. Einstein’s definition of insanity applies to us, too: doing the same thing this year that we did last year, while expecting dramatically better results, is not a recipe for success.
So I lean hard on the people who report to me to use the weeks leading up to the start of the season to develop tactical plans that spell out what needs to happen when, who’s responsible for making it happen, and what kind of results we expect it to produce. Because it’s not acceptable to get to the end of a year and say
“We did everything we thought we needed to do, and these activities more or less produced what we expected them to, but somehow it didn’t add up to what we had in the budget. Oh well, I guess it wasn’t possible.”
In my book, the person who says that should be sent packing — pronto.
Curt, I think most of this entry is spot-on. But as I’m sure you know, one must be careful with outcomes-based evaluation, in that you need to evaluate outcomes over MANY actions that a person takes.
If you judge a person’s skill by one outcome that s/he achieves or fails to achieve, you are going to give the person a lot of undeserved credit (or blame) for exogenous factors and simple luck. If you only have a few instances of decision-making and execution to judge, it is the decision-making itself that should be evaluated.
For example, if advertising in a particular way is KNOWN to work 9 times out of 10, it is stupid to punish somebody for the one time it doesn’t work. Of course, if you have a set of 10 similar actions, you can judge the person based on outcomes much more accurately.
Taking a hard line sounds satisfying, but any employer must also recognize the limitations on his or her own judgment.
Thanks for your comment, Aaron. I certainly agree that the aggregate of all outcomes is much more important than any single outcome. I also agree that it would be unfair to punish someone for assuming that past experience was a guide for projecting future results.
This BusinessWeek article perfectly sums up my disdain for annual performance reviews. (Sorry, Curt, old friend.) I never did them. That’s not to say I didn’t set goals, which are really a separate issue altogether. My approach was real-time response to achievement or failure. http://www.businessweek.com/magazine/content/09_31/b4141080608077.htm
Thanks for commenting, Christopher. There’s no single right way to manage an orchestra (which is not to say that there are not clearly wrong ways to do it).
As I acknowledged, there are ways to conduct performance reviews that are useless. But done right, I find that they are an important part of balancing autonomy and accountability (and I think that the people who work for me would agree).