+ + +
So, having chosen our association's success metrics--measurable indicators closely tied to the outcomes we seek to achieve, which are within our ability to affect, and which tend to get lost in the whirlwind of program activity--as our Wildly Important Goals, or WIGs, I was ready to move forward with the experimental implementation of 4DX in my association. And last week, after alluding to my own assessment that the experiment hasn't gone as well as I'd hoped, I said I would go into some of those details this week.
Here they are, the three factors that have, in my opinion, most contributed to our difficulty:
1. The whirlwind is tenacious.
Everyone in the organization has a lot to do. And even though I was thoughtful about carving out time for the people asked to lead the data collection and reporting aspects of the WIGs, and about restructuring our existing weekly staff meeting so that it could serve as a WIG meeting (where we report our progress and discuss ways we could better drive the WIGs forward), some of these activities are still taking a backseat to what often feel like more pressing and impactful concerns. It's an association. There are conferences to plan, industry reports to compile and promote, e-Newsletters to write and disseminate, workforce development programs to oversee and coordinate. Getting the WIG data, putting it in a chart or other interpretable and actionable format, and leading staff discussions around what it means and how we should adjust our behavior, isn't at the top of anyone's to-do list. Frequently, not even mine.
2. It's not clear that the WIGs really matter.
Success metrics are a relatively new factor in our association, and it's not clear, even to me, that all of them really matter when it comes to the outcomes we're trying to achieve. There are 22 of them, and some have multiple or multi-faceted goals associated with them. And some are admittedly experiments, activity-based surrogates for the true outcomes we seek. I firmly believe that, especially in our sometimes uncertain environment, picking a metric and tracking it will not only help us determine if it is the right metric, but will also build needed metric-tracking competencies that our organization might not otherwise develop. But a staff member who responds in kind to that directive quickly determines which metrics are more valuable than others, and those proven less valuable naturally begin receiving less attention.
Partly in response to that dynamic, and partly to give the best WIGs more focus in our whirlwind of activity, I decided mid-year to attach financial bonuses to some of the metrics. Ten metrics made this cut, and the message was that if the goals associated with them were achieved, the entire staff would receive a designated bonus at year-end. The proposal was met with enthusiasm when first rolled out, but as time wore on and some of the goals fell by the wayside unachieved, the remainder were sucked back into the whirlwind. A certain kind of fatalism appears to have taken over, and more than one staff person has told me that they don't believe the metrics are actually things they have the ability to affect.
3. I've been leading the effort from behind.
I've talked about 4DX and its principles with my two senior staff people. I gave them a copy of the book to read, and we discussed how to try and apply its principles in the organization. They participated actively in the debate and discussion over choosing our success metrics as our WIGs, and, when we moved forward, we were all in agreement on that front.
But I haven't talked about 4DX with the rest of the staff, the very people we're now asking to take leadership roles in WIG meetings and in the tracking and reporting of their lead and lag measures. 4DX, WIGs, lead and lag measures--these are not terms that we have used or discussed openly in the office, choosing instead to frame the initiative in our own vernacular. I thought this was preferable, especially since some of our own strategic practices and elements have been changing in the last few years. I didn't want to introduce yet another lexicon into that confusion. I was also, admittedly, concerned that other intelligent people would reasonably draw conclusions different from the ones I drew from the experience of reading 4DX. Nor did I want everything in it to be seen as a mandate from the boss. I consciously wanted to experiment with only pieces of it.
And in addition to all of that, I haven't been pushing hard. I'm the leader for a handful of the WIGs myself, and I do my best to lead by example in that role, scrupulously tracking the data, bringing it into the appropriate WIG meetings, and leading the resulting discussions about extra actions to take if and when we start falling behind on any of the lead measures. I've encouraged others with the same responsibilities to do the same, but I haven't mandated it. I want to see what they do with it, to see the value that they themselves can bring to the process. I'm not interested in seeing them "jump to" just because I told them to do something. If the experiment is going to work, I initially thought, the habits and mechanisms of the process have to be embraced and embedded in our organizational culture.
There's probably more going on in the organization than that, but as I pause for a few moments of reflection, these are the three factors that seem to have most complicated our experiment with 4DX.
+ + +
This post first appeared on the blog of Eric Lanke, an association executive and author. You can follow him on Twitter @ericlanke or contact him at email@example.com.