This year I tried something different. With a nod towards a colleague who does something similar with his board, I decided to report the annual progress of our programs in an "innovation matrix".
We'd already gotten into the habit of using green, yellow, and red lights to indicate relative success or difficulty in meeting our performance metrics for the year. What was new was that in addition to sorting our programs vertically by strategic priority (in our terminology: Workforce, Technology, and Inclusiveness), I also sorted them horizontally by innovation factor.
On the left we have our Core programs. These are our bread and butter, the stuff we’ve been doing for some time and for which we have robust resources and infrastructure to help us execute. On the right are our Experimental programs. These are exactly that—experiments that we launched this year with little or no track record of success to support them. And in the middle are our Developing programs, successful experiments from the past that are beginning to migrate toward the core.
Looking at our performance this way allowed us to draw the following conclusions:
1. Generally speaking, we advanced our objectives in all three of our areas of strategic priority.
2. Most of our core and developing programs were highly successful, with a few weak spots related to our ability to reach out and connect beyond our traditional membership. This was most apparent with some of the educational audiences to which we must build better bridges if our long-term Workforce agenda is to succeed.
3. Finally, we were very innovative. We tried a lot of new ideas this year, and although some didn’t gain the traction they needed to succeed, a high percentage of them added value, and should be considered for stronger development next year.
These observations would have easily been lost in the noise if I hadn't organized things this way.
Like everything else you bring to an association board for the first time, it was a bit of a risk. But it turned out to be a risk worth taking. Their response was overwhelmingly positive. Turns out many of them run their companies the same way--trying many new ideas, monitoring them to see which return value, and then investing more resources into those that do. The failed experiments fall off the chart. The successful ones are developed and begin moving toward the core. And every new year a bunch of new experiments are introduced.
It's something we've been doing for a while. But the matrix gave us a way to see it in action and, perhaps more importantly, a tool for discussing the intentionality behind it at the board table.
The red lights always scare people. By themselves, they represent failures. They indicate that we failed to meet the identified metric of success for some of our programs--and some people don't like to admit that. But in the context of our innovation matrix, the red lights also become part of our success. After all, innovation doesn't happen without them.
+ + +
This post was written by Eric Lanke, an association executive, blogger and author. For more information, visit www.ericlanke.blogspot.com, follow him on Twitter @ericlanke or contact him at email@example.com.