I frequently work with groups who are trying to answer a simple question: “What should we prioritise?” They are dealing with an uncomfortable reality: they have resources, but not enough to do everything.
Sometimes, they have conflicting data about which option would yield the most benefit; other times, they have no data at all, only opinions. Sometimes, it seems easy to promote an idea that already has some traction, or support from others, while other times, it’s more tempting to attempt a disruptive innovation that is opposed by many.
I’ve met these dilemmas hundreds of times over a decade of work with groups who’ve tried to establish everything from research priorities to population health priorities, from investment priorities to which projects to champion and support. So, what do the really good prioritisers do?
They use six criteria:
1. Evidence: the replicable basis for doing this, including cost-benefit where this is available (arguing benefit of action, against costs of inaction)
2. Impact: the effect upon beneficiaries
3. Reach: the numbers of people who will benefit
4. Effort invested: the degree to which this builds on past / existing partnerships / structures / achievements
5. Readiness: immediate willingness of essential (i.e., powerful) participants to contribute / make required changes
6. Cost: ability to fund the initiative, or move towards the required incentive structure
If I were asked to weight these, I’d offer the following:
– Effort invested and readiness = 35% (The principle here is “Work with what you’ve got right now”, recognising that the main sub-factor is influential power-brokers supporting the option)
– Evidence = 30% (based on the validity and consensus of the evidence)
– Impact and Reach = 20% (I usually combine these)
– Cost = 15% (if this is achievable within current resource settings, that’s a huge lift-off factor)
This gives a maximum possible score of 100. Often a simple crowd-sourced rating of each criterion against each proposal is sufficient to draw out a pattern. You can then use this pattern to reach consensus.
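The weighting scheme above can be sketched as a small scoring routine. This is a minimal illustration, not the author’s actual tool: the project names, the 0–1 ratings, and the criterion keys are all hypothetical; only the weights (35/30/20/15) come from the article.

```python
# Criterion weights from the article (percentages summing to 100).
WEIGHTS = {
    "effort_and_readiness": 35,
    "evidence": 30,
    "impact_and_reach": 20,
    "cost": 15,
}

def weighted_score(ratings):
    """Combine 0-1 ratings per criterion into a single score out of 100."""
    return sum(WEIGHTS[criterion] * rating
               for criterion, rating in ratings.items())

# Hypothetical crowd-sourced ratings for two illustrative proposals.
proposals = {
    "Project A": {"effort_and_readiness": 0.8, "evidence": 0.6,
                  "impact_and_reach": 0.9, "cost": 0.5},
    "Project B": {"effort_and_readiness": 0.4, "evidence": 0.9,
                  "impact_and_reach": 0.7, "cost": 0.8},
}

# Print proposals from highest to lowest weighted score.
for name, ratings in sorted(proposals.items(),
                            key=lambda kv: -weighted_score(kv[1])):
    print(f"{name}: {weighted_score(ratings):.1f} / 100")
```

Note how the 35% weight on effort and readiness lets a well-supported proposal (Project A) edge out one with stronger evidence (Project B), which is exactly the “work with what you’ve got right now” principle.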
How does this work in real life?
I asked local government elected representatives (Councillors) to work out which two or three of some 10–12 large-scale capital projects (each costing between $5m and $30m) ought to be prioritised and how they should be sequenced. Each project was presented briefly by the executive most knowledgeable about it (on the criteria, of course) and the assumptions were challenged. Next, each Councillor did TWO things: scored each project against each criterion and also ranked each project. The ranking served as a cross-check for the scorings (in my experience, they mostly align, but are a good source of diagnostic questions when they don’t!).
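The cross-check described above can also be sketched mechanically: derive a ranking from each Councillor’s total scores, compare it with their direct ranking, and surface any disagreements as diagnostic questions. Everything here is an illustrative assumption, not the actual workshop data: project names, scores, and direct rankings are invented.

```python
def rank_from_scores(scores):
    """Rank projects by total score, highest first (1 = top rank)."""
    ordered = sorted(scores, key=scores.get, reverse=True)
    return {project: position + 1 for position, project in enumerate(ordered)}

# One hypothetical Councillor's totals (out of 100) and direct ranking.
councillor_scores = {"Pool": 82, "Library": 74, "Depot": 61}
councillor_ranking = {"Pool": 1, "Depot": 2, "Library": 3}

# Flag every project where the score-implied rank and the direct rank differ.
implied = rank_from_scores(councillor_scores)
for project in councillor_scores:
    if implied[project] != councillor_ranking[project]:
        print(f"Diagnostic question: {project} scores imply rank "
              f"{implied[project]}, but it was ranked "
              f"{councillor_ranking[project]} directly.")
```

Here the Library and Depot would trigger diagnostic questions: the Councillor’s scores favour the Library, but their gut ranking favours the Depot, which is precisely the kind of mismatch worth exploring in the room.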
In this case, the client was delighted because they’d reached an impasse with this group, who refused to move forward with ANY proposal. Instead, they had analysis paralysis (commissioning more reports and business cases), delving ever deeper into details and arguing over fine points of each other’s ‘pet projects’. In less than half a day we had a significant breakthrough, which meant that this client could rapidly move forward and start executing.