Chapter 6: Prioritization and Scoring

Learning Objectives

After completing this chapter, you will be able to:

  • Apply a structured scoring framework to IT investments
  • Calculate business value, risk, and cost scores using weighted criteria
  • Determine priority rankings and funding decisions
  • Facilitate objective prioritization discussions with stakeholders
  • Implement governance structures to maintain scoring consistency
  • Avoid common prioritization pitfalls and cognitive biases

Introduction: The Challenge of Objective Prioritization

In every organization, the demand for IT investments far exceeds the available capacity to deliver them. Business units propose ambitious digital transformation initiatives. Technology teams identify critical infrastructure upgrades. Security organizations demand compliance projects. Customer experience teams advocate for new channels and capabilities. Each stakeholder arrives convinced their proposal deserves immediate funding and top priority.

Without a structured approach to prioritization, portfolio decisions inevitably become political exercises rather than strategic choices. The loudest voice wins. The most influential sponsor secures funding. The most recent crisis drives immediate action. Pet projects advance while strategically critical initiatives languish. Decisions become inconsistent, difficult to justify, and impossible to defend when stakeholders challenge the rationale.

This dysfunction carries severe consequences. Organizations fund the wrong initiatives, wasting precious resources on low-value work while deferring strategic priorities. Teams become cynical about the investment process, believing outcomes are predetermined by politics rather than merit. Business sponsors learn to game the system, inflating benefits estimates and downplaying risks to secure approval. Trust in portfolio governance erodes, and the organization loses the ability to make effective investment decisions.

The solution lies in objective, transparent prioritization anchored in a structured scoring framework. By evaluating all investment proposals against consistent criteria, calculating quantitative priority scores, and making funding decisions based on data rather than influence, organizations can restore integrity to the portfolio process. Stakeholders may not always agree with the outcomes, but they can understand the rationale and trust the process.

This chapter presents a comprehensive scoring framework that evaluates IT investments across three critical dimensions: business value, risk, and cost. The framework provides detailed scoring guidance for each dimension, enables calculation of weighted priority scores, and establishes governance structures to maintain consistency over time. Organizations that implement disciplined scoring processes make better investment decisions, achieve stronger portfolio outcomes, and build stakeholder confidence in portfolio management.


The Scoring Framework: Three Dimensions of Evaluation

Effective prioritization requires evaluating investment proposals across multiple dimensions to capture their full impact on the organization. A single-dimension approach proves inadequate. Scoring only on business value ignores risk and cost considerations. Evaluating only cost misses strategic importance. Assessing only risk overlooks the potential for breakthrough innovation.

The prioritization framework presented in this chapter employs three complementary dimensions that together provide a comprehensive evaluation of each investment:

Business Value measures the strategic and financial return an investment delivers to the organization. This dimension captures strategic alignment, financial impact, customer benefits, operational improvements, and competitive positioning. High business value indicates an investment that advances organizational objectives and generates significant returns.

Risk assesses the likelihood an investment will encounter implementation challenges, delivery problems, or benefit realization failures. This dimension evaluates technical complexity, delivery feasibility, business adoption concerns, organizational change impact, and external dependencies. High risk suggests an investment faces significant execution uncertainty.

Cost quantifies the total investment required, including capital expenditure, operating expenses, internal resource consumption, and total cost of ownership. This dimension ensures the organization considers the full financial and resource commitment. High cost indicates a substantial investment that constrains capacity for other initiatives.

These three dimensions receive differential weighting to reflect their relative importance in portfolio decisions:

Dimension Weight Rationale
Business Value 40% Primary driver of investment decisions
Risk 30% Critical factor in implementation success
Cost 30% Essential constraint on portfolio capacity

Business value receives the highest weight because the fundamental purpose of any investment is to deliver value to the organization. However, risk and cost together account for 60% of the priority score, ensuring these factors significantly influence decisions. An investment with exceptional business value but unacceptable risk or cost will not achieve top priority.

Each dimension comprises multiple sub-dimensions that capture different aspects of value, risk, and cost. This multi-level structure enables nuanced evaluation while maintaining manageable scoring complexity. The following sections detail the scoring guidance for each dimension.


Business Value Score: Measuring Strategic Return

Business value represents the most important dimension in prioritization because it determines what the organization gains from an investment. This dimension evaluates five distinct aspects of value, each weighted to reflect its strategic importance:

Sub-dimension Weight Description
Strategic Alignment 25% Direct alignment to strategic objectives
Financial Impact 25% Revenue generation, cost savings, efficiency gains
Customer Impact 20% Customer satisfaction and experience improvements
Operational Impact 15% Process improvement and productivity enhancement
Competitive Impact 15% Market position and competitive differentiation

Strategic Alignment

Strategic alignment measures how directly an investment supports the organization’s documented strategic objectives. Investments that enable strategic priorities deliver value by advancing the most important organizational goals. Those with weak strategic connection may deliver local benefits but fail to contribute to enterprise success.

Scoring strategic alignment requires understanding the organization’s current strategic plan and identifying clear linkages between the investment and specific strategic objectives. The scoring guidance distinguishes six levels of alignment:

Score Description Indicators
5 Directly enables strategic objective Named explicitly in strategic plan or executive priorities; enables key strategic outcome
4 Strongly supports strategic priority Clear, direct linkage to major strategic goal; explicitly requested by executive leadership
3 Indirectly supports strategy Enables other strategic work; supports execution of strategic initiatives
2 Neutral strategic impact Operational necessity but no direct strategic contribution; maintains baseline capability
1 Limited strategic relevance Nice-to-have improvement with weak strategic connection
0 No strategic alignment No identifiable business justification; purely technical or personal preference

A score of 5 indicates the investment directly enables achievement of a strategic objective. For example, if the strategic plan identifies “expand into Asian markets” as a priority, an investment in regional e-commerce capabilities enabling that expansion would score 5. The investment is named in the strategic plan or is explicitly required to achieve a documented strategic goal.

A score of 4 reflects strong strategic support without being explicitly named. The investment clearly advances a strategic priority and executive sponsors have requested it, but it represents one of several initiatives supporting the strategy rather than the sole enabling investment.

A score of 3 indicates indirect strategic support. The investment enables other strategic work or provides capabilities that strategic initiatives will leverage, but it does not directly deliver strategic outcomes itself. Infrastructure investments often score 3 because they create platforms for strategic initiatives.

A score of 2 represents operational necessity without strategic contribution. The investment maintains baseline capabilities, meets regulatory requirements, or addresses technical debt. These investments are necessary but do not advance strategic objectives.

Scores of 1 or 0 indicate weak or absent strategic justification. These investments may deliver local benefits or satisfy specific stakeholder preferences, but they do not contribute meaningfully to organizational strategy.

Financial Impact

Financial impact measures the quantified financial return from an investment, including revenue increases, cost reductions, and efficiency gains that translate to bottom-line value. This sub-dimension requires rigorous financial analysis and benefit quantification rather than qualitative assertions.

Score Description Annual Financial Impact
5 Very high financial impact Greater than $5M annual benefit
4 High financial impact $1M - $5M annual benefit
3 Moderate financial impact $500K - $1M annual benefit
2 Low financial impact $100K - $500K annual benefit
1 Minimal financial impact Less than $100K annual benefit
0 No financial impact No measurable financial benefit

These thresholds should be adjusted based on organizational size and industry. A $5M impact represents “very high” for a mid-size organization but might be “moderate” for a Fortune 500 enterprise. Organizations should calibrate thresholds to reflect more than 5%, 1-5%, 0.5-1%, 0.1-0.5%, and less than 0.1% of annual revenue for scores of 5, 4, 3, 2, and 1 respectively.
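One way to sketch this revenue-relative calibration in Python. The breakpoints follow the percentage-of-revenue ratios suggested above; the function name and boundary handling are illustrative assumptions, not part of the framework itself:

```python
def financial_impact_score(annual_benefit: float, annual_revenue: float) -> int:
    """Score financial impact 0-5 using revenue-relative thresholds.

    Calibration (assumed from the chapter's guidance): more than 5% of
    revenue -> 5, 1-5% -> 4, 0.5-1% -> 3, 0.1-0.5% -> 2, under 0.1% -> 1,
    and no measurable benefit -> 0.
    """
    if annual_benefit <= 0:
        return 0
    ratio = annual_benefit / annual_revenue
    if ratio > 0.05:
        return 5
    if ratio >= 0.01:
        return 4
    if ratio >= 0.005:
        return 3
    if ratio >= 0.001:
        return 2
    return 1

# A $750K annual benefit at a $100M-revenue organization scores 3 ("moderate"),
# matching the dollar table above.
print(financial_impact_score(750_000, 100_000_000))  # -> 3
```

For a larger enterprise, only the `annual_revenue` argument changes; the score boundaries scale automatically.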

Financial impact must be calculated based on quantified, defendable benefits over a defined time period. Acceptable benefit types include:

  • Direct revenue increases from new products, services, or markets
  • Cost reductions from efficiency improvements or eliminated activities
  • Cost avoidance from preventing expected cost increases
  • Productivity gains translated to capacity for additional work
  • Risk mitigation value from reducing expected losses

Avoid accepting vague assertions like “improve efficiency” without quantification. Require sponsors to specify exact cost reductions, revenue increases, or productivity improvements with supporting calculations. Benefits should be annual recurring impact in steady-state operation, not one-time gains.

Customer Impact

Customer impact evaluates how an investment affects customer satisfaction, experience, convenience, or service quality. In an era when customer experience drives competitive advantage, investments that improve customer interactions deliver strategic value even when financial benefits prove difficult to quantify.

Score Description Examples
5 Major customer experience transformation New channel launch, dramatic satisfaction improvement, fundamental service redesign
4 Significant customer benefit Improved service quality, reduced wait times, enhanced convenience
3 Moderate customer benefit Minor experience improvements, incremental enhancements
2 Limited indirect customer impact Internal focus with eventual customer benefit
1 No direct customer impact Pure internal initiative with no customer touchpoint
0 Potential negative customer impact Risk of service degradation or customer dissatisfaction

A score of 5 indicates the investment fundamentally transforms how customers interact with the organization. Examples include launching a new digital channel, implementing omnichannel capabilities, or redesigning core customer journeys. These investments typically correlate with measurable customer satisfaction increases of 10+ Net Promoter Score points.

A score of 4 reflects significant customer benefits through improved service delivery. Examples include reducing customer wait times, enhancing self-service capabilities, or personalizing customer interactions. Customers notice and value these improvements.

A score of 3 represents moderate, incremental improvements. Customers may not dramatically change their perception of the organization, but they experience minor enhancements in convenience, speed, or service quality.

Scores of 2 or lower indicate primarily internal investments with limited or no customer impact. These investments may improve internal processes or capabilities, but customers do not directly experience the benefits.

A score of 0 suggests potential customer harm. This might occur when an investment requires customer-facing changes that degrade existing service levels or when internal changes create customer-visible disruption.

Operational Impact

Operational impact measures improvements to internal processes, productivity, and efficiency. These investments reduce manual work, eliminate process bottlenecks, improve decision-making, or enable staff to focus on higher-value activities.

Score Description FTE Impact
5 Significant efficiency transformation Greater than 10 FTE capacity created
4 Major process improvement 5-10 FTE capacity created
3 Moderate efficiency improvement 2-5 FTE capacity created
2 Minor efficiency gains Less than 2 FTE capacity created
1 No operational impact Maintains status quo
0 Operational burden increase Adds work, reduces efficiency

Operational impact should be quantified in terms of Full-Time Equivalent (FTE) capacity created or time saved. Process automation that eliminates 10,000 hours of annual manual work creates approximately 5 FTE of capacity. This capacity enables the organization to absorb additional work without hiring or to redeploy staff to higher-value activities.
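The 10,000-hours-to-5-FTE conversion above implies roughly 2,000 working hours per FTE per year. A minimal sketch of that conversion, with the hours-per-FTE figure stated as an explicit assumption:

```python
HOURS_PER_FTE_YEAR = 2_000  # assumption implied by the chapter's 10,000 hours = 5 FTE example

def fte_capacity(annual_hours_saved: float) -> float:
    """Convert annual hours of eliminated manual work into FTE capacity created."""
    return annual_hours_saved / HOURS_PER_FTE_YEAR

print(fte_capacity(10_000))  # -> 5.0
```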

Scoring requires identifying specific processes being improved and quantifying current effort. Avoid accepting vague claims like “improve efficiency.” Require sponsors to specify which processes improve, current time consumption, and expected time savings.

A score of 0 indicates the investment actually increases operational burden. This might occur when new capabilities require additional processes, controls, or oversight that consume more effort than they save.

Competitive Impact

Competitive impact assesses how an investment affects the organization’s market position, competitive advantage, or differentiation. This sub-dimension proves particularly important in competitive industries where technology enables strategic positioning.

Score Description Indicators
5 Major competitive advantage Market leadership position, unique differentiation, significant competitive separation
4 Significant competitive differentiation Notable differentiation from competitors, enhanced market position
3 Competitive parity Keeps pace with market, matches competitor capabilities
2 Minimal competitive relevance Limited market visibility, weak competitive impact
1 No competitive impact Internal focus, no market-facing competitive implications
0 Potential competitive disadvantage May create competitive vulnerability

A score of 5 indicates the investment creates substantial competitive advantage. This might include unique capabilities competitors cannot easily replicate, significant first-mover advantage, or technology that enables new business models. These investments fundamentally strengthen market position.

A score of 4 reflects notable differentiation without creating insurmountable competitive barriers. Competitors could potentially match the capability, but the investment provides meaningful near-term advantage.

A score of 3 represents competitive parity investments that prevent the organization from falling behind. Many digital transformation initiatives score 3 because they implement capabilities competitors already possess. While not differentiating, they are necessary to remain competitive.

Scores of 2 or lower indicate limited competitive relevance. These investments may deliver value but do not significantly affect competitive dynamics.

Calculating Business Value Score

The business value score combines these five sub-dimensions using weighted averaging:

Business Value = (Strategic Alignment × 0.25) +
                (Financial Impact × 0.25) +
                (Customer Impact × 0.20) +
                (Operational Impact × 0.15) +
                (Competitive Impact × 0.15)

For example, an investment might score:

  • Strategic Alignment: 4 (strongly supports strategic priority)
  • Financial Impact: 3 ($750K annual benefit)
  • Customer Impact: 5 (new digital channel)
  • Operational Impact: 2 (minor automation)
  • Competitive Impact: 4 (significant differentiation)

Business Value = (4 × 0.25) + (3 × 0.25) + (5 × 0.20) + (2 × 0.15) + (4 × 0.15)
Business Value = 1.00 + 0.75 + 1.00 + 0.30 + 0.60 = 3.65

This investment delivers strong overall business value with particularly high scores in strategic alignment, customer impact, and competitive impact.
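The weighted average above can be sketched in Python. The weights come from the business value table; the function and dictionary key names are illustrative:

```python
# Weights from the business value sub-dimension table (sum to 1.0).
BUSINESS_VALUE_WEIGHTS = {
    "strategic_alignment": 0.25,
    "financial_impact": 0.25,
    "customer_impact": 0.20,
    "operational_impact": 0.15,
    "competitive_impact": 0.15,
}

def business_value_score(scores: dict) -> float:
    """Weighted average of the five 0-5 sub-dimension scores."""
    return round(sum(scores[k] * w for k, w in BUSINESS_VALUE_WEIGHTS.items()), 2)

# The worked example: strong alignment, $750K benefit, new digital channel,
# minor automation, significant differentiation.
example = {
    "strategic_alignment": 4,
    "financial_impact": 3,
    "customer_impact": 5,
    "operational_impact": 2,
    "competitive_impact": 4,
}
print(business_value_score(example))  # -> 3.65
```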


Risk Score: Evaluating Implementation Uncertainty

Risk represents the second critical dimension in prioritization. While business value measures potential return, risk assesses the likelihood of actually achieving that return. High-risk investments face significant execution uncertainty and may fail to deliver expected benefits despite consuming substantial resources.

The risk dimension evaluates five aspects of implementation and delivery risk:

Sub-dimension Weight Description
Technical Risk 25% Technology complexity, architecture challenges, integration difficulty
Delivery Risk 25% Schedule achievability, resource availability, vendor dependencies
Business Risk 20% Benefit realization uncertainty, user adoption concerns
Organizational Risk 15% Change management complexity, organizational readiness
External Risk 15% Regulatory factors, market conditions, external dependencies

Higher risk scores indicate greater uncertainty and higher likelihood of problems. Organizations typically prefer lower-risk investments that deliver more predictable outcomes.

Technical Risk

Technical risk evaluates the complexity of the technology solution, integration challenges, and technical feasibility. Investments requiring new, unproven technologies or complex integration across many systems carry higher technical risk than those using established platforms and straightforward implementations.

Score Description Indicators
5 Very high technical risk Bleeding-edge technology, unproven in production; complex multi-system integration; significant technical unknowns
4 High technical risk New technology adoption, substantial complexity, limited organizational experience
3 Moderate technical risk Known technology but significant integration work; moderate complexity
2 Low technical risk Proven technology approach, limited integration needs, standard implementation
1 Very low technical risk Mature technology, simple implementation, minimal integration
0 No technical risk Trivial technical implementation, well-established approach

A score of 5 indicates substantial technical uncertainty. Examples include implementing emerging technologies without production track records, complex integration across many legacy systems with poor documentation, or solutions requiring significant custom development in unfamiliar technology stacks.

A score of 4 reflects high complexity or new technology adoption. The organization may lack deep experience with the technology, or the solution requires substantial integration work across multiple platforms.

A score of 3 represents moderate technical risk typical of standard enterprise implementations. The technology is proven, but the implementation involves meaningful complexity.

Scores of 2 or lower indicate straightforward technical implementations using proven technologies and standard approaches. Technical risk is minimal and predictable.

Delivery Risk

Delivery risk assesses the achievability of the schedule, availability of required resources, vendor reliability, and project execution complexity. Aggressive timelines, resource constraints, or dependence on unproven vendors increase delivery risk.

Score Description Indicators
5 Very high delivery risk Extremely aggressive schedule, resource unavailability, unproven vendor, complex dependencies
4 High delivery risk Tight timeline, resource constraints, vendor concerns, significant dependencies
3 Moderate delivery risk Realistic schedule with some buffer, manageable resource needs, known vendors
2 Low delivery risk Comfortable schedule, resources secured, experienced delivery partners
1 Very low delivery risk Conservative timeline, dedicated resources, proven delivery team
0 Minimal delivery risk Routine implementation, ample resources and time

A score of 5 indicates serious delivery concerns. The schedule may be unrealistic given the scope, required resources may not be available, or the organization depends on vendors with questionable delivery track records.

A score of 4 reflects meaningful delivery challenges without being catastrophic. The timeline is tight but potentially achievable, resources are constrained but can likely be secured, or vendors are unproven but credible.

A score of 3 represents typical delivery risk for enterprise initiatives. The schedule includes some buffer, resources are identified, and vendors have relevant experience.

Scores of 2 or lower indicate high confidence in delivery. The schedule is conservative, resources are secured, and the delivery team has successfully completed similar implementations.

Business Risk

Business risk evaluates uncertainty in benefit realization and user adoption. Even technically successful implementations can fail to deliver expected value if users do not adopt the solution or if business benefit assumptions prove incorrect.

Score Description Indicators
5 Very high business risk Highly uncertain benefits, expected user resistance, unproven business case
4 High business risk Benefit assumptions uncertain, adoption concerns, significant change required
3 Moderate business risk Some benefit uncertainty, manageable adoption challenges
2 Low business risk Well-defined benefits, positive user sentiment, proven benefits model
1 Very low business risk Certain benefits, eager users, validated business case
0 Minimal business risk Guaranteed benefits, mandatory adoption

A score of 5 indicates substantial business risk. The business case rests on questionable assumptions, users have expressed resistance to the change, or similar initiatives have failed in the past.

A score of 4 reflects significant business uncertainty without being completely speculative. Benefits follow reasonable assumptions but lack validation, or users are neutral rather than enthusiastic.

A score of 3 represents typical business risk for new capabilities. Benefits are logically derived but not proven, and adoption will require change management but is achievable.

Scores of 2 or lower indicate high confidence in benefit realization. Benefits are well-validated through pilots or benchmarks, and users actively desire the new capability.

Organizational Risk

Organizational risk assesses the complexity of organizational change, capability gaps, and cultural readiness. Investments requiring significant role changes, new skills, or culture shifts carry higher organizational risk than those fitting existing organizational patterns.

Score Description Indicators
5 Very high organizational risk Fundamental organizational change, major capability gaps, culture resistance
4 High organizational risk Significant role changes, substantial training needs, change management complexity
3 Moderate organizational risk Manageable change, moderate training requirements, some organizational adaptation
2 Low organizational risk Minor role adjustments, familiar capabilities, limited change impact
1 Very low organizational risk Minimal change, existing capabilities sufficient
0 No organizational risk No organizational impact

A score of 5 indicates the investment requires fundamental organizational transformation. Job roles change significantly, employees need to develop entirely new capabilities, or the change conflicts with established culture.

A score of 4 reflects substantial organizational challenge. Roles evolve meaningfully, significant training is required, or the change demands organizational adaptation.

A score of 3 represents manageable organizational change typical of process improvements. Some training is needed and roles adjust, but the change fits within normal organizational evolution.

Scores of 2 or lower indicate minimal organizational disruption. Employees can perform new processes with limited training, and the change aligns with existing capabilities.

External Risk

External risk evaluates factors outside organizational control, including regulatory uncertainty, market volatility, external dependencies, and economic conditions. High external risk exists when investment success depends on external factors the organization cannot influence.

Score Description Indicators
5 Very high external risk Major regulatory uncertainty, vendor viability concerns, critical external dependencies
4 High external risk Regulatory changes possible, market volatility, significant external dependencies
3 Moderate external risk Some external factors, manageable dependencies
2 Low external risk Limited external dependencies, stable environment
1 Very low external risk Minimal external factors
0 No external risk Fully within organizational control

A score of 5 indicates substantial external uncertainty. Regulatory frameworks may change during implementation, critical vendors face viability questions, or success depends on market conditions beyond organizational influence.

A score of 4 reflects meaningful external factors without catastrophic exposure. Some regulatory evolution is possible, or success depends on external partners who are generally reliable but not guaranteed.

A score of 3 represents typical external risk for enterprise initiatives. Some external factors exist but can be managed through contingency planning.

Scores of 2 or lower indicate minimal external exposure. The investment is largely within organizational control with limited external dependencies.

Calculating Risk Score

The risk score combines these five sub-dimensions:

Risk Score = (Technical Risk × 0.25) +
            (Delivery Risk × 0.25) +
            (Business Risk × 0.20) +
            (Organizational Risk × 0.15) +
            (External Risk × 0.15)

For example:

  • Technical Risk: 3 (moderate complexity)
  • Delivery Risk: 2 (comfortable schedule)
  • Business Risk: 4 (uncertain adoption)
  • Organizational Risk: 3 (manageable change)
  • External Risk: 1 (minimal external factors)

Risk Score = (3 × 0.25) + (2 × 0.25) + (4 × 0.20) + (3 × 0.15) + (1 × 0.15)
Risk Score = 0.75 + 0.50 + 0.80 + 0.45 + 0.15 = 2.65

This investment faces moderate overall risk, with the highest concern being business adoption uncertainty.
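The risk calculation follows the same weighted-average pattern. A sketch with the weights from the risk sub-dimension table (key names are illustrative):

```python
# Weights from the risk sub-dimension table (sum to 1.0).
RISK_WEIGHTS = {
    "technical": 0.25,
    "delivery": 0.25,
    "business": 0.20,
    "organizational": 0.15,
    "external": 0.15,
}

def risk_score(scores: dict) -> float:
    """Weighted average of the five 0-5 risk sub-dimension scores."""
    return round(sum(scores[k] * w for k, w in RISK_WEIGHTS.items()), 2)

# The worked example: moderate complexity, comfortable schedule,
# uncertain adoption, manageable change, minimal external factors.
print(risk_score({"technical": 3, "delivery": 2, "business": 4,
                  "organizational": 3, "external": 1}))  # -> 2.65
```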


Cost Score: Quantifying Investment Requirements

Cost represents the third dimension, quantifying the total investment required across multiple cost categories. Understanding full cost is essential for capacity planning and ensures the organization considers all resource implications.

The cost dimension evaluates four aspects of investment cost:

Sub-dimension Weight Description
Capital Cost 30% Initial capital expenditure and implementation cost
Operating Cost 25% Recurring annual operational expenses
Resource Cost 25% Internal resource and staff time required
Total Cost of Ownership 20% Comprehensive 3-5 year cost including all categories

Higher cost scores indicate larger investments that consume more portfolio capacity. Organizations generally prefer lower-cost investments that deliver strong value without excessive resource consumption.

Capital Cost

Capital cost measures the initial investment required for implementation, including software licenses, hardware infrastructure, implementation services, and development effort. This represents the primary upfront expenditure.

Score Description Investment Range
5 Very high capital cost Greater than $5M
4 High capital cost $2M - $5M
3 Moderate capital cost $500K - $2M
2 Low capital cost $100K - $500K
1 Very low capital cost Less than $100K
0 No capital cost No capital expenditure, or investment funded separately

These thresholds should be calibrated to organizational size. A $5M investment is “very high” for a mid-size organization but might be “moderate” for a large enterprise. Adjust thresholds to reflect more than 5%, 2-5%, 0.5-2%, 0.1-0.5%, and less than 0.1% of annual IT budget for scores of 5, 4, 3, 2, and 1 respectively.

Capital cost should include all implementation-related expenditures:

  • Software licenses and subscriptions (for multi-year commitments)
  • Hardware and infrastructure
  • Implementation services from vendors and consultants
  • Internal development and configuration effort
  • Testing, training, and change management
  • Contingency reserves

Operating Cost

Operating cost measures recurring annual expenses after implementation, including software subscriptions, infrastructure hosting, maintenance fees, and support costs. Operating costs consume ongoing budget capacity and accumulate significantly over multi-year horizons.

Score Description Annual Cost
5 Very high operating cost Greater than $1M annually
4 High operating cost $500K - $1M annually
3 Moderate operating cost $200K - $500K annually
2 Low operating cost $50K - $200K annually
1 Very low operating cost Less than $50K annually
0 No operating cost No recurring costs

Include all recurring annual expenses:

  • SaaS subscriptions and license renewals
  • Cloud infrastructure and hosting
  • Vendor maintenance and support fees
  • Third-party services and integrations
  • Ongoing development and enhancement

Operating costs often exceed capital costs over multi-year periods. A solution with $500K implementation cost and $300K annual operating cost represents a $1.4M investment over three years.

Resource Cost

Resource cost quantifies internal staff time required for implementation and ongoing operation. This includes business analysts, architects, developers, testers, and support staff from IT and business units.

Score Description Internal FTE
5 Very high resource needs Greater than 20 FTE-months
4 High resource needs 10-20 FTE-months
3 Moderate resource needs 5-10 FTE-months
2 Low resource needs 2-5 FTE-months
1 Very low resource needs Less than 2 FTE-months
0 No internal resources Fully managed by vendors

FTE-months represent total effort from all internal staff. An initiative requiring five people working for three months consumes 15 FTE-months. This metric captures the resource capacity consumed rather than calendar duration.

Resource cost is often the most constrained factor in portfolio capacity. Organizations may have budget for external costs but lack internal staff to oversee implementation, perform testing, support adoption, and provide ongoing operation.

Total Cost of Ownership

Total cost of ownership (TCO) evaluates comprehensive multi-year cost including capital, operating, and resource costs over a 3-5 year horizon. TCO provides the most accurate cost assessment by capturing all expenditures and the cumulative impact of recurring costs.

Score Description 3-Year TCO
5 Very high TCO Greater than $10M
4 High TCO $5M - $10M
3 Moderate TCO $2M - $5M
2 Low TCO $500K - $2M
1 Very low TCO Less than $500K
0 No meaningful TCO Minimal total cost

Calculate TCO by summing all costs over three years:

TCO = Capital Cost + (Operating Cost × 3) + (Resource Cost × Internal Rate)

For example, an investment with $1M capital cost, $300K annual operating cost, and 15 FTE-months at $15K per FTE-month:

TCO = $1M + ($300K × 3) + (15 × $15K)
TCO = $1M + $900K + $225K = $2.125M

This investment falls in the “moderate TCO” range (score of 3).
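The TCO arithmetic above can be sketched as a small function. This is a minimal illustration of the chapter's formula; the $15K-per-FTE-month internal rate and three-year horizon are the chapter's example figures, not universal constants:

```python
def total_cost_of_ownership(capital, annual_operating, fte_months,
                            rate_per_fte_month=15_000, years=3):
    """Sum capital, recurring operating, and internal resource costs
    over the planning horizon (three years by default)."""
    return capital + annual_operating * years + fte_months * rate_per_fte_month

# Worked example from the text: $1M capital, $300K/yr operating, 15 FTE-months
print(total_cost_of_ownership(1_000_000, 300_000, 15))  # 2125000, i.e. $2.125M
```

The result falls in the $2M-$5M range, confirming the "moderate TCO" score of 3.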

Calculating Cost Score

The cost score combines these four sub-dimensions:

Cost Score = (Capital Cost × 0.30) +
            (Operating Cost × 0.25) +
            (Resource Cost × 0.25) +
            (TCO × 0.20)

For example:

  • Capital Cost: 3 ($1M)
  • Operating Cost: 3 ($300K annually)
  • Resource Cost: 4 (15 FTE-months)
  • TCO: 3 ($2.1M over 3 years)

Cost Score = (3 × 0.30) + (3 × 0.25) + (4 × 0.25) + (3 × 0.20)
Cost Score = 0.90 + 0.75 + 1.00 + 0.60 = 3.25

This investment requires moderate to high cost across all dimensions.
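The weighted blend above can be expressed directly in code, as a minimal sketch using the chapter's sub-dimension weights:

```python
def cost_score(capital, operating, resource, tco):
    """Weighted blend of the four cost sub-dimension scores (each 0-5),
    using the chapter's weights: 30/25/25/20."""
    return capital * 0.30 + operating * 0.25 + resource * 0.25 + tco * 0.20

# Worked example from the text
print(round(cost_score(3, 3, 4, 3), 2))  # 3.25
```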


Priority Calculation: Determining Final Priority

The priority score combines business value, risk, and cost into a single numeric priority that enables rank-ordering all investments. The priority formula applies dimensional weights and inverts risk and cost scores:

Priority Score = (Business Value × 0.40) +
                ((5 - Risk Score) × 0.30) +
                ((5 - Cost Score) × 0.30)

Risk and cost are inverted (5 - score) because lower risk and lower cost are more favorable. An investment with risk score 2 (low risk) contributes more to priority than one with risk score 4 (high risk).

Worked Example

Consider an investment with the following dimensional scores:

  • Business Value: 3.65 (strong value from earlier example)
  • Risk: 2.65 (moderate risk from earlier example)
  • Cost: 3.25 (moderate-high cost from earlier example)

Calculate priority:

Priority = (3.65 × 0.40) + ((5 - 2.65) × 0.30) + ((5 - 3.25) × 0.30)
Priority = 1.46 + (2.35 × 0.30) + (1.75 × 0.30)
Priority = 1.46 + 0.705 + 0.525
Priority = 2.69

This investment achieves a priority score of 2.69, placing it in the "P3 - Medium" priority band (discussed below).

The priority formula balances value against risk and cost. An investment with exceptional business value (5.0) but very high risk (4.5) and very high cost (4.5) would score:

Priority = (5.0 × 0.40) + ((5 - 4.5) × 0.30) + ((5 - 4.5) × 0.30)
Priority = 2.00 + 0.15 + 0.15 = 2.30

Despite maximum business value, high risk and cost reduce priority to “P3 - Medium” range. This reflects the reality that risky, expensive investments face significant execution challenges even when they promise substantial value.
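Both calculations can be sketched as a single function. Note that exact arithmetic on the first worked example yields 2.69, while rounding each term to two decimals before summing yields 2.70; either way the investment lands in the same band:

```python
def priority_score(business_value, risk, cost):
    """Chapter's priority formula: value weighted 40%, with risk and cost
    inverted (5 - score) because lower risk and cost are more favorable."""
    return (business_value * 0.40
            + (5 - risk) * 0.30
            + (5 - cost) * 0.30)

print(round(priority_score(3.65, 2.65, 3.25), 2))  # 2.69
print(round(priority_score(5.0, 4.5, 4.5), 2))     # 2.3
```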


Priority Bands: Translating Scores to Decisions

Priority scores translate into priority bands that drive funding decisions and portfolio allocation. Five priority bands cover the full range from critical initiatives that must be funded immediately to proposals that should be rejected:

Band Score Range Action Funding Timing Portfolio %
P1 - Critical 4.0 - 5.0 Must do Immediate funding 10-15%
P2 - High 3.0 - 3.9 Should do Current cycle 25-35%
P3 - Medium 2.0 - 2.9 Could do Next cycle if capacity 30-40%
P4 - Low 1.0 - 1.9 Defer Future consideration 15-25%
P5 - Reject 0 - 0.9 Do not fund Never 0%

P1 - Critical initiatives score 4.0 or higher, indicating exceptional business value combined with acceptable risk and cost. These investments directly enable strategic objectives, deliver substantial financial or customer value, and present feasible execution plans. P1 initiatives should represent only 10-15% of the portfolio because few investments truly deserve “critical” status. Organizations that designate 40% of proposals as P1 have not applied sufficiently rigorous scoring.

P1 initiatives receive immediate funding regardless of cycle timing. When a P1 initiative emerges mid-cycle, the organization should fund it immediately, potentially deferring lower-priority work to free capacity.

P2 - High initiatives score 3.0 to 3.9, representing strong value with manageable risk and cost. These investments advance strategic priorities, deliver meaningful benefits, and present solid business cases. P2 initiatives should constitute 25-35% of the portfolio.

P2 initiatives receive funding in the current planning cycle if capacity exists. Organizations typically fund all P1 and P2 initiatives, then selectively fund P3 initiatives based on remaining capacity.

P3 - Medium initiatives score 2.0 to 2.9, indicating moderate value, moderate risk, or moderate cost. These investments deliver value but do not rise to strategic priority. They might include valuable operational improvements, compliance requirements, or infrastructure investments. P3 initiatives represent 30-40% of the portfolio.

P3 initiatives receive funding if capacity remains after funding P1 and P2 work. Many P3 initiatives defer to future cycles. This is appropriate and healthy—not every valuable idea can be funded immediately.

P4 - Low initiatives score 1.0 to 1.9, suggesting limited value, high risk, high cost, or some combination thereof. These investments may deliver value under certain conditions but are not compelling given current priorities and constraints. P4 initiatives represent 15-25% of proposals.

P4 initiatives should be deferred indefinitely. Sponsors may resubmit proposals in future cycles if circumstances change, business cases improve, or capacity increases. Some P4 initiatives never receive funding and eventually become obsolete.

P5 - Reject initiatives score below 1.0, indicating they should not be funded under any foreseeable circumstances. This might occur when:

  • No strategic justification exists
  • Costs vastly exceed benefits
  • Risk is unacceptably high with no mitigation path
  • The proposal conflicts with strategic direction

P5 initiatives should be formally rejected with clear rationale. This provides closure to sponsors and prevents indefinite reconsideration.
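One way to encode the band table is a threshold lookup. This is a sketch: the table's ranges leave the gaps between bands (e.g., 3.9-4.0) unspecified, so the thresholds below assign a score such as 3.95 to the lower band:

```python
def priority_band(score):
    """Map a 0-5 priority score to the five priority bands."""
    if score >= 4.0:
        return "P1 - Critical"
    if score >= 3.0:
        return "P2 - High"
    if score >= 2.0:
        return "P3 - Medium"
    if score >= 1.0:
        return "P4 - Low"
    return "P5 - Reject"

print(priority_band(2.7))  # P3 - Medium
```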


Scoring Process: Facilitating Objective Evaluation

Effective scoring requires a structured process that promotes objective evaluation, incorporates diverse perspectives, and produces consistent results. The scoring session brings together a cross-functional committee to evaluate each investment proposal.

Scoring Session Participants

Scoring Committee (5-7 members): The permanent committee responsible for evaluating all proposals. Members represent diverse perspectives:

  • Business executive(s) providing business value and strategic alignment expertise
  • Finance representative assessing financial impact and cost accuracy
  • IT representative evaluating technical feasibility and delivery risk
  • Enterprise architect ensuring architectural alignment
  • Portfolio manager considering portfolio balance and capacity

Business Sponsor: The executive sponsoring the investment presents the business case and answers questions. The sponsor does not participate in scoring.

Portfolio Analyst: The portfolio manager or analyst facilitates the session, manages scoring tools, and ensures process adherence. The analyst may provide guidance on scoring criteria interpretation.

Scoring Session Format

A typical scoring session lasts 60-90 minutes per investment and follows this agenda:

Business Case Presentation (15-20 minutes): The sponsor presents the investment proposal, covering:

  • Business problem or opportunity
  • Proposed solution approach
  • Expected benefits (strategic, financial, customer, operational, competitive)
  • Implementation approach and timeline
  • Required investment and resources
  • Key risks and mitigation strategies

The sponsor should provide objective data supporting the business case rather than advocacy arguments. Facts matter more than enthusiasm.

Q&A and Discussion (15-20 minutes): Committee members ask clarifying questions to understand the proposal fully. Focus questions on areas needed for scoring:

  • How does this align to specific strategic objectives?
  • What evidence supports the benefit estimates?
  • What similar implementations have been completed?
  • What are the most significant risks?
  • Are required resources available?

Individual Scoring (5-10 minutes): Each committee member independently scores the investment across all dimensions and sub-dimensions. Individual scoring before discussion prevents anchor bias and groupthink. Members should base scores on the provided scoring guidance rather than personal opinion.

Score Discussion and Calibration (15-20 minutes): The facilitator displays scores from all committee members, typically showing ranges and averages for each dimension. The committee discusses significant disagreements to understand different perspectives.

For example, if strategic alignment scores range from 2 to 5, committee members explain their reasoning. Perhaps one member knows the investment is explicitly named in the strategic plan (score 5) while others were unaware (score 2-3). The committee calibrates around the correct score (5) once facts emerge.

The facilitator should ensure all voices are heard, particularly minority opinions that may identify risks or opportunities others missed.

Final Score Agreement (5-10 minutes): After discussion, committee members revise their scores if convinced by discussion. The facilitator calculates final scores using averages or consensus depending on governance model.

Scoring Best Practices

Organizations achieve better scoring outcomes by following these practices:

Use the full scoring range (0-5): Many committees exhibit central tendency bias, clustering most scores at 3. This collapses the scoring range and fails to differentiate high and low performers. Committees should be encouraged to use scores of 0, 1, 4, and 5 when appropriate.

Score based on facts, not opinions: Scores should reflect objective assessment against defined criteria rather than subjective preferences. “I think this is important” is opinion. “This is named in the strategic plan as a top-3 priority” is fact.

Score dimensions independently: Avoid halo effects where one dimension influences others. An investment with excellent strategic alignment (5) might have high technical risk (4). Score each dimension based on its own criteria.

Discuss disagreements constructively: Significant score variations typically reflect different information or interpretation. Discussion should focus on understanding these differences and reaching alignment based on facts.

Document scoring rationale: The portfolio analyst should document key reasoning for scores, particularly for unusually high or low scores. This documentation proves valuable for future calibration and appeals.

Maintain scoring discipline: Committee members may feel pressure to inflate scores for initiatives they favor or for influential sponsors. The facilitator must maintain objective evaluation based on criteria.


Scoring Governance: Ensuring Consistency

Scoring governance establishes the structures and processes that maintain scoring consistency across time, committee members, and investment types.

Scoring Committee Composition

The scoring committee should include 5-7 members representing diverse perspectives. Smaller committees lack sufficient viewpoint diversity. Larger committees become unwieldy and time-consuming.

Committee membership should remain stable over time to build shared mental models and consistent interpretation. However, periodic rotation (2-year terms with staggered rotation) prevents entrenchment and brings fresh perspectives.

Committee members require training in:

  • Scoring framework and criteria
  • Bias recognition and mitigation
  • Facilitation and discussion techniques
  • Portfolio management principles

Calibration Sessions

Organizations should conduct calibration sessions quarterly to maintain scoring consistency. Calibration sessions review recent scoring decisions and assess whether scores accurately predicted outcomes.

Retrospective Analysis: Review completed investments from 12-24 months ago and compare actual outcomes to predicted scores. Did investments with high business value scores (4-5) actually deliver the predicted value? Did high-risk investments (4-5) encounter the predicted challenges?

This analysis identifies patterns:

  • Systematic over-scoring or under-scoring of particular dimensions
  • Committee blind spots (e.g., consistently underestimating organizational risk)
  • Sponsor optimism bias (consistently over-stating benefits or under-stating costs)

Criteria Refinement: Based on retrospective analysis, committees may adjust scoring criteria or thresholds. If all strategic alignment scores cluster at 3-4, perhaps the criteria need clearer differentiation between levels.

Scoring Practice: Calibration sessions can include practice scoring of historical proposals to ensure new committee members interpret criteria consistently with established members.

Handling Scoring Appeals

Sponsors occasionally disagree with scoring outcomes and wish to appeal. Organizations should establish clear appeal procedures:

  1. Written Appeal: Sponsor submits written appeal identifying specific scores they believe are incorrect and providing factual evidence supporting different scores.

  2. Limited Scope: Appeals must focus on factual errors or misunderstandings, not subjective disagreements. “The committee didn’t understand that this investment is named in the strategic plan” is valid. “I believe strategic alignment should be 5 not 4” is not.

  3. Committee Review: The scoring committee reviews the appeal and determines whether new information warrants score revision. Most appeals should be denied because they represent disagreement rather than error.

  4. Final Decision: The portfolio governance board makes final decisions on appeals if needed.

Appeal rights ensure fairness while preventing endless re-litigation of scoring decisions.


Common Pitfalls: Avoiding Scoring Dysfunction

Organizations implementing scoring frameworks often encounter predictable challenges that undermine scoring effectiveness. Understanding these pitfalls enables proactive mitigation.

Central Tendency Bias

Problem: Committee members avoid extreme scores, clustering most scores around 3. This compresses the scoring range and fails to differentiate high-performing and low-performing investments. Everything becomes “average.”

Root Cause: Committee members feel uncomfortable assigning very low scores (0-1) or very high scores (5), believing “nothing is perfect” or “something is better than nothing.”

Solution: Explicitly coach committees to use the full 0-5 range. Establish expectations that score distributions should approximate:

  • Scores of 0-1: 10-15% of scores
  • Scores of 2: 20-25%
  • Scores of 3: 30-40%
  • Scores of 4: 20-25%
  • Scores of 5: 10-15%

Not every investment should match this distribution, but across many investments, scores should span the full range.
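As a sketch, a facilitator could tally a committee's historical scores into these coaching buckets to spot central tendency (the target percentages above are guidelines, not hard limits; the bucket boundaries below assume integer scores):

```python
from collections import Counter

def score_distribution(scores):
    """Return the percentage of scores falling into each coaching bucket."""
    buckets = Counter()
    for s in scores:
        if s <= 1:
            buckets["0-1"] += 1
        elif s == 2:
            buckets["2"] += 1
        elif s == 3:
            buckets["3"] += 1
        elif s == 4:
            buckets["4"] += 1
        else:
            buckets["5"] += 1
    total = len(scores)
    return {bucket: round(100 * count / total) for bucket, count in buckets.items()}

# A committee clustering almost everything at 3 is a central-tendency warning sign
print(score_distribution([3, 3, 3, 2, 3, 3, 4, 3, 3, 3]))
```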

Halo Effect

Problem: High scores on one dimension inappropriately influence scores on other dimensions. An investment with exceptional strategic alignment receives inflated scores on customer impact, operational impact, and competitive impact because committee members generalize “this is strategic” to “this must be good in all ways.”

Root Cause: Cognitive bias that causes overall impressions to influence specific judgments.

Solution: Score dimensions independently without discussing overall impressions until after individual scoring. Structure the scoring interface to show one dimension at a time rather than displaying all dimensions simultaneously.

Anchor Bias

Problem: The first score shared influences subsequent scores. When a senior committee member announces “I scored this 4 on strategic alignment,” others adjust their scores toward 4 regardless of their initial assessment.

Root Cause: Social pressure and deference to authority figures.

Solution: Require individual scoring before any discussion. Committee members must commit to scores before seeing others’ scores. This ensures each member’s independent judgment contributes to the final outcome.

Political Pressure

Problem: Influential sponsors pressure the committee to inflate scores or override scoring outcomes. Committee members feel unable to assign accurate scores when the CEO’s pet project is being evaluated.

Root Cause: Power dynamics and fear of repercussions.

Solution: Establish executive commitment to objective scoring and protect committee members from retaliation. The governance board must visibly support scoring decisions even when outcomes displease powerful stakeholders. If executives will override scoring outcomes whenever convenient, the entire framework loses credibility.

Consider anonymous individual scoring to reduce direct pressure on specific committee members.

Incomplete Data

Problem: Proposals lack sufficient detail to score accurately, forcing committee members to make assumptions or guess.

Root Cause: Inadequate business case requirements or premature scoring of immature proposals.

Solution: Establish minimum business case requirements that proposals must meet before scoring. Required elements include:

  • Quantified benefits with supporting calculations
  • Implementation approach and timeline
  • Detailed cost breakdown
  • Risk assessment with mitigation plans
  • Resource requirements

Defer scoring of incomplete proposals until sponsors provide required information. Uncertainty should be reflected in risk scores rather than optimistic assumptions.

Inconsistent Criteria Interpretation

Problem: Different committee members interpret scoring criteria differently, leading to inconsistent scoring across investments or over time.

Root Cause: Ambiguous criteria definitions or inadequate training.

Solution: Develop detailed scoring guidance with specific examples for each score level. Conduct regular calibration sessions where committee members score the same proposal and discuss interpretation differences. Train new committee members thoroughly before they participate in actual scoring.

Maintain a scoring precedent database documenting how specific types of investments have been scored historically to promote consistency.

Sponsor Optimism Bias

Problem: Business sponsors systematically overestimate benefits and underestimate costs and risks, leading to inflated scores that do not reflect reality.

Root Cause: Cognitive bias, advocacy mindset, and incentive misalignment.

Solution: Require independent validation of business cases before scoring. Finance should verify financial impact calculations. Architecture should assess technical feasibility. Portfolio management should validate resource estimates. Committee members should apply appropriate skepticism to sponsor claims and score based on validated information rather than sponsor assertions.


Key Takeaways

  • Objective scoring provides defensible, transparent prioritization based on consistent criteria rather than political influence

  • Three dimensions—business value, risk, and cost—provide comprehensive evaluation that balances return against execution uncertainty and investment requirements

  • Weighted sub-dimensions enable nuanced assessment of strategic alignment, financial impact, customer value, technical risk, delivery feasibility, and cost factors

  • Priority scores calculated from weighted dimensions enable rank-ordering all investments and making data-driven funding decisions

  • Priority bands translate scores into actionable decisions, with P1-P2 initiatives receiving funding and P3-P4 initiatives deferred based on capacity

  • Structured scoring process with cross-functional committees, individual scoring, and calibration discussion produces objective, consistent results

  • Scoring governance through stable committees, regular calibration, and appeals processes maintains consistency across time

  • Common pitfalls—including central tendency, halo effects, and political pressure—can be mitigated through process design and executive commitment

Organizations that implement rigorous scoring frameworks make better investment decisions, achieve stronger portfolio outcomes, and build stakeholder confidence in portfolio management.


Review Questions

  1. What problems emerge when portfolio prioritization lacks objective scoring frameworks, and how does structured scoring address these problems?

  2. Why are business value, risk, and cost weighted 40%, 30%, and 30% respectively, and what would be the implications of different weightings?

  3. How would you score an investment that enables a top strategic objective but requires new, unproven technology and costs $8M? Walk through each dimension.

  4. What distinguishes a P1 (Critical) priority investment from a P2 (High) priority investment, and what percentage of the portfolio should each represent?

  5. Why must risk and cost scores be inverted (5 - score) in the priority calculation formula?

  6. How does individual scoring before discussion help mitigate cognitive biases in committee scoring?

  7. What evidence would you require from a business sponsor claiming a $2M annual financial benefit from an investment?

  8. Under what circumstances should a scoring committee reject an appeal from a business sponsor disputing their investment’s priority score?


Summary

Prioritization and scoring form the foundation of disciplined portfolio management by enabling objective evaluation of competing investment proposals. The scoring framework evaluates each investment across three critical dimensions: business value measuring strategic and financial return, risk assessing execution uncertainty, and cost quantifying required investment. Each dimension comprises weighted sub-dimensions that enable nuanced evaluation while maintaining manageable complexity.

Priority scores combine dimensional assessments using a weighted formula that produces numeric priorities enabling rank-ordering all investments. Priority bands translate scores into funding decisions, with critical and high-priority initiatives receiving immediate funding while medium and low-priority initiatives are deferred based on capacity constraints.

The scoring process brings together cross-functional committees to evaluate proposals through structured sessions incorporating business case presentations, independent scoring, calibration discussion, and final score agreement. Scoring governance maintains consistency through stable committee composition, regular calibration sessions, and formal appeals processes.

Organizations that implement rigorous scoring frameworks overcome the political dysfunction that plagues portfolio decision-making and achieve objective, defensible prioritization aligned to strategic objectives.



IT Portfolio Management Handbook - MIT License - © 2025