CTO Craft · 1 September 2023

How to Run a CTO Infrastructure Review That Stakeholders Actually Trust

A platform infrastructure review that produces the right technical answer but fails to bring stakeholders with it is a failed review. Here's the process I use — structured to produce decisions that the board, product leadership, and engineering teams can all trust.

CTO · infrastructure-review · stakeholder-management · leadership · process

The Problem With Most Infrastructure Reviews

Most CTO infrastructure reviews have a structural flaw: they're conducted by engineers, evaluated by engineers, and presented to non-engineers as a conclusion. The non-engineers approve a decision they don't fully understand and have limited ability to challenge.

This creates two problems. First, the conclusion may miss non-technical constraints (commercial relationships, regulatory requirements, contractual obligations) that the engineering team wasn't aware of. Second, when the project inevitably encounters difficulty — and replatforming projects always do — stakeholders who didn't understand the decision have no basis for maintaining their confidence in it.

The infrastructure review process I've developed is designed to produce a decision that is simultaneously technically rigorous and stakeholder-legible.

Phase 1: Pre-Discovery — Options, Not Recommendations

The first phase is deliberately narrow: produce a clear options paper with three to four distinct approaches, each honestly assessed for advantages and limitations. At this stage, you're not making a recommendation — you're creating a shared vocabulary and a structured basis for the conversation that follows.

The options paper should be readable by a non-technical senior leader in 20 minutes. For each option:

  • What it is (one paragraph, no jargon)
  • What it does well for our specific situation
  • What it doesn't do, or does poorly
  • Rough cost and time implications

Critically: present genuine options, not a false choice. If you've already decided on the answer and you're using the review process to justify it, experienced stakeholders will notice. Present each option as genuinely viable, because at this stage, it may be.

The output of this phase is a shortlist, typically the two or three options that are genuinely viable, with a clear recommendation for which to take into detailed evaluation. This recommendation goes to the senior management group for sign-off before deeper work begins.

Phase 2: Product Discovery — Cross-Functional Requirements

With the shortlisted options in focus, the next phase is gathering requirements from every function that the platform serves. Not just engineering: product managers, operations, customer service, finance, marketing.

This is where platform reviews most commonly fail. Engineering teams evaluate platforms against engineering requirements and miss that the operations team has a workflow that the new platform handles differently, or that customer service uses a specific reporting feature that will need to be rebuilt.

The product discovery phase should produce:

  • A requirements matrix: each function's requirements mapped to each option's capability
  • A capability gap analysis: where each option requires custom development to meet requirements
  • A timeline and resource estimate for closing each gap
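These three outputs hang off the same underlying structure, so it can help to see them as one. Here is a minimal sketch of a requirements matrix and the gap analysis derived from it; the function names, requirements, and option names are hypothetical examples, not recommendations:

```python
# Illustrative requirements matrix: each function's requirements, mapped to
# whether each shortlisted option supports them natively. All names are
# hypothetical examples.
requirements = {
    "operations":       {"bulk order editing":    {"Option A": True,  "Option B": False}},
    "customer_service": {"refund reporting":      {"Option A": False, "Option B": True}},
    "finance":          {"multi-currency ledger": {"Option A": True,  "Option B": True}},
}

def capability_gaps(requirements):
    """Return, per option, the requirements that would need custom development."""
    gaps = {}
    for function, reqs in requirements.items():
        for requirement, support in reqs.items():
            for option, native in support.items():
                if not native:
                    gaps.setdefault(option, []).append((function, requirement))
    return gaps

gaps = capability_gaps(requirements)
# gaps["Option A"] → [("customer_service", "refund reporting")]
```

Each entry in the gap report is then a line item to estimate for timeline and resource cost, which is exactly the third output above.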

This phase typically takes 6-8 weeks for a platform of meaningful complexity. Rushing it is the single biggest risk factor for a failed replatforming.

Phase 3: The Comparison Framework

The comparison should be structured around dimensions that matter to both technical and non-technical stakeholders:

Technical dimensions (credible to engineers, understandable to the board):

  • Customisation capability — can we build what the business needs?
  • Scalability — can it handle 2x, 5x current load?
  • Security — what does compliance and data protection look like?
  • Performance — measurable, with baseline and target metrics (use Core Web Vitals)

Commercial dimensions (most important to non-technical stakeholders):

  • Total cost of ownership over 3 years — including transaction fees, plugins, and engineering
  • Speed to market for the initial migration
  • Speed of subsequent feature development
  • Vendor dependency risk

Operational dimensions (important to the team running it):

  • Developer experience and hiring market
  • Operational overhead (managed vs self-hosted)
  • Integration ecosystem
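One way to bring these three dimension groups into a single comparable view (the article doesn't prescribe this, so treat it as an optional technique) is a simple weighted scoring matrix. The scores and weights below are purely illustrative; the real value is that writing them down forces the weighting debate into the open with stakeholders:

```python
# Hypothetical weighted comparison across the three dimension groups.
# Weights and 1-5 scores are illustrative placeholders, not prescriptions.
weights = {"technical": 0.4, "commercial": 0.4, "operational": 0.2}

scores = {
    "Option A": {"technical": 4, "commercial": 3, "operational": 4},
    "Option B": {"technical": 3, "commercial": 5, "operational": 3},
}

def weighted_total(option_scores, weights):
    """Weighted sum of an option's dimension scores."""
    return sum(option_scores[dim] * w for dim, w in weights.items())

for option, s in scores.items():
    print(option, round(weighted_total(s, weights), 2))
```

A note of caution: the number that falls out matters less than the conversation about the weights. If the board weights commercial dimensions at 0.6 and engineering weights them at 0.2, that disagreement is the thing the review needs to surface.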

Making Cost Comparisons Honest

The single most important discipline in cost comparisons: model total cost, not subscription cost. Engineering leaders who present "Shopify costs £2K/month, custom costs £50K/month in engineering" are presenting an incomplete comparison. The full comparison includes:

  • Every plugin the platform requires to meet the requirements matrix
  • Transaction fees projected at current and expected future revenue
  • The staff cost difference between the two options
  • The migration cost (one-time)
  • The ongoing maintenance differential

When you model this honestly, the comparison often looks very different from the headline numbers.
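To make that concrete, here is a minimal sketch of a three-year TCO model for the SaaS side of the example above. Every figure other than the £2K/month subscription headline is a hypothetical placeholder, not a benchmark:

```python
# Illustrative three-year total-cost-of-ownership model. All figures except
# the £2K/month subscription from the text are hypothetical placeholders.
def three_year_tco(monthly_subscription, monthly_plugins, annual_revenue,
                   transaction_fee_rate, monthly_staff_cost, migration_cost):
    """Total cost over 36 months, in the same currency as the inputs."""
    recurring = (monthly_subscription + monthly_plugins + monthly_staff_cost) * 36
    fees = annual_revenue * transaction_fee_rate * 3
    return recurring + fees + migration_cost

# The "£2K/month" headline number, with the hidden line items added back in.
saas_tco = three_year_tco(
    monthly_subscription=2_000,
    monthly_plugins=1_500,       # plugins needed to meet the requirements matrix
    annual_revenue=10_000_000,
    transaction_fee_rate=0.01,   # 1% transaction fees
    monthly_staff_cost=15_000,   # engineers maintaining integrations and plugins
    migration_cost=120_000,      # one-time
)
print(saas_tco)  # far larger than the 2_000 * 36 the headline number implies
```

Run the same model for each option with its own inputs, and the comparison the board sees is the total lines, not the headline subscriptions.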

The Recommendation Presentation

The recommendation presentation to the senior management group should follow a specific structure:

  1. The problem we're solving — what are the specific, measurable limitations of the current platform?
  2. What we evaluated — the options considered, with honest assessment of each
  3. The recommendation — what we're proposing, and why
  4. The cost and timeline — total cost of ownership, realistic timeline with milestones
  5. The risks — what could go wrong, and how we're mitigating each
  6. The success criteria — how we'll know it worked (measurable outcomes)

Section 5 — the risks — is the most important section for building stakeholder trust. Non-technical stakeholders have been burned by technology projects that were sold as certain and delivered as troubled. A recommendation that honestly enumerates risks and mitigations demonstrates that you've done the work, and gives stakeholders something to hold you accountable to.

The Timeline Discipline

For a meaningful replatforming programme, the timeline should follow four phases:

Pre-discovery (1-2 months): Options paper, initial shortlisting, senior management alignment.

Product discovery (2-3 months): Cross-functional requirements, detailed capability assessment, technical specification.

Implementation (4-6 months): Development, internal testing, beta preparation.

Deployment (1-2 months): Staged rollout, monitoring, stabilisation.

The most common mistake is under-investing in discovery and over-promising on implementation speed. A rushed discovery phase produces a technical specification that misses requirements, which produces a replatforming that launches missing features, which produces a frustrated business and a damaged engineering reputation.

Do the discovery properly. The implementation speed will follow.