Why Performance Metrics Belong in Architecture Decisions
Platform reviews tend to be dominated by capability discussions: what can the new platform do, what constraints does it impose, what does it cost? These are the right questions. But they often lack the quantitative rigour that comes from measurable technical outcomes.
Core Web Vitals — Google's standardised performance metrics — provide something rare in architectural decision-making: objective, comparable numbers that matter both technically (page speed) and commercially (SEO ranking, conversion rate). When I was evaluating platform options, mapping each option's expected Core Web Vitals outcomes gave the comparison a precision that capability discussions alone couldn't provide.
The Baseline: What We Were Starting From
Before the review, the existing platform (PHP monolith, VueJS frontend, EC2 hosting) measured:
| Metric | Baseline |
| --- | --- |
| First Contentful Paint (FCP) | 2.2 seconds |
| Largest Contentful Paint (LCP) | 3.6 seconds |
| Time to First Byte (TTFB) | 0.2 seconds |
| First Input Delay (FID) | 5ms |
| Lighthouse Performance Score | 46 |
| Lighthouse Accessibility | 96-100 |
A Performance score of 46 is in the "poor" range (0-49). An LCP of 3.6 seconds is well above Google's "good" threshold of 2.5 seconds, placing it in the "needs improvement" band (2.5-4.0 seconds). These numbers weren't just a technical concern: they were a direct SEO disadvantage in a competitive organic search landscape.
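Google publishes fixed thresholds for each Core Web Vitals metric, so the banding above can be checked mechanically rather than by eyeballing. A minimal sketch (the threshold values are Google's published ones; the function and variable names are my own):

```typescript
type Band = "good" | "needs improvement" | "poor";

// Google's published thresholds per metric: upper bound of "good"
// and upper bound of "needs improvement", in the metric's own unit.
const thresholds: Record<string, [number, number]> = {
  LCP: [2.5, 4.0], // seconds
  FCP: [1.8, 3.0], // seconds
  INP: [200, 500], // milliseconds
};

function band(metric: string, value: number): Band {
  const [good, poor] = thresholds[metric];
  if (value <= good) return "good";
  if (value <= poor) return "needs improvement";
  return "poor";
}

// The baseline above: LCP 3.6s sits in "needs improvement", not yet "poor".
console.log(band("LCP", 3.6)); // "needs improvement"
console.log(band("FCP", 2.2)); // "needs improvement"
```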
The Options Comparison
Part of the value of establishing baseline metrics is being able to project expected outcomes for each option. This is what the platform comparison showed:
| Platform Option | FCP | LCP | Performance Score |
| --- | --- | --- | --- |
| Current (PHP/VueJS monolith) | 2.2s | 3.6s | 46 |
| Shopify / SaaS | 2.2s | 3.6s | 55-60 |
| Bespoke Microservices | 2.0-2.6s | 3.0-4.7s | 60-65 |
| Hybrid (custom + best-in-class) | 1.5s | 2.0s | 60-65 |
The Shopify estimate assumed Shopify's own performance profile, which is reasonable for a standard Shopify store but hasn't historically been strong for complex theme customisations. The bespoke microservices range was wide because it depended entirely on implementation quality. The hybrid projected well because Next.js with SSR on Vercel's edge network has a well-understood performance ceiling.
The Actual After-Launch Numbers
Eighteen months later, post-replatforming to the hybrid architecture (Next.js on Vercel), the measured numbers were:
| Metric | Before | After | Improvement |
| --- | --- | --- | --- |
| FCP | 2.2s | 1.0s | 55% faster |
| LCP | 3.6s | 1.4s | 61% faster |
| TTFB | 0.2s | 0.3-0.6s | Acceptable increase |
| INP (replaced FID) | 5ms | 110ms | Within "Good" threshold |
| Lighthouse Performance | 46 | 77 | +31 points |
| Lighthouse Accessibility | 96 | 91 | Marginal decrease |
The performance gains exceeded the projections. LCP at 1.4 seconds is comfortably within Google's "Good" threshold of 2.5 seconds. The Performance score improvement from 46 to 77 moved the site from "poor" to "needs improvement" (and approaching "good").
The TTFB increased slightly — a known trade-off when introducing SSR versus a static asset served from a CDN. For subscription platforms where personalised content is the norm, this is acceptable.
Why TTFB Increased and Why That's Fine
The baseline's 0.2s TTFB was achieved by serving a static VueJS bundle from S3/CloudFront — fast because the server does nothing. The new platform's TTFB of 0.3-0.6s reflects server-side rendering: the server is computing the page before sending the first byte.
For a subscription platform where most pages require personalised content (current subscription status, delivery schedule, account details), SSR is not optional — you can't serve a static bundle and then hydrate personalisation client-side without a visible layout shift. The TTFB increase is the cost of serving meaningful HTML on the first response rather than a JavaScript bundle that re-renders after API calls.
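The contrast can be sketched without any framework. Everything below is illustrative (the `Account` shape and both render functions are hypothetical, not the platform's actual code); the point is that with SSR the personalised HTML exists in the first response, rather than being filled in after client-side API calls:

```typescript
interface Account {
  name: string;
  nextDelivery: string; // ISO date, e.g. "2024-03-01"
}

// Static-bundle approach: the first byte arrives fast because the
// server does nothing, but the response is an empty shell that only
// becomes meaningful after the client fetches data and re-renders.
function staticShell(): string {
  return `<div id="app"><!-- hydrated client-side after API calls --></div>`;
}

// SSR approach: the server looks up the account *before* sending the
// first byte (hence the higher TTFB), but the response already
// contains the personalised content.
function renderAccountPage(account: Account): string {
  return `<main>
  <h1>Welcome back, ${account.name}</h1>
  <p>Your next delivery: ${account.nextDelivery}</p>
</main>`;
}

const html = renderAccountPage({ name: "Sam", nextDelivery: "2024-03-01" });
console.log(html.includes("Welcome back, Sam")); // true
```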
The SEO Impact
Core Web Vitals are a confirmed Google ranking signal (part of the Page Experience update). Moving from a Performance score of 46 to 77, and from LCP of 3.6s to 1.4s, is a material improvement in ranking eligibility for pages that were previously penalised.
For subscription businesses where organic search is an acquisition channel, this translates directly to revenue. The performance improvement isn't a vanity metric — it's a business metric.
Using CWV in Platform Reviews
The lesson I'd pass on: establish your Core Web Vitals baseline before starting an infrastructure review, not after. Measuring the current state objectively gives you:
1. A benchmark to project each option against
2. A success metric to measure the project against post-launch
3. A business case for the investment that non-technical stakeholders understand
Saying "we're going to improve page speed" is vague. Saying "we're going from LCP 3.6s to under 2.5s, moving us from the 'needs improvement' to 'good' threshold, with measurable SEO impact" is a business case.
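Turning raw before/after numbers into that kind of concrete claim is simple arithmetic, and worth automating if you report metrics regularly. A small sketch (the function name and output format are my own, not part of any tool):

```typescript
// Formats a before/after metric pair into a concrete, threshold-anchored
// claim of the kind described above.
function improvementClaim(
  metric: string,
  before: number,
  after: number,
  goodThreshold: number
): string {
  const pct = Math.round(((before - after) / before) * 100);
  const verdict = after <= goodThreshold ? "within" : "still above";
  return `${metric}: ${before}s -> ${after}s (${pct}% faster), ` +
    `${verdict} the ${goodThreshold}s "good" threshold`;
}

console.log(improvementClaim("LCP", 3.6, 1.4, 2.5));
// LCP: 3.6s -> 1.4s (61% faster), within the 2.5s "good" threshold
console.log(improvementClaim("FCP", 2.2, 1.0, 1.8));
// FCP: 2.2s -> 1s (55% faster), within the 1.8s "good" threshold
```

The 61% and 55% figures here reproduce the improvement column in the after-launch table above.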