February 10, 2026 · 15 min read · frontend

Core Web Vitals Audit Checklist (2026 Update)

Google's CWV thresholds haven't changed, but the measurement methodology has. Here's the updated audit checklist with INP replacing FID, real-world field data strategies, and the optimizations that actually move the needle.

performance · core-web-vitals · seo · inp · lighthouse

TL;DR

INP (Interaction to Next Paint) replaced FID in March 2024 and most SaaS dashboards still haven't adapted. FID measured input delay only... the easy part. INP measures the entire interaction lifecycle: input delay + processing time + presentation delay. The result: 40% of sites that passed FID fail INP. The 2026 audit checklist covers the three metrics that matter (LCP, INP, CLS), the field data sources that Google actually uses for ranking (CrUX, not Lighthouse), and the 12 optimizations I've seen move the needle most across advisory clients. Stop chasing Lighthouse scores. Start measuring real user experience with the Web Vitals library and RUM data.

Part of the Performance Engineering Playbook ... a comprehensive guide to building systems that stay fast under real-world load.


The INP Shift Nobody Prepared For

When Google replaced First Input Delay with Interaction to Next Paint in March 2024, the industry treated it as a minor metric swap. It wasn't. FID only measured the delay before the browser started processing an interaction... typically 50-100ms on most sites. INP measures the full round-trip: from user click to the next frame being painted on screen.

This distinction matters because most performance problems happen during processing, not during input delay. A React component that re-renders 500 nodes on a button click has zero FID issues... the browser starts processing immediately. But the INP is 800ms because the main thread is blocked while React reconciles the virtual DOM.

I've audited 15+ SaaS dashboards in the past year. The median INP on data-heavy pages is 350-500ms. Google's "good" threshold is 200ms.


The Three Metrics (2026 Thresholds)

| Metric | Good | Needs Improvement | Poor | What It Measures |
|--------|------|-------------------|------|------------------|
| LCP | < 2.5s | 2.5-4.0s | > 4.0s | Largest visible content element load time |
| INP | < 200ms | 200-500ms | > 500ms | Worst interaction responsiveness (full lifecycle) |
| CLS | < 0.1 | 0.1-0.25 | > 0.25 | Cumulative layout shift score |

These thresholds haven't changed since 2024. What changed is the measurement methodology and the weight Google places on field data vs. lab data.
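The buckets above are easy to encode for tagging your own RUM samples. A minimal sketch; the `THRESHOLDS` map and `rateMetric` helper are my own names, not from any library:

```typescript
type Rating = "good" | "needs-improvement" | "poor";

// [good upper bound, poor lower bound] per metric; LCP/INP in ms, CLS unitless
const THRESHOLDS: Record<string, [number, number]> = {
  LCP: [2500, 4000],
  INP: [200, 500],
  CLS: [0.1, 0.25],
};

function rateMetric(name: string, value: number): Rating {
  const [good, poor] = THRESHOLDS[name];
  if (value <= good) return "good";
  if (value <= poor) return "needs-improvement";
  return "poor";
}
```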

Field Data vs. Lab Data

Field data (CrUX ... Chrome User Experience Report) is what Google uses for ranking signals. It represents the 75th percentile of real user experiences across all device types and network conditions.

Lab data (Lighthouse, WebPageTest) is a synthetic test from a single device profile. It's useful for debugging but does not directly influence search rankings.

The most common mistake: optimizing for a Lighthouse score of 100 while real users on 4G connections with mid-range Android devices have a completely different experience. A Lighthouse score of 95 with good CrUX data beats a Lighthouse score of 100 with poor CrUX data every time.
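You can pull the CrUX p75 values for your origin programmatically. A sketch against the public CrUX API (the endpoint and metric names follow the published API; the helper names and the API-key setup are my own, and assume you've enabled the Chrome UX Report API in a Google Cloud project):

```typescript
const CRUX_ENDPOINT =
  "https://chromeuserexperiencereport.googleapis.com/v1/records:queryRecord";

interface CruxRecord {
  metrics?: Record<string, { percentiles?: { p75?: number | string } }>;
}

// Pure helper: pull the p75 value for one metric out of a CrUX record.
// CLS comes back as a string in the API, so normalize to a number.
function extractP75(record: CruxRecord, metric: string): number | undefined {
  const p75 = record.metrics?.[metric]?.percentiles?.p75;
  return p75 === undefined ? undefined : Number(p75);
}

async function queryCruxP75(origin: string, apiKey: string) {
  const res = await fetch(`${CRUX_ENDPOINT}?key=${apiKey}`, {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ origin, formFactor: "PHONE" }),
  });
  const { record } = await res.json();
  return {
    lcp: extractP75(record, "largest_contentful_paint"),
    inp: extractP75(record, "interaction_to_next_paint"),
    cls: extractP75(record, "cumulative_layout_shift"),
  };
}
```

Compare these numbers against your Lighthouse runs; when they diverge, trust CrUX.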


The Audit Checklist

Phase 1: Measurement Setup (Do This First)

Before optimizing anything, instrument your application to collect real user data.

1. Install the Web Vitals Library

```typescript
import { onLCP, onINP, onCLS } from "web-vitals";

function sendToAnalytics(metric: { name: string; value: number; id: string; delta: number }) {
  // Send to your analytics backend
  fetch("/api/vitals", {
    method: "POST",
    body: JSON.stringify({
      name: metric.name,
      value: metric.value,
      page: window.location.pathname,
      connection: navigator.connection?.effectiveType,
      device: navigator.userAgent,
      timestamp: Date.now(),
    }),
    keepalive: true,
  });
}

onLCP(sendToAnalytics);
onINP(sendToAnalytics);
onCLS(sendToAnalytics);
```

The keepalive: true flag ensures the beacon is sent even if the user navigates away before the fetch completes. Without it, you'll undercount metrics on pages with high bounce rates.

2. Set Up CrUX Dashboard

https://lookerstudio.google.com/datasources/create?connectorId=AKfycbxk2OdsuU8RLhjl8MrDfEOV

CrUX data updates monthly. Use it for trend analysis, not real-time debugging. Your RUM data fills the gap between CrUX updates.
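When aggregating your own RUM beacons, summarize each metric the same way CrUX does: take the 75th percentile, not the mean. A minimal sketch (the helper name is mine):

```typescript
// 75th percentile of collected RUM samples, matching how CrUX
// summarizes each metric. Means hide the slow tail that Google measures.
function p75(samples: number[]): number {
  if (samples.length === 0) throw new Error("no samples");
  const sorted = [...samples].sort((a, b) => a - b);
  const idx = Math.ceil(0.75 * sorted.length) - 1;
  return sorted[idx];
}
```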

3. Configure Performance Budgets

```json
{
  "budgets": [
    {
      "path": "/*",
      "timings": [
        { "metric": "largest-contentful-paint", "budget": 2500 },
        { "metric": "interaction-to-next-paint", "budget": 200 },
        { "metric": "cumulative-layout-shift", "budget": 0.1 }
      ],
      "resourceSizes": [
        { "resourceType": "script", "budget": 300 },
        { "resourceType": "total", "budget": 800 }
      ]
    }
  ]
}
```

Break the build if any budget is exceeded. Performance budgets that don't enforce consequences are documentation, not guardrails.
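The enforcement step can be a few lines in your CI script. A hypothetical gate, assuming you've already measured the values (e.g. from a Lighthouse run); the `checkBudgets` name and shape are my own:

```typescript
interface Budget {
  metric: string;
  budget: number;
}

// Returns a human-readable violation per metric over budget;
// an empty array means the build passes.
function checkBudgets(measured: Record<string, number>, budgets: Budget[]): string[] {
  return budgets
    .filter((b) => (measured[b.metric] ?? 0) > b.budget)
    .map((b) => `${b.metric}: ${measured[b.metric]} exceeds budget ${b.budget}`);
}
```

Wire the non-empty result to `process.exit(1)` and the budget stops being documentation.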


Phase 2: LCP Optimization

LCP measures how quickly the largest visible content element renders. For most SaaS applications, this is a hero image, heading, or above-the-fold data table.

4. Identify Your LCP Element

```typescript
new PerformanceObserver((entryList) => {
  for (const entry of entryList.getEntries()) {
    console.log("LCP element:", entry.element);
    console.log("LCP time:", entry.startTime);
    console.log("LCP size:", entry.size);
  }
}).observe({ type: "largest-contentful-paint", buffered: true });
```

Run this in your browser console on key pages. The LCP element is often not what you expect... on data-heavy pages, it might be a table cell or a chart SVG, not the page heading.

5. Preload Critical Resources

```html
<!-- Preload the LCP image -->
<link rel="preload" as="image" href="/hero.webp" fetchpriority="high" />

<!-- Preload critical fonts -->
<link rel="preload" as="font" type="font/woff2" href="/fonts/inter-var.woff2" crossorigin />

<!-- Preconnect to API origins -->
<link rel="preconnect" href="https://api.yoursaas.com" />
```

fetchpriority="high" on your LCP image tells the browser to prioritize it over other images. This alone can improve LCP by 200-400ms on image-heavy pages.

6. Server-Side Render Above the Fold

For SaaS dashboards, the LCP element is often data-dependent. Server-side rendering the initial data load eliminates the client-side fetch waterfall:

```typescript
// Next.js App Router ... server component
async function DashboardPage() {
  // Data fetch happens on the server, before HTML reaches the browser
  const metrics = await db.query(
    'SELECT * FROM dashboard_metrics WHERE tenant_id = $1 LIMIT 10',
    [tenantId]
  );

  return (
    <main>
      <MetricsTable data={metrics} /> {/* LCP element renders with data */}
    </main>
  );
}
```

This pattern reduces LCP by 500-1500ms on data-heavy pages compared to client-side fetching with loading spinners.


Phase 3: INP Optimization

INP is where most SaaS applications fail. Dashboard interactions... filtering, sorting, toggling... trigger expensive re-renders that block the main thread.

7. Break Up Long Tasks

Any task that blocks the main thread for more than 50ms degrades INP. Use scheduler.yield() to break long tasks into smaller chunks:

```typescript
// Feature-detect scheduler.yield (not supported in Safari as of March 2026)
const yieldToMain = globalThis.scheduler?.yield
  ? () => scheduler.yield()
  : () => new Promise<void>((resolve) => setTimeout(resolve, 0));

async function filterLargeDataset(data: DataRow[], filters: FilterConfig): Promise<DataRow[]> {
  const results: DataRow[] = [];
  const chunkSize = 1000;

  for (let i = 0; i < data.length; i += chunkSize) {
    const chunk = data.slice(i, i + chunkSize);
    const filtered = chunk.filter((row) => applyFilters(row, filters));
    results.push(...filtered);

    // Yield to the main thread every 1000 items
    if (i + chunkSize < data.length) {
      await yieldToMain();
    }
  }

  return results;
}
```

Browser support note: scheduler.yield() is supported in Chrome 129+, Edge 129+, Firefox 142+, and Opera 115+, but not in Safari. The fallback using setTimeout(resolve, 0) works everywhere but doesn't preserve task priority. For ~72% of global users, scheduler.yield() works natively... for the rest, the fallback is still a significant improvement over blocking the main thread.

8. Virtualize Large Lists

Rendering 5,000 table rows when only 20 are visible is the most common INP violation I see in SaaS applications:

```typescript
import { useRef } from 'react';
import { useVirtualizer } from '@tanstack/react-virtual';

function DataTable({ rows }: { rows: DataRow[] }) {
  const parentRef = useRef<HTMLDivElement>(null);

  const virtualizer = useVirtualizer({
    count: rows.length,
    getScrollElement: () => parentRef.current,
    estimateSize: () => 48, // row height in px
    overscan: 5,
  });

  return (
    <div ref={parentRef} style={{ height: '600px', overflow: 'auto' }}>
      {/* position: relative so the absolutely positioned rows anchor here */}
      <div style={{ height: `${virtualizer.getTotalSize()}px`, position: 'relative' }}>
        {virtualizer.getVirtualItems().map((virtualRow) => (
          <div
            key={virtualRow.key}
            style={{
              position: 'absolute',
              top: 0,
              width: '100%',
              transform: `translateY(${virtualRow.start}px)`,
              height: `${virtualRow.size}px`,
            }}
          >
            <TableRow data={rows[virtualRow.index]} />
          </div>
        ))}
      </div>
    </div>
  );
}
```

Virtualization typically reduces INP on data tables from 300-800ms to under 100ms.

9. Debounce Input Handlers

Search inputs, filter dropdowns, and range sliders that trigger re-renders on every keystroke or change event are INP killers:

```typescript
import { useMemo, useState } from 'react';
import { debounce } from 'lodash-es';

function SearchInput({ onSearch }: { onSearch: (query: string) => void }) {
  const [value, setValue] = useState('');

  const debouncedSearch = useMemo(
    () => debounce((query: string) => onSearch(query), 150),
    [onSearch]
  );

  return (
    <input
      value={value}
      onChange={(e) => {
        setValue(e.target.value); // Update UI immediately
        debouncedSearch(e.target.value); // Debounce the expensive operation
      }}
    />
  );
}
```

The 150ms debounce feels instant to users while reducing re-renders by 80-90% during fast typing.


Phase 4: CLS Optimization

CLS measures how much the page layout shifts during loading. The most common causes in SaaS applications: images without dimensions, dynamically loaded content, and web fonts.

10. Reserve Space for Dynamic Content

```css
/* Reserve space for content that loads asynchronously */
.dashboard-chart {
  aspect-ratio: 16 / 9;
  min-height: 300px;
  contain: layout;
}

/* Web font with swap... */
@font-face {
  font-family: "Inter";
  src: url("/fonts/inter-var.woff2") format("woff2");
  font-display: swap;
}

/* ...plus a fallback face whose metrics are adjusted to match it.
   The override descriptors belong on the fallback, not the web font. */
@font-face {
  font-family: "Inter Fallback";
  src: local("Arial");
  size-adjust: 107%;
  ascent-override: 90%;
  descent-override: 22%;
  line-gap-override: 0%;
}

body {
  font-family: "Inter", "Inter Fallback", sans-serif;
}
```

The size-adjust and override properties match the fallback font metrics to your custom font, eliminating the layout shift when the web font loads.

11. Use CSS contain for Isolated Components

```css
.sidebar-widget {
  contain: layout style;
}

.notification-banner {
  contain: layout;
  position: fixed;
  top: 0;
}
```

The contain property tells the browser that layout changes inside the element won't affect elements outside it. This prevents expensive reflows from cascading through the document.


Phase 5: Monitoring and Regression Prevention

12. Set Up Automated CWV Monitoring

```typescript
// CI integration ... fail the build on CWV regression
import { startFlow } from "lighthouse";
import puppeteer from "puppeteer";

async function auditCWV(url: string) {
  const browser = await puppeteer.launch();
  const page = await browser.newPage();
  const flow = await startFlow(page);

  // Navigate and measure
  await flow.navigate(url);

  // Simulate interaction for INP
  await flow.startTimespan();
  await page.click('[data-testid="filter-button"]');
  await page.waitForSelector('[data-testid="filtered-results"]');
  await flow.endTimespan();

  // createFlowResult() returns structured data; generateReport() returns HTML
  const result = await flow.createFlowResult();
  const navAudits = result.steps[0].lhr.audits;

  // Assert thresholds against the navigation step
  const lcp = navAudits["largest-contentful-paint"].numericValue;
  const cls = navAudits["cumulative-layout-shift"].numericValue;
  if (lcp > 2500) throw new Error(`LCP ${lcp}ms exceeds 2500ms budget`);
  if (cls > 0.1) throw new Error(`CLS ${cls} exceeds 0.1 budget`);

  await browser.close();
}
```

Run this in CI on every pull request. Lab data in CI won't match field data exactly, but it catches regressions before they reach production.


The Common Mistakes

Mistake 1: Optimizing for Lighthouse Instead of CrUX

Lighthouse runs on a simulated mid-tier mobile device with simulated 4G. Your users might be on fiber with M3 MacBooks or on 3G with 2019 Android phones. Optimize for the 75th percentile of your actual users, not for a synthetic benchmark.

Mistake 2: Ignoring INP on Internal Tools

"It's an internal dashboard, performance doesn't matter." I've heard this at 6 companies. Internal tools with 500ms+ INP cost your team 15-30 minutes per day in accumulated interaction lag. Over a year, that's 60-120 hours per engineer. At $150K/year loaded cost, you're spending $4,500-9,000 per engineer on slow tooling.
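The arithmetic behind that estimate is worth making explicit. A back-of-envelope sketch; the ~240 working days and $75/hour loaded rate ($150K over ~2,000 hours) are my assumptions, so substitute your own:

```typescript
// Annual cost of accumulated interaction lag for one engineer
function lagCostPerEngineer(
  minutesLostPerDay: number,
  hourlyRate = 75,  // $150K loaded / ~2,000 hours
  workDays = 240
): number {
  const hoursPerYear = (minutesLostPerDay * workDays) / 60;
  return hoursPerYear * hourlyRate;
}
```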

Mistake 3: Third-Party Script Amnesia

Analytics, chat widgets, A/B testing tools, and error monitoring scripts add 200-800ms to LCP and degrade INP by blocking the main thread. Audit every third-party script:

```typescript
// Long Animation Frames API (LoAF) ... the modern replacement for longtask observers
// Shipped in Chrome 123+, provides richer attribution than longtask
if (PerformanceObserver.supportedEntryTypes.includes("long-animation-frame")) {
  new PerformanceObserver((list) => {
    for (const entry of list.getEntries()) {
      if (entry.duration > 50) {
        console.warn("Long animation frame:", {
          duration: entry.duration,
          blockingDuration: entry.blockingDuration,
          scripts: entry.scripts?.map((s) => ({
            sourceURL: s.sourceURL,
            duration: s.duration,
            invoker: s.invoker,
          })),
        });
      }
    }
  }).observe({ type: "long-animation-frame", buffered: true });
} else {
  // Fallback to longtask API for non-Chromium browsers
  new PerformanceObserver((list) => {
    for (const entry of list.getEntries()) {
      if (entry.duration > 50) {
        console.warn("Long task:", {
          duration: entry.duration,
          source: entry.attribution?.[0]?.containerSrc,
        });
      }
    }
  }).observe({ type: "longtask", buffered: true });
}
```

The Long Animation Frames API (LoAF) provides script-level attribution... which specific source file caused the long frame, how long each script ran, and what triggered it. The web-vitals library v5+ uses LoAF internally for INP attribution. If you're diagnosing INP issues in Chromium browsers, LoAF is the tool to reach for first.


When to Apply This

  • You're tracking SEO as a growth channel and CWV affects your rankings
  • Your SaaS dashboard has user-facing pages that need to feel responsive
  • Your CrUX data shows "Needs Improvement" or "Poor" on any metric
  • You're losing deals because your product demo feels slow compared to competitors

When NOT to Apply This

  • Internal tools with fewer than 50 users... invest the time elsewhere
  • Pre-launch MVPs where the product hypothesis isn't validated yet
  • API-only services with no user-facing frontend

Need a performance audit that goes beyond Lighthouse scores? I help SaaS teams identify and fix the real bottlenecks... the ones that affect revenue, not just metrics.


Continue Reading

This post is part of the Performance Engineering Playbook ... covering database optimization, caching strategies, monitoring, and zero-downtime operations.


Get insights like this weekly

Join The Architect's Brief — one actionable insight every Tuesday.

Need help with performance?

Let's talk strategy