Engineering · 6 min read

Web Performance Metrics Every Developer Should Track

Performance metrics tell different stories. Knowing which metric to look at for which problem — and what to do about it — is the difference between useful monitoring and noise.


Nextcraft Engineering Team

The Metrics Landscape

Web performance measurement has two domains:

Lab metrics — measured in controlled conditions (Lighthouse, PageSpeed Insights, WebPageTest). Reproducible, useful for debugging, not representative of real users.

Field metrics — measured from real user sessions (Chrome User Experience Report, Search Console, RUM tools). Representative, but harder to debug against.

For SEO and ranking purposes: field metrics matter. For diagnosing and fixing problems: lab metrics are indispensable. Use both.

The Core Web Vitals Triad

LCP (Largest Contentful Paint)

What it measures: When the largest visible content element finishes rendering.
Good threshold: Under 2.5 seconds
What causes poor LCP: Unoptimized images, slow server response (TTFB), render-blocking resources, lazy-loaded LCP elements.

The most common fix: add the priority prop to the hero's next/image <Image> so the LCP element is preloaded instead of lazy-loaded.

INP (Interaction to Next Paint)

What it measures: Responsiveness to user interactions throughout the page lifetime.
Good threshold: Under 200ms
What causes poor INP: Long JavaScript tasks blocking the main thread, heavy event handlers, large React re-renders, third-party scripts.

The most common fix: reduce the JavaScript shipped to the client with Server Components, and use useTransition for non-urgent state updates.
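Beyond React-specific tools, a framework-agnostic way to improve INP is to break a long task into chunks that yield back to the event loop, so pending interactions can be handled between chunks. A minimal sketch (the function and chunk size are illustrative):

```typescript
// Sketch: process items in small chunks, yielding to the main thread
// between chunks so user input isn't blocked for the whole run.
async function processInChunks<T>(
  items: T[],
  handle: (item: T) => void,
  chunkSize = 50,
): Promise<void> {
  for (let i = 0; i < items.length; i += chunkSize) {
    for (const item of items.slice(i, i + chunkSize)) {
      handle(item);
    }
    // Yield before the next chunk; pending interactions run here.
    await new Promise((resolve) => setTimeout(resolve, 0));
  }
}
```

In React, useTransition serves the same goal declaratively: urgent updates (typing, clicking) paint first, and the transition work is scheduled around them.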

CLS (Cumulative Layout Shift)

What it measures: Unexpected layout shifts over the page's lifetime, scored as the largest burst of shifts.
Good threshold: Under 0.1
What causes poor CLS: Images without dimensions, dynamic content injected above existing content, font swaps (FOUT), ads and embeds.

The most common fix: next/image (adds dimensions automatically), next/font (eliminates FOUT).
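When you can't use next/image (e.g. a third-party embed), the same principle applies: reserve the element's space before it loads. One way is a CSS aspect-ratio derived from known dimensions — a small helper sketch:

```typescript
// Sketch: derive a CSS aspect-ratio value from known dimensions so a
// wrapper reserves space before the content loads, preventing a shift.
function aspectRatio(width: number, height: number): string {
  const gcd = (a: number, b: number): number => (b === 0 ? a : gcd(b, a % b));
  const d = gcd(width, height);
  return `${width / d} / ${height / d}`;
}

// Usage idea: <div style={{ aspectRatio: aspectRatio(1200, 600) }}>…</div>
```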

Secondary Metrics Worth Tracking

TTFB (Time to First Byte): Time from navigation request to first byte of response. Directly affected by server processing time, CDN effectiveness, and geographic distance. Target: under 600ms. Fix: edge caching, CDN, faster database queries.
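One edge-caching approach is setting Cache-Control headers on a route handler so a CDN can serve responses without hitting the origin. A hedged sketch (the route path, data, and cache durations are assumptions, not a recommendation for any specific app):

```typescript
// app/api/products/route.ts (hypothetical route; sketch only)
export async function GET(): Promise<Response> {
  const data = { products: [] }; // stand-in for a real database query
  return new Response(JSON.stringify(data), {
    headers: {
      'Content-Type': 'application/json',
      // Cache at the CDN edge for 5 minutes; serve stale for up to an
      // hour while revalidating in the background.
      'Cache-Control': 'public, s-maxage=300, stale-while-revalidate=3600',
    },
  });
}
```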

FCP (First Contentful Paint): When the first DOM content is painted. Correlates with perceived load time. Affected by render-blocking resources and server response time.

TBT (Total Blocking Time): Sum of time periods where the main thread was blocked for more than 50ms during page load. Strong lab proxy for INP. Useful for diagnosing interactivity problems in a controlled environment.

TTI (Time to Interactive): When the page is fully interactive (not just visually complete). Less important post-INP but still useful for identifying hydration-heavy pages.

Setting Up Real User Monitoring

Track field metrics from your actual users. Options:

Vercel Speed Insights — zero-config RUM for Vercel-deployed apps. Shows Core Web Vitals by page, device, and geography.

// app/layout.tsx
import { SpeedInsights } from '@vercel/speed-insights/next';

export default function Layout({ children }: { children: React.ReactNode }) {
  return (
    <html>
      <body>
        {children}
        <SpeedInsights />
      </body>
    </html>
  );
}

Web Vitals API — capture metrics manually and send to your analytics platform:

// app/components/WebVitals.tsx
'use client';

import { useReportWebVitals } from 'next/web-vitals';

export function WebVitalsReporter() {
  useReportWebVitals((metric) => {
    const body = JSON.stringify({
      name: metric.name,
      value: metric.value,
      rating: metric.rating, // 'good' | 'needs-improvement' | 'poor'
      id: metric.id,
    });
    // sendBeacon survives page unload, unlike a plain fetch;
    // fall back to fetch with keepalive where it's unavailable.
    if (navigator.sendBeacon) {
      navigator.sendBeacon('/api/analytics', body);
    } else {
      fetch('/api/analytics', { method: 'POST', body, keepalive: true });
    }
  });
  
  return null;
}
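The receiving side of that /api/analytics endpoint can be a simple route handler. A minimal sketch — validation is deliberately thin and the storage layer is left as a stub:

```typescript
// app/api/analytics/route.ts (sketch; storage is a stub)
export async function POST(request: Request): Promise<Response> {
  let metric: { name?: unknown; value?: unknown };
  try {
    metric = await request.json();
  } catch {
    return new Response('Invalid JSON', { status: 400 });
  }
  // Reject payloads that don't look like a web-vitals metric.
  if (typeof metric.name !== 'string' || typeof metric.value !== 'number') {
    return new Response('Bad Request', { status: 400 });
  }
  // Real implementation: write to a database or analytics pipeline here.
  return new Response(null, { status: 204 });
}
```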

Reading Performance Data

When looking at performance data, segment it:

By device type: Mobile performance is almost always worse. If your overall scores look fine but mobile scores are poor, you have a real problem — mobile is the majority of web traffic.

By geography: Users far from your server region have higher TTFB. If you're hosted in US-East and 40% of your users are in Europe, European edge nodes can dramatically improve their TTFB.

By page: Aggregate site scores mask problem pages. A homepage scoring 95 and a checkout page scoring 45 isn't "site score 70" — it's a broken checkout page.

Over time: Performance regressions often ship with feature releases. Track week-over-week changes to catch regressions before they impact many users.
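When aggregating raw field samples yourself, use the 75th percentile per page — the percentile Core Web Vitals assessments are based on — rather than a mean, which outliers distort. A sketch (the sample shape is an assumption about how you stored the reported metrics):

```typescript
// Sketch: aggregate raw field samples to p75 per page.
interface Sample {
  page: string;
  value: number;
}

function p75(values: number[]): number {
  const sorted = [...values].sort((a, b) => a - b);
  // Value at or above which the slowest 25% of samples fall.
  return sorted[Math.ceil(sorted.length * 0.75) - 1];
}

function p75ByPage(samples: Sample[]): Record<string, number> {
  const byPage = new Map<string, number[]>();
  for (const { page, value } of samples) {
    const list = byPage.get(page) ?? [];
    list.push(value);
    byPage.set(page, list);
  }
  return Object.fromEntries(
    [...byPage.entries()].map(([page, values]) => [page, p75(values)]),
  );
}
```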

The Performance Budget

A performance budget is a set of limits on metrics — any build that exceeds them fails CI. This prevents performance regressions from shipping:

// .lighthouserc.json
{
  "ci": {
    "assert": {
      "assertions": {
        "largest-contentful-paint": ["error", { "maxNumericValue": 2500 }],
        "cumulative-layout-shift": ["error", { "maxNumericValue": 0.1 }],
        "total-blocking-time": ["error", { "maxNumericValue": 200 }],
        "interactive": ["warn", { "maxNumericValue": 5000 }]
      }
    }
  }
}

Pair with Lighthouse CI in your GitHub Actions pipeline:

- name: Run Lighthouse CI
  uses: treosh/lighthouse-ci-action@v9
  with:
    configPath: '.lighthouserc.json'
    uploadArtifacts: true

Performance budgets are especially valuable on teams where engineers don't habitually check performance. The budget enforces discipline without relying on culture.

The 80/20 of Performance

80% of performance improvements come from:

  1. Server-side rendering (not client-side)
  2. Image optimization with correct dimensions and formats
  3. Eliminating render-blocking scripts
  4. Connection pooling and query optimization (TTFB)
  5. CDN caching for static assets

Everything else is incrementalism. Do the big five first.
