Engineering · 14 min read

React Performance Optimization Handbook

A complete reference for diagnosing and fixing React performance problems — from unnecessary renders to bundle size to server-side optimization.


Nextcraft Agency

The Performance Debugging Mindset

Performance problems are almost always identified by measurement, not intuition. Before optimizing anything, profile first. The thing that feels slow and the thing that is slow are often different.

The tools:

  • React DevTools Profiler: Shows which components rendered, how long each took, and why they rendered
  • Chrome Performance Tab: Shows the full rendering pipeline including layout, paint, and JavaScript execution
  • Lighthouse: Lab performance scores for page load
  • Web Vitals: Real-user field data

Optimize the bottleneck you measured. Don't optimize the code you wrote most recently.
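Before reaching for React-specific tooling, a suspected hot path can be timed directly with the User Timing API, which is available in both browsers and Node. A minimal sketch — `buildIndex` is a stand-in for whatever code you suspect:

```typescript
// Time a suspected hot path with performance.mark/measure.
// `buildIndex` is an illustrative stand-in for the code under suspicion.
function buildIndex(words: string[]): Map<string, number> {
  const index = new Map<string, number>();
  words.forEach((word, i) => index.set(word, i));
  return index;
}

const words = Array.from({ length: 50_000 }, (_, i) => `word-${i}`);

performance.mark('index-start');
const index = buildIndex(words);
performance.mark('index-end');
performance.measure('build-index', 'index-start', 'index-end');

const [measurement] = performance.getEntriesByName('build-index');
console.log(`built ${index.size} entries in ${measurement.duration.toFixed(1)}ms`);
```

Measurements named this way also show up in the Chrome Performance tab's Timings track, so ad-hoc instrumentation and the profiler views line up.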


Part 1: Preventing Unnecessary Re-renders

Understanding When React Renders

A component re-renders when:

  1. Its state changes
  2. Its props change (by reference, not deep equality)
  3. Its context changes
  4. Its parent re-renders (and it's not memoized)

Most React performance problems are caused by #4: parent re-renders triggering cascading re-renders throughout a component tree.
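The "by reference" part of #2 is worth internalizing: two objects with identical contents are still different references, and reference identity is all React's prop comparison looks at. A prop built inline on each render therefore always looks "changed":

```typescript
// Structurally identical objects are not the same reference —
// which is all React's prop comparison considers.
const a = { id: 1, name: 'Desk' };
const b = { id: 1, name: 'Desk' };
const alias = a;

console.log(a === b);         // false — distinct references
console.log(Object.is(a, b)); // false — same verdict
console.log(a === alias);     // true — only identity counts
```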

React.memo: Memoizing Components

React.memo wraps a component and prevents re-renders when props are shallowly equal to the previous render:

code
const ProductCard = React.memo(function ProductCard({ product }: { product: Product }) {
  return (
    <div>
      <h3>{product.name}</h3>
      <p>{product.price}</p>
    </div>
  );
});

// Now ProductCard only re-renders when its `product` prop changes by reference

Caveats:

  • Works on shallow equality — if product is a new object reference with the same values on each parent render, React.memo won't help
  • Has overhead from the shallow comparison — don't wrap every component, only ones with measurable render cost
  • Use the React DevTools Profiler to confirm re-renders are happening before adding memo
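Conceptually, the comparison React.memo performs looks like this — a simplified sketch, not React's actual source:

```typescript
// Simplified sketch of a shallow props comparison: one level deep,
// each value compared with Object.is.
function shallowEqual(a: Record<string, unknown>, b: Record<string, unknown>): boolean {
  if (Object.is(a, b)) return true;
  const keysA = Object.keys(a);
  if (keysA.length !== Object.keys(b).length) return false;
  return keysA.every(key => Object.is(a[key], b[key]));
}

const product = { id: '1', name: 'Desk' };
console.log(shallowEqual({ product }, { product }));                 // true — same reference
console.log(shallowEqual({ product }, { product: { ...product } })); // false — fresh object, memo "misses"
```

The second call is exactly the failure mode from the first caveat: same values, new reference, so the memoized component re-renders anyway.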

useCallback: Stable Function References

When a function is passed as a prop to a memoized child, a new function reference on each render breaks memoization:

code
// Bad — new function reference on every parent render
function Parent() {
  const handleClick = () => console.log('clicked'); // new reference each render
  return <MemoizedChild onClick={handleClick} />;
}

// Good — stable reference
function Parent() {
  const handleClick = useCallback(() => {
    console.log('clicked');
  }, []); // empty deps = created once
  
  return <MemoizedChild onClick={handleClick} />;
}

useMemo: Expensive Computations

Cache expensive computations that don't need to run on every render:

code
function ProductList({ products, filter }: Props) {
  // Expensive filter — only re-run when products or filter changes
  const filteredProducts = useMemo(
    () => products.filter(p => matchesFilter(p, filter)),
    [products, filter]
  );
  
  return (
    <ul>
      {filteredProducts.map(p => <ProductCard key={p.id} product={p} />)}
    </ul>
  );
}

The key question: is this computation actually expensive? For a simple array filter over 20 items, useMemo adds overhead without benefit. Profile first.
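Conceptually, useMemo holds one cached value plus the dependency array that produced it, and re-runs the factory only when a dependency fails an Object.is check. A simplified model (not React's implementation):

```typescript
// Simplified model of useMemo's caching: one cached value per hook,
// invalidated when any dependency changes by Object.is.
function createMemoCell<T>() {
  let deps: unknown[] | undefined;
  let value!: T;
  return (factory: () => T, nextDeps: unknown[]): T => {
    const stale =
      deps === undefined ||
      deps.length !== nextDeps.length ||
      deps.some((dep, i) => !Object.is(dep, nextDeps[i]));
    if (stale) {
      value = factory();
      deps = nextDeps;
    }
    return value;
  };
}

const cell = createMemoCell<number[]>();
let runs = 0;
const expensive = () => { runs++; return [1, 2, 3]; };

cell(expensive, ['all']);   // first call — factory runs
cell(expensive, ['all']);   // same deps — cache hit
cell(expensive, ['cheap']); // dep changed — factory runs again
console.log(runs);          // 2
```

This also makes the overhead concrete: every render still pays for the dependency comparison, which is why wrapping a cheap computation in useMemo is a net loss.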


Part 2: State Architecture

Collocate State

State that only one component needs belongs in that component. State lifted too high in the tree causes unnecessary re-renders in components that don't use it.

code
// Bad — selection state in parent causes entire list to re-render on every selection
function ProductList({ products }: Props) {
  const [selectedId, setSelectedId] = useState<string | null>(null);
  
  return (
    <ul>
      {products.map(p => (
        <ProductCard
          key={p.id}
          product={p}
          selected={p.id === selectedId}
          onSelect={() => setSelectedId(p.id)}
        />
      ))}
    </ul>
  );
}

// Better — if the state is truly per-card (expanded, hovered), let each card own it
// For single selection: memoize ProductCard and pass stable callbacks, so only the
// two cards whose `selected` prop changed re-render — or move selection into a
// dedicated, optimized context

Splitting Context

A single large context object causes every consumer to re-render when any part of the context changes:

code
// Bad — changing `notifications` causes `theme` consumers to re-render
const AppContext = createContext({ user, theme, notifications });

// Good — separate contexts for independent state slices
const UserContext = createContext<User>(null!);
const ThemeContext = createContext<Theme>('light');
const NotificationsContext = createContext<Notification[]>([]);

External State: When to Use Zustand or Jotai

For state that's:

  • Shared across distant parts of the tree
  • Updated frequently
  • Independent of component lifecycle

React's built-in useState + context has re-render problems at scale. External stores like Zustand handle selective re-renders:

code
import { create } from 'zustand';

interface ProjectStore {
  selectedProjectId: string | null;
  setSelectedProject: (id: string) => void;
}

const useProjectStore = create<ProjectStore>((set) => ({
  selectedProjectId: null,
  setSelectedProject: (id) => set({ selectedProjectId: id }),
}));

// Only re-renders when selectedProjectId changes
function ProjectHeader() {
  const selectedId = useProjectStore(state => state.selectedProjectId);
  // ...
}
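The mechanism behind that selectivity is the selector: the store re-runs each subscriber's selector on every update and notifies only when the selected slice actually changed. A minimal sketch of the pattern (illustrative, not Zustand's implementation):

```typescript
// Minimal selector-based store: listeners fire only when their
// selected slice changes between updates.
type Listener = () => void;

function createStore<S extends object>(initial: S) {
  let state = initial;
  const listeners = new Set<Listener>();

  return {
    getState: () => state,
    setState(partial: Partial<S>) {
      state = { ...state, ...partial };
      listeners.forEach(listener => listener());
    },
    subscribeWithSelector<T>(selector: (s: S) => T, onChange: (value: T) => void) {
      let prev = selector(state);
      const listener = () => {
        const next = selector(state);
        if (!Object.is(prev, next)) {
          prev = next;
          onChange(next);
        }
      };
      listeners.add(listener);
      return () => listeners.delete(listener); // unsubscribe
    },
  };
}

const store = createStore({ selectedProjectId: null as string | null, zoom: 1 });
let notifications = 0;
store.subscribeWithSelector(s => s.selectedProjectId, () => notifications++);

store.setState({ zoom: 2 });                 // unrelated slice — no notification
store.setState({ selectedProjectId: 'p1' }); // selected slice changed — notifies
console.log(notifications); // 1
```

In React, the hook form of this pattern (`useProjectStore(state => state.selectedProjectId)`) translates "no notification" into "no re-render", which is exactly what context cannot offer.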

Part 3: Code Splitting and Bundle Size

Dynamic Imports with next/dynamic

Components that are heavy, used conditionally, or below the fold are candidates for code splitting:

code
import dynamic from 'next/dynamic';

// Chart library — heavy, not needed on initial load
const RevenueChart = dynamic(() => import('./RevenueChart'), {
  loading: () => <ChartSkeleton />,
  ssr: false, // Don't SSR charts that need browser APIs
});

// Only downloads the chart library when the component renders
export function Dashboard() {
  return (
    <div>
      <MetricsGrid />
      <RevenueChart />  {/* Chart code loads when this renders */}
    </div>
  );
}

Bundle Analysis

Identify what's in your bundle:

code
# Install the analyzer
npm install @next/bundle-analyzer

# next.config.ts — wrap your config (ESM, so use import rather than require)
import bundleAnalyzer from '@next/bundle-analyzer';

const withBundleAnalyzer = bundleAnalyzer({
  enabled: process.env.ANALYZE === 'true',
});

export default withBundleAnalyzer(nextConfig);

# Run analysis
ANALYZE=true npm run build

Look for:

  • Large dependencies that could be replaced with smaller alternatives
  • Dependencies that should be dynamically imported
  • Duplicate modules (two versions of the same library)
  • Server-only code that leaked into the client bundle

Tree Shaking

Import only what you use:

code
// Bad — imports entire library
import _ from 'lodash';
const result = _.groupBy(items, 'category');

// Good — imports only what's needed
import groupBy from 'lodash/groupBy';
const result = groupBy(items, 'category');

// Or replace with native:
const result = Object.groupBy(items, item => item.category);
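Object.groupBy is an ES2024 addition and may be missing in older runtimes. Where it isn't available, a small reduce gives the same grouping without pulling in lodash at all — a sketch with an illustrative Item type:

```typescript
// Grouping with a plain reduce — no library, nothing extra to tree-shake.
// The Item shape is illustrative.
interface Item { category: string; name: string }

function groupByCategory(items: Item[]): Record<string, Item[]> {
  return items.reduce<Record<string, Item[]>>((groups, item) => {
    (groups[item.category] ??= []).push(item);
    return groups;
  }, {});
}

const grouped = groupByCategory([
  { category: 'desk', name: 'Oak Standing' },
  { category: 'chair', name: 'Task Chair' },
  { category: 'desk', name: 'Pine Corner' },
]);
console.log(grouped.desk.length, grouped.chair.length); // 2 1
```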

Part 4: Rendering Patterns

Virtualization for Long Lists

Rendering 10,000 DOM nodes is expensive. Virtualization renders only what's visible:

code
import { useRef } from 'react';
import { useVirtualizer } from '@tanstack/react-virtual';

function VirtualizedList({ items }: { items: Item[] }) {
  const parentRef = useRef<HTMLDivElement>(null);
  
  const virtualizer = useVirtualizer({
    count: items.length,
    getScrollElement: () => parentRef.current,
    estimateSize: () => 60, // estimated row height
  });
  
  return (
    <div ref={parentRef} style={{ height: '600px', overflow: 'auto' }}>
      <div style={{ height: virtualizer.getTotalSize(), position: 'relative' }}>
        {virtualizer.getVirtualItems().map(virtualItem => (
          <div
            key={virtualItem.key}
            style={{
              position: 'absolute',
              top: virtualItem.start,
              height: virtualItem.size,
            }}
          >
            <ItemRow item={items[virtualItem.index]} />
          </div>
        ))}
      </div>
    </div>
  );
}

Use virtualization when rendering more than ~100 items. The performance improvement is dramatic.
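The core of virtualization is simple windowing math: from the scroll offset, the viewport height, and a row-height estimate, derive the small index range worth rendering, plus an overscan buffer so fast scrolling doesn't show blanks. A sketch for fixed-height rows:

```typescript
// Compute which rows to render for fixed-height rows, with a small
// overscan buffer above and below the viewport.
function visibleRange(
  scrollTop: number,
  viewportHeight: number,
  rowHeight: number,
  itemCount: number,
  overscan = 3,
): { start: number; end: number } {
  const start = Math.max(0, Math.floor(scrollTop / rowHeight) - overscan);
  const end = Math.min(itemCount, Math.ceil((scrollTop + viewportHeight) / rowHeight) + overscan);
  return { start, end };
}

// 10,000 rows, 600px viewport, 60px rows: render 13 rows, not 10,000.
console.log(visibleRange(0, 600, 60, 10_000));      // { start: 0, end: 13 }
console.log(visibleRange(30_000, 600, 60, 10_000)); // a window around index 500
```

Libraries like @tanstack/react-virtual add the hard parts on top — measured variable heights, scroll-element tracking, and stable keys — but the render cost savings come from exactly this windowing.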

Suspense for Async Loading States

Structure your loading states with Suspense for better perceived performance:

code
// Bad — waterfall: parent loads, then child starts loading
export default async function Dashboard() {
  const user = await getUser();  // 100ms
  const projects = await getProjects(user.id);  // 150ms
  // Total: 250ms before anything renders
  return <ProjectList projects={projects} />;
}

// Good — parallel with Suspense
export default async function Dashboard() {
  const user = await getUser();
  
  return (
    <>
      <UserHeader user={user} />
      <Suspense fallback={<ProjectListSkeleton />}>
        <ProjectList userId={user.id} />  {/* Fetches independently */}
      </Suspense>
    </>
  );
}

// ProjectList fetches its own data
async function ProjectList({ userId }: { userId: string }) {
  const projects = await getProjects(userId);
  return <ul>{projects.map(p => <ProjectCard key={p.id} project={p} />)}</ul>;
}

The header renders immediately. The project list shows a skeleton while its data fetches. Users see content faster even though total data loading time is similar.


Part 5: Server Component Optimization

Move Data Fetching to the Leaves

Fetch data as close to where it's used as possible. This enables parallel fetching:

code
// Bad — parent fetches all data serially
async function Dashboard() {
  const projects = await getProjects();
  const metrics = await getMetrics();
  const activity = await getActivity();
  
  return (
    <>
      <ProjectList projects={projects} />
      <MetricsGrid metrics={metrics} />
      <ActivityFeed activity={activity} />
    </>
  );
}

// Good — each component fetches its own data in parallel
async function Dashboard() {
  return (
    <>
      <ProjectList />     {/* fetches projects internally */}
      <MetricsGrid />     {/* fetches metrics internally */}
      <ActivityFeed />    {/* fetches activity internally */}
    </>
  );
}

React renders these Server Components in parallel by default — the page renders as fast as the slowest individual query, not the sum of all queries.
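That "slowest, not sum" behavior is just the difference between summing awaits and racing them, which plain promises show directly — a sketch with simulated query times:

```typescript
// Simulated queries: serial awaits cost the sum of the delays,
// parallel awaits cost roughly the slowest one.
const query = (ms: number) => new Promise<void>(resolve => setTimeout(resolve, ms));

async function serialLoad(): Promise<number> {
  const start = Date.now();
  await query(100); // projects
  await query(150); // metrics
  await query(120); // activity
  return Date.now() - start; // ~370ms — the sum
}

async function parallelLoad(): Promise<number> {
  const start = Date.now();
  await Promise.all([query(100), query(150), query(120)]);
  return Date.now() - start; // ~150ms — the slowest query
}

const serial = await serialLoad();
const parallel = await parallelLoad();
console.log({ serial, parallel });
```

With leaf-level fetching, each Server Component's await starts as soon as React reaches it, so siblings behave like the Promise.all case without you writing it.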

Streaming with Suspense

Wrap slow Server Components in Suspense to stream them after the page shell:

code
export default function ReportsPage() {
  return (
    <main>
      <PageHeader title="Reports" />  {/* Renders immediately */}
      
      <Suspense fallback={<ReportSkeleton />}>
        <HeavyReport />  {/* Streams in when ready — doesn't block page load */}
      </Suspense>
    </main>
  );
}

This changes the user experience from "blank page for 2 seconds" to "page with header immediately, report appears after 2 seconds." Both take 2 seconds; only the latter feels fast.


Part 6: Measuring Success

After optimization, measure the impact:

Before/after LCP: Record LCP in Lighthouse before and after. Anything over 10% improvement on a critical page is meaningful.

Bundle size delta: npm run build reports page sizes. Track the JavaScript size for changed pages.

Re-render count: React DevTools Profiler shows render counts. A component that rendered 50 times in a user session and now renders 5 times is a real improvement.

Real user metrics: Changes in Search Console CWV data (takes 2–4 weeks to appear), changes in your RUM tool.

Performance optimization without measurement is guesswork. Measurement without a hypothesis is data collection. The combination — hypothesis, measurement, optimization, re-measurement — is engineering.
