SEO · 9 min read

Technical SEO for Developers: What Actually Moves Rankings

Most technical SEO guides are written for marketers. This one is written for developers — the people who can actually implement the fixes that matter.


Nextcraft Engineering Team

Why Developers Should Own Technical SEO

Search engines are software. They fetch URLs, parse HTML, execute some JavaScript, and index the content they find. The factors that affect how well they do this job are almost entirely under the control of engineers, not marketers.

Content strategy, keyword research, link building — those are marketing problems. Rendering, response times, structured data, crawl efficiency — those are engineering problems. And engineering problems are the ones with the clearest cause-and-effect relationships.

The Rendering Stack

Before any other technical SEO consideration, get your rendering model right.

Search engines prefer fully rendered HTML. The spectrum from best to worst for crawlability:

  1. Static HTML — served as a file, always complete on delivery
  2. Server-Side Rendering (SSR) — generated per-request, complete on delivery
  3. Incremental Static Regeneration (ISR) — static with periodic updates, almost always complete
  4. Client-Side Rendering (CSR) — empty HTML shell, content requires JavaScript execution

Modern Next.js App Router applications are Server Components first, which puts them squarely in categories 2 and 3. CSR is the exception, not the default.

If you have content in Client Components that needs to rank, wrap it: keep the data-fetching in a Server Component and pass the data down to a Client Component for interactivity.
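A minimal sketch of that split; getReviews and ReviewCarousel are hypothetical stand-ins for your own data layer and interactive component:

code
// app/product/[id]/page.tsx (Server Component: fetched content ships in the HTML)
import { getReviews } from '@/lib/reviews'; // hypothetical data-access helper
import { ReviewCarousel } from './review-carousel';

export default async function ProductPage({ params }: { params: { id: string } }) {
  const reviews = await getReviews(params.id); // runs on the server

  // Review text is in the initial HTML; only the carousel behavior needs client JS
  return <ReviewCarousel reviews={reviews} />;
}

// app/product/[id]/review-carousel.tsx (Client Component: interactivity only)
'use client';

export function ReviewCarousel({ reviews }: { reviews: { id: string; text: string }[] }) {
  // ...carousel state and handlers live here; the content arrived as props...
  return <ul>{reviews.map((r) => <li key={r.id}>{r.text}</li>)}</ul>;
}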

Response Codes: Getting Them Right

Search engines trust response codes. Getting them wrong can cost you indexation:

301 vs 302: Use 301 for permanent redirects (URL changes, domain migrations). Use 302 only when the redirect is genuinely temporary: a 302 tells Google the original URL will return, so it may keep the old URL indexed rather than consolidating ranking signals on the new one.
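In Next.js, redirects declared in next.config.js take a permanent flag, which maps to 308 (treated like a 301) or 307 (treated like a 302). The paths below are illustrative:

code
// next.config.js
module.exports = {
  async redirects() {
    return [
      {
        source: '/old-blog/:slug',
        destination: '/blog/:slug',
        permanent: true, // 308: the permanent equivalent of a 301
      },
      {
        source: '/sale',
        destination: '/spring-sale',
        permanent: false, // 307: temporary, like a 302
      },
    ];
  },
};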

404 vs 410: A 404 tells Googlebot the page is missing — it will check back. A 410 (Gone) tells Googlebot the page has been intentionally removed — it will stop checking. Use 410 for deleted content you never want re-indexed.
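Next.js has no built-in helper for 410, but middleware can return one. A minimal sketch, assuming a hardcoded list of removed paths (in practice this might come from your CMS):

code
// middleware.ts
import { NextResponse } from 'next/server';
import type { NextRequest } from 'next/server';

// Illustrative list of removed URLs; swap in your own source of truth
const GONE = new Set(['/blog/deleted-post', '/discontinued-product']);

export function middleware(request: NextRequest) {
  if (GONE.has(request.nextUrl.pathname)) {
    return new NextResponse(null, { status: 410 }); // Gone: Googlebot stops checking back
  }
  return NextResponse.next();
}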

Soft 404s: The worst kind. A page returns 200 but contains no meaningful content (empty search results, "no products found"). Google detects these and may deindex the page. Either redirect to a relevant page or return a proper 404.

In Next.js, notFound() returns a real 404 for missing content instead of a soft 404:

code
import { notFound } from 'next/navigation';
import { getPost } from '@/lib/posts'; // wherever your post data access lives
import { PostContent } from '@/components/post-content';

export default async function PostPage({ params }: { params: { slug: string } }) {
  const post = await getPost(params.slug);

  if (!post) {
    notFound(); // Renders the 404 page with a real 404 status, not a soft 404
  }

  return <PostContent post={post} />;
}

Canonical URLs

Duplicate content doesn't get you penalized — it just dilutes your crawl budget and splits ranking signals. Canonicals solve this by telling search engines which version of a URL is the "real" one.

Common duplicate scenarios:

  • https://example.com/page and https://www.example.com/page
  • https://example.com/page and https://example.com/page?utm_source=newsletter
  • https://example.com/products and https://example.com/products?sort=price

In Next.js, set metadataBase once in the root layout; per-page canonical paths then resolve against it:

code
// app/layout.tsx
import type { Metadata } from 'next';

export const metadata: Metadata = {
  metadataBase: new URL('https://www.example.com'),
};

// app/products/page.tsx (imports the Metadata type the same way)
export const metadata: Metadata = {
  alternates: {
    canonical: '/products',
  },
};

This generates: <link rel="canonical" href="https://www.example.com/products" />

Structured Data That Actually Gets Used

Structured data (JSON-LD) gives search engines machine-readable context about your content. Rich results (star ratings, breadcrumbs, FAQs in SERPs) come from this.

The types most likely to generate rich results for a software/agency site:

code
// FAQ — appears as expandable Q&A in SERPs
const faqSchema = {
  "@context": "https://schema.org",
  "@type": "FAQPage",
  "mainEntity": [
    {
      "@type": "Question",
      "name": "How much does Next.js development cost?",
      "acceptedAnswer": {
        "@type": "Answer",
        "text": "Next.js development projects typically range from..."
      }
    }
  ]
};

// Article — enables article-specific rich results
const articleSchema = {
  "@context": "https://schema.org",
  "@type": "TechArticle",
  "headline": "Technical SEO for Developers",
  "author": { "@type": "Person", "name": "Nextcraft Team" },
  "datePublished": "2026-04-14",
  "publisher": {
    "@type": "Organization",
    "name": "Nextcraft"
  }
};
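To emit the markup, serialize the object into a script tag from the page component; a minimal sketch using the articleSchema defined above:

code
// In the page component: render the schema as a JSON-LD script tag
export default function BlogPostPage() {
  return (
    <>
      <script
        type="application/ld+json"
        dangerouslySetInnerHTML={{ __html: JSON.stringify(articleSchema) }}
      />
      {/* ...page content... */}
    </>
  );
}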

Validate everything with Google's Rich Results Test before publishing.

Crawl Budget and Internal Linking

Crawl budget is the number of pages Googlebot will crawl on your site in a given timeframe. For most sites this is unlimited in practice. It matters when you have:

  • Thousands of generated pages (e-commerce faceted navigation, auto-generated URLs)
  • Many low-quality or thin pages
  • Slow server response times

For agency/SaaS sites, the more practical concern is crawl discovery — making sure Google can find all your pages by following internal links.

The rule: every page you want indexed should be reachable within three clicks of the homepage by following internal links. Pages that exist only in the sitemap, with no inbound internal links, tend to rank poorly.

Build internal linking into your content strategy:

  • Blog posts should link to relevant service pages
  • Service pages should link to relevant case studies
  • Case studies should link to the services they demonstrate

The robots.txt and Sitemap Relationship

robots.txt controls which paths crawlers are allowed to access. Your sitemap tells them which paths contain your best content.

These should be consistent: don't list a URL in your sitemap if it's disallowed in robots.txt. And don't include pages you don't want indexed in your sitemap (thin pages, duplicate content, admin routes).

In Next.js:

code
// app/robots.ts
import type { MetadataRoute } from 'next';

export default function robots(): MetadataRoute.Robots {
  return {
    rules: {
      userAgent: '*',
      allow: '/',
      // Don't disallow /_next/; Googlebot needs those JS/CSS assets to render pages
      disallow: ['/api/', '/admin/'],
    },
    sitemap: 'https://www.example.com/sitemap.xml',
  };
}
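And the sitemap side; getAllPosts is a placeholder for whatever data source backs your dynamic routes:

code
// app/sitemap.ts
import type { MetadataRoute } from 'next';
import { getAllPosts } from '@/lib/posts'; // hypothetical data-access helper

export default async function sitemap(): Promise<MetadataRoute.Sitemap> {
  const posts = await getAllPosts();

  return [
    { url: 'https://www.example.com', lastModified: new Date() },
    { url: 'https://www.example.com/products', lastModified: new Date() },
    // List only pages you want indexed: no thin pages, duplicates, or admin routes
    ...posts.map((post) => ({
      url: `https://www.example.com/blog/${post.slug}`,
      lastModified: post.updatedAt,
    })),
  ];
}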

The Page Speed Connection

Page speed is a confirmed ranking factor for both mobile and desktop. But more importantly, it affects crawl efficiency: Googlebot allocates crawl time based on server responsiveness. Slow servers get crawled less.

The technical SEO page speed checklist:

  • Time to First Byte (TTFB) under 600ms — use edge caching or a CDN
  • No render-blocking resources in <head> — Next.js handles fonts via next/font; load third-party JS with next/script
  • Images served at correct dimensions — use next/image
  • Text compressed — Gzip or Brotli at the server/CDN level
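A sketch of the image and script items in component form (the analytics URL is a placeholder):

code
import Image from 'next/image';
import Script from 'next/script';

export default function Hero() {
  return (
    <section>
      {/* Served at declared dimensions; Next.js generates optimized variants */}
      <Image src="/hero.jpg" alt="Product screenshot" width={1200} height={630} priority />

      {/* Loads after hydration instead of blocking first paint */}
      <Script src="https://example.com/analytics.js" strategy="afterInteractive" />
    </section>
  );
}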

Technical SEO is unglamorous, systematic work. But it's the foundation everything else builds on. Get it right once and it compounds.
