What We Learned Migrating to React Server Components
Notes from migrating our SPA to the Next.js App Router and React Server Components. What improved, what broke, and what surprised us.

Where we started
Our main app was a client-side React SPA. Vite bundler, React Router, Redux for state, a fairly standard REST API behind it. Built over about two years by a team that fluctuated between three and six people. Not a disaster. Not great either.
The problem was load time. On a decent connection, the app would hydrate and you'd see content in maybe two seconds. On a mid-range phone over a spotty 3G connection (which described a big chunk of our actual users), you'd stare at a white screen, then a spinner, then another spinner, then finally something you could interact with. Sometimes 4+ seconds of that. We had a Lighthouse score we didn't like talking about.
The bundle was the main offender. We were shipping markdown-it, date-fns, prismjs, a couple of charting libraries, and a bunch of utility code to every single visitor just to render what was mostly static content. The blog pages were the worst. A post that was just text and some code snippets was pulling in 380KB of JavaScript. For what? Syntax highlighting that could have been done anywhere. Date formatting that didn't need to happen in the browser. It was wasteful and we knew it.
We'd talked about server-side rendering before. Tried a half-hearted implementation once that got abandoned when the hydration mismatches started piling up. That was 2024. By late 2025, React Server Components were stable, the Next.js App Router felt ready, and we were tired of optimizing around a problem that felt architectural.
So we committed to the migration. Three weeks estimated. A team of four.
Early decisions
The first real decision was whether to migrate incrementally or rewrite from the App Router up. We chose incremental: keep the existing pages working, migrate route by route, test as we go. This turned out to be the right call, but it created a weird period where half the app was on the old Pages Router and half on the new App Router, which made debugging confusing. Error messages from Next.js weren't always clear about which system was complaining.
We started with the pages that had the most to gain: the blog post renderer, the sidebar, the documentation pages. Stuff that was almost entirely static. The theory was simple: move these to Server Components, and the libraries they depend on stop shipping to the browser entirely.
// Before: client-side, ships markdown-it to every visitor
import { useState, useEffect } from 'react';
import MarkdownIt from 'markdown-it';
import Prism from 'prismjs'; // fed to a highlight plugin elsewhere in the real file

export function PostRenderer({ rawMarkdown }) {
  const [html, setHtml] = useState('');
  useEffect(() => {
    const md = new MarkdownIt();
    setHtml(md.render(rawMarkdown));
  }, [rawMarkdown]);
  return <div dangerouslySetInnerHTML={{ __html: html }} />;
}
// After: Server Component, markdown-it stays on the server
import MarkdownIt from 'markdown-it';
import { db } from '@/lib/db'; // our shared Prisma client

export async function PostRenderer({ slug }) {
  const post = await db.post.findUnique({ where: { slug } });
  const md = new MarkdownIt();
  const html = md.render(post.content);
  return <div dangerouslySetInnerHTML={{ __html: html }} />;
}
No "use client" directive. No useState, no useEffect. The component runs on the server, renders HTML, sends it down. The browser never sees markdown-it. That was the promise, and for these simple cases, it worked exactly as advertised.
The second decision was what to do about Redux. We had a global store handling user auth state, shopping cart, UI preferences, and (this was the problem) a bunch of data that was really just server state. Product listings, blog post metadata, user profiles. All of it piped through Redux.
Server Components don't have access to React Context. No Provider tree, no useSelector. So we had to decide: migrate state management entirely, or split it. We split it. Interactive stuff (cart, theme toggle, notification preferences) stayed in a client-side Zustand store; we dropped Redux in the process, partly for this reason, partly because nobody liked writing Redux anymore. Everything else became direct database queries or server-side fetch calls inside the Server Components.
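For a sense of scale, the client store ended up tiny. A minimal sketch of its shape; the state and action names here are simplified, not our exact store:
// stores/app-store.ts (sketch; names simplified)
import { create } from 'zustand';

type CartItem = { id: string; quantity: number };

interface AppState {
  cart: CartItem[];
  theme: 'light' | 'dark';
  addToCart: (item: CartItem) => void;
  toggleTheme: () => void;
}

export const useAppStore = create<AppState>((set) => ({
  cart: [],
  theme: 'light',
  addToCart: (item) => set((s) => ({ cart: [...s.cart, item] })),
  toggleTheme: () => set((s) => ({ theme: s.theme === 'light' ? 'dark' : 'light' })),
}));
Only "use client" components call useAppStore. Server Components never touch it.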
This was the right architectural move. It also took a full week longer than expected.
The messy middle
Week two is when things got uncomfortable.
The first real surprise was third-party libraries. About half of our react-* npm packages broke when used inside Server Components. The reason: they used hooks or browser APIs internally but didn't ship "use client" directives in their published files. The error messages were sometimes helpful, sometimes not. One carousel library just threw a cryptic webpack "window is not defined" error, buried three levels deep in a dependency.
Our fix was a pattern we used over and over:
// components/client-wrappers/chart-wrapper.tsx
"use client";
// Re-exporting from a "use client" file marks the library component as a Client Component
export { BarChart } from 'react-charts-lib';
Just re-export the component from a file that has the client directive. Felt hacky. Worked fine. We ended up with a client-wrappers directory that had about twelve of these files by the end. I still don't love it, but the library ecosystem hasn't fully caught up yet.
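Usage from a Server Component then looks like any other import; the chart still runs on the client, but the page around it doesn't (getChartData here is a hypothetical query helper):
// app/stats/page.tsx (Server Component)
import { BarChart } from '@/components/client-wrappers/chart-wrapper';
import { getChartData } from '@/lib/queries/stats'; // hypothetical helper

export default async function StatsPage() {
  const data = await getChartData(); // runs on the server
  return <BarChart data={data} />;   // props crossing the boundary must be serializable
}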
The second surprise was more insidious. We shipped a build that looked fine locally, passed tests, and then we noticed the client bundle had ballooned by 2MB. What happened: a Server Component imported from a shared utility file. That utility file also exported a map component that pulled in Mapbox GL. The bundler, seeing the import from a file that contained both server and client exports, pulled the entire thing into the client bundle. The fix was splitting the utility file into two โ one for server-only exports, one for client exports. But we didn't catch it for two days, and it could have easily gone to production.
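The shape of the fix, sketched with hypothetical file and export names:
// lib/server-utils.ts — safe to import from Server Components
export { formatDate } from './dates';
export { truncate } from './strings';

// lib/client-utils.ts — anything that drags in browser-only dependencies
"use client";
export { MapView } from './map-view'; // this is what pulls in Mapbox GL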
This is the kind of thing that makes me nervous about RSC long-term. The boundary between server and client code is invisible in the file system. There's no visual distinction. You have to be disciplined about imports, and discipline doesn't scale to a team of twenty.
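One mitigation worth knowing about: the server-only package. Import it at the top of any module that must never reach the browser, and the build fails loudly if a Client Component pulls it in, instead of silently bloating the bundle:
// lib/queries/posts.ts
import 'server-only'; // build error if this file ever ends up in the client graph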
Then there was the "use client" boundary problem. A Server Component can render a Client Component, but you can't go the other direction: a Client Component can't import a Server Component. We had a layout where a server-rendered sidebar was supposed to sit inside a client-side interactive panel. The fix was restructuring so the panel received the sidebar through its children prop:
// layout.tsx (Server Component)
import { InteractivePanel } from './interactive-panel'; // Client Component
import { Sidebar } from './sidebar'; // Server Component

export default function Layout() {
  return (
    <InteractivePanel>
      <Sidebar />
    </InteractivePanel>
  );
}
Logical once you understand the model. But we had three or four spots where the component hierarchy had to be rearranged, and each one was a small puzzle to figure out.
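For completeness, the client half of that pattern is unremarkable. A sketch of roughly what our InteractivePanel looked like; it never imports the sidebar, it just renders whatever server-rendered children it's handed:
// interactive-panel.tsx (Client Component)
"use client";
import { useState, type ReactNode } from 'react';

export function InteractivePanel({ children }: { children: ReactNode }) {
  const [open, setOpen] = useState(true);
  return (
    <div>
      <button onClick={() => setOpen((o) => !o)}>Toggle panel</button>
      {/* server-rendered content, passed through untouched */}
      {open && children}
    </div>
  );
}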
The team conversation nobody expected
This is the section that isn't really about code.
When we started putting database queries directly inside React components, our two backend engineers had a strong reaction. Not quite revolt, but close. Ravi said something like "we spent years getting business logic out of the view layer and now you want me to put Prisma queries in JSX?" Fair point, honestly.
The argument for it: Server Components run on the server. There's no reason to build a REST endpoint, serialize the data to JSON, send it over HTTP, deserialize it, and put it into state when the component is already running on the machine that has the database. You're adding latency and complexity for an abstraction boundary that doesn't exist anymore.
The argument against it: where do the queries go when a mobile app needs the same data? Where does the business logic live? If your validation and data access is scattered across fifty React components, how do you audit it?
We didn't fully resolve this. What we landed on was a compromise. Data access goes through a lib/queries layer: plain async functions that call Prisma and return typed objects. The Server Components call these functions, not raw Prisma. So the queries are colocated with the UI, but the actual database interaction is in a shared layer that a future API could also use.
// lib/queries/posts.ts
import { db } from '@/lib/db'; // shared Prisma client

export async function getPostBySlug(slug: string) {
  return db.post.findUnique({
    where: { slug },
    include: { author: true, tags: true },
  });
}

// app/blog/[slug]/page.tsx (Server Component)
import { notFound } from 'next/navigation';
import { getPostBySlug } from '@/lib/queries/posts';
import { PostRenderer } from '@/components/post-renderer';

export default async function BlogPost({ params }) {
  const post = await getPostBySlug(params.slug);
  if (!post) notFound();
  return <PostRenderer post={post} />;
}
Ravi was okay with this. Meera thought we should have gone further and kept a full API layer. I'm still not sure who's right. The lib/queries approach works for now, but I can see it getting messy as the app grows. There's a version of this codebase six months from now where someone puts auth checks in a component instead of the query layer and we have a security hole. That's a process problem, not a technology problem, but the technology makes it easier to make the mistake.
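The convention we're trying to hold is that anything security-sensitive lives in the query layer, next to the data access. A sketch of what that looks like; requireUser is a stand-in for whatever session helper your auth setup provides:
// lib/queries/drafts.ts
import 'server-only';
import { db } from '@/lib/db';
import { requireUser } from '@/lib/auth'; // hypothetical session helper

export async function getDraftsForCurrentUser() {
  const user = await requireUser(); // throws or redirects if unauthenticated
  return db.post.findMany({
    where: { authorId: user.id, published: false },
  });
}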
There was also a cultural adjustment period. Our frontend devs were used to thinking of components as browser things. Effects, event handlers, DOM manipulation. Now some components were basically controller functions that happened to return JSX. The mental model shift isn't small. Two people on the team said it clicked after about a week. One person said it still felt wrong at the end of the migration. That person isn't wrong to feel that way; the ergonomics are genuinely different from what React has been for a decade.
Caching and revalidation
I want to mention this because it was a time sink that I didn't anticipate.
Next.js has aggressive caching defaults in the App Router. fetch calls in Server Components are cached by default. Route segments are statically rendered where possible. This is great for performance and initially confusing for development, because you change data in the database and the page doesn't update.
We hit this on day three. Someone updated a blog post in the CMS, the page still showed the old content, and we spent an hour thinking our database queries were broken before realizing the page was cached. The fix:
// Force dynamic rendering for pages that need fresh data
export const dynamic = 'force-dynamic';
// Or revalidate on a timer
export const revalidate = 60; // seconds
For most of our content pages, a 60-second revalidation was fine. For the dashboard and user-specific pages, we needed force-dynamic. For the blog, we ended up using on-demand revalidation triggered by a webhook from the CMS.
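The webhook endpoint is an ordinary Route Handler. A sketch, assuming the CMS posts the changed slug along with a shared secret we keep in an environment variable:
// app/api/revalidate/route.ts
import { revalidatePath } from 'next/cache';
import { NextResponse } from 'next/server';

export async function POST(request: Request) {
  const { slug, secret } = await request.json();
  if (secret !== process.env.REVALIDATE_SECRET) {
    return NextResponse.json({ error: 'invalid secret' }, { status: 401 });
  }
  revalidatePath(`/blog/${slug}`); // the next request re-renders this post
  return NextResponse.json({ revalidated: true });
}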
The caching system is powerful but has a lot of knobs. We spent more time configuring caching behavior than we did on the actual component migration. The docs are decent, but there are edge cases (like what happens when a cached page renders a component that calls a non-cached fetch inside it) where the behavior wasn't obvious and we had to test empirically.
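Per-request control turned out to be the easiest way to test those cases: fetch inside Server Components accepts Next-specific cache options, so individual calls can opt out of or override the page-level behavior. The URLs here are placeholders:
// Opt a single request out of the data cache entirely
const live = await fetch('https://api.example.com/stats', { cache: 'no-store' });

// Or give one request its own revalidation window, independent of the page
const posts = await fetch('https://api.example.com/posts', {
  next: { revalidate: 300 },
});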
Streaming and Suspense
One thing that worked better than expected: streaming with Suspense boundaries. We wrapped slow data fetches in Suspense, and the page shell goes out immediately with the fast parts, then the slower sections fill in as they resolve.
import { Suspense } from 'react';

export default function DashboardPage() {
  return (
    <div>
      <h1>Dashboard</h1>
      <Suspense fallback={<StatsSkeleton />}>
        <StatsPanel />
      </Suspense>
      <Suspense fallback={<ActivitySkeleton />}>
        <RecentActivity />
      </Suspense>
    </div>
  );
}
StatsPanel and RecentActivity are both async Server Components that fetch their own data. The page renders the h1 and the skeleton loaders immediately, then each section pops in as its data arrives. No client-side JavaScript orchestrating this. No loading state management. It just works.
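Each panel is just an async component that awaits its own query. Roughly, with getStats standing in for the real query function:
// stats-panel.tsx (async Server Component)
import { getStats } from '@/lib/queries/stats'; // hypothetical query helper

export async function StatsPanel() {
  const stats = await getStats(); // Suspense shows the skeleton while this awaits
  return (
    <section>
      <p>{stats.activeUsers} active users</p>
      <p>{stats.signupsToday} signups today</p>
    </section>
  );
}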
This was probably the single most satisfying part of the migration. The old app had a useEffect waterfall where the dashboard would fetch stats, wait, then fetch activity, wait, then render. Sequential by accident. Now both fetches happen in parallel on the server and stream to the client independently. The dashboard feels noticeably snappier even though the data isn't arriving faster; it's just not blocked on a sequence anymore.
The numbers
Time to Interactive (TTI) on a mid-range phone over 3G: 4.2 seconds before, 1.1 seconds after.
Client JavaScript bundle for blog pages: 380KB before, 97KB after.
Mobile Lighthouse performance score: 61 before, 94 after.
First Contentful Paint: 2.8 seconds before, 0.6 seconds after.
The timeline: three weeks estimated, four and a half weeks actual. Team of four. Most of the overrun was the third-party library issues and the Redux-to-Zustand migration, not the RSC conversion itself.
These numbers are real and they justified the effort. A nearly 4x improvement in TTI on mobile translates to actual user behavior. Our bounce rate on mobile dropped measurably in the two weeks after launch. I won't attribute it entirely to the migration because we also fixed some content issues around the same time, but the correlation is there.
What I'd do differently
I'd map the third-party library situation before starting. Literally go through package.json, check each react-* dependency for "use client" support, and build the wrapper files in advance. We did this reactively and it ate at least three days of scattered debugging.
I'd enforce the server/client file boundary from day one. Separate directories, lint rules, something. The shared utility file incident that bloated the bundle by 2MB was preventable with better project structure.
I'd also spend more time upfront aligning the team on where data access logic should live. The backend/frontend tension around database queries in components was a real friction point, and we'd have moved faster if we'd had the lib/queries convention agreed on before writing the first Server Component.
And honestly, I'm not fully sure we made the right call on every page. Some of our interactive features (the product configurator, the search) are heavily client-side, and the Server Component model adds complexity there without much benefit. We have a few pages that are almost entirely "use client" components wrapped in a thin server layout, and I wonder if those would have been simpler left as they were.
The migration was worth it. The performance difference is real. The architecture is better in most places. But there's a new category of mistakes to make (invisible bundle boundary violations, caching surprises, the server/client mental model split) and I don't think we've built the habits yet to catch them reliably. It's a young pattern. We're still learning the shape of it.
Four months in now and I still check the bundle analyzer more often than I used to.
Written by
Anurag Sinha
Developer who writes about the stuff I actually use day-to-day. If I got something wrong, let me know.