Why we deploy on Vercel — and what we gave up to do it
Engineering 23 March 2026 · 5 min read

Vercel is not the cheapest option. It is not the most flexible option. It is the one where deploys just work — and for a studio that ships fast, that is the right trade.

Robert Okoroafor
FORGE

Every infrastructure decision is a trade. You are not choosing the best option — you are choosing the set of compromises you can live with. Vercel is our choice. Here is what that means in practice.

What Vercel actually does well

The deploy pipeline is the thing. Push to main, get a production URL in under two minutes. Every pull request gets its own preview URL automatically. Rollback is a single click. Edge network handles the distribution without configuration.

None of this is magic. It is good engineering wrapped in good defaults. The defaults happen to be the right ones for the kind of projects we build — Astro sites, Next.js apps, API routes at the edge. The friction of deploying is close to zero.

For a studio that ships frequently, zero-friction deploys compound. Every time a deploy is not a conversation, a decision, or a configuration problem, someone is doing design work instead. That is the real cost of a slow deploy pipeline — not the minutes, but the context switching.

What Vercel does not do

Long-running processes. Background jobs that outlast the function timeout. WebSocket connections that need to stay open. Anything that requires a persistent server rather than a stateless function.

These are architectural constraints, not bugs. Vercel is a platform for a specific model of computing — request-response, edge-first, stateless. Projects that need something different are projects that need a different platform.

We have hit this ceiling twice. Both times the answer was to move the long-running work to a separate service — a Railway instance, a Cloudflare Worker with Durable Objects — and keep the front-end on Vercel. The split architecture is more complex, but the complexity is contained.
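A minimal sketch of that split, under assumptions: the Vercel function stays stateless, wraps the request into a job, hands it to the external service, and returns 202 without waiting for the work to finish. `WORKER_URL` and the `"video-transcode"` job kind are hypothetical names for illustration, not part of any real deployment.

```typescript
import { randomUUID } from "node:crypto";

// A job envelope the external worker (e.g. a Railway instance) understands.
type Job = { id: string; kind: string; payload: unknown };

function buildJob(kind: string, payload: unknown): Job {
  return { id: randomUUID(), kind, payload };
}

// Stateless handler on Vercel: accept, hand off, acknowledge.
export async function handler(req: { body: unknown }) {
  const job = buildJob("video-transcode", req.body); // hypothetical job kind

  // Post the job to the long-running service; its completion is the
  // worker's problem, not this function's.
  await fetch(process.env.WORKER_URL!, {
    method: "POST",
    headers: { "content-type": "application/json" },
    body: JSON.stringify(job),
  });

  return { status: 202, body: { accepted: job.id } };
}
```

The only state that crosses the boundary is the job envelope itself, which is what keeps the complexity contained.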

The pricing conversation

Vercel is not cheap at scale. The free tier is generous. The pro tier is reasonable. Enterprise pricing is real and requires a conversation.

The mistake is comparing Vercel's price to a raw cloud instance. The comparison should be to the engineering time you are not spending on infrastructure. The DevOps work that Vercel absorbs — CDN configuration, SSL certificates, cache invalidation, edge routing, preview environments — has a cost when you do it yourself. It is just a cost that does not appear on a single invoice.

For studios without dedicated infrastructure engineers, Vercel's price is almost always cheaper than the alternative. For large teams with those engineers already in place, the calculation is different.

What we have learned

Environment variables with newlines in them will break your build in a way that takes longer than it should to diagnose. The error message is clear once you know what to look for.
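What that looks like in practice, as a sketch: a multi-line value such as a PEM key often arrives from the environment with literal `\n` sequences instead of real line breaks, and restoring them before use is a one-liner. `SIGNING_KEY` is a hypothetical variable name for illustration.

```typescript
// Multi-line secrets pasted into an env var dashboard frequently end up
// with the two characters backslash + "n" where a newline should be.
// This restores the real line breaks before the value is used.
function restoreNewlines(value: string): string {
  return value.replace(/\\n/g, "\n");
}

// e.g. const key = restoreNewlines(process.env.SIGNING_KEY ?? "");
```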

Preview deployments are underused. Clients who can click a URL and see exactly what will go live ask better questions and approve faster. The URL is worth sending every time.

The analytics are basic but honest. Vercel's built-in analytics do not replace proper instrumentation, but they catch regressions in Core Web Vitals before Google does.

And the deploy logs are your friend. When something goes wrong — and eventually something always goes wrong — the logs are detailed enough to find the problem without guessing.
