It has nothing to do with the products being bad.
Vercel, Supabase, Neon, Railway, Render, Fly.io, and the rest are often excellent at what they do. The tradeoffs they make just don't match the way I want to operate, and they never have.
I want to know what's running on my server, I want to be able to log in and fix things when they break, and I want a hosting bill that doesn't move when traffic does.
What's interesting is that this used to be a contrarian position and now it's becoming a common one.
Self-hosting is having a moment, and a lot of builders who jumped on the managed stack a few years ago are quietly moving back to VPSs. The reasons are worth examining because they're a mix of legitimate technical complaints and emotional reactions to a market that's been overreaching for a decade.
What Managed Services Actually Sell
To be fair, the managed stack solves real problems.
- Vercel makes deploying a Next.js app trivial and takes care of CDN, observability, and more.
- Supabase wraps Postgres in an SDK and handles auth, edge functions, and a host of other things.
- Neon does serverless Postgres with branching that's quite useful for development workflows.
- Render and Railway take the friction out of deploying a side project that needs a few services.
Managed services aren't only for the cloud-native crowd, either. WP Engine and Kinsta do this for WordPress, Servd does it for Craft CMS, and Heroku has been doing it for apps since forever.
The pattern is the same across all of them.
Pay for someone else to manage the boring parts so you can focus on building. That's a reasonable trade for a lot of people and I'm not going to argue against it.
The pitch glosses over the same set of costs in every category, though: pricing that scales unpredictably once traffic shows up, lock-in that's invisible until you try to leave, and hidden fees that arrive when you cross some threshold you didn't notice.
And there's also the slow erosion of skill that happens when you stop knowing how the stack actually works.
The black-box part is what gets me most.
There's no SSH into Vercel because there's no box to SSH into.
The Vercel CLI is a real tool and it does a lot of legitimate work for managing projects, deployments, environment variables, and domains.
It's just not a shell into the runtime where your code is actually executing. When something goes wrong inside a function at the moment a real request hits it, you're working through whatever logs and traces the platform decided to surface.
For most things most people deploy there, that's fine. When it isn't, you're stuck with someone else's diagnostics.
The Setup I've Been Running for Years
My infrastructure looks fairly standard for someone who's been doing this a while.
I run a Vultr VPS at a fixed monthly price, with Postgres for some apps and SQLite for others directly on the box, Caddy in front for TLS and routing, Docker for some apps and bare deployments for others, and backup cron jobs I wrote a long time ago and have only had to revise a handful of times.
I know exactly what's happening on the box because I set it up myself.
When something breaks I SSH in, read the logs, and fix it. The whole setup costs roughly $50/mo and runs several sites very comfortably. In fact, I could run everything on a $20/mo box, but I like the headroom.
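The Caddy layer in a setup like this is small enough to show. Here's a minimal sketch, with hypothetical domains and ports standing in for the real ones:

```
# Caddyfile sketch -- domains and upstream ports are placeholders.
# Caddy provisions and renews TLS certificates automatically for
# any site block with a real domain.

app.example.com {
    # Node app running bare on the box
    reverse_proxy localhost:3000
}

tool.example.com {
    # App running in Docker with a published port
    reverse_proxy localhost:8080
}
```

Each site block gets automatic HTTPS with no extra configuration, which is most of what a managed platform's "we handle TLS for you" pitch amounts to.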
I picked Vultr personally because their pricing has been stable, their network is fast in the regions I need, and their support has been responsive every time I've actually had to use it.
Hetzner is the other obvious choice in this space and I'd recommend either one. The hardware-and-bandwidth-per-dollar math at Hetzner is hard to beat, and a recent comparison piece noted that you can serve billions of requests and push 20TB of egress per month for under $10.
Trying to do the same on Vercel or Firebase would cost hundreds or thousands of dollars depending on the workload.
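The gap is easy to sanity-check with back-of-envelope arithmetic. Assuming a managed-platform egress rate of around $0.15/GB for illustration (actual rates vary by provider and plan, and none of these numbers come from a specific vendor's price sheet):

```python
# Rough egress cost for 20 TB/month at an assumed managed-platform rate.
# The $0.15/GB figure is an illustrative assumption, not a published price.
egress_gb = 20 * 1000        # 20 TB/month in decimal GB
rate_per_gb = 0.15           # assumed $/GB egress
monthly_cost = egress_gb * rate_per_gb
print(f"${monthly_cost:,.0f}/month")  # prints "$3,000/month"
```

Against a sub-$10 flat-rate box, that's a difference of two to three orders of magnitude for bandwidth-heavy workloads.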
This setup isn't elegant by 2026 standards and it's not what most modern indie hackers are reaching for.
There's no GitHub Actions integration with witty deploy messages, no preview environments per branch, no edge functions deploying to fifty regions automatically.
But it works, it's been working for years, and the cost is predictable down to the dollar.
Why Managed Never Appealed
The honest reasons are simpler than the strategic case for self-hosting.
I don't enjoy paying for abstractions that hide the underlying tool I actually want to use.
Most of these products bundle real engineering work on top of tools I already know how to use directly: Postgres, Lambda, a CDN, a build pipeline.
The wrapper teams have done thoughtful work and the convenience they sell is real, but for what I build I'd rather reach for those tools themselves, which usually means a Node app on a VPS with Postgres running on the same box.
The wrappers add a UI, a dashboard, and some marketing copy, and they charge a multiple of what the raw service costs underneath. Sometimes that markup is worth it for the convenience and often it isn't, especially when the convenience comes with a lock-in tax that arrives later.
The other honest reason is that I don't want anyone else's hands on my infrastructure. When something goes down at Vercel, your site goes down too, and you have no way to fix it.
When my VPS provider has an outage, that's still a problem, but it's a problem with one company I have a direct relationship with rather than three layers of vendors with overlapping SLAs.
That means lower coordination cost when something breaks and a much shorter list of people who could be the cause.
There's also something I rarely see articulated in the managed-vs-self-hosted debate. Running your own infrastructure forces you to actually understand what you're running, and that understanding accumulates across years of operating.
The builder who has been managing their own Postgres, nginx, and Linux box for a decade has skills that don't degrade the way knowledge of a specific managed product does.
Postgres has changed a lot since 2010, but the fundamentals (how to write SQL, design schemas, reason about indexes, debug slow queries) still apply.
Vercel's platform from 2020 is mostly a different product from Vercel's platform today.
Why Some People Are Moving Back
The reasons some builders are leaving managed services in 2026 are specific and worth listing.
| Driver | What's Behind It |
|---|---|
| Pricing predictability | Cloud bills that triple overnight when a tweet goes viral, with no warning |
| Lock-in fatigue | Builders realizing they can't leave a managed service without rewriting half their app |
| AI training data concerns | Worry about what code, content, and customer data managed providers can see and use |
| Performance over distance | Edge functions that don't actually feel fast for users in regions the CDN deprioritizes |
| Cost arbitrage | A $20/month Hetzner or Vultr box outperforming a $400/month managed setup for many workloads |
| Sovereignty and control | Wanting to know what runs where, who has access, and what happens if a vendor changes terms |
| Tooling maturity | Coolify, Dokploy, Kamal, and others making self-hosting closer to managed-service ergonomics |
None of these are new concerns.
What changed is that the tools for self-hosting got dramatically better while managed pricing kept climbing, and the gap between "easy and expensive" and "slightly less easy and much cheaper" narrowed enough that the math flipped for a lot of people.
The Tools That Closed the Gap
The reason this isn't a 2018 conversation anymore is that you can now have most of the Vercel experience on a VPS without becoming a sysadmin.
Two tools have done most of the work to close that gap.
Coolify is an open source self-hostable PaaS that gives you a Heroku-like web dashboard for managing your servers, applications, and databases. It supports git-push deploys, automatic SSL, preview environments, database backups, and a library of over 280 one-click services for things like Plausible, n8n, and Supabase.
You install it on a VPS, point it at your repos, and it handles the orchestration that you'd otherwise be doing through Docker commands and reverse proxy configs.
The project crossed 50,000 GitHub stars in early 2026, which tells you something about how many builders are reaching for this kind of tool.
Dokploy is the newer alternative that competes directly with Coolify. It's built on Docker and Traefik, has a cleaner interface that some people prefer, and includes native Docker Swarm support for multi-node deployments. One eight-month side-by-side comparison found Dokploy idling at around 0.8% CPU compared to Coolify's 6%, which matters on smaller VPSs where every bit of overhead counts.
Both tools turn a $20 VPS into something that feels close to Heroku or Vercel when you're actually deploying, with git-push deploys, one-click database provisioning, and rollback when something breaks.
The features that used to make the managed stack feel like the only sensible option for indie builders aren't unique to managed services anymore.
You can run the same workflow on infrastructure you control, with predictable pricing and full SSH access when something goes sideways.
Kamal is worth mentioning too for the CLI-preferring crowd. Built by 37signals for their own Rails apps and now used at HEY's scale, Kamal deploys Docker containers to any server via SSH using a YAML config file, with no web dashboard, no platform daemon, and no platform overhead.
If you want the absolute minimum surface area between your code and your servers, Kamal is the cleanest answer, though it requires more comfort with the terminal than Coolify or Dokploy.
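For a sense of how small that surface area is, a Kamal config for a single-server app is roughly this (service name, image, domain, and IP are placeholders; consult the Kamal docs for the full option set):

```yaml
# config/deploy.yml -- hypothetical single-server Kamal setup
service: myapp
image: myuser/myapp

servers:
  web:
    - 192.0.2.10              # the VPS, reachable over plain SSH

proxy:
  ssl: true                   # automatic TLS via kamal-proxy
  host: myapp.example.com

registry:
  username: myuser
  password:
    - KAMAL_REGISTRY_PASSWORD # read from the environment, never committed
```

From there, `kamal setup` bootstraps the server on the first run and `kamal deploy` handles every deploy after that, all over SSH with nothing persistent installed beyond Docker and the proxy.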
What Self-Hosting Actually Requires
The renaissance narrative can make this sound easier than it is for people coming from a fully managed background, so the honest accounting matters.
You need to know enough Linux not to panic when something breaks, a backup strategy you've actually tested by restoring from it rather than one that exists only in theory, and a working understanding of SSL renewal (which Caddy and Coolify automate, but you should know what's happening), OS security patches, and firewall configuration.
You also need to understand the database, the reverse proxy, and the application well enough to know where to look when something goes wrong, usually at a moment when you'd rather be doing anything else.
This isn't a small ask.
Managed services exist because for a lot of people, this skill set is either uninteresting or actively unwanted. Telling those people to learn Linux so they can save $30 a month on hosting is bad advice.
The math only works if you're going to enjoy the operational side, or if you have enough infrastructure that the savings actually move the needle, or if the control matters to you for reasons beyond cost.
The Middle Ground
The interesting part of the renaissance is that most of the people moving back aren't going all the way to bare metal.
Of course not.
They're picking the boxes they want managed and the boxes they want to control directly.
This is closer to how I've been operating for years. DNS and CDN happen at Vultr alongside everything else because keeping infrastructure in one place removes a layer of vendor coordination I don't need.
I run my own Postgres because I want direct access to the database and zero-downtime upgrades on my schedule. I self-host most of my apps but use a managed transactional email service because deliverability is a specialized job I have no interest in handling myself at the moment.
The decision about what to host yourself ends up being practical rather than ideological, and it gets made line by line.
That same logic applies in the other direction too.
WP Engine, Kinsta, and Servd exist because some builders sincerely don't want to think about server administration and would rather pay a premium to outsource it entirely.
That's a fine choice when the math works out, especially for content sites where the value of the time saved exceeds the cost of the managed plan. The real issue with the managed stack is that it became the default recommendation for everyone, including for use cases where it's the wrong fit.
What I'd Tell a Builder Starting Today
Some practical takeaways without prescribing ideology.
If you don't know how to run a server and don't want to learn, use the managed stack, because your time has value and the convenience is real. If you do know how or you want to learn, a $20/month VPS will run more than you think and teach you more than any tutorial.
If you're somewhere in between, look at Coolify or Dokploy or Kamal, because they've closed most of the ergonomic gap that used to make managed services feel like the only sensible option.
The infrastructure decision shouldn't be based on what other indie builders are using on Twitter (or in this article for that matter). Pick based on what you'll actually maintain six months from now.
The Long View
Infrastructure trends swing on a roughly 10-year cycle.
The big swing into managed everything was a response to how painful self-hosting used to be in the early 2010s, when LAMP stacks were brittle, configuration was manual, and Docker hadn't matured yet.
The current swing back is a response to how expensive and locked-in managed services have become in the 2020s, combined with how much better the tooling has gotten on the self-hosted side.
Neither side is permanently winning.
The builders who do best across cycles are the ones who pick the level of abstraction that matches their actual operational appetite, regardless of what's currently fashionable in their corner of the internet.
I picked mine years ago, mostly by accident and partly by stubbornness, and I've stuck with it because it still fits how I want to work.
I'm not writing this to convince anyone to leave their managed stack.
If Vercel and Supabase work for you, that's a fine answer and I'm not going to pretend otherwise. The renaissance is interesting to me because it means more builders are running their own math again, and the answers they're landing on look different from the answers they would have given two or three years ago.
That's healthy regardless of which way they end up going.