In our last chapter, we accomplished a feat of creation. We selected robust hardware and meticulously forged our server—the beating heart of our sovereign cloud. It sits, waiting, humming with potential. We have a powerful machine with a clean, secure operating system. But a powerful server with nothing running on it is just an expensive space heater.
Now, we stand at a critical crossroads. We must decide how we will run the dozens of services that will make up our new digital life—our email, our file sync, our smart home, our password manager. The decision we make here will dictate our security posture, our backup strategy, and the day-to-day reality of managing our kingdom for years to come.
This is the great strategic debate of the modern server world: Virtualization versus Containerization. You will know these technologies by their common names: Virtual Machines (VMs) and Docker.
To the newcomer, they can seem similar, both offering a way to run applications in isolation. But they are fundamentally different in their philosophy and their architecture. One is a fortress, offering maximum security and separation. The other is a high-density apartment block, offering incredible efficiency and speed.
This guide will not be a simple list of pros and cons. It will be a deep dive into the strategic implications of this choice, guided by the core principles of the FUTO guide: security, stability, and a simple, “idiotproof” approach to management. We will explore the core problem they both solve, dissect how each technology works using clear analogies, and make a definitive, reasoned choice for the architecture of our sovereign cloud.

The Core Problem: Why We Can’t Just Install Everything
Before we can compare the solutions, we must first understand the problem they solve. Why can’t we just SSH into our new Ubuntu server and start installing everything directly onto the operating system?
The answer is a nightmare scenario that programmers have a special name for: Dependency Hell.
Imagine your server’s operating system is a shared workshop. You want to install your first service, a photo gallery, and it requires a specific tool from the workshop: Python version 3.8. Everything works perfectly. Now, you want to install your second service, a smart home controller. But this new service is more modern; it was built to use Python version 3.10.
When you install it, it replaces the workshop’s Python 3.8 with 3.10. Your smart home now works, but the next time you try to open your photo gallery, it crashes. The tool it relied on has been changed, and it no longer functions. Now imagine this problem multiplied by dozens of services and hundreds of shared libraries and dependencies. One update to one service can create a cascade of failures in others.
This is dependency hell. It turns your clean, stable server into a fragile house of cards. To build a reliable system, we need a way to give each and every service its own private, isolated workshop with its own perfect set of tools that will never be touched or altered by anyone else. We need isolation.
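To make that failure mode concrete, here is a purely hypothetical sketch; the package name and version pins are invented, and the point is only the conflict itself:

```bash
# Hypothetical illustration of dependency hell (the package name and the
# version pins are invented). Both installs write into the same shared,
# system-wide environment.
sudo pip3 install "somelib==1.2"   # pulled in when you install the photo gallery
sudo pip3 install "somelib>=2.0"   # pulled in when you install the smart home controller
# The second install upgrades the single shared copy of somelib; the photo
# gallery now runs against an API it was never tested with and breaks.
```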
The Fortress – Full Virtualization (Virtual Machines)
Full virtualization is the oldest, most battle-tested solution to the isolation problem. It is the architectural equivalent of building a completely separate, fully functional house for every single service.
The Analogy: Imagine your server’s physical hardware is a large plot of land. Using virtualization, you don’t just give a service its own workshop; you build an entire, self-contained house for it. Each house has its own foundation, its own walls, its own plumbing, its own electrical system, and its own unique set of keys. What happens in one house has absolutely no bearing on the house next door.
How Virtualization Works
At the core of virtualization is a piece of software called a Hypervisor. The hypervisor is the master contractor or landlord for your plot of land. It carves up the physical hardware resources—the CPU cores, the RAM, the storage—and allocates them to each “house,” or Virtual Machine (VM).
On our Ubuntu Server, we will be using KVM (Kernel-based Virtual Machine), a hypervisor that is built directly into the Linux kernel, making it extremely fast and efficient.
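Before we rely on KVM, it is worth a quick check that the hardware and kernel actually support it. A minimal sketch on an Ubuntu host (exact output wording will vary) looks like this:

```bash
# Quick sanity checks on the host that KVM is actually available.
grep -Ec '(vmx|svm)' /proc/cpuinfo   # a count above 0 means the CPU supports hardware virtualization
sudo apt install cpu-checker         # small package that provides the kvm-ok helper
sudo kvm-ok                          # should report that KVM acceleration can be used
lsmod | grep kvm                     # confirms the kvm and kvm_intel/kvm_amd modules are loaded
```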
Here is the architectural stack:
- Hardware: The physical server itself.
- Host OS: Our main Ubuntu Server installation.
- Hypervisor (KVM): The software layer that manages the VMs.
- Virtual Machines: Each VM is a complete package containing:
- Virtual Hardware: A simulated CPU, RAM, NIC, and hard drive.
- A Full Guest Operating System: Each VM runs its own complete, independent copy of an OS (like another instance of Ubuntu Server).
- Libraries & Dependencies: All the tools the application needs.
- The Application Itself: The service we want to run.
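To make this stack a little less abstract, here is a minimal orientation sketch of managing these “houses” from the host once the KVM/libvirt tooling is installed. The VM name is a placeholder, and this is only an orientation, not this guide’s installation procedure:

```bash
# Orientation sketch only; the VM name "photos" is a placeholder.
sudo apt install qemu-kvm libvirt-daemon-system virtinst   # the hypervisor and its management tooling
sudo virsh list --all       # every VM ("house") known to the hypervisor
sudo virsh start photos     # power one VM on
sudo virsh dominfo photos   # its allocated virtual CPUs, RAM, and current state
sudo virsh shutdown photos  # graceful power-off, like pressing the power button
```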

The Strengths of the Fortress (Pros)
- Maximum Security & Isolation: This is the paramount advantage of VMs. Because each VM is running a full, separate operating system kernel, the boundary between them is incredibly thick. A security breach or catastrophic crash inside one VM is contained within that “house.” It is extremely difficult for an attacker to “break out” of a VM and affect the host or other VMs. This is the strongest isolation model available.
- Total OS Flexibility: Each VM is its own computer. If you have a specific application that runs better on a different Linux distribution like Debian or CentOS, you can simply create a Debian VM on your Ubuntu host. You have complete freedom to use the best OS for the job.
- The “Golden Master” Philosophy: VMs are self-contained and predictable. Once you configure a VM for a service, it becomes a stable, unchanging appliance. It’s not affected by updates on the host or in other VMs.
- The “Idiotproof” Backup Strategy: This is a cornerstone of the FUTO guide’s philosophy. Backing up a VM is the epitome of simplicity. A VM is just a file on the host system—a large virtual hard disk file (e.g., photos.qcow2). To back it up, you simply shut down the VM and copy that single file to a backup location. To restore it, you copy the file back. There are no complex configurations or databases to worry about. You are backing up and restoring the entire working state of the computer in one atomic operation. This simplicity makes for an incredibly robust and reliable disaster recovery plan.
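As a hedged sketch of what that looks like in practice, assuming a libvirt/KVM VM named photos whose disk lives in the default image directory:

```bash
# Single-file VM backup sketch (VM name and paths are placeholders).
sudo virsh shutdown photos            # cleanly power off the VM
sudo virsh list --all                 # wait until the VM shows as "shut off" before copying
sudo virsh dumpxml photos > /backup/photos.xml                     # save the VM's definition as well
sudo cp /var/lib/libvirt/images/photos.qcow2 /backup/photos.qcow2  # copy the entire "house"
sudo virsh start photos               # power it back on
# Restoring is the reverse: copy the disk file back and, if needed, re-register
# the definition with `virsh define /backup/photos.xml`.
```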
The Cost of the Fortress (Cons)
- Resource Overhead: Every fortress needs its own foundation. Each VM must run a full copy of an operating system, which consumes its own dedicated chunk of RAM and CPU cycles just to idle. This “VM tax” means you can run fewer VMs than containers on the same hardware.
- Slower Performance: Starting a VM is like booting a physical computer. The guest OS takes time to load, anywhere from several seconds to over a minute.
- Large Storage Footprint: Because each VM includes a full OS, its virtual disk file is large, often many gigabytes in size, even for a simple service.
The Apartment Block – OS-Level Virtualization (Containers)
Containerization, with Docker being the most popular implementation, is a newer, more lightweight approach to isolation. It is the architectural equivalent of building a single, large apartment block where each service gets its own secure apartment.
The Analogy: Your server’s hardware and host OS are the apartment building’s foundation, structural supports, main plumbing, and electrical grid. It’s a shared infrastructure. A container is a private, locked apartment within that building. Each apartment has its own furniture and decorations (the application and its dependencies), but they all rely on the building’s shared core infrastructure (the host OS kernel).
How Containerization Works
Instead of a hypervisor, containerization uses a Container Engine like Docker. The engine is the building manager, responsible for creating, running, and managing the “apartments,” or Containers.
The key difference is what’s inside. A container packages up the application and its libraries, but it does not include a guest operating system. Instead, all containers on a host directly share the kernel of the host operating system.
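You can see this shared foundation for yourself. Assuming Docker is installed and can pull the small alpine image, a container reports exactly the same kernel as the host, because there is no guest OS underneath it:

```bash
# The shared foundation in one comparison: host and container report the same kernel.
uname -r                          # the kernel version on the host
docker run --rm alpine uname -r   # the same version, printed from inside a container
```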
Here is the architectural stack:
- Hardware: The physical server.
- Host OS: Our main Ubuntu Server installation.
- Container Engine (Docker): The software that manages the containers.
- Containers: Each container is a lightweight package containing:
- Libraries & Dependencies: The tools the application needs.
- The Application Itself: The service.
Notice what’s missing: the Guest OS. This is the secret to its efficiency.
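Launching one of these “apartments” takes a single command. The sketch below is illustrative only; the image name, port, and volume are placeholders, not an application from this guide:

```bash
# Launching one container; the image name, port, and volume name are placeholders.
docker run -d \
  --name photos \
  -p 8080:80 \
  -v photos_data:/data \
  example/photo-gallery:latest
# -d runs it in the background, -p publishes the app's port 80 on the host's port 8080,
# and -v keeps its data in a named volume that outlives the container itself.
docker ps   # the new container is typically up and running in well under a second
```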

The Strengths of the Apartment Block (Pros)
- Incredible Speed and Efficiency: Because they don’t need to boot a full OS, containers can start in milliseconds. The lack of a guest OS also means the RAM and CPU overhead is tiny. You can run significantly more containers than VMs on the same hardware.
- Lightweight Footprint: Container images are much smaller than VM images, making them faster to download, store, and deploy.
- Extreme Portability: The “write once, run anywhere” philosophy is Docker’s biggest selling point. A developer can build a container image on their laptop, and be confident it will run exactly the same way on your server or on a massive cloud platform, because it packages all its own dependencies.
The Weaknesses of the Apartment Block (Cons)
- Weaker Security & Isolation: This is the most significant drawback from a security perspective. All the “apartments” share the same foundation (the host kernel). While the container engine creates strong barriers, they are software barriers within a single running kernel. A sophisticated attacker who finds a serious vulnerability in the kernel could potentially “break out” of a container and gain access to the host system or other containers. The security boundary is fundamentally thinner than the “air gap” provided by a full hypervisor.
- Kernel Dependency: All containers must be compatible with the host OS kernel. You cannot, for example, run a Windows-based container on a Linux host (and vice versa). This reduces flexibility.
- The Complexity of “Real World” Deployments: While a single container is simple, a real-world application often consists of multiple interconnected containers (e.g., a web front-end, a database, a caching service). Managing the networking between these containers, and especially managing their persistent data (using “volumes”), can become very complex. The docker-compose.yml configuration files can grow into an intricate web that is difficult to troubleshoot.
- A More Fragile Backup Strategy: This is a direct contradiction to the FUTO philosophy of simplicity. There is no “single file” to back up a Dockerized application. A proper backup requires you to:
  - Back up the persistent data stored in the Docker volumes.
  - Back up your docker-compose.yml file and any other custom configurations.
  - Know the exact version of the container image you were using.

Restoring this is a multi-step process with more room for error than simply copying a single VM file back into place. It is not “idiotproof.”
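To make the contrast with the single-file VM backup concrete, here is a hedged sketch of that multi-step process, with the stack, volume, and path names as placeholders:

```bash
# Multi-step backup of a hypothetical Docker stack (all names and paths are placeholders).
cd /opt/photo-stack
docker compose down                          # stop the stack cleanly first
docker run --rm \
  -v photos_data:/data \
  -v /backup/photo-stack:/backup \
  alpine tar czf /backup/photos_data.tgz -C /data .   # 1. archive the volume's contents
cp docker-compose.yml /backup/photo-stack/   # 2. save the compose file and any custom configs
# 3. separately, record the exact image tags the stack was running
docker compose up -d                         # bring the stack back up
```

Every one of these steps is another place where a restore can go wrong, which is exactly the fragility the FUTO approach tries to avoid.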
The Verdict: Why We Choose the Fortress
Now, we make our strategic choice. While containers and Docker are a revolutionary technology, for the goals of our sovereign cloud—a secure, resilient, and easily managed home for our digital life—the choice is clear.
We will be using Virtual Machines as our primary method of isolation.
This decision is not a rejection of Docker, but an affirmation of our priorities:
- Security Above All: Our sovereign cloud will house our most private data. We will not compromise on isolation. The thick, hardware-enforced walls of a VM provide the highest level of security, ensuring a breach in one service cannot easily spread to another.
- Simplicity in Management and Disaster Recovery: The “one service, one VM” model is incredibly easy to understand. More importantly, the single-file snapshot backup strategy is the most robust and foolproof method available to a single administrator. This peace of mind is priceless.
- Avoiding Unnecessary Complexity: For the dozen or so services we will be running, the resource overhead of VMs on our modern hardware is a negligible price to pay for the massive reduction in management complexity compared to a multi-container, multi-volume Docker setup.
It is important to note that this is not a permanent divorce from containers. We will absolutely use Docker inside some of our VMs. Certain applications, like Mailcow, are packaged and distributed exclusively as a set of Docker containers. In these cases, we will create a dedicated VM, install Docker inside it, and run the application. This gives us the best of both worlds: the application’s developers get the portability of containers, and we get the security and backup simplicity of a VM fortress enclosing the entire setup.
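As a hedged sketch of this nested pattern (and not Mailcow’s official procedure, which you should follow instead), the commands inside that dedicated VM would look roughly like this:

```bash
# Inside the dedicated Mailcow VM, with Docker Engine and the Compose plugin already installed.
sudo git clone https://github.com/mailcow/mailcow-dockerized /opt/mailcow-dockerized
cd /opt/mailcow-dockerized
# ... run Mailcow's documented configuration steps here ...
docker compose up -d   # the entire mail stack comes up inside this one VM
# From the host's point of view, all of this is still a single VM disk file to back up.
```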
What’s Next?
We have made our strategic choice. We will build a kingdom of fortresses, not a single apartment block. Our architecture will prioritize security, isolation, and dead-simple backups above all else.
With this decision made, we can now prepare our server to become a landlord. In the next post, “Preparing the Host: Networking for Virtual Machines,” we will configure our host server’s networking, creating a “virtual switch” that will allow our future VM tenants to communicate with our network and the outside world securely and efficiently.