The development world is quickly shifting to Docker and microservices, and every day I feel like I’m running more Docker containers. My daily laptop is a MacBook Pro (Early 2015), but I’ve recently had the chance to test the Dell XPS 15 (9550) as well. In doing so, I got to wondering what the performance penalty is for running containers through a Virtual Machine (such as Boot2Docker or Kitematic) as opposed to running on Bare Metal Linux.
| | Dell XPS 15 (9550) | MacBook Pro 13” | All Virtual Machines |
| --- | --- | --- | --- |
| CPU Base Frequency | 2.6 GHz | 2.7 GHz | |
| CPU Boost Frequency | 3.5 GHz | 3.1 GHz | |
| Memory Speed | DDR4 2133 MHz | DDR3 1867 MHz | |
| Disk | 512 GB | 256 GB | Same as Host |
| Disk Interface | PCIe 3.0 x4 | PCIe 2.0 x4 | |
CPU and Memory Benchmark
The MacBook Pro’s Memory and CPU performance are pretty much the same between Bare Metal and VirtualBox.
The Dell XPS 15’s Memory and CPU performance are also pretty much the same between Bare Metal Windows, Bare Metal Linux, the Windows VirtualBox VM, and the Windows Hyper-V VM. Note that the Dell XPS 15 has 4 physical cores, but the OS sees 8 because of Hyper-Threading. The VMs were only allocated 4 Virtual Cores, which is why the CPU Multi scores are not closer.
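If you want to check how many logical versus physical CPUs the OS (or a Guest) actually sees, a quick way on Linux is the following (`lscpu` comes from util-linux and is present on most distributions):

```shell
# Logical CPUs visible to the OS (includes Hyper-Threaded siblings);
# on a 4-core Hyper-Threaded chip this prints 8.
nproc

# Full topology, including cores per socket and threads per core.
lscpu
```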
Storage Benchmark
Storage performance inside a VirtualBox Guest on OSX suffers a 2x-5x penalty compared to Bare Metal performance. Storage performance between the Guest and a Docker Container running in the Guest is pretty much the same.
The Dell XPS 15 storage benchmarks are a bit more interesting. Windows Hyper-V Guest storage performance is 2x-3x better than Windows VirtualBox Guest storage performance, although Bare Metal Linux is the clear winner here. Storage performance between the Guests and a Docker Container running in the Guests is pretty much the same.
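As a rough illustration of the kind of sequential-write test that surfaces these differences, you can use `dd` (the path and sizes here are arbitrary; `conv=fdatasync` forces the data to disk so the page cache doesn’t flatter the number):

```shell
# Write 256 MB sequentially and report throughput; fdatasync ensures
# the reported figure reflects actual disk writes, not cached writes.
dd if=/dev/zero of=/tmp/ddtest bs=1M count=256 conv=fdatasync
rm -f /tmp/ddtest
```

Running the same command on the Host, inside the Guest, and inside a container in the Guest gives a quick feel for where the penalty is paid.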
Shared Mounts were implemented over SMB in Hyper-V and with the native shared folder support in VirtualBox.
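To see where that shared-folder layer sits, here is a typical development bind-mount (guarded so it only runs where Docker is available; the `alpine` image and paths are just examples):

```shell
# Bind-mount the current directory into a container. When Docker runs
# inside a VM, this mount crosses the Host<->Guest shared folder
# (SMB on Hyper-V, vboxsf on VirtualBox), which is where the extra
# I/O latency comes from.
if command -v docker >/dev/null 2>&1; then
  docker run --rm -v "$(pwd)":/data alpine ls /data
fi
```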
Putting it all Together
Considering CPU, Memory, and Storage for running Docker Containers, what are the performance impacts on choosing Bare Metal vs. Virtual?
- CPU: Not a huge difference between Bare Metal and Virtual performance. To maximize Virtual performance, allocate the same number of Virtual CPUs to your Guest as the number of logical CPUs your Host OS sees. So if you have a 2-core Hyper-Threaded Intel CPU, allocate 4 virtual CPUs to your Guest.
- Memory: Memory speed is similar between Bare Metal and Virtual, but memory size allocation is not.
- Bare Metal will allow Docker Containers to use up to the maximum amount of memory on the machine, then use native swapping.
- VirtualBox makes you pre-allocate the upper memory limit of the Guest before starting it.
- Hyper-V offers dynamically allocated memory, which will add or remove memory from the Host as it is used on the VM.
- Swap in Virtual Guests will be slower than swap on Bare Metal if you are running memory intensive Docker containers.
- Storage: Storage will be slower than Bare Metal on Hyper-V, and slower still on VirtualBox.
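Putting the CPU and memory advice into practice with VirtualBox’s CLI might look like this (a sketch: the VM name "docker-vm" is a placeholder, the command is guarded so it only runs where VBoxManage is installed, and the VM must be powered off when you change these settings):

```shell
# Match the Guest's virtual CPUs to the Host's logical CPU count, and
# set the fixed memory ceiling that VirtualBox requires (here 4 GB).
VCPUS=$(nproc)
if command -v VBoxManage >/dev/null 2>&1; then
  VBoxManage modifyvm "docker-vm" --cpus "$VCPUS" --memory 4096
fi
```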
If you use Docker containers heavily and can run Bare Metal Linux, do it! I recently purchased an Intel NUC, M.2 SSD, and 16GB of Memory, and now use that as my primary development machine with Ubuntu on it. Total cost was around $400. I’ve noticed a significant speed increase in running Docker workloads since switching to Bare Metal Linux.
I keep the MacBook Pro around for a better email and productivity software experience and run Docker through an Ubuntu Guest in VirtualBox when traveling. Good Bare Metal Linux support on laptops is hard to find, so research before you buy if that’s something you’re interested in. The Dell XPS 15 (9550) ran Linux pretty well but sometimes would not wake up from suspend.
A Note on Hyper-V Vagrant Support
Hyper-V Vagrant support is pretty shaky for Windows 10 at the time of this writing. I had to hack up the VM Import script in order to get a Generation 2 Base Box to import correctly. If you like to run your VMs through Vagrant, you will likely find a smoother experience using VirtualBox as the hypervisor even though it will run Docker Containers slower than Hyper-V.
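If you do go the Vagrant plus VirtualBox route, the CPU and memory advice above translates into a provider block like this (a minimal sketch; the box name is just an example):

```ruby
Vagrant.configure("2") do |config|
  config.vm.box = "ubuntu/trusty64"   # example box name

  config.vm.provider "virtualbox" do |vb|
    vb.cpus   = 4     # match the Host's logical CPU count
    vb.memory = 4096  # VirtualBox needs a fixed upper limit (MB)
  end
end
```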