My Great ProxMox Experiment

I have a Windows Server 2003 VM on VMWare ESXi, running SQL Server 2005 with a database that is split between E: and F: drives (the VM has 3 virtual disks). As a test for ProxMox 5.2, I wanted to convert this to a ProxMox VM, then increase the E: space to consolidate the data into a single drive and remove the F: drive.  Long story short: Everything worked fine.

The following articles cover the details of the endeavor:

Overall, ProxMox is a very slick product.  I’ve used it on and off for various test home labs, and opted to use it at work on lab computers that no longer support VMWare ESXi hosts.  Installation was pretty painless, though the various storage options took some time to understand.

The worst problem I had with ProxMox was that the web-based console used to access the VM wouldn’t work in IE 11.  I thought there was some major issue, since the console would never connect to the VM.  But once I logged in with Chrome, I could access the VM perfectly well through the exact same ‘console’ option.

In one experiment, I did try a high-availability setup with host-to-host cross-replication and switch-over to a single host after powering down one of the hosts.  After some trial and error I got things to work, but it’s not as smooth as VMWare’s replication features.

Both ProxMox and vSphere/ESXi are available for free. Free ESXi is a crippled version of the product, specifically in the areas of templates, backups, and High Availability features. You can purchase a perpetual license of vSphere for approximately $5,654, which breaks down to about $4,470 for a perpetual license key and $1,184 for 1 year of support, covering up to 3 hosts and 3 CPUs per host. By comparison, the ProxMox free version primarily just comes with a nag screen at web login. The ProxMox support subscription varies in price, from $74.90 to $796 per CPU, per host, which can wind up costing more than VMWare depending on your needs.

For example, over the past year I have come to rely on VMWare’s support, which has been excellent. To get comparable support from ProxMox, we would need the $398-per-CPU subscription, which would cost $1,592 per year for our 3 hosts with 4 CPUs in total. That’s $408 more than the VMWare support.
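To make the comparison concrete, here is the arithmetic behind those figures. This is a back-of-the-envelope sketch using the prices quoted above; the $398 tier and 4-socket count are this post’s numbers and will differ for other environments:

```python
# Sanity-checking the support-cost figures quoted above (all numbers are the
# ones from this post, not current list prices).

vsphere_license = 4470      # perpetual vSphere license key
vsphere_support = 1184      # 1 year of VMWare support (3 hosts, 3 CPUs/host)
print(vsphere_license + vsphere_support)    # $5,654 total up front

proxmox_per_cpu = 398       # the subscription tier assumed in this comparison
cpu_sockets = 4             # our 3 hosts carry 4 CPU sockets in total
proxmox_yearly = proxmox_per_cpu * cpu_sockets
print(proxmox_yearly)                       # $1,592 per year
print(proxmox_yearly - vsphere_support)     # $408 more than VMWare support
```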

Both VMWare and ProxMox have handled the situations I threw at them. Both have worked wonderfully, but both have experienced hiccups along the way. None of those hiccups have required a complete system restore, only various changes in settings. The trade-offs come in the form of high availability, backups, and support. Converting a VM from ESXi to ProxMox was a painless endeavor.

Generally speaking, VMWare is the king of the virtualization land. Its price is a bit steep for small organizations, but it may not be that far off from a similar ProxMox installation. Relying on your own technical expertise can save you money.

If I were faced with a dirt-cheap scenario, I would probably go with a free ESXi setup and rely on free backup software such as Macrium Reflect. If a second hardware host were available, I would split the VMs between the 2 hosts and use scheduled scripts to stop the VMs and cross-copy them between the hosts. Not exactly High Availability, but still workable in case of hardware failure.

Once a budget is introduced, I would still push for VMWare, despite the higher initial investment. The assumption is that there are 2 host machines with cross-replication of VMs for High Availability. The VMWare support has been excellent in our production environment, and I have not used ProxMox support because we don’t have a subscription.

Note that if you are limited to older hardware, then ProxMox may be your only solution. VMWare updates eventually drop support for older processors. Hardware that works fine with ProxMox or an older VMWare installation may not be usable with a current version of VMWare.

Our solution is VMWare with production support running on 2 cross-replicated servers and all VMs copied to a disaster recovery server. If 1 host physically dies, we can be back up and running in 5 minutes. If 2 hosts die, it can take a few hours, but the disaster recovery machine can be brought online. We also use ProxMox free on 2 lab servers, which let me run several low-priority and test VMs, and try out changes.

Update: Emergency Mode
After completing my experiment, I rebooted my ProxMox server after it had been running for more than 2 months. Except it kept booting into Emergency mode. Networking was partly working, as I could ping the host server, but not much else.

I went to the console, logged in, and manually launched the sshd service, which let me remote in from my desktop. I then checked journalctl -xb for red flags, and found red/error reports regarding the tmp volume.

After further review, I found that I had created a logical volume and added a permanent mount line in /etc/fstab, but later removed the logical volume without removing the line from fstab. Trying to mount the non-existent logical volume was the cause. Once I manually removed that line and rebooted, everything was back to normal.
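The repair can be sketched like this. The LV name /dev/pve/tmp is a stand-in for whatever device journalctl -xb actually reports, and the sketch works on a scratch copy of fstab rather than the real /etc/fstab so it can be tried safely:

```shell
# Reproduce the failure on a scratch copy of fstab, then apply the fix.
# /dev/pve/tmp is a stand-in name for the removed logical volume.
fstab=$(mktemp)
cat > "$fstab" <<'EOF'
/dev/pve/root /    ext4 defaults 0 1
/dev/pve/tmp  /tmp ext4 defaults 0 2
EOF

# Comment the stale entry out rather than deleting it, leaving a record:
sed -i 's|^/dev/pve/tmp|#&|' "$fstab"
grep '/dev/pve/tmp' "$fstab"   # the line now starts with '#'

# On the real system: make the same edit to /etc/fstab, then reboot.
```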