

Proxmox vs FreeBSD: Which Virtualization Host Performs Better?

Since migrating many servers from Proxmox to FreeBSD, we have consistently felt that the VMs are more responsive. It's time to conduct some concrete tests.

https://it-notes.dragas.net/2024/06/10/proxmox-vs-freebsd-which-virtualization-host-performs-better/

#FreeBSD #Proxmox #Linux #Virtualization #kvm #bhyve #IT #SysAdmin #ITNotes #NoteHUB


in reply to Stefano Marinelli

This is interesting. Thanks.

The difference in file creation speed is very significant, going from 407 MB/s on Proxmox to 1467 MB/s on FreeBSD.

Do you know why there is such a large difference?

in reply to moozer

@moozer Considering that the Proxmox host on ZFS goes at 968.64 MB/s and FreeBSD host on ZFS flies at 1625.67 MB/s, I'd dare to say that ZFS seems to be much more optimized on FreeBSD than on the Proxmox PVE Kernel.
in reply to Stefano Marinelli

ZFS on Linux is still not mature, while FreeBSD supports ZFS out of the box, at the kernel level.

If ZFS had closer integration with the Linux kernel, it would have more or less similar I/O performance; both are POSIX-compliant systems, after all.

in reply to Stefano Marinelli

Thanks,

I've recently moved some of my VMs from KVM to bhyve and have noticed a significant performance improvement.
Most noticeably with Nextcloud (a Debian guest), which feels really quick and snappy now where it felt clunky before. I guess this may be due to improved I/O performance, similar to what you have tested.

in reply to Paul Wilde :dontpanic2: :smeghead:

@paul Exactly the reason why I decided to perform those tests. Moving VMs to bhyve led people to think I've upgraded the hardware as they're "quicker". Glad to know you're having the same experience!
in reply to Benjamin Kwiecień 🇵🇸

To be fair, it's probably ZFS doing a lot of the heavy lifting here, but @stefano's testing has shown that even ZFS on Proxmox (KVM) still isn't quite as fast as ZFS on FreeBSD (bhyve). That's likely because the ZFS modules are native in FreeBSD, whereas Proxmox (or any other Linux distro that can use ZFS) loads them as additional kernel modules.

bhyve is definitely worth looking into if you use VMs though, for sure!

in reply to Paul Wilde :dontpanic2: :smeghead:

@paul @ben I don’t know; I could equally imagine the QEMU I/O paths not being ideal. QEMU is a beast of a code base which, even after years of effort, hasn’t come close to eliminating the big QEMU lock (BQL), for example; bhyve is tiny in comparison.
in reply to Stefano Marinelli

Rule of thumb: if ZFS consistently beats ext4 in a basic filesystem benchmark on Linux, there's either something wrong with your setup or your benchmark is flawed.

I'd switch from sysbench to fio for I/O-related tests.
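To sketch what that might look like (the path, sizes, and job counts here are illustrative assumptions, not from the original tests), a simple fio sequential-write run could be:

```shell
# Illustrative fio sequential-write test (not the original benchmark setup).
# /mnt/test is a placeholder directory on the filesystem under test;
# adjust --size, --bs, and --numjobs to suit the machine.
fio --name=seqwrite \
    --directory=/mnt/test \
    --rw=write \
    --bs=1M \
    --size=4G \
    --numjobs=4 \
    --ioengine=posixaio \
    --group_reporting
```

fio reports per-job and aggregate bandwidth and latency; the posixaio engine is available on both Linux and FreeBSD, which helps keep cross-host comparisons apples-to-apples.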

in reply to Stefano Marinelli

I learned something new today and remembered our previous conversation. ext4 uses lazy initialization by default.

https://fedetft.wordpress.com/2022/01/23/on-ext4-and-forcing-the-completion-of-lazy-initialization/
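For reference, lazy initialization can be disabled at filesystem creation time via the extended options documented in mkfs.ext4(8), so early benchmark runs don't compete with the background ext4lazyinit thread (the device name below is a placeholder):

```shell
# Initialize the inode tables and journal up front instead of lazily,
# so no background initialization skews the first benchmark runs.
# /dev/sdX1 is a placeholder device.
mkfs.ext4 -E lazy_itable_init=0,lazy_journal_init=0 /dev/sdX1
```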

in reply to Stefano Marinelli

Nice.

The results are interesting; I wouldn't have expected such a wide margin in some of them.

That said, I've never used Proxmox, and it has been a while since I've used KVM at all (and at the time I was using Fedora/CentOS on XFS; ext4 has eaten enough data for me to never want to use it again).

Unknown parent

Stefano Marinelli
@sourcerer I agree: numbers can tell many things, but often impressions tell much more 🙂
Unknown parent

Stefano Marinelli
@sourcerer PS: thank you!
Unknown parent

Stefano Marinelli
@sourcerer thank you. And thank you for being a part of the BSD Cafe!
in reply to Stefano Marinelli

This is excellent, thanks for sharing it. One other test that might be interesting would be to overprovision a bunch of VMs and see how the host handles scheduling when resources are scarce. Of course, it's easy for me to think up tasks when it's not my time being spent.