ESXi or bare metal CentOS 8 for co-location?
Hi everyone,
I'm getting a Dell 1U server ready that's going out to colocation at the beginning of next year.
The server is going to run DirectAdmin on CentOS 8, which will be the main workload for the server.
I started thinking about it, though. Would it maybe be better to install the ESXi hypervisor instead of just CentOS? If something breaks and I want to reinstall, would ESXi make reinstallation easier, or should I just run bare metal and reinstall through iDRAC if I need to? Assuming iDRAC is even reachable from outside the datacenter, that is.
I have a lot of idling VPSes, so it does not really make sense to run a hypervisor just to be able to create virtual instances, but I guess it could be interesting to have more features than a plain bare metal server.
I have no experience with ESXi, by the way, in case that affects any advice from you guys.
Thanks.
Comments
Personally, I would just install CentOS directly; unless you plan on doing a lot of reinstalling, there shouldn't be a reason to add a hypervisor.
As for colocation, just make sure you can have iDRAC on a private network accessible via VPN, or at the very least that the data center doesn't charge to connect a remote KVM for you.
I would go with CentOS on bare metal, and then run VMs or containers on top of it if needed. CentOS is a full operating system with all the tools and goodies available for Linux. ESXi is a cut-down product that is really nerfed unless you can write a check to VMware.
It has been a while since I tried to do VMware on the cheap, so things may have changed. At the time, all the management functions were being migrated to vCenter, and the management features built into ESXi were slowly rotting.
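If you do end up running VMs on top of CentOS, a minimal sketch of what that looks like with the stock KVM/libvirt packages (standard CentOS 8 package and service names; treat it as a starting point, not a full recipe):

    dnf install -y qemu-kvm libvirt virt-install
    systemctl enable --now libvirtd
    virsh list --all   # confirms libvirtd is running; empty list on a fresh install

From there you can create guests with virt-install (or Cockpit's virtual machines page) while still keeping a normal Linux userland for DirectAdmin alongside them.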
I have Proxmox on a dedi and it's a big pain, and I regret not installing bare metal. If I can temporarily migrate the files off, I will reinstall without Proxmox.
Thanks for the input guys. I’ll go with a bare metal installation of CentOS.
Why do you say that?
Rebooting the box under Proxmox requires some kind of messy procedure with the iDRAC. Maybe it's not set up properly, but the fact that such an issue can arise at all is itself an example of the Proxmox pain.
Do you have to go into the server and kill VM processes?
I learned that turning on the guest agent option ("VM Guest Service", or whatever the exact name is; it's been a while) without actually installing the agent inside the VM will cause Proxmox to hang indefinitely on shutdown until the VM processes are killed. Since it's using systemd, it kills services in parallel to save time, but the VM processes won't respond, so you effectively get locked out of the server without console access. That was a lesson that got learned several times.
Using regular ACPI to shut down VMs works great, though.
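For what it's worth, the two usual ways out of that mismatch (command names are the standard ones, but treat this as a sketch for your own setup):

    # inside an apt-based guest: install and start the agent Proxmox is waiting for
    apt install qemu-guest-agent
    systemctl enable --now qemu-guest-agent

    # or, on the Proxmox host: turn the agent option back off for a VM (VMID 100 is just an example)
    qm set 100 --agent 0

Either one stops Proxmox from waiting on an agent that isn't there.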
I don't remember what I had to do to bring the box back up, but it involved going into iDRAC and messing with the network or other boot settings.
I think your Proxmox isn't configured correctly.
The most common cause of Proxmox not booting is setting up the drives incorrectly. Proxmox 6 lets you do this from the GUI, so there's less chance of messing it up (compared to Proxmox 4).
I've been using Proxmox since version 4 and find 5 and 6 much more stable.