Here's my amateur attempt to answer some of your questions --
@Not_Oles said: (1) why should the single core score be better with LXC than without?
Based on the numbers you posted, the inverse of that is true, as expected. Your first run on the host has the best single-core score overall. Unless you're referring to the raw CPU frequency, in which case see my next answer below.
@Not_Oles said: (2) As between 32 and 1 core tests, why does the GB single core performance decrease when the processor speed increases?
I wouldn't pay too much attention to the CPU freq, as that is just whatever speed the OS + CPU is reporting at the time the stat is pulled (cat /proc/cpuinfo). The reported clock will always be somewhere between the min and max frequency (inclusive). The more cores the machine has, the more likely some of those cores are idling and awaiting work (and therefore the more chance of a lower frequency being reported). Those frequencies are likely hitting the max clock of the CPU (or burst clock, if available) when it's Geekbench crunching time.
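If you want to watch that behavior directly, sampling the clocks out of /proc/cpuinfo shows the spread (a minimal sketch; the "cpu MHz" lines are only present on some architectures, so the counts may come back as zero elsewhere):

```shell
# Print the min/max of the per-core clocks reported right now, then again a second later.
# On an idle box the numbers drift toward the minimum; under load they climb toward boost.
for sample in 1 2; do
    awk -F: '/MHz/ {v=$2+0; if (min=="" || v<min) min=v; if (v>max) max=v; n++}
             END {printf "cores seen: %d, min: %s MHz, max: %s MHz\n", n, min, max}' /proc/cpuinfo
    sleep 1
done
```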
@Not_Oles said: (3) Does the GB single core score decrease with the "real core" count? 1 and 2 logical cores might run on one hardware core. 3 and 4 might run on a second "real core."
This I'm not so sure of. Ideally, the single-core score should remain roughly the same no matter how many cores are allocated to the VM. For more concrete answers I'd run more tests, since to me this looks to be within the margin of error, close enough to call all the single-core scores "somewhat equal".
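If you do rerun it a few times, mean and standard deviation make the "somewhat equal" call concrete (a sketch with made-up scores; substitute the real single-core results):

```shell
# Four hypothetical single-core scores from repeated runs
printf '%s\n' 1610 1588 1602 1595 |
    awk '{s+=$1; ss+=$1*$1; n++}
         END {m=s/n; printf "mean %.1f, stddev %.1f\n", m, sqrt(ss/n - m*m)}'
# → mean 1598.8, stddev 8.2
```

Scores that sit within a couple of standard deviations of each other are effectively the same.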
@Not_Oles said: What's going on with the IPv4 Send Speed to Clouvider in NY?
I'll probably just be repeating what you already know here, but network throughput is bound to fluctuate depending on when exactly the tests are run, how much traffic is going over the routes between your server and Clouvider's, how pegged Clouvider's speedtest server is at that moment, etc. It wouldn't surprise me if those trans-Atlantic links go from congested to free within a few minutes.
I find it interesting that the IPv6 speed tests to Clouvider's NYC location didn't suffer the same issues that the IPv4 test did that you noted above. Perhaps the different protocols are using separate routes between the two locations and the IPv4 route is defaulting to a more congested one (just a theory).
@Mason said: Unless you're referring to the raw CPU frequency, in which case see my next answer below.
You are right! I did intend to ask about the CPU frequency. I failed to write the question clearly!
@Mason said: the CPU freq as that is just whatever speed the OS + CPU is reporting at the time that the stat is pulled (cat /proc/cpuinfo). The reported clock will always be somewhere between the min and max frequency (inclusive).
Those clueless like me might imagine that yabs could be used to compare two different processors as well as to compare one processor in two different sets of circumstances. When comparing two different processors, knowing the maximum frequency might be useful. Clueless me imagined that the frequency yabs reported would be the maximum.
I want to learn about how the frequency report varies, both in general, and specifically how the 5950X speed report apparently increases from 2185 MHz to 2700 MHz just because of moving from bare metal to "inside" a "bare metal LXC container." I mean, since "the container has no walls," how would merely changing the cgroups and the namespaces increase the processor frequency significantly? There must be more to the story!
@Mason said: within the margin of error to call all the single-core scores "somewhat equal".
Got it!
@Mason said: speed tests to Clouvider's NYC location
I've seen this multiple times. I should do an mtr.
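A report-mode run over each protocol would show whether v4 and v6 really take different paths (a sketch; the hostname is a placeholder for Clouvider's actual NYC speedtest endpoint, and mtr needs to be installed with network access):

```shell
# Placeholder target; substitute the real Clouvider NYC hostname
TARGET=nyc.speedtest.clouvider.net

# 50 probes per hop in report mode; differing hop lists between the two runs
# would support the "separate IPv4/IPv6 routes" theory
mtr -4 -rwc 50 "$TARGET"
mtr -6 -rwc 50 "$TARGET"
```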
swap reported by yabs as 5.0 GiB
That's right about @Ympker being involved! Whenever anything truly great turns up, we can find him there!
The container used for the tests is still around, so I looked at the Proxmox web GUI to see how much space had been assigned for swap. Here's what I saw:
@Mason You are the best! Thanks for paying such careful attention to my questions! Also, thanks yet again for all your help previously -- both to me personally and to the LES community!
@Not_Oles said: The container used for the tests is still around, so I looked at the Proxmox web GUI to see how much space had been assigned for swap. Here's what I saw:
I've not done much with LXC, but the swap in Proxmox could be presented as a swap partition; that could still be added to via a swapfile on the disk itself, which could top up the swap to 5 GB.
For example on my desktop I have:
$ sudo swapon
NAME TYPE SIZE USED PRIO
/dev/sda3 partition 1.9G 696.5M -2
/swapfile file 8G 0B -3
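For anyone following along, the swapfile half of that setup looks roughly like this (a sketch using a throwaway path and a tiny size; a real setup would use something like /swapfile and a few GiB, and the final swapon step needs root):

```shell
SWAPFILE=./swapfile.demo                                   # real-world path would be e.g. /swapfile
dd if=/dev/zero of="$SWAPFILE" bs=1M count=16 status=none  # 16 MiB for the demo; count=3072 for ~3 GiB
chmod 600 "$SWAPFILE"                                      # swapfiles must not be world-readable
mkswap "$SWAPFILE"                                         # writes the swap signature; works unprivileged on a file you own
# swapon "$SWAPFILE"                                       # root-only; 'swapon --show' would then list it next to the partition
```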
You could disable CPU frequency scaling at the BIOS level, which would keep the processor at max frequency, although not max boost frequency, so you'd still possibly see different results. I guess for fairness you could try to capture the CPU speed while the Geekbench test is running, but that would need a second process alongside yabs, I imagine.
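Capturing the clock mid-benchmark doesn't strictly need a second yabs; a background loop sampling /proc/cpuinfo while the benchmark runs in the foreground would do (a sketch; the geekbench command is a placeholder, and "cpu MHz" lines only appear on some architectures):

```shell
# Log the average reported core clock once a second while a workload runs
( while sleep 1; do
      awk -F: '/MHz/ {sum+=$2; n++}
               END {printf "sampled %d cores, avg %.0f MHz\n", n, (n ? sum/n : 0)}' /proc/cpuinfo
  done ) &
SAMPLER=$!

# ./geekbench5 --single-core     # placeholder for the real benchmark invocation
sleep 3                          # stand-in workload so the demo produces a few samples

kill "$SAMPLER"
```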
@Mr_Tom

Same container:

root@test-container:~# swapon
NAME TYPE SIZE USED PRIO
none virtual 5G 0B 0
root@test-container:~# mount | grep swap
lxcfs on /proc/swaps type fuse.lxcfs (rw,nosuid,nodev,relatime,user_id=0,group_id=0,allow_other)
root@test-container:~#
On the same node:
root@Proxmox-VE ~ # swapon
NAME TYPE SIZE USED PRIO
/dev/md0 partition 16G 0B -2
root@Proxmox-VE ~ # mount | grep swap
root@Proxmox-VE ~ # mount | grep md0
root@Proxmox-VE ~ #
@Mason @Mr_Tom Based on the above, it looks like yabs is reporting the same number that swapon reports for the size of the swap. The number reported by yabs and swapon can differ from the size configured in Proxmox's GUI LXC container creation dialog.
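For what it's worth, I believe yabs pulls those memory/swap figures from free, which reads /proc/meminfo; inside an LXC container it's lxcfs (visible in your mount output) that makes /proc/meminfo show the container's limits instead of the host's. A quick cross-check (sketch):

```shell
# Swap total as seen by free, and the /proc/meminfo field behind it
free -h | awk '/^Swap:/ {print "swap total:", $2}'
grep SwapTotal /proc/meminfo
```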
Not technically a VPS. KVM VM deployed on a home server. Was just curious to see how it compares.
Half of the cores assigned to the VM. (PN50 mini-PC form factor, Zen 2 4700U / 64 GB / NVMe.) About 500 bucks worth of gear. Unsure why AES and AMD-V are disabled. Single core sucks because it's essentially a laptop CPU. Surprised at the weak disk though.
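A quick way to confirm what the guest actually sees is to look for the flags in /proc/cpuinfo inside the VM (a sketch; in Proxmox/QEMU the usual fix, if the flags are hidden, is setting the VM's CPU type to "host" so the guest inherits them, though that's an assumption about your setup):

```shell
# Does the guest CPU advertise AES-NI and a virtualization extension?
grep -m1 -ow aes /proc/cpuinfo        || echo "aes flag not exposed"
grep -m1 -owE 'svm|vmx' /proc/cpuinfo || echo "no svm/vmx flag (nested virt unavailable)"
```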
@Not_Oles said: swap reported by yabs as 5.0 GiB

Likely it's baked into the LXC template that you used. But, for now, I'll blame @Ympker, because why not?
Hope the above helps more than it hurts! Cheers!
Head Janitor @ LES • About • Rules • Support • Donate
It's good to hear from you, too, mate! Hope you and your loved ones are doing well in these times. Take care!
As for the swap allocation: Ofc it was my doing. I like messing with ppl's swap.
Ympker's VPN LTD Comparison, Uptime.is, Ympker's GitHub.
I didn't notice this before!
MetalVPS
Ok... that I can't answer lol
Racknerd 1GB KVM VPS LEB Special
# https://github.com/masonr/yet-another-bench-script #
Sat Feb 13 18:15:03 IST 2021
HosterLabs 2GB KVM IPV6 special
# https://github.com/masonr/yet-another-bench-script #
Sat 13 Feb 2021 06:37:43 PM IST
@hosterlabs
Hostsolution 2GB 200GB
# https://github.com/masonr/yet-another-bench-script #
Sat Feb 13 18:48:52 IST 2021
Basic System Information:
---------------------------------
Processor : Intel(R) Xeon(R) CPU E5-2680 v4 @ 2.40GHz
CPU cores : 2 @ 2397.222 MHz
AES-NI : ✔ Enabled
VM-x/AMD-V : ❌ Disabled
RAM : 1.9 GiB
Swap : 0.0 KiB
Disk : 196.8 GiB
@cociu
Inception 1GB 1TB
# https://github.com/masonr/yet-another-bench-script #
Sat 13 Feb 19:10:58 IST 2021
@AnthonySmith
vserver 1GB 5GB
# https://github.com/masonr/yet-another-bench-script #
Sat Feb 13 19:02:57 IST 2021
@vserversite
Webhorizon 512MB 7GB
https://github.com/masonr/yet-another-bench-script
Sat Feb 13 19:04:45 IST 2021
@Abdullah
Debian KVM on Hetzner Ax101
Dedicated KVM Slice 512MB BuyVM Luxembourg Debian 10
Taking the Avoro Winter Deal 2019 for another spin; seems like it's time to put it back in production. Less steal now than before:
I bench YABS 24/7/365 unless it's a leap year.
Webhosting24 10x10x10 Special
No matter how beautiful a bench looks, it can never beat nature for me.
Facts!!
Nexus Bytes Ryzen Powered NVMe VPS | NYC|Miami|LA|London|Netherlands| Singapore|Tokyo
Storage VPS | LiteSpeed Powered Web Hosting + SSH access | Switcher Special |
The disk IO seems much lower than our other KVMs at Singapore. Can you create a ticket so I can look, or PM the registered email?
https://webhorizon.net
ticket #7977790 made
and closed due to super fast response by @Abdullah
Replied. Seems like it was a confusion on my part. This is an OVZ NAT VPS. Sorry!
Dubai $5/m
TerraHost i3 BF, 500 GB HDD replaced with 2 TB SSD
I’m a simple man I see gifs, I press thanks
After a lot of back and forth with @Not_Oles (for all good reasons!)
VPS reviews and benchmarks |
Shot in a ticket to have Clouvider check out their iperf servers.
Considering reducing the number of retries so that, when iperf servers aren't operational, the script doesn't take forever to run.
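In shell terms that's just a smaller bound on the retry loop; here's a sketch (run_iperf is a stub standing in for the real iperf3 invocation, not the script's actual function name):

```shell
# Stub that always fails, simulating an unreachable iperf server
run_iperf() { return 1; }

MAX_TRIES=2                      # a lower cap so dead servers don't stall the whole run
for attempt in $(seq 1 "$MAX_TRIES"); do
    if run_iperf; then
        echo "attempt $attempt succeeded"
        break
    fi
    echo "attempt $attempt failed"
done
```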
@havoc said: Not technically a VPS. KVM VM deployed on a home server. Was just curious to see how it compares. Half of the cores assigned to the VM. (PN50 mini-PC form factor, Zen 2 4700U / 64 GB / NVMe.)
pretty good actually for GB5
Yeah, pretty happy with it. Low power (10W idle), 8 cores & loads of memory make it a good always-on Proxmox machine for GitLab, Pi-hole, etc.
@vyas @cybertech @Mason @Everybody
Is anyone aware of anything that could be done to improve the server's performance? I'd like it to be as good as possible. Thanks!