Testing bandwidth between VPSes?
Hi all,
Just reaching out to ask if anyone knows of a fancy automated tool that can be installed on multiple VPSes to test the bandwidth between them on a regular basis? I've got a Los Angeles HostHatch service that seems to have less-than-expected network performance, and I wanted to collect more data to see if it varies over time. Smokeping's showing cyclical latency variation, but actual throughput stats would be great.
I was thinking of rolling something myself using iperf and cron, however thought I'd ask to see if there's something out there that I may have missed during my research...
thanks in advance!
Comments
HostHatch LA is 🥔 24/7. I have one of those 10TB $110 VPSes, but it feels less special because of the network.
Your experience beats a scheduled iperf any day. Damn, I have the same 10TB service and was hoping the network would be better than my 3TB - it's not.
My HH Chicago one performs better (network-wise) even though it's further away. I may just throw everything up to Chicago instead, then sync LA from CHI.
Just set up cron to run iperf3 hourly for a few days. iperf3 is really easy to use anyhow. On Debian/Ubuntu, you can install it via:
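(The command itself seems to have dropped out of the post; on Debian/Ubuntu it's the stock package:)

```shell
# Debian/Ubuntu: iperf3 is in the default repos
sudo apt update && sudo apt install -y iperf3
```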
No further config needed. To bench, just run:
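(The example invocation also looks missing; a typical pair would be a server on one VPS and a client on the other - `203.0.113.1` below is a placeholder for the server's real IP:)

```shell
# On VPS A (server side):
iperf3 -s

# On VPS B (client side); 203.0.113.1 stands in for VPS A's IP:
iperf3 -c 203.0.113.1 -t 10       # 10-second test, client -> server
iperf3 -c 203.0.113.1 -t 10 -R    # -R reverses direction (server -> client)
```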
Then filter the output of iperf3 to only get the total in/out (you could just copy
function launch_iperf
from Mason's YABS and alter the print statement in line 616) and log it line by line to a CSV file. After a few days, download that CSV and create a chart in Excel.

Just as a warning: iperf3 is quite heavy on the network and uses a lot of bandwidth, so maybe don't let it run too often or for too long. (And remove the cron job again once you no longer need it.)
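A minimal sketch of that filter-to-CSV step (my own, not lifted from YABS; the `parse_bitrate` helper name and the sample summary lines are illustrative):

```shell
#!/bin/sh
# Sketch: pull the "<number> <unit>bits/sec" pair out of the iperf3 summary
# lines tagged "sender" / "receiver" and append a timestamped CSV row.
parse_bitrate() {
  awk -v tag="$1" '$0 ~ tag {
    for (i = 2; i <= NF; i++)
      if ($i ~ /bits\/sec$/) { print $(i-1), $i; exit }
  }'
}

# Example iperf3 summary output (captured format; numbers are illustrative).
# In the real cron job you would capture live output instead, e.g.:
#   SUMMARY=$(iperf3 -c <server-ip> -t 10)
SUMMARY='[  5]   0.00-10.00  sec  1.09 GBytes   941 Mbits/sec    0   sender
[  5]   0.00-10.04  sec  1.08 GBytes   935 Mbits/sec        receiver'

sent=$(printf '%s\n' "$SUMMARY" | parse_bitrate sender)
recv=$(printf '%s\n' "$SUMMARY" | parse_bitrate receiver)

# Append one timestamped row; redirect to your CSV in the cron job.
printf '%s,%s,%s\n' "$(date -u +%FT%TZ)" "$sent" "$recv"
```

Pointing cron at that script once an hour and redirecting the `printf` to a log file gives you the CSV the comment describes.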
— Michael
Alwyzon - Virtual Servers in Austria starting at 4,49 €/month (excl. VAT)
I have HH LAX too; download is okay-ish at about 1 Gbps, but upload is... iffy.
Tried sending between the HH NVMe and Storage VPSes in LAX.
The NVMe box sends at ~100 MB/s to Storage, but the other way around is only ~9 MB/s. Noted that IOWait while sending sits around 80%.
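For anyone wanting to watch that IOWait figure live during a transfer, one low-dependency way (vmstat ships with procps, preinstalled on most Debian/Ubuntu images):

```shell
# "wa" column = % of CPU time stalled waiting on disk I/O;
# sample once a second, five samples, then exit.
vmstat 1 5

# Per-device utilisation needs the sysstat package:
#   sudo apt install -y sysstat
#   iostat -x 1
```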
I bench YABS 24/7/365 unless it's a leap year.
...
Thanks, that's what I was intending to do - either have it spit out a CSV or maybe throw it into an RRD... Don't worry, with the speeds I'm experiencing on this service, I don't think it will use too much data.
That's pretty line ball with my findings. Latency also seems to follow my neighbors' backup schedules (the dark green is the service)
Download speeds are fine-ish; IOWait stays below 10%.
mtr might be interesting as well
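(For a one-shot mtr report suitable for pasting into a ticket - 50 cycles, wide report mode so hostnames aren't truncated; `203.0.113.1` is a placeholder for the remote VPS's IP:)

```shell
mtr -rw -c 50 203.0.113.1
```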
HS4LIFE (+ (* 3 4) (* 5 6))
I think I built something like this in the past, to benchmark CDNs using LES with 10 MB files.
Lemme search.
Free NAT KVM | Free NAT LXC
OP, will you be raising a ticket with HH?
@cybertech - I lodged one about a month ago; they said there was a known peering dispute between Level3 and NTT and to wait until mid-September for resolution. If you're experiencing something similar with your services, it may be worthwhile opening a ticket yourself too.
This was related to Sydney only, with Level3 sending traffic from Sydney to Tokyo to LA instead of directly. But recently Telstra Global was added there along with Level3 and it is no longer a problem. Can you please re-test and update your ticket if you still see the longer route?
As for LA - we use Psychz there. Please open a ticket with us including MTRs on both sides and both source/destination IPs (and if you can include another provider in LA who is showing better performance, that will be even better as we have something to compare to) - and we will get it optimized.
Thanks
Thanks @hosthatch, really appreciate it. Unfortunately I no longer have the LA letbox service (which had better performance) that I migrated to HH from; however, I've asked the other affected users in this thread whether they have alternate services - they may be able to sort out some MTRs for us.
Can also confirm the Sydney Hosthatch to Chicago Hosthatch routing is much improved, dropped 50ms - so good!
Just made a ticket with "localized" tests to and from the HH NVMe VPS, because I believe my upload problem has something to do with high IOWait when sending a file.
Briefly tested with a Virmach LAX 10G VPS as well, Virmach sends in at 80MB/s, but HH Box sends out at 10MB/s, no difference.
Got this from my HH LAX storage, which does seem better than when I got it.
the issue remains.
I didn’t get that email
I got the email; the service was restarted, but it's too soon to see if it changed anything.