Your service has been migrated if it is on a new "node" using the name LAXA000, NYCB000, TYOC000, AMSD000, FFME000, PHXZ000, SEAZ000, DENZ000, ATLZ000, SJCZ000, or MIAZ000.
Comments
Hope not, as I have one near there already (Vint Hill): Seattle/Denver/Dallas in order of preference, though not ideal due to likely more Asian traffic. :-(
I guess what's more concerning is my Buffalo VPS with the two additional IPs. Time will tell. (I've been rebuilding it using AlmaLinux 8.6 in some forlorn hope.)
It wisnae me! A big boy done it and ran away.
NVMe2G for life! until death (the end is nigh)
@AlwaysSkint - My Chicago was moved to Secaucus, NJ as I expected. Seattle/Denver/Dallas 🤷🏻‍♂️.
By the way, the Amsterdam migration special was moved to Frankfurt and the disk is now read-only. I expect it will be fixed in time, but you did not miss anything there. I hope your recovery is going well; we are pulling for you.
LAX QuadraNet is offline now. Hope the network reconfig works.
I bench YABS 24/7/365 unless it's a leap year.
One of my VMs moved from CHI to NYC, and the other seems to be hiding on a node still in CHI, heh.
LAX QuadraNet is online now. Hope the network reconfig worked.
I bench YABS 24/7/365 unless it's a leap year.
My tiny Buffalo VPS surreptitiously (had to spell check!) moved to NYC. Dunno exactly when. It's currently running Debian 10 and miraculously started up absolutely perfectly. Mein Gott!
Had to change its hostname to reflect the new location though.
It wisnae me! A big boy done it and ran away.
NVMe2G for life! until death (the end is nigh)
My interfaces file was stuffed up and had to be altered using LXC, but packet loss has gone.
My vps7 previously in Atlanta went offline several minutes ago.
It's visible in WHMCS as being on ATLKVM12 host node, but the main IP suddenly became 149.57.204.*.
I thought Miami beach club would be opening?
Webhosting24 aff best VPS; ServerFactory aff best VDS; Cloudie best ASN; Huel aff best brotein.
If you didn't select a different location, and if the present location still exists in their new plan, you won't be migrated somewhere else. The same happened with my Atlanta one on ATLKVM13: new main IP, original IP now secondary (still accessible).
vps7 is now showing as in ATLZ010 host node and "Offline".
Two months ago I pressed the button for New York, but it only assigned an IP and did nothing else.
I pressed "Power On" and the machine came online.
Then I had to adjust Netplan config, Route48.org tunnel, and DNS records for it to work again.
Docker containers must be re-created.
Otherwise, they would enter a restart loop because the previous public IPv4 no longer exists.
This is why it's good to keep virtual servers "Offline" after a migration and let the customer start them.
Still waiting for Miami beach club because Zayo network is said to be crap.
Webhosting24 aff best VPS; ServerFactory aff best VDS; Cloudie best ASN; Huel aff best brotein.
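For anyone hitting the same thing after a surprise migration, roughly what that cleanup looks like, as a minimal sketch only: the old IP, config paths, and behaviour below are placeholders and assumptions, not anything VirMach-specific.

    # Post-migration sanity check (sketch): find configs still pointing at the
    # old public IPv4 and list Docker containers stuck in a restart loop.
    import pathlib
    import subprocess

    OLD_IP = "192.0.2.10"  # placeholder for the previous public IPv4
    CONFIGS = [            # placeholder paths that commonly embed the IP
        "/etc/netplan/50-cloud-init.yaml",
        "/etc/hosts",
    ]

    for path in map(pathlib.Path, CONFIGS):
        if path.exists() and OLD_IP in path.read_text():
            print(f"{path} still references {OLD_IP}; update it to the new address")

    # Containers created with the old IP baked in (e.g. published on OLD_IP)
    # will keep crash-looping; they need to be re-created, not just restarted.
    result = subprocess.run(
        ["docker", "ps", "--filter", "status=restarting", "--format", "{{.Names}}"],
        capture_output=True, text=True, check=False,
    )
    print("restart-looping containers:", result.stdout.split() or "none")

After that it's the usual drill: point the netplan YAML at the new address, run netplan apply, re-create the affected containers, and update DNS/rDNS.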
Also waiting, hoping to join you on the Miami beach! I did end up moving one of my VPSes to Atlanta and the network was somewhat passable, with a lot of interesting routing decisions though. The INAP locations are performing a lot better.
MIAMI!!
The bulk of migrations are complete to some level, with a lot of problems. FFM is still facing a disk configuration issue that I couldn't get to, but luckily only a small number of people are affected outside of FFME04, which rebooted with no disks, and potentially FFME05, which is displaying services as online but has a large disk issue.
FFM has ECC RAM onsite and this will be used to repair FFME001, which is actually correcting the errors just fine for now, but we want to avoid any comorbidities.
Migrations had issues with reconfigurations; SolusVM can't handle that many and keeps crashing. We'll be going through today and also fixing the incorrect IPv4 addresses showing up in WHMCS, but for the most part you should be able to reconfigure and get it working. A small percentage of these will still have problems booting back up, and we're still actively going through those right now.
Miami will be the next thing to be worked on, sometime this week. I do believe our schedule will open back up and we'll also be focusing on returning support response times to normal.
The Ryzen migrate button will be converted to a Ryzen-to-Ryzen location change button by around the end of this week.
My LA one finally migrated to LAXA031, but it seems that there are only a few EOL installation templates on that node. Newer ones show up in SolusVM but won't work.
OS templates may have run into issues for LAX since they were interrupted by the network issue we faced, and then interrupted again by the mass migrations. I'll put in another sync soon.
In the meantime all Ryzen nodes are getting a final sync of netboot.xyz to make sure that at least works as an option.
@VirMach, is it possible to move CHI to a closer venue, such as Denver, rather than NYC?
Are all AMS earmarked for FFM?
Also, please refrain from migrating multi-IP VPS until you're closer to fixing that issue.
Patiently (kinda) waiting for rDNS to be working again, so that my server notifications get delivered.
It wisnae me! A big boy done it and ran away.
NVMe2G for life! until death (the end is nigh)
May need to check FFME002 as well, I'm afraid; the one I had migrated to it has been down for quite a while. Every day or two I poke it with a stick, but it seems pretty dead.
Windows 2022 image not working.
Same with me on node FFME003.
I only see the "Migrate" button instead of "Ryzen migrate" even though I got the email (and I only have one service active). Is this normal? I also want to migrate to MIA.
Same for my FFME003-004. One sees no disk, the other isn't even trying to boot.
Missing In Action.
It wisnae me! A big boy done it and ran away.
NVMe2G for life! until death (the end is nigh)
Good luck with that; hope this won't cause any more chaos and ruin my only one in Tokyo that isn't suffering from constant packet loss.
I hope this can be used multiple times, to allow more flexibility in changing needs.
Suggested procedure:
Location change may be used once per month.
SecureDragon was one of the first providers to offer location changes.
They allowed one migration per day, which I think carries too much risk of abuse.
If port 25 unblock is granted, location change is not allowed on this service.
This would cut down spam.
Migration with data is queued into a batch system and not immediate.
Once requested, the service is powered off and locked until the migration completes, which may take up to 24 hours.
Tickets created within these 24 hours are auto-closed.
This would prevent server overloading.
Webhosting24 aff best VPS; ServerFactory aff best VDS; Cloudie best ASN; Huel aff best brotein.
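Purely as an illustration of how that throttling could look on the panel side, here is a rough sketch; the class, names, and rules are made up for this example and are not SolusVM's or WHMCS's actual API.

    from datetime import datetime, timedelta

    COOLDOWN = timedelta(days=30)  # the "once per month" rule from the suggestion above

    class MigrationQueue:
        """Hypothetical batch queue enforcing the proposed rules (illustration only)."""

        def __init__(self):
            self.last_move = {}   # service_id -> datetime of last location change
            self.pending = []     # queued (service_id, destination) pairs

        def request(self, service_id, destination, port25_unblocked, now=None):
            now = now or datetime.utcnow()
            if port25_unblocked:
                return "denied: location change not allowed with port 25 unblock"
            last = self.last_move.get(service_id)
            if last and now - last < COOLDOWN:
                return "denied: only one location change per month"
            # The service would be powered off and locked here, then handled in a
            # batch run that may take up to 24 hours, per the suggestion above.
            self.pending.append((service_id, destination))
            self.last_move[service_id] = now
            return "queued"

    q = MigrationQueue()
    print(q.request("vps7", "MIA", port25_unblocked=False))  # queued
    print(q.request("vps7", "NYC", port25_unblocked=False))  # denied: once per month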
Out of curiosity, what is the difference between NYCB014 and NYCM101? The status page says all nodes should be NYCB, but this one is NYCM; both end up in the same place. Did you fat-finger the B/M?
Also, is this normal? I only see it on AMSD025 (out of the ~8 nodes I have access to).
Haven't bought a single service in VirMach Great Ryzen 2022 - 2023 Flash Sale.
https://lowendspirit.com/uploads/editor/gi/ippw0lcmqowk.png
This hobbyist point of view may be damn annoying for someone who migrates stuff in production and wants to respawn it back online and work on it as soon as possible, or at least within a scheduled window when they can be around to put things back in order, not some random "up to 24 hours, so no sleep for you tonight" timeframe.