Oh, I figured out what happened: the IP address on my server changed. Updating DNS let me log into it, and it appears intact. Oddly, trying to change the root password through the control panel failed, and the SSH host key changed too, which is puzzling.
Anyway, this server is back now. I may try moving my web site to it from Buffalo and see whether Comcast still messes with it.
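For anyone else who hits a changed host key after an IP move: the stale known_hosts entry has to go before SSH will connect cleanly. A minimal sketch using the stock OpenSSH client tools; the IP below is a placeholder, and you should verify the new fingerprint out of band (e.g. via the provider's console), since a changed key can also mean interception:

```python
import os
import subprocess

host = "203.0.113.10"  # placeholder for the server's new IP

# Remove the stale known_hosts entry for this address
subprocess.run(["ssh-keygen", "-R", host], check=True)

# Fetch the server's current ed25519 host key...
scan = subprocess.run(["ssh-keyscan", "-t", "ed25519", host],
                      capture_output=True, text=True, check=True)

# ...and trust it only after checking the fingerprint out of band
with open(os.path.expanduser("~/.ssh/known_hosts"), "a") as f:
    f.write(scan.stdout)
```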
Thanks @Virmach for my Chicago migration! It went perfectly. Just asking so I can plan: is there an approximation (assuming it will happen at all) of when Chicago will be open to migrations?
TYOC040 is down for me. Anyone else?
Tampa beach club is open.
However, my service (currently in Atlanta) doesn't have a free migration button…
If you're able to migrate from Atlanta to Tampa, do let me know, as I wanted to do the same.
But there is no free migration.
Also, the paid migration offers no option to choose where to migrate to.
Tampa = Hotel California
You can migrate anytime you want…
But you can never leave
(Because there’s nowhere to go!!)
"There's been a lot of cases of people doing their own network configurations due to issues with reconfigure button, but then they set the IP address to the node's main IP or a random IP in the subnet. This is considered IP stealing/network abuse, so I don't know if I have to provide a PSA here but definitely only use the IP address assigned to you. We've had some issues with the anti-IP theft script which uses ebtables so had to disable it on a lot of nodes but this doesn't mean you're allowed to use an IP address you're not paying for/not part of your service. You will be suspended and while I understand in some cases it's just a mistake, we're going to be relatively strict depending on the scenario it caused. With that said, one of these cases pretty much caused SJCZ004 to have problems with the initial configuration and indirectly led to this long outage."
@Virmach Curiosity has got the better of me.
I've got 4 tickets merged into one today. I understand why, from your perspective, but how do you know which request relates to which VM? It's not obvious from the ticket on the customer side; presumably that mapping is hidden from our view for each of the ticket entries.
[For the benefit of readers: 2 are paid migration requests (away from Dallas), 1 is a customer-to-customer transfer, and the last concerns the lack of multi-IP support on a single VM.]
In other news, it looks as though my last Buffalo VM is moving out. Where to remains a mystery.
[Edit: It was put out to pasture in NYC.]
[Edit 2: It's currently borked and refusing to reinstall, etc. No big problem for me on this particular VM.]
Trouble in paradise? The main website seems to be down, along with a dedicated server of mine in Buffalo.
Edit: Dedi is back online.
CC spotted the mass exodus and decided to call a halt to the remaining ones! [Speculation]
I'm thinking it's some technical issue instead. My dedi came back online about 10 minutes ago, so I'm back up and running. The VirMach site still appears to be down.
If they only spotted it now, after 1-1.5 months, it sure sounds like CC.
I'm still waiting for IPv6 from them; maybe they'll hear about it next year, or in 2030.
Yeah, I was being (trying to be) facetious, guys.
I wouldn't be able to comment on this.
We have a contingency plan. It's not perfect. Nor were we expecting it to go this far or be done in this way. I've been working on it the entire time, and have emergency replacement servers to some degree. I'm sending out some vague communication that's at the very least necessary to immediately inform customers of the risks at this point.
Our site is indirectly affected. Luckily it's only the few static pages that happened to be on one of the servers facing issues. That isn't important right now; what I described above is, so that's what we're going to be focusing on.
I'm working with our developer today to pro-rate refunds for all dedicated servers where possible, for any remaining duration. We don't want to give people the wrong idea of what's going on, and we want to reduce any panic, at least on any end we can control. Then, on a first-come, first-served basis for emergency tickets requesting a replacement server, we'll begin provisioning those today, as well as backup servers for anyone who requires one. Both of these would temporarily carry no charge (as in, free for the time being).
This should give everyone a few different choices given the scenario, so they can proceed with a purchase elsewhere as necessary, or at the very least have some VPS or server to hold a backup or serve as a temporary home.
Let me know if you guys have any questions or suggestions but please keep in mind they'll have to be specific and direct.
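For anyone estimating what to expect back: pro-rating just means refunding the unused fraction of the prepaid term. A rough sketch of the arithmetic (the function name and rounding are my own illustration, not VirMach's billing code):

```python
from datetime import date

def prorated_refund(amount_paid, term_start, term_end, cancel_date):
    """Refund the unused fraction of a prepaid term (illustration only)."""
    total_days = (term_end - term_start).days
    unused_days = max(min((term_end - cancel_date).days, total_days), 0)
    return round(amount_paid * unused_days / total_days, 2)

# e.g. a $60/year server cut off halfway through its term
print(prorated_refund(60.0, date(2022, 1, 1), date(2023, 1, 1),
                      date(2022, 7, 2)))  # 30.08
```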
Is there any chance I can get my data back?
Are you on a dedicated server that's currently offline?
Is it safe to assume any existing dedicated servers (I had two) are gone for good? No chance at getting any remaining data off them?
I have 2 dedicated servers, Chicago and Los Angeles, both down... so is it safe to assume I'm f**ked?
Relaying these questions on behalf of a friend:
Is it at all likely that any existing dedicated machines will come back up for an hour or so? We've got most of the data kept on off-site backups, but one VM could benefit from a rescue mission.
For servers that were bought as part of the 'migration special' on OGF: will the old pricing be honoured, once that's ready, if the old machines are cancelled?
Thanks for your time. You must be having an absolute nightmare at the moment - thoughts & prayers are with you.
In our next meeting with the legal team, I'll attempt to get some information back for you.
We will of course pursue this to the best of our ability. I don't necessarily know what legal avenues are available, if any, that would assist customers in at least getting some access to their data. If anyone is interested in getting more directly involved in these discussions, let me know and I'll try to pass along your message and see if that's a possibility.
We will of course continue to honor everything that's within our control. We are still planning on continuing as expected with the migration specials. If your friend would like to take that route, then just ensure they request a replacement server. If the old machine is cancelled, as a direct result of today's events, I will consider allowing them to continue the migration end of the special, although it's not currently set up to be processed in that manner.
The only changes we're offering are meant to be to the customer's benefit. We're not, for example, going to cancel a special offer for a dedicated server at a good price if the customer wants to continue it on a new server with our new datacenter partner.
Thanks, means a lot.
Just throwing this out there as well:
Ryzen virtual servers are not going to be affected by this matter. SolusVM and WHMCS (VPS control and billing databases) are safe as well. We're wrapping up any leftovers today and will look at disaster-recovery backups if necessary. Hopefully we've done a pretty good job with the Ryzen migrations so far, so this should be relatively rare.
I just opened 2 network-down support tickets, one for each server. Is that good enough, or should I open one specifically asking for new servers?
All my data is backed up, so I don't need anything from the old servers.
Ticket #319037
Ticket #140230
Same question here. I just opened a ticket for my 3 servers related to them being down, but I'm wondering if I need to specify that I want new servers. This sucks, but I expected CC to pull something shady, so luckily I have backups.
Ohh lordy... I have the "FreeMach" dedi sponsored by you guys. It's "prepaid" through 2030 and renews at like $500/yr. All I'm trying to say is please make sure your dev is aware so you don't send me $4k. Lol
Hoping for the best! Fingers crossed for a replacement, but I completely understand if we have to axe the project given the circumstances. Regardless of what happens, I (and all FreeMach users) sincerely appreciate the 4.5+ years of sponsorship and stellar service!
Take your $4000 and order a replacement at $250/year?
Then you can continue the project until 2038.
No need to worry beyond 2038, because the signed 32-bit Unix timestamp will overflow, which ends the world.
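The arithmetic checks out: $4000 of credit at $250/year is 16 more years of runway, and a signed 32-bit time_t does run out in January 2038:

```python
from datetime import datetime, timezone

print(4000 // 250)  # 16 years, i.e. 2022 + 16 = 2038

# Largest value of a signed 32-bit time_t, as a UTC date
print(datetime.fromtimestamp(2**31 - 1, tz=timezone.utc))
# -> 2038-01-19 03:14:07+00:00
```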
We do have some tickets from before we sent out the emergency notice, and some people aren't fully caught up on the series of events, so it's best to specifically ask for new server details and mention your requirements, if any.
Honestly, I'm just happy that people are empathetic. That in itself makes today a lot easier on me, as it would be insanely difficult to get through the day mentally if we were also being attacked and unable to defend ourselves. Of course there will be some of that, but knowing that the people who stick around the most understand what's happening is enough for me.
We're not going to use this situation to get out of any previous obligations, and that includes "FreeMach." You'll have a replacement. It'll be the same price, so free if it's free.
I haven't figured out how the credits will be done. I want to refund to the original payment method, but that will likely be harder to code. We'll probably issue store credit and then, in this instance, let people move it back to their original payment method if they contact us and it's possible. So I'm not sure if you were going to end up with a boatload of credit in your unique situation, but thanks for making me aware of that potential edge case; I'll pass it on to the developer.
I wonder if VirMach's neighbours can hear again, because I know that if this had happened on top of all the BS already done / happening / still being sorted, there would be lots of angry yelling.
I knew this whole thing would be a rough period, from my experience in the business: so many changes in so little time, and so many issues cropping up along the way even without stupid companies screwing you over. So I'm here for the long haul, and I understand the work you're doing to try and fix everything, which in the end will mean a better setup.
I have two in Chicago that are offline. Short to medium term, I know I can do without one, but I'm going to try for both. It's not much, but at least you know that's one more person who isn't going to lose their shit. We'll figure it out.
100% for laughter value: I literally renewed on Monday. I'm fine with store credits for now, since I'll just be replacing the existing servers eventually anyway.
Question: are the emergency dedis already racked, and if so, are they in multiple locations or all in LA, for example? Tangential to that, how will the location of replacements be handled?