@VirMach said:
They're literally saying they can't even power it on and it has no issues that would cause that. They want to ship it out, but last time they did that they sent the wrong server, and then sandwiched a motherboard with RAM and everything on it and essentially used it as the padding for the wrong server. They've also lost like $2000 of equipment so far.
We're asking for a CMOS reset, but last time we requested that, they didn't know what it was or where it was.
Not looking good.
Ack. The previously known issues with them were enough to make me believe there were probably worse issues we didn't know about, but they've lost and destroyed equipment as well already?
Have you not tried to get in contact with a VP or CEO or something at this point to make them aware of how terrible that location is? Or is it just that the entire company is that degree of competent?
@Daevien said: but they've lost and destroyed equipment as well already?
We sent them two motherboards to swap a board they claimed was broken, with everything already on them. It took weeks/months and they fully lost or "recycled" one of them. Then there's two servers there that they've never been able to get going, and one other motherboard left, which they shipped back with a server that wasn't ours. They used padding on only one side of the wrong server and crammed in the motherboard they returned with it, so basically a 30-40 lb server was crushing it. By the time the box got to us, half of it was fully dented since it had NO padding; it bent the steel ears in the front. Luckily it wasn't our server. Unluckily it WAS our motherboard, and I'm pretty sure our server went to someone else as well.
They did credit us for one motherboard but not for anything on it; they claimed we only sent a motherboard with nothing on it and that they swapped in all the parts from the broken board, which makes no sense, but whatever.
Their "emergency" hourly rate is like $200 by the way. And they won't do anything outside of button presses, maybe RAM swap and KVM attachments. They still take 2 days to tell us a button press wasn't successful.
Let me re-iterate: this means they charge $200 for someone to take days only to come back and basically tell you they have no idea what anything is, they probably pay high school interns $20 an hour to do the work.
@VirMach said:
We sent them two motherboards to swap a board they claimed was broken, with everything already on them. It took weeks/months and they fully lost or "recycled" one of them. Then there's two servers there that they've never been able to get going, and one other motherboard left, which they shipped back with a server that wasn't ours. They used padding on only one side of the wrong server and crammed in the motherboard they returned with it, so basically a 30-40 lb server was crushing it. By the time the box got to us, half of it was fully dented since it had NO padding; it bent the steel ears in the front. Luckily it wasn't our server. Unluckily it WAS our motherboard, and I'm pretty sure our server went to someone else as well.
They did credit us for one motherboard but not for anything on it; they claimed we only sent a motherboard with nothing on it and that they swapped in all the parts from the broken board, which makes no sense, but whatever.
Oof, not only incompetent at basic DC stuff like knowing how to type and how to get into a BIOS, but losing hardware is terrible. Add on destruction of servers and then losing / shipping the wrong server to the wrong customer? Because you know you're likely not the first one with any of these issues.
Edit: and yeah, the rates for remote hands are brutal and always a last resort. But it sounds like these guys basically don't have anyone; they're pretty much charging money and then just ignoring the actual request, giving an "oh, it didn't work" response if questioned. That is extra yikes.
@VirMach Thank you for fixing the paid migration button. I now have locations to migrate to in my Atlanta VM.
I will pay $3 to migrate out of Atlanta. Is Dallas still problematic or did you get a new DC there?
@FrankZ said: @VirMach Thank you for fixing the paid migration button. I now have locations to migrate to in my Atlanta VM.
I will pay $3 to migrate out of Atlanta. Is Dallas still problematic or did you get a new DC there?
I wouldn't recommend Dallas. We're considering Hivelocity there if they'd just not piss me off for one week. Unfortunately they've met their quota for this week so we'll check back next week.
@BlazinDimes said:
I'm certainly enjoying it here more than OGF so far. Also enjoying VirMach's generosity during this transitional period.
The straight up communication with the community is fantastic.
LET has turned into a shithole; I try to make an effort not to participate there anymore. The community is basically gone, it's all just ads from corrupt providers.
@VirMach said: By the time the box got to us half of it was fully dented since it had NO padding, it bent the steel ears in the front. Luckily it wasn't our server. Unluckily it WAS our motherboard, and I'm pretty sure our server went to someone else as well.
See - you should look at it a different way... the DC used a server chassis as padding! Steel chassis - those should be indestructible; they took very good care of your motherboard inside, it was well protected!
On a serious note - isn't anyone in the DC world recording the packing process, just for insurance claims? They probably have rooms dedicated just to packing stuff - why not add two or three cams there and automate the recording (timestamps, barcode scans, anything)? Many shops/sellers do it...
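Something like this minimal sketch is all I mean - purely hypothetical (the CSV file name, the station label, and reading scans from stdin are my own assumptions), since most USB barcode scanners just type the code followed by Enter:

```python
# Hypothetical packing-log sketch: one CSV row per scanned package, with a UTC
# timestamp, so there's at least a record to point at when an insurance claim
# or a "we never received it" dispute comes up.
import csv
from datetime import datetime, timezone

LOG_FILE = "packing_log.csv"   # assumption: where the log lives
STATION = "pack-bench-1"       # assumption: label for the packing bench / camera

def record_scan(barcode: str) -> None:
    """Append one scan event (barcode, station, UTC timestamp) to the CSV log."""
    with open(LOG_FILE, "a", newline="") as f:
        csv.writer(f).writerow([barcode, STATION, datetime.now(timezone.utc).isoformat()])

if __name__ == "__main__":
    # Most USB barcode scanners act as keyboards, so each scan arrives as a
    # line on stdin followed by Enter.
    print("Scan a package barcode (Ctrl+D to stop):")
    try:
        while True:
            record_scan(input().strip())
    except EOFError:
        pass
```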
@soulchief said:
Anyone else waiting for a VPS for making a ticket guess on Oct 5th? I messaged the account details on the 9th but maybe that was too late?
A good number of people are waiting and you'll probably need to wait longer. There are a bunch of inactive people who only halfway followed the instructions, and a few bad apples in there as well, and no time to sort and go through them. Not saying you're part of the bad crowd, but I am saying you're part of the batch that's not going to get done any time soon.
What does bad apple mean? Is it a metaphor?
I've been waiting after sending a message, did I miss something?
@Jab said: On the serious note - isn't anyone in DC world recording package process, just for insurance claims? They probably have rooms dedicated to just packing stuff - why not add two-three cams there, automate recording (timestamps, bar code scan, anything)? Many shops/sellers do it...
From my experience, all DCs are basically allowed by default to screw anything up, set everything on fire, lose hardware, steal hardware. And if they added cameras to that area, beyond their own closed-loop system to catch external thieves, it'd just end up biting them in the ass. When they basically have no responsibility, they have no reason to add any when you have no other choice.
None of them even had a contract that said "hey, we acknowledge we have your hardware."
It's probably why it's so easy for them to do whatever they want and for criminals to take advantage of the whole system. Pretty much the second you send it off, you have to assume one day it won't be yours. Imagine doing that with anything else...
Uhmmm, has FFME001 just died?
I just moved some files there and did a little development, and I hadn't had time to set up a backup as I was going to do it in an hour.
Fucking Murphy's law?
EDIT: I've decided to check the network status...
Frankfurt Network Maintenance (In Progress) - Low
Affecting Server - SolusVM
10/12/2022 15:27 - Last Updated 10/12/2022 15:28
This issue affects a server that may impact your services
We're performing minor network maintenance in Frankfurt. There should not be any loss of connectivity. If something goes wrong, it's possible there may be connectivity loss for a few minutes.
So network maintenance gone wrong; I would assume my files are safe!
EDIT 2:
Fuck, now my WHMCS/SolusVM shows the node as Offline rather than Timed out after XXXX ms. I am too scared to press Power on, I will wait for the network status update.
EDIT 3:
Online, however uptime 0 minutes - seems like it was more than network maintenance. Time to worry? Time to backup ASAP, brb, gonna hammer disk for a moment!
Oh yes, backup in progress... but I forgot to exclude some directories and now I am gonna back up 8GB of OpenStreetMap tiles, so like 300k small (10 kB?) files. Good job, this will take years.
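Note to self for next time - a minimal sketch of skipping the tile directory while building the archive (the /srv/osm paths below are just placeholders for wherever the tiles actually live):

```python
# Build a gzipped tar backup while skipping a heavy subdirectory
# (e.g. ~300k tiny OpenStreetMap tile files that aren't worth archiving).
import tarfile

SOURCE_DIR = "/srv/osm"                 # placeholder: directory being backed up
EXCLUDE_PREFIXES = ("srv/osm/tiles",)   # placeholder: tile cache to skip (archive-relative path)

def skip_excluded(tarinfo: tarfile.TarInfo):
    """Return None for anything under an excluded prefix so tarfile drops it and doesn't recurse into it."""
    if tarinfo.name.startswith(EXCLUDE_PREFIXES):
        return None
    return tarinfo

with tarfile.open("backup.tar.gz", "w:gz") as tar:
    # tarfile stores member names without the leading "/", hence the archive-relative prefixes above.
    tar.add(SOURCE_DIR, filter=skip_excluded)
```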
@soulchief said:
Anyone else waiting for a VPS for making a ticket guess on Oct 5th? I messaged the account details on the 9th but maybe that was too late?
A good number of people are waiting and you'll probably need to wait longer. There are a bunch of inactive people who only halfway followed the instructions, and a few bad apples in there as well, and no time to sort and go through them. Not saying you're part of the bad crowd, but I am saying you're part of the batch that's not going to get done any time soon.
What does bad apple mean? Is it a metaphor?
I've been waiting after sending a message, did I miss something?
You have an apple tree and go pick the apples; most of the apples are good, but every so often you pick a bad apple (rotting, worms, etc.) that needs to be tossed in the garbage. So VirMach found some "bad apples"/people (probably ones trying to give fake proof, or banned accounts), and he needs to get rid of those first.
@Jab said: Online, however uptime 0 minutes - seems like it was more than network maintenance. Time to worry? Time to backup ASAP, brb, gonna hammer disk for a moment!
Miscommunication with the xTom network team, they rebooted it I think. I didn't even notice. The other ones should go smoother. Sorry about that, I really wasn't expecting it or else we'd have sent advance notice. We just got them available right now and wanted to avoid having to schedule it, as it's difficult to do.
Edit: by miscommunication I mean he just rebooted it without telling us.
This was a node that was overloading due to the previously described issue regarding disk and/or swap space. We sent this out to a dozen or so people, maybe a little more: we would either try moving the swap file, or, if the disk was failing, clear out the disk as a second step, in which case you might have been migrated.
This should be done by now, are you saying yours is still offline?
@FrankZ said: @VirMach Thank you for fixing the paid migration button. I now have locations to migrate to in my Atlanta VM.
I will pay $3 to migrate out of Atlanta. Is Dallas still problematic or did you get a new DC there?
I believe Dallas is still the same DC run by the above dumpster fire
I just tried to jump ship from ATL to CHI but the request didn't work lol
Probably pick Tampa.
:-[
@Jab said: Uhmmm, has FFME001 just died?
Yeah, was just going to post that there's a maintenance report for Frank. Frank should be fine, Frank is not on fire.
What is the state of IPv6 at the moment? I remember at least a couple machines in Europe/Asia got something. Just wondering how far it's spread.
Hello! I can't access my VPS, it shows "The node is currently locked." What's going on here? Thanks.
It's TYO036. @VirMach
@VirMach FFME002's network has gone bad, packet loss is high, maybe there are abusers
Nope that's just my keyboard, made a typo. The type of typo that makes networking lag.
(edit) Okay it had nothing to do with my typo, looking into it still.
@VirMach, FFME004 takes too long to load, not offline, but just spinning...
We had to end up rebooting them all because apparently it doesn't want to work without it. I tried everything else beforehand. Sorry.
This was maintenance for getting LACP working. Good news is Frankfurt is now 2 x 1Gbps I guess.
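For anyone curious what an 802.3ad/LACP bond looks like from the host side, here's a rough sketch that just prints the interesting lines from the Linux bonding driver's status file - assuming a standard bonding setup with an interface named bond0, which is only a guess and not a description of the actual FFM nodes:

```python
# Rough sketch: print the key lines from the Linux bonding driver's status
# file for an 802.3ad (LACP) bond. Assumes the bond interface is named bond0.
from pathlib import Path

STATUS_FILE = Path("/proc/net/bonding/bond0")  # assumption: bond interface name

if STATUS_FILE.exists():
    for line in STATUS_FILE.read_text().splitlines():
        line = line.strip()
        # Mode, member links, link state, and speed show whether both
        # 1Gbps ports actually joined the aggregate.
        if line.startswith(("Bonding Mode", "Slave Interface", "MII Status", "Speed", "Aggregator ID")):
            print(line)
else:
    print("No bond0 status file here - bonding driver not loaded or a different name is in use")
```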
Will this be rolling out to other locations as well?
My VPS is finally up again after a long time of being unstable.
Thank you.
Definitely not like THAT. Holy crap what a nightmare
(edit) But yes
nice, reboot!
YABSing
smashing the temp test server on FFME003
I bench YABS 24/7/365 unless it's a leap year.
@VirMach, FFME004 works well now. Thanks.
Interesting YABS output on FFME003 VM:
Stop ruining FFME03!
-checks my list-
Oh, you can abuse that, I don't have 03, I have 01, 02, 04.