@ahnlak said:
I'm curious; is it that DCs run by staff with functioning brains simply don't exist, or that they're out of the price range of "low end" providers?
HV definitely charges a lot compared to others in the same location. I'm fuming right now, so I'm taking a break; I'll be back later and explain.
Anyone please while I'm gone try to think of one good reason we should keep Tampa other than not having to migrate everyone again. I'm actually going to go touch some grass or something, sorry. Yes, that means probably an 8 hour delay at least on everything. Funny how I can deal with a thousand MJJs in a Telegram group saying they're going to hire hitmen while constantly flaming me on tickets, all while I work without sleeping for days in a row, but this is what sets me off: Hivelocity's ineptitude.
@codelock said:
I remember it being there, easily accessible. Does this mean I have zero credits, VirMach?
Hi @codelock. If the credit does not show where @Jab said above, then probably yes, but take a look at that transfer invoice and see if the $1 was applied.
Sub Total $3.00 USD
Credit $1.00 USD
Total $2.00 USD
Yes, the credit was applied in that invoice.
Should I make a support ticket for this?
I'm probably not the right person to ask, because if it was my mistake, and only $1, I would not expect support to fix it for me; I'd just let it slide.
@VirMach said: Anyone please while I'm gone try to think of one good reason we should keep Tampa other than not having to migrate everyone again.
Not going to get a good reason out of me as I paid to migrate out of Tampa shortly after you opened Miami. I thought the Tampa network was subpar for my use case, even if it was Hivelocity. Miami has been much better from my point of view. Just my 2 cents.
@VirMach said:
Anyone please while I'm gone try to think of one good reason we should keep Tampa other than not having to migrate everyone again. I'm actually going to go touch some grass or something, sorry. Yes, that means probably an 8 hour delay at least on everything. Funny how I can deal with a thousand MJJs in a Telegram group saying they're going to hire hitmen while constantly flaming me on tickets, all while I work without sleeping for days in a row, but this is what sets me off: Hivelocity's ineptitude.
@VirMach said:
One server. He spent an hour rebooting one server this morning. From a request two days ago to just power cycle once for something I don't want to get into.
I seriously can't think of a single reason why you would continuously restart a server except for stupidity or maliciousness. Or both?
Not going to get a good reason out of me as I paid to migrate out of Tampa shortly after you opened Miami. I thought the Tampa network was subpar for my use case, even if it was Hivelocity. Miami has been much better from my point of view. Just my 2 cents.
I have stuff in Miami, haven't been in the Tampa location. I think I checked a route to one there and found Miami was better for me as well, so if that's a common thought, it doesn't sound like a terrible idea I guess.
Though getting them to pack up and ship servers to you or to Miami when they spend an hour drooling while pushing a power button could be a nightmare...
Maybe it's time to hit up the most senior person in that company you can and ask, like, WTF is happening dude, can you hire someone with an IQ higher than a rock?
@Daevien said:
Maybe it's time to hit up the most senior person in that company you can and ask, like, WTF is happening dude, can you hire someone with an IQ higher than a rock?
Probably no luck on that front.
@codelock said:
I remember it being there, easily accessible. Does this mean I have zero credits, VirMach?
Hi @codelock. If the credit does not show where @Jab said above, then probably yes, but take a look at that transfer invoice and see if the $1 was applied.
Sub Total $3.00 USD
Credit $1.00 USD
Total $2.00 USD
Yes, the credit was applied in that invoice.
Should I make a support ticket for this?
I'm probably not the right person to ask, because if it was my mistake, and only $1, I would not expect support to fix it for me; I'd just let it slide.
@VirMach said: Anyone please while I'm gone try to think of one good reason we should keep Tampa other than not having to migrate everyone again.
Not going to get a good reason out of me as I paid to migrate out of Tampa shortly after you opened Miami. I thought the Tampa network was subpar for my use case, even if it was Hivelocity. Miami has been much better from my point of view. Just my 2 cents.
Will I need to pay the remaining 2 USD? Will not paying affect my account anyway?
I used the migrate button with the 3 USD fee, then a ticket and invoice were created, but then I didn't want to migrate, so I clicked on close ticket, but the invoice is still pending.
So here is the question: do I have to pay the invoice, or can I let it expire?
If I am required to pay the invoice and do pay it, how can I reopen the ticket? I cannot re-open the ticket by replying, it seems.
PS don't ban me 😗
If you didn't pay you don't have to pay. If it auto pays you can ask for a refund.
@FrankZ said:
Not going to get a good reason out of me as I paid to migrate out of Tampa shortly after you opened Miami. I thought the Tampa network was subpar for my use case, even if it was Hivelocity. Miami has been much better from my point of view. Just my 2 cents.
That's interesting to me, since I've seen the opposite on my end. Then again, a lot of my traffic goes back into the States. If you're connecting from South America/Mexico then I can see Miami being better, since QN has Telxius in their blend. The big NA/EU carriers are pretty weak down there with the exception of Lumen, and whenever I send traffic from Tampa to LATAM I see it going through Lumen. Both are fine IMO, but Hive has somewhat cleaner routes across the board. I guess the only competent people on staff there are the ones working in the NOC.
Okay, I couldn't do it. There's nothing for me outside; I'm too used to working.
TPAZ002 is being shipped out because I can't trust HV; we're overnighting it. FFME004 is still having memory errors; either we're insanely unlucky because they were fresh sticks, or the motherboard is just going haywire. Working on it now, going to try to move people off the one disk that's still unstable and then run a memory test and do another swap.
@bakageta said:
Yep, no change here, seems like I'm just unlucky this time. I'm sure it'll get sorted out.
Ticket already in, confirmed it's just you two it seems, same blade server.
Looks like it's back. Uptime matches up with when I clicked reboot in the panel to confirm the server wasn't hung, so it looks like panel controls were functional at the time. @rpollestad any luck with yours? I'm going to go ahead and close out my ticket, everything seems fine on mine now.
@yoursunny said:
By the year 2060, most of Miami would be underwater.
If you only have Miami but not Tampa, you are in trouble.
Virmach will be in the mental ward long before 2060 if each year is like this one...
Yes, but the choice is mine: will I also be under water? Very compelling argument by @yoursunny, but all I can see is the positive side: if I drown with the servers in Miami, no HV DC hands will be there to save my life.
@Daevien said: I seriously can't think of a single reason why you would continuously restart a server except for stupidity or maliciousness. Or both?
Okay, to set the tone, let's quickly remember that in Tampa, sales sold us a half cab, and the only reason we went with that location, or with them in general, was that we needed to get servers up fast. Then the thing happened where the power drop wasn't even done, and there was no PDU; then apparently they gave us a PDU with only like four ports, billed us for another, and then sales came in telling me he never promised a PDU and that they had already given us two out of courtesy. No need to pretend that makes sense; let's move on.
TPAZ005 was having thermal issues last week; I asked for an emergency thermal repaste and then got way too busy. Of course that just means I didn't check. This was their response:
We are looking into this for you now has the server been powered down? Please also note that remote hands is 100/hr.
Like I really don't get it, but yeah, it was my bad this time, I forgot to constantly repeat that I want to be billed so hard. So anyway, my mistake, and when I got back to them it looked like it was doing OK briefly, enough to schedule it instead. Nope, it went down again like a few days later due to a thermal event.
So I don't know what you guys think, but usually if a CPU thermal throttles and powers off, it doesn't get damaged, because it successfully achieved its job of choosing to power off. And the number wasn't insanely crazy; I didn't go in and try to raise the critical limit or anything like that.
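(Side note: below is a rough sketch of the kind of host-side temperature check one could script to catch a box running hot before it hits a thermal shutdown. It's a minimal sketch, assuming a Linux host that exposes its sensors through the standard hwmon sysfs interface, which on Ryzen boards is usually the k10temp driver; the 90°C warning threshold is just an example number, not a vendor limit or our actual setting.)

```python
#!/usr/bin/env python3
"""Rough sketch: read CPU temperatures from the Linux hwmon sysfs interface.

Assumes a Linux host with hwmon sensors (e.g. the k10temp driver on Ryzen).
The 90.0 C warning threshold is an example value, not a vendor limit.
"""
from pathlib import Path

WARN_C = 90.0  # example threshold in degrees Celsius


def read_temps():
    # Each hwmon device exposes a chip name plus one or more tempN_input files (millidegrees C).
    for hwmon in sorted(Path("/sys/class/hwmon").glob("hwmon*")):
        chip = (hwmon / "name").read_text().strip()
        for temp_input in sorted(hwmon.glob("temp*_input")):
            label_file = hwmon / temp_input.name.replace("_input", "_label")
            label = label_file.read_text().strip() if label_file.exists() else temp_input.name
            celsius = int(temp_input.read_text().strip()) / 1000.0
            yield chip, label, celsius


if __name__ == "__main__":
    for chip, label, celsius in read_temps():
        flag = "  <-- WARN" if celsius >= WARN_C else ""
        print(f"{chip:12s} {label:16s} {celsius:6.1f} C{flag}")
```

Run something like that from cron or a monitoring agent and alert on the WARN lines; at least then you see a slow thermal climb instead of finding out from a hard power-off.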
We put in a DC hands request two days ago, and I hate myself for this, but the tech that picked up the ticket had trouble locating the server. I should've just been like, nope, never mind, we're good. I had to point out to him that the server in RU05 (rack unit #5) was indeed the 1, 2, 3, 4, 5th server from the bottom. He asked me to circle it in a photo he sent me, or something like that.
He finishes the thermal paste re-application successfully, but brings up that it won't POST.
I ask him if by any chance the CPU got stuck to the heatsink when he was removing it, and he confirms it did. Apparently that wasn't anything significant to mention. So I'll take over the story now with some speculation: the tech damaged the CPU. No worries, I'm calm, it happens; I just want him to replace it with another CPU we have on hand. At some point in between this he was still in denial, and I had to basically assure him that it's definitely a problem with the CPU, since it happened right after a thermal paste re-application, as he wanted to probably spend 3 hours "troubleshooting." I let him know to thoroughly check everything, bent pins, and that now he has to take the heatsink off another server we're using for parts, put it in this one, etc. He finally agrees, and replies back super quickly, like faster than it took for the initial request, suspiciously quickly, confirms everything's fine and that the new CPU won't work either. Seems like he was in a hurry to leave work, as he mentioned the next person being able to help. And he mentions something about the CPU pad maybe being damaged by "heat."
Okay, so good, the guy is leaving and someone else will be able to take over and get us back online; we're down two Ryzen CPUs and maybe a motherboard.
So the servers we sent in a long time ago finally got "set up," except networking got confused by their own team's communications. They said everything was done outside of switch configuration, or that's how I took it, but I couldn't get IPMI working to try to identify a server with the same motherboard (since it has Gen4 NVMe drives).
The goal at this point is to get any of the six other servers to be a donor. Oh, and an important funny comment: someone, I think maybe the original tech, mentioned something along the lines of "don't worry, Tampa is 24x7."
Next guy on shift...
My request: can you please reboot RU6 through RU10? Then two more sentences about how I'm trying to determine a suitable board. Their reply, and I feel like I have to say "I'm not making this up, this is 100% actually their reply" a couple of times, but this is one of them:
"Just to verify. Are these the 5 units that you have indicated to reboot?"
So he confirms it's done, and now I know they were full of s*** when they said IPMI was set up on them. Okay, ticket for networking. I basically ask them to fix it for IPMI. And then I follow it up with this: if the above isn't clear, please at the very least just power cycle RU06 through RU10, that's the top-most five servers in our cabinet (not the bottom five). Then we'll proceed with other steps as necessary. I left that in in case the network engineers were not there/not working/no one capable of doing it was around over the weekend.
I was both right and wrong.
A ) The networking guy did get back to us 3 hours ago after about a day and a half. I find out that for some reason they did not route the IP block to our switch.
B ) The networking guy moved the ticket back to "general support" to set up the networking/IPMI.
The "general support" guy is the one guy we asked to never work on our tickets. We asked him, as well as management or whoever/whatever position you want to call it, and with some help they eventually mentioned that they'd make sure that doesn't happen.
He replies, and mind you the power cycle was already done at this time:
"I will go ahead and power cycle it as requested."
"Being an ASRock motherboard they do take some time but after 2x 30 min sessions there is still no image."
So literally coming in again, not reading anything, spending an hour trying to boot up one single server that we never asked him to troubleshoot. And you want to know the best part? The server he spent an hour troubleshooting, we already know what's up with it, because it was already addressed by the last guy, and anyone would immediately be able to tell what's going on, but for some reason this block of cheese decided to spend an hour pretending to reboot the server, derailing the ticket about IPMI networking, not addressing any actual concern. Like, I don't know, maybe we said we're trying to identify the board on it; if you're going to spend an hour, maybe check the board model? He didn't even open it, he literally just pressed the power button for an hour.
(Oh, and the issue is that VLAN is disabled in the BMC settings for IPMI on this board. Nothing to do with power; he probably pressed it so many times that now it might have a second issue.)
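(For reference, the BMC VLAN setting is something you can usually verify and flip yourself in-band with ipmitool once you have any access to the host, instead of waiting on remote hands. Here's a minimal sketch, assuming ipmitool is installed, the host exposes the BMC locally through /dev/ipmi0, and LAN channel 1 is the BMC's network channel; the channel number and VLAN ID below are placeholders, not our actual config.)

```python
#!/usr/bin/env python3
"""Minimal sketch: inspect, and optionally set, the BMC's 802.1q VLAN via ipmitool.

Assumptions: ipmitool is installed, the in-band IPMI driver (/dev/ipmi0) is loaded,
and LAN channel 1 is the BMC's network channel. Channel and VLAN ID are placeholders.
"""
import subprocess
import sys

CHANNEL = "1"    # placeholder: the BMC's LAN channel
VLAN_ID = "101"  # placeholder: the tagged VLAN the switch port expects


def lan_print(channel: str) -> str:
    # "ipmitool lan print <channel>" lists the BMC's IP settings, including the 802.1q VLAN ID line.
    result = subprocess.run(
        ["ipmitool", "lan", "print", channel],
        check=True, capture_output=True, text=True,
    )
    return result.stdout


def set_vlan(channel: str, vlan_id: str) -> None:
    # "ipmitool lan set <channel> vlan id <id|off>" enables or disables VLAN tagging on the BMC.
    subprocess.run(["ipmitool", "lan", "set", channel, "vlan", "id", vlan_id], check=True)


if __name__ == "__main__":
    before = lan_print(CHANNEL)
    print(before)
    vlan_line = next((line for line in before.splitlines() if "802.1q VLAN ID" in line), "")
    if "Disabled" in vlan_line and "--set" in sys.argv:
        set_vlan(CHANNEL, VLAN_ID)
        print(lan_print(CHANNEL))
```

Run it plain to just see the current config, or with --set to actually tag the BMC onto the VLAN; either way it beats an hour on the power button.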
@bakageta said:
Yep, no change here, seems like I'm just unlucky this time. I'm sure it'll get sorted out.
Ticket already in, confirmed it's just you two it seems, same blade server.
Looks like it's back. Uptime matches up with when I clicked reboot in the panel to confirm the server wasn't hung, so it looks like panel controls were functional at the time. @rpollestad any luck with yours? I'm going to go ahead and close out my ticket, everything seems fine on mine now.
Might as well roast them here too: we let them know all the people on this single blade server are down, abruptly, as in an outage. As in, it's offline.
Their reply back:
Hello Soheil,
May we reboot your servers?
(No, sending a reboot command unfortunately didn't fix it; actually fixing it fixed it.) But hey, six people and two damaged CPUs and a board weren't involved in that, so I'll consider it a success, especially relatively quick, relative to 24x7 Tampa facilities of course.
Ooof. I've been tempted to colo something myself so many times, but then it feels like every single person I know with firsthand experience has these kinds of issues, and I just don't think I could deal with it.
@bakageta said:
Ooof. I've been tempted to colo something myself so many times, but then it feels like every single person I know with firsthand experience has these kinds of issues, and I just don't think I could deal with it.
It is consistently terrible and almost never worth it. Only ever consider colo if you have some weird need that can't be fulfilled by a generic dedi. Alternatively, pick a datacenter near your home and do all the moderately complex tasks yourself.
@VirMach said:
So literally coming in again, not reading anything, spending an hour trying to boot up one single server that we never asked him to troubleshoot. And you want to know the best part? The server he spent an hour troubleshooting, we already know what's up with it, because it was already addressed by the last guy, and anyone would immediately be able to tell what's going on, but for some reason this block of cheese decided to spend an hour pretending to reboot the server, derailing the ticket about IPMI networking, not addressing any actual concern. Like, I don't know, maybe we said we're trying to identify the board on it; if you're going to spend an hour, maybe check the board model? He didn't even open it, he literally just pressed the power button for an hour.
(Oh, and the issue is that VLAN is disabled in the BMC settings for IPMI on this board. Nothing to do with power; he probably pressed it so many times that now it might have a second issue.)
Yeahhhh, I would be relaying this information to someone higher up than you already have and stressing very heavily that I was not impressed. Incompetence that you had to pay a high rate for, only for it to break your server hardware. That's nothing but losses all around for you, while the walking rock has probably done this to other customers before and will continue to unless he's stopped.
@fluttershy said:
It is consistently terrible and almost never worth it. Only ever consider colo if you have some weird need that can't be fulfilled by a generic dedi. Alternatively, pick a datacenter near your home and do all the moderately complex tasks yourself.
Yeah, back years ago when I was in Dallas, I did some minor stuff for other companies in the Infomart. A lot of the time it was safer to get someone who was running their own servers and who you'd had a few conversations with than to use remote hands lol
@Daevien said:
Yeahhhh, I would be relaying this information to someone higher up than you already have and stressing very heavily that I was not impressed. Incompetence that you had to pay a high rate for, only for it to break your server hardware. That's nothing but losses all around for you, while the walking rock has probably done this to other customers before and will continue to unless he's stopped.
Is SJC HiV as well?
SJC is Dedipath/Internap. Pretty sure HiV is aware at this point.
I'm gonna reach out to Steve (Hivelocity COO) to see if there's anything he can do to address the Tampa issues. Not sure if you've spoken with him directly before, but he sometimes frequents the LE forums. The incompetence is beyond disturbing (and should be to HV leadership as well).
@fluttershy said:
SJC is Dedipath/Internap. Pretty sure HiV is aware at this point.
Ah, couldn't remember who it was through; it's an area I don't have a ton of usage in, so I just have PHX and one or two non-VirMach LA ones. (My PHX is an LA 14 escapee lol)
@Mason said:
I'm gonna reach out to Steve (Hivelocity COO) to see if there's anything he can do to address the Tampa issues. Not sure if you've spoken with him directly before, but he sometimes frequents the LE forums. The incompetence is beyond disturbing (and should be to HV leadership as well).
Already raised it with Steve, so you don't need to (but you can, if you want)
Power button not connected to brain, commencing emergency reboot (two sessions of 30 minutes each before he ran out of battery and went for a Red Bull)!
It's funnier when you join such a group.
Wow! What the hell is "Remove Additional IP Fee"? That is even dumber than "Enable Port 80/443 Fee".
Tried; IPv6 doesn't work btw.
Yo, join our premium masochist club.
Let me go find where VirMach said this would not be a problem. Be right back.
EDIT: Here you go ...
Yes, I am using this location to cover southeastern Mexico, and it really has to do with the provider here, Telmex, more than anything else.
Now I have to change my keyboard. Spit my OJ all over it reading that comment.
Makes sense, Telmex is connected to Telxius, so I can see where the routing would be better.
@VirBOT was born in these few days.
On the positive side, you won't have to worry about overheating CPUs. Free water cooling.