@Papa said:
My FFME002 has been offline since yesterday.
My migrated one had been working fine, but I checked after seeing the messages here today (I hadn't added it back into my monitoring setup) and saw it was down. It seemed to have no bootable OS, and a reinstall attempt had issues with the drive, so it could be a storage-related issue. Not a big deal for me since I haven't really been doing much with that one lately. Hopefully no major issue though.
I got two emails telling me about the Ryzen migrate option. I guess one of them is for my non-Ryzen KVM currently running in SJC. I think I'll wait to migrate that one until the SJC Ryzens are declared stable and working. I'm wondering about the second email. It might just be random, but I have another VirMach BF VPS, a small OpenVZ, so I'm wondering if it was generated because of that, and whether that one can also be migrated. I had been planning to use it for monitoring, but in practice it's been an idler.
@willie said:
but I have another VirMach BF VPS, a small OpenVZ, so I'm wondering if it was generated because of that, and whether that one can also be migrated. I had been planning to use it for monitoring, but in practice it's been an idler.
Yep, they are moving all VPS as far as I know, even the BF ones. Though I thought they had changed all the OpenVZ ones to KVM a while back? (I had phased all of mine out before they did that, so I don't know the status first-hand, I guess.)
I imagine it's costing them on some of the weird/special packages, but they are still doing it rather than just dropping them, so good on them.
@willie said:
I got two emails telling me about the Ryzen migrate option. I guess one of them is for my non-Ryzen KVM currently running in SJC. I think I'll wait to migrate that one until the SJC Ryzens are declared stable and working. I'm wondering about the second email. It might just be random, but I have another VirMach BF VPS, a small OpenVZ, so I'm wondering if it was generated because of that, and whether that one can also be migrated. I had been planning to use it for monitoring, but in practice it's been an idler.
@Daevien's right, pretty sure they discontinued OVZ a few years back and converted everybody to KVM. I converted my (formerly) OVZ box over to Ryzen last month and it went fine.
@VirMach I assume FFME001.VIRM.AC shit itself, just like 002?
It's been rebooting the whole day today; now I've VNC'ed in to see "read only filesystem", and after a reboot we're back to the standard 'no disk found, fuck you'. Shutdown, reinstall Debian, same thing: 'no disk found, fuck you'?
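(For anyone hitting the same symptoms, here's a rough checklist to run from VNC before giving up. It assumes the virtual disk still enumerates inside the guest, shown here as /dev/vda; adjust to your layout:

    dmesg | tail -n 30         # look for I/O errors or the disk dropping away
    mount -o remount,rw /      # a read-only root can sometimes be remounted,
                               # though it tends to flip back if the NVMe is failing
    fsck -f /dev/vda1          # only from a rescue/live environment, never on a
                               # filesystem that's mounted read-write

If the disk doesn't enumerate at all, i.e. the installer's "no disk found", the problem is host-side and nothing inside the guest will fix it.)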
@FrankZ said: It shows offline in the billing panel, and the same as you described above in SolusVM.
The VPS itself is running fine and does not have any issues.
SolusVM is broken on it. I haven't had a chance to fix it since it's lower down the priority list, given the node itself is actually online, as you mentioned. Controls will be broken though, so anyone whose VM is offline because of a reboot or other command won't be able to boot it back up while controls are broken.
@Jab said: @VirMach I assume FFME001.VIRM.AC shit itself, just like 002?
Frankfurt is constantly shitting itself and I still don't know why for some of the nodes. It seems like some of the BIOS or kernel settings aren't sticking properly, so the NVMe drives are winding down.
For one or two of them though it may be related to memory issues. That's either 002 or 001 if I recall correctly.
@Daevien said: …rather than just dropping them, so good on them.
We don't really care about the monetary cost, that's something we can eat. What it ends up costing though is a lot of headache and of course bad experiences where it doesn't go smoothly.
Our team isn't necessarily prepared to do these large scale server builds and constant migrations and I'm sure it will be frustrating when it affects support reply times, and a lot of high paying customers end up leaving as a result. We'll try to avoid migrating everyone again a third time to avoid these headaches for us and everyone else.
@willie said:
I got two emails telling me about the Ryzen migrate option. I guess one of them is for my non-Ryzen KVM currently running in SJC. I think I'll wait to migrate that one until the SJC Ryzens are declared stable and working. I'm wondering about the second email. It might just be random, but I have another VirMach BF VPS, a small OpenVZ, so I'm wondering if it was generated because of that, and whether that one can also be migrated. I had been planning to use it for monitoring, but in practice it's been an idler.
Yeah, SolusVM sends emails per node so if you have more than one it'll send it for all of them. We were just trying to inform everyone at once and I forgot SolusVM may do that.
Quick update on XPG drives: I haven't had a chance to take them to the post office, and our post office is horrendous at pickups; they just fully ignore them. So they haven't been getting picked up, and I haven't been driving to the post office. Sorry for the delay, but at the same time I feel like I gave plenty of warnings that you might end up getting it "soon".
@VirMach said: Quick update on XPG drives: I haven't had a chance to take them to the post office, and our post office is horrendous at pickups; they just fully ignore them. So they haven't been getting picked up, and I haven't been driving to the post office. Sorry for the delay, but at the same time I feel like I gave plenty of warnings that you might end up getting it "soon".
That's a good thing in my case.
Grinch stole the little name card in my mailbox about three weeks ago.
Overly careful mail carrier believes there's nobody living at my address and has been returning all the first class mail to the sender.
I only figured out this problem when an eBay order with tracking number showed weird status.
I wrote to the postmaster to complain.
A new name card showed up the next day, and then mail delivery returned to normal.
Take your time for this walk.
Santa Monica Blvd traffic is so bad.
Priority should be given to opening Miami beach club.
My current node expires on July 05.
@VirMach said:
We don't really care about the monetary cost, that's something we can eat. What it ends up costing though is a lot of headache and of course bad experiences where it doesn't go smoothly.
Yeah, I only moved a few VPS that aren't critical, so it isn't a big deal to me. I've got two others that will prob move last, though, since I'd suffer if they went down all the time, so I understand others may not be so forgiving. In the end though, considering a lot prob have BF deals that at this point are costing you money, I think a little patience is more than fair.
Our team isn't necessarily prepared to do these large scale server builds and constant migrations and I'm sure it will be frustrating when it affects support reply times, and a lot of high paying customers end up leaving as a result. We'll try to avoid migrating everyone again a third time to avoid these headaches for us and everyone else.
*sends coffee to team* I'm sure it's been rough and weird stuff for them. I've gone through similar types of things over the years. I've been out with back issues for a while now, but somehow, crazy things-on-fire moments do seem a little interesting when you usually spend your days just trying not to be in pain.
@yoursunny said: Priority should be given to opening Miami beach club.
My current node expires on July 05.
I'll try to work in some time to set up a node here for you. It's all there, it just needs to be configured. Assuming the node doesn't just die on first boot.
@VirMach said:
We don't really care about the monetary cost, that's something we can eat. What it ends up costing though is a lot of headache and of course bad experiences where it doesn't go smoothly.
Yeah, I only moved a few VPS that aren't critical, so it isn't a big deal to me. I've got two others that will prob move last, though, since I'd suffer if they went down all the time, so I understand others may not be so forgiving. In the end though, considering a lot prob have BF deals that at this point are costing you money, I think a little patience is more than fair.
Our team isn't necessarily prepared to do these large scale server builds and constant migrations and I'm sure it will be frustrating when it affects support reply times, and a lot of high paying customers end up leaving as a result. We'll try to avoid migrating everyone again a third time to avoid these headaches for us and everyone else.
*sends coffee to team* I'm sure it's been rough and weird stuff for them. I've gone through similar types of things over the years. I've been out with back issues for a while now, but somehow, crazy things-on-fire moments do seem a little interesting when you usually spend your days just trying not to be in pain.
Yeah, it's been rough. We've had our fair share of stressful times, but this one's absolutely the worst, and it's the first time I've actually gotten uncomfortable. We just end up being at the mercy of all the datacenter partners that have their own weird quirks, but luckily none have been as bad as Psychz, as in at least we vaguely get what we're paying for (a cabinet that's ours to use).
We've just never had to answer a huge influx of tickets AND migrate everyone AND make sure new features keep functioning WHILE SolusVM rolls out bad updates AND build new hardware AND keep this boatload of stuff organized AND pack/coordinate shipments AND configure the new hardware AND deal with said hardware running into a crazy amount of issues AND organize and redo all the networking AND try to meet hard deadlines, all while we're understaffed. On top of that, the lack of sleep and running into heavy objects with my kneecaps, as well as handling this heavy equipment all day, is finally catching up to me. I'm eating something like 4,000-5,000 calories a day and still losing weight.
We're still doing it but the results are far from perfect and it sucks because it's not really up to our normal standards. I wish we could coordinate it all perfectly, give everyone ample notice, updates, and keep up with the tickets but we just can't do it all at once.
A lot of things just end up taking a long time, not because they would take a long time to do, but because it's difficult to work them into the schedule without the rest of the day falling apart. But yes, walking is faster than taking a car; I just don't have anything to carry them with right now. I've taken packages down to the post office with a trash bag in the past, but I'm also exhausted.
The worst part though is getting to the post office and there being a 30-minute line every time. I could just drop them off, but I always get a receipt because otherwise they tend to lose things. They still tend to lose things.
Migrated a couple of VPSes, one to Denver, DENZ002, seems to work nicely
(Wasn't able to find a netboot.xyz ISO, or any ISOs, in SolusVM, but reinstalling Debian 11 from SolusVM worked, and I could load netboot.xyz using GRUB from there on. Nice performance.)
The other node I migrated to Frankfurt, FFME002. It's not even starting now. SolusVM reports it being successfully booted/rebooted etc., but nothing runs. (It booted with no disk earlier, but there's no response today.) I assume it's the mentioned issues still going on, so I'll just wait, no rush ...
@VirMach said:
We're still doing it but the results are far from perfect and it sucks because it's not really up to our normal standards. I wish we could coordinate it all perfectly, give everyone ample notice, updates, and keep up with the tickets but we just can't do it all at once.
I'd say it sounds like you need more people, but then you'd need to fit in time to check and train people too, so that just adds to your workload, which doesn't help. Hang in there and avoid the kneecap and other bodily injuries; as someone who wrecked their back at the end of 2017 and did a major move in the middle of covid as well, I really can't recommend it, it makes everything so much harder.
@flips said:
Migrated a couple of VPSes, one to Denver, DENZ002, seems to work nicely
I tried to get one to Denver in the last window but just missed it. The netboot-through-GRUB trick is handy, I've used it a few times: https://netboot.xyz/docs/booting/grub/ for anyone that hasn't. (Quick sketch below.)
The other node I migrated to Frankfurt, FFME002. It's not even starting now. SolusVM reports it being successfully booted/rebooted etc., but nothing runs. (It booted with no disk earlier, but there's no response today.) I assume it's the mentioned issues still going on, so I'll just wait, no rush ...
Yeah, FFME002 worked great for most of a day after I migrated to it, then it imploded lol
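(The GRUB route from those docs boils down to something like this. A minimal sketch, assuming a BIOS-boot Debian guest; UEFI guests use netboot.xyz.efi with GRUB's chainloader instead:

    # Fetch the netboot.xyz kernel image into /boot
    wget -O /boot/netboot.xyz.lkrn https://boot.netboot.xyz/ipxe/netboot.xyz.lkrn
    # Reboot, press 'c' at the GRUB menu for a console, then run:
    #   linux16 /boot/netboot.xyz.lkrn
    #   boot

From there netboot.xyz loads over the network and you can install pretty much any OS, no ISO in SolusVM needed.)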
Migrated from AMSD to FFME003 - and got network unreachable on the host. Reconfiguring the network doesn't help and causes different errors like "Unknown error" or "Unknown OS". VNC gets me into the VM, but I couldn't find out what is wrong - there is an eth0, iptables are OK, routes seem to be OK.
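(A quick sanity-check sketch for that state, in case it helps. It assumes a Debian guest with classic ifupdown networking; the addresses are placeholders, not real values:

    ip addr show eth0     # does the configured address match the NEW IP in SolusVM?
    ip route show         # the default route should point at the new subnet's gateway
    ping -c 3 <gateway>   # if this fails, the guest likely still carries its old IP

If the guest kept its pre-migration address, set the new IP/netmask/gateway from the SolusVM panel by hand in /etc/network/interfaces, then run ifdown eth0 && ifup eth0 or just reboot.)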
@yoursunny said: Priority should be given to opening Miami beach club.
My current node expires on July 05.
I'll try to work in some time to set up a node here for you. It's all there, it just needs to be configured. Assuming the node doesn't just die on first boot.
@VirMach said:
We don't really care about the monetary cost, that's something we can eat. What it ends up costing though is a lot of headache and of course bad experiences where it doesn't go smoothly.
Yeah, I only moved a few VPS that aren't critical, so it isn't a big deal to me. I've got two others that will prob move last, though, since I'd suffer if they went down all the time, so I understand others may not be so forgiving. In the end though, considering a lot prob have BF deals that at this point are costing you money, I think a little patience is more than fair.
Our team isn't necessarily prepared to do these large scale server builds and constant migrations and I'm sure it will be frustrating when it affects support reply times, and a lot of high paying customers end up leaving as a result. We'll try to avoid migrating everyone again a third time to avoid these headaches for us and everyone else.
*sends coffee to team* I'm sure it's been rough and weird stuff for them. I've gone through similar types of things over the years. I've been out with back issues for a while now, but somehow, crazy things-on-fire moments do seem a little interesting when you usually spend your days just trying not to be in pain.
Yeah, it's been rough. We've had our fair share of stressful times, but this one's absolutely the worst, and it's the first time I've actually gotten uncomfortable. We just end up being at the mercy of all the datacenter partners that have their own weird quirks, but luckily none have been as bad as Psychz, as in at least we vaguely get what we're paying for (a cabinet that's ours to use).
We've just never had to answer a huge influx of tickets AND migrate everyone AND make sure new features keep functioning WHILE SolusVM rolls out bad updates AND build new hardware AND keep this boatload of stuff organized AND pack/coordinate shipments AND configure the new hardware AND deal with said hardware running into a crazy amount of issues AND organize and redo all the networking AND try to meet hard deadlines, all while we're understaffed. On top of that, the lack of sleep and running into heavy objects with my kneecaps, as well as handling this heavy equipment all day, is finally catching up to me. I'm eating something like 4,000-5,000 calories a day and still losing weight.
We're still doing it but the results are far from perfect and it sucks because it's not really up to our normal standards. I wish we could coordinate it all perfectly, give everyone ample notice, updates, and keep up with the tickets but we just can't do it all at once.
All the best. Still waiting for the Tokyo load and network to be addressed, but both TYO and LAX Ryzen VMs have been up and idling pretty well, especially LAX, which never had even a hiccup since day 1.
Production plans for these are on the back burner though, so I can't imagine those who use them for real. But then again, this is lowend pricing, so... always have backup plans, I guess?
@Papa said:
Confirming my VM on FFME004 is also down.
While FFME003 (migrated from AMSD with data, and available by VNC a day ago) shows OS setup in VNC now.
All migrations ran into pretty much every problem they could at every step of the way. All nodes are unfortunately going to have negatively affected VMs, but we're working hard to continue making sure the majority are functional, and we're definitely keeping track of all the problems on our end; it just ended up in a ballooning queue.
Right now LAX is having switch problems and we're getting QuadraNet to reconfigure it.
A few of the other problems include, but are not limited to:
The script doing the migrations screwed up on some FFME nodes, where it didn't create the LVM properly; approximately 50 VMs were affected.
The script adding IP addresses ran into problems where it was in some way rate-limited by SolusVM; probably 5-10% of the people migrated do not have the new IP set as their primary. Most of these are resolved.
SolusVM is notoriously bad at reconfigurations, and a good number of VMs need to be reconfigured because the initial reconfiguration failed; luckily, this is a button available on your end as well.
The new SolusVM version appears to preserve states, meaning that if your VM was powered down to migrate, it may remain powered down; luckily, again, this is a button on your end that can be pressed to solve most of these.
Old IP addresses were not removed properly; we're going through and removing these every day. This is not a huge issue, but it also means that if it's not done, WHMCS doesn't grab the new IP and may display your old IP instead. The correct IP displays in SolusVM.
SolusVM doesn't update the VNC port for migrated VMs, at least not in this case, so if your VM doesn't power on, outside of an old ISO being mounted, it's most likely this issue. You can disable VNC on your end and boot; otherwise, we have to manually change these port numbers. We already have someone else hired to process these, and they're actively working on it as migrations occur.
@VirMach said:
All migrations ran into pretty much every problem they could at every step of the way. All nodes are unfortunately going to have negatively affected VMs, but we're working hard to continue making sure the majority are functional, and we're definitely keeping track of all the problems on our end; it just ended up in a ballooning queue.
Certainly some problems are not due to migrations. I migrated one VM to Frankfurt and it ran properly for days, then suddenly died.
Maybe someone else's migration stuffed it up later on, but I migrated about 8 VMs to various locations and the actual migration process was fine. Seems more like an underlying issue on the server.
SolusVM is notoriously bad at reconfigurations, and a good number of VMs need to be reconfigured because the initial reconfiguration failed; luckily, this is a button available on your end as well.
The new SolusVM version appears to preserve states, meaning that if your VM was powered down to migrate, it may remain powered down; luckily, again, this is a button on your end that can be pressed to solve most of these.
Old IP addresses were not removed properly; we're going through and removing these every day. This is not a huge issue, but it also means that if it's not done, WHMCS doesn't grab the new IP and may display your old IP instead. The correct IP displays in SolusVM.
SolusVM doesn't update the VNC port for migrated VMs, at least not in this case, so if your VM doesn't power on, outside of an old ISO being mounted, it's most likely this issue. You can disable VNC on your end and boot; otherwise, we have to manually change these port numbers. We already have someone else hired to process these, and they're actively working on it as migrations occur.
None of these solutions worked for either of my nodes, FFME003/004; one is powered down no matter what, the other couldn't find its disk.
I did the "Ryzen migrate" to Frankfurt (FFME001) and the VPS is getting power cycled about every 15 minutes.
That wasn't XPG, right?
Should've said "soon™"
Now FFME004 dropped to GRUB rescue.
FFME005 and FFME006 too.
VirtFusion migration works well.
Atlanta 7 finally back up. Thanks @VirMach
With your experience, you should write a book called "How Not to Perform a Mass Server Migration".
Was still hoping that one of my Buffalo VMs would be migrated to Miami.
But there is still no migrate option for Miami :D
I'm a little concerned as to where my Chicago nameserver may end up automatically migrating to.
NYC?