|
Yeah, I'm done with HostGator after this poo poo. I knew when they were bought out that things were likely going to get worse, but had no idea they were going to be this terrible. No connection to our dedicated server for 8 hours now, and the only updates they've put out are "we're having some network issues." The move to the Provo datacentre was bad enough, but at least that forced me to migrate from our ancient FreeBSD setup to a current CentOS one, so it will be painless to move somewhere else now.
|
# Apr 17, 2014 02:21
|
vty posted: You won't see an RCA or a serious RFO, they never do them. In the past if one was released it was via Brent.

Ha, not a chance we'll get an RFO; they've already blamed "vendor firmware" for the outage, which we all know is a lie. My total outage time was somewhere near 24 hours. After 15 hours I got access to the primary IP of the server, but none of the secondary IPs were routing properly. They were still down 21 hours after the original outage, when I went to bed. All working this morning when I got up. Looks like LiquidWeb or similar will be getting our business shortly.
|
# Apr 18, 2014 00:12
|
vty posted: By "Vendor Firmware" they're referring to a third party such as Cisco or whomever is currently the vendor of their routers.

Yeah, I realise this, but telling us that a Cisco/Juniper/etc bug took out the network of an entire datacentre for close to 24 hours is laughable. Well, I guess it would be laughable for a company that actually knows what it's doing, but it's become pretty obvious over the past couple of years that EIG is a terribly run company, and a simple bug really could cause this much downtime for them.

In a decent company, they'd have extensive backups of firmware and configs for every device, plus backup hardware for major backbone equipment, so any outage like this could at least be mitigated by rolling back to a previous version or swapping in replacement hardware. An hour or two of outage can happen in even the best of circumstances, but an entire day is unheard of.

I wonder what level of screw-up would cause this. Maybe a firmware update for the Heartbleed bug went horribly wrong across a large number of devices and configs everywhere got wiped out? Then they went to restore backups, found them missing or corrupt, and basically had to redo the configs from scratch. Basically the kind of thing that makes a network guy wake up in a cold sweat in the middle of the night. That would explain why it took so long for things to come back online, and why only small bits of the network came back every couple of hours.

Oh well, it doesn't really matter in the end. We got a free month out of it, and hopefully I'll be moved somewhere else by then. LiquidWeb seems pretty popular for dedicated servers, so I'll probably give them a whirl.
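For what it's worth, the backup discipline described above is cheap to script. A minimal sketch of the idea (the device name, paths, and the "fetch" step are all stand-ins; real shops usually use a purpose-built tool like RANCID or Oxidized to pull configs over SSH):

```shell
# Keep timestamped copies of a device's config so a bad firmware push
# can be rolled back. core-rtr-01 and the config contents are made up;
# in practice you'd fetch the running config from the device itself.
BACKUP_DIR=/tmp/cfg-backups
mkdir -p "$BACKUP_DIR"

# Stand-in for "fetch running config from core-rtr-01 over SSH"
printf 'hostname core-rtr-01\n' > /tmp/running-config

# Timestamped copy, so every change leaves a restorable snapshot
cp /tmp/running-config "$BACKUP_DIR/core-rtr-01.$(date +%Y%m%d%H%M%S).cfg"
ls "$BACKUP_DIR"
```

Rolling back is then just pushing the newest known-good snapshot back to the device, instead of rebuilding configs from memory at 3am.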
|
# Apr 21, 2014 14:31
|
vty posted: The senior HG tech team is some of the best I've ever worked with. I don't know how involved they are with Bluehosts datacenter, I doubt at all.

Yeah, we've been with HG for almost 7 years now, and they were really great up until they got purchased. Ticket responses within minutes, and it was engineers replying with actually useful information every time. Unfortunately, they don't seem to be involved at all with Provo. Not surprisingly, they still host all of their own stuff out of the Dallas DC, which never seems to have problems like this.

The red flag should have hit me in the face when the move to Provo occurred. We got an email on Oct 28th saying we had to migrate by Nov 5th (5 business days) and that FreeBSD was no longer offered, so we had to move to a Linux flavour. I replied back thinking they had made a typo with the Nov 5th date, but it was actually correct, though they "graciously" gave me a couple more weeks. How anyone with even a slight knowledge of this stuff could think that a couple of days is long enough to migrate a dedicated server to a new OS is beyond me.
|
# Apr 22, 2014 00:52
|
Comradephate posted: I have a friend with a bunch of gear at burst.net who was basically told "hey we're migrating all of your servers to a new datacenter next week, you'll be down during the migration."

Speaking of burst.net, that datacentre move was likely because they defaulted on the equipment and space leases at their old datacentre, and it looks like they're doing the same thing at the new one. According to the court papers, they haven't paid their leases in 6+ months.

https://vpsboard.com/topic/3862-bre...cuments-within/
https://vpsboard.com/topic/3885-bur...er-was-evicted/

There are a whole bunch of other threads there and at WebHostingTalk about them and their current situation. If he still has stuff with them, I'd tell him to get it moved, fast, or he's likely to end up offline with no notice, never able to access it again. If he has his own equipment coloed with them, a guy has posted some details about getting it back: https://vpsboard.com/topic/4072-have-server-missing-at-burstnet-in-dunmore-scranton-pennsylvania/
|
# Apr 23, 2014 01:50
|
Kaninrail posted: So I have a really irritating issue that I'd like to ask for some advice/opinions on.

You could configure a smarthost in Exchange that points to the ISP's mail server; that's the simplest fix. Even if you do get them to set up the rDNS for you, plenty of over-aggressive spam filters will probably still block you, because they love to treat every ISP client IP range as a residential dynamic range. Also, if the company is using a hosted spam filter, you can often point the smarthost at the filter's servers instead of the ISP's.
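For reference, the smarthost change is a one-liner on the outbound send connector. A hedged sketch from the Exchange Management Shell; the connector name "Internet" and the relay hostname smtp.isp.example are placeholders for whatever the real setup uses:

```powershell
# Route all outbound mail through the ISP's relay instead of delivering
# directly via DNS MX lookups. Connector name and hostname are placeholders.
Set-SendConnector -Identity "Internet" `
    -SmartHosts "smtp.isp.example" `
    -DNSRoutingEnabled $false `
    -SmartHostAuthMechanism None
```

If the hosted spam filter is doing outbound relay instead, swap its hostname into -SmartHosts, and set -SmartHostAuthMechanism to whatever auth the filter requires.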
|
# Jun 24, 2015 09:36
|
Hey, there's an upside to my slowness in moving away from HostGator (EIG) to Linode: I had no clue Linode had these sorts of issues until I started reading about all the ongoing DDoS stuff. I was going to have our US office add credit card info to our account this week, after months and months of procrastination. Guess that's on hold. I was just about to ask "is DigitalOcean fine for running production websites?", then realised that I obviously don't care, since I'm using EIG right now.
|
# Jan 6, 2016 04:53