|
vty posted:Quarterly HostGator Provo outage today. Looks like incompetence at the datacenter level, as it's also taken down Bluehost. IIRC that also happened last time.
|
# ? Apr 17, 2014 01:41 |
|
Yeah, I'm done with HostGator after this poo poo. I knew when they were bought out that things were likely going to get worse, but had no idea they were going to be this terrible. No connection to our dedicated server for 8 hours now, and the only updates they've put out are "we're having some network issues." The move to the Provo datacentre was bad enough, but at least that forced me to migrate from our ancient FreeBSD setup to a current CentOS one, so it will be painless to move somewhere else now.
|
# ? Apr 17, 2014 02:21 |
|
McGlockenshire posted:Looks like incompetence at the datacenter level, as it's also taken down Bluehost. IIRC that also happened last time. Not really surprising. EIG has been relocating all of their properties into one massive datacenter that they have zero ability to actually manage or keep online.
|
# ? Apr 17, 2014 07:14 |
|
Misogynist posted:No raised floors? Yuck. What does the cooling situation look like in there? Those glass doors make up the 'cool' aisle. There are only 3 or 4 rows of those cabinets.
|
# ? Apr 17, 2014 13:09 |
|
You won't see an RCA or a serious RFO; they never do them. In the past, if one was released, it was via Brent. That said, this outage is especially bad (stuff is still down today), so maybe they'll say something.
|
# ? Apr 17, 2014 15:43 |
|
Holy gently caress. Serious news from Linode was just announced. $45 million investment in these upgrades. https://blog.linode.com/2014/04/17/linode-cloud-ssds-double-ram-much-more/ TL;DR: SSDs, doubled RAM, and a reduction in vCPUs on some plans.
This is pretty amazing overall. The reduction in the number of vCPUs is somewhat disappointing (but maybe it goes up on other plans?), but given the other upgrades, it's definitely worth it. All you have to do is log into your Linode dashboard and then enter the migration queue. Fangs404 fucked around with this message at 16:33 on Apr 17, 2014 |
# ? Apr 17, 2014 16:31 |
|
We've greatly reduced the contention on these new machines compared to our old structure, and in testing this new arrangement provides much more consistent CPU time with less potential for steal. We think it's great and totally worth the move, otherwise we wouldn't have done it. I'd be curious to see what the actual performance difference is.
|
# ? Apr 17, 2014 16:52 |
|
Awesome, I wasn't expecting another RAM double so soon, and full SSDs! Their plans are much more competitive now, great changes.
|
# ? Apr 17, 2014 17:49 |
|
Fangs404 posted:Holy gently caress. Serious news from Linode was just announced. $45 million investment in these upgrades. I'm not a Linode customer, but these prices and performance/features are pretty impressive.
|
# ? Apr 17, 2014 17:54 |
|
Here's some performance data from a test we did a while back. Here are my results from before this most recent upgrade, and here's what I currently get (Linode 2048 running 64-bit Ubuntu 13.10): [benchmark output not preserved]
Overall, this is pretty drat awesome. I imagine this will provide for some serious performance improvements for web servers with lots of DB hits.
|
# ? Apr 17, 2014 19:45 |
|
Yeah, SSD was pretty inevitable. EC2 has been using SSD for their current-generation (c3/m3) instance storage for a while now. (Shame that EBS still sucks rear end unless you pay out the nose.)
|
# ? Apr 17, 2014 20:06 |
|
Fangs404 posted:Here's some performance data from a test we did a while back. Still compiling the same version of nmap as before?
|
# ? Apr 17, 2014 20:28 |
|
Bob Morales posted:Still compiling the same version of nmap as before? Yes, 6.25.
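For anyone who wants to reproduce this kind of compile benchmark themselves, a rough sketch follows. The tarball URL follows nmap.org's usual dist layout and the -j2 value matches a 2-vCPU plan; both are assumptions to verify before use:

```shell
# Hypothetical reproduction of the nmap 6.25 compile-time benchmark.
# Check the URL against nmap.org/dist before running.
wget https://nmap.org/dist/nmap-6.25.tar.bz2
tar xjf nmap-6.25.tar.bz2
cd nmap-6.25
./configure --quiet
# Time only the build itself; adjust -j to the vCPU count of your plan.
time make -j2 >/dev/null
```

Running the same version with the same flags before and after a migration is what makes the numbers comparable.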
|
# ? Apr 17, 2014 20:36 |
|
vty posted:You won't see an RCA or a serious RFO, they never do them. In the past if one was released it was via Brent. Ha, not a chance we'll get an RFO, they've already blamed "vendor firmware" for the outage, which we all know is a lie. My total outage time was somewhere near 24 hours. After 15 hours I got access to the primary IP of the server, but none of the secondary IPs were routing properly. They were still down 21 hours after the original outage, when I went to bed. All working this morning when I got up. Looks like LiquidWeb or similar will be getting our business shortly.
|
# ? Apr 18, 2014 00:12 |
|
In case anyone isn't hip to all tech acronyms ever: RFO is "reason for outage". Also, the RFO (man, aren't I cool now) is probably power related. That Provo datacenter wasn't built for the density they're using it at.
|
# ? Apr 18, 2014 00:44 |
|
JBark posted:Ha, not a chance we'll get an RFO, they've already blamed "vendor firmware" for the outage, which we all know is a lie. By "Vendor Firmware" they're referring to a third party such as Cisco or whomever is currently the vendor of their routers. The post on Facebook claiming "they don't have vendors, they own the datacenter" is wholly ignorant. Owning a datacenter does not make you the hardware/software vendor for L2/L3 equipment. This amount of quarterly outage time is absolutely embarrassing, though. But yeah, don't expect a meaty RFO/RCA. The EIG problems are shameful. That company (HG) meant a lot to me. Edit: RFO = Reason for Outage; RCA = Root Cause Analysis. They're interchangeable. vty fucked around with this message at 02:39 on Apr 18, 2014 |
# ? Apr 18, 2014 02:13 |
|
The folks at Linode must be making GBS threads their pants because of DO. Wonder how many customers they've lost. DO has a crap ton of funding, and they basically have documentation that rivals Linode's now (it was smart of them to bribe people to populate it). I wonder if this actually results in a net loss in revenue for Linode as people downgrade, but I'm sure they've done their research into usage patterns, or figured out that the price-sensitive customers have already left.
|
# ? Apr 19, 2014 07:55 |
|
Here's a quick question: there's a domain expiring shortly that I'd like to have for a hobby project, but it's on "auction" on godaddy, snapnames and so on for something like a zillion dollars. So what would be the best course of action here? It's not like I really have to have it, but I would pay like a hundred bucks for it. What's happening here? Is there really somebody who wants the domain (it hasn't been used since 2001) at these "auction" prices, or is that the domain reseller's "minimum bid"? e: It's a .com, and I've never had a .com domain. ElectricMucus fucked around with this message at 13:53 on Apr 20, 2014 |
# ? Apr 20, 2014 13:49 |
|
Giving DigitalOcean a try. I've been using a dedicated server at some German company that rents decently specced ones for just 50€/mo, but I've really just been using it for my rarely visited and maintained photo blog and as a personal image host. I suppose the $20 plan (i.e. ~15€) would do just as well. I wish DO would hurry up with IPv6 support. I like the minimal administration interface, tho.
|
# ? Apr 20, 2014 19:47 |
|
Fangs404 posted:
The reason for this is that people were spinning up tiny nodes and abusing the poo poo out of the CPU - the tiny ones are usually not profitable, and worse, they ruin the experience for the customers who actually are profitable. I don't work at Linode, but our first-gen ~cloud servers~ all had the same number of vCPUs, and that's the issue we ran into.
|
# ? Apr 20, 2014 19:59 |
|
ElectricMucus posted:Here is a quick question, there is a domain which expires shortly which I would like to have for a hobby project but it's on "auction" on godaddy, snapnames and so on for something like a zillion dollars. if it's listed as an auction on TDNAM/snapnames it's probably got genuine bids on it and you won't get it without winning the auction
|
# ? Apr 20, 2014 22:28 |
|
Combat Pretzel posted:Giving DigitalOcean a try. I've been using a dedicated server at some German company, that rents decently specced ones for just 50€/mo, but I've really just been using it for my rarely visited and maintained photo blog and as personal image host. I suppose the 20$ plan (i.e. 15€) would do just as well. I wish DO would hurry up with IPv6 support. I like the minimal administration interface, tho. IPv6 should be rolling out in SGP next month. https://digitalocean.uservoice.com/forums/136585-digitalocean/suggestions/2639897-ipv6-addresses Also, yes, miners abusing the crap out of small servers is a HUGE pain. Though, I don't work for linode either. Malkar fucked around with this message at 01:35 on Apr 21, 2014 |
# ? Apr 21, 2014 01:31 |
|
Rufus Ping posted:if it's listed as an auction on TDNAM/snapnames it's probably got genuine bids on it and you won't get it without winning the auction Thanks. What if it's shill bidding? I'll assume somebody will snatch it automatically, so is it OK to hope for that and buy it off that someone for a sane price? Or is it literally not worth the effort because the chance of that happening is so slim?
|
# ? Apr 21, 2014 03:43 |
|
vty posted:By "Vendor Firmware" they're referring to a third party such as Cisco or whomever is currently the vendor of their routers. Yeah, I realise this, but to tell us that a Cisco/Juniper/etc bug took out the network for an entire datacentre for close to 24 hours is laughable. Well, I guess it would be laughable for a company that actually knows what they are doing, but I suppose it's become pretty obvious over the past couple of years that EIG is a terribly run company and a simple bug could cause this much downtime for them. In a decent company, they'd have extensive backups of firmware and configs for every device, plus backup hardware for major backbone equipment, and any sort of outage like this could be at least mitigated by rolling back to a previous version or replacing hardware. An hour or two of outage is just something that can happen in even the best of circumstances, but an entire day is unheard of. I wonder what level of screw-up would cause this. Maybe a firmware update on a large number of devices for the Heartbleed bug went horribly wrong and configs everywhere got wiped out? They go to restore backups and find that the backups are missing/corrupt, and so they basically had to redo configs from scratch. Basically the kind of thing that makes a network guy wake up in a cold sweat in the middle of the night. It would explain why it took so long for things to come back online, and why it was just small bits of the network coming back every couple of hours. Oh well, doesn't really matter in the end. We got a free month out of it, and hopefully I'll be moved over somewhere else by then. LiquidWeb seems pretty popular for dedicated servers, so I'll probably give them a whirl.
|
# ? Apr 21, 2014 14:31 |
|
Any network team worth their salt (I don't know Provo's team at all; clearly I'm not impressed) would have RANCID or some sort of config change management set up for their devices. It's essentially a subversion/git layer that constantly logs in and out of the routers and records their configuration, so that rollbacks are simple and possible. I can't think of many instances where you'd have to fully reconfigure, and even then I can't fathom it taking 16-48 hours. The senior HG tech team is some of the best I've ever worked with. I don't know how involved they are with Bluehost's datacenter; I doubt at all. vty fucked around with this message at 15:30 on Apr 21, 2014 |
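For reference, a minimal RANCID-style setup really is just a cron entry driving rancid-run, which logs into every device in a group and commits any config diff to version control. The paths, group name, and device name below are made-up examples; check your distro's install layout:

```shell
# /etc/cron.d/rancid -- hypothetical example; group name "provo" is invented.
# rancid-run walks the group's router.db, pulls each running config,
# and commits any change to the RANCID repository.
0 * * * * rancid /usr/libexec/rancid/rancid-run provo

# A rollback then starts from the version history, e.g. on a CVS-backed
# install (device name is illustrative):
# cvs -d /var/lib/rancid/CVS log configs/core-router-1
```

The point is that a full reconfigure from scratch should never be necessary when every config revision is sitting in a repository.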
# ? Apr 21, 2014 15:27 |
|
Malkar posted:IPv6 should be rolling out in SGP next month. https://digitalocean.uservoice.com/forums/136585-digitalocean/suggestions/2639897-ipv6-addresses I've been using IPv6 in the past on my dedicated server as another layer of security. Blackhole everything IPv4 (--edit: as well as the IPv6 6to4 range) except port 80, and do SSH and whatnot via IPv6. I'm interested in redoing that as soon as possible. Combat Pretzel fucked around with this message at 15:47 on Apr 21, 2014 |
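A sketch of that setup with iptables/ip6tables follows. The ports and the 6to4 range come from the post; the conntrack and ICMPv6 rules are standard boilerplate I'm assuming around it:

```shell
# IPv4: default-deny, expose only the web server
iptables -P INPUT DROP
iptables -A INPUT -i lo -j ACCEPT
iptables -A INPUT -m conntrack --ctstate ESTABLISHED,RELATED -j ACCEPT
iptables -A INPUT -p tcp --dport 80 -j ACCEPT

# IPv6: management traffic (SSH and whatnot) lives here instead
ip6tables -P INPUT DROP
ip6tables -A INPUT -i lo -j ACCEPT
ip6tables -A INPUT -m conntrack --ctstate ESTABLISHED,RELATED -j ACCEPT
ip6tables -A INPUT -s 2002::/16 -j DROP        # blackhole the 6to4 range, per the post
ip6tables -A INPUT -p ipv6-icmp -j ACCEPT      # ICMPv6 is required for IPv6 to function
ip6tables -A INPUT -p tcp --dport 22 -j ACCEPT
```

Obvious caveat: apply rules like these from a console session rather than over SSH, or you can lock yourself out halfway through.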
# ? Apr 21, 2014 15:36 |
|
Combat Pretzel posted:SGP is Singapore right? I'm in AMS2. Yeah, I'm not sure when it will be rolled out in Amsterdam, unfortunately. The goal is to have it everywhere, but not sure on the exact timetable aside from Singapore being very soon. You could set up a tunnel with sixxs.net as a workaround?
|
# ? Apr 21, 2014 19:16 |
|
JBark posted:Yeah, I realise this, but to tell us that a Cisco/Juniper/etc bug took out the network for an entire datacentre for close to 24 hours is laughable. The ARP/MAC table filling up and then killing off entries at random could do it. That's based on network attachment and not a config per se.
|
# ? Apr 21, 2014 21:36 |
|
vty posted:The senior HG tech team is some of the best I've ever worked with. I don't know how involved they are with Bluehost's datacenter; I doubt at all. Yeah, we've been with HG for almost 7 years now, and they were really great up until they got purchased. Ticket responses within minutes, and it was engineers replying with actual useful information every time. Unfortunately, they don't seem to be involved at all with Provo. Not surprisingly, they still host all of their own stuff out of the Dallas DC, which never seems to have a problem like this. The red flag should have hit me in the face when the move to Provo occurred. We got an email on Oct 28th saying we had to migrate by Nov 5th (5 business days) and that FreeBSD was no longer offered, so we had to move to a Linux flavour. I replied back thinking they had made a typo with the Nov 5th date, but it was actually correct, though they "graciously" gave me a couple more weeks. How anyone with even a slight knowledge of this stuff could think that a couple of days is long enough to migrate a dedicated server to a new OS is beyond me.
|
# ? Apr 22, 2014 00:52 |
|
JBark posted:Yeah, we've been with HG for almost 7 years now, and they were really great up until they got purchased. Ticket responses within minutes, and it was engineers replying with actual useful information every time. Unfortunately, they don't seem to be involved at all with Provo. Not surprisingly, they still host all of their own stuff out of the Dallas DC, which never seems to have a problem like this. I have a friend with a bunch of gear at burst.net who was basically told "hey we're migrating all of your servers to a new datacenter next week, you'll be down during the migration." A lot of hosts are laughably bad at planning these things. On the flipside, when we shut down a datacenter last year, customers were given over a year's notice, which as it turns out was too long - nobody did poo poo about it because they all thought they had loads of time. Even with 13 months, frequent reminders, and very generous offers for better hardware to migrate to at a new DC, I still had unhappy customers who were not prepared when the time came to pull the plug.
|
# ? Apr 22, 2014 19:17 |
|
DigitalOcean's website is incredible with cloud-to-butt installed. Trying out their $5/month offering to see if I even make use of a VPS. Seems pretty slick so far, except that either my droplet or their web VNC thingy has locked up twice for no apparent reason. e: just the web console that's making GBS threads the bed, so it's only a problem if I need to log in as root for some reason. Galler fucked around with this message at 17:45 on Apr 23, 2014 |
# ? Apr 22, 2014 21:49 |
|
Galler posted:Trying out their $5/month offering to see if I even make use of a VPS. they've been telling people about these coupon codes which you might find useful: DOIT10, DEPLOY2DO apparently they get you $10 credit
|
# ? Apr 23, 2014 00:00 |
|
Comradephate posted:I have a friend with a bunch of gear at burst.net who was basically told "hey we're migrating all of your servers to a new datacenter next week, you'll be down during the migration." Speaking of burst.net, that datacentre move was likely because they defaulted on the equipment and space leases at their old datacentre, and it's looking like they're doing the same thing at the new one. According to the court papers, they haven't paid leases in 6+ months. https://vpsboard.com/topic/3862-bre...cuments-within/ https://vpsboard.com/topic/3885-bur...er-was-evicted/ There's a whole bunch of other threads there and at webhostingtalk about them and their current situation. If he's got stuff with them still, I'd tell him to get it moved, fast, or he's likely to end up offline with no notice, never to access them again. If he has his own equipment coloed with them, a guy has posted with some details about getting it back: https://vpsboard.com/topic/4072-have-server-missing-at-burstnet-in-dunmore-scranton-pennsylvania/
|
# ? Apr 23, 2014 01:50 |
|
Galler posted:DigitalOcean's website is incredible with cloud-to-butt installed. If you run into any issues, feel free to toss me a PM. Or open a ticket. The latter is probably faster, actually, since I never check PMs.
|
# ? Apr 23, 2014 07:52 |
|
JBark posted:Speaking of burst.net, that datacentre move was likely because they defaulted on the equipment and space leases at their old datacentre, and it's looking like they're doing the same thing at the new one. According to the court papers, they haven't paid leases in 6+ months. Hahaha, that's incredible. Fortunately his company had the good sense to do a migration of all of their tech in burst's datacenters to a new company, because they felt (as all reasonable people would) that a week's notice for 12+ hours of downtime was not an acceptable way to do business.
|
# ? Apr 25, 2014 02:29 |
|
I've been with somee.com for a few months now. I primarily work in .NET these days and wanted a stable VPS that I could afford. So far I'm really liking it, but I'm always interested in others' perspectives. Has anyone else worked with them before? If so, thumbs up, or have I not encountered the nightmare yet?
|
# ? Apr 26, 2014 05:06 |
|
I have an AWS free tier instance that is expiring on the 30th. All I use it for is znc (an IRC bouncer). I'd like to find the cheapest/easiest alternative to AWS - I obviously don't need much to run this, just a VPS that is reliable.
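For anyone replicating this on a fresh VPS, znc's own setup wizard does most of the work. These are real znc flags, though the package name is a Debian/Ubuntu assumption:

```shell
# Install the bouncer (package name assumed for Debian/Ubuntu).
sudo apt-get install znc

# Interactive first-run wizard: writes the config under ~/.znc/
znc --makeconf

# Run in the foreground with debug output while testing the connection.
znc --foreground --debug
```

Once it connects cleanly, drop the debug flags and let znc daemonize normally.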
|
# ? Apr 27, 2014 23:55 |
|
A Yolo Wizard posted:I have an aws free tier that is expiring on the 30th. All I use it for is znc (irc bouncer). I'd like to find the cheapest / easiest alternative to aws - I obviously don't need much to run this, just a vps that is reliable. https://www.lowendbox.com is a good place to look. If they're in stock, buyvm's $15/yr VPS is perfect for you. They're usually out of stock though.
|
# ? Apr 28, 2014 00:06 |
|
DNova posted:If they're in stock, buyvm's $15/yr VPS is perfect for you. They're usually out of stock though. They're in stock. However, I run znc on one of their VPS's and it's subject to various disconnects, so... mileage may vary.
|
# ? Apr 28, 2014 07:47 |
|
Amdwebhost or DO?
|
# ? Apr 28, 2014 14:01 |