adorai
Nov 2, 2002

10/27/04 Never forget
Grimey Drawer
So for a little more detail: I'm really just considering this at the moment. I am also considering moving both of our datacenters to colocation facilities and still doing traditional virtualization. I have three reasons for looking at my options.

1) The business risk of hosting our server infrastructure in two locations that also house all of our IT staff is a concern. Our primary facility, with the bulk of our techs, sits just off the end of a regional airport runway. We recently had a plane crash in the vicinity of our building, and it has opened some eyes. Everything is replicated to our second datacenter, but a daytime event would also kill 2/3 of the people qualified to actually restore our services.
2) We have had some facilities challenges with power and air conditioning. We aren't in the datacenter business, so it could make sense to outsource it.
3) We have begun growing again. I work for a very well capitalized financial institution that was able to assume the assets and liabilities of numerous failed banks from the FDIC during the Great Recession, and with this recovery nearing record length I can only assume the next downturn is coming. If there are more bank failures this time around, I believe we are still in a great position to grow. Last time we doubled our asset size in just a few years; if we did that again it could be advantageous to have more flexibility in provisioning new servers.

I do have a few reasons that the cloud is not the answer, and they relate to data gravity and the fact that we have no plans to outsource our core processing. Many of our anti-fraud services effectively do a complete dump of our transaction tables daily. That is not only a lot of data to store, it's also a lot of data to move around. These services are also almost universally Windows app and SQL servers that don't necessarily scale horizontally, which is another factor: those may work in Azure SQL services, or I might just have to buy giant VM instances.

I'm really just in the discussion and planning phase thus far.


Gucci Loafers
May 20, 2006

Ask yourself, do you really want to talk to a pair of really nice gaudy shoes?


With all that data to move around you're going to be paying a lot for all the egress traffic.
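To put rough numbers on it, assuming (hypothetically) a 500 GB daily table dump and roughly $0.09/GB internet egress: 500 × 0.09 ≈ $45 a day, call it $1,300 to $1,400 a month, just to ship the dump back out, before you've paid a cent for storage or requests. Scale that math to whatever your actual dump size is.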

I'm a little unsure what you mean by infrastructure guys, but the :cloud: is bare compute and storage with no management layer; you will still need to maintain everything periodically.

I'd vote for training up your employees on whichever cloud platform you choose and slowly starting to move chunks over.

adorai
Nov 2, 2002

10/27/04 Never forget
Grimey Drawer

Tab8715 posted:

With all that data to move around you're going to be paying a lot for all the egress traffic.

I'm a little unsure what you mean by infrastructure guys, but the :cloud: is bare compute and storage with no management layer; you will still need to maintain everything periodically.

I'd vote for training up your employees on whichever cloud platform you choose and slowly starting to move chunks over.

Infrastructure guys are servers, storage, and network.

Docjowles
Apr 9, 2009

adorai posted:

1) The business risk of hosting our server infrastructure in two locations that also house all of our IT staff is a concern. Our primary facility, with the bulk of our techs, sits just off the end of a regional airport runway. We recently had a plane crash in the vicinity of our building, and it has opened some eyes. Everything is replicated to our second datacenter, but a daytime event would also kill 2/3 of the people qualified to actually restore our services.

Since I tend to work at smaller companies, it's not uncommon for me to go out to lunch with all of the senior Ops people plus some random dev leads and product managers. The fact that if we got hit by the proverbial bus, the company would instantly be out of business, was not lost on us. We still did it of course (a man's gotta eat!), but it was a sobering thought.

Some freak accident wiping out a huge swathe of critical staff isn't something many small-to-medium businesses seem to plan for. I guess having your office literally at the end of a runway would light a fire under management, though :v:

adorai
Nov 2, 2002

10/27/04 Never forget
Grimey Drawer

Docjowles posted:

Since I tend to work at smaller companies, it's not uncommon for me to go out to lunch with all of the senior Ops people plus some random dev leads and product managers. The fact that if we got hit by the proverbial bus, the company would instantly be out of business, was not lost on us. We still did it of course (a man's gotta eat!), but it was a sobering thought.
I guess it depends on the company, but we could have our entire tech team wiped out and the company would survive. We could NOT have our primary datacenter destroyed AND have our tech team wiped out. Either/or, but not both. Well, maybe the business would survive, but getting everything back up in a short amount of time would require so much institutional knowledge that I don't even know where someone would begin. Even if everything were immaculately documented, it would still be awful to even get started, especially since our documentation itself would have to be restored. The basic DR plan is stored elsewhere, and would get someone a start, but it wouldn't be enough.

Proud Christian Mom
Dec 20, 2006
READING COMPREHENSION IS HARD
meanwhile if sales and HR died of bad asparagus at a conference, business would suddenly become much easier to do

KS
Jun 10, 2003
Outrageous Lumpwad
Even if you have a bunch of legacy systems that don't fit the cloud model, getting them out of your building and making power/cooling/network someone else's problem is usually a really easy sell. I've been at a few places where the combined bill from a colo was less than what it cost just to power the server room critical load and HVAC at the corporate office.
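Back of the envelope, with made-up but plausible numbers: a 10 kW critical load at a PUE of about 1.8 and $0.10/kWh is 10 × 1.8 × 730 ≈ 13,100 kWh a month, so roughly $1,300 in electricity alone. That's already in the ballpark of a full cabinet with power at a lot of colos, and it ignores cooling maintenance, UPS batteries, and generator contracts.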

Vulture Culture
Jul 14, 2003

I was never enjoying it. I only eat it for the nutrients.

KS posted:

Even if you have a bunch of legacy systems that don't fit the cloud model, getting them out of your building and making power/cooling/network someone else's problem is usually a really easy sell. I've been at a few places where the combined bill from a colo was less than what it cost just to power the server room critical load and HVAC at the corporate office.
Good colos are one thing, but big cloud networks at their most unreliable are still generally better than random ad-hoc datacenters these days.

Nukelear v.2
Jun 25, 2004
My optional title text

adorai posted:


I do have a few reasons that the cloud is not an answer, and they relate to data gravity and the fact that we have no plans to outsource our core processing. Many of our anti fraud services effectively do a complete dump of our transaction tables daily. This is not only a lot of data to store, but it's also a lot of data to move around. These services are also almost universally windows app and sql servers that don't necessarily scale horizontally, which is another factor, as those may work in azure sql services or i might just have to buy giant vm instances.

I'm really just in the discussion and planning phase thus far.


I'm in the process of moving our midsize fintech company from colo to AWS, and the database changes have really been the stickiest points.

Old-style active/passive Windows clustering is generally off the table, so you go to AlwaysOn, which has some gotchas in terms of handling failover in your apps but gives you shared-nothing clusters and many readable replicas. Storage gotchas: gp2 performance is not super reliable, PIOPS is stupid loving expensive, and 8k writes mean you get dicked. In the end we're lashing together i2.8xlarge instances and running the whole thing on ephemeral drives with an async replica to a gp2-backed system. Placement groups with appropriate instances give you a 10-gig interconnect, which is adequate for us. If you're past what an 8xlarge instance can offer, then yeah, that's going to need some engineering.
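For reference, the placement group part is the easy bit. A rough boto3 sketch (the AMI ID and names are placeholders, not our actual setup):

code:
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# A 'cluster' placement group is what gets you the 10 Gb
# interconnect between supported instance types.
ec2.create_placement_group(GroupName="sql-alwayson", Strategy="cluster")

# Launch the AlwaysOn nodes into it. i2.8xlarge exposes the
# ephemeral SSDs we stripe the databases across.
ec2.run_instances(
    ImageId="ami-12345678",  # placeholder Windows/SQL AMI
    InstanceType="i2.8xlarge",
    MinCount=2,
    MaxCount=2,
    Placement={"GroupName": "sql-alwayson"},
)

The async replica to the gp2-backed standby is just normal SQL Server configuration on top of that.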

In terms of your staff, there's likely still work for all your guys; it's just different from what it used to be, so if they're adaptable they won't have any problems, and most of our guys have enjoyed it. It's also been a nice opportunity to expose our developers (read-only) to the behind-the-scenes magic that makes their stuff work.

For me the most exciting part is the fast, easy iteration on new projects; all sorts of very useful features are now just a few clicks away.

Buffer
May 6, 2007
I sometimes turn down sex and blowjobs from my girlfriend because I'm too busy posting in D&D. PS: She used my credit card to pay for this.
Infrastructure savings are a little overblown: the hardware, network, and cooling get rapidly diminishing economies of scale past a single DC, and you can outsource a lot of that individually to a colo anyway. If your networking is super complex, there's either a reason for it or you have bigger problems; the rest you need to do anyway unless you go full-blown IaaS/SaaS.

On the larger scale, I know a couple of places trying to bring cloud infra back into a multi-site DC environment after growing to the size where it makes sense to do so. It's kinda cool/funny having infrastructure skills gradually become like COBOL, though.

Also, you guys forgot capex v. opex and variable v. fixed billing. We can buy poo poo no problem where I am, but adding to ops cost, even something fixed like licensing, is herculean. Variable v. fixed billing extends from that: we can't pool where we'd have to pool, and we can't pass through bills to users. We have to charge up front exactly what it will cost (with a little legroom) for 3 months of usage, and we're not allowed by policy to put brakes on things. Then there's the fun of grants (solvable), keeping poo poo around and reproducible (also solvable), and then there's the extra fun of getting legal to sign off on it... our legal department once tried to get someone to change the GPL, and then vetoed a support contract when they wouldn't. Most recently we couldn't get approval to use a backup cloud service that made a hell of a lot of sense for us. loving cultural/political problems.

Walked
Apr 14, 2003

The variable billing has been my biggest hurdle.

We budget annually, and have little room for variable costs without pre-approval (government).

AWS sales and their resellers haven't been a big help, either. Still trying to make it work, but it's certainly not a walk in the park from a contract management standpoint.

FISHMANPET
Mar 3, 2007

Sweet 'N Sour
Can't
Melt
Steel Beams
Speaking of government: until recently Amazon wasn't incorporated in our state (Minnesota), so we couldn't do business with them directly (I work at a public university). To get around that, we get AWS through some reseller that does do business in the state. I've never dealt with any of it myself, but from what I hear the reseller makes it more of a pain for us to use AWS, with the added benefit that they add some % cost on top of the billing for their "services."

Bhodi
Dec 9, 2007

Oh, it's just a cat.
Pillbug
Sounds like a reseller; checks out.

namaste friends
Sep 18, 2004

by Smythe
I'd just like to invite whoever wrote Neutron to go launch themselves into the sun.

MagnumOpus
Dec 7, 2006

What are people doing for portable backups in OpenStack? For Cassandra we're using tablesnap to archive to an S3 target, and while that's not bulletproof it's okay for where we are in development. My main conundrum right now is how to back up curated systems like Jenkins and SonarQube. For Jenkins I'm just mounting $JENKINS_HOME on an attached volume and snapshotting that on the reg. A little bit of reading suggests I can then back those snapshotted volumes up to Swift after converting snapshot to volume, but that doesn't really feel like it gets me anything new, since Swift is at the same physical location. Is there a facility in Swift for migrating objects between DCs?
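The closest thing I've found in the docs is container sync, where you point a source container at a container in another cluster and the swift-container-sync daemon ships objects over. If I'm reading it right, it'd be something like this with python-swiftclient (the auth details, realm/cluster names, and key are all invented):

code:
from swiftclient import client as swift

# placeholder credentials
conn = swift.Connection(authurl="https://auth.example.com/auth/v1.0",
                        user="account:user", key="secret")

# Both clusters have to be listed in the operators'
# container-sync realms config for this to do anything.
conn.post_container("jenkins-backups", headers={
    "X-Container-Sync-To": "//realm/cluster2/AUTH_account/jenkins-backups",
    "X-Container-Sync-Key": "shared-sync-key",
})

But I have no idea how well that behaves at volume, so if anyone actually runs it I'm all ears.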

Ixian
Oct 9, 2001

Many machines on Ix....new machines
Pillbug
Can anyone experienced with OpenStack Swift give me a little background on how (if) you deal with large contiguous files?

I know (or have read; I'm just getting to a project that involves Swift) that there is a 5GB object size limit, and over that you need to split into segments and create a manifest object (for ingest); for downloads you can request said manifest and have the file delivered reassembled.

What I'm wondering is how big is reasonable (I think Swift supports up to 1,000 segments, which implies 5TB, but what is documented and what is practical are very different things) and how original file fixity is handled.

Say I have a 20GB file (you run into these in the media world, usually wrapped in a specialized container like MXF). Let's go with static large objects, because that's what I'll have (I won't need to mess with dynamic large objects in Swift).

To PUT that in a Swift container, I am going to need to break it into at least four 5GB segments and create a JSON manifest describing the segments, their sizes, etc., which is uploaded at the end. To GET it back, the API request is for the manifest object, which has the concatenated segment list. Each segment should also have its own MD5 checksum for fixity. That much I don't have trouble with.

My question is really: does this actually work well in practice? Over HTTP? What about the original file fixity? I will have an MD5 checksum for the original 20GB file; after I get it back, am I going to run into issues there? Sorry if these are basic questions; I'm trying to figure out a little more about what Swift is doing. Object storage I get, but most people seem to be using Swift for a lot of smaller files (small being relative). Would like to hear other use cases if there are any.

1000101
May 14, 2003

BIRTHDAY BIRTHDAY BIRTHDAY BIRTHDAY BIRTHDAY BIRTHDAY FRUITCAKE!

FISHMANPET posted:

Speaking of government: until recently Amazon wasn't incorporated in our state (Minnesota), so we couldn't do business with them directly (I work at a public university). To get around that, we get AWS through some reseller that does do business in the state. I've never dealt with any of it myself, but from what I hear the reseller makes it more of a pain for us to use AWS, with the added benefit that they add some % cost on top of the billing for their "services."

We resell AWS and it's not an additional cost to you (at least it shouldn't be); we just collect a percentage of your spend. We can also offer net-30 terms, which helps. Otherwise you should be free to use AWS however you want. You're just stuck with a lovely reseller.

FISHMANPET
Mar 3, 2007

Sweet 'N Sour
Can't
Melt
Steel Beams

1000101 posted:

We resell AWS and it's not an additional cost to you (at least it shouldn't be); we just collect a percentage of your spend. We can also offer net-30 terms, which helps. Otherwise you should be free to use AWS however you want. You're just stuck with a lovely reseller.

Well, our CIO was fired, in part, for shenanigans with contracts and vendors. So it wouldn't surprise me at all if we were purposefully screwed.

evol262
Nov 30, 2010
#!/usr/bin/perl

Ixian posted:

Can anyone experienced with OpenStack Swift give me a little background on how (if) you deal with large contiguous files?

I know (or have read; I'm just getting to a project that involves Swift) that there is a 5GB object size limit, and over that you need to split into segments and create a manifest object (for ingest); for downloads you can request said manifest and have the file delivered reassembled.

What I'm wondering is how big is reasonable (I think Swift supports up to 1,000 segments, which implies 5TB, but what is documented and what is practical are very different things) and how original file fixity is handled.

Say I have a 20GB file (you run into these in the media world, usually wrapped in a specialized container like MXF). Let's go with static large objects, because that's what I'll have (I won't need to mess with dynamic large objects in Swift).

To PUT that in a Swift container, I am going to need to break it into at least four 5GB segments and create a JSON manifest describing the segments, their sizes, etc., which is uploaded at the end. To GET it back, the API request is for the manifest object, which has the concatenated segment list. Each segment should also have its own MD5 checksum for fixity. That much I don't have trouble with.

My question is really: does this actually work well in practice? Over HTTP? What about the original file fixity? I will have an MD5 checksum for the original 20GB file; after I get it back, am I going to run into issues there? Sorry if these are basic questions; I'm trying to figure out a little more about what Swift is doing. Object storage I get, but most people seem to be using Swift for a lot of smaller files (small being relative). Would like to hear other use cases if there are any.

You should honestly talk to the SwiftStack guys, since they can give you a really clear answer about use cases like this.
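That said, the mechanics are less scary than they sound, because python-swiftclient will do the segmenting and manifest for you. A minimal sketch (untested, container and file names invented):

code:
from swiftclient.service import SwiftService

# Auth comes from the usual ST_AUTH/ST_USER/ST_KEY or OS_* env vars.
opts = {"segment_size": 5 * 1024 ** 3,  # 5 GB segments
        "use_slo": True}

with SwiftService(options=opts) as svc:
    # upload() PUTs each segment, then PUTs the SLO manifest last
    for result in svc.upload("media", ["big-file.mxf"]):
        if not result["success"]:
            print(result["error"])

One fixity caveat: the etag Swift hands back for an SLO manifest is an MD5 of the concatenated segment MD5s, not of the original file, so you still want to hash the reassembled 20GB file yourself after a GET and compare it to your original checksum.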

incoherent
Apr 24, 2004

01010100011010000111001
00110100101101100011011
000110010101110010
Is this also the place to talk about the slow, legal death of the cloud?

quote:

The European Court of Justice has just ruled that the transatlantic Safe Harbour agreement, which lets American companies use a single standard for consumer privacy and data storage in both the US and Europe, is invalid. The ruling came after Edward Snowden's NSA leaks showed that European data stored by US companies was not safe from surveillance that would be illegal in Europe.

http://www.businessinsider.com/european-court-of-justice-safe-harbor-ruling-2015-10?r=UK&IR=T

Aunt Beth
Feb 24, 2006

Baby, you're ready!
Grimey Drawer

incoherent posted:

Is this also the place to talk about the slow, legal death of the cloud?
The cloud is dead, long live the cloud.

Gucci Loafers
May 20, 2006

Ask yourself, do you really want to talk to a pair of really nice gaudy shoes?



Granted, this makes stuff a hell of a lot more difficult, but aren't companies already hosting a lot of data in the country or region of origin?

For example, Facebook stores European user data in a European datacenter.

adorai
Nov 2, 2002

10/27/04 Never forget
Grimey Drawer
Depending on your cloud product, you might not have that option. We have one SaaS provider that only serves data from Canada, for instance.

incoherent
Apr 24, 2004

01010100011010000111001
00110100101101100011011
000110010101110010
While other countries' end users have super fast internet, we're still the center of the internet universe in terms of resources. We're unmatched in terms of abundance of capacity. It made sense to keep all the data redundant in the huge datacenters Google, Facebook, etc. built in the middle of nowhere on the back of our abundant dark fiber backbone in the US.

On top of all that, cities and states are more than willing to give tax breaks to get these massive datacenters built in the US.

Potato Salad
Oct 23, 2014

nobody cares



Holy loving poo poo.

I'm just gonna, uh, cancel that meeting literally Thursday concerning safe harbor + certain data control standards pending, well, an incredible fuckton of research on France-specific privacy + employment law.

This is a good change though.

vanity slug
Jul 20, 2010

incoherent posted:

Is this also the place to talk about the slow, legal death of the cloud?

Cloud Computing: Mostly ghosts in other people's datacenters

Potato Salad
Oct 23, 2014

nobody cares


So, how does Safe Harbour getting nuked affect the operations of entities formerly hosting personal info of Euro nationals in the US? Is there suddenly a huge demand for Romanian or Tinycountryistan local S3 hosting providers? Can some entities just ignore the change?

Proteus Jones
Feb 28, 2013



Potato Salad posted:

So, how does Safe Harbour getting nuked affect the operations of entities formerly hosting personal info of Euro nationals in the US? Is there suddenly a huge demand for Romanian or Tinycountryistan local S3 hosting providers? Can some entities just ignore the change?

It makes things insanely complicated. I know when I worked in a global security group at an international bank, it was a huge pain in the rear end for some EMEA countries to scrub security data of personally identifying information while still providing meaningful data to the security mothership. And incident handling was a bit of a nightmare; for some countries we had to fly over there to do post-incident analysis and forensics. And this was all for internal systems and users.

I would not be surprised if some of the smaller companies just up and decide they can no longer offer their service to some countries. The hit in revenue may be less than the cost of compliance.

Fiendish Dr. Wu
Nov 11, 2010

You done fucked up now!

Potato Salad posted:

So, how does Safe Harbour getting nuked affect the operations of entities formerly hosting personal info of Euro nationals in the US? Is there suddenly a huge demand for Romanian or Tinycountryistan local S3 hosting providers? Can some entities just ignore the change?

As somebody working at a small SaaS company with approx 33% of our customers in the EU: we're ditching Rackspace and moving to Azure to spin up an EU DC that is under compliance. (In a nutshell.)

Edit: it's been a long time coming. This was just sort of the "welp, I guess this means we do this NOW" moment.

Fiendish Dr. Wu fucked around with this message at 04:01 on Oct 8, 2015

namaste friends
Sep 18, 2004

by Smythe

Fiendish Dr. Wu posted:

As somebody working at a small SaaS company with approx 33% of our customers in the EU: we're ditching Rackspace and moving to Azure to spin up an EU DC that is under compliance. (In a nutshell.)

Edit: it's been a long time coming. This was just sort of the "welp, I guess this means we do this NOW" moment.

How are you doing this? You're just converting all your apps from Swift/S3 to Azure?

Gucci Loafers
May 20, 2006

Ask yourself, do you really want to talk to a pair of really nice gaudy shoes?


On a similar note: with Microsoft Azure, for a VM instance (Compute) to qualify under the SLA it must be in an availability set, i.e. "paired" with another VM. When Microsoft performs maintenance, only one of the two VMs will be down, and for no longer than the Azure SLA allows.

The kicker is that your application must be deployed in at least VM pairs. A good example is two domain controllers for the same domain, or two web front-ends for the same website, since the application is still functional if only one of them is down. This becomes a problem with things such as file servers or legacy applications that just run on a single instance: if it crashes, you're out of luck. (See: Azure Availability Set.)
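For what it's worth, creating the set itself is the easy part. A rough sketch with the Azure Python management SDK (resource names and credentials are invented placeholders):

code:
from azure.common.credentials import ServicePrincipalCredentials
from azure.mgmt.compute import ComputeManagementClient
from azure.mgmt.compute.models import AvailabilitySet

credentials = ServicePrincipalCredentials(client_id="app-id",
                                          secret="secret",
                                          tenant="tenant-id")
compute = ComputeManagementClient(credentials, "subscription-id")

# Fault domains = separate racks/power; update domains = the
# rolling batches Microsoft reboots during host maintenance.
compute.availability_sets.create_or_update(
    "my-resource-group",
    "web-avset",
    AvailabilitySet(location="westus",
                    platform_fault_domain_count=2,
                    platform_update_domain_count=5),
)

The hard part isn't the API call; it's having an application that can actually live behind it.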

Is this the same behavior with the other cloud providers?

adorai
Nov 2, 2002

10/27/04 Never forget
Grimey Drawer

Tab8715 posted:

Is this the same behavior with the other cloud providers?
I can't speak to the SLAs involved, but that's kind of the idea: semi-reliable servers with application-level HA on a massive scale. Generally speaking, though, semi-reliable in this case probably meets the five nines everyone is looking for.
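Back of the envelope: if a single instance gives you, say, 99.9% and failures are independent, a pair behind application-level failover is 1 - 0.001^2 = 99.9999% in theory. Real-world correlated failures (whole-AZ events, bad deploys) eat most of that, which is why they push you to spread your pairs across availability zones.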

incoherent
Apr 24, 2004

01010100011010000111001
00110100101101100011011
000110010101110010
Did they port DFSR to Azure yet? I would love to see Microsoft back that with five nines.

Thanks Ants
May 21, 2004

#essereFerrari


AWS can offer dedicated hosts now (well, soon):

http://aws.amazon.com/ec2/dedicated-hosts/

I think there was a guy here running a VoIP application who was pretty much resigned to running on dedicated hardware. This might be interesting.

Scikar
Nov 20, 2005

5? Seriously?

Tab8715 posted:

On a similar note, with Microsoft Azure a VM Instances (Compute) to be qualified under SLA it must be in an availability set or "paired" with another VM. When Microsoft performs maintenance, only 1 of the two VMs will be down and for no more than the Azure SLA Agreement.

The kicker is, your application must be deployed in at least VM pairs. A good example is two domain controllers for the same domain or two web front-ends for the same website as if only one of them is down your application is still functional. This becomes a problem with things such as file servers or legacy applications that just run on a single instance, if it crashes you're out of luck. Azure Availability Set

Is this same behavior with the other associated cloud providers?

The public cloud providers want you to shift things to their platform services as much as possible rather than run VMs. VMs are mostly there to make it easier to migrate an existing application initially; then you can gradually break pieces off onto the platform services over time and bring the cost down. Naturally this ties you to that provider more, which is what they're really after. This is why the Amazon portal has a bajillion services on it and there are new ones every couple of weeks.

Fiendish Dr. Wu
Nov 11, 2010

You done fucked up now!

Cultural Imperial posted:

How are you doing this? You're just converting all your apps from Swift/S3 to Azure?

They're mostly Node apps in Docker containers.

MagnumOpus
Dec 7, 2006

Re: Safe Harbour, most mature companies already have processes in place with their corporate customers to transfer data under EU Model Clause agreements. For those needing quick alternatives, this is the first place I'd suggest looking. Of course, EU Model Clauses may be susceptible to the same legal criticism as Safe Harbour, but for the moment they serve as a legal model to facilitate data transfer and protection agreements for anyone doing offshore data warehousing.

Thanks Ants
May 21, 2004

#essereFerrari


Last week I spent a couple of hours explaining AWS to someone developing web apps: I explained what part EC2, S3, RDS, VPC, Elastic Beanstalk, etc. played in the overall solution, and showed them some documentation as it related to WordPress in terms of where to store static content. A pretty good overview, I thought.

Today I'm getting emails telling me that Bitnami LAMP stacks are ready to use :eng99:

Vulture Culture
Jul 14, 2003

I was never enjoying it. I only eat it for the nutrients.

Thanks Ants posted:

Last week I spent a couple of hours explaining AWS to someone developing web apps: I explained what part EC2, S3, RDS, VPC, Elastic Beanstalk, etc. played in the overall solution, and showed them some documentation as it related to WordPress in terms of where to store static content. A pretty good overview, I thought.

Today I'm getting emails telling me that Bitnami LAMP stacks are ready to use :eng99:
If AWS were easy enough for mere mortals to comprehend, we wouldn't have DigitalOcean nipping at their heels.


Thanks Ants
May 21, 2004

#essereFerrari


Related to the above, I'm struggling with IAM.

Is it possible to give a developer access to create new users, but restrict those new users to certain permission levels? I don't want them creating new user accounts with full admin privileges, or granting their own account full access. At the moment they have a policy applied which blocks access to IAM, billing, and some sensitive DNS zones.
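The closest I've come up with is giving them iam:CreateUser plus iam:AttachUserPolicy restricted to a whitelist of policy ARNs. Something like this boto3 sketch (account ID and names are made up, and I haven't tested whether the condition behaves):

code:
import boto3, json

limited_admin = {
    "Version": "2012-10-17",
    "Statement": [
        {"Effect": "Allow",
         "Action": ["iam:CreateUser"],
         "Resource": "*"},
        {"Effect": "Allow",
         "Action": ["iam:AttachUserPolicy"],
         "Resource": "*",
         # only allow attaching policies from an approved namespace
         "Condition": {"ArnLike": {
             "iam:PolicyARN": "arn:aws:iam::123456789012:policy/dev/*"}}},
    ],
}

iam = boto3.client("iam")
iam.create_policy(PolicyName="limited-user-admin",
                  PolicyDocument=json.dumps(limited_admin))

Even if that works, I assume I'd also have to deny iam:PutUserPolicy and iam:AddUserToGroup, or they could sidestep the whitelist with inline policies or group membership.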
