This is probably the laziest thing I'll ever post, but does anyone have a decent "business case" they've used that successfully laid out the pricing for machines, storage, cabling, 10GbE infrastructure, the full works? I want to put together a proper plan with costs to try and shame management into doing the right thing in our three datacenters. I'm aiming for about 100 VMs per DC (split 50/50 between 2 GB and 4 GB of RAM) and 10TB of storage, all aimed at being able to build a fully comprehensive testing tier (which we completely lack). What I don't want to do is open Word and reinvent the wheel, as I'm trying to squeeze this in between 50 other emergencies.
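Not a template, but the skeleton of the cost rollup is trivial to script. A minimal sketch, where every quantity and unit price below is a made-up placeholder (plug in your real vendor quotes):

```python
# Cost rollup for a per-DC virtualization buildout.
# NOTE: all quantities and prices are illustrative placeholders,
# not real figures from any quote.
UNIT_COSTS = {                    # item: (quantity per DC, unit price in $)
    "hypervisor host":    (4,  8000),
    "10GbE switch":       (2, 12000),
    "10TB storage array": (1, 20000),
    "cabling/optics lot": (1,  3000),
}
DATACENTERS = 3

# Sum (quantity * unit price) across all line items for one DC,
# then multiply out across the three datacenters.
per_dc = sum(qty * price for qty, price in UNIT_COSTS.values())
total = per_dc * DATACENTERS

print(f"Per DC: ${per_dc:,}")   # $79,000
print(f"Total:  ${total:,}")    # $237,000
```

Dropping that into a spreadsheet with one row per line item gets you most of the way to something management will actually read.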
# Apr 11, 2012 18:41
adorai posted:

> Knowing nothing about your company, I would probably just try to make one of the sites the red-headed stepchild and get a two-year refresh cycle: move stuff from site A to site C now, and in two years refresh site B and move the gear that is there to site C, repeating this process every four years. So site C gets four-year-old gear for two years.

To be honest, I should probably have elaborated. We have roughly 250 production servers (probably more hidden and squirreled away around the place) scattered across several datacenters (four, but I'm only focused on three right now). My job is to help bootstrap the automation so that we can actually scale and manage what we already have, as well as aggressively upgrade from RHEL 3/4 to 5/6. As part of this work we've been rebuilding things straight into production: we have almost no "test" and extremely limited "dev".

What I'm actually trying to spec out right now is enough virtualization to realistically build a full test tier for everything we've got. We wouldn't need as many machines (services are often part of a pool of 5-10 servers), so I think I can get away with 150 nodes at between 2 and 4 GB of RAM, with 32 GB of disk for each server (plus an unspecified chunk of space required to replicate smaller test sets of data where appropriate). I have it in my head that the best thing to spec out is enough hypervisors to handle 100 nodes per datacenter, plus about 5TB of storage in each location, as well as 10GigE and the other associated stuff to get an environment that is completely dedicated to virtualization.

(Our disaster recovery plans, for better or worse, involve moving to multiple clusters of services, one per datacenter, with load balancing between them. We may never even get close, but it seems more likely than trying to do a full DR site.)
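The hypervisor count falls straight out of the RAM numbers. A back-of-envelope sketch, where the 128 GB host size and the 1.5:1 overcommit ratio are my assumptions, not figures from the post:

```python
import math

# Sizing from the post: 100 VMs per DC, split 50/50 between
# 2 GB and 4 GB of RAM, 32 GB of disk each.
VMS_PER_DC = 100
RAM_SMALL_GB, RAM_LARGE_GB = 2, 4
DISK_PER_VM_GB = 32

# Assumptions (mine): 128 GB RAM per hypervisor host,
# conservative 1.5:1 memory overcommit.
HOST_RAM_GB = 128
OVERCOMMIT = 1.5

# Total guest RAM demanded in one datacenter.
guest_ram_gb = (VMS_PER_DC // 2) * RAM_SMALL_GB + (VMS_PER_DC // 2) * RAM_LARGE_GB

# Hosts needed once overcommit is applied (round up; add one more
# host on top of this if you want N+1 for maintenance).
hosts_per_dc = math.ceil(guest_ram_gb / (HOST_RAM_GB * OVERCOMMIT))

# Base disk for VM images, before any replicated test data.
base_disk_tb = VMS_PER_DC * DISK_PER_VM_GB / 1024

print(f"Guest RAM per DC: {guest_ram_gb} GB")               # 300 GB
print(f"Hosts per DC (at {OVERCOMMIT}:1): {hosts_per_dc}")  # 2
print(f"Base VM disk per DC: {base_disk_tb:.2f} TB")        # 3.12 TB
```

That base ~3.1TB of images leaves a bit under 2TB of the proposed 5TB per DC for the replicated test data sets, which is a useful sanity check on the storage number.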
# Apr 12, 2012 13:14