sudo rm -rf
Aug 2, 2011




Hey there storage guys, I'm looking at getting a simple disk array for iSCSI to create some shared storage for our growing vCenter deployment (6 hosts) - the only caveat is that I really need it to be Cisco hardware. Is there something in the UCS line that could fulfill this need? Could I literally grab a C240 and put OpenFiler on it?


sudo rm -rf
Aug 2, 2011




The main issue is that we get a significant discount on Cisco products (internal pricing), so I'm thinking that jerry-rigging a C240 with FreeNAS would have such a huge price advantage over even an entry-level array like the MD3220i as to be worth it.

sudo rm -rf
Aug 2, 2011




I've got a budget of ~10k. Need to get an iSCSI solution for a small VMware environment. Right now we're at 2-3 hosts, with about 30 VMs. I can do 10GE, as it would hook into a couple of N5Ks.

Dilbert steered me away from hacking together a solution with some discounted UCS 240s a while back and pointed me in the direction of Dell's MD series. I'm currently looking at the MD3800i with 8 2TB 7.2k SAS drives - does that sound alright?

I was looking at the configuration options, and wasn't really sure what this referred to:



I can't tell if that's needed or not.

sudo rm -rf
Aug 2, 2011




I'm not sure, honestly. I'm just a contractor who manages a couple of labs. My project manager and I searched around for something (I'm at Cisco), but we didn't see a lot of good options internally. I saw a lot of brands that I've never heard of, like Infortrend, Overland, CRU, and Promise. I mean, if they'll do the job, cool, but I didn't have any idea who they were and didn't have a lot of confidence in any of them.

sudo rm -rf
Aug 2, 2011




Richard Noggin posted:

8 2TB NL SAS drives for 30 VMs? Have you done any IOPS analysis? I've seen bigger setups with more drives and fewer VMs fail.

No. This is all new to me (a theme for most of my posts in these forums, haha). How is that usually done?

But the VMs aren't all on at the same time - the majority of them are simple Windows 7 boxes used to RDP into our teaching environment. When we aren't in classes, there's not even a need to keep them spun up.
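
(Since the question never really gets a direct answer in-thread: the usual back-of-the-envelope IOPS analysis looks something like the sketch below. The per-drive figures, RAID penalties, and the 70/30 read/write split are rule-of-thumb assumptions, not measurements from this environment.)

code:

# Back-of-the-envelope IOPS sizing. All numbers are rules of thumb.
# Commonly cited per-drive ballparks:
#   7.2k NL-SAS ~75 IOPS, 10k SAS ~140, 15k SAS ~180.
DRIVE_IOPS = {"7.2k_nlsas": 75, "10k_sas": 140, "15k_sas": 180}

# RAID write penalty: each front-end write costs N back-end I/Os.
RAID_WRITE_PENALTY = {"raid10": 2, "raid5": 4, "raid6": 6}

def frontend_iops(n_drives, drive_type, raid, read_pct=0.7):
    """Estimate the front-end IOPS a drive group can serve."""
    backend = n_drives * DRIVE_IOPS[drive_type]
    penalty = RAID_WRITE_PENALTY[raid]
    # backend = frontend * (read_pct + (1 - read_pct) * penalty)
    return backend / (read_pct + (1 - read_pct) * penalty)

# The proposed MD3800i build: 8 x 2TB 7.2k NL-SAS, assume RAID 10.
est = frontend_iops(8, "7.2k_nlsas", "raid10")
print(f"~{est:.0f} front-end IOPS at a 70/30 read/write mix")
# ~462 IOPS total, or about 15 per VM across 30 VMs - thin even for
# idle jumpboxes once a class spins them all up at the same time.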


Wicaeed posted:

And yeah you're gonna be sad with just 7.2k RPM NL-SAS drives.

How in the world do you swing a Nexus 5k but can't get more than 10k for storage?

I work for Cisco, but we're a small part of it.


sudo rm -rf
Aug 2, 2011




Richard Noggin posted:

Then why get a SAN?

Because our Domain Controllers, DHCP Server, vCenter Server, AAA Server, and workstations are also VMs, and at the moment everything is hosted on a single ESXi host. I can get more hosts - their cost isn't an issue - but additional hosts don't offer me much protection without being able to use HA and vMotion. That's a reasonable need, yeah?
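
(This is the crux of the SAN question: HA and vMotion both require every host in the cluster to mount the same datastore. As a hypothetical illustration, a pyVmomi sketch like the one below would list the datastores every host shares - the vCenter hostname and credentials are placeholders.)

code:

# Minimal pyVmomi sketch: which datastores are mounted by every host,
# i.e. usable for HA/vMotion. Hostname and credentials are placeholders.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

ctx = ssl._create_unverified_context()  # lab shortcut; use real certs
si = SmartConnect(host="vcenter.example.local", user="administrator",
                  pwd="changeme", sslContext=ctx)
content = si.RetrieveContent()

view = content.viewManager.CreateContainerView(
    content.rootFolder, [vim.HostSystem], True)
hosts = list(view.view)

# Intersect each host's mounted datastores.
shared = set(hosts[0].datastore)
for host in hosts[1:]:
    shared &= set(host.datastore)

for ds in shared:
    print(ds.name)

Disconnect(si)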

sudo rm -rf
Aug 2, 2011




Sickening posted:

HA/vMotion is more for an environment that has uptime/disaster recovery needs. The environment posted (outside of a DC) doesn't really have those needs. If 10k is the budget, you will get more bang for your buck by adding another beefy host and skipping the storage completely.

The perfect goal for you would be 2 hosts with central storage. I don't think it's in the cards with that budget.

While I would normally recommend storage for almost any virtual environment, your budget is poo poo and planning for any kind of growth with that budget isn't going to happen this time around.

What kind of budget do you think would be the minimum required to get my DCs protected and plan for a minimal amount of growth?

Richard Noggin posted:

See, that changes the game. You said earlier that you needed the storage for 30 infrequently used VMs that were basically jumpboxes. Without knowing the rest of your environment, I'd be inclined to tell you to put the workstation VMs on local storage and your servers on the SAN, but at least get 10k drives. Speaking to general cluster design, our standard practice is two-host clusters, adding more RAM or a second processor if need be before adding a third host.

This sounds like a good plan, and our hosts are pretty beefy - the discount we get on UCS hardware is significant.

Edit: I really appreciate you guys walking me through this. I'm essentially a one-man team, so I don't have a lot of options for assistance outside of my own ability to research.
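
(One detail worth making explicit about two-host clusters: HA admission control effectively holds one host's worth of capacity in reserve, so the cluster can only run what a single host carries on its own. A rough sketch, assuming the 96GB hosts described later in the thread:)

code:

# N+1 sizing sketch: with one host reserved for failover, the RAM
# available to VMs is (n_hosts - 1) * ram_per_host, minus overhead.
def usable_ram_gb(n_hosts, ram_per_host_gb, overhead_pct=0.10):
    """overhead_pct is a rough allowance for hypervisor/VM overhead."""
    return (n_hosts - 1) * ram_per_host_gb * (1 - overhead_pct)

print(usable_ram_gb(2, 96))  # ~86 GB usable across the whole cluster
print(usable_ram_gb(3, 96))  # ~173 GB once a third host is added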

sudo rm -rf
Aug 2, 2011




Sickening posted:

I would think somewhere in the 25k to 30k range. Servers, expandable storage, and the networking behind it. It might seem like a lot, but after the extra fees, support contracts, and tax, that is where you are going to be.

What do you mean by this?

sudo rm -rf
Aug 2, 2011




1000101 posted:

For DCs you wouldn't need to worry too much since you can just build 1 DC on each ESXi host and they'll all replicate to each other.

AAA could probably be similar, just need to define multiple AAA sources on your network devices.


I largely agree with his plan as well. Focus on what you need shared storage for (things like vcenter, maybe AAA, anything else) and keep throwaway items on local disk.

Also do you happen to be in the bay area?

Nah, I'm in Atlanta.
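
(On the "multiple AAA sources" advice above: on the network side, that amounts to a few lines of device config. Below is a hypothetical sketch pushing redundant TACACS+ servers with netmiko - the device details, server names, addresses, and key are all made up.)

code:

# Hypothetical sketch: configure two TACACS+ sources so AAA survives
# one server going down. All names, addresses, and keys are placeholders.
from netmiko import ConnectHandler

device = {
    "device_type": "cisco_ios",
    "host": "switch1.example.local",
    "username": "admin",
    "password": "changeme",
}

aaa_lines = [
    "aaa new-model",
    "tacacs server ISE-1",
    " address ipv4 10.0.0.11",
    " key example-key",
    "tacacs server ISE-2",
    " address ipv4 10.0.0.12",
    " key example-key",
    "aaa group server tacacs+ ISE-GROUP",
    " server name ISE-1",
    " server name ISE-2",
    "aaa authentication login default group ISE-GROUP local",
]

with ConnectHandler(**device) as conn:
    print(conn.send_config_set(aaa_lines))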

sudo rm -rf
Aug 2, 2011




Sickening posted:

Ugh, are you serious?

Yes? I wanted to see if some of the equipment I needed to include in my budget was something I already had.

What do you want from me? If you're willing to impart professional advice, I'm absolutely willing to hear it. If you don't want to, cool, that's fine too. I've never used enterprise storage in a professional setting before; I've been out of college barely a year, and I've had my current (only) job for even less. I apologize if my questions are annoying, but you are free to ignore them if they are so loving unbelievable. I don't think being a rear end in a top hat about it is warranted.

sudo rm -rf
Aug 2, 2011




I apologize for not being clear enough on my needs / the details. Part of it has been figuring out what information would be relevant.

Servers - Currently a single UCS C220 M3. It has 2 Xeon E5-2695s @ 2.40GHz, which comes to 48 logical processors, and 96GB of RAM. Definitely something I can get replaced or upgraded with ease - the discount for Cisco equipment is high. I've asked for two more of these servers along with the disk array, and I should get them without any problems.

Storage - This is what I can't really get internally. Right now everything is running off the local storage of our single ESXi server, which is using 8 900GB 10k drives in RAID 10.

Networking - This is transitional; we are in the process of moving towards a Nexus-only environment, but at the moment I have 2 N5Ks in place along with several N2Ks. The ESXi host currently connects to an N2K, and I assumed I would directly connect the disk array to the N5Ks to take advantage of 10G. I was leaning towards iSCSI, but I believe that I can also do FCoE because of the unified ports on my N5Ks. But yeah, this is less of a budget concern considering the discount.
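
(For the FCoE option: on a 5k with unified ports, the gist is mapping a VLAN to a VSAN and binding a virtual FC interface to the Ethernet port. The lines below are a hypothetical sketch - the VLAN/VSAN numbers and interface are invented - and could be pushed with netmiko just like the AAA example above, with device_type set to "cisco_nxos".)

code:

# Hypothetical NX-OS config sketch for FCoE on a Nexus 5k unified port.
# VLAN/VSAN numbers and the interface are placeholders.
fcoe_lines = [
    "feature fcoe",
    "vlan 100",
    " fcoe vsan 100",
    "vsan database",
    " vsan 100",
    "interface vfc110",
    " bind interface ethernet1/10",
    " no shutdown",
    "vsan database",
    " vsan 100 interface vfc110",
]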



This is the latest Dell config that I've put together, with the HDDs changed (it includes 12 HDDs):

sudo rm -rf
Aug 2, 2011




Pantology posted:

I'm surprised Cisco's Invicta hasn't been mentioned as a comedy-option. Way, way overkill (and lists for like 10x that Dell build), but it has a Cisco faceplate, and if the internal discount makes it cheap enough...

My group did actually look into it, but yeah, it's way overkill.

The other option was building out a couple of 240s and throwing FreeNAS on them.

sudo rm -rf
Aug 2, 2011




Richard Noggin posted:

The amount of time that a host spends in maintenance mode is very, very small. Wouldn't you rather take that other $5k that you would have spent on a server and put it into better storage/networking?

Just to give everyone an idea, here's our "standard" two host cluster:

- 2x Dell R610, 64GB RAM, 1 CPU, 2x quad-port NICs, ESXi on redundant SD cards
- 2x Cisco 3750-X 24 port IP Base switches
- EMC VNXe3150, drive config varies, but generally at least 6x 600GB 10k SAS on dual SPs
- vSphere Essentials Plus

How much does the VNXe3150 run, roughly? What separates it from the Dell MD3820i that I posted? Is it the dual storage processors? Adding this feature to the Dell brings it up to $16,500 - that's with 12 900GB 10K RPM drives. Is this more in line with a disk array that would be worth investing in?
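
(Running both builds through the same rule-of-thumb math as the earlier IOPS sketch shows roughly what the extra spend buys; the per-drive figures are the same ballpark assumptions.)

code:

# Same rule-of-thumb math as the earlier sketch, for both candidate
# builds, assuming RAID 10 and a 70/30 read/write mix throughout.
def raid10_estimate(n_drives, per_drive_iops, drive_tb, read_pct=0.7):
    frontend = (n_drives * per_drive_iops) / (read_pct + (1 - read_pct) * 2)
    usable_tb = n_drives * drive_tb / 2  # half the raw capacity, mirrored
    return round(frontend), usable_tb

print(raid10_estimate(8, 75, 2.0))    # 8 x 2TB 7.2k:   (~462 IOPS, 8.0 TB)
print(raid10_estimate(12, 140, 0.9))  # 12 x 900GB 10k: (~1292 IOPS, 5.4 TB)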

sudo rm -rf
Aug 2, 2011




Hey guys, looking at the HP MSA 2040 datasheet - saw this:

quote:

Choice of 8Gb/16Gb FC, 1Gb/10GbE iSCSI and 12Gb SAS to match the configuration needs of infrastructure.

What's the difference between the 12Gb SAS controller and the others? What would the SAS controller be used for in a storage environment? Expansion?


sudo rm -rf
Aug 2, 2011




Docjowles posted:

Usually that's for direct attached storage, as opposed to accessing it over the network as you would with the other options.

Klenath posted:

SAS is used for SAN back-end disk shelf loops and related expansion needs through adding shelves. Hosts typically connect to the array via FC or iSCSI on the front end, while the array passes the data to disk via FC (older) or SAS (newer) loops. I've not seen a SAN that uses SAS on the front end for host connectivity.

Good to know - thanks for the info.
