|
Hey there storage guys, I'm looking at getting a simple disk array for iSCSI to create some shared storage for our growing vCenter deployment (6 hosts) - the only caveat is that I really need it to be Cisco hardware. Is there something in the UCS line that could fulfill this need? Could I literally grab a C240 and put Openfiler on it?
|
# ¿ May 14, 2014 21:53 |
|
The main issue is that we get a significant discount on Cisco products (internal pricing), so I'm thinking that jerry-rigging a C240 with FreeNAS might have a big enough price advantage over even an entry-level array like the MD3220i to be worth it.
|
# ¿ May 14, 2014 22:59 |
|
I've got a budget of ~$10k and need an iSCSI solution for a small VMware environment. Right now we're at 2-3 hosts with about 30 VMs. I can do 10GbE, as it would hook into a couple of N5Ks. Dilbert steered me away from hacking together a solution with some discounted UCS C240s a while back and pointed me in the direction of Dell's MD series. I'm currently looking at the MD3800i with 8 2TB 7.2k SAS drives - does that sound alright? I was looking at the configuration options and wasn't really sure what this referred to: I can't tell if that's needed or not.
|
# ¿ Jun 27, 2014 22:04 |
|
I'm not sure, honestly. I'm just a contractor that manages a couple of labs. My project manager and I searched around for something (I'm at Cisco), but we didn't see a lot of good options internally. I saw a lot of brands that I've never heard of, like Infortrend, Overland, CRU, and Promise. I mean, if they'll do the job, cool, but I had no idea who they were and didn't have a lot of confidence in any of them.
|
# ¿ Jun 27, 2014 22:39 |
|
Richard Noggin posted: 8 2TB NL SAS drives for 30 VMs? Have you done any IOPS analysis? I've seen bigger setups fail with more drives and less VMs.

No. This is all new to me (a theme for most of my posts in these forums, haha). How is that usually done? But the VMs aren't all on at the same time - the majority of them are simple Windows 7 boxes used to RDP into our teaching environment. When we aren't in classes, there's not even a need to keep them spun up.

Wicaeed posted: And yeah you're gonna be sad with just 7.2k RPM NL-SAS drives.

I work for Cisco, but we're a small part of it.
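For what it's worth, IOPS analysis usually starts with a back-of-the-envelope estimate. Here's a minimal sketch - the per-drive IOPS figures are typical ballpark numbers (assumptions, not measurements), and the RAID write penalty models the extra back-end I/Os each host write costs:

```python
# Rough host-visible IOPS estimate for a RAID group.
# Per-drive IOPS values are typical ballpark figures, not measured.
DRIVE_IOPS = {"7.2k NL-SAS": 75, "10k SAS": 140, "15k SAS": 180}

# Each host write costs this many back-end I/Os.
WRITE_PENALTY = {"RAID10": 2, "RAID5": 4, "RAID6": 6}

def functional_iops(n_drives, drive_type, raid, read_pct):
    """Host-visible IOPS the group sustains at a given read/write mix."""
    raw = n_drives * DRIVE_IOPS[drive_type]
    write_pct = 1.0 - read_pct
    return raw / (read_pct + write_pct * WRITE_PENALTY[raid])

# 8x 2TB 7.2k NL-SAS in RAID 10 at a 70/30 read/write mix:
print(round(functional_iops(8, "7.2k NL-SAS", "RAID10", 0.70)))  # ~462
```

Divide that by the VM count and compare against what each VM actually needs (Windows desktops are often quoted in the 10-25 IOPS range) - with 30 concurrent VMs, ~462 IOPS is thin.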
# ¿ Jun 28, 2014 03:08 |
|
Richard Noggin posted:Then why get a SAN? Because our Domain Controllers, DHCP Server, vCenter Server, AAA Server and workstations are also VMs, and at the moment everything is hosted on a single ESXi host. I can get more hosts, their cost isn't an issue - but additional hosts do not offer me much protection without being able to use HA and vMotion. That's a reasonable need, yeah?
|
# ¿ Jun 29, 2014 20:25 |
|
Sickening posted: Ha/vmnotion is more for an environment that has uptime/disaster recover needs. The environment posted (outside of a dc) doesn't really have those needs. If 10k is the budget, you will get more bang for your buck to add another beefy host and skipping the storage completely.

What kind of budget do you think would be the minimum required to get my DCs protected and plan for a minimal amount of growth?

Richard Noggin posted: See, that changes the game. You said earlier that you needed the storage for 30 infrequently used VMs that were basically jumpboxes. Without knowing the rest of your environment, I'd be inclined to tell you to put the workstation VMs on local storage and your servers on the SAN, but at least get 10K drives. Speaking to general cluster design, our standard practice is 2 host clusters, and add more RAM or a second processor if need be, before adding a third host.

This sounds like a good plan, and our hosts are pretty beefy - the discount we get on UCS hardware is significant.

Edit: I really appreciate you guys walking me through this. I'm essentially a one-man team, so I don't have a lot of options for assistance outside of my own ability to research.
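On the HA point, the usual sanity check for a small cluster is N+1: can the surviving hosts hold the running VMs if one host dies? A quick sketch (the numbers in the example are illustrative, not from this environment):

```python
# N+1 sizing check: can the cluster absorb losing its largest host?
def n_plus_one_ok(hosts_ram_gb, used_ram_gb):
    """True if remaining hosts can hold current VM RAM usage after
    the largest host fails."""
    survivors = sum(hosts_ram_gb) - max(hosts_ram_gb)
    return used_ram_gb <= survivors

# Three 96 GB hosts with 150 GB of VM RAM in use:
print(n_plus_one_ok([96, 96, 96], 150))  # True - two survivors leave 192 GB
```

The same check applies to CPU; whichever resource fails the check first is what sizes the cluster.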
|
# ¿ Jun 29, 2014 22:10 |
|
Sickening posted: I would think somewhere in the 25k to 30k. Servers, expandable storage, and networking behind it. It might seem like a lot, but after the extra fees, support contracts, and tax that is where you are going to be.

What do you mean by this?
|
# ¿ Jun 29, 2014 22:20 |
|
1000101 posted: For DCs you wouldn't need to worry too much since you can just build 1 DC on each ESXi host and they'll all replicate to each other.

Nah, I'm in Atlanta.
|
# ¿ Jun 29, 2014 22:27 |
|
Sickening posted: Ugh, are you serious?

Yes? I wanted to see if some of the equipment I needed to include in my budget was something I already had. What do you want from me? If you're willing to impart professional advice, I'm absolutely willing to hear it. If you don't want to, cool, that's fine too. I've never used enterprise storage in a professional setting before; I've been out of college barely a year and I've had my current (only) job for even less. I apologize if my questions are annoying, but you are free to ignore them if they are so loving unbelievable. I don't think being a rear end in a top hat about it is warranted.
|
# ¿ Jun 29, 2014 22:35 |
|
I apologize for not being clear enough on my needs / the details. Part of it has been figuring out what information would be relevant.

Servers - Currently a single UCS C220 M3. It has 2 Xeon E5-2695s @ 2.40GHz, which comes to 48 logical processors, and 96 GB of RAM. Definitely something I can get replaced or upgraded with ease - the discount for Cisco equipment is high. I've asked for two more of these servers along with the disk array, and I should get them without any problems.

Storage - This is what I can't really get internally. Right now everything is running off the local storage of our single ESXi server, which is using 8 900GB 10k drives in RAID 10.

Networking - This is transitional; we are in the process of moving towards a Nexus-only environment, but at the moment I have 2 N5Ks in place along with several N2Ks. The ESXi host currently connects to an N2K, and I assumed I would directly connect the disk array to the N5Ks to take advantage of 10G. I was leaning towards iSCSI, but I believe I can also do FCoE because of the unified ports on my N5Ks. But yeah, this is less of a budget concern considering the discount.

This is the latest Dell config I've put together, with the HDDs changed (this includes 12 HDDs):
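As a side note, the usable capacity of that local RAID 10 datastore is easy to sanity-check - mirrored pairs halve the raw capacity (a sketch, ignoring filesystem and hot-spare overhead):

```python
# Usable capacity of a RAID 10 set: drives are mirrored in pairs,
# so half the raw capacity is usable.
def raid10_usable_gb(n_drives, drive_gb):
    return n_drives // 2 * drive_gb

# The current host's 8x 900 GB RAID 10 local datastore:
print(raid10_usable_gb(8, 900))  # 3600 GB usable
```

That ~3.6 TB is the ceiling everything is squeezed into today, which is worth comparing against the raw capacity of any array quote.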
|
# ¿ Jun 29, 2014 23:07 |
|
Pantology posted: I'm surprised Cisco's Invicta hasn't been mentioned as a comedy-option. Way, way overkill (and lists for like 10x that Dell build), but it has a Cisco faceplate, and if the internal discount makes it cheap enough...

Funnily enough, my group actually did look into it, but yeah, it's way overkill. The other option was building out a couple of C240s and throwing FreeNAS on them.
|
# ¿ Jun 30, 2014 17:56 |
|
Richard Noggin posted: The amount of time that a host spends in maintenance mode is very, very small. Wouldn't you rather take that other $5k that you would have spent on a server and put it into better storage/networking?

How much does the VNXe3150 run? What separates it from the Dell MD3820i that I posted - is it the dual storage processors? Adding that feature to the Dell brings it up to $16,500. That's with 12 900GB 10K RPM drives. Is this more in line with a disk array that would be worth investing in?
|
# ¿ Jul 1, 2014 00:01 |
|
Hey guys, looking at the HP MSA 2040 datasheet - saw this:

quote: Choice of 8Gb/16Gb FC, 1Gb/10GbE iSCSI and 12Gb SAS to match the configuration needs of infrastructure.

What's the difference between the 12Gb SAS controller and the others? What would the SAS controller be used for in a storage environment? Expansion?
|
# ¿ Sep 11, 2014 16:16 |
|
|
Docjowles posted: Usually that's for direct attached storage, as opposed to accessing it over the network as you would with the other options.

Klenath posted: SAS is used for SAN back-end disk shelf loops and related expansion needs through adding shelves. Hosts typically connect to the array via FC or iSCSI on the front-end, while the array passes the data to disk via FC (older) or SAS (newer) loops. I've not seen a SAN which uses SAS on the front-end for host connectivity.

Good to know - thanks for the info.
|
# ¿ Sep 11, 2014 19:56 |