|
adorai posted:I'm not trying to poo poo on you here, but I am amazed that this still exists in 2013. We've been fully virtualized since 2009, and my previous employer was getting there as well.

Be amazed at SMB, where 70%+ is still not virtual
|
# ? Oct 18, 2013 03:15 |
|
|
TCPIP posted:We have about 30 users with 40-50GB Exchange mailboxes and the rest are all quota limited at 1GB, but our CEO wants us to lift the limit due to too many user complaints. We want to 100% eliminate PST files, so we need to account for all that junk that's sitting on their local hard drives as well. The file server is actually small, about 1TB of data, and it grows slowly. Exchange is our biggest storage nightmare, along with backups.

If your CEO really wants to keep your storage budget low, tell him to stop being a moron about mailbox size limits. Implement sane retention limits.
|
# ? Oct 18, 2013 03:22 |
|
MrMoo posted:This is more semantics, file copy is a reasonable serial speed test but clearly not a scalability or random access test.

If you're trying to benchmark an application that is going to do a single-stream copy of data to your SAN, which is otherwise completely idle, then I suppose it's a fine benchmark. If you're trying to benchmark any workload that actually exists in the real world on shared storage, then it's a pretty terrible one.

TCPIP posted:We have about 30 users with 40-50GB Exchange mailboxes and the rest are all quota limited at 1GB, but our CEO wants us to lift the limit due to too many user complaints. We want to 100% eliminate PST files, so we need to account for all that junk that's sitting on their local hard drives as well. The file server is actually small, about 1TB of data, and it grows slowly. Exchange is our biggest storage nightmare, along with backups.

You don't need fast disk to run Exchange 2010. If you can get 20ms you will be fine, and really it can go even higher than that without any user impact, provided your CAS servers have enough RAM to keep a large number of messages cached. So you would probably be fine with 7.2k disks from a latency perspective. However, when sizing for 2010 you need to consider the impact of background maintenance IO, which is about 5 MB/s and runs on a per-DB basis, so the more databases you have, the more throughput you need available to cover situations where all DBs are running maintenance. Very large mailboxes can exacerbate this by forcing you to use more DBs than you otherwise would, and by causing the maintenance to run longer than it otherwise would. This might not be a problem for you, since it doesn't sound like you have a very large environment, but it's something to consider.

You also might be on the right track going with a DAS solution for your Exchange environment, if it really is your biggest storage problem. With enough DAG members to handle your failure scenarios, plus at least one lagged copy, you could pretty much do away with backups entirely and save a big chunk on your backup environment upgrades.
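To put rough numbers on the maintenance point above, here is a back-of-the-envelope sketch in Python. It assumes the ~5 MB/s per-database figure quoted in the post; the real rate varies with Exchange version and maintenance schedule.

```python
# Rough throughput headroom for Exchange 2010 background maintenance.
# Assumes ~5 MB/s of maintenance IO per mounted database, per the post
# above; actual rates depend on version, schedule, and DB size.

MAINT_MB_S_PER_DB = 5

def maintenance_throughput(db_count: int) -> int:
    """Worst-case extra MB/s needed if every DB runs maintenance at once."""
    return db_count * MAINT_MB_S_PER_DB

for dbs in (2, 8, 16):
    print(f"{dbs} DBs -> {maintenance_throughput(dbs)} MB/s maintenance headroom")
```

This is why splitting the same mailbox data across more, smaller databases raises the throughput floor even when total capacity is unchanged.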
|
# ? Oct 18, 2013 05:35 |
|
Dilbert As gently caress posted:I believe you can add another rack, but call your Dell rep for that.

What are your storage requirements ATM, in GB? Do you know your I/O load?

evil_bunnY fucked around with this message at 14:37 on Oct 18, 2013 |
# ? Oct 18, 2013 14:33 |
|
Dilbert As gently caress posted:This is literally what you can get for 21k MSRP, mind you, through Dell

Hey, quick question: what type of RAID config would you do with this setup? RAID-10, or RAID-6 with 6 drives of each of the 3 different types of drives? Also, I've never worked with SSD cache. Do you RAID those as well, or is each SSD by itself? Also, the SAN takes care of that poo poo automatically, right? Or is that something I set up with VMware under host cache?

kiwid fucked around with this message at 17:00 on Oct 21, 2013 |
# ? Oct 21, 2013 16:49 |
|
kiwid posted:Hey quick question, what type of RAID config would you do with this setup? RAID-10 or RAID-6 with 6 drives of the 3 different types of drives?

It depends on what the data requirements are. You may have some high-transaction servers that would love to run on RAID 10 on those 15k drives, then do RAID 5 on the 10k for high-level data, and RAID 6 on the 7.2k for your slow data, Exchange DBs/file servers, etc. It really depends on the environment. Here is how Dell works with their SSD caching: http://www.dell.com/downloads/global/products/pvaul/en/powervault-md3200i-md3220i-technical-guidebook-en.pdf
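To make the capacity trade-off between those RAID levels concrete, here is a quick sketch using the simple textbook formulas (it ignores hot spares and vendor formatting overhead, which real arrays deduct):

```python
# Usable capacity for common RAID levels, using the textbook formulas;
# vendor implementations deduct spares and formatting overhead on top.

def raid_usable(level: str, drives: int, size_gb: float) -> float:
    if level == "10":
        return drives // 2 * size_gb      # mirrored pairs: half the raw space
    if level == "5":
        return (drives - 1) * size_gb     # one drive's worth of parity
    if level == "6":
        return (drives - 2) * size_gb     # two drives' worth of parity
    raise ValueError(f"unhandled RAID level: {level}")

# Six 600 GB drives under each layout from the discussion:
for lvl in ("10", "5", "6"):
    print(f"RAID {lvl}: {raid_usable(lvl, 6, 600):.0f} GB usable")
```

The gap between RAID 10 and RAID 5/6 capacity is one reason the fast 15k tier usually ends up as the mirrored one: you pay the capacity tax on the drives where IOPS, not space, is the scarce resource.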
|
# ? Oct 21, 2013 22:55 |
|
Anyone here a fan of HP storage? I am trying to find the selling point of the MSAs, but I see they fall short of Dell's MD series on features, mainly SSD caching. The only thing I can figure HP has going is that their controllers have 4GB of cache to Dell's 2GB per controller, and they are a bit cheaper. Same with 3PAR: the 7000s are good, but compared to a VNX (to some extent the VNXe)/NetApp/Compellent they feel limited.

Dilbert As FUCK fucked around with this message at 18:19 on Oct 22, 2013 |
# ? Oct 22, 2013 17:43 |
|
HP storage to me is a disjointed mess. They snapped up a bunch of companies, but then didn't really do much with them after the fact. I've only used LeftHand and MSA stuff from HP. The MSAs are just rebadged DotHill arrays for the most part. They're OK at best, if budget limits you to this line. I'm pretty sure the P4xxx stuff formerly known as LeftHand is basically dead at this point, though I'm unfamiliar with the current P4xxx line. I don't see the EVA 4xxx listed on their website anymore, so that's probably gone, and I can't speak to 3PAR storage at all. My personal opinion is there are better options out there for the money.

My personal generalized starting-point recommendations in the storage market:

Basic iSCSI - EqualLogic
More features iSCSI - EMC VNXe or Compellent
Higher-end kit - EMC VNX or NetApp

The Dell MDs seem to get a lot of love, but I have an irrational dislike for them, mostly because of the market that uses them.
|
# ? Oct 22, 2013 18:02 |
|
I've had experience with an MSA P2000 iSCSI unit with a couple of expansion bays. It's easy enough to set up; the hardware design is nothing special, but it's not terrible. Honestly, I only ended up using it due to budget constraints, but the thing worked fine and the performance was as expected. If I was doing it again I'd probably pick one of the lower-end Dell MD arrays, especially as they can now do SSD caching, which wasn't a feature when I was looking at them. I think you could level the complaint about acquiring vendors and not doing much with them at Dell as well, with Compellent/EqualLogic, although the latest round of products seems to be addressing this.

Thanks Ants fucked around with this message at 22:56 on Oct 22, 2013 |
# ? Oct 22, 2013 22:53 |
|
Caged posted:I've had experience with an MSA P2000 iSCSI unit with a couple of expansion bays. It's easy enough to set up; the hardware design is nothing special, but it's not terrible. Honestly, I only ended up using it due to budget constraints, but the thing worked fine and the performance was as expected.

We used one of those (MSA 2000 G3) at a past job. That pretty much sums it up: absolutely nothing special, but pretty drat cheap, and it worked reliably. The web UI was the worst poo poo, but we were also very behind on firmware, so there's an outside chance that got fixed. I can't say I'd actively recommend it to anyone, but it did the job, as long as the job was "provide some spinning disks over the network" and nothing fancier.
|
# ? Oct 22, 2013 22:57 |
|
Syano posted:Powervault kits are great. You can add shelves any time you need, up to like 192 total drives or something like that. You can't go wrong with them for small deployments

Don't work with many small/medium businesses, do you? I had 10-year-old Dell PowerEdge servers running mission-critical stuff when I first started working at my current shop, and the prevailing attitude towards hardware purchases was generally "Just use one of the old ones laying around." Case in point: about half a year before I was hired (call it three years ago) they purchased an entirely new software package that drove... well, everything from inventory to service calls. Rather than purchase a new server and new storage for their new program, they just took down half of the old ERP software's cluster, reinstalled Windows on it, appropriated half of its storage, and set up the new system on it. Naturally it ran like poo poo. The excuse I heard was "Well, we already owned an extra server we didn't need."

Trying to pry money out of people like this is hard. My entire first year here was spent convincing the owner that just because you spent 10k on something five years ago doesn't mean it's worth a poo poo now. Servers don't have some intrinsic value that sticks around past the point where they can do any of the jobs you need done. This isn't a car you can restore and then keep on using as a daily driver. Now I have a rack of lovely HP servers running ESX backed by a SAN. Much better than the bare-metal RAID 0 shitboxes with DAS that I was dealing with before.

Rhymenoserous fucked around with this message at 23:01 on Oct 22, 2013 |
# ? Oct 22, 2013 22:59 |
|
No it didn't, the UI was still terrible 3 months ago. It had the undocumented admin account, which we managed to dodge by pure coincidence, because those were the account details I found when I was Googling for the default credentials, and I changed it because HP's documentation is pure poo poo and it's impossible to find relevant information on their website.

Edit: That's a response to Docjowles's post
|
# ? Oct 22, 2013 23:00 |
|
Dilbert As gently caress posted:Be amazed at SMB where 70%+ is still not virtual

This. Most SMBs I've done work for are still not doing any virtualization. Hell, I just took on a client running an NT 4.0 server. Guess what my first priority is there?
|
# ? Oct 23, 2013 23:02 |
|
TKovacs2 posted:Hell, I just took on a client running an NT 4.0 server. Guess what my first priority is there?

The first priority is always "it can't cost anything". The vast majority of SMBs will flatly bury their heads about operational risk and value.
|
# ? Oct 23, 2013 23:04 |
|
evil_bunnY posted:The first priority is always "it can't cost anything". The vast majority of SMBs will flatly bury their heads about operational risk and value.

I DON'T HAVE TO BUY CALs, RIGHT?? Those are just optional, right?
|
# ? Oct 23, 2013 23:07 |
|
"Well, if you don't have to enter the CALs anywhere, then I don't see how they can tell how many users we have."

I dealt with a small business once where one of the co-owners randomly decided overnight to change the 'server' and one of the desktops to Ubuntu, so they were effectively down a PC the next day and nobody could get to any file shares. This was the same guy who thought that spending time trying to trick Samba into being an AD domain controller, rather than just buying a Windows Server license, was a good use of time and money. I'm glad I gave up trying to help them.

Thanks Ants fucked around with this message at 23:10 on Oct 23, 2013 |
# ? Oct 23, 2013 23:08 |
|
evil_bunnY posted:The first priority is always "it can't cost anything". The vast majority of SMBs will flatly bury their heads about operational risk and value.

Yep. Their other five 'servers' are Compaq or whitebox PCs running illegal copies of everything from Windows Vista to Server 2008 R2 Web Server. Good times ahoy.
|
# ? Oct 23, 2013 23:09 |
|
One thing I will say in defence of smaller businesses is that Microsoft software is priced really loving ridiculously high, as well as being the most confusing thing you'll ever read. If you've got a small number of PCs, the initial spend needed to go with some sort of volume license agreement is insane, since you pretty much have to buy all your current licenses again just to be able to buy SA. There's no way to migrate your retail boxed server licenses across and just buy SA on them, or add SA to the Windows licenses included with the PCs you've just bought. This initial hit only gets bigger as the company grows, until you're at the point where it's a gently caress-ton of money to spend just to stay where you currently are, but with SA. Either that or the rep I spoke to was utter poo poo.

Thanks Ants fucked around with this message at 23:31 on Oct 23, 2013 |
# ? Oct 23, 2013 23:26 |
|
Is there any comparable competitor to the Cisco MDS-9148? Looking to hook up about 4 hosts with some DP 8Gb/s FC adapters to a VNX 5400, loaded with flash/15k. I don't generally work too much with FC, and when I do it is mostly HP... Also, besides clock speed/max drives, is there a good compare-and-contrast of the new VNXs? Primarily the VNX5200 and VNX5400. Got a much larger budget than I thought to overhaul the SAN for my VMware lab environments at school.

Dilbert As FUCK fucked around with this message at 01:04 on Oct 24, 2013 |
# ? Oct 24, 2013 01:01 |
|
Just size it realistically and spend on something else?
|
# ? Oct 24, 2013 01:13 |
|
evil_bunnY posted:Just size it realistically and spend on something else?

The new servers I got all came with DP FC cards. If I don't spend the full budget on the project, welp, it's gone (lol government). Personally I would prefer to do 10Gb, but buying accessories such as add-in cards for the hosts is harder than buying straight-up new equipment.

My preliminary plan is:

2x MDS-9148 (16 ports each)
VNX5400
4x 400GB SSDs
25x 300GB 15k drives
25x 600GB 15k drives
+ FAST Suite

Already have a PS4000 and a NetApp FAS2040 loaded with a bunch of 1TB 7.2k drives plus 2 shelves of NL-SAS drives. The NetApp will be the onsite backup and failback point if the VNX takes a massive poo poo; the PS4000 will most likely be offsite for backups. Pretty psyched, and since I just found out my budget I really want to do my absolute best and dedicate as much time to it as I can, because I would really like to use this as a VCDX defense some day.
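A quick sanity check on the raw capacity of that proposed drive mix (raw only; RAID groups, hot spares, and vendor overhead will cut usable space down substantially):

```python
# Raw capacity of the proposed VNX5400 drive mix from the post above.
# No RAID, hot spares, or formatting overhead deducted.

drive_mix = {
    "400GB SSD":  (4, 400),
    "300GB 15k": (25, 300),
    "600GB 15k": (25, 600),
}

raw_gb = sum(count * size for count, size in drive_mix.values())
print(f"{raw_gb} GB raw (~{raw_gb / 1000:.1f} TB)")
```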
|
# ? Oct 24, 2013 01:30 |
|
What's making you lean to EMC for the storage? Unless there are external factors (tied to certain vendors, crazy edu discounts) that aren't as apparent, paying the EMC brand premium for a lab environment seems like a waste. Plus, FC is probably the least user-friendly protocol. I imagine you could get way more bang for the buck elsewhere. If you need to eat up the funds, why not spend it on capacity or flash?
|
# ? Oct 24, 2013 01:37 |
|
Dilbert As gently caress posted:Pretty psyched, and since I just found out my budget I really want to do my absolute best and dedicate as much time to it as I can, because I would really like to use this as a VCDX defense some day.

You're looking at the VCDX in entirely the wrong way based on this post. That said, you should look at either this: http://www.brocade.com/products/all/switches/product-details/300-switch/index.page or this: http://www.brocade.com/products/all/switches/product-details/6505-switch/index.page as alternatives to the MDS 9100 series.

quote:What's making you lean to EMC for the storage?

It's possible, if this is for education use, that an EMC array may be needed to teach EMC-centric classes. Then again it may not be, so this question should really be answered before cutting a PO.
|
# ? Oct 24, 2013 01:44 |
|
three posted:What's making you lean to EMC for the storage?

We teach EMC, and I do like VNXs. I completely agree on the FC use; I would much prefer 10GbE and doing iSCSI/NFS, then FCoE if absolutely needed. However: 1) I don't have the adapters for 10Gb on the hosts, while I have 8Gb cards in the hosts already, and 2) some others want to see what was purchased on the servers fully used, and want to move ahead with it. Personally I feel it introduces a level of overhead that isn't needed, but it's a customer scope requirement.

1000101 posted:You're looking at the VCDX in entirely the wrong way based on this post.

quote:It's possible if this is for education use that an EMC array may be needed to teach EMC centric classes. Then again it may not be so this question should really be answered before cutting a PO.

Bingo!

Dilbert As FUCK fucked around with this message at 01:55 on Oct 24, 2013 |
# ? Oct 24, 2013 01:47 |
|
Dilbert As gently caress posted:Is there any comparable competitor to the Cisco MDS-9148? Looking to hook up about 4 hosts to some DP 8Gb/s FC adapters to a VNX 5400, loaded with Flash/15k.

Do you really need 48 ports? With a VNX and 4 hosts you'll only be using 8 ports on each switch. There's the MDS-9124 or Brocade 300, which are kind of equivalent (8Gb FC/24 port). I find the Brocades a bit easier to work with (zoning etc.), but Cisco's FC gear tends to be a bit cheaper.
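The 8-ports-per-switch figure works out as below, assuming each host contributes one port of its dual-port HBA to each fabric and the VNX presents four target ports per fabric; that layout is an assumption for illustration, not a statement of VNX defaults:

```python
# Per-switch port count for a small dual-fabric FC design.
# Assumes one host HBA port per fabric and four array target
# ports per fabric (an assumed layout, adjust to your cabling).

def ports_per_switch(hosts: int, array_ports_per_fabric: int = 4) -> int:
    return hosts + array_ports_per_fabric

print(ports_per_switch(4))  # 4 host ports + 4 array ports per switch
```

With that math, even a 24-port switch leaves plenty of room to grow, which is the point being made about the 48-port 9148.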
|
# ? Oct 24, 2013 02:30 |
|
GrandMaster posted:Do you really need 48 ports? With a VNX and 4 hosts you'll only be using 8 ports on each switch.

Sorry, yeah, the 9124. Wow, no idea why I copied the 9148...
|
# ? Oct 24, 2013 02:36 |
|
GrandMaster posted:I find the brocades a bit easier to work with (zoning etc), but Cisco's FC gear tends to be a bit cheaper.

Seconding Brocade; FabricOS has a great GUI and the CLI is very sensible.
|
# ? Oct 25, 2013 08:08 |
|
Wikipedia posted:However, the performance of an iSCSI SAN deployment can be severely degraded if not operated on a dedicated network or subnet (LAN or VLAN).

Can someone explain why this is the case? Why do they tell you to have a separate VLAN for iSCSI traffic, and in the case of MPIO, multiple VLANs?
|
# ? Oct 25, 2013 19:45 |
|
Do you really need multiple VLANs for MPIO? I'm setting up an MSSQL Cluster running on an Equallogic backend, and it whines about not having multiple networks for the storage NICs, but works fine if both NICs are on the same subnet.
|
# ? Oct 25, 2013 19:48 |
|
Wicaeed posted:Do you really need multiple VLANs for MPIO?

Pretty much everything I read says to use a different subnet for each interface. However, I guess you can do something with static routing to make it work, but I've not looked into that.
|
# ? Oct 25, 2013 19:53 |
|
Wicaeed posted:Do you really need multiple VLANs for MPIO?

You should, if you want to protect yourself from switch misconfiguration.
|
# ? Oct 25, 2013 19:54 |
|
Not really experienced with EqualLogic, but I seem to recall they're a special snowflake where one network is the correct config. Maybe someone can confirm. That's unusual, though; most vendors, and I believe the MS software iSCSI initiator, want two networks to do MPIO.
|
# ? Oct 25, 2013 21:20 |
|
KS posted:Not really experienced with EqualLogic, but I seem to recall they're a special snowflake where one network is the correct config. Maybe someone can confirm.

Unsure about EQ, but the PowerVault MD series wanted a different VLAN for each address.
|
# ? Oct 25, 2013 21:55 |
|
kiwid posted:Can someone explain why this is the case? Why do they tell you to have a separate VLAN for iSCSI traffic, and in the case of MPIO, multiple VLANs?

There are a fair number of good reasons. A separate VLAN means a separate Spanning Tree domain, which can mean faster and more optimized convergence, and resilience against Spanning Tree problems in other VLANs. There are also configurations that can be applied on a per-VLAN basis, such as MTU size and QoS. A dedicated VLAN also limits the amount of broadcast traffic that the NICs on your storage network have to deal with. And with a dedicated VLAN you can leave it un-routed and prune the VLAN off of trunks, to limit the possible paths the data can take through the network and make things as efficient as possible.

Wicaeed posted:Do you really need multiple VLANs for MPIO?

This is a bad idea and isn't guaranteed to work with all clients and all storage. The basic problem: let's say you've got two physical interfaces on your host configured with IPs in the same subnet, 192.168.1.1 and 192.168.1.2, and you want to communicate with 192.168.1.3, which is a host somewhere on the network. Which interface will traffic to 192.168.1.3 leave out of?
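The ambiguity described above can be illustrated with a toy longest-prefix-match lookup. This is a sketch of the routing-table logic, not how any particular OS actually breaks the tie:

```python
# Toy route lookup: with two NICs in the same subnet there are two
# equally specific connected routes, so the stack has to pick one
# interface arbitrarily -- which is why per-path subnets are preferred.

import ipaddress

routes = [
    (ipaddress.ip_network("192.168.1.0/24"), "nic0"),  # 192.168.1.1
    (ipaddress.ip_network("192.168.1.0/24"), "nic1"),  # 192.168.1.2
]

def candidate_interfaces(dst: str) -> list:
    """Return every interface whose route matches dst at the best prefix length."""
    addr = ipaddress.ip_address(dst)
    matches = [(net.prefixlen, iface) for net, iface in routes if addr in net]
    best = max(p for p, _ in matches)
    return sorted(iface for p, iface in matches if p == best)

# Both interfaces match equally well -- the tie-break is undefined:
print(candidate_interfaces("192.168.1.3"))  # ['nic0', 'nic1']
```

Put each NIC in its own subnet and the two routes stop overlapping, so every destination resolves to exactly one interface and MPIO can drive both paths deterministically.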
|
# ? Oct 26, 2013 01:57 |
|
NippleFloss posted:There are a fair number of good reasons. A separate VLAN means a separate Spanning Tree domain, which can mean faster and more optimized convergence, and resilience against Spanning Tree problems in other VLANs.

Ah, thanks for clearing that up. I could never find anywhere that actually explained that.
|
# ? Oct 26, 2013 02:19 |
|
NippleFloss posted:This is a bad idea and isn't guaranteed to work with all clients and all storage. The basic problem: let's say you've got two physical interfaces on your host configured with IPs in the same subnet, 192.168.1.1 and 192.168.1.2, and you want to communicate with 192.168.1.3 which is a host somewhere on the network. Which interface will traffic to 192.168.1.3 leave out of?
|
# ? Oct 26, 2013 02:53 |
Another thing is having to deal with iSCSI initiators and targets. Some storage arrays' ports can act as both initiator and target for doing certain types of replication and copying from SAN to SAN. If you don't segregate the traffic properly, the SAN can possibly log in to itself through misconfiguration and start bouncing your iSCSI ports. I've seen quite a few clients set things up on their own and wonder why their iSCSI infrastructure isn't working how they'd like.
|
|
# ? Oct 26, 2013 03:28 |
|
For something like an EqualLogic PSx100-series SAN, how exactly would you do different VLANs per path? You have two controllers with 4 ports each. Only 4 are active at once, and they do vertical port failover, so you would need 4 VLANs. If each vertical port group was on a different VLAN, wouldn't you need 8 ports per host to connect to all the paths?
|
# ? Oct 26, 2013 04:24 |
|
bull3964 posted:For something like an Equallogic PSx100 series SAN, how exactly would you do different VLANs per path?

You don't need active/passive network connections to be on separate VLANs -- I don't even see how that would work, since you couldn't have the same IP address between interface pairs.
|
# ? Oct 26, 2013 04:52 |
|
|
Misogynist posted:You don't need active/passive network connections to be on separate VLANs -- I don't even see how that would work, since you couldn't have the same IP address between interface pairs.

I know, that's why I said 4 VLANs. Each pair of vertical-failover ports would be its own VLAN. For example:

0 A/B - 10.100.1.x
1 A/B - 10.100.2.x
2 A/B - 10.100.3.x
3 A/B - 10.100.4.x

But if you want two switches for redundancy, each pair of a vertical port group would be connected to a different switch (A plugged into one, B plugged into the other). If you wanted all 4 ports to be accessible from each host even during a switch failure, then you would need 8 ports on each host: 4 connected to switch A (one on each VLAN) and 4 on switch B (one on each VLAN). If you had fewer ports on a host, I don't see how the host would be able to see all ports on the storage in the event of a switch failure, and you would lose half your storage bandwidth. Dell's best practices for that array have all 4 ports on a single VLAN, and use a single VLAN for all your storage traffic.

bull3964 fucked around with this message at 05:35 on Oct 26, 2013 |
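The port arithmetic behind that argument, sketched out (assuming every host needs one NIC per VLAN per switch for full reachability during a switch failure):

```python
# Host NIC count needed if each of the array's active vertical port
# pairs lives on its own VLAN and you want full reachability across
# redundant switches. Assumes one host NIC per (VLAN, switch) pair.

def host_ports_needed(vlans: int, switches: int) -> int:
    return vlans * switches

print(host_ports_needed(4, 2))  # per-VLAN design: 8 ports per host
print(host_ports_needed(1, 2))  # single-VLAN design: 2 ports per host
```

That 8-vs-2 gap is exactly why Dell's guidance for this array collapses everything onto one storage VLAN.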
# ? Oct 26, 2013 05:32 |