|
We're looking at an entry-level iSCSI SAN to implement a 3-host VM solution (VMware Essentials Plus kit) for about 15 guests. Currently all our servers are on separate physical hardware and we want to slowly migrate to a fully virtualized solution. I was looking at the MD3220i with 24x 1TB 7200RPM Near-Line drives, but a friend told me they were awful and to look into the EqualLogic PS solutions. He also told me to get 10k drives. I was just wondering if I can get more insight on this before I go the more expensive route. Keep in mind, we're trying to keep costs low, as we're a smaller company and don't have an enterprise budget.
|
# ¿ Oct 18, 2013 02:12 |
|
|
|
Is the MD3220i a good choice with 15k/10k drives? What do you guys typically use to store massive amounts of data (300 users with 40GB Exchange mailboxes)? We could probably go with DAS at that point if we use DAGs, but what about file servers that hold a lot of data?
|
# ¿ Oct 18, 2013 02:30 |
|
Dilbert As gently caress posted:We are an HP shop and go with MSAs/3PAR. Just for storage, our budget is about 40-50k. That is including backups, since our current backup solution is a couple of lovely FreeNAS whiteboxes. Is this possible? We can't reuse much hardware; we're essentially starting over here after going a decade without upgrading anything.
|
# ¿ Oct 18, 2013 02:36 |
|
Dilbert As gently caress posted:That is very possible, storage is pretty cheap-ish nowadays. What does your infrastructure look like (aside from 15 servers)? What's your projected growth? Everything is almost decade-old hardware running Server 2003. File server and Exchange server storage are both on internal drives inside the physical servers. MSSQL databases total about 20GB (they're small). We do not have any DAS/NAS/SANs that we can reuse; we're basically starting over here. We have about 30 users with 40-50GB Exchange mailboxes and the rest are all quota-limited at 1GB, but our CEO wants us to lift the limit due to too many user complaints. We want to 100% eliminate PST files, so we need to account for all that junk that's sitting on their local hard drives as well. The file server is actually small, about 1TB of data, and it grows slowly. Exchange is our biggest storage nightmare, along with backups.
|
# ¿ Oct 18, 2013 02:43 |
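For a rough sense of scale on the mailbox numbers above, here's a back-of-the-envelope sizing sketch. The 500 GB PST figure and the 1.3x overhead multiplier (database whitespace, logs, indexes) are illustrative assumptions, not numbers from the thread.

```python
# Rough Exchange storage sizing from the numbers in the post above.
# The 1.3x overhead factor is an assumption, not an Exchange-documented figure.

def mailbox_storage_gb(heavy_users, heavy_gb, light_users, light_gb,
                       pst_ingest_gb=0, overhead=1.3):
    """Return estimated usable capacity needed, in GB."""
    raw = heavy_users * heavy_gb + light_users * light_gb + pst_ingest_gb
    return raw * overhead

# 30 users at ~45 GB, 270 users at the 1 GB quota, plus an assumed
# 500 GB of PSTs pulled off local drives.
needed = mailbox_storage_gb(30, 45, 270, 1, pst_ingest_gb=500)
print(round(needed))  # 2756
```

Even before growth or lifted quotas, that puts Exchange alone in the 2-3 TB range of usable capacity, which is why it dominates the storage and backup planning here.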
|
Can you get expansion units for an MD3220i if storage needs grow fast, or would you be looking at a second SAN?
|
# ¿ Oct 18, 2013 02:57 |
|
Dilbert As gently caress posted:This is literally what you can get for 21k MSRP, mind you, through Dell Hey, quick question: what type of RAID config would you do with this setup? RAID-10, or RAID-6 with the 6 drives of each of the 3 different drive types? Also, I've never worked with SSD cache. Do you RAID those as well, or is each SSD by itself? Also, the SAN takes care of that poo poo automatically, right? Or is that something I set up with VMware under host cache? kiwid fucked around with this message at 17:00 on Oct 21, 2013 |
# ¿ Oct 21, 2013 16:49 |
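The RAID-10 vs RAID-6 trade-off in the question above is mostly a capacity-vs-write-performance decision, and the capacity side is simple arithmetic. Drive counts and sizes below are placeholders; plug in whatever config Dell actually quoted.

```python
# Usable-capacity comparison for the RAID question above.
# Drive counts/sizes are example placeholders, not the quoted Dell config.

def usable_tb(drives, size_tb, level):
    if level == "raid10":
        return (drives // 2) * size_tb   # half the drives are mirrors
    if level == "raid6":
        return (drives - 2) * size_tb    # two drives' worth of parity per group
    raise ValueError(f"unknown RAID level: {level}")

print(usable_tb(24, 1, "raid10"))  # 12
print(usable_tb(24, 1, "raid6"))   # 22
```

RAID-6 nearly doubles usable space on a 24-drive shelf, at the cost of a heavier write penalty, which is why random-write-heavy workloads like Exchange are often steered toward RAID-10.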
|
Wikipedia posted:However, the performance of an iSCSI SAN deployment can be severely degraded if not operated on a dedicated network or subnet (LAN or VLAN). Can someone explain why this is the case? Why do they tell you to have a separate VLAN for iSCSI traffic and, in the case of MPIO, multiple VLANs?
|
# ¿ Oct 25, 2013 19:45 |
|
Wicaeed posted:Do you really need multiple VLANs for MPIO? Pretty much everything I read says to use a different subnet for each interface. However, I guess you could do something with static routing to make it work, but I've not looked into that.
|
# ¿ Oct 25, 2013 19:53 |
|
NippleFloss posted:There are a fair number of good reasons. A separate VLAN means a separate Spanning Tree domain, which can mean faster and more optimized convergence and resilience against Spanning Tree problems in other VLANs. There are also configurations that can be applied on a per-VLAN basis, such as MTU size and QoS. A dedicated VLAN also limits the amount of broadcast traffic that the NICs on your storage network have to deal with. And with a dedicated VLAN you can leave it unrouted and prune the VLAN off of trunks to limit the possible paths the data can take through the network, to make things as efficient as possible. Ah, thanks for clearing that up. I could never find anywhere that actually explained that.
|
# ¿ Oct 26, 2013 02:19 |
|
Agrikk posted:The MPIO config for my NetApp FAS2050 was the same way. If you have a host with NicA and NicB and the filer with active/active heads listening on Nic1 and Nic2, their best practice was to have all four on the same VLAN and subnet so you could have: So you get 4 paths this way? I'm now wondering if I even have my home lab set up correctly. I have two NICs on my host and two NICs on my NAS, so two paths using separate VLANs, right? Does this look right? http://imgur.com/a/Lnxsz
|
# ¿ Oct 29, 2013 22:49 |
|
Syano posted:What do you guys see being used most in the SMB space for backup storage? Are people storing their backups on the same SAN as their production data? Or do you see folks adding secondary storage, or maybe even local storage, for backups? If I had my way, I'd be backing up locally to a large multi-TB QNAP or something and then syncing to our building down the road via MIMO PTP wireless links. All of this can be done relatively cheaply. But hey, I live in the real world, where we don't back up to anything.
|
# ¿ Jan 13, 2014 14:49 |
|
How come Nimble isn't in the OP? What are goons' opinions on it?
|
# ¿ Jul 11, 2015 02:34 |
|
Figured this is the best thread to ask this in, but does anyone here back up to Amazon S3 or Amazon Glacier (not sure what the difference is currently)? If so, do you like it? Are there any caveats I should know about, or is it as simple as just backing up/archiving your data and hoping you never have to touch it (we'd still be doing on-site backups)? Also, what the gently caress is a "request"? If I'm backing up one server, is that one request, or is a request done for each file, or what? edit: Also, assuming poo poo hit the fan and we had to resort to disaster recovery by getting our data off Amazon Glacier, how do you actually do that, assuming you had 20TB on there? Would you be downloading that all over the WAN, or would they send you a hard drive or something? kiwid fucked around with this message at 22:01 on Jul 30, 2015 |
# ¿ Jul 30, 2015 21:57 |
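On the "what is a request" question above: each API call (a PUT, a GET, a multipart-upload part) counts as one request, so backing up many small files costs far more requests than one large archive. The 8 MB part size below is just a common multipart default used for illustration; check the current AWS pricing page for actual rates.

```python
# Estimating S3/Glacier request counts for a backup job.
# Part size is an illustrative assumption, not a fixed AWS value.

def upload_requests(total_bytes, part_size=8 * 1024**2):
    """Multipart upload: one request per part, plus initiate + complete."""
    parts = -(-total_bytes // part_size)  # ceiling division
    return parts + 2

# One 50 GiB backup image vs the same data as 100,000 small 512 KiB files.
print(upload_requests(50 * 1024**3))          # 6402
print(100_000 * upload_requests(512 * 1024))  # 300000
```

This is one reason backup tools usually roll small files into large archive objects before shipping them to S3/Glacier: fewer requests, and far fewer retrieval operations if you ever have to pull 20TB back down.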
|
I have a question. A friend told me that I need to be unmapping or doing something in VMware to free up storage on our SAN. I'm not really sure what he's talking about. We have a Nimble CS300. Do I need to be doing any maintenance tasks on this thing like he mentioned? edit: some more info: The CS300 is just one array with two volumes/datastores, using iSCSI, and both datastores are formatted with VMFS using the entire space. A very straightforward, simple setup. kiwid fucked around with this message at 19:09 on Mar 27, 2018 |
# ¿ Mar 27, 2018 19:06 |
|
How do I know if I have to do that?
|
# ¿ Mar 27, 2018 19:49 |
|
YOLOsubmarine posted:What version of ESXi are you running? 6.0 U2. Internet Explorer posted:Also, are the LUNs that your datastores sit on thin provisioned? If not, then you don't need to worry about this. Yes.
|
# ¿ Mar 28, 2018 01:56 |
|
|
|
YOLOsubmarine posted:Then you’ll need to run the scsi unmap command manually to reclaim thin provisioned blocks on the datastore. Well, we have a planned upgrade soon. If I upgrade, will it just start automating it, so I don't have to worry about this? edit: nvm, found this: quote:However, due to the changes done in VMFS 6 metadata structures to make it 4K aligned, you cannot inline/offline upgrade from VMFS5 to VMFS6. kiwid fucked around with this message at 03:40 on Mar 28, 2018 |
# ¿ Mar 28, 2018 03:37 |
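The manual reclaim mentioned above is run from the ESXi shell with `esxcli storage vmfs unmap`. This sketch just assembles that command for each datastore; the datastore names and the 200 MB reclaim-unit count are example assumptions, not values from the thread.

```python
# Build the manual VMFS unmap commands for ESXi 6.0 (VMFS5 datastores).
# Datastore names and reclaim-unit size are illustrative placeholders.

datastores = ["nimble-ds1", "nimble-ds2"]

def unmap_cmd(ds, reclaim_units=200):
    # esxcli storage vmfs unmap frees dead blocks on a thin-provisioned LUN;
    # -l selects the datastore by label, -n sets blocks reclaimed per iteration.
    return ["esxcli", "storage", "vmfs", "unmap",
            "-l", ds, "-n", str(reclaim_units)]

for ds in datastores:
    print(" ".join(unmap_cmd(ds)))
```

On ESXi 6.5+ with VMFS6 datastores, space reclamation runs automatically in the background, but since VMFS5 can't be upgraded in place to VMFS6, the usual route is creating new VMFS6 datastores and Storage vMotioning the guests over.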