kiwid
Sep 30, 2013

We're looking at an entry-level iSCSI SAN to implement a 3-host VM solution (VMware Essentials Plus kit) for about 15 guests. Currently all our servers are on separate physical hardware and we want to slowly migrate to a fully virtualized solution. I was looking at the MD3220i with 24x 1TB 7200RPM Near-Line drives, but a friend told me they were awful and to look into the EqualLogic PS solutions. He also told me to get 10k drives.

I was just wondering if I can get more insight on this before I go the more expensive route. Keep in mind, we're trying to keep costs low since we're a smaller company and don't have an enterprise budget.


kiwid
Sep 30, 2013

Is the MD3220i a good choice with 15k/10k drives?

What do you guys typically use to store massive amounts of data (300 users with 40GB Exchange mailboxes)?

We could probably go with a DAS at that point if we use DAGs, but what about file servers that hold a lot of data?

kiwid
Sep 30, 2013

Dilbert As gently caress posted:

We are an HP shop and go with MSA's/3Par.


What's your budget?

Tiering data and looking at what you have, as well as what storage you need, is important. 7.2k drives can provide high capacity for low (overall) I/O needs such as a file server or large stagnant data, while a SQL DB or other DB needs HIGH I/O with frequent R/W.

Just for storage, our budget is about 40-50k. That includes backups, since our current backup solution is a couple of lovely FreeNAS whiteboxes.

Is this possible?

We can't reuse much hardware, we're essentially starting over here after going a decade not upgrading anything.
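Dilbert's tiering point can be put in rough numbers. These per-spindle IOPS figures are generic rules of thumb, not vendor specs:

```shell
# Rough aggregate random IOPS for a 24-spindle array at each tier.
# Per-drive figures are generic rules of thumb, not vendor numbers.
drives=24
iops_7k=75     # 7.2k NL-SAS, ~75 IOPS each
iops_10k=125   # 10k SAS, ~125 IOPS each
iops_15k=175   # 15k SAS, ~175 IOPS each
total_7k=$(( drives * iops_7k ))
total_10k=$(( drives * iops_10k ))
total_15k=$(( drives * iops_15k ))
echo "7.2k: ${total_7k} IOPS  10k: ${total_10k} IOPS  15k: ${total_15k} IOPS"
```

Capacity goes the other way: the 7.2k tier is cheapest per TB, which is why mixing tiers is attractive.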

kiwid
Sep 30, 2013

Dilbert As gently caress posted:

That is very possible. Storage is pretty cheap-ish nowadays. What does your infrastructure look like (aside from 15 servers)? What's your projected growth?

Everything is nearly decade-old hardware running Server 2003. File server and Exchange server storage are both on internal drives inside the physical servers. MSSQL databases total about 20GB (they're small). We do not have any DAS/NAS/SANs that we can reuse. We're basically starting over here.

We have about 30 users with 40-50GB Exchange mailboxes and the rest are all quota limited at 1GB, but our CEO wants us to lift the limit due to too many user complaints. We want to 100% eliminate PST files, so we need to account for all that junk that's sitting on their local hard drives as well. The file server is small actually, about 1TB of data, and it grows slowly. Exchange is our biggest storage nightmare along with backups.
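A back-of-envelope sizing for the mailbox data above. The 5GB post-quota average for the remaining users is an assumption; everything else comes from the post:

```shell
# 30 heavy users at ~50GB each; the remaining 270 of the 300 users
# are assumed to settle around 5GB once the 1GB quota is lifted (a guess).
heavy_users=30;  heavy_gb=50
light_users=270; light_gb=5
total_gb=$(( heavy_users * heavy_gb + light_users * light_gb ))
echo "~${total_gb} GB of raw mailbox data, before DAG copies and growth"
```

Multiply by the number of DAG copies and add headroom for PST ingestion before sizing the array.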

kiwid
Sep 30, 2013

Can you get expansion units for a MD3220i if storage needs grow fast or would you be looking at a second SAN?

kiwid
Sep 30, 2013

Dilbert As gently caress posted:

This is literally what you can get for 21k, MSRP mind you, through Dell



Hey quick question, what type of RAID config would you do with this setup? RAID-10, or RAID-6 with 6 drives of each of the 3 different drive types?

Also, I've never worked with SSD cache. Do you RAID those as well, or is each SSD on its own? Also, the SAN takes care of that poo poo automatically, right? Or is that something I set up in VMware under host cache?
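For what it's worth, the usable-capacity math for the two layouts on a 6-drive group is simple (1TB drives assumed for illustration):

```shell
# Usable capacity of a 6-drive group, hypothetical 1TB drives.
n=6; size_tb=1
raid10_tb=$(( n / 2 * size_tb ))      # mirrors: half the spindles hold data
raid6_tb=$(( (n - 2) * size_tb ))     # double parity: n-2 data drives
echo "RAID-10: ${raid10_tb} TB usable, RAID-6: ${raid6_tb} TB usable"
```

RAID-6 gives more space out of the same spindles; RAID-10 generally gives better random write performance since there is no parity penalty.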

kiwid fucked around with this message at 17:00 on Oct 21, 2013

kiwid
Sep 30, 2013

Wikipedia posted:

However, the performance of an iSCSI SAN deployment can be severely degraded if not operated on a dedicated network or subnet (LAN or VLAN).

Can someone explain why this is the case? Why do they tell you to have a separate VLAN for iSCSI traffic, and in the case of MPIO, multiple VLANs?

kiwid
Sep 30, 2013

Wicaeed posted:

Do you really need multiple VLANs for MPIO?


Pretty much everything I read says to use a different subnet for each interface. I guess you could do something with static routing to make it work, but I haven't looked into that.

kiwid
Sep 30, 2013

NippleFloss posted:

There are a fair number of good reasons. A separate VLAN means a separate Spanning Tree domain which can mean faster and more optimized convergence and resilience against Spanning Tree problems in other VLANs. There are also configurations that can be applied on a per VLAN basis such as MTU size and QOS. A dedicated VLAN also limits the amount of broadcast traffic that the NICs on your storage network have to deal with. And with a dedicated VLAN you can leave it un-routed and prune the VLAN off of trunks to limit possible paths the data can take through the network to make things as efficient as possible.


This is a bad idea and isn't guaranteed to work with all clients and all storage. The basic problem: let's say you've got two physical interfaces on your host configured with IPs in the same subnet, 192.168.1.1 and 192.168.1.2, and you want to communicate with 192.168.1.3 which is a host somewhere on the network. Which interface will traffic to 192.168.1.3 leave out of?

Ah, thanks for clearing that up. I could never find anywhere that actually explained that.
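A toy sketch of the egress ambiguity NippleFloss describes: with both NICs in 192.168.1.0/24 the host ends up with a single connected route for that subnet, so every packet to 192.168.1.3 matches the same route and leaves via the same NIC. Interface names here are made up:

```shell
# One connected route per subnet: whichever NIC installed it "wins".
# eth0/eth1 are hypothetical; eth1's duplicate route never gets used.
route_net="192.168.1.0/24"
route_dev="eth0"
dest="192.168.1.3"
# Trivial membership check for a /24: compare the first three octets.
if [ "${dest%.*}" = "${route_net%.*}" ]; then
  egress="$route_dev"
fi
echo "traffic to ${dest} egresses via ${egress}"
```

Putting each interface in its own subnet removes the ambiguity, which is why the vendor docs keep recommending it.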

kiwid
Sep 30, 2013

Agrikk posted:

The MPIO config for my NetApp FAS2050 was the same way. If you have a host with NicA and NicB and the Filer with active/active heads listening on Nic1 and Nic2, their best practice was to have all four on the same VLAN and subnet so you could have:

NicA -> Nic1
NicA -> Nic2
NicB -> Nic2
NicB -> Nic1

traffic patterns. For better redundancy: have a stacked pair of switches serving up only iSCSI traffic so there's no need to VLAN at all, and plug NicA and Nic1 into SwitchA and NicB and Nic2 into SwitchB.

I'm not sure if that's still the way NetApp does it as the FAS2050 is ancient by IT years.

So you get 4 paths this way?
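Agrikk's four traffic patterns are just the cross product of initiator NICs and target NICs, so 2 x 2 = 4 (names taken from his example):

```shell
# Enumerate MPIO paths: every host NIC to every filer NIC.
paths=0
for h in NicA NicB; do
  for t in Nic1 Nic2; do
    echo "path: ${h} -> ${t}"
    paths=$(( paths + 1 ))
  done
done
echo "${paths} paths total"
```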

I'm now wondering if I even have my home lab setup correctly. I have two NICs on my host and two NICs on my NAS so two paths using separate VLANs, right?

Does this look right? http://imgur.com/a/Lnxsz

kiwid
Sep 30, 2013

Syano posted:

What do you guys see being used most in the SMB space for backup storage? Are people storing their backups on the same SAN as their production data? Or do you see folks adding secondary storage or maybe even local storage for backups?

If I had my way I'd be backing up locally to a large multi-TB QNAP or something and then syncing to our building down the road via MIMO PTP wireless links. All of it can be done relatively cheaply. But hey, I live in the real world where we don't back up to anything. :smithicide:

kiwid
Sep 30, 2013

How come Nimble isn't in the OP? What are Goons' opinions on it?

kiwid
Sep 30, 2013

Figured this is the best thread to ask this in, but does anyone here back up to Amazon S3 or Amazon Glacier (not sure what the difference is currently)?

If so, do you like it? Are there any caveats I should know about, or is it as simple as backing up/archiving your data and hoping you never have to touch it (we'd still be doing on-site backups)?

Also, what the gently caress is a "request"? If I'm backing up one server, is that one request or is a request done for each file or what?

edit: Also, assuming poo poo hit the fan and we had to resort to disaster recovery by getting our data off Amazon Glacier: how do you actually do that, assuming you had 20TB on there? Would you be downloading it all over the WAN, or would they send you a hard drive or something?
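For scale, a back-of-envelope number for pulling 20TB over a WAN link. The 100Mbit/s line speed is an assumption, and Glacier retrieval fees and any physical shipping options AWS offers are a separate question:

```shell
# 20 TB over a hypothetical 100 Mbit/s WAN link, at wire speed, no overhead.
tb=20; mbit=100
bits=$(( tb * 8 * 1000000000000 ))     # decimal TB -> bits
secs=$(( bits / (mbit * 1000000) ))
days=$(( secs / 86400 ))
echo "~${days} days of continuous transfer"
```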

kiwid fucked around with this message at 22:01 on Jul 30, 2015

kiwid
Sep 30, 2013

I have a question. A friend told me that I need to be unmapping or doing something in VMware to free up storage on our SAN. I'm not really sure what he's talking about. We have a Nimble CS300. Do I need to be doing any maintenance tasks on this thing like he mentioned?

edit: some more info: The CS300 is just one array with two volumes/datastores. Using iSCSI and both datastores are formatted with VMFS using the entire space. A very straight forward simple setup.

kiwid fucked around with this message at 19:09 on Mar 27, 2018

kiwid
Sep 30, 2013

How do I know if I have to do that?

kiwid
Sep 30, 2013

YOLOsubmarine posted:

What version of ESXi are you running?

6.0 U2

Internet Explorer posted:

Also, are the LUNs that your datastores sit on thin provisioned? If not, then you don't need to worry about this.

Yes


kiwid
Sep 30, 2013

YOLOsubmarine posted:

Then you’ll need to run the scsi unmap command manually to reclaim thin provisioned blocks on the datastore.

https://kb.vmware.com/s/article/2057513.

Starting in 6.5 it’s automated (again).


Well, we have a planned upgrade soon. If I upgrade, will it just start automating it then, so I don't have to worry about this?

edit: nvm, found this:

quote:

However, due to the changes done in VMFS 6 metadata structures to make it 4K aligned, you cannot inline/offline upgrade from VMFS5 to VMFS6.
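Until that upgrade happens, the manual reclaim from YOLOsubmarine's KB link boils down to running unmap per datastore from an ESXi shell. The datastore name and block count here are placeholders:

```shell
# Manual dead-space reclaim on ESXi 6.0 (see VMware KB 2057513).
# "Datastore1" and the 200-block batch size are placeholder values.
esxcli storage vmfs unmap -l Datastore1 -n 200
```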

kiwid fucked around with this message at 03:40 on Mar 28, 2018
