Syano
Jul 13, 2005
I've become more curious now. This Xen deployment isn't going to get very large, only 2 hosts and maybe 20 total VMs. However, having a separate storage repository for each VM is sort of annoying. I think it's time to do some googling.

Syano
Jul 13, 2005
So I should still be able to do Xenmotion and all the other bells and whistles by putting all my vdisks on a single SR?

Syano
Jul 13, 2005
I dunno. It was what was shipped to me. So far though I like it a lot. More features than Hyper-V, for sure.

Syano
Jul 13, 2005
What would be a good option for an SMB/NFS virtual machine/appliance with deduplication? I am looking to run something on top of ESXi that I can pass iSCSI LUNs to and that can act as a file server for my network. You can only get Windows Storage Server as an OEM product, and I would much rather run the gateway as a VM so I could take advantage of vMotion and HA.

Syano
Jul 13, 2005
I am pretty curious about this too. Based on a pointer just a few posts up I started researching NexentaStor, and on paper the concept looks fantastic and like a perfect fit for an upcoming project.

Syano
Jul 13, 2005
Certainly a concern but I am not sure how big of one it is at this point. The product still seems to be trucking along even almost a year after the announcement.

Syano
Jul 13, 2005

InferiorWang posted:

Speaking of lefthand, for those with a p4300/4500, how do you set up the NICs on your equipment? Do you bond the nics on each shelf/unit?

Yes, we bonded the NICs, and we went further and had each NIC in the bond plug into a separate switch stack member.

Syano
Jul 13, 2005

InferiorWang posted:

So you did the adaptive load balancing then. I'm debating whether to do that or just go with LACP bonding to one switch. Switch one is a cisco 4507 while switch two is a 2960G. I wonder if there is any performance difference between the two options.

That's correct, we did adaptive load balancing. Since technically both my switches are members of the same stack I could have set up LACP, but I decided to go with the easier option. I am not sure whether the performance is different between the two or not.

Syano
Jul 13, 2005

Nomex posted:


Also, tape sucks. Disk to disk backup is where it's at.

We never could make disk-to-disk as our only source of backups make sense for us as a company. We dump initial backups to disk, yes, but we archive off weekly and monthly sets to LTO4. We haven't found any solution yet for storing the entire set of company data that beats a turtle of LTO5s in an offsite safe to protect against mega disaster, or insane admin, or both.

Syano
Jul 13, 2005
I have a customer that ordered an expansion shelf for an AX4 and asked me to put it in. I have zero experience with EMC, much less the CLARiiON line, and it looks like I need some sort of login to even get to the documentation. Is adding an expansion shelf a pretty easy prospect? Is it hot add? Or am I going to have to spin down the array?

Syano
Jul 13, 2005
Ok, I have a question about iSCSI offload. I have a couple of R710s with 4 onboard Broadcom NICs that have the option to do iSCSI offload, and a quad-port Intel gigabit NIC. Broadcom's documentation says they see big CPU performance increases running high IO loads by using the hardware iSCSI initiator. Intel's documentation says they see no discernible difference in CPU usage when using their NICs and a software initiator, plus they say using their NICs and the software initiator is better since it doesn't break the standard OS model (whatever that means).

Who is correct? Have any of you done any testing at all in the real world with these scenarios?

Syano
Jul 13, 2005
With all the bad press I have seen flying around recently concerning EMC, I would default to the NetApp based on that alone. The cheaper pricing is just icing on the cake.

Syano
Jul 13, 2005
Yes, that's a super open-ended question. What's your budget/feature requirements/etc/etc/etc?

Syano
Jul 13, 2005

FISHMANPET posted:

Since we're on a push lately for redundant everything, I've been thinking about making everything redundant. .... For Windows file sharing I can use DFS. (and probably others).



That's not entirely accurate. While DFS can give you higher availability than just having one file server, the service was primarily designed to solve problems with low bandwidth between two sites and needing file access between them. DFS relies heavily on Active Directory replication, which by definition only provides loose convergence. It also has some very big caveats dealing with file sizes, replication times, and a few other things I can't really remember right off the top of my head. If you want true high availability from your Windows file servers, the answer is the same as it has been for a decade: an HA cluster.

Syano
Jul 13, 2005

FISHMANPET posted:

Thanks, I guess that's another thing to put on my list.

Here's the good news: if you already have a SAN and are virtualizing, then it's really trivial nowadays to put together a Windows cluster.

Syano
Jul 13, 2005

the spyder posted:

Does anyone have experience with 1+ PB storage systems? We have a project that may require up to 7.5TB of data collection per day.

I have nothing of value to offer other than I would love to know the details of the system you end up using. I have long been fascinated with storing immense amounts of data and the technologies required to do so.

Syano
Jul 13, 2005
Just to reinforce a point that has been made in this thread over and over again: we have a backup server populated with 12x 2TB 7.2k RPM drives. When we purchased it several years ago we had it configured from the vendor as a RAID 5 array. Mistake. We had a drive fail yesterday. No problem, usually. We pulled the bad drive, hot swapped a spare in, and popped into the server manager to make sure the rebuild was taking place. Holy smokes. At the speed it is currently going, the array will be finished rebuilding this time next week. Now here we are crossing our fingers that another drive doesn't flop in the next 6 days. ALWAYS configure huge arrays as RAID 6 or similar. Lesson learned.
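
If you want to see why that finger-crossing is justified, here is a rough back-of-the-envelope sketch in Python. The URE rate and rebuild speed are assumptions (typical published figures for 7.2k SATA drives, not numbers from my array), but the shape of the math is the point:

```python
# Rough sketch of why a RAID 5 rebuild on big SATA drives is scary.
# Assumptions, not vendor specs for any particular drive:
#   - URE rate of 1 in 1e14 bits read (a common 7.2k SATA spec)
#   - 11 surviving 2 TB drives must be read end to end to rebuild the 12th

URE_RATE = 1e-14          # unrecoverable read errors per bit read (assumed)
DRIVE_BYTES = 2e12        # 2 TB drive
SURVIVING_DRIVES = 11     # 12-drive RAID 5 minus the failed member

bits_to_read = SURVIVING_DRIVES * DRIVE_BYTES * 8
p_no_error = (1 - URE_RATE) ** bits_to_read
print(f"Chance of at least one URE during rebuild: {1 - p_no_error:.1%}")

# Rebuild exposure window, assuming a sustained rebuild rate while the
# array keeps serving I/O (assumed rate, not measured).
rebuild_mb_per_sec = 30
hours = DRIVE_BYTES / (rebuild_mb_per_sec * 1e6) / 3600
print(f"Rebuild time at {rebuild_mb_per_sec} MB/s: {hours:.0f} hours")
```

With 11 surviving 2TB drives to read end to end, the odds of hitting at least one unrecoverable read error before the rebuild finishes come out uncomfortably high, which is exactly the exposure RAID 6's second parity stripe is there to cover.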

Syano
Jul 13, 2005

FISHMANPET posted:

I'm about ready to tear my hair out with DFS. I'm trying to create the first Namespace, and I get the error "RPC server not available." I have no idea what RPC server it's trying to contact, I can't find any failures in any logs, and googling hasn't come up with any useful solutions. I did this on another domain and it worked just fine, so I don't know what's different here.

It's querying for a DC, and the one it is attempting to connect to either does not exist any longer or is not reachable. Make sure all your DCs are online. Make sure you do not have a DC still listed in AD which has been taken offline without proper demotion. If you do, use ntdsutil to get rid of all references to it.
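
If it helps narrow down which DC is the problem, here is a minimal Python sketch that just checks whether each DC in a list answers on the RPC endpoint mapper and LDAP ports. The hostnames are placeholders for your own DCs, and an open port obviously doesn't prove the directory itself is healthy:

```python
# Minimal sketch: check whether each domain controller answers on the RPC
# endpoint mapper (135) and LDAP (389) ports. Hostnames are placeholders.
import socket

DCS = ["dc1.example.local", "dc2.example.local"]   # hypothetical DC names
PORTS = {135: "RPC endpoint mapper", 389: "LDAP"}

for dc in DCS:
    for port, label in PORTS.items():
        try:
            with socket.create_connection((dc, port), timeout=3):
                print(f"{dc} {label} ({port}): reachable")
        except OSError as exc:
            print(f"{dc} {label} ({port}): FAILED ({exc})")
```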

Syano
Jul 13, 2005
I am sorry, this is a tiny bit off subject and probably better suited to the backup thread, but I think it will get better eyes here. 99% of my environment is backed up using Veeam, which we aren't entirely happy with, but that's a different story. What I am asking about is my file server cluster. I have a 2-node Windows cluster serving files. The 2 nodes are both VMs and the cluster resource is a LUN sitting out on the SAN. Veeam backs up the cluster nodes but of course does not back up the cluster resource LUN. I am trying to come up with some ideas for the best way to do backups of the cluster resource. Any suggestions from anyone doing anything similar?

Syano
Jul 13, 2005

Misogynist posted:

What are you doing with your Veeam backup repository?

The Veeam backup repository is direct attached storage on the backup server proper.

Syano
Jul 13, 2005
I've got a question for those of you who have messed around with HP LeftHand gear. I have to upgrade the switches I have this kit plugged into. What will happen if I just start unplugging from the old switches and then replug into the new? Am I going to end up with some sort of split-brain cluster? Or will it reconverge on its own?

Syano
Jul 13, 2005

Nomex posted:

Why even risk it? Just take 20 minutes and label all the cables before you start ripping. Check the switch configuration too. It doesn't take very long.

That's the easy part. What I am wondering is whether I am going to need to shut down the array or if it can survive the cluster being down for about 5 minutes.

Syano
Jul 13, 2005
This will be the same subnet, but what we are doing is upgrading to 10 gig. These are top-of-rack switches, so I am actually doing a one-to-one swap. Just about everything in these racks has storage on this array, so I know I am going to need to shut down the servers. The more I think about it, I guess I am going to have to power down the array as well to avoid any potential problems. I was just trying to avoid that. I am not sure why it struck me as scary.

Syano
Jul 13, 2005

Number19 posted:


Also, make sure your failover manager isn't stored on the LeftHand units.

Zinger! The FOM has been running on the array itself since we put it in. Now's as good a time as any to switch her up!

Syano
Jul 13, 2005
You're probably not going to have an import feature with onboard RAID. If you reinitialize, your data is gone. I think you're probably hosed.

Syano
Jul 13, 2005

Boogeyman posted:


Thanks to good indexing and table compression, I'm pretty sure that most of the reads are just hitting RAM on the server and not having to hit the SAN. This will change as we accumulate more data.



If this is an MS SQL server, you can verify this pretty trivially by looking at your buffer cache hit ratio.

Syano
Jul 13, 2005

Boogeyman posted:

I should have thought of that earlier (SQLServer:Buffer Manager - Buffer cache hit ratio, right?). I watched it for about five minutes and didn't see it drop below 99.99%.


Yeah dude, you're totally answering all your queries out of RAM.
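
If you'd rather pull that number straight from the engine instead of watching perfmon, here's a minimal Python sketch that reads it out of sys.dm_os_performance_counters via pyodbc. The connection string is a placeholder; pyodbc and a SQL Server ODBC driver are assumed to be installed:

```python
# Minimal sketch: read the buffer cache hit ratio from SQL Server's DMVs
# instead of perfmon. The ratio counter must be divided by its base counter.
import pyodbc

CONN_STR = (
    "DRIVER={ODBC Driver 17 for SQL Server};"
    "SERVER=sqlserver.example.local;Trusted_Connection=yes;"   # placeholder
)

QUERY = """
SELECT CAST(a.cntr_value AS float) / b.cntr_value * 100.0 AS hit_ratio_pct
FROM sys.dm_os_performance_counters AS a
JOIN sys.dm_os_performance_counters AS b
  ON a.object_name = b.object_name
WHERE a.counter_name = 'Buffer cache hit ratio'
  AND b.counter_name = 'Buffer cache hit ratio base';
"""

with pyodbc.connect(CONN_STR) as conn:
    ratio = conn.cursor().execute(QUERY).fetchone()[0]
    print(f"Buffer cache hit ratio: {ratio:.2f}%")
```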

Syano
Jul 13, 2005
I have a LUN presented via iSCSI from a LeftHand array that I have attached to a two-node file server cluster. At some point in the near future I need to migrate that LUN (or all the data on it) to an EqualLogic array. I have a few ideas on how I am going to go about it, but if anyone has done this and has an easy method I would sure love to hear it.

Syano
Jul 13, 2005
NTFS file shares.

Syano
Jul 13, 2005
That's where my logical thought keeps going. I was sort of hoping, though, that there was some magical tool I had not heard about yet that could replicate a LUN between two arrays from different vendors... and it's free... and it's easy to use... and uhh, it gives me 20 dollars when I double-click on it.
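
In case it's useful, here is a minimal sketch of the boring file-level approach: present the new EqualLogic LUN to a cluster node, format it NTFS, and mirror the shares across with robocopy during a maintenance window. The paths and log location below are placeholders:

```python
# Minimal sketch: mirror data from the old LeftHand-backed volume to the new
# EqualLogic-backed volume at the file level with robocopy. Paths are
# placeholders; run on a node that can see both volumes.
import subprocess

SOURCE = r"E:\Shares"        # volume on the old LeftHand LUN (assumed)
DEST = r"F:\Shares"          # volume on the new EqualLogic LUN (assumed)
LOG = r"C:\Temp\migration.log"

cmd = [
    "robocopy", SOURCE, DEST,
    "/MIR",           # mirror the tree (removes extras on the destination)
    "/COPYALL",       # data, attributes, timestamps, NTFS security, owner, audit
    "/R:1", "/W:1",   # don't spend forever retrying locked files
    f"/LOG:{LOG}", "/TEE",
]

# robocopy exit codes below 8 indicate success (possibly with skipped files)
result = subprocess.run(cmd)
print("OK" if result.returncode < 8 else f"Failed, exit code {result.returncode}")
```

The usual trick is to run a pass like this while the shares are still live, then a final pass during the cutover window so the delta to copy is small.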

Syano
Jul 13, 2005

Xenomorph posted:

I just watched some videos that showed a user making a virtual disk on the target system, and then that virtual disk gets mounted on the initiator. So there is still overhead (NTFS -> virtual disk -> ext4) when writing/reading.

Edit:
I'm looking at this now: QNAP TS-EC1279U-RP

Under $5,000 and we just fill it with WD RE4 drives. I'm just not too hot on the idea that the iSCSI part creates a virtual disk sitting on top of ext4.

Yeah, totally, tons of overhead. I personally still use drive cabinets with 68-pin connectors and LVD Ultra320 cables.

Anyway, you don't understand how iSCSI works. It doesn't 'create' anything; it is simply the protocol used to transfer the data. There is no more overhead than there is in accessing any file system.

Syano fucked around with this message at 02:30 on Sep 7, 2012

Syano
Jul 13, 2005
Our environment is small enough that I can still afford dedicated switches for our SAN. I have a buddy who is an admin over on the other side of town who constantly rides me about wasting ports by dedicating switches just to SAN traffic and how I should just run all the traffic through the same top-of-rack switch and VLAN from there. Who cares, I say; as long as I can afford it I am going to keep it simple and keep layer 3 off my SAN switches.

Syano
Jul 13, 2005

adorai posted:

. Dedicating a switch (and nics) to iscsi is a waste.

Who cares if it's a waste? My port count is so low I can spend a few extra dollars and literally never have to worry about these other issues mentioned here.

Syano
Jul 13, 2005

FISHMANPET posted:

The thing that's pissing me off about it right now is trying to share a single LUN to multiple machines. When I do this, it says I probably want to create a server cluster and share the LUN to that instead. Which makes sense. Except, as far as I can tell, there's no way to do that in CMC.

You may need to update CMC then. In all recent versions you right-click 'Servers' and then choose 'New Server Cluster...'. You can even add your new member servers from that screen. You are going to have to get that done before you can present a LUN to multiple initiators at once.

Syano
Jul 13, 2005
This is as good a place as any. Discuss away and I am sure people will chime in.

Syano
Jul 13, 2005
I am still trying to wrap my head around what Microsoft's strategy is with Server 2012, SMB 3.0, and scale-out file servers for applications. Everything starts out looking awesome. Storing SQL and Exchange databases on your file servers is a pretty neat option. Having your Hyper-V stores on an SMB share is pretty awesome too. So you move from that to thinking about high availability at the file server level and you start reading about scale-out file servers. At this point things start looking fantastic: unified storage for my MS shop on an active-active file server cluster. Then it hits you... you still have to have shared storage for all this to work.

So is Microsoft's strategy for all of this for me to build it out with the file server cluster acting as the filer and all the back-end storage still being done via third-party iSCSI or Fibre Channel kit? Or heck, shared SAS shelves? And if that is the strategy, why would I not just cut out the middleman and connect my Hyper-V, SQL, and Exchange application services directly to the iSCSI targets? I think I am missing something here.

Syano
Jul 13, 2005

Corvettefisher posted:


Sounds like you drank a bit too much of the MS kool aid.

Huh? How the crap do you get drinking the MS Kool-Aid out of me asking if someone can shed some light on the MS storage strategy for Server 2012?

Syano
Jul 13, 2005

cheese-cube posted:


One thing to note is that their new "feature" which makes this possible, namely SMB 3.0 "Multichannel", only works when all your servers/clients are running Windows 8/Server 2012 which sort of kills the deal.


This is par for the course with Microsoft. That being said, it starts to become a neat solution when you begin putting all the pieces together... that is, until you address the shared storage for the cluster. I could see this as a wicked solution if you could take the local storage of your servers and pool it together akin to what the VSA does, but if you still have to have iSCSI or Fibre Channel targets on the back end then I sort of don't see the point. Maybe this is just a natural progression of things... i.e. Microsoft puts out Server 2012 with SMB 3.0 support, and the idea is that third parties a la NetApp/EMC/etc. pick up and implement SMB 3.0 support soon, and that's their idea of the end-to-end solution.

Syano
Jul 13, 2005
Scale Computing does something sort of like what you guys are talking about, I believe. I have no idea how it runs or what technologies it is built on or anything. I just had someone tell me about it once and I looked at the web page once. Here it is if you want to read more... I think I will when I get a free moment: http://www.scalecomputing.com/products/hc3/features-and-benefits

Syano
Jul 13, 2005

Moey posted:

That's pretty strange that the Equallogic line cannot expand with just shelves. So every time you want to grow your storage, you are pretty much buying an entire new SAN?

Even Dell's lesser MD32X0 line supports add in shelves with the MD12X0 DAS units.


That's actually the advantage of an EqualLogic kit though (at least according to them). Each unit you add gives you more of everything: more IOPS, more throughput, more storage, and more redundancy. LeftHand kits from HP work this way too, though not nearly as well.

Check out Scale Computing HC3. They are doing exactly what we are talking about. They call it hyperconvergence. I have no clue what their hypervisor is though.

EDIT: I did some reading on the Scale HC3 solution last night. It looks neat in theory. You buy a cluster of nodes that serves both your compute needs and your storage needs. If you need to expand, you just buy another node and it adds to everything: more storage, more memory, more compute power, more network capacity. If you need just storage, you can buy storage-only nodes. Of course they aren't very forthcoming on their site about the technology under the hood they use to make this happen. Still, if it works it would be sort of neat.

Syano fucked around with this message at 14:30 on Nov 9, 2012
