Nomex
Jul 17, 2002

Flame retarded.
It sounds like a lot of work for something that isn't going to offer tremendous performance.

Nomex
Jul 17, 2002

Flame retarded.

Nukelear v.2 posted:

Tape is awesome, however don't actually buy Dell autoloaders. Both PowerVaults we've owned have died in only a couple of years, and when they were working they changed tapes slowly and made horrible grinding noises. We've replaced ours with HP G2 loaders and I have zero complaints about them. Still quite cheap as well.

Edit: Small sample I know but we opened them up and the mechanical build quality is on par with consumer ink jets. I can't imagine them actually surviving anywhere.

It really doesn't matter who you're buying your autoloaders from. Chances are they're either manufactured by Overland, Quantum or Storagetek. Dell, IBM, HP and a ton of other major brands just rebadge. :eng101:

Also, tape sucks. Disk to disk backup is where it's at.

Nomex
Jul 17, 2002

Flame retarded.

zapateria posted:

EVA4400 with 60x400GB 10k FC disks here. What kind of vRaid should we use for:

Random vmware application servers
Exchange database files
Smaller Oracle/MSSQL (~2GB express editions) servers
Fileservers

We're having some performance issues, where creating a new VM (Win 2008 R2) takes ~25 minutes (the "expanding" part during the initial install), and we see Write Latency going to 1-4k ms.

We're mostly using vRaid5 except for the bigger database servers. Should we convert most VMFS disks to vRaid1?

Has anyone here gotten a performance analysis from HP on a EVA system? Was it worth the money, or did you just get obvious results like "add more disks, use faster disks"?

VMDK files and Exchange should be on vRaid1; file servers can be on vRaid5, and Oracle/MSSQL would depend on the size and performance requirements of the database. Unless you're dealing directly with HP engineering to solve stability issues, HP support is worthless as gently caress. I was a partner doing OEM support for HP, and calling their internal support lines was never anything more than a waste of time.

I should also ask, how many VMs are you running on this EVA? Are the VM hardware, drivers and critical updates all up to date? Have you used EVAperf to gather any performance metrics? 60 disks really may not be enough, although if you have everything as vRaid5 that'll definitely be causing some bottlenecks.

Nomex fucked around with this message at 20:11 on Jul 29, 2011

Nomex
Jul 17, 2002

Flame retarded.
The limits are so high with 64 bit aggregates that you will never reach them in a 20x0 array.

Nomex
Jul 17, 2002

Flame retarded.

Martytoof posted:

This may be a terrible question to ask among people who discuss actual SAN hardware, but if I want to get my feet wet with iSCSI ESXi datastores, what would be the best way to go about this on a whitebox as far as the operating system is concerned. I'm looking at something like OpenFiler. This would literally just be a small proof of concept test so I'm not terribly concerned about ongoing performance right now.

You may wish to evaluate NFS as well as iSCSI. NFS and VMware play very well together. If you're deduplicating your VMware environments, iSCSI and FC won't report correct datastore usage, as VMware has no way of seeing anything but the raw space. NFS will, because it's just a network attached drive. (This was in VMware 4.1; if it's changed in 5, someone please correct me.) You can also mount way larger volumes with NFS: you're limited to 2TB datastores with iSCSI, but NFS is limited only by the maximum volume size on your storage array. You are limited to 255 VMDKs per datastore though. Also, NFS is way better at handling locking. You can get (and I have gotten) into a situation where a VMDK becomes locked, and the only way to clear it is to bounce the VM host. With NFS you can simply delete the .lck file and off you go.
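
To give an idea of how trivial that is, here's a rough sketch in Python. The mount point and VM name are made up, and you'd only ever do this after confirming no host still has the VM registered or running:

code:

import glob
import os

# Hypothetical NFS datastore mount and VM directory -- adjust for your setup.
DATASTORE = "/mnt/nfs_datastore01"
VM_NAME = "web01"

def clear_stale_locks(datastore: str, vm_name: str) -> None:
    """Delete leftover .lck-* files for a VM on an NFS datastore.

    Only safe once you're certain no ESXi host still has the VM powered
    on or registered -- otherwise you'd be removing a live lock.
    """
    vm_dir = os.path.join(datastore, vm_name)
    for lock in glob.glob(os.path.join(vm_dir, ".lck-*")):
        print(f"removing stale lock {lock}")
        os.remove(lock)

if __name__ == "__main__":
    clear_stale_locks(DATASTORE, VM_NAME)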

Beelzebubba9 posted:

Is this actually a bad idea? I understand I will never see the performance or reliability of our EMC CLARiiONs, but that’s not the point. The total cost of the unit I’ve spec’d out is well under the annual support costs of a single one of the EMC units (with a lot more storage), so I think it’s worth testing. Or should I just get a few QNAPs?

Further to what madsushi wrote, if something catastrophic happens to your home-built storage solution, the blame will probably be entirely on you. When you use a major vendor, it might not be your neck on the line.

Nomex fucked around with this message at 18:28 on Jan 9, 2012

Nomex
Jul 17, 2002

Flame retarded.

marketingman posted:

I've only deployed iSCSI offloading within two supercomputer-level installations, and there it was carefully planned for, with the correct hardware tested and purchased. Anywhere else it's a complete waste of administrative time and complexity; forget it.

Just out of curiosity, why did you use iSCSI at the supercomputer level, rather than FC or FCOE?

Nomex fucked around with this message at 17:32 on Jan 21, 2012

Nomex
Jul 17, 2002

Flame retarded.

marketingman posted:

So let me stop you right there, because in your scenario there is no performance hit from dedupe or thin provisioning on a NetApp filer.

This is from a few pages ago, but there is a performance hit with dedupe. You can't run realloc on a deduped volume, so after a while you start to get an uneven distribution of data across your disks, which leads to a loss in performance. Results may vary.

By the by, for anyone running a NetApp on a release older than 8.1, you should run the reallocate command on each volume every time you add disks to your aggregate.
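
If you've got a long list of volumes to hit, something like this gets it done in one shot. The hostname and volume names are placeholders, and the 7-mode reallocate flags are from memory, so check them against your ONTAP docs before running anything:

code:

import paramiko

FILER = "filer01.example.com"  # placeholder filer hostname
USER = "root"
VOLUMES = ["vol_vmware", "vol_exchange", "vol_cifs"]  # volumes on the grown aggregate

def run(ssh: paramiko.SSHClient, cmd: str) -> str:
    """Run one command on the filer and return its output."""
    _, stdout, _ = ssh.exec_command(cmd)
    return stdout.read().decode()

def reallocate_volumes() -> None:
    # Assumes key-based SSH auth is already set up to the filer.
    ssh = paramiko.SSHClient()
    ssh.set_missing_host_key_policy(paramiko.AutoAddPolicy())
    ssh.connect(FILER, username=USER)
    try:
        for vol in VOLUMES:
            # Kick off a one-time full reallocation pass on each volume.
            print(run(ssh, f"reallocate start -f /vol/{vol}"))
        # Then keep an eye on progress.
        print(run(ssh, "reallocate status"))
    finally:
        ssh.close()

if __name__ == "__main__":
    reallocate_volumes()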

Nomex
Jul 17, 2002

Flame retarded.

fiddledeedum posted:

It's a good gig.

We stood it all up in the space of 2-3 months because we followed the KISS principle. Do be warned that there is a well-known issue with the 62xx series that causes panics. Thankfully, we've had no service-impacting outages due to these panics, but it was a bit disconcerting to spend high 7 figures and have filers panic. That said, NetApp support has been fantastic. I realize the $$$ probably influenced the level of support we have, but I wholeheartedly recommend them.

A well known issue you say? Please tell me. I just finished setting up 6 new 62xx HA pairs at work, so I'll want to avoid that bug.

Nomex
Jul 17, 2002

Flame retarded.
If you lost 120 drives because of a single bad card or cable, you're doing it wrong.

Nomex
Jul 17, 2002

Flame retarded.
Why not just tell me what the issue is?

The_Groove posted:

Not really. I guess I forgot to say that they were all manually failed in order to power down a pair of enclosures to swap I/O modules and cables to try and find out what actually went bad. But yeah, if one path from one controller going down causes drive failures, there are bigger problems.

Still, failing 120 drives for any reason shouldn't happen. If NetApp came back to me and told me I had to offline a bunch of drives to find a problem, I would laugh in their faces. There should be no scenario where you need to lose all connectivity to the disks; that's why everything is redundant.

Nomex fucked around with this message at 16:33 on Feb 20, 2012

Nomex
Jul 17, 2002

Flame retarded.

marketingman posted:

Because a bug that serious is covered by NDA, distributed only to partners and not for customer consumption.


Fair enough. Thanks for the tip though.

Nomex
Jul 17, 2002

Flame retarded.

This article is pretty good, but on top of those points, SAS drives can be formatted with 520-byte sectors while SATA drives use 512. The extra 8 bytes per sector hold the SCSI checksum/integrity data. When you put a SATA drive behind a SAS shelf, the array has to set aside extra sectors every so often to store that checksum data, which costs the drive additional I/O, so a nearline SAS drive will have slightly better performance for that reason as well.
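
To put rough numbers on it: assuming 8 bytes of checksum per 512-byte block, and assuming the array packs those checksums into one spare sector for every eight data sectors on a 512-byte SATA drive (that packing ratio is my assumption, not a published figure), the capacity hit looks like this:

code:

# 520-byte native sectors: the 8 checksum bytes ride along in the same sector.
SECTOR = 512
CHECKSUM = 8

sas_overhead = CHECKSUM / (SECTOR + CHECKSUM)   # ~1.5% of the formatted sector

# 512-byte SATA sectors: assume 1 checksum sector packed per 8 data sectors.
sata_overhead = 1 / 9                           # ~11%, plus the extra I/O to
                                                # read/write those checksum sectors

print(f"520B-sector drive checksum overhead: {sas_overhead:.1%}")
print(f"512B-sector drive checksum overhead: {sata_overhead:.1%}")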

Nomex
Jul 17, 2002

Flame retarded.

luminalflux posted:

This might be more of a networking question... I'm seeing a huge amount of dropped packets on the switch access ports my P4000's are connected to. The autoneg and speed are set correctly from what I can see. Flow control is enabled on those ports, but not on the trunk link between the two switches they're connected to.

Will bad things happen if I turn flow-control on the trunk? Will this alleviate my dropped packets from the SAN?

If the ports are trunked, make sure the switch and array are both configured correctly for the trunk. We had a ton of dropped packets on one array because the port channel on one of our switches wasn't configured and it caused the port to flap.

Nomex
Jul 17, 2002

Flame retarded.

I wrote this post as a response to Alctel, until I realized his array wasn't an N series, but I think someone might find it helpful, so I'm leaving it. Here are some recommendations for configuring VMware on NetApp appliances:

First of all, use NFS. LUN snapshots on NetApp products are poo poo: you have to mount the snapshot as a LUN to pull stuff out of it, whereas with NFS you can just copy and paste files back. vSphere also doesn't report the size of deduplicated volumes on Fibre Channel properly; it only shows how much un-deduplicated space is used in the datastore. With NFS it's just a network drive, so it reports whatever the array tells it. There's a bunch more reasons, but this post is getting long already.

For sizing, you'll obviously want to thin provision the VMware volume, but put only the OS partitions in there. Take all your servers, figure out how big their OS volumes are, then add maybe 10 or 20%, depending on how much data you have. Keep in mind you'll need to save ~20% of the array space for snapshots. Keep all your data drives as either network shares or raw device mappings. The reason is that with only OS volumes in the VMware volume you'll get an awesome dedupe ratio, and for the data you don't want to slow down access by slipping VMFS between the server and the storage. Using RDMs also makes sure all your array features will work properly.
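
As a worked example of that sizing math (the OS volume sizes and the 20% figures below are just illustrative):

code:

# Example OS-disk sizes (GB) for the VMs going into the thin-provisioned datastore.
os_volumes_gb = [40, 40, 60, 80, 40, 100, 60, 40]

GROWTH_HEADROOM = 0.20   # the "add maybe 10 or 20%" from above
SNAP_RESERVE = 0.20      # ~20% of array space held back for snapshots

os_total = sum(os_volumes_gb)
datastore_gb = os_total * (1 + GROWTH_HEADROOM)
array_gb = datastore_gb / (1 - SNAP_RESERVE)

print(f"OS data:                            {os_total:.0f} GB")
print(f"Datastore size (+20% growth):       {datastore_gb:.0f} GB")
print(f"Array space incl. 20% snap reserve: {array_gb:.0f} GB")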

Nomex fucked around with this message at 00:14 on Mar 16, 2012

Nomex
Jul 17, 2002

Flame retarded.

madsushi posted:

Inflating your dedupe ratio by stacking only OS drives into one volume is bad for your overall dedupe amount. You get the BEST dedupe results (total number of GBs saved) by stacking as MUCH data into a single volume as possible. The ideal design would be a single, huge volume with all of your data in it with dedupe on.



You get the best results with dedupe by stacking a lot of similar data together. If you have a ton of random data, your dedupe ratio will be crap. If you stack 100 VMs, you're going to get a huge savings, as they all have nearly identical files.

quote:

Also, re: slowing down by slipping VMFS in the middle, this is wrong, because there is no VMFS on an NFS share. You're better off using iSCSI with SnapDrive to your NetApp LUNs, rather than doing RDM.

This is how you mount an RDM. You're right that NFS shares aren't affected, that's why I mentioned using either network shares or RDMs. Whether you use FCP or iSCSI to mount a data store, you have to format it with VMFS. If you connect directly to the LUN (as you mentioned), you're using an RDM.

Nomex fucked around with this message at 07:50 on Mar 16, 2012

Nomex
Jul 17, 2002

Flame retarded.

NippleFloss posted:

Why do you care about your dedupe ratio? If you have a volume with 100GB of data that you dedupe down to 10G that's a great ratio and all, but if you have a volume with 1T of data that dedupes down to 500G you're saving a hell of a lot more space even if the ratio isn't nearly as good. Barring mitigating factors, dedupe works better the more data you include in the volume, whether that data is similar or not. All of your similar data in the volume will still dedupe just as well, but a lot of that dissimilar data will also see some savings given that the dedupe is block level and even incredibly dissimilar data sets can share common blocks.

Regarding RDMs, I'm wondering why you're using them at all. You've just talked about the wonders of NFS from a manageability perspective, so what's stopping you from using VMDKs on NFS volumes instead of RDMs for data drives? You made it seem as if that wasn't an option when it is by far the most common thing in VMware deployments on NFS. If block level access from a guest is required and a VMDK won't cut it (MSCS or some Snapmanager products) then you can always mount an iSCSI LUN from within the guest OS and avoid RDMs entirely.

RDMs are pretty kludgy and basically only exist because VMware needed to provide some way for clustered services to function in a virtualized environment. No reason to put data on them that doesn't specifically require it.

You're right. I made a mistake. We're currently running FC datastores with RDMs, but we're switching to NFS. With FC it makes sense, not so much with NFS.

Nomex
Jul 17, 2002

Flame retarded.

adorai posted:

do you people who use NFS for your VMware datastores over iSCSI have an equivalent to round robin to aggregate bandwidth more effectively?

You don't use NFS over iSCSI. NFS is mounted like a network drive. As long as you have LACP enabled on both the vif on the filer and on the switch it'll balance the links pretty well.

Nomex
Jul 17, 2002

Flame retarded.

FISHMANPET posted:

I need to attach a pile of disks (goal is around 20TB) to a Solaris server so ZFS can manage them. Where can I get either a server that will hold ~12 drives + a system drive with a dumb pass through controller, or a drive enclosure that I can pass through the disks into an 1068E controller or something equivalent?

Why not just install ZFS on the storage server itself and pass the storage to your other Solaris server through iSCSI? Your options are pretty much limitless then.

Nomex
Jul 17, 2002

Flame retarded.
Beaten.

Nomex fucked around with this message at 21:07 on Mar 22, 2012

Nomex
Jul 17, 2002

Flame retarded.

Bitch Stewie posted:

Price will be the kicker. I was reading the link provided by NippleFloss, and from the pricing I had on NetApp when we were looking prior to our current P4000, it's just way too expensive for us to justify.

There are things I hate about our P4000 (mostly every time I have to deal with HP), but when you get down to the nitty gritty of being able to buy $6k's worth of SAN licenses and a pair of $2k servers (and of course switches and the trivial matter of fibre between locations), et voila, you can have a storage cluster across two sites for $10k. Bottom line, nothing's going to come close, is it? I don't even know why I keep looking.

If you can automatically fail over your primary site using HP then there's no reason to switch. If you have to manually intervene to bring up your DR site, then the cost of that downtime might make the savings on the storage seem trivial.

Nomex
Jul 17, 2002

Flame retarded.
You should see if leasing the equipment fits into your price range. Then you can toss the gear every 3 years and get new stuff, and all it ever costs is the same monthly fee. The best thing about the VSAs is you can install both the VSA and some of your servers on the same box.

Nomex
Jul 17, 2002

Flame retarded.
We're migrating one array to another, and we have to keep hundreds of volumes snapmirrored while we do it. This is why the 255 snapshot limit sucks. We also can't keep close to 255 volumes snapmirrored at once, because every now and then a volume takes too long to replicate, and you get multiple snapshots running concurrently. Right now I'd kill for 10k snapshots.

quote:

Snapshots are free, but the management overhead from snapshot growth isn't. When a user says "I lost a file about a month ago, can you recover it from a snapshot" if you don't have a tool that provides a searchable catalog of which changes are contained in which snapshots (and this isn't a common thing) then you're going to spend a lot of time digging through the hundreds of hourly snapshots around that time-frame.

This isn't exactly true, at least for Netapp. LUNs aren't searchable, but you can simply do a file search through any CIFS or NFS mounted volume's snapshot folder.
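
Something as dumb as this will walk every snapshot on a mounted volume and tell you which ones still have the file (the mount point, snapshot directory name and filename below are all placeholders):

code:

import os

MOUNT = "/mnt/homes"                         # hypothetical NFS mount of the volume
SNAPDIR = os.path.join(MOUNT, ".snapshot")   # shows up as "~snapshot" over CIFS
TARGET = "budget_2012.xlsx"                  # the file the user lost

def find_in_snapshots(snapdir: str, filename: str):
    """Yield the path of every snapshot copy of `filename`."""
    for snapshot in sorted(os.listdir(snapdir)):
        for root, _dirs, files in os.walk(os.path.join(snapdir, snapshot)):
            if filename in files:
                yield os.path.join(root, filename)

for hit in find_in_snapshots(SNAPDIR, TARGET):
    print(hit)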

Nomex fucked around with this message at 05:42 on Mar 28, 2012

Nomex
Jul 17, 2002

Flame retarded.

the spyder posted:

Does anyone have experience with 1+ PB storage systems? We have a project that may require up to 7.5TB of data collection per day.

I've got a couple systems at this scale. Can you elaborate on what kind of data it is, how it's collected, and how it'll be accessed for reads? If it's not something you can discuss in the open thread you can msg me.

Nomex
Jul 17, 2002

Flame retarded.

marketingman posted:

So who was complaining about the 255 snapshot limit on DataONTAP :smug:

That was me, and it's still biting me. We're snapmirroring 12 old filers to 6 new ones, and we have some volumes that are taking forever. Even if it's fixed in 8.1 or later, it's not going to matter to me, because we won't upgrade to 8.1 for another year at least. It's best to stick with the proven stable versions of ONTAP.

Nomex
Jul 17, 2002

Flame retarded.

NippleFloss posted:

The stability differences between major versions of OnTAP aren't, well, major. Because the code base is still fairly similar if a bug pops up and is fixed in the 8.0 code tree then it will also be fixed in the 8.1 code tree. So with 8.1 you're getting the benefit of most of the fixes that were included in 8.0.2.

They also cleaned up some code and pared down the code base (no more java, for instance) so 8.1 is actually a more stable baseline than early 8.0 releases are.

My recommendation with any new version of DOT is to wait until it has a few patch releases. By then it's generally pretty stable. That goes for major and minor versions. 8.1 should have its first patch release sometime in early June and likely at least one patch release monthly after that. By August you should have a very stable version with the patched releases.

Anecdotally, I've heard pretty good things about 8.1 stability even in the RC code, and the group I'm a part of has something like 300 controllers running 8.1 variants.

Marketingman -

I don't have any direct experience with Trace3, but they're a NetApp star partner and a couple of their customers are used as references for FlexPod. Sounds like a good company, from what little I know.

At this time there's really no benefit to us in upgrading. We have all our filers on stable releases, and there are no new features in 8.1 that we really need aside from the snapshots, and we really only need those now, during migrations. Stability is important above all else, so why rock the boat?

Nomex
Jul 17, 2002

Flame retarded.

lol internet. posted:

I'm a bit of a SAN noob but these are most likely easy questions for you guys.

1. I have two physical Qlogic HBAs on an IBM blade. I would like to connect them both to one LUN via iSCSI on a NetApp SAN. How would I set up MPIO? Do I just configure both HBAs to point at the LUN and MPIO will figure itself out? Or is there third-party software I would install on the OS?

2. For NetApp, can someone explain initiator groups to me? From my understanding, you need an initiator group mapped to a LUN so it will be available as a target. It also stops other nodes from accessing the LUN unless their IQN is put inside the initiator group.

3. What are the pros/cons/best practices for volume sizes, LUNs per volume, LUN sizes, etc.?

4. Pros/cons between hardware and software initiators? I assume hardware has better performance overall.

5. I mapped a LUN through a physical HBA initiator, but when I boot up into Windows and check the network connections, the IP address of the network adapter is 169.x.x.x. Is this right?

Any references or websites which are helpful in making me understand the concepts and real-life usage of SANs would be awesome. I understand SANs are used everywhere, but I'd like to know why X scenario would be better than Y scenario using ____ method.

Thanks!

Your questions got covered pretty well, but here's another thing: Try to use NFS for everything you can. Netapp arrays work way better with NFS than with iSCSI or FC.

Nomex
Jul 17, 2002

Flame retarded.

Powdered Toast Man posted:

So...how hosed are we?


...we really did need more space. I expressed concerns about whether it was advisable to make this move without extensive testing. I was laughed off.

So, he did it anyway. Our entire file storage (except for databases and some other stuff like Exchange) got moved over the weekend to a Reldata appliance. This included shared network folders used by many people, as well as every user profile in the entire company (about 4,000 employees). To my great lack of surprise, the NTFS permissions on all of those folders and files (millions of them) essentially got put through a wood chipper/meat grinder/Blendtec blender/insert appropriate metaphor of destruction here. No one can get into their stuff. We can't fix it, because we don't have permissions to modify the folders. Admittedly I don't know much about how the appliance works but I'm guessing that it has its own filesystem and provides NTFS emulation of some sort. Poking around ACLs on folders I noticed "NODE-C\Administrators" which seems mighty suspicious to me. They're on the phone with Starboard right now trying to unfuck this.

We are highly reliant on centralized user profiles (everyone's path is \\fileserver\profiles\username) because the vast majority of our users are Citrix users, which means NONE OF THEIR loving APPS WORK. This has been going on for days and it still isn't fixed. I want to die.

Can you restore the data to its original location with a snapshot? What happened to the original data? Did he bother to do a full backup before the move?

If you have to restore the permissions manually, you'll need to weigh security vs. access. When you get control of the files back, you can do a blanket Domain Users read-all so at least people can get back to work, then start setting the real permissions manually. We had a guy take ownership of about 2k directories and it literally took months to get it sorted.
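
The blanket-read step is basically one icacls call; wrapped in Python it looks roughly like this (the UNC path and group are placeholders, and you'd want it sanity-checked before pointing it at millions of files):

code:

import subprocess

SHARE_ROOT = r"\\fileserver\profiles"   # placeholder UNC path
GROUP = r"CORP\Domain Users"            # placeholder domain group

def grant_blanket_read(path: str, group: str) -> None:
    """Grant inheritable read access to a group across an entire tree.

    Stop-gap only -- it gets users working again while the real ACLs are
    rebuilt by hand. /T recurses, /C keeps going past errors.
    """
    subprocess.run(
        ["icacls", path, "/grant", f"{group}:(OI)(CI)R", "/T", "/C"],
        check=True,
    )

if __name__ == "__main__":
    grant_blanket_read(SHARE_ROOT, GROUP)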

Nomex fucked around with this message at 22:52 on Jun 19, 2012

Nomex
Jul 17, 2002

Flame retarded.

Powdered Toast Man posted:

Additional details have emerged:

- I'm not sure what model Reldata this is. It's one of the ones based on Supermicro chassis with vertically mounted drives.

- It has come to light that apparently the particular appliance we have will not support the user load they are putting on it. I have no idea why they didn't figure this out BEFORE the migration.

- I'm not sure if it's being used in iSCSI target mode or NAS mode but I speculate it is the latter.

- The reason why I think that is ACLs aren't working properly. Even when permissions appear to be set correctly on a particular folder, it doesn't work the way it should. Specifically it seems to have issues with individual users, although groups tend to work ok.

Essentially at this point it appears that it is in fact impossible for them to fix it because it won't even do what they're trying to make it do. They just figured this out today, and the migration happened on 6/15. The best part? We have no other options. We don't have any other hardware we can move it to and we can't roll back either because the person responsible is too proud to admit his mistake or because he did it in such a way that rolling back is now impossible.

He actually tried to blame the helpdesk (my department) by saying that robocopy and the ACLs didn't work right because we left our computers on and had files open. Uh...that's not how file locking works. The most we would have had open is a folder window, not any actual files, and we certainly weren't locking anyone's user profile files. If I had something else to fall back on I would resign without notice tomorrow, because this is ridiculous. One person brought a $700 million company to a grinding halt.

I wouldn't say it was one guy who brought the company to a halt. Sure, he may have been the trigger, but it sounds like your company has a glaring lack of project and change management. A move like that should've been validated, tested and rolled out in segments, with controls along the way to make sure poo poo worked. If that guy is going to remain employed, you need to distance yourself from that company before your career gets jeopardized by stupidity beyond your control.

Nomex
Jul 17, 2002

Flame retarded.
We're migrating Exchange from 2k3 to 2k10. They decided to move 2k3 to the new NetApp array before doing the upgrade. 336 15k disks in the old Exchange storage, 72 in the new. What could possibly go wrong?

Nomex
Jul 17, 2002

Flame retarded.
I go by 180 IOPS for a 15k disk. We were able to get about 10,300 out of them before they maxed out. Suffice it to say, 72 drives were woefully inadequate. We had to salvage 6 DS14 15k disk shelves from one of our old filers to get things running smoother. It's still sub-optimal, but they've started moving mailboxes to 2010 now, so things are getting better every day.
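
The napkin math, using that 180 IOPS rule of thumb:

code:

IOPS_PER_15K_DISK = 180      # rule of thumb from the post above

old_spindles = 336
new_spindles = 72
measured_peak = 10_300       # what the new disks actually delivered before maxing out

print(f"Old Exchange storage, estimated: {old_spindles * IOPS_PER_15K_DISK:,} IOPS")
print(f"New array, estimated:            {new_spindles * IOPS_PER_15K_DISK:,} IOPS")
print(f"New array, measured:             {measured_peak:,} IOPS")
print(f"Spindle count drop:              {1 - new_spindles / old_spindles:.0%}")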

Nomex
Jul 17, 2002

Flame retarded.
Sorry guys, I can't go into too many specifics. It's a pretty large environment though.

Does anyone have any opinions on MetroCluster? We're thinking of trying it out, but I don't know anyone who's used it before. We'll be running the heads about 9 km apart over a dedicated DWDM channel.

Nomex
Jul 17, 2002

Flame retarded.
Does anyone know if there's an HDS simulator available? I'm going for a Netapp/HDS interview and I don't really have much experience with HDS outside of the old HP XP arrays.

Nomex
Jul 17, 2002

Flame retarded.

adorai posted:

Building hundreds of snapmirror relationships so that I can migrate my data to a new NetApp sucks. What sucks worse is that our offsite NetApp is a 2050, so after we cut over, it will be a race to upgrade our 3140 to ONTAP 8, reverse the snapmirrors, and drive it to our DR site.

Get a big list of the targets and hosts, then slap them into Excel. Build the first command line around the source and destination columns, then copy/paste until you have all the command lines to create the relationships. Copy/paste the contents of the sheet into Notepad, replace all the tabs with spaces, then paste the whole thing at once into a PuTTY session to the filer. Presto! Hundreds of relationships. If you want to do one better, set up a Linux administration box. You can really do some sweet stuff with NetApp from the command line.
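
If you'd rather skip the Excel step, a few lines of Python will spit out the whole command list from a CSV. The column names are whatever you choose, and the snapmirror initialize syntax is 7-mode from memory, so double-check it against your ONTAP version before pasting anything:

code:

import csv

# relationships.csv, one row per volume:
#   src_filer,src_vol,dst_filer,dst_vol
#   oldfiler1,vol_cifs01,newfiler1,vol_cifs01

def build_commands(csv_path: str):
    """Yield one snapmirror initialize command per CSV row."""
    with open(csv_path, newline="") as f:
        for row in csv.DictReader(f):
            yield (
                f"snapmirror initialize -S "
                f"{row['src_filer']}:{row['src_vol']} "
                f"{row['dst_filer']}:{row['dst_vol']}"
            )

if __name__ == "__main__":
    # Paste the printed lines into a console session on the destination filer.
    for cmd in build_commands("relationships.csv"):
        print(cmd)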

Nomex fucked around with this message at 05:56 on Jul 26, 2012

Nomex
Jul 17, 2002

Flame retarded.

Misogynist posted:

2.5" nearline/midline is a really weird price/performance ratio. I've actually never run into anyone interested in it before. The whole appeal I've seen with 2.5" is that you can jam a crapload more fast spindles into a smaller space without needing to make the huge cash outlays for SSD.

It's not midline, but we've started buying all our SAS disk as 2.5" 10k instead of 3.5" 15k. A 24 x 600GB 15k 3.5" shelf uses around 600W of power, while a 24 x 600GB 10k 2.5" shelf uses about 300W. We're also in a physical space crunch, and we can get 90% of the performance from a shelf of 2.5" 10k disks that takes up half the rack space.
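
Back-of-the-envelope, the 2.5" shelf comes out well ahead per watt and per rack unit. The 90% figure is my own estimate from above, and the 4U vs 2U shelf heights are an assumption based on the half-the-rack-space comparison:

code:

shelves = {
    "3.5in 15k shelf": {"watts": 600, "rel_perf": 1.0, "rack_u": 4},
    "2.5in 10k shelf": {"watts": 300, "rel_perf": 0.9, "rack_u": 2},
}

for name, s in shelves.items():
    perf_per_kw = s["rel_perf"] / s["watts"] * 1000
    perf_per_u = s["rel_perf"] / s["rack_u"]
    print(f"{name}: {perf_per_kw:.1f} relative perf per kW, {perf_per_u:.2f} per U")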

Nomex
Jul 17, 2002

Flame retarded.

NippleFloss posted:

Don't use system manager to do anything, ever, and you'll be happier for it. This is just one in a string of stupid bugs starting with 2.0 that almost all revolve around it occasionally just trashing config files.

CLI is the only proper way to manage a filer.

Stop lying, 2.0 could cause unprompted data loss without trashing the configs ;)

Nomex
Jul 17, 2002

Flame retarded.

Syano posted:

I've got a question for those of you who have messed around with HP Lefthand gear. I have to upgrade the switches I have this kit plugged in to. What will happen if I just start unplugging from the old switches and then replug into the new? Am I going to end up with some sort of split brain cluster? Or will it reconverge on its own?

Why even risk it? Just take 20 minutes and label all the cables before you start ripping. Check the switch configuration too. It doesn't take very long.

Nomex
Jul 17, 2002

Flame retarded.

EoRaptor posted:

It's budget getting time, so I'm looking for recommendations:

Needed:
12 to 20tb of usable storage
Network accessible via SMB/CIFS and AFPv3
Integration with AD/Kerberos (authentication, security, etc)
Rack mountable
Snapshots with Volume Shadow Copy integration

Nice to have:
NFS
Support for Backup Exec 2010, (NDMP?, other protocol/agent?)
Low management needs (set and forget)
Alerting via email


This will be used for network shared drives, My Documents redirection, raw photo and video storage, and other bulk storage, for an office of about 50 people who produce magazines, videos, etc.

I'm mostly wondering what is out there and how much it costs. I'm hoping for something in the 10k range, but if I have valid reasons to spend more, I can ask for it. I'd prefer an integrated solution with support, but I can roll my own if need be.

I have an Equallogic for my SAN, and it's very nice, but I think the price and feature set is more than I need to spend here (unless someone can show otherwise?)

Thanks.

You should check out the NetApp FAS2240 series. You can get them with 12 or 24 drives to start, in 1-3 TB SATA capacities, and they support CIFS, NFS, iSCSI and FC, as well as AD integration and NDMP.

Nomex
Jul 17, 2002

Flame retarded.
http://www.netapp.com/us/company/news/news-rel-20120821-746791.html

I hope I still have time to get a small demo unit squeezed into next year's budget.
I wonder if you can stack a bunch of small Fusion-io cards together to get that 2TB. I could use a few million extra IOPS.

Nomex
Jul 17, 2002

Flame retarded.

18 Character Limit posted:

I've seen five Duos together in one server chassis before.

I know the server will take it, but will the software support it? Hey, any Netapp engineers in here?

Nomex
Jul 17, 2002

Flame retarded.
I use 4/8Gb FC for a ton of stuff. It's more due to the fact that someone up high hates iSCSI, and we just got Nexus switches, so FCoE is going to be new. Forget about the future-proof cabling argument, because you can run 10GbE over the same fiber as well. Honestly, if I could I would convert our entire environment to FCoE and eliminate our Brocade infrastructure. With Fibre Channel passthrough on the Nexus gear there's really no speed penalty with FCoE on 10GbE vs. FC on 8Gb Fibre Channel.



To the post above me: Nexus gear may be expensive, but so are fabric switches. I think our last cost for Brocade licensing was about $1300/port.

Nomex fucked around with this message at 21:15 on Oct 18, 2012
