madsushi
Apr 19, 2009

#essereFerrari


complex posted:

Anyone have any thoughts on NetApp's new offerings? The FAS6200, but in particular ONTAP 8.0.1. I'm thinking of going to 8 just for the larger aggregates.

Data ONTAP 8.0.1 also brings DataMotion, which lets you move volumes between aggregates without downtime. The catch is that you can't move a volume from a 32-bit aggregate to a 64-bit aggregate, or vice versa. Compression might also be nice for shrinking user shares, but I haven't had a chance to see it in action yet to know how much it actually helps. Finally, 8.0.1 introduces VAAI, which brings a ton of improvements for VMWare via iSCSI on NetApp, notably much faster storage vMotion.
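For reference, kicking off a DataMotion move is basically a one-liner from the CLI. Rough sketch from memory (7-mode syntax; the volume and aggregate names are made up, so check the man page before trusting me):

code:
filer> vol move start vol_sql aggr2
filer> vol move status
The cutover is supposed to be transparent to hosts, as long as the source and destination aggregates are the same bit-ness.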


madsushi
Apr 19, 2009

#essereFerrari


conntrack posted:

Did they get SMB2 back in? When 8 was released there was a lot of grumbling about that.

SMB2 is in 8.0.1, but not SMB 2.1, which I believe Windows 7 can take advantage of.
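If memory serves, SMB2 is off by default even on releases that have it, and you have to flip an option to enable it (hedging here; verify against your release notes):

code:
filer> options cifs.smb2.enable on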

madsushi
Apr 19, 2009

#essereFerrari


SnapManager is like keeping 255 spare engines, transmissions, and alternators in your trunk, except they're magic parts that weigh almost nothing. If one of these parts breaks, you can simply swap in one of your many spares at a moment's notice.

Without SnapManager, if your engine breaks, you have to take it to a costly repair shop, wait a very long time, and you don't end up with exactly the same engine you had before.

madsushi
Apr 19, 2009

#essereFerrari


ghostinmyshell posted:

Anyone know the dedupe limits for NetApp's ONTAP 8.0.1?

My access to the NOW site is non-existent :smith:

If the filer supports 8.0.1, then the volume size limit for dedupe is 16 TB, regardless of controller.

madsushi
Apr 19, 2009

#essereFerrari


Depending on the model of NetApp, dedupe might be limited to 2TB volumes anyway, especially if it's not a model that supports ONTAP 8.0.1.

madsushi
Apr 19, 2009

#essereFerrari


Cavepimp posted:

Thanks, that's pretty much what I was gathering. I had the pleasure of having to fire up their retarded little management tool to figure out why a single user couldn't contact the thing at all, only to discover the OS pre-dated the DST changes and it had lost time sync with AD. That would have sucked if everyone else wasn't coasting on cached credentials.

I don't get it. We're a pretty small shop (<50 users) and were probably about 25-30 when they bought that thing for ~$13k and then turned it into a file server with no backup or even snapshots? The things I'm finding here are just...odd.

At least I report to a VP and they listen when I tell them we need to buy something.

If you're familiar with NetApp, you can say "gently caress the StoreVault Manager" and just navigate to http://filername/na_admin and manage it like a regular NetApp. It's still Data ONTAP on the backend, and you can do just about anything with it. It's actually quite a bit better once you start setting things up on your own, especially since you can do things the "right" way; the Manager is pretty awful.

Two caveats: the StoreVault can't authenticate CIFS users off of a Windows 2008 DC (so there has to be a 2003 DC in the environment), and it's never ever going to get a code update.

madsushi
Apr 19, 2009

#essereFerrari


Nomex posted:

I just inherited an environment where they're about to get a FAS6210. One of the workloads will be 8 SQL servers, each needing a contiguous 4TB volume. I'll need 32TB worth of volumes total. I'm wondering what the best practice would be for carving up my aggregates. Should I just make 1 large aggregate per FAS or would it be better to split them into smaller ones? This was my first week working with Netapp, so I'm not sure what would be recommended.

With RAID-DP, you always want the biggest aggregates/raid groups you can get, since every new raid set costs you drives. Every aggregate means a new raid set, which means 2 disks lost to the dual-parity drives. Ideally you'll split the drives evenly between your controllers and make the biggest aggregates you can, maximizing your raid group size to minimize lost disks. More disks in an aggregate = more spindles your data is spread across = better performance.

Assuming you get ONTAP 8.0.1 on the FAS (which I'm 99% sure you will; I think it's the only supported ONTAP for the 62xx series), you can make 64-bit aggregates, so you can toss as many disks as you want into a single aggregate (per controller).
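The 64-bit part is just a flag at creation time. Something like this (syntax from memory; the raid group size and disk count are made-up examples, size yours to your shelves):

code:
filer> aggr create aggr1 -B 64 -r 18 36
filer> aggr status -r aggr1
-B 64 gets you the 64-bit format, -r sets the raid group size, and aggr status -r lets you sanity-check the resulting raid group layout.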


madsushi
Apr 19, 2009

#essereFerrari


Nomex posted:

Would you put multiple workloads in 1 aggregate to maximize the amount of disks globally? Should I only have 2 aggregates total? If so, that makes my life a lot easier.

Yep, two big aggregates, one for each controller.

madsushi
Apr 19, 2009

#essereFerrari


optikalus posted:

Quite a few people just rely on snapshots (many manufacturers allow snaps to be on separate disk shelves than the filer, so if you lose a shelf, you still have your snaps and can restore from that).

NetApp lets you use either SnapMirror (complete replication) or SnapVault (snapshot archiving) to put all of your data on a secondary filer, usually filled with SATA disks.
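Both are quick to stand up from the secondary side. A rough sketch (7-mode syntax from memory; hostnames and volumes invented, licenses assumed in place):

code:
dr-filer> snapmirror initialize -S prod-filer:vol_cifs dr-filer:vol_cifs_mirror
dr-filer> snapvault start -S prod-filer:/vol/vol_cifs/users /vol/sv_cifs/users
SnapMirror works at the volume level; SnapVault pulls individual qtrees and keeps its own (longer) snapshot retention on the secondary.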

madsushi
Apr 19, 2009

#essereFerrari


You will need to leave 3 disks attached to the partner, as it needs its own aggregate/root volume to run, and that requires at least 3 disks.

madsushi
Apr 19, 2009

#essereFerrari


Mierdaan posted:

Seeing the Unisphere video made me realize how bad I have it; are there any good management tools for NetApp? We're still on 7.3.3 unfortunately, since our 2020 won't run 8.x code.

I like NetApp's System Manager; their 2.0 product (in open beta) is actually pretty sharp.

madsushi
Apr 19, 2009

#essereFerrari


Mierdaan posted:

Oh; I tried that in 1.1 and it was terrible, I'll give it another shot!

They rebuilt it from the ground up in 2.0. It now runs in your browser (works in IE, and in Chrome if you point the "Browser" .exe path at the right file) and is much snappier. Really my only gripe is that it's still missing the SnapVault configuration stuff.

madsushi
Apr 19, 2009

#essereFerrari


Stoo posted:

Anyone know if there are any sneaky ways around HP's lack of official support for installing their HD-status monitoring utilities on DL180 servers running ESXi? We have some great DIY SANs thanks to their VSA, but obviously the VSAs only see their virtual disks thanks to the virtualization layer, and there's no easy way to remotely check for failed HDs. Anyone know of another way to poll the Smart Array controller?

Get the ESXi ISO from HP that contains the CIM drivers; it will push individual HD info up to VMWare.

https://h20392.www2.hp.com/portal/swdepot/displayProductInfo.do?productNumber=HPVM09

madsushi
Apr 19, 2009

#essereFerrari


Let me know if any of you storage goons are going to be at NetApp Insight this week, we can share a McRib in the McDonalds of the MGM Grand.

madsushi
Apr 19, 2009

#essereFerrari


Vanilla posted:

The more cache the better when it comes to arrays.

Seconding this. Caching technology is robust and universally deployed; every storage vendor knows how powerful and important caching is to delivering storage. If you're spending the $$$ on SSDs, you should put them where they'll see the most benefit, which is the cache.

madsushi
Apr 19, 2009

#essereFerrari


Some NetApp info:

Anything in the FAS2xxx series will be too small; they cap out around 150TB. You'll want to look at the 3xxx series (your deployment is probably too small to justify a 6xxx), and you NEED FlashCache for this type of VDI deployment (make sure they include it). My last XD-on-NetApp deployment sees 97-99% of read requests served by FlashCache (a few hundred users).
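If you want to see how much your FlashCache is actually doing, there's a stats preset for it. From memory (verify the option and preset names on your release):

code:
filer> options flexscale.enable on       <- should already be on if the card is installed
filer> stats show -p flexscale-access    <- live hit/miss rates for the cache
That flexscale-access output is where my 97-99% figure comes from.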

Just as a note, MCS is relatively untested at scale and only a fraction of deployments use it. I only view it as a quick and dirty option for setting up storage for pilots/POCs.

PVS (as mentioned) is very thoroughly tested and is used in most deployments. You might read some mumbo jumbo about NetApp's RAID-DP being bad for PVS because PVS deployments tend to send mostly writes to the SAN (the PVS server caches the reads), but the truth is that NetApp's WAFL tech makes it the best choice for writes no matter what RAID level you pick.

There is also a 3rd option, using the NetApp/VMWare Rapid Deployment Utility (RDU) plugin, which I believe is considered to be NetApp/Citrix "best practices" at the moment. I have not deployed this solution because all of my XD deployments were tricked into using XenServer by Citrix. Luckily the RDU for XenServer should be out next year...

NetApp's SRM integration is very tight. VMWare running on NetApp NFS is great, and NetApp's Virtual Storage Console plugin for vCenter is by far the best VMWare/storage management tool out of the bunch.

madsushi
Apr 19, 2009

#essereFerrari


evil_bunnY posted:

Nothing wrong with Netapp but the price really, from what I've seen so far. What kind of software features did you grab?

The new 2040 pricing is very low, probably the lowest quote you'll get from any enterprise SAN vendor. EMC really doesn't compete in that space (~$10-20k) very often.

madsushi
Apr 19, 2009

#essereFerrari


At the end of the day, there are plenty of ways to "trick" the Compellent. Some storage usage profiles match up nicely with its Data Progression design and will see great performance; some usage profiles simply aren't compatible.

"What about the database that only blasts yesterday's records at 3AM and then never touches them again??"

In my opinion, the right way to size storage is to figure out your IOPS needs and then give yourself enough spindles to handle those IOPS (while also considering your space requirements). Any storage vendor that tries to sneak around basic spinning-disk IOPS requirements is going to run into caveats where their system doesn't work. NetApp loves SATA-heavy deployments with FlashCache, Compellent uses Data Progression, and I know EMC has some SSD-based options.
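Back-of-the-envelope version of that sizing, with made-up numbers and very rough rules of thumb (WAFL actually softens the write penalty on NetApp, so treat this as pessimistic):

code:
Workload: 4,000 IOPS at 70% read / 30% write, on 15k disks (~180 IOPS each)
Back-end IOPS ~= reads + 2x writes = (4000 * 0.7) + (4000 * 0.3 * 2) = 5,200
Spindles ~= 5,200 / 180 = ~29, so ~30 data disks before parity and spares
If the vendor's quote can't survive that math without leaning on cache/tiering tricks, that's your red flag.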

As long as you size your Compellent properly to handle your load WITHOUT all of the tricks, it will run fine. The problems start when you buy into their marketing-speak of "just buy a couple 15ks and then all SATA and everything will work out magically".

madsushi
Apr 19, 2009

#essereFerrari


evil_bunnY posted:

I was told they removed the volume limits on the 20x0s, is that true?

On ONTAP 8 and up, volume limits are essentially gone, and you can dedupe a volume up to 16TB (up from 1-4TB previously, depending on model). You CAN'T put 8.x on a 2020 or 2050, only on the 2040 and the newer 2240. So if you bought a NetApp 2xxx more than a year or two ago (likely a 2020/2050), you're still stuck.

madsushi
Apr 19, 2009

#essereFerrari


Beelzebubba9 posted:

Is this actually a bad idea? I understand I will never see the performance or reliability of our EMC CLARiiONs, but that's not the point. The total cost of the unit I've spec'd out is well under the annual support costs of a single one of the EMC units (with a lot more storage), so I think it's worth testing. Or should I just get a few QNAPs?

There's nothing wrong with building your own SAN; it definitely can and does work. Your biggest issue with a homemade solution is going to be support/knowledge. If you build these things yourself, how many people at your organization know how to fix them? Too often one IT guy gets assigned to "try building a SAN" and then he's the only one who knows the software and tech well enough to fix/maintain it later. If you're going to roll your own, make sure you keep your coworkers informed and educated so that they can troubleshoot an EMC or a homemade box equally well. Otherwise you end up spending all of your time in the weeds trying to keep these things running.

madsushi
Apr 19, 2009

#essereFerrari


FlyingZygote posted:

At this point, I don't believe I'm going to use compression, dedupe, or thin provision because the disk savings is not worth the performance hit.


Specific questions:
  • NFS or iSCSI?
  • Which is easier to install and manage?
  • Which unit would you choose, and why?

First, go NetApp, you will be able to get more help here. :)

Second, skip compression, but USE DEDUPE. You actually get better performance out of your NetApp with dedupe turned on, since the cache is dedupe-aware and you can fit more blocks into it (in addition to the space savings).
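Turning it on is trivial, too. Roughly this (7-mode syntax from memory; the volume name is invented):

code:
filer> sis on /vol/vol_vmware
filer> sis start -s /vol/vol_vmware    <- -s scans existing data, not just new writes
filer> df -S vol_vmware                <- shows your space savings
The nightly dedupe pass is scheduled per-volume (sis config) if you want to shove it outside business hours.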

1) NFS, definitely. Even VMWare is recommending NFS now. All of the VAAI tricks are just attempts to get iSCSI to where NFS already is. Here is the KEY REASON for NFS on a NetApp: when dedupe frees up space, you can use that free space for more VMs. If you go with a LUN-based iSCSI setup, all of your dedupe savings are wasted, since the hosts can only see the LUN and aren't aware of the free space. Hosts connected with NFS see the whole volume, so they can take advantage of your deduped space savings.

2) NetApp, the software is the best. System Manager (now ONCommand System Manager) is great.

3) NetApp, because it's easy, and they have the best software tools. The Virtual Storage Console (VSC) is a plugin for vCenter that hooks your NetApp in. Here's what it lets you do:

  • Backups / Recovery (via snapshots, very slick)
  • Set host networking/storage best practices (it audits your hosts, recommends fixes, and lets you push one button to bring settings like timeouts and packet sizes up to best practices)
  • Provision storage from the VMWare console without going anywhere else (including best practices like auto-enabling dedupe, turning off automatic snapshots, setting NFS permissions, and adding the NFS datastore to ALL your hosts at once)
  • Monitor space usage

madsushi
Apr 19, 2009

#essereFerrari


Re: CIFS on or off the SAN, one big reason is that your NetApp or VNX isn't going to give you the advanced share/file reporting stats that Windows will give you if you run your storage through a Windows server. I like the idea of making a big LUN, deduping it, and then presenting that LUN to Windows and letting it serve out the data. Granted, most customers choose to just toss CIFS on the NetApp and forget about it, but the share-reporting features of Windows are one thing to consider.

madsushi
Apr 19, 2009

#essereFerrari


LUN masking is hard, guys.

madsushi
Apr 19, 2009

#essereFerrari


Nomex posted:

This is from a few pages ago, but there is a performance hit with dedupe. You can't run realloc on a deduped volume, so after a while you start to get an uneven distribution of data across your disks, which leads to a loss in performance. Results may vary.

By the by, for anyone running Netapp with less than 8.1 firmware, you should run the realloc command on each volume every time you add disks to your aggregate.

Uhh...

You can run reallocate on a deduped volume, and you can run reallocate on a volume with snapshots; you just have to throw the right flags at the reallocate command.

Dedupe has two areas where it can lower performance: during your nightly dedupe window (which can take seconds to minutes depending on your daily rate of change), and on new writes, which can take slightly longer. Dedupe can also improve performance, by letting you fit more unique blocks in your cache.
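For the record, the incantation I mean is something like this (from memory; check the reallocate man page, and measure before you defrag):

code:
filer> reallocate measure /vol/vol_db         <- check the layout ratio first
filer> reallocate start -f -p /vol/vol_db     <- -f = full one-time pass
If I'm remembering right, -p (physical reallocation) is the flag that keeps a snapshotted/deduped volume from ballooning while it runs.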

madsushi
Apr 19, 2009

#essereFerrari


NippleFloss posted:

There is a third area where ASIS dedupe *CAN* lower performance, but it's harder to quantify. In a heavy sequential read environment dedupe will artificially fragment otherwise sequential groups of blocks if those blocks are shared across files. In this case a sequential read of a file turns into a random read on disk and will perform slower. The caveat is that dedupe aware caching will often come into play in those same workloads, so the two effects can cancel each other to a degree.

I still generally recommend avoiding enabling dedupe on heavily sequential workloads where performance is important. Database transaction logs would be one obvious one as slow trans log performance can affect the speed of database interactivity, and generally the benefit of deduping transaction logs is small. Another workload would be something like streaming video or audio where you will likely be reading back large, contiguous segments of files.

For most workloads, though, dedupe has no noticeable performance impact and the space savings can be significant.

I'm not really sure that argument makes sense on a NetApp, though: thanks to WAFL, there's really no such thing as "sequential" on disk, since the data can be anywhere.

madsushi
Apr 19, 2009

#essereFerrari


marketingman posted:

As to why VMDKs on NFS datastores aren't supported by SnapManager for Exchange yet, I don't know; last word I had was that it was just taking longer to get out because it's the same guys coding both products.

I never understood this. Why would you want your databases in VMDKs? If you're using a NetApp, you have SnapDrive installed, which literally makes it take about five seconds to make a volume, provision a LUN, and connect it to your VM. Why add an extra layer of abstraction for nothing? You're already using a SAN, so it's not as if the LUNs won't be available when you vMotion your VM. Plus, by having your Exchange/SQL databases in your VMWare volumes, you're killing dedupe and making your snapshots way larger than they need to be. And if you're putting Exchange/SQL on separate NFS volumes in VMDKs... why not just make those LUNs?
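For context, this is roughly what SnapDrive is automating on the filer side (sketch only; the names, sizes, and iqn are all invented, and SnapDrive also handles the Windows-side connect/format):

code:
filer> vol create vol_sqldb aggr1 500g
filer> qtree create /vol/vol_sqldb/q1
filer> lun create -s 400g -t windows_2008 /vol/vol_sqldb/q1/sqldb.lun
filer> igroup create -i -t windows ig_sql01 iqn.1991-05.com.microsoft:sql01
filer> lun map /vol/vol_sqldb/q1/sqldb.lun ig_sql01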

madsushi
Apr 19, 2009

#essereFerrari


Misogynist posted:

Storage vMotion is a pretty big one if you're working with small datasets (small enough to not max out a 2 TB LUN, anyway) and space-constrained storage.

Thanks. Storage vMotion makes sense; you can do some vol/LUN moving on NetApp, but not to the same degree as Storage vMotion. I'd normally ask why you're moving your Exchange/SQL databases around, but I can see the value there.

e: has anyone suggested using "sMotion" to describe Storage vMotion?

madsushi
Apr 19, 2009

#essereFerrari


feld posted:



How much did that cost?!

More importantly, are you hiring??? I would love to just hold my body against that rack.

madsushi
Apr 19, 2009

#essereFerrari


FlyingZygote posted:

The numbers were fudged a bit to make them easier to look at.

What I'm actually getting for MaxThroughput-100%Read:
code:
Hosts               IOPS    MBPS
1 host              3479    108
Total from 2 hosts  2833    88

You might be right about the cache. The datastore I'm hitting is NFS configured for thin provisioning. I should probably set up a datastore that is not thin provisioned/deduped.

But but but... why would you run the test again if you're seeing good performance, regardless of whether you're hitting the cache? Also, Iometer typically uses random data, so it shouldn't be coming from the cache anyway.

madsushi
Apr 19, 2009

#essereFerrari


Nomex posted:

You'll want to keep all your data drives as either network shares or raw device mappings. The reason you want to do this is because with only OS volumes in the VMware volume, you'll get an awesome dedupe ratio. For the data, you don't want to slow down access by slipping VMFS between the server and the storage. Using RDMs also makes sure all your array features will work properly.

I disagree with this.

Inflating your dedupe ratio by stacking only OS drives into one volume is bad for your overall dedupe savings. You get the BEST dedupe results (total number of GBs saved) by stacking as MUCH data into a single volume as possible. The ideal design would be a single, huge volume with all of your data in it and dedupe on.

Also, re: slowing down access by slipping VMFS in the middle: that's wrong here, because there is no VMFS on an NFS share. And you're better off using iSCSI with SnapDrive to your NetApp LUNs than doing RDMs.

madsushi
Apr 19, 2009

#essereFerrari


Whenever I'm talking to a client about NetApp, I like to say there are only 3 types of data: CIFS/SMB/files, VMWare/virtualization, and databases.

In the perfect NetApp design, you have one big volume containing all of your organization's aggregated CIFS/SMB file shares; dedupe is saving you tons of space, and your snapshots are seamlessly integrated into Windows' "Previous Versions" tab. NetApp's automatic scheduled snapshots handle backups.
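The CIFS side of that design is really just a snapshot schedule plus a share. A sketch (7-mode syntax from memory; the schedule and names are examples, not gospel):

code:
filer> snap sched vol_cifs 0 2 6@8,12,16,20   <- 0 weeklies, 2 nightlies, 6 hourlies
filer> cifs shares -add users /vol/vol_cifs/users
filer> options cifs.show_snapshot on          <- lets users browse ~snapshot directly
Those hourly/nightly snapshots are exactly what shows up under Previous Versions.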

You store all of your VMWare data, save for a small, thin-provisioned vSwap volume, in one big volume. Dedupe saves you space on the OS drives and on any applications that are commonly installed (AV, etc). You mount this volume via NFS, and so any VMs you create simply get tossed into that NFS volume. You use the NetApp Virtual Storage Console (VSC) to set best-practices settings on your VMWare hosts, to provision/connect the storage, and to take snapshots on the NetApp regularly (and obviously for recovery).
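If you ever do that export by hand instead of through the VSC, it's one line (host names invented):

code:
filer> exportfs -p rw=esx01:esx02,root=esx01:esx02 /vol/vol_vmware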

Finally, you put all of your databases into individual LUNs in individual qtrees in individual volumes: one for the database files, one for the logs, and one for the SnapInfo data (depending on your config). This goes for Exchange, SQL, Oracle, etc., any of the database platforms. You connect these LUNs directly to the guest OS via SnapDrive/iSCSI, and you manage the configuration and backups via SnapManager. Dedupe doesn't necessarily need to be turned on for these volumes, and you definitely want ONLY the databases/log files in them. These are your high-usage volumes; you want them to perform.

Database example:
/vol/ExchangeDB/qtree/exchangedb.lun
/vol/ExchangeLogs/qtree/exchangelogs.lun
/vol/ExchangeSnapInfo/qtree/exchangesnapinfo.lun


Now, you have a very consistent best-practices NetApp setup with very easy backup/recovery options.

Catch: as Nomex correctly mentioned above, if you're running your filer on Fibre Channel, you can't pass LUNs to your guest OS without using RDMs. Luckily, most of the NetApp installations I work on are network-based (iSCSI/NFS).


madsushi
Apr 19, 2009

#essereFerrari


adorai posted:

do you people who use NFS for your VMware datastores over iSCSI have an equivalent to round robin to aggregate bandwidth more effectively?

:smug: 10 gig :rice:

In all seriousness, I set up my NetApp with 2 IP addresses and then add the datastores to vSphere 5 using a DNS name that points to both IPs. The vSphere NFS client will use both, which results in multiple hashes, which results in multiple links being utilized. Before vSphere 5, I would either live with 1 gig as my storage ceiling, or split my data into two datastores and map each one to a different IP address.
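The filer side of the two-IP trick is just an alias on the storage interface (or VIF), plus two A records behind one DNS name. Sketch, with invented addresses:

code:
filer> ifconfig e0a alias 10.10.10.11 netmask 255.255.255.0
Then point netapp-nfs.yourdomain at both 10.10.10.10 and 10.10.10.11 and mount the datastores by that name.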

madsushi
Apr 19, 2009

#essereFerrari


Here's how I break it down, personally:

How responsible do I want to be for this data?

If you're OK with being responsible for the SAN and application compatibility and application backups and replication and all of that, then your best value is rolling your own. You can use ZFS or OpenFiler or OpenIndiana or whatever suits you. But at the end of the day, YOU'RE responsible for maintaining spare gear, for ensuring firmware/hardware compatibility, etc. None of the big vendors can compete when hardware is the only line item: NetApp/EMC/etc can't put a full shelf of SSDs and a TB of RAM into a box for less than you can. If you know your poo poo, you can do this, but you need a dedicated SAN person, because it's not going to be user-friendly. If you have the staff for that, good luck.

If you're OK with not being responsible for your data and you're also OK with the possibility of losing everything because you went with a tier-2 SAN provider, then the newer SAN startups are for you. You might get great support and response time when things are easy, but do they have the depth of experience when things get tough? Or will they sell their business to Dell and let everything go to complete poo poo? You're rolling the dice with an untested/unproven SAN vendor, and you better hope that they keep their promises.

If you AREN'T OK with being responsible AND you need your data to be safe, then the tier-1 SAN vendors start making sense. They're more expensive, but you don't have to worry about VMWare or Windows or Citrix or any of your apps not playing nicely. You know backups WILL work, you know replication WILL work, and you're not digging around rsync logs trying to get it working yourself. You're paying for the whole solution. I hate to say it, but "nobody ever got fired for buying IBM" (or Cisco) holds true here.

At home, I don't mind spending all weekend loving around with my array. At work, I definitely don't want there to be any fingers pointed at me if something goes up in smoke. I want to know that I made the most conservative possible choices when it comes to a company's lifeblood -- their data.


madsushi
Apr 19, 2009

#essereFerrari


Rolling your own SAN might become more viable after this release: ONTAP-V is finally coming out!

http://www.theregister.co.uk/2012/03/23/netapp_ontapv/

madsushi
Apr 19, 2009

#essereFerrari


adorai posted:

Sure it is. A lot of our servers have an M: drive, which is just a connected LUN used for mountpoints and random file storage. I'd like to snapshot this M: drive every day and never have to delete a snapshot, but I can only do that for 8-9 months. And there is no real reason the filesystem shouldn't support it, other than someone decided to use one type of integer in the code rather than a larger one. It's not a big enough deal to matter when it comes to purchasing, but it causes me a bit of frustration from time to time, and that does count for something.

The 255 limit sucks, but with SnapVault you can vault to another volume on the same (or a different) filer, and now you have what you want.
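The vault schedule is where the retention lives, and it sits on the secondary volume, not the primary. Rough sketch (syntax from memory; names invented):

code:
sv-filer> snapvault start -S prod:/vol/vol_m/q_m /vol/sv_m/q_m
sv-filer> snapvault snap sched -x sv_m sv_daily 250@mon-sun@0
Each vault volume still tops out at 255 snapshots, but nothing stops you from rolling to a fresh volume when you get there.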

madsushi
Apr 19, 2009

#essereFerrari


You will have 16TB of raw space (8 disks at 2TB each), but that is going to drop to 10-11TB once it's formatted and you subtract a single spare. If you're going active/active, that will be something like 5TB per controller, or you can pool it all together for one 10-11TB aggregate on your primary controller.

NippleFloss says a/p is less than ideal, but I actually prefer it for smaller deployments. By putting more disks in a single aggregate, you get better performance, since you're striping across more spindles, and you're definitely not going to cap out a single 2240's capabilities with 12x SATA disks. You get more flexibility, since all of the storage is available to you in one pool. It makes configuration way easier too, since you essentially act as if you have one controller and point everything at it, rather than trying to remember which services are hosted by which storage.

re: spares, since you have a NetApp support agreement anyway, I like keeping just a single spare for both heads. You can assign it to whichever head has the first failure, and the replacement disk will show up shortly to fill back in.
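Handing that shared spare to whichever head needs it is a one-liner, assuming software disk ownership (the disk name here is invented):

code:
filer1> disk show -n                  <- list unassigned disks
filer1> disk assign 0c.00.11 -o filer1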

madsushi
Apr 19, 2009

#essereFerrari


NippleFloss posted:

I don't have an issue with unbalanced configurations. I agree that it's often a lot more useful to give most of the disk to just one controller when you only have 12 disks. The problem is that it's still an active/active pair in that scenario. You still have to provide at minimum two disks to the "passive" controller to create an aggregate and those disks aren't contributing anything to your usable space if you treat that node as passive by just ignoring it.

On the subject of spares it's nice to have at least one on each controller so DOT always has a spare available to do disk pre-fails which saves a rebuild and lessens the time you're running degraded. But, as you said, in a disk limited scenario that isn't always viable.

Actually, it's a minimum of 3 disks for the passive controller, which is even worse. With RAID-DP, disks 1 and 2 are the parity/dual-parity disks, and you need disk 3 to actually store the data.

madsushi
Apr 19, 2009

#essereFerrari


NippleFloss posted:

A lot of people who do active/passive use RAID 4 rather than DP on the "passive" node. DP makes little sense when you only have a single data disk and it only holds configuration information for a node with no user data.

Doesn't RAID 4 require 3 disks? Or am I missing something?

madsushi
Apr 19, 2009

#essereFerrari


marketingman posted:

So who was complaining about the 255 snapshot limit on DataONTAP :smug:

Spill the beans!

re: the MS iSCSI initiator - yeah, it's awesome; anyone who uses iSCSI on Windows is using the built-in initiator.


madsushi
Apr 19, 2009

#essereFerrari


evil_bunnY posted:

Talk now or shut up forever.

It's got a terrible, terrible UI but it works well.

http://www.microsoft.com/en-us/download/details.aspx?id=18986

I guess I never see the built-in iSCSI UI because I'm spoiled with SnapDrive. :smug:

  • Reply