Vanilla
Feb 24, 2002

Hay guys what's going on in th

conntrack posted:

You mean getting the same effect as giving the applications dedicated spindles? :)

If by same effect you mean performance predictability then yes. I've seen it once already on the CLARiiON where Automated Tiering has been enabled and the application owner has seen some good performance improvements. For example, a run dropped from 50 minutes to 30 minutes because many key parts of the DB moved up to SSD.

However, the problem was the next day the run took 35 minutes, then 25 minutes the day after, then 28 minutes, and so it see-sawed for weeks. This is because some blocks get demoted and given to other apps.

They didn't like the see-sawing but wanted to keep the performance improvements - so they had the LUN locked in place at the current SSD/FC/SATA levels and got a generally consistent run time.


quote:

That depends on who you talk to, I personally share your view in this matter. A lot of people see it as a way to fire all those crusty storage guys though.

So generally people only give a select few important applications a portion of SSDs (because there are typically few SSDs in comparison to other drives, as they're so drat expensive right now). This means only a few guys see performance improvements... but if they're the critical apps then it's all good.

Most LUNs don't see too big an improvement because all that is happening is that their inactive data is being dumped lower. However, it's very rare they will see any kind of performance degradation.

I don't see it as a way to fire all the crusty storage guys (that's cloud :) ) but as a way to free up the time of the storage guys. We all have way too much storage - someone with 250TB and a growth rate of 30% has almost 1PB after 5 years. We can't give 500TB of storage the same love and attention we did when it was only 50TB so we have to let the array do it.

quote:

Why doesn't the VMAX virtualize external storage? Anyone?

EMC has never seen an array as a technology for virtualising other arrays. There are a lot of questions, issues and downsides to such an approach.

EMC has Invista, which is a joint approach with Cisco and Brocade. Basically the virtualisation was done at the fabric layer in the blades.

Quite a few customers use it but it never really went mainstream, due to a number of factors including other technologies that were coming up. By that I mean the main use case for Invista was data mobility without host downtime - this can now be done with PowerPath (host based), VMAX Federation (one VMAX to another without the host knowing) and so on. There are many ways to skin a cat.

The latest storage virtualisation technology is called VPLEX... although the storage virtualisation is just a sideshow - the main use is to provide active/active data centers.


Vanilla
Feb 24, 2002

Hay guys what's going on in th

conntrack posted:

The cost of "enterprise sata" sort of takes out the "save" part in "save money", so virtualizing midrange is looking better and better.

Edit: If the VMAX would get the external capability I would definitely look more into it.

It all depends on what you are trying to achieve.

30TB of SATA is a lot cheaper than 30TB of 450GB 15k on an enterprise array, even more so once you factor in fewer DAEs and less surrounding hardware.

If you had two configurations - one with 100TB of FC storage using 450GB 15k drives and the other with 100TB of SSD, FC and SATA with auto-tiering - the second would cost less and likely provide more IOPS.
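
As a very rough illustration of why (a back-of-envelope sketch in Python using assumed, era-typical round numbers - ~180 IOPS per 15k FC drive, ~80 per 7.2k SATA, a few thousand per enterprise SSD, and an arbitrary 5/30/65TB tier split - not vendor figures):

code:

# Back-of-envelope comparison: all-FC vs a tiered build-out to 100TB.
# Drive sizes and per-drive IOPS below are rough assumptions, not quotes.

def drives_needed(capacity_tb, drive_tb):
    """Raw drive count to reach a capacity, ignoring RAID/hot-spare overhead."""
    return int(-(-capacity_tb // drive_tb))  # ceiling division

# Config 1: 100TB entirely on 450GB 15k FC (~0.45TB, ~180 IOPS each)
fc_count = drives_needed(100, 0.45)
fc_iops = fc_count * 180

# Config 2: the same 100TB as an assumed tiered mix
ssd_count = drives_needed(5, 0.2)      # 5TB on 200GB SSDs, ~5,000 IOPS each
fc2_count = drives_needed(30, 0.45)    # 30TB on 450GB 15k FC
sata_count = drives_needed(65, 1.0)    # 65TB on 1TB 7.2k SATA, ~80 IOPS each
tiered_iops = ssd_count * 5000 + fc2_count * 180 + sata_count * 80

print(f"All-FC: {fc_count} drives, ~{fc_iops:,} IOPS")
print(f"Tiered: {ssd_count + fc2_count + sata_count} drives, ~{tiered_iops:,} IOPS")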

Consider the down sides to virtualisation through an enterprise array:
- Enterprise cache and FC ports are no longer just serving your tier 1 apps - they're also being given over to data from the virtualised arrays that's just passing through.
- What happens if a 1TB LUN is corrupt and needs to be restored? You want me to push that data through my enterprise array in the middle of the day?!
- Limited use cases - look at the support matrix; only a few things are actually recommended and supported for this kind of virtualisation (archive, backup, file)
- etc

So it all depends on what you are trying to do - why does the array need to be virtualised for you?

Vanilla
Feb 24, 2002

Hay guys what's going on in th

H110Hawk posted:

To give you an idea of the other end of the spectrum BlueArc tried to do this with their Data Migrator which would offload files from Fast -> Slow storage based on criteria you set. This happened at the file level, so if you had a bajillion files you wound up with a bajillion links to the slow storage. I'm not saying one way is better than the other, or one implementation is better than another, but there are extremes both ways with this sort of thing.

I for one would bet EMC has it less-wrong than BlueArc. Is their system designed for Exchange datastores? Is there a consideration in how you configure Exchange to deal with this?

File and block tiering are two different things.

EMC VNX can automatically tier file system LUNs just like normal block LUNs.

However, there is a tool called File Management Appliance. Based on policies (last accessed, size, last modified, etc) the appliance will scan the file system and move individual files from fast spindles to slow spindles.

Then it can move the files again to an archive, such as on Data Domain. It's not just an EMC tool - it can do the same for NetApp (well, the archive piece).
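
Conceptually, that kind of last-accessed policy is simple. A minimal Python sketch of the idea follows, with made-up tier paths and thresholds - it illustrates the policy concept, not the EMC appliance itself:

code:

# Minimal sketch: demote files by last-accessed age from a fast to a slow tier.
# Paths and the threshold are made up for illustration only.
import os
import shutil
import time

FAST_TIER = "/mnt/fast"   # hypothetical fast-spindle file system
SLOW_TIER = "/mnt/slow"   # hypothetical slow-spindle file system
AGE_DAYS = 180            # "not accessed in 6 months"

cutoff = time.time() - AGE_DAYS * 86400

for root, _dirs, files in os.walk(FAST_TIER):
    for name in files:
        src = os.path.join(root, name)
        if os.stat(src).st_atime < cutoff:   # last-accessed policy check
            dest_dir = os.path.join(SLOW_TIER, os.path.relpath(root, FAST_TIER))
            os.makedirs(dest_dir, exist_ok=True)
            shutil.move(src, os.path.join(dest_dir, name))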

Vanilla
Feb 24, 2002

Hay guys what's going on in th

Intraveinous posted:


Right now I've got budgetary quotes in hand from 3PAR and NTAP, and expect a Compellent one soon. Haven't talked to EMC yet. Anyone else I should be talking to? Am I making too big a deal out of the differences in management?

I think you are making too big a deal out of it. You'll be happy with any of those.

Slappy Pappy
Oct 15, 2003

Mighty, mighty eagle soaring free
Defender of our homes and liberty
Bravery, humility, and honesty...
Mighty, mighty eagle, rescue me!
Dinosaur Gum

Vanilla posted:

File and block tiering are two different things.

EMC VNX can automatically tier file system LUNs just like normal block LUNs.

However, there is a tool called File Management Appliance. Based on policies (last accessed, size, last modified, etc) the appliance will scan the file system and move individual files from fast spindles to slow spindles.

Then it can move the files again to an archive, such as on Data Domain. It's not just an EMC tool - it can do the same for NetApp (well, the archive piece).

EMC DiskXTender can do a lot of this, too. I understand that getting too granular with tiering could cause an unhealthy proliferation of pointers. For now, I think we'll be happy with FAST Cache but leave tiering alone until it's a little more mature and practical.

Serfer
Mar 10, 2003

The piss tape is real



Holy poo poo I almost had a heart attack this afternoon. One of our drives failed on a SAN in Oakland, and since my coworker (also in IT) was going over there, I asked him to replace the drive, drive zero (stressing it was drive zero). The drive had failed the day before, and was severely degrading performance. He goes over there, and immediately ejects drive 1. I promptly freak the gently caress out and yell at him for trying to destroy the RAID5 array. He responds "but I thought these drives were hot swappable!"

Thank god I had a hot spare running, or it would have been bye-bye 8TB of data.

paperchaseguy
Feb 21, 2002

THEY'RE GONNA SAY NO
haha, well maybe you should have a support contract and a CE to do disk swaps if your co-workers can't count to zero.

Serfer
Mar 10, 2003

The piss tape is real



paperchaseguy posted:

haha, well maybe you should have a support contract and a CE to do disk swaps if your co-workers can't count to zero.

Sad thing is, we do. I can get the part and replace it faster than they can get a CE there. Also we had just moved that office, and for EMC, changing addresses is a multi-day affair, and since I needed the drive replaced as soon as possible, I thought we could do it. I guess I thought wrong.

conntrack
Aug 8, 2003

by angerbeet

Vanilla posted:


So it all depends on what you are trying to do - why does the array need to be virtualised for you?

The idea is to be able to buy any (supported) array and put it behind the enterprise one.

What I get:
I can put a smaller FC/SAS tier inside the enterprise array.

I can buy a larger cache with the money saved by not buying all those FC raid groups. With cache boundaries/partitions I can give the SATA LUNs a small amount to minimise thrashing and still probably swallow bursty writes.

I can get quotes from all the players in FC storage for the mid-tier and less demanding storage and just connect those. The quote we got for 10TB of enterprise SATA was higher than buying a new 20TB midrange dual-controller array. After the initial outlay for the midrange box the savings just get bigger, as the controller cost gets spread over a larger amount of storage, while the enterprise SATA stays at the same expensive price no matter how much we buy.

Of course I will have to pay for more front-end ports and SAN ports, but unless we run into some queue depth issue I hope to be able to run several arrays on a shared set of 8Gb ports in the virtualizer.

The main point is being able to have multiple tiers, present them from a single box with a single management interface, and being able to play the vendors against each other. If we buy into the VMAX, for example, we are stuck with it. If we need cheaper storage we have to buy a forest of arrays, each with its own point of management if they come from several vendors.

I am tired of paying the GNP of Angola for low-access storage. I want the enterprise features I pay for on my tier 1 LUNs and to use them for tier 2 and 3 as well, but pay less for the privilege.

If I ever get time away from managing my forest of arrays I might be able to explore the options...

conntrack fucked around with this message at 14:44 on Mar 5, 2011

Vulture Culture
Jul 14, 2003

I was never enjoying it. I only eat it for the nutrients.

H110Hawk posted:

I for one would bet EMC has it less-wrong than BlueArc. Is their system designed for Exchange datastores? Is there a consideration in how you configure Exchange to deal with this?
Exchange 2010 is basically built to run on tier 3 storage to begin with (90% IOPS reduction from 2003 and most of what's left is sequential), so this may not be the best example application anymore.

H110Hawk
Dec 28, 2006

Misogynist posted:

Exchange 2010 is basically built to run on tier 3 storage to begin with (90% IOPS reduction from 2003 and most of what's left is sequential), so this may not be the best example application anymore.

Slick. I remain blessedly ignorant of how Exchange works. Do you have to tell it how to behave for 3-tier, or does it expect that to be handled behind the scenes by the block device?

Vanilla
Feb 24, 2002

Hay guys what's going on in th

H110Hawk posted:

Slick. I remain blessedly ignorant of how Exchange works. Do you have to tell it how to behave for 3-tier, or does it expect that to be handled behind the scenes by the block device?

It does it itself; it's come a really long way over the years. Like most apps you still have to size it appropriately.

Exchange 2003 = nightmare. The number 1 application I saw people dedicate spindles to - it hates latency and doesn't share nicely with other apps.

Exchange 2007 = better but still bad.

Exchange 2010 = really good. Just rolled out a 20,000 seat Exchange environment on 1TB SATA drives. Most 2010 environments are capacity bound and not disk performance bound, so using SATA drives really helps.
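
To put rough numbers on that (back-of-envelope only - the per-mailbox IOPS baseline is an assumed ballpark, with the ~90% reduction taken from the earlier post):

code:

# Back-of-envelope spindle maths for 20,000 Exchange seats.
# Per-mailbox IOPS are assumed ballpark figures, not Microsoft sizing numbers.
seats = 20000
iops_2003 = seats * 1.0       # assume ~1 IOPS per mailbox on Exchange 2003
iops_2010 = iops_2003 * 0.1   # ~90% reduction claimed for Exchange 2010

sata_iops = 75                # assumed 7,200rpm SATA drive, random IO
fc15k_iops = 180              # assumed 15k FC drive

print(f"2003: ~{iops_2003 / fc15k_iops:.0f} x 15k drives needed just for IOPS")
print(f"2010: ~{iops_2010 / sata_iops:.0f} x SATA drives cover the IOPS, "
      f"so capacity becomes the constraint")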

GrandMaster
Aug 15, 2004
laidback

Serfer posted:

He goes over there, and immediately ejects drive 1.

lol, don't failed drives normally have a red/amber light on them instead of green?


does anyone know of any utilities for merging EMC Navisphere Analyzer logs?
every time I want to look at perf stats of the array over the last week or so, I have to download all the logs and merge them two at a time with naviseccli, then merge the merged files, then merge those merged files until I'm left with one big log.

it's a massive, time-consuming pain in the rear end and I want something where I can just point it at the directory of logs and have it merge them all and spit one out for me. it could probably be scripted, but I'm not great with scripting.
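
For what it's worth, that pairwise merging loop is easy to script. A rough Python sketch is below - the exact naviseccli analyzer merge arguments are placeholders, so check them against your naviseccli documentation before relying on it:

code:

# Rough sketch: merge Navisphere Analyzer .nar files two at a time until one
# archive remains. MERGE_CMD and its flags are placeholders - substitute the
# exact "naviseccli analyzer" merge syntax from your own documentation.
import glob
import subprocess
import sys

MERGE_CMD = ["naviseccli", "analyzer", "-archivemerge"]  # placeholder flags

def merge_pair(a, b, out):
    # Assumed argument layout - adjust to match your naviseccli version.
    subprocess.run(MERGE_CMD + ["-data", a, b, "-out", out], check=True)

files = sorted(glob.glob(sys.argv[1] + "/*.nar"))
round_num = 0
while len(files) > 1:
    merged = []
    for i in range(0, len(files) - 1, 2):
        out = f"merged_{round_num}_{i}.nar"
        merge_pair(files[i], files[i + 1], out)
        merged.append(out)
    if len(files) % 2:       # odd file out carries over to the next round
        merged.append(files[-1])
    files = merged
    round_num += 1

print("Final merged archive:", files[0])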

Moey
Oct 22, 2010

I LIKE TO MOVE IT
So has/is anyone here working with a Qnap TS-809-RP?

My work recently got 4 of these (2 at each site), and I'm starting to wonder about the actual performance we are getting out of them. I didn't do much of the setup on them (that was done by my boss), so I never sat down and got any good baseline data from the start.

We were in the process of getting ESX backup software set up yesterday, and performance between the two boxes seemed extremely sub-par. I ran some tests with IOMeter, and I'm trying to figure out if the NAS is to blame or if something is misconfigured.

When running a Max Throughput test (50% read), it's showing read and write IOPS a little below 1,400. Both read and write speeds are between 40-43 MBps.

When running a Real Life test (60% random, 65% read), it's showing 105 read IOPS and 56 write IOPS on one NAS, and 70 read IOPS and 37 write IOPS on the second. Read throughput was between 0.5-0.8 MBps and write between 0.3-0.4 MBps.

Our arrays are running 8x 1TB SATA 7,200rpm disks in RAID 5.

Any input on this? Is this what I should expect from a low-end unit, or is something messed up?
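
For reference, a crude sanity check on what 8x 7,200rpm SATA in RAID 5 can deliver for random IO, assuming ~75 IOPS per disk and the usual RAID 5 write penalty of 4 (rules of thumb, not QNAP specs):

code:

# Crude expectation for 8x 7,200rpm SATA in RAID 5 under random IO.
# Per-disk IOPS and the RAID 5 write penalty are rules of thumb.
disks = 8
iops_per_disk = 75          # assumed for a 7,200rpm SATA drive
raid5_write_penalty = 4     # each host write = 2 reads + 2 writes on disk
read_fraction = 0.65        # matches the 65% read "real life" test

raw_iops = disks * iops_per_disk
# Effective host IOPS when each write costs 4 backend IOs:
effective = raw_iops / (read_fraction + (1 - read_fraction) * raid5_write_penalty)
print(f"Raw backend: ~{raw_iops} IOPS; usable at 65% read: ~{effective:.0f} IOPS")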

EDIT:

Just noticed they released a new firmware 3 weeks ago, going to upgrade to that tomorrow and see if anything changes. Something has to be going terribly wrong.


EDIT:

Updated FW today, same terrible results...no fun.

Moey fucked around with this message at 22:29 on Mar 9, 2011

Echidna
Jul 2, 2003

Well, this is interesting : http://www.theregister.co.uk/2011/03/10/netapp_goes_diversified/

Seems like a good strategy for NetApp, but I'm a little surprised that LSI let the Engenio division go. I'm pretty sure the OEM deal with Sun will be killed by Oracle in the not too distant future, but the IBM and Dell Engenio-based offerings are still going strong...

Janitor Prime
Jan 22, 2004

PC LOAD LETTER

What da fuck does that mean

Fun Shoe
Hi guys, I just got 4 SSDs for my FreeNAS box. It's mainly used to serve iSCSI extents to my Citrix XenServer pool, and I was wondering if I should set them up in a RAIDZ configuration or RAID 1+0. I'm just not sure which one is going to be better in this scenario. This isn't used for anything in production, just my test lab.
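
For comparison, a quick sketch of what the two layouts give you with 4 drives (general ZFS rules of thumb and an assumed drive size, nothing FreeNAS-specific):

code:

# Quick comparison: 4 SSDs as one RAIDZ vdev vs two mirrored pairs ("RAID 1+0").
# General ZFS rules of thumb - test against your own XenServer workload.
drives = 4
size_gb = 256                  # assumed drive size for illustration

raidz_usable = (drives - 1) * size_gb      # one drive's worth of parity
mirror_usable = (drives // 2) * size_gb    # half the raw capacity

# Small random IO (typical for iSCSI-backed VMs) roughly scales with the
# number of vdevs: one vdev for RAIDZ vs two vdevs for striped mirrors.
raidz_vdevs = 1
mirror_vdevs = drives // 2

print(f"RAIDZ : {raidz_usable}GB usable, survives 1 failure, {raidz_vdevs} vdev")
print(f"Mirror: {mirror_usable}GB usable, survives 1 per pair, {mirror_vdevs} vdevs")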

szlevi
Sep 10, 2010

[[ POKE 65535,0 ]]
Moving data at the sub-LUN vs the file-system level is a good question - personally I'd go with file system instead of sub-LUN, due to the fact that the latter has no clue about the data, but FS-level tiering at least doubles the latency and that's not always tolerable. Recently I met with F5 and saw a demo of their appliance-based ARX platform. I really liked the granularity and the added latency wasn't bad at all, but the cost is just crazy - I don't see it ever making it into the mainstream... it was much more expensive than EMC's DiskXtender, which is already in the ripoff-level price range.

Vanilla
Feb 24, 2002

Hay guys what's going on in th

szlevi posted:

Moving data at the sub-LUN vs the file-system level is a good question - personally I'd go with file system instead of sub-LUN, due to the fact that the latter has no clue about the data, but FS-level tiering at least doubles the latency and that's not always tolerable. Recently I met with F5 and saw a demo of their appliance-based ARX platform. I really liked the granularity and the added latency wasn't bad at all, but the cost is just crazy - I don't see it ever making it into the mainstream... it was much more expensive than EMC's DiskXtender, which is already in the ripoff-level price range.

The benefit of file-based tiering is that you can do it simply, based on policy around metadata already contained in the file system. If a file hasn't been accessed in 6 months, move it down a tier. If it still hasn't been accessed after 9 months, move it to a suitable archive platform - the file metadata itself gives us a load of information which can be used to help move data around.

Sub-LUN tiering has no clue about the data, simply due to its nature. The array has no idea what the data is and just moves things based on patterns it sees and policies you set.

If you just want to move files around then opt for lightweight tools that can move files and even leave stubs, without being intrusive to the environment. The F5 stuff is full-on file virtualisation and literally sits between the users and the files (i.e. just try to rip it out in a few years) - if it's file tiering you want, look elsewhere for similar functionality that is easier to deploy.

Use F5 for its other fancy file virtualisation features such as global namespace and easy migrations - the file archiving stuff is a sideshow :)

On the latency side, I've never cared if something increases file latency. Such latency is completely invisible to your general file users and only becomes an issue if you have an application that trawls through thousands of files.

wang souffle
Apr 26, 2002
We have an office of around 30 developers and other users with two storage needs--one fast and one large:
1) SVN repositories and build directories shared via NFS. Needs to be relatively fast.
2) Larger NFS/CIFS share for data which isn't performance critical.

We're looking into one NetApp box to cover both of these data sets. The question is, would either of these setups be an obvious choice?

NetApp FAS2040 with 12x 600GB 15krpm SAS HDD and 24x 1TB 7.2krpm SATA HDD
NetApp FAS2040 with 36x 1TB 7.2krpm SATA HDD

Is it absolutely necessary to have 15k drives for fast access, or would creating a span of pools from purely 7.2k drives likely be fast enough for both use patterns? The second config would give us more usable space and should be easier to manage.

Vulture Culture
Jul 14, 2003

I was never enjoying it. I only eat it for the nutrients.

wang souffle posted:

We have an office of around 30 developers and other users with two storage needs--one fast and one large:
1) SVN repositories and build directories shared via NFS. Needs to be relatively fast.
2) Larger NFS/CIFS share for data which isn't performance critical.

We're looking into one NetApp box to cover both of these data sets. The question is, would either of these setups be an obvious choice?

NetApp FAS2040 with 12x 600GB 15krpm SAS HDD and 24x 1TB 7.2krpm SATA HDD
NetApp FAS2040 with 36x 1TB 7.2krpm SATA HDD

Is it absolutely necessary to have 15k drives for fast access, or would creating a span of pools from purely 7.2k drives likely be fast enough for both use patterns? The second config would give us more usable space and should be easier to manage.
The problem is that "relatively fast" is not a useful quantifier when evaluating storage performance. What kind of IOPS are you looking for? What kind of throughput?

If you don't know these numbers, you need to profile until you do.
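
One rough way to get those numbers on a Linux box carrying the SVN/NFS load is to sample the kernel's disk counters over an interval. A minimal sketch, with the device name as an assumption you'd need to adjust:

code:

# Minimal IOPS sampler: read /proc/diskstats twice and diff the counters.
# DEVICE is an assumption - change it to the disk backing your SVN/NFS data.
import time

DEVICE = "sda"
INTERVAL = 10  # seconds

def io_counts(dev):
    with open("/proc/diskstats") as f:
        for line in f:
            fields = line.split()
            if fields[2] == dev:
                # field 4 = reads completed, field 8 = writes completed
                return int(fields[3]), int(fields[7])
    raise ValueError(f"device {dev} not found")

r1, w1 = io_counts(DEVICE)
time.sleep(INTERVAL)
r2, w2 = io_counts(DEVICE)
print(f"{(r2 - r1) / INTERVAL:.1f} read IOPS, {(w2 - w1) / INTERVAL:.1f} write IOPS")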

quackquackquack
Nov 10, 2002
It's "my first SAN" time at work!

We have 4 shiny ESXi hosts and a vSphere Enterprise Plus license. I should note I play much more with the Windows side, rarely the linux.

We're currently running roughly 60% linux VMs, with a couple of them actually requiring decent resources. All of the Windows VMs are pretty light - print, DC, tiny SCCM, light file, etc. In the past, the linux servers have mounted NFS shares, and the Windows file servers have pointed at iSCSI.

For storage options, we're somewhat locked into IBM nSeries (rebadged NetApp), and trying to figure out what makes the most sense in terms of licensing protocols. My thought was to license only NFS, and store the vmdks on NFS. Save money by not paying for iSCSI (have to double check with the vendor that this is true).

Is there any reason to expose NFS or iSCSI directly to VMs, as opposed to making NFS/iSCSI datastores in ESXi?

I'm not sure why I would want to use CIFS. Would it be to get rid of our Windows file servers?

Maneki Neko
Oct 27, 2000

quackquackquack posted:

It's "my first SAN" time at work!

We have 4 shiny ESXi hosts and a vSphere Enterprise Plus license. I should note I play much more with the Windows side, rarely the linux.

We're currently running roughly 60% linux VMs, with a couple of them actually requiring decent resources. All of the Windows VMs are pretty light - print, DC, tiny SCCM, light file, etc. In the past, the linux servers have mounted NFS shares, and the Windows file servers have pointed at iSCSI.

For storage options, we're somewhat locked into IBM nSeries (rebadged NetApp), and trying to figure out what makes the most sense in terms of licensing protocols. My thought was to license only NFS, and store the vmdks on NFS. Save money by not paying for iSCSI (have to double check with the vendor that this is true).

Is there any reason to expose NFS or iSCSI directly to VMs, as opposed to making NFS/iSCSI datastores in ESXi?

I'm not sure why I would want to use CIFS. Would it be to get rid of our Windows file servers?

iSCSI is generally free on NetApp filers; it's NFS that costs $$$ (although I'm not sure what kind of terrible stuff IBM might pull licensing-wise on these). There are some nice things about NetApp + NFS + ESX, but iSCSI works fine there too if you're in a pinch and your company doesn't want to get bent over on the NFS license.

There's some circumstances where you might want to have a VM mounting an NFS volume or an iSCSI LUN vs just creating a local disk on a datastore, but that really depends on application needs, etc.

As you mentioned, CIFS is generally just used on the filers as a replacement for an existing windows file server.

three
Aug 9, 2007

i fantasize about ndamukong suh licking my doodoo hole
VMFS can only be up to 2TB, so there is one reason.

madsushi
Apr 19, 2009

Baller.
#essereFerrari
Depending on the model of NetApp, dedupe might be limited to 2TB volumes anyway, especially if it's not a model that supports ONTAP 8.0.1.

1000101
May 14, 2003

BIRTHDAY BIRTHDAY BIRTHDAY BIRTHDAY BIRTHDAY BIRTHDAY FRUITCAKE!

Maneki Neko posted:

iSCSI is generally free on NetApp filers; it's NFS that costs $$$ (although I'm not sure what kind of terrible stuff IBM might pull licensing-wise on these). There are some nice things about NetApp + NFS + ESX, but iSCSI works fine there too if you're in a pinch and your company doesn't want to get bent over on the NFS license.

There's some circumstances where you might want to have a VM mounting an NFS volume or an iSCSI LUN vs just creating a local disk on a datastore, but that really depends on application needs, etc.

As you mentioned, CIFS is generally just used on the filers as a replacement for an existing windows file server.


I believe NetApp has switched to a "First protocol is free" pricing scheme.

EnergizerFellow
Oct 11, 2005

More drunk than a barrel of monkeys

three posted:

VMFS can only be up to 2TB, so there is one reason.
RDM is limited to 2TB as well due to the 32-bit SCSI emulation layer. You're stuck presenting NFS or iSCSI to the guest for >2TB. iSCSI also lets you do things like clustering in Windows.

Cavepimp
Nov 10, 2006
I just inherited a little bit of a mess of a network, including a NetApp StoreVault S500. From what I can gather it's a few years old and no longer under maintenance.

That, combined with the fact that it was never really implemented properly (glorified file dump for end users, not even being backed up and they never even provisioned 40% of the space), has me thinking we should get rid of it.

Would anyone else do otherwise? Is it possible to re-up the maintenance? At the very least I'd need to sink money into an LTO-4 drive to attach and get backups, and ideally would need to buy something new anyway to move half of that (oh god, critical) data off the stupid thing and start using it for iSCSI LUNs for our VMs. I'm thinking ditch it and sell it to management as yet another poorly thought out idea by the previous IT manager.

conntrack
Aug 8, 2003

by angerbeet

Cavepimp posted:

I just inherited a little bit of a mess of a network, including a NetApp StoreVault S500. From what I can gather it's a few years old and no longer under maintenance.

That, combined with the fact that it was never really implemented properly (glorified file dump for end users, not even being backed up and they never even provisioned 40% of the space), has me thinking we should get rid of it.

Would anyone else do otherwise? Is it possible to re-up the maintenance? At the very least I'd need to sink money into an LTO-4 drive to attach and get backups, and ideally would need to buy something new anyway to move half of that (oh god, critical) data off the stupid thing and start using it for iSCSI LUNs for our VMs. I'm thinking ditch it and sell it to management as yet another poorly thought out idea by the previous IT manager.

If I read the NetApp pages correctly the S500 series hardware went "end of support" on 29-Feb-11, so I doubt they will sell support contracts to you. Might wish to call them and check though.

H110Hawk
Dec 28, 2006

Cavepimp posted:

I just inherited a little bit of a mess of a network, including a NetApp StoreVault S500. From what I can gather it's a few years old and no longer under maintenance.

That was junk when it was bought. I fell for the same pitch and wound up with 3 of them.

Cavepimp
Nov 10, 2006
Thanks, that's pretty much what I was gathering. I had the pleasure of having to fire up their retarded little management tool to figure out why a single user couldn't contact the thing at all, only to discover the OS pre-dated the DST changes and it had lost time sync with AD. That would have sucked if everyone else wasn't coasting on cached credentials.

I don't get it. We're a pretty small shop (<50 users) and were probably about 25-30 when they bought that thing for ~$13k and then turned it into a file server with no backup or even snapshots? The things I'm finding here are just...odd.

At least I report to a VP and they listen when I tell them we need to buy something.

madsushi
Apr 19, 2009

Baller.
#essereFerrari

Cavepimp posted:

Thanks, that's pretty much what I was gathering. I had the pleasure of having to fire up their retarded little management tool to figure out why a single user couldn't contact the thing at all, only to discover the OS pre-dated the DST changes and it had lost time sync with AD. That would have sucked if everyone else wasn't coasting on cached credentials.

I don't get it. We're a pretty small shop (<50 users) and were probably about 25-30 when they bought that thing for ~$13k and then turned it into a file server with no backup or even snapshots? The things I'm finding here are just...odd.

At least I report to a VP and they listen when I tell them we need to buy something.

If you're familiar with NetApp, you can say "gently caress the StoreVault Manager" and just navigate to http://filername/na_admin and manage it like a regular NetApp. It's still Data ONTAP on the backend, and you can do just about anything with it. It's actually quite a bit better once you start setting things up on your own, especially since you can do it the "right" way - the Manager is pretty awful.

Two caveats: the StoreVault can't authenticate CIFS users off of a Windows 2008 DC (so there has to be a 2003 DC in the environment), and it's never ever going to get a code update.

Cavepimp
Nov 10, 2006

madsushi posted:

If you're familiar with NetApp, you can say "gently caress the StoreVault Manager" and just navigate to http://filername/na_admin and manage it like a regular NetApp. It's still Data ONTAP on the backend, and you can do just about anything with it. It's actually quite a bit better once you start setting things up on your own, especially since you can do it the "right" way - the Manager is pretty awful.

Two caveats: the StoreVault can't authenticate CIFS users off of a Windows 2008 DC (so there has to be a 2003 DC in the environment), and it's never ever going to get a code update.

Not really familiar at all, but that's good to know. As is the 2008 DC auth issue, since I really wanted to get the DCs upgraded.

Not that it probably even matters anyway, since everyone has full permissions to every folder. Ugh.

Maneki Neko
Oct 27, 2000

Looks like Microsoft released a free iSCSI software target for Windows Server 2008 R2.

http://blogs.technet.com/b/virtualization/archive/2011/04/04/free-microsoft-iscsi-target.aspx

Fully supported, having another option is always handy.

Vulture Culture
Jul 14, 2003

I was never enjoying it. I only eat it for the nutrients.

Maneki Neko posted:

Looks like Microsoft released a free iSCSI software target for Windows Server 2008 R2.

http://blogs.technet.com/b/virtualization/archive/2011/04/04/free-microsoft-iscsi-target.aspx

Fully supported, having another option is always handy.
Any word on whether it supports SCSI-3 PGRs?

Serfer
Mar 10, 2003

The piss tape is real



Maneki Neko posted:

Looks like Microsoft released a free iSCSI software target for Windows Server 2008 R2.
Good thing Windows Servers rarely have important security patches that require reboots.

1000101
May 14, 2003

BIRTHDAY BIRTHDAY BIRTHDAY BIRTHDAY BIRTHDAY BIRTHDAY FRUITCAKE!

Serfer posted:

Good thing Windows Servers rarely have important security patches that require reboots.

Maybe you can reboot the storage servers while you're rebooting the servers that are connecting to it for their patches.

edit:

Looks like it supports some form of HA, so you can patch node A, reboot it, then when it's back repeat on node B.

edit2:
http://technet.microsoft.com/en-us/library/gg232621%28WS.10%29.aspx

1000101 fucked around with this message at 18:46 on Apr 5, 2011

Syano
Jul 13, 2005
Microsoft Clustering Services on my storage controllers? BARF

EoRaptor
Sep 13, 2003

by Fluffdaddy

Syano posted:

Microsoft Clustering Services on my storage controllers? BARF

Don't worry, you can just put the quorum disk on your new HA iSCSI target.

Nebulis01
Dec 30, 2003
Technical Support Ninny
I had a 'my first SAN' growing experience today.

We've got a Dell PS4000X that's been running nice and peachy since September. We got another since we're running low on space. I went and added it to the group, but wanted to make sure nobody would use it, so I disabled eth0/1 as soon as possible. Little did I know that the 'Enable Performance Load Balancing in Pools' check box is checked by default. Our entire production environment that uses the SAN for storage came to a screeching halt because the SANs had already started load balancing before I disabled eth0/1.

It was a learning experience; thankfully we didn't lose any data and had about a 5-minute outage while I figured out what the gently caress I did.

Thought I'd share my idiot story


Cavepimp
Nov 10, 2006
Well, the good news is that the S500 I was talking about is now on the way out (being relegated to low-priority storage until it dies, basically).

Now I'm looking for something entirely different. I want to move to a D2D2D backup environment, one being off-site at our colo.

Our storage footprint is relatively low (under 2TB, and not really growing that fast), so what would be my best bet if I wanted a fairly low-end NAS/SAN that would replicate to an identical off-site unit and be fairly expandable later?

My only real experience is with Synology, but it looks like people have had issues with them recently over in the other thread and I've never tried replicating between two units.
