FISHMANPET
Mar 3, 2007

Sweet 'N Sour
Can't
Melt
Steel Beams
The NAS is running Linux, and in Linux it creates an EXT4 filesystem. If you use the NAS function it creates some folders and shares those out with Samba or something like that. If you use iSCSI it just creates a huge file (so if you make a 100 GB iSCSI device it makes a 100 GB file) and shares that to your clients as a block device, and then your Windows server can configure that block device as if it's a local disk.
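If you want to see what that "huge file" actually looks like, here's a rough Python sketch of the file-backed approach (just an illustration with made-up paths and sizes, not whatever QNAP or your NAS vendor actually runs):

code:
import os

# Hypothetical illustration: create a "100 GB" sparse backing file the way a
# file-backed iSCSI target (LIO fileio, tgt, etc.) typically does. The file
# claims to be 100 GiB but uses almost no real space until the initiator
# writes blocks into it.
SIZE = 100 * 1024**3                 # 100 GiB logical size
path = "/tmp/lun0.img"               # hypothetical path on the NAS's ext4 volume

with open(path, "wb") as f:
    f.truncate(SIZE)                 # sets the size without allocating blocks

st = os.stat(path)
print(f"logical size : {st.st_size / 1024**3:.1f} GiB")
print(f"space on disk: {st.st_blocks * 512 / 1024**2:.1f} MiB")   # ~0 until written

The target software then exports that file to the initiator as a raw block device, and your Windows server partitions it and puts NTFS on top without ever knowing there's ext4 underneath.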

Docjowles
Apr 9, 2009

Xenomorph posted:

I guess I don't fully understand iSCSI.

I'd have to see the specific page you're talking about, but it's probably a "unified" storage system that can do both iSCSI and NFS depending on what you configure. If you configure it to share over NFS, the drives get preformatted with an ext4 (or whatever) filesystem. Think of this like a Windows shared folder: you just connect to it and the filesystem is already there.

If you configure it for iSCSI, it's similar to fibre channel in that you just get a raw block device that you then have to partition and format yourself.

e: beaten :argh:

Xenomorph
Jun 13, 2001

Maneki Neko posted:

That's exactly how it works. The NAS box is probably just making a big fat file and then presenting that storage to the initiator as a block device.

I just watched some videos that showed a user making a virtual disk on the target system, and then that virtual disk gets mounted on the initiator. So there is still overhead (NTFS -> virtual disk -> ext4) when writing/reading.

Edit:
I'm looking at this now: QNAP TS-EC1279U-RP

Under $5,000 and we just fill it with WD RE4 drives. I'm just not too hot on the idea that the iSCSI part creates a virtual disk sitting on top of ext4.


Xenomorph fucked around with this message at 00:46 on Sep 7, 2012

GrandMaster
Aug 15, 2004
laidback
It's been a while since we bought storage, but we just got a quote for 9x 600GB 15K disks for an EMC NS120.. Just the disks, no DAE.
The quote was $21k AUD, we're being bent over right? It seems reaaaallllly expensive, I'm sure we bought a dual controller AX4-5i for about $25k around a year ago.

Moey
Oct 22, 2010

I LIKE TO MOVE IT
Are you really asking if $2300 per disk is a lovely price or am I reading this wrong somehow?

GrandMaster
Aug 15, 2004
laidback
Yeah, that's exactly what I'm asking. Like I said, it's been a long time since we bought any storage, so I'm totally out of touch with what enterprise stuff costs these days.

Moey
Oct 22, 2010

I LIKE TO MOVE IT
I have not worked with EMC before, but that seems quite high.

Edit: Actually looking around online, 600gb 15k 2.5" drives do seem to be quite pricey.

Edit 2: The unit you mention takes 3.5" disks. That seems insanely high.

Moey fucked around with this message at 01:55 on Sep 7, 2012

Syano
Jul 13, 2005

Xenomorph posted:

I just watched some videos that showed a user making a virtual disk on the target system, and then that virtual disk gets mounted on the initiator. So there is still overhead (NTFS -> virtual disk -> ext4) when writing/reading.

Edit:
I'm looking at this now: QNAP TS-EC1279U-RP

Under $5,000 and we just fill it with WD RE4 drives. I'm just not too hot on the idea that the iSCSI part creates a virtual disk sitting on top of ext4.
Yeah totally tons of overhead. I personally still use drive cabinets with 68 pin connectors and LVD ultra320 cables.

Anyways, you don't understand how iSCSI works. It doesn't 'create' anything. It is simply the protocol used to transfer the data. There is no overhead any more than there is overhead in accessing any file system.

Syano fucked around with this message at 02:30 on Sep 7, 2012

FISHMANPET
Mar 3, 2007

Sweet 'N Sour
Can't
Melt
Steel Beams

Xenomorph posted:

I just watched some videos that showed a user making a virtual disk on the target system, and then that virtual disk gets mounted on the initiator. So there is still overhead (NTFS -> virtual disk -> ext4) when writing/reading.

Edit:
I'm looking at this now: QNAP TS-EC1279U-RP

Under $5,000 and we just fill it with WD RE4 drives. I'm just not too hot on the idea that the iSCSI part creates a virtual disk sitting on top of ext4.

Holy poo poo you have no idea what you're doing. This is what a SAN is. If you want to find some crazy overpriced FC gear go nuts I guess, or see if you can find some direct attached storage but I don't know if that exists anymore anyway.

If I were you I'd be way more worried about the overhead of running second hand equipment all over.

FISHMANPET fucked around with this message at 04:17 on Sep 7, 2012

three
Aug 9, 2007

i fantasize about ndamukong suh licking my doodoo hole

FISHMANPET posted:

Holy poo poo you have no idea what you're doing. This is what a SAN is. If you want to find some crazy overpriced FC gear go nuts I guess, or see if you can find some direct attached storage but I don't know if that exists anymore anyway.

If I were you I'd be way more worried about the overhead of running second hand equipment all over.

Well, you see, FC cables plug directly into the hard drive platters so there is no overhead like iSCSI. :psyduck:

Amandyke
Nov 27, 2004

A wha?

GrandMaster posted:

It's been a while since we bought storage, but we just got a quote for 9x 600GB 15K disks for an EMC NS120.. Just the disks, no DAE.
The quote was $21k AUD, we're being bent over right? It seems reaaaallllly expensive, I'm sure we bought a dual controller AX4-5i for about $25k around a year ago.

You're getting those directly from EMC with a service contract attached? If so, then you're paying for the service contract more so than the disks. Honestly though, if you do any amount of business with EMC, just take the quote back to your sales guy, tell him that's way too high, and ask for half of that. Maybe wait until near the end of the quarter?

Xenomorph
Jun 13, 2001

FISHMANPET posted:

Holy poo poo you have no idea what you're doing. This is what a SAN is. If you want to find some crazy overpriced FC gear go nuts I guess, or see if you can find some direct attached storage but I don't know if that exists anymore anyway.

If I were you I'd be way more worried about the overhead of running second hand equipment all over.

Uh, I was hoping it was some sort of SAN device. I am asking about that kind of stuff in a SAN thread. I'm looking for something iSCSI-based, not fibre channel.

three posted:

Well, you see, FC cables plug directly into the hard drive platters so there is no overhead like iSCSI. :psyduck:

...and this? From what I saw, most of the iSCSI devices I looked at had an additional layer in between: an OS running on the device (Linux) with its own file system, then a file created on that as a "virtual disk" is shared as a volume over iSCSI, which is then formatted by the initiator. Nesting file systems like that doesn't seem like a good idea to me. Is it making partitions instead?

If I already knew everything about every single hardware device and connection method, I don't think I'd be asking so many loving questions.
I was hoping to get some helpful information, not be mocked.

YOLOsubmarine
Oct 19, 2004

When asked which Pokemon he evolved into, Kamara pauses.

"Motherfucking, what's that big dragon shit? That orange motherfucker. Charizard."

Xenomorph posted:

Uh, I was hoping it was some sort of SAN device. I am asking about that kind of stuff in a SAN thread. I'm looking for something iSCSI-based, not fibre channel.


...and this? From what I saw, most of the iSCSI devices I looked at had an additional layer in between: an OS running on the device (Linux) with its own file system, then a file created on that as a "virtual disk" is shared as a volume over iSCSI, which is then formatted by the initiator. Nesting file systems like that doesn't seem like a good idea to me. Is it making partitions instead?

If I already knew everything about every single hardware device and connection method, I don't think I'd be asking so many loving questions.
I was hoping to get some helpful information, not be mocked.

Every SAN has a layer of indirection between the calling program and the data blocks. Even on internal disk on most modern operating systems you usually have, at minimum, a filesystem layer like NTFS and a volume manager layer (LVM, Windows disk management) that translates a filesystem call to a virtual device into a call to a physical block on an actual disk. This is not a bad thing. It is a good thing when we can abstract a disk away from the underlying physical geometry, since it lets us do things like create RAID sets, or grow and shrink volumes dynamically, or thin provision storage, or a bunch of other neat stuff. It does for storage what virtual machines do for physical hardware (with the same "inefficiencies").

Modern processors are incredibly fast and efficient, and that extra "layer" you're talking about is really just a lookup in a table stored in memory to find the file location. It happens in an infinitesimally small amount of time relative to the time required to actually retrieve the data from spinning platters. You're talking nanoseconds vs milliseconds.
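To put rough numbers on that (ballpark figures I'm assuming, not measurements from any particular array):

code:
# Back-of-the-envelope: the extra mapping lookup vs. the disk access it sits
# in front of. Both figures are assumed ballpark values, not vendor specs.
lookup_ns = 100                      # in-memory table lookup, on the order of 100 ns
disk_ms = 8                          # seek + rotational latency on a 7.2k drive

disk_ns = disk_ms * 1_000_000
print(f"mapping lookup : {lookup_ns} ns")
print(f"disk access    : {disk_ns:,} ns")
print(f"lookup is {lookup_ns / disk_ns:.4%} of the disk access time")
# ~0.001% -- the indirection disappears into the noise of the mechanical I/O.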

20 years ago you still found people writing code in assembly because the optimization that could be done, relative to what a compiler would give you, was worth doing when CPU cycles were relatively costly. These days no one writes assembly anymore because saving 1,000 cycles on a processor that does billions per second, across multiple threads, is a waste of a programmer's time. Modern SANs are in the same boat. The "inefficiency" of a filesystem is so negligible relative to the benefits that it's not even worth considering.

If you don't know anything about storage then you don't need to concern yourself with the details of how a particular piece of storage works. You just need to provide useful information about the workload you intend to run on it and the required response times. If you can find storage that provides the performance you want for the price you want, who cares how it works?

Pile Of Garbage
May 28, 2007



Xenomorph posted:

...and this? From what I saw, most of the iSCSI devices I looked at had an additional layer in between: an OS running on the device (Linux) with its own file system, then a file created on that as a "virtual disk" is shared as a volume over iSCSI, which is then formatted by the initiator. Nesting file systems like that doesn't seem like a good idea to me. Is it making partitions instead?

What you are describing there is a NAS which is able to present shares as iSCSI targets. NASs are file-level storage devices where the disks are formatted with a file system and the device presents the storage as CIFS/NFS shares. On the other hand SANs are block-level storage devices which simply present slices of storage to hosts via iSCSI or FC.

e:fb (Sort of, NippleFloss's post is a bit more abstract)

Pile Of Garbage fucked around with this message at 08:32 on Sep 7, 2012

Xenomorph
Jun 13, 2001

NippleFloss posted:

If you don't know anything about storage then you don't need to concern yourself with the details of how a particular piece of storage works. You just need to provide useful information about the workload you intend to run on it and the required response times. If you can find storage that provides the performance you want for the price you want, who cares how it works?

My experience with a "file system on a file system" is with things like Wubi/Ubuntu (ext3 loop device on top of NTFS) and virtual machines, where I/O suffers. I'm guessing I shouldn't have to worry about that, since the internal I/O of a modern RAID setup won't be my bottleneck (1 Gb Ethernet will).

I just wasn't sure if that is how all iSCSI-compatible devices do it.

Next question: how important is it to have a dual-controller RAID?

Is something like the QNAP TS-EC1279U-RP terrible?

The primary use I'm looking for is just data storage. People want to move stuff off their desktops to a shared drive.

Rhymenoserous
May 23, 2008

Moey posted:

I have not worked with EMC before, but that seems quite high.

Edit: Actually looking around online, 600gb 15k 2.5" drives do seem to be quite pricey.

Edit 2: The unit you mention takes 3.5" disks. That seems insanely high.

On the lower end systems EMC gets you on the price of the system. On the higher end ones they drat near give the hardware away, provided you sign away your entire company's firstborn children to their unholy embrace.

Rhymenoserous
May 23, 2008

Xenomorph posted:

My experience with a "file system on a file system" is with things like Wubi/Ubuntu (ext3 loop device on top of NTFS) and virtual machines, where I/O suffers. I'm guessing I shouldn't have to worry about that, since the internal I/O of a modern RAID setup won't be my bottleneck (1 Gb Ethernet will).

I just wasn't sure if that is how all iSCSI-compatible devices do it.

Next question: how important is it to have a dual-controller RAID?

Is something like the QNAP TS-EC1279U-RP terrible?

The primary use I'm looking for is just data storage. People want to move stuff off their desktops to a shared drive.

I know you are concerned about overhead with iSCSI, but it's virtually nonexistent on a modern system. I mean, I'm presenting blocks of iSCSI to 4 ESXi boxes and I'm using those blocks to run Windows servers (i.e., the OS is coming into the box over iSCSI). Each one of those Windows servers is talking to its own storage over Microsoft's iSCSI initiator (my storage has app-aware backups if you do it this way, as opposed to setting up a big virtual storage pool). And at least two of these pools are database servers that at any given moment have ~200 people hitting them almost constantly, because our ERP vendor never clued into the idea that if I'm not actively executing a query, a db call/connection is absolutely unnecessary (gently caress these guys seriously).

So as long as you're just sharing out chunks of storage to do, well... whatever the hell you are doing (I have no clue), you will be fine. Sidenote: if you are doing Linux stuff I'd get a device that can do NFS as well as iSCSI. iSCSI plays really nice with Windows; NFS is a breeze with Linux.

Amandyke
Nov 27, 2004

A wha?

Rhymenoserous posted:

So as long as you're just sharing out chunks of storage to do, well... whatever the hell you are doing (I have no clue), you will be fine. Sidenote: if you are doing Linux stuff I'd get a device that can do NFS as well as iSCSI. iSCSI plays really nice with Windows; NFS is a breeze with Linux.

iSCSI works great no matter what OS you're running...

Docjowles
Apr 9, 2009

Xenomorph posted:

Next question: how important is it to have a dual-controller RAID?

This is pretty much something you have to answer yourself based on what you're using the device for. If there's a major issue (botched firmware update, CPU/RAM/whatever totally shits the bed and the whole device goes down) is it OK for the device to go down for a week while you get a replacement? It's one of those things where you're buying insurance against a rare but potentially devastating scenario.

Rhymenoserous
May 23, 2008

Amandyke posted:

iSCSI works great no matter what OS you're running...

I've never used it on *nix because NFS on *nix is just so drat easy.

YOLOsubmarine
Oct 19, 2004

When asked which Pokemon he evolved into, Kamara pauses.

"Motherfucking, what's that big dragon shit? That orange motherfucker. Charizard."

Xenomorph posted:

My experience with a "file system on a file system" is with things like Wubi/Ubuntu (ext3 loop device on top of NTFS) and virtual machines, where I/O suffers. I'm guessing I shouldn't have to worry about that, since the internal I/O of a modern RAID setup won't be my bottleneck (1 Gb Ethernet will).

I just wasn't sure if that is how all iSCSI-compatible devices do it.

Next question: how important is it to have a dual-controller RAID?

Is something like the QNAP TS-EC1279U-RP terrible?

The primary use I'm looking for is just data storage. People want to move stuff off their desktops to a shared drive.

The reason I/O on a virtual machine suffers isn't because it's a filesystem on a filesystem, it's because translating system calls from the guest OS layer into system calls on the VM server layer is a pretty complex undertaking to do in real time. The I/O portion of the stack is the easy part. And even then saying that I/O suffers is pretty meaningless since it's all dependent on system overhead. Sure, if you're running a VM off of your laptop with 2GB of memory, and that laptop is also running a full OS then you will have bad performance. But if you're running a special purpose hypervisor like ESX and you're doing it on sufficiently sized hardware, then you would never be able to tell if the machine is physical or virtual just by looking at performance. You really can't translate your experiences playing with loopback devices or KVM into how very special purpose and optimized storage devices will work.

I would be very surprised if you saturated even a 1Gb link with file services. Unless people are streaming video or running applications from the file server, you will have pretty low I/O demands and your latency will mostly be due to CIFS being kind of crappy. 1Gb/s of throughput translates into about 2,000 64k block operations per second. A single 7200RPM disk can handle maybe 100 IOPS at best, so you'd need 20 data disks to provide that many IOPS. That number of disks goes down if the workload is heavily sequential (and it's just a very rough estimate; you would need to know average I/O size and chain length to get more exact), but most shared file services workloads are fairly random since the files are often small and numerous, and what one user is accessing doesn't necessarily correlate with what another user is accessing, so temporal and spatial locality of reference are low.
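Here's that arithmetic written out in case anyone wants to plug in their own numbers (the 100 IOPS per 7200RPM disk figure is the usual rule of thumb, not a spec):

code:
import math

# Rough sizing math from the paragraph above: how many 7200 RPM data disks it
# takes to keep a 1 Gb link busy with random 64 KiB I/O. Every figure here is
# a rule-of-thumb assumption, not a measurement.
link_gbps = 1.0
io_size_bytes = 64 * 1024
iops_per_disk = 100                  # ballpark for a 7.2k SATA disk doing random I/O

link_bytes_per_sec = link_gbps * 1e9 / 8          # ~125 MB/s
ops_per_sec = link_bytes_per_sec / io_size_bytes
disks_needed = math.ceil(ops_per_sec / iops_per_disk)

print(f"link throughput : {link_bytes_per_sec / 1e6:.0f} MB/s")
print(f"64 KiB ops/sec  : {ops_per_sec:.0f}")     # ~1,900, i.e. roughly 2,000
print(f"disks needed    : {disks_needed}")        # ~20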

Not every iSCSI SAN uses a filesystem underneath, but most do. Most SANs of any sort, FC or iSCSI, use a filesystem or at the very least an incredibly complex volume manager these days. You can still find both iSCSI and FC SANs that are more traditional and don't run a full filesystem, but they are increasingly rare and outmoded.

The QNAP is a prosumer device. It's not going to be comparable to even the lowest-end business-grade SAN. Performance, reliability, and feature set will be much worse than something EMC, or Dell, or whoever will sell you. Whether it's good for your purposes depends on what your requirements are and what your budget is.

EoRaptor
Sep 13, 2003

by Fluffdaddy
It's budget getting time, so I'm looking for recommendations:

Needed:
12 to 20 TB of usable storage
Network accessible via SMB/CIFS and AFPv3
Integration with AD/Kerberos (authentication, security, etc.)
Rack mountable
Snapshots with Volume Shadow Copy integration

Nice to have:
NFS
Support for Backup Exec 2010 (NDMP? other protocol/agent?)
Low management needs (set and forget)
Alerting via email


This will be used for network shared drives, My Documents redirection, raw photo and video storage, and other bulk storage for an office of about 50 people who produce magazines, videos, etc.

I'm mostly wondering what is out there and how much it costs. I'm hoping for something in the 10k range, but if I have valid reasons to spend more, I can ask for it. I'd prefer an integrated solution with support, but I can roll my own if need be.

I have an EqualLogic for my SAN, and it's very nice, but I think the price and feature set are more than I need here (unless someone can show otherwise?)

Thanks.

Vulture Culture
Jul 14, 2003

I was never enjoying it. I only eat it for the nutrients.
Honestly, I'm going to be really surprised if you end up finding AFP and NDMP on the same device. AFP is incredibly uncommon to find on enterprise or even midrange NAS solutions (after ten years in the industry I haven't seen it in the wild once), and the support you'll find on most prosumer NAS devices (QNAP, Synology, etc.) is half-baked netatalk integration.

EoRaptor
Sep 13, 2003

by Fluffdaddy

Misogynist posted:

Honestly, I'm going to be really surprised if you end up finding AFP and NDMP on the same device. AFP is incredibly uncommon to find on enterprise or even midrange NAS solutions (after ten years in the industry I haven't seen it in the wild once), and the support you'll find on most prosumer NAS devices (QNAP, Synology, etc.) is half-baked netatalk integration.

Yeah, NDMP or another direct backup solution (as opposed to grabbing the files from the share) is probably a stretch, but I still need to keep in mind that I have to back this up, and keep my feature checklist matched to that.

I could do Windows Storage Server and ExtremeZ-IP, I'd just need a chassis and controller setup to support it. What I have going now for network shares is a whitebox supermicro with Server 2008R2 on it. The OSX support is terrible, however, and fixing that is a requirement.

EoRaptor fucked around with this message at 21:18 on Sep 7, 2012

Nebulis01
Dec 30, 2003
Technical Support Ninny

EoRaptor posted:

Yeah, NDMP or another direct backup solution (as opposed to grabbing the files from the share) is probably a stretch, but I still need to keep in mind that I have to back this up, and keep my feature checklist matched to that.

I could do Windows Storage Server and ExtremeZ-IP, I'd just need a chassis and controller setup to support it. What I have going now for network shares is a whitebox supermicro with Server 2008R2 on it. The OSX support is terrible, however, and fixing that is a requirement.

If WSS would work, look into Server 2012? They added SMB 3.0 support and made some other changes that make it more appealing for storage usage. I don't know jack about OSX though, does it even support SMB?

EoRaptor
Sep 13, 2003

by Fluffdaddy

Nebulis01 posted:

If WSS would work, look into Server 2012? They added SMB 3.0 support and made some other changes that make it more appealing for storage usage. I don't know jack about OSX though, does it even support SMB?

I plan to look into WS2012, and OSX does support SMB, but having our Mac workstations use SMB shares from our current 2008R2 results in a bunch of file locking issues that just don't seem to be fixable. I've tried tuning SMB on the server side, tried the DAVE SMB replacement client on the Mac, and the same issues persist. It's very frustrating for the teams that work on them.

If they made a mac mini with 2 (or more) Gb ports, I'd wedge one of those in just to serve AFP from an iSCSI LUN, but the current mini + thunderbolt to ethernet just seems to be asking for problems.

evil_bunnY
Apr 2, 2003

EoRaptor posted:

If they made a mac mini with 2 (or more) Gb ports, I'd wedge one of those in just to serve AFP from an iSCSI LUN, but the current mini + thunderbolt to ethernet just seems to be asking for problems.
Why'd you think that? I have one in my iMac and it works peachy?

Nomex
Jul 17, 2002

Flame retarded.

EoRaptor posted:

It's budget getting time, so I'm looking for recommendations:

Needed:
12 to 20 TB of usable storage
Network accessible via SMB/CIFS and AFPv3
Integration with AD/Kerberos (authentication, security, etc.)
Rack mountable
Snapshots with Volume Shadow Copy integration

Nice to have:
NFS
Support for Backup Exec 2010 (NDMP? other protocol/agent?)
Low management needs (set and forget)
Alerting via email


This will be used for network shared drives, My Documents redirection, raw photo and video storage, and other bulk storage for an office of about 50 people who produce magazines, videos, etc.

I'm mostly wondering what is out there and how much it costs. I'm hoping for something in the 10k range, but if I have valid reasons to spend more, I can ask for it. I'd prefer an integrated solution with support, but I can roll my own if need be.

I have an EqualLogic for my SAN, and it's very nice, but I think the price and feature set are more than I need here (unless someone can show otherwise?)

Thanks.

You should check out the NetApp FAS2240 series. You can get them with 12 or 24 disks to start, in 1-3 TB SATA sizes. They support CIFS, NFS, iSCSI and FC, as well as AD integration and NDMP.

madsushi
Apr 19, 2009

Baller.
#essereFerrari

Nomex posted:

You should check out the NetApp FAS2240 series. You can get them with 12 or 24 disks to start, in 1-3 TB SATA sizes. They support CIFS, NFS, iSCSI and FC, as well as AD integration and NDMP.

You're not going to see a FAS anywhere near $10k with that much storage.

Your best bet is a server with a decent raid card (HP/Dell) and a ton of disk and Windows. Windows is obviously going to give you AD auth, volume shadow copy, SMB/CIFS, etc, and it will be simple to manage (probably no training needed). If all you need is file shares, it's going to be very hard to beat a Windows server loaded with drives.

three
Aug 9, 2007

i fantasize about ndamukong suh licking my doodoo hole

EoRaptor posted:

If they made a mac mini with 2 (or more) Gb ports, I'd wedge one of those in just to serve AFP from an iSCSI LUN, but the current mini + thunderbolt to ethernet just seems to be asking for problems.

You can use the thunderbolt port as a 10Gb port.

sanchez
Feb 26, 2003

madsushi posted:



Your best bet is a server with a decent raid card (HP/Dell) and a ton of disk and Windows. Windows is obviously going to give you AD auth, volume shadow copy, SMB/CIFS, etc, and it will be simple to manage (probably no training needed). If all you need is file shares, it's going to be very hard to beat a Windows server loaded with drives.

This is fun until a RAID controller dies. I'd try to get a real SAN if at all possible; that's a lot of data to be sitting on a single server. If it was just a few TB it'd be a different story.

madsushi
Apr 19, 2009

Baller.
#essereFerrari

sanchez posted:

This is fun until a RAID controller dies. I'd try to get a real SAN if at all possible; that's a lot of data to be sitting on a single server. If it was just a few TB it'd be a different story.

That's why I recommended buying HP/Dell, because at least you know parts are going to be available for a long time. If you can find a decent SAN for $10k that fits the criteria, then that's an option too, I just don't see that happening easily.

Vulture Culture
Jul 14, 2003

I was never enjoying it. I only eat it for the nutrients.

madsushi posted:

That's why I recommended buying HP/Dell, because at least you know parts are going to be available for a long time. If you can find a decent SAN for $10k that fits the criteria, then that's an option too, I just don't see that happening easily.
It appears you do not fully grasp what "single point of failure" means :(

madsushi
Apr 19, 2009

Baller.
#essereFerrari

Misogynist posted:

It appears you do not fully grasp what "single point of failure" means :(

But "single point of failure" was not in the original requirements. If you read the original request, what's needed is just a big NAS on a relatively tight budget.

Vulture Culture
Jul 14, 2003

I was never enjoying it. I only eat it for the nutrients.

madsushi posted:

But "single point of failure" was not in the original requirements. If you read the original request, what's needed is just a big NAS on a relatively tight budget.
http://forums.somethingawful.com/showthread.php?threadid=2801557

Nukelear v.2
Jun 25, 2004
My optional title text

madsushi posted:

That's why I recommended buying HP/Dell, because at least you know parts are going to be available for a long time. If you can find a decent SAN for $10k that fits the criteria, then that's an option too, I just don't see that happening easily.

A pair of servers doing DFS/DFSR should be around 10k depending on what you get (spend a bit more and get something nicer) and has no single failure point. If SMB is flaky in OSX, give NFS a try; given its pedigree I can't imagine it having poor support for NFS.

Vulture Culture
Jul 14, 2003

I was never enjoying it. I only eat it for the nutrients.

Nukelear v.2 posted:

A pair of servers doing DFS/DFSR should be around 10k depending on what you get (spend a bit more and get something nicer) and has no single failure point. If SMB is flaky in OSX, give NFS a try; given its pedigree I can't imagine it having poor support for NFS.
The Windows support for NFS serving is incredibly poo poo, though, and good luck doing access control unless you're doing a full Centrify implementation.

XMalaclypseX
Nov 18, 2002
Quick Question:

I have an EqualLogic PS4100 with (12) 900GB, 7200RPM drives in a RAID-50. It has two 1Gb dual-port controllers. The SAN is connected to a VMware host with the path policy set to round-robin and the IOPS limit set to 3.

My question involves performance. I can consistently get about a 100MB/s read rate and an 80MB/s write rate. These rates seem particularly low to me. Any thoughts?

Internet Explorer
Jun 1, 2005





Are those transfer rates when the SAN has nothing else on it? 100MB/s is about as high as you'll see on a single 1Gb port, so I'd start looking at what may be causing it not to use both ports.
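For reference, the rough ceiling math (I'm assuming standard 1500-byte frames and typical protocol overhead; jumbo frames shift it a little):

code:
# Rough throughput ceiling for iSCSI over 1 GbE. The ~6% protocol overhead
# (Ethernet + IP + TCP + iSCSI headers at 1500-byte frames) is an assumed
# ballpark, not an exact figure.
line_rate_mbit = 1000
overhead = 0.06

per_port_MBps = line_rate_mbit / 8 * (1 - overhead)     # ~117 MB/s per port
print(f"one 1 GbE port              : ~{per_port_MBps:.0f} MB/s")
print(f"two ports, round-robin MPIO : ~{2 * per_port_MBps:.0f} MB/s")
# Seeing ~100 MB/s total across "both" ports usually means traffic is really
# only flowing down one path, so MPIO/port binding is the first thing to check.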

XMalaclypseX
Nov 18, 2002

Internet Explorer posted:

Are those transfer rates when the SAN has nothing else on it? 100MB/s is about as high as you'll see on a single 1Gb port, so I'd start looking at what may be causing it not to use both ports.

The transfer rates are over both ports.
