|
Misogynist posted:A decent storage device should be able to take care of this regardless of client with policy-based routing (a connection is started and associated with a network interface, then all traffic related to that connection flows over the same interface). In practice, this is pretty wonky and barely works right in Linux and FreeBSD, and I wouldn't trust most storage vendor firmware based on vxWorks or whatever to just get it right. The storage vendor can enforce it with source routing rules on their side, but they are still at the mercy of what the client elects to do when they send data, so it's easier to just force the behavior you want by using multiple subnets and keeping the routing table clean on both the client and the storage side. I know that some initiators and storage targets either strongly suggest or even enforce this behavior by not allowing multiple sessions or MCS over interfaces in the same subnet.

bull3964 posted:Dell's best practices for that array has all 4 ports being on a single VLAN and using a single VLAN for all your storage traffic.

EQL is a bit of an odd one since they do recommend everything being in the same subnet, I believe because they require that every port on a node be able to communicate with all other ports on all other nodes. They have an MPIO DSM that handles load balancing and likely enforces the port binding behavior that they want for TCP connections. Obviously you should follow whatever your vendor's guidelines are, but as a general rule you're usually safe building out your iSCSI switching layer analogously to redundant FC fabrics, where you maintain two completely separate fabrics (usually physical, but they can be virtual) and home at least one initiator port and one target port to each.
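To make the "multiple subnets, clean routing table" point concrete, here's a small sketch (interface names and subnets are made up for illustration) of why one-subnet-per-path makes the egress NIC unambiguous without any policy-based routing:

```python
# Sketch: with each initiator/target NIC pair on its own subnet, a plain
# connected-route lookup picks the egress NIC deterministically -- no
# policy-based routing needed. Names and subnets here are illustrative.
import ipaddress

interfaces = {
    "iscsi0": ipaddress.ip_network("10.10.1.0/24"),
    "iscsi1": ipaddress.ip_network("10.10.2.0/24"),
}

def egress_nic(target_ip):
    """Return the NIC whose connected subnet contains the target, if unique."""
    addr = ipaddress.ip_address(target_ip)
    matches = [nic for nic, net in interfaces.items() if addr in net]
    # One match: the path is unambiguous. On a flat network both NICs
    # would match and the OS would pick one for you -- the wonky case.
    return matches[0] if len(matches) == 1 else None
```

With a flat network the same lookup returns multiple candidates and the client's stack gets to choose, which is exactly the behavior the storage vendor can't control.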
|
# ? Oct 26, 2013 06:38 |
|
kiwid posted:Ah, thanks for clearing that up. I could never find anywhere that actually explained that.

Tagging on to that: iSCSI traffic is clear text as well, and anyone sniffing could potentially read the data and attach to the iSCSI server (CHAP can help, but it does not encrypt).
|
# ? Oct 26, 2013 19:34 |
|
Dilbert As gently caress posted:Tagging on that iSCSI traffic is clear text as well, and anyone sniffing could potentially read the data and attach to the ISCSI server(chap can help but it does not encrypt).

Mask the LUNs. This isn't always surefire, but it's a lot better than just relying on iSNS and maybe CHAP. iSCSI traffic is cleartext, but trivially encapsulated in IPsec if you really cared. Really, someone would have to be awfully dedicated to assemble a block device from iSCSI traffic. They can certainly sniff some of your data (though sensitive data should be encrypted before it hits the disks anyway), but the likelihood of them doing any real espionage or damage by trying to reassemble a device from SCSI commands is low as long as you have layered security.
|
# ? Oct 27, 2013 02:15 |
|
evol262 posted:Mask the LUNs. This isn't always surefire, but it's a lot better than just relying on iSNS and maybe CHAP.

I agree, but hey, the point is: don't put everything on the same flat network!
|
# ? Oct 27, 2013 02:28 |
|
evol262 posted:Mask the LUNs. This isn't always surefire, but it's a lot better than just relying on iSNS and maybe CHAP.

iSCSI LUNs are masked by IQNs, which are user-configurable, so it's trivially easy to work around any security that's based on masking. Worse, if you have a single flat network that contains both storage IPs and normal user-traffic IPs, your chances of ending up with IP conflicts are much higher, and when someone steals an IP address from your storage array's iSCSI target for their newly built server or desktop or whatever, things can get bad pretty quickly. Or, if the user is malicious, they can do this on purpose and DoS your storage device. Keeping your storage traffic segregated from the wider user network, and unrouted, is a good idea.

IPsec encryption isn't always viable either, as it adds load on both clients and storage, as well as latency to storage response times. In a high-performance environment that can be problematic. Most shops aren't going to go through the trouble of encrypting storage traffic that only traverses their local network.
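As an illustration of how weak IQN-based masking is: on a Linux open-iscsi initiator, the IQN is just a line in a plain-text config file, so anyone with root on a host in the storage VLAN can present whatever IQN they like (the IQN value below is made up):

```
# /etc/iscsi/initiatorname.iscsi
# Edit this line and restart iscsid, and the host now claims this identity.
InitiatorName=iqn.1993-08.org.debian:01:spoofed-host
```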
|
# ? Oct 27, 2013 03:36 |
|
NippleFloss posted:iSCSI LUNs are masked by IQNs, which are user configurable, so it's trivially easy to work around any security that's based on masking. Worse, if you have a single flat network that contains both storage IPs and normal user traffic IPs your chances of ending up with IP conflicts are much higher and when someone steals an IP address from your storage array's iSCSI target for their newly built server or desktop or whatever, things can get bad pretty quickly. Or, if the user is malicious, they can do this on purpose and DOS your storage device. Keeping your storage traffic segregated from the wider user network, and unrouted, is a good idea.

I'm not saying you shouldn't VLAN off your storage traffic. For a number of reasons (not least of which being MTU and performance), you should. And a malicious user can get around pretty much anything. But masking eliminates most of the "whoops, I connected to the wrong LUN and hosed everything up" problems. Nothing's going to stop "I sniffed traffic and spoofed an IQN", but when does that actually happen?
|
# ? Oct 27, 2013 05:22 |
|
No mention at all yet of Seagate's Kinetic Open Storage that was announced over the weekend? http://www.seagate.com/tech-insights/kinetic-vision-how-seagate-new-developer-tools-meets-the-needs-of-cloud-storage-platforms-master-ti/ It seems like an interesting concept, though I'm still trying to wrap my brain fully around it. Ethernet-enabled low-level storage devices (drives) manipulated directly via an API (PUT, DELETE, GET commands). No controller, no RAID subsystem, no operating system, no SAN.
|
# ? Oct 28, 2013 15:24 |
|
bull3964 posted:No mention at all yet of Seagate's Kinetic Open Storage that was announced over the weekend?

I'm very disappointed at how little hard technical documentation is out there right now, because this seems like a really neat product despite the vendor lock-in that's driven lots of people to other object storage platforms. There's nothing on consistency guarantees (is there even read-after-write consistency or not?), conflict resolution, partition tolerance, nothing. The way they chose Riak CS and OpenStack Swift as their first third-party integrations, along with the bit on DHTs in the documentation, suggests to me that the underlying implementation is fairly similar to Dynamo, with consistent hashing at the middle of it, but even that level of detail is just guesswork. I can't even tell if this is a real piece of technical achievement or another "me-too" attempt to capitalize on the NoSQL/denormalization thing that's happening right now, because there's nothing to look at but a press release and 15 pages of non-technical documentation. Vulture Culture fucked around with this message at 17:20 on Oct 28, 2013 |
# ? Oct 28, 2013 17:16 |
|
I would guess that within their API call for a PUT operation they have a field that lets you set the redundancy level of the file you are writing, which then ensures that it gets written to multiple nodes based on that redundancy level. They also mention end-to-end data integrity, including protection against silent data corruption, so I assume they have some sort of checksum-based integrity checking. I think you're correct that it sounds a lot like Dynamo or any other DHT, just running directly on drive hardware rather than an abstracted compute/storage node. So it's hardware-defined software-defined storage...
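Since there's no hard documentation, this is purely a guess, but the key/value semantics being described might look something like this toy in-memory model. The method names just mirror the announcement's verbs, and the checksum is an assumption based on the integrity claim — this is not Seagate's actual API:

```python
# Toy model of a Kinetic-style key/value drive -- NOT Seagate's real API.
# put/get/delete mirror the verbs in the announcement; the stored checksum
# is one plausible way to catch silent data corruption on read.
import hashlib

class KineticDriveModel:
    def __init__(self):
        self._store = {}

    def put(self, key: bytes, value: bytes) -> None:
        # Persist the value alongside a checksum computed at write time.
        self._store[key] = (value, hashlib.sha1(value).hexdigest())

    def get(self, key: bytes) -> bytes:
        value, checksum = self._store[key]
        # Recompute on read; a mismatch means the bits rotted in place.
        if hashlib.sha1(value).hexdigest() != checksum:
            raise IOError("silent data corruption detected")
        return value

    def delete(self, key: bytes) -> None:
        del self._store[key]
```

Replication across drives would presumably sit above this interface, in whatever DHT layer coordinates the nodes.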
|
# ? Oct 28, 2013 18:28 |
|
From the initial developer documentation and sample code, redundancy appears to be an application-layer concern, which is probably why they're supporting Riak and OpenStack Swift out of the gate. But the documentation isn't great, and everything they released on GitHub is compiled to bytecode, so take that with a grain of salt.
|
# ? Oct 28, 2013 19:10 |
|
The fact that they have an HDFS adapter for this suggests that either they or their customers don't know how to use Hadoop. I don't know if that makes me want to laugh or cry.
|
# ? Oct 28, 2013 20:03 |
|
bull3964 posted:Dell's best practices for that array has all 4 ports being on a single VLAN and using a single VLAN for all your storage traffic.

The MPIO config for my NetApp FAS2050 was the same way. If you have a host with NicA and NicB and the filer with active/active heads listening on Nic1 and Nic2, their best practice was to have all four on the same VLAN and subnet so you could have these traffic patterns:

NicA -> Nic1
NicA -> Nic2
NicB -> Nic2
NicB -> Nic1

For better redundancy: have a stacked pair of switches serving up only iSCSI traffic so there's no need to VLAN at all, and plug NicA and Nic1 into SwitchA and NicB and Nic2 into SwitchB. I'm not sure if that's still the way NetApp does it, as the FAS2050 is ancient in IT years.
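Counting sessions under the two approaches is just enumeration (nothing vendor-specific here):

```python
# Enumerate iSCSI sessions under the two wiring philosophies discussed above.
from itertools import product

server_nics = ["NicA", "NicB"]
filer_nics = ["Nic1", "Nic2"]

# Same subnet, full mesh: every initiator NIC talks to every target NIC.
mesh = list(product(server_nics, filer_nics))
# One subnet per path: each initiator NIC is paired with one target NIC.
per_subnet = list(zip(server_nics, filer_nics))

assert len(mesh) == 4        # the four NicA/NicB -> Nic1/Nic2 patterns
assert len(per_subnet) == 2  # NicA -> Nic1 and NicB -> Nic2 only
```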
|
# ? Oct 29, 2013 22:33 |
|
Agrikk posted:The MPIO config for my NetApp FAS2050 was the same way. If you have a host with NicA and NicB and the Filer with active/active heads listening on Nic1 and Nic2, their best practice was to have all four on the same VLAN and subnet so you could have:

So you get 4 paths this way? I'm now wondering if I even have my home lab set up correctly. I have two NICs on my host and two NICs on my NAS, so two paths using separate VLANs, right? Does this look right? http://imgur.com/a/Lnxsz
|
# ? Oct 29, 2013 22:49 |
|
Aquila posted:Oh god what have I done: Mine's just a baby. All 10k SAS, unfortunately. Edit: lol, I can't read, apparently. NullPtr4Lunch fucked around with this message at 23:45 on Oct 29, 2013 |
# ? Oct 29, 2013 23:37 |
|
Agrikk posted:The MPIO config for my NetApp FAS2050 was the same way. If you have a host with NicA and NicB and the Filer with active/active heads listening on Nic1 and Nic2, their best practice was to have all four on the same VLAN and subnet so you could have:

This setup is not recommended. It may not cause problems, but it's not a best practice and it *can* cause problems. From TR-3441: "NetApp recommends that if using multiple paths or sessions with iSCSI, each path should be isolated to its own subnet."

kiwid posted:So you get 4 paths this way?

Yes, this is correct.
|
# ? Oct 29, 2013 23:48 |
|
kiwid posted:So you get 4 paths this way?

Yes, you get four paths. However, in my case the filer had two heads, each acting independently and controlling its own disk aggregate for the most part, so each NIC on my server needs a path to each head. 2 NICs on server * 2 heads = 4 paths.

For a single host to a single NAS, each with two NICs, you configure two paths on separate VLANs: NicA to Nic1 on VLAN1 and NicB to Nic2 on VLAN2.
|
# ? Oct 30, 2013 01:08 |
|
NippleFloss posted:This setup is not recommended. It may not cause problems, but it's not a best practice and it *can* cause problems.

That has never made sense to me. By using a single subnet for a path, I must therefore limit that path to a specific pair of NICs (initiator to target) over a specific switch. code:
ServerA Nic1 --(subnet 1 / SwitchA)--> HeadA NicA
ServerA Nic2 --(subnet 2 / SwitchB)--> HeadB NicB
How does NetApp propose to resolve this problem with one subnet per path?
|
# ? Oct 30, 2013 01:17 |
|
Agrikk posted:That has never made sense to me. By using a single subnet for a path, I must therefore limit that path to a specific pair of NICs (initiator to target) over a specific switch.

Well, you're missing two connections. ServerA Nic2 should also have a path to HeadA NicB (on a different subnet from Nic1), and ServerA Nic1 should have a path to HeadB NicA. Your setup doesn't make much sense: it doesn't have anything to do with limiting paths to subnets; you just plain haven't made enough connections to your switches to allow for all of the required paths.

Agrikk posted:Yes you get four paths. However in my case the filer had two heads, each acting independently and controlling its own disk aggregate for the most part so each nic on my server needs a path to each head. 2 nics on server * 2 heads = 4 paths.

You don't really have four paths. You have two paths to any one LUN with this setup. LUNs are owned by a controller, and only paths through the owning controller are available to MPIO (in iSCSI; FC on NetApp is different). So you have two paths, and when controller failover happens you still have two paths. If you use the same-subnet setup you can FAKE having four paths by doing a full mesh of each initiator to each target, but you don't really have four paths, because two will drop out with any failure.
|
# ? Oct 30, 2013 01:45 |
|
NippleFloss posted:Well, you're missing two connections. So what is a full-featured four path setup, then? Four NICs on each device (with two NICs on each head)? Can you detail it, please? Because I've been thinking about this wrong the whole time, apparently.
|
# ? Oct 30, 2013 03:30 |
|
Agrikk posted:So what is a full-featured four path setup, then?

You would need 4 NICs on your server and 4 NICs on each controller if you wanted a true 4-path setup. Each server-to-controller NIC pair is a single path. Obviously you could do things like 4 NICs per server and 2 NICs per controller, but then you've got a 2-to-1 fan-in ratio at the controller ports, so you aren't getting a full 4 paths' worth of performance or redundancy and it really doesn't buy you much. The secondary controller doesn't provide any usable paths under normal (i.e. non-failover) operation, so you can't count its NICs as available MPIO paths. Under Clustered ONTAP this is different, as all paths through all nodes are active and ALUA is used to decide which are the preferred paths. But you'd still want to make sure that all paths on the same controller are on different subnets.
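A quick sketch of the path-counting logic above (controller and NIC names follow the earlier example; only sessions landing on the LUN's owning controller count):

```python
# Only sessions landing on the controller that owns the LUN count as usable
# MPIO paths; sessions to the partner controller are dead weight until failover.
sessions = [
    ("ServerNic1", "HeadA:NicA"),
    ("ServerNic1", "HeadB:NicA"),
    ("ServerNic2", "HeadA:NicB"),
    ("ServerNic2", "HeadB:NicB"),
]

def usable_paths(sessions, lun_owner):
    """Filter the full session mesh down to paths through the LUN's owner."""
    return [s for s in sessions if s[1].startswith(lun_owner + ":")]

# A full mesh gives four sessions, but only two real paths to a LUN on HeadA.
assert len(usable_paths(sessions, "HeadA")) == 2
```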
|
# ? Oct 30, 2013 03:43 |
|
quote:SupportEdge Standard Part Replace 4hr - FAS2020 1 $4,360.00 $4,360.00
|
# ? Oct 30, 2013 19:39 |
|
Nobody wants to support old stuff. Have you asked NetApp if you can trade it in?
|
# ? Oct 30, 2013 19:41 |
|
Caged posted:Nobody wants to support old stuff. Have you asked NetApp if you can trade it in?

Is it really that old? If it's as old as everything else here, it wouldn't surprise me. Here's the other half of the quote. quote:SupportEdge Standard Part Replace 4 hr - FAS2020A $2,994.00
|
# ? Oct 30, 2013 19:54 |
|
It's a 2007 model and adding 12 months warranty on after the original (3 years?) warranty is up is always quite pricey. I wouldn't consider those costs ridiculous but I'd seriously consider replacing it versus paying out in maintenance for another few years.
|
# ? Oct 30, 2013 19:57 |
|
Caged posted:It's a 2007 model and adding 12 months warranty on after the original (3 years?) warranty is up is always quite pricey. I wouldn't consider those costs ridiculous but I'd seriously consider replacing it versus paying out in maintenance for another few years. Exactly as old as our VMware servers. Makes sense. Ugh, I hate my cheap boss.
|
# ? Oct 30, 2013 20:02 |
|
I'm surprised they're still offering to support it if it is a 2007 model. Usually enterprise kit gets 3 years to start and you can add another 2 if you pay the price. After 5 years it's usually End Of Support time.
|
# ? Oct 30, 2013 21:11 |
|
skipdogg posted:I'm surprised they're still offering to support it if it is a 2007 model. Usually enterprise kit gets 3 years to start and you can add another 2 if you pay the price. After 5 years it's usually End Of Support time.

Most major players will support past the five year mark unless there's a supply chain reason not to (see: BlueArc acquisition and LSI parts) but prices go up so much beyond year five that it's usually insane not to trade it in for something newer.
|
# ? Oct 30, 2013 21:55 |
|
Misogynist posted:but prices go up so much beyond year five that it's usually insane not to trade it in for something newer.

Unless your finance department is so retarded that they forget to forecast for planned obsolescence and choose to get gouged on support contracts instead of gouged even harder on new gear purchases. "Pay $10k for support on eight-year-old kit instead of paying $40k for new gear and new warranty support? We save $30k! Aren't we awesome!"

When we moved into our new offices, I named our SSID "Sparky" because that's how I referred to the gear I kept patching together. The guest SSID was called "Smoky". Same reason. I thought "Downtime", "Blown Capacitors" and "Performance Issues" were too obvious.
|
# ? Oct 30, 2013 22:12 |
|
Agrikk posted:Unless your finance department is so retarded that they forget to forecast for planned obsolescence so they chose to get gouged on support contracts instead of getting bigger gouged on new gear purchases. A friend I used to work with always wanted to name a Netapp "uncorrectablereaderrorer" e: btw Hitachi snapshotting / api / command line / horcm can blow me.
|
# ? Oct 30, 2013 22:48 |
|
Bob Morales posted:Exactly as old as our VMware servers. Makes sense. Ugh, I hate my cheap boss.

The 2020 was sold all the way through mid-2012, so it probably doesn't date back all the way to 2007. And it doesn't go end of support until 2017, so you've got some time before you HAVE to get rid of it. However, it is pretty underpowered, and it doesn't support ONTAP 8.x, which means you're basically orphaned on maintenance releases of 7.3 until you upgrade to newer hardware.
|
# ? Oct 31, 2013 01:03 |
|
Is there such a thing as a NAS aimed at the SMB market that isn't utter poo poo? It seems that vendors are more concerned with putting crap like iTunes servers on them and building in photo gallery applications than making sure that they aren't full of show-stopping bugs.
|
# ? Oct 31, 2013 01:37 |
|
Got an email from Starboard Storage which says they are "winding down." Now I'm stuck with an unsupported piece of poo poo for the rest of my life. They claim they will offload the remaining support to a third party, but we all really know what that means. Take a shot with me, other Starboard goons, all 3 of you
|
# ? Oct 31, 2013 01:53 |
|
Misogynist posted:Most major players will support past the five year mark unless there's a supply chain reason not to (see: BlueArc acquisition and LSI parts) but prices go up so much beyond year five that it's usually insane not to trade it in for something newer.
|
# ? Oct 31, 2013 01:55 |
|
For example? Synology are pretty good compared to Netgear, Thecus, QNAP, and all the fourth-rate poo poo from Seagate, ioSafe, Buffalo (really bad), D-Link, Drobo, WD, etc. http://www.synology.com/dsm/dsm_for_business.php?lang=us You are probably better off with FreeNAS, Open-E, Wasabi, etc.
|
# ? Oct 31, 2013 01:59 |
|
Yeah it's Synology box that's giving me issues. Put it under load and it restarts a load of services and drops all connections momentarily. Not really getting anywhere with their support people. Edit: Performance counters all look good - not maxing out RAM, CPU, NICs etc. Really annoying me now.
|
# ? Oct 31, 2013 02:04 |
|
You could enable SSH and check the logs of Samba and NFSD. First step would probably be to rule out intermediary network equipment though.
|
# ? Oct 31, 2013 02:20 |
|
Was just poking around on the box via SSH to see if the logs had anything meaningful in (they don't) and got the following:code:
|
# ? Oct 31, 2013 02:39 |
|
Make sure they replace it with a new unit and don't just run a quick test and return the same unit. I had Netgear do that and proceed to fry about 8 new disks.
|
# ? Oct 31, 2013 02:49 |
|
Yeah I'll take note of the serials before it goes away. Thanks for the help, I thought it was a different issue but nope, same network problems.
|
# ? Oct 31, 2013 02:52 |
|
Well, looks like I was given a "Dilbert, you do what you want to do, or feel is best" freedom.

VNX5400
4x 100GB SSD FAST Cache
4x 400GB SSD Tier 1 storage
16x 300GB 15K HDD Tier 2
+ shelf of 600GB 10K drives Tier 3
2x DP 10Gb NICs per SP
2x 3750s
2x 10Gb SFPs (uplinks from storage)
24x 1Gb ports (4x 1Gbps to hosts)

Glad to go without "durr we want FC"; server VMs presented via iSCSI, desktops via NFS. Anyone interested, please ask me questions. I don't have some contract saying I can't talk about it, so feel free to ask any questions. Dilbert As FUCK fucked around with this message at 03:46 on Nov 3, 2013 |
# ? Nov 3, 2013 03:38 |