YOLOsubmarine
Oct 19, 2004

When asked which Pokemon he evolved into, Kamara pauses.

"Motherfucking, what's that big dragon shit? That orange motherfucker. Charizard."

Misogynist posted:

A decent storage device should be able to take care of this regardless of client with policy-based routing (a connection is started and associated with a network interface, then all traffic related to that connection flows over the same interface). In practice, this is pretty wonky and barely works right in Linux and FreeBSD, and I wouldn't trust most storage vendor firmware based on vxWorks or whatever to just get it right.

The storage vendor can enforce it with source routing rules on their side, but they are still at the mercy of what the client elects to do when they send data, so it's easier to just force the behavior you want by using multiple subnets and keeping the routing table clean on both the client and the storage side. I know that some initiators and storage targets either strongly suggest or even enforce this behavior by not allowing multiple sessions or MCS over interfaces in the same subnet.


bull3964 posted:

Dell's best practices for that array have all 4 ports on a single VLAN, using a single VLAN for all your storage traffic.

EQL is a bit of an odd one since they do recommend everything being in the same subnet, I believe because they require that every port on a node be able to communicate with every port on every other node. They have an MPIO DSM that handles load balancing and likely enforces the port-binding behavior they want for TCP connections. Obviously you should follow whatever your vendor's guidelines are, but as a general rule you're usually safe building out your iSCSI switching layer analogously to redundant FC fabrics, where you maintain two completely separate fabrics (usually physical, but they can be virtual) and home at least one initiator port and one target port to each.
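
To put the "keep the routing table clean" point in concrete terms, here's a toy Python sketch of how an initiator ends up picking an outbound interface when each iSCSI portal lives in its own subnet. The interface names and addresses are invented, and this is a cartoon of longest-prefix-match routing, not anyone's actual stack:

code:
import ipaddress

# Two dedicated iSCSI subnets, one per NIC (all names/addresses invented).
interfaces = {
    "eth2": ipaddress.ip_network("10.10.10.0/24"),   # iSCSI fabric A
    "eth3": ipaddress.ip_network("10.10.20.0/24"),   # iSCSI fabric B
}

def outbound_interface(dst: str) -> str:
    """Pick the NIC by longest-prefix match, the way a clean routing table would."""
    dst_ip = ipaddress.ip_address(dst)
    matches = [(net.prefixlen, name) for name, net in interfaces.items() if dst_ip in net]
    if not matches:
        raise ValueError("no connected route; this would fall back to the default gateway")
    return max(matches)[1]

# With one subnet per path, each target portal is only ever reachable via one NIC:
print(outbound_interface("10.10.10.50"))   # eth2
print(outbound_interface("10.10.20.50"))   # eth3
# With everything in one flat subnet, both NICs match and the choice is left to
# the initiator's connection binding -- the "wonky" behavior described above.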

Dilbert As FUCK
Sep 8, 2007

by Cowcaster
Pillbug

kiwid posted:

Ah, thanks for clearing that up. I could never find anywhere that actually explained that.

Tagging on that, iSCSI traffic is clear text as well, and anyone sniffing could potentially read the data and attach to the iSCSI target (CHAP can help, but it does not encrypt).

evol262
Nov 30, 2010
#!/usr/bin/perl

Dilbert As FUCK posted:

Tagging on that, iSCSI traffic is clear text as well, and anyone sniffing could potentially read the data and attach to the iSCSI target (CHAP can help, but it does not encrypt).

Mask the LUNs. This isn't always surefire, but it's a lot better than just relying on iSNS and maybe CHAP.

iSCSI traffic is cleartext, but trivially encapsulated in IPSEC if you really cared. Really, someone would have to be awfully dedicated to try to assemble a block device from iSCSI traffic. They can certainly sniff some of your data (though sensitive data should be encrypted before it hits the disks anyway), but the likelihood of them doing any sort of real espionage or damage from trying to reassemble a device from SCSI commands is low as long as you have layered security.
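
For anyone who hasn't run into it, LUN masking boils down to an ACL keyed on the initiator's IQN. A minimal Python sketch (the masking table and IQNs are made up; this is not any vendor's actual interface) shows both why it stops accidents and why it isn't real security on its own, since the IQN is whatever the client claims it is:

code:
# Hypothetical masking table: LUN id -> set of initiator IQNs allowed to see it.
LUN_MASKS = {
    0: {"iqn.1998-01.com.vmware:esx01-abcdef12"},
    1: {"iqn.1998-01.com.vmware:esx01-abcdef12",
        "iqn.1998-01.com.vmware:esx02-34567890"},
}

def visible_luns(initiator_iqn):
    """Return the LUNs a given initiator IQN is allowed to see."""
    return sorted(lun for lun, allowed in LUN_MASKS.items() if initiator_iqn in allowed)

# Great for stopping "whoops, I connected to the wrong LUN":
print(visible_luns("iqn.1998-01.com.vmware:esx02-34567890"))   # [1]
# Useless against someone willing to present an IQN they sniffed off the wire:
print(visible_luns("iqn.1998-01.com.vmware:esx01-abcdef12"))   # [0, 1]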

Dilbert As FUCK
Sep 8, 2007

by Cowcaster
Pillbug

evol262 posted:

Mask the LUNs. This isn't always surefire, but it's a lot better than just relying on iSNS and maybe CHAP.

iSCSI traffic is cleartext, but trivially encapsulated in IPSEC if you really cared. Really, someone would have to be awfully dedicated to try to assemble a block device from iSCSI traffic. They can certainly sniff some of your data (though sensitive data should be encrypted before it hits the disks anyway), but the likelihood of them doing any sort of real espionage or damage from trying to reassemble a device from SCSI commands is low as long as you have layered security.



I agree but hey, point is don't set everything on the same flat network!

YOLOsubmarine
Oct 19, 2004

When asked which Pokemon he evolved into, Kamara pauses.

"Motherfucking, what's that big dragon shit? That orange motherfucker. Charizard."

evol262 posted:

Mask the LUNs. This isn't always surefire, but it's a lot better than just relying on iSNS and maybe CHAP.

iSCSI traffic is cleartext, but trivially encapsulated in IPSEC if you really cared. Really, someone would have to be awfully dedicated to try to assemble a block device from iSCSI traffic. They can certainly sniff some of your data (though sensitive data should be encrypted before it hits the disks anyway), but the likelihood of them doing any sort of real espionage or damage from trying to reassemble a device from SCSI commands is low as long as you have layered security.

iSCSI LUNs are masked by IQNs, which are user-configurable, so it's trivially easy to work around any security that's based on masking. Worse, if you have a single flat network that contains both storage IPs and normal user traffic IPs, your chances of ending up with IP conflicts are much higher, and when someone steals an IP address from your storage array's iSCSI target for their newly built server or desktop or whatever, things can get bad pretty quickly. Or, if the user is malicious, they can do this on purpose and DoS your storage device. Keeping your storage traffic segregated from the wider user network, and unrouted, is a good idea.

IPsec encryption isn't always viable either, as it can add additional load to clients and storage, as well as adding latency to storage response times. In a high-performance environment that can be problematic. Most shops aren't going to go through the trouble of encrypting storage traffic that only traverses their local network.

evol262
Nov 30, 2010
#!/usr/bin/perl

NippleFloss posted:

iSCSI LUNs are masked by IQNs, which are user-configurable, so it's trivially easy to work around any security that's based on masking. Worse, if you have a single flat network that contains both storage IPs and normal user traffic IPs, your chances of ending up with IP conflicts are much higher, and when someone steals an IP address from your storage array's iSCSI target for their newly built server or desktop or whatever, things can get bad pretty quickly. Or, if the user is malicious, they can do this on purpose and DoS your storage device. Keeping your storage traffic segregated from the wider user network, and unrouted, is a good idea.

IPsec encryption isn't always viable either, as it can add additional load to clients and storage, as well as adding latency to storage response times. In a high-performance environment that can be problematic. Most shops aren't going to go through the trouble of encrypting storage traffic that only traverses their local network.

I'm not saying you shouldn't VLAN off your storage traffic. For a number of reasons (not least of which being MTU and performance) you should. And a malicious user can get around pretty much anything. But masking eliminates most of the "whoops, I connected to the wrong LUN and hosed everything up" problems. Nothing's going to stop "I sniffed traffic and spoofed an IQN", but when does that actually happen?

bull3964
Nov 18, 2000

DO YOU HEAR THAT? THAT'S THE SOUND OF ME PATTING MYSELF ON THE BACK.


No mention at all yet of Seagate's Kinetic Open Storage that was announced over the weekend?

http://www.seagate.com/tech-insights/kinetic-vision-how-seagate-new-developer-tools-meets-the-needs-of-cloud-storage-platforms-master-ti/

It seems like an interesting concept, though I'm still trying to wrap my brain fully around it. Ethernet enabled low level storage devices (drives) manipulated directly via an API (PUT, DELETE, GET commands). No controller, no RAID subsystem, no operating system, no SAN.

Vulture Culture
Jul 14, 2003

I was never enjoying it. I only eat it for the nutrients.

bull3964 posted:

No mention at all yet of Seagate's Kinetic Open Storage that was announced over the weekend?

http://www.seagate.com/tech-insights/kinetic-vision-how-seagate-new-developer-tools-meets-the-needs-of-cloud-storage-platforms-master-ti/

It seems like an interesting concept, though I'm still trying to wrap my brain fully around it. Ethernet enabled low level storage devices (drives) manipulated directly via an API (PUT, DELETE, GET commands). No controller, no RAID subsystem, no operating system, no SAN.

It's interesting that they're targeting OpenStack Swift and Riak CS out of the gate. Here's the developer documentation wiki for anyone who's interested.

I'm very disappointed by how little hard technical documentation is out there right now, because this seems like a really neat product despite the vendor lock-in that's driven lots of people to other object storage platforms. There's nothing on consistency guarantees (is there even read-after-write consistency or not?), conflict resolution, partition tolerance, nothing. The way they chose Riak CS and OpenStack Swift as their first third-party integrations, along with the bit on DHTs in the documentation, suggests to me that the underlying implementation is fairly similar to Dynamo, with consistent hashing at the heart of it, but even that level of detail is just guesswork.

I can't even tell if this is a real piece of technical achievement or another "me-too" attempt to capitalize on the NoSQL/denormalization thing that's happening right now, because there's nothing to look at but a press release and 15 pages of non-technical documentation.
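
For reference, the consistent-hashing idea being guessed at here is small enough to sketch in a few lines of Python. This is a generic toy ring, purely to illustrate the Dynamo-style placement being speculated about; nothing below reflects how Kinetic actually places data:

code:
import bisect
import hashlib

def _h(s):
    """Hash a string onto the ring (md5 just for the illustration)."""
    return int(hashlib.md5(s.encode()).hexdigest(), 16)

class Ring:
    def __init__(self, nodes, vnodes=64):
        # Each drive gets `vnodes` points on the ring to even out load.
        self._points = sorted((_h("%s#%d" % (n, i)), n) for n in nodes for i in range(vnodes))
        self._hashes = [h for h, _ in self._points]

    def node_for(self, key):
        """The first node clockwise from the key's hash owns the key."""
        idx = bisect.bisect(self._hashes, _h(key)) % len(self._points)
        return self._points[idx][1]

ring = Ring(["drive-%d" % i for i in range(8)])
print(ring.node_for("object/1234"))
# Adding or removing a drive only remaps the keys adjacent to its ring points,
# which is what makes this attractive when the "nodes" are individual disks.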

Vulture Culture fucked around with this message at 17:20 on Oct 28, 2013

YOLOsubmarine
Oct 19, 2004

When asked which Pokemon he evolved into, Kamara pauses.

"Motherfucking, what's that big dragon shit? That orange motherfucker. Charizard."

I would guess that within their API call for a PUT operation they have a field that lets you set the redundancy level of the file you are writing, and it then ensures that it gets written to multiple nodes based on that redundancy level. They also mention end-to-end data consistency, including handling of silent data corruption, so I assume they have some sort of checksum-based integrity checking.

I think you're correct that it sounds a lot like Dynamo or any other DHT, just running directly on drive hardware rather than an abstracted compute/storage node. So it's hardware-defined software-defined storage...

Molten Llama
Sep 20, 2006

From the initial developer documentation and sample code, redundancy appears to be an application-layer concern. Which is probably why they're supporting Riak and OpenStack Swift out of the gate.

But the documentation isn't great, and everything they released on GitHub is compiled to bytecode (:wtc:), so take that with a grain of salt.
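
Reading it that way, here's a rough Python sketch of what "dumb key/value drives, with the application handling replication and integrity" could look like. Everything here is invented for illustration (the KineticDrive class, its methods, the placement scheme); it is not the actual Kinetic API:

code:
import hashlib

class KineticDrive:
    """Stand-in for one Ethernet-attached key/value drive."""
    def __init__(self, address):
        self.address = address
        self.store = {}                      # key -> (checksum, value)

    def put(self, key, value):
        # Guess at the end-to-end integrity claim: keep a checksum with the
        # value so silent corruption can be caught on read.
        self.store[key] = (hashlib.sha1(value).hexdigest(), value)

    def get(self, key):
        checksum, value = self.store[key]
        if hashlib.sha1(value).hexdigest() != checksum:
            raise IOError("silent corruption detected for %r" % key)
        return value

def put_with_redundancy(drives, key, value, replicas=2):
    """Application-layer replication: write the key to `replicas` drives,
    chosen deterministically from the key (rendezvous-hash style)."""
    ranked = sorted(drives, key=lambda d: hashlib.sha1(key + d.address.encode()).hexdigest())
    for drive in ranked[:replicas]:
        drive.put(key, value)

cluster = [KineticDrive("10.0.0.%d" % i) for i in range(1, 5)]
put_with_redundancy(cluster, b"photo-123", b"\x89PNG...", replicas=2)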

Vulture Culture
Jul 14, 2003

I was never enjoying it. I only eat it for the nutrients.

The fact that they have an HDFS adapter for this suggests that either they or their customers don't know how to use Hadoop. I don't know if that makes me want to laugh or cry.

Agrikk
Oct 17, 2003

Take care with that! We have not fully ascertained its function, and the ticking is accelerating.

bull3964 posted:

Dell's best practices for that array have all 4 ports on a single VLAN, using a single VLAN for all your storage traffic.

The MPIO config for my NetApp FAS2050 was the same way. If you have a host with NicA and NicB and the Filer with active/active heads listening on Nic1 and Nic2, their best practice was to have all four on the same VLAN and subnet so you could have:

NicA -> Nic1
NicA -> Nic2
NicB -> Nic2
NicB -> Nic1

traffic patterns. For better redundancy: have a stacked pair of switches serving up only iSCSI traffic so there's no need to VLAN at all and plug NicA and Nic1 into SwitchA and NicB and Nic2 on SwitchB.

I'm not sure if that's still the way NetApp does it as the FAS2050 is ancient by IT years.
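
For anyone keeping score, the difference between the two schemes is just which (host NIC, filer NIC) pairs get sessions. A quick Python toy, with the interface names from the post and made-up subnet assignments:

code:
from itertools import product

host_nics  = ["NicA", "NicB"]
filer_nics = ["Nic1", "Nic2"]

# Single subnet, full mesh: every host NIC can reach every filer NIC,
# so the initiator can build 2 x 2 = 4 sessions.
full_mesh = list(product(host_nics, filer_nics))
print(full_mesh)   # [('NicA', 'Nic1'), ('NicA', 'Nic2'), ('NicB', 'Nic1'), ('NicB', 'Nic2')]

# One subnet per path: each host NIC only shares a subnet with one filer NIC,
# so you get two pinned pairs instead.
subnet = {"NicA": "iscsi-a", "Nic1": "iscsi-a", "NicB": "iscsi-b", "Nic2": "iscsi-b"}
per_subnet = [(h, f) for h, f in product(host_nics, filer_nics) if subnet[h] == subnet[f]]
print(per_subnet)  # [('NicA', 'Nic1'), ('NicB', 'Nic2')]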

kiwid
Sep 30, 2013

Agrikk posted:

The MPIO config for my NetApp FAS2050 was the same way. If you have a host with NicA and NicB and the Filer with active/active heads listening on Nic1 and Nic2, their best practice was to have all four on the same VLAN and subnet so you could have:

NicA -> Nic1
NicA -> Nic2
NicB -> Nic2
NicB -> Nic1

traffic patterns. For better redundancy: have a stacked pair of switches serving up only iSCSI traffic so there's no need to VLAN at all and plug NicA and Nic1 into SwitchA and NicB and Nic2 on SwitchB.

I'm not sure if that's still the way NetApp does it as the FAS2050 is ancient by IT years.

So you get 4 paths this way?

I'm now wondering if I even have my home lab setup correctly. I have two NICs on my host and two NICs on my NAS so two paths using separate VLANs, right?

Does this look right? http://imgur.com/a/Lnxsz

NullPtr4Lunch
Jun 22, 2012

Aquila posted:

Oh god what have I done:

HUS 150 with SSD Tier is what I've done :getin:



Mine's just a baby. All 10k SAS, unfortunately. :mad:

Edit: lol, I can't read, apparently.

NullPtr4Lunch fucked around with this message at 23:45 on Oct 29, 2013

YOLOsubmarine
Oct 19, 2004

When asked which Pokemon he evolved into, Kamara pauses.

"Motherfucking, what's that big dragon shit? That orange motherfucker. Charizard."

Agrikk posted:

The MPIO config for my NetApp FAS2050 was the same way. If you have a host with NicA and NicB and the Filer with active/active heads listening on Nic1 and Nic2, their best practice was to have all four on the same VLAN and subnet so you could have:

NicA -> Nic1
NicA -> Nic2
NicB -> Nic2
NicB -> Nic1

traffic patterns. For better redundancy: have a stacked pair of switches serving up only iSCSI traffic so there's no need to VLAN at all and plug NicA and Nic1 into SwitchA and NicB and Nic2 on SwitchB.

I'm not sure if that's still the way NetApp does it as the FAS2050 is ancient by IT years.

This setup is not recommended. It may not cause problems, but it's not a best practice and it *can* cause problems.

From TR-3441:

"NetApp recommends that if using multiple paths or sessions with iSCSI each path should be isolated to it's own subnet."


kiwid posted:

So you get 4 paths this way?

I'm now wondering if I even have my home lab setup correctly. I have two NICs on my host and two NICs on my NAS so two paths using separate VLANs, right?

Does this look right? http://imgur.com/a/Lnxsz

Yes, this is correct.

Agrikk
Oct 17, 2003

Take care with that! We have not fully ascertained its function, and the ticking is accelerating.

kiwid posted:

So you get 4 paths this way?

I'm now wondering if I even have my home lab setup correctly. I have two NICs on my host and two NICs on my NAS so two paths using separate VLANs, right?

Does this look right? http://imgur.com/a/Lnxsz

Yes, you get four paths. However, in my case the filer had two heads, each acting independently and controlling its own disk aggregate for the most part, so each NIC on my server needs a path to each head. 2 NICs on the server * 2 heads = 4 paths.

For a single host to a single NAS, each with two NICs, you configure two paths on separate VLANs: NicA to Nic1 on VLAN1 and NicB to Nic2 on VLAN2.

Agrikk
Oct 17, 2003

Take care with that! We have not fully ascertained its function, and the ticking is accelerating.

NippleFloss posted:

This setup is not recommended. It may not cause problems, but it's not a best practice and it *can* cause problems.

From TR-3441:

"NetApp recommends that if using multiple paths or sessions with iSCSI each path should be isolated to it's own subnet."

That has never made sense to me. By using a single subnet for a path, I must therefore limit that path to a specific pair of NICs (initiator to target) over a specific switch.

code:

Initiator on ServerA  Nic1 ---- Switch 1 --- NicA   HeadA of Target
                      Nic2 ---- Switch 2 --- NicB   HeadB of Target

In the above example, if I were to lose Nic2 on ServerA, then the connection to HeadB drops for ServerA. Since HeadB still thinks it is up and can still take connections from other servers, it retains control of its aggregate. Since there's no failover, I lose all of the disks presented to ServerA on the aggregate controlled by HeadB, even though I have a physical connection path from Nic1 to NicB via the switch stack of Switch1/Switch2.


How does NetApp propose to resolve this problem with one subnet per path?

YOLOsubmarine
Oct 19, 2004

When asked which Pokemon he evolved into, Kamara pauses.

"Motherfucking, what's that big dragon shit? That orange motherfucker. Charizard."

Agrikk posted:

That has never made sense to me. By using a single subnet for a path, I must therefore limit that path to a specific pair of NICs (initiator to target) over a specific switch.

code:

Initiator on ServerA  Nic1 ---- Switch 1 --- NicA   HeadA of Target
                      Nic2 ---- Switch 2 --- NicB   HeadB of Target

In the above example, if I were to lose Nic2 on ServerA, then the connection to HeadB drops for ServerA. Since HeadB still thinks it is up and can still take connections from other servers, it retains control of its aggregate. Since there's no failover, I lose all of the disks presented to ServerA on the aggregate controlled by HeadB, even though I have a physical connection path from Nic1 to NicB via the switch stack of Switch1/Switch2.


How does NetApp propose to resolve this problem with one subnet per path?

Well, you're missing two connections. ServerA Nic2 should also have a path to HeadA NicB (on a different subnet from Nic1), and ServerA Nic1 should have a path to HeadB NicA. Your setup doesn't make much sense because the problem doesn't have anything to do with limiting paths to subnets; you just plain haven't made enough connections to your switches to allow for all of the required paths.


Agrikk posted:

Yes, you get four paths. However, in my case the filer had two heads, each acting independently and controlling its own disk aggregate for the most part, so each NIC on my server needs a path to each head. 2 NICs on the server * 2 heads = 4 paths.

For a single host to a single NAS, each with two NICs, you configure two paths on separate VLANs: NicA to Nic1 on VLAN1 and NicB to Nic2 on VLAN2.

You don't really have four paths. You have two paths to any one LUN with this setup. LUNs are owned by a controller, and only paths through the owning controller are available to MPIO (for iSCSI; FC on NetApp is different). So you have two paths, and when controller failover happens you still have two paths. If you use the same-subnet setup you can FAKE having four paths by doing a full mesh of each initiator to each target, but you don't really have four paths, because two will drop out with any failure.

Agrikk
Oct 17, 2003

Take care with that! We have not fully ascertained its function, and the ticking is accelerating.

NippleFloss posted:

Well, you're missing two connections.

You don't really have four paths. You have two paths to any one LUN with this setup.

So what is a full-featured four path setup, then?

Four NICs on each device (with two NICs on each head)? Can you detail it, please? Because I've been thinking about this wrong the whole time, apparently.

YOLOsubmarine
Oct 19, 2004

When asked which Pokemon he evolved into, Kamara pauses.

"Motherfucking, what's that big dragon shit? That orange motherfucker. Charizard."

Agrikk posted:

So what is a full-featured four path setup, then?

Four NICs on each device (with two NICs on each head)? Can you detail it, please? Because I've been thinking about this wrong the whole time, apparently.

You would require 4 NICs on your server and 4 NICs on each controller if you wanted a true 4-path setup. Each server-to-controller NIC pair is a single path. Obviously you could do things like 4 NICs per server and 2 NICs per controller, but then you've got a 2-to-1 fan-in ratio at the controller ports, so you aren't getting a full 4 paths' worth of performance or redundancy and it really doesn't buy you much. The secondary controller doesn't provide any usable paths under normal (i.e. non-failover) operation, so you can't count its NICs as available MPIO paths.

Under clustered ONTAP this is different, as all paths through all nodes are active and ALUA is used to decide which are the preferred paths. But you'd still want to make sure that all paths on the same controller are on different subnets.
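
The path-counting argument is easy to sanity-check with a toy script. Port names, subnets, and the one-subnet-per-path pairing below are all invented, and this models 7-Mode style active/passive LUN ownership rather than clustered ONTAP with ALUA:

code:
lun_owner = "HeadA"                              # the controller that owns the LUN

# (server NIC, controller head, controller port, subnet) -- one subnet per path.
sessions = [
    ("NicA", "HeadA", "e0a", "iscsi-a"),
    ("NicB", "HeadA", "e0b", "iscsi-b"),
    ("NicC", "HeadB", "e0a", "iscsi-c"),         # these land on the partner head,
    ("NicD", "HeadB", "e0b", "iscsi-d"),         # so they don't serve this LUN
]

usable = [s for s in sessions if s[1] == lun_owner]
print("%d sessions, %d real MPIO paths to the LUN" % (len(sessions), len(usable)))
# 4 sessions, 2 real MPIO paths -- a true 4-path setup needs 4 server NICs
# *and* 4 ports on the owning controller.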

Bob Morales
Aug 18, 2006


Just wear the fucking mask, Bob

I don't care how many people I probably infected with COVID-19 while refusing to wear a mask, my comfort is far more important than the health and safety of everyone around me!

quote:

SupportEdge Standard Part Replace 4hr - FAS2020 1 $4,360.00 $4,360.00
Post-Warranty Mths: 12
Begin: 11/1/2012
Serial#:700000102696
Model: FAS2020; # of Systems: 1; Total # of Heads: 1
Total # of Shelves: 1; Software License Capacity: Node
Disk1: 1TB ATA / 12 Base Disks; # of shelves disk1: 1

Maintenance on our little SAN is super expensive. What gives?

Thanks Ants
May 21, 2004

#essereFerrari


Nobody wants to support old stuff. Have you asked NetApp if you can trade it in?

Bob Morales
Aug 18, 2006


Just wear the fucking mask, Bob

I don't care how many people I probably infected with COVID-19 while refusing to wear a mask, my comfort is far more important than the health and safety of everyone around me!

Caged posted:

Nobody wants to support old stuff. Have you asked NetApp if you can trade it in?

Is it really that old? If it's as old as everything else here, that wouldn't surprise me. Here's the other half of the quote.

quote:

SupportEdge Standard Part Replace 4 hr - FAS2020A $2,994.00
Post-Warranty Mths: 12
Begin: 11/1/2012
Serial#:700000089848
Serial#:700000089850
Model: FAS2020A; # of Systems: 1; Total # of Heads: 2
Total # of Shelves: 1; Software License Capacity: Node
Disk1: 300GB SAS / 12 Base Disks; # of shelves disk1: 1

Thanks Ants
May 21, 2004

#essereFerrari


It's a 2007 model, and adding 12 months of warranty after the original (3 years?) warranty is up is always quite pricey. I wouldn't consider those costs ridiculous, but I'd seriously consider replacing it versus paying out in maintenance for another few years.

Bob Morales
Aug 18, 2006


Just wear the fucking mask, Bob

I don't care how many people I probably infected with COVID-19 while refusing to wear a mask, my comfort is far more important than the health and safety of everyone around me!

Caged posted:

It's a 2007 model, and adding 12 months of warranty after the original (3 years?) warranty is up is always quite pricey. I wouldn't consider those costs ridiculous, but I'd seriously consider replacing it versus paying out in maintenance for another few years.

Exactly as old as our VMware servers. Makes sense. Ugh, I hate my cheap boss.

skipdogg
Nov 29, 2004
Resident SRT-4 Expert

I'm surprised they're still offering to support it if it is a 2007 model. Usually enterprise kit gets 3 years to start and you can add another 2 if you pay the price. After 5 years it's usually End Of Support time.

Vulture Culture
Jul 14, 2003

I was never enjoying it. I only eat it for the nutrients.

skipdogg posted:

I'm surprised they're still offering to support it if it is a 2007 model. Usually enterprise kit gets 3 years to start and you can add another 2 if you pay the price. After 5 years it's usually End Of Support time.

Most major players will support past the five-year mark unless there's a supply-chain reason not to (see: BlueArc acquisition and LSI parts), but prices go up so much beyond year five that it's usually insane not to trade it in for something newer.

Agrikk
Oct 17, 2003

Take care with that! We have not fully ascertained its function, and the ticking is accelerating.

Misogynist posted:

but prices go up so much beyond year five that it's usually insane not to trade it in for something newer.

Unless your finance department is so retarded that they forget to forecast for planned obsolescence, so they choose to get gouged on support contracts instead of getting gouged even harder on new gear purchases.

"Pay $10k for support on eight-year-old kit instead of paying $40k for new gear and new warranty support? We save $30k! Aren't we awesome!"

When we moved into our new offices, I named our SSID "Sparky" because that's how I referred to the gear I kept patching together. The guest SSID was called "Smoky". Same reason.

I thought "Downtime", "Blown Capacitors" and "Performance Issues" were too obvious.

Aquila
Jan 24, 2003

Agrikk posted:

Unless your finance department is so retarded that they forget to forecast for planned obsolescence, so they choose to get gouged on support contracts instead of getting gouged even harder on new gear purchases.

"Pay $10k for support on eight-year-old kit instead of paying $40k for new gear and new warranty support? We save $30k! Aren't we awesome!"

When we moved into our new offices, I named our SSID "Sparky" because that's how I referred to the gear I kept patching together. The guest SSID was called "Smoky". Same reason.

I thought "Downtime", "Blown Capacitors" and "Performance Issues" were too obvious.

A friend I used to work with always wanted to name a NetApp "uncorrectablereaderrorer".

e: btw, Hitachi snapshotting / API / command line / HORCM can blow me.

YOLOsubmarine
Oct 19, 2004

When asked which Pokemon he evolved into, Kamara pauses.

"Motherfucking, what's that big dragon shit? That orange motherfucker. Charizard."

Bob Morales posted:

Exactly as old as our VMware servers. Makes sense. Ugh, I hate my cheap boss.

The 2020 was sold all the way through mid-2012, so it probably doesn't date back all the way to 2007. And it doesn't go end-of-support until 2017, so you've got some time before you HAVE to get rid of it. However, it is pretty underpowered and it doesn't support ONTAP 8.x, which means you're basically orphaned on maintenance releases of 7.3 until you upgrade to newer hardware.

Thanks Ants
May 21, 2004

#essereFerrari


Is there such a thing as a NAS aimed at the SMB market that isn't utter poo poo? It seems that vendors are more concerned with putting crap like iTunes servers on them and building in photo gallery applications than making sure that they aren't full of show-stopping bugs.

ghostinmyshell
Sep 17, 2004



I am very particular about biscuits, I'll have you know.

Got an email from Starboard Storage which says they are "winding down." Now I'm stuck with an unsupported piece of poo poo for the rest of my life. :smith:

They claim they will hand off the remaining support to a third party, but we all know what that means.

Take a shot with me other Starboard goons, all 3 of you :smith:

evil_bunnY
Apr 2, 2003

Misogynist posted:

Most major players will support past the five-year mark unless there's a supply-chain reason not to (see: BlueArc acquisition and LSI parts), but prices go up so much beyond year five that it's usually insane not to trade it in for something newer.

And if you want 5 years, by god, get it beforehand.

MrMoo
Sep 14, 2000

For example? Synology are pretty good compared to Netgear, Thecus, QNAP, and all the fourth-rate poo poo from Seagate, ioSafe, Buffalo (really bad), D-Link, Drobo, WD, etc.

http://www.synology.com/dsm/dsm_for_business.php?lang=us

You are probably better off with FreeNAS, Open-E, Wasabi, etc.

Thanks Ants
May 21, 2004

#essereFerrari


Yeah, it's a Synology box that's giving me issues. Put it under load and it restarts a load of services and drops all connections momentarily. Not really getting anywhere with their support people.

Edit: Performance counters all look good - not maxing out RAM, CPU, NICs etc. Really annoying me now.

MrMoo
Sep 14, 2000

You could enable SSH and check the logs of Samba and NFSD.

First step would probably be to rule out intermediary network equipment though.

Thanks Ants
May 21, 2004

#essereFerrari


Was just poking around on the box via SSH to see if the logs had anything meaningful in them (they don't) and got the following:

code:
Corrupted MAC on input.                                                
Disconnecting: Packet corrupt

I completely wiped everything out of this unit due to really strange network issues the other day and rebuilt it, and now they're back. Have tried multiple switches, directly connecting it etc. I think something's just fried in hardware land at this point.

MrMoo
Sep 14, 2000

Make sure they replace it with a new unit and don't just run a quick test and return the same unit. I had Netgear do that, and they proceeded to fry about 8 new disks.

Thanks Ants
May 21, 2004

#essereFerrari


Yeah, I'll take note of the serials before it goes away. Thanks for the help; I thought it was a different issue, but nope, same network problems.

Dilbert As FUCK
Sep 8, 2007

by Cowcaster
Pillbug

Well, looks like I was given "Dilbert, you do what you want to do, or feel is best" freedom.

VNX5400
4x 100GB SSD FAST Cache
4x 400GB SSD Tier 1 storage
16x 300GB 15K HDD Tier 2
+ shelf of 600GB 10K drives Tier 3
2x DP 10Gb NICs per SP

2x 3750s
2x 10Gb SFPs (uplinks from storage)
24x 1Gb ports (4x 1Gbps to hosts)

Glad to go without the "durr we want FC"; server VMs presented via iSCSI, desktops via NFS.

Anyone interested, feel free to ask questions; I don't have a contract saying I can't talk about it.

Dilbert As FUCK fucked around with this message at 03:46 on Nov 3, 2013
