three
Aug 9, 2007

i fantasize about ndamukong suh licking my doodoo hole

XMalaclypseX posted:

The transfer rates are over both ports.

What are you using to benchmark those numbers?


XMalaclypseX
Nov 18, 2002

three posted:

What are you using to benchmark those numbers?

iometer and SAN Headquarters

Mierdaan
Sep 14, 2004

Pillbug
He said he's using Round Robin, so he's not aggregating bandwidth over the 2x 1Gb ports. I'm not sure what kind of performance gain you can get by cranking IO operations limit down to 3, but it's not going to be spectacular, is it?

XMalaclypseX
Nov 18, 2002
It's a regular VMware tweak because the default Round Robin policy IOPS limit is 1000. Setting it to 3 balances lower traffic loads better.
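
For reference, the limit is set per device with esxcli; something like this on ESXi 5.x (the naa. device ID is a placeholder for whatever your EQL volumes show up as):

code:
  # list devices and confirm the path selection policy is VMW_PSP_RR
  esxcli storage nmp device list

  # drop the Round Robin IOPS limit from the default 1000 down to 3 on one device
  esxcli storage nmp psp roundrobin deviceconfig set --device=naa.xxxxxxxxxxxxxxxx --type=iops --iops=3

  # verify the change took
  esxcli storage nmp psp roundrobin deviceconfig get --device=naa.xxxxxxxxxxxxxxxx
The setting is per LUN, so it has to be repeated (or scripted) for each volume.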

Mierdaan
Sep 14, 2004

Pillbug

XMalaclypseX posted:

It's a regular VMware tweak because the default Round Robin policy IOPS limit is 1000. Setting it to 3 balances lower traffic loads better.

I know it's considered regular, but do you actually have guidance from EQL on what kind of performance increase you should expect out of it? Some well-known bloggers have questioned how effective that change actually is, so I'm not sure expecting throughput in excess of 100MB/s in a 2Gb+RR setup is that realistic. Like I said, you're not aggregating the 2 links together, you're just switching between them every. damned. IO.

YOLOsubmarine
Oct 19, 2004

When asked which Pokemon he evolved into, Kamara pauses.

"Motherfucking, what's that big dragon shit? That orange motherfucker. Charizard."

XMalaclypseX posted:

It's a regular VMware tweak because the default Round Robin policy IOPS limit is 1000. Setting it to 3 balances lower traffic loads better.

Just check your link saturation while you're running the benchmarks, rather than assuming that the issue is due to the network configuration. It's possible that you're not actually saturating either link and that your bottleneck is elsewhere.
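
On the host side the quickest sanity check is esxtop's network view while iometer is running (launch esxtop, then press 'n'); if I remember the columns right, the per-vmnic MbTX/s and MbRX/s values will tell you whether either iSCSI uplink is anywhere near line rate:

code:
  # on the ESXi host, while the benchmark is running
  esxtop          # press 'n' for the network view
  # watch MbTX/s / MbRX/s on the iSCSI vmnics; ~900+ Mb/s sustained on a
  # 1Gb uplink means the link itself is the bottleneck
SAN HQ should show you the same thing from the array's side.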

Vulture Culture
Jul 14, 2003

I was never enjoying it. I only eat it for the nutrients.

XMalaclypseX posted:

My question involves performance. I can consistently get about a 100MBps read rate and an 80MBps write rate. Something is telling me that these rates are particularly low. Any thoughts?
Dumb question, but you're running these systems on the same subnet, right? The second you get layer-3 anything involved, your block performance will go to poo poo.

XMalaclypseX
Nov 18, 2002

Mierdaan posted:

I know it's considered regular, but do you actually have guidance from EQL on what kind of performance increase you should expect out of it? Some well-known bloggers have questioned how effective that change actually is, so I'm not sure expecting throughput in excess of 100MB/s in a 2Gb+RR setup is that realistic. Like I said, you're not aggregating the 2 links together, you're just switching between them every. damned. IO.

The tweak came directly from EQL support.

XMalaclypseX fucked around with this message at 21:11 on Sep 10, 2012

XMalaclypseX
Nov 18, 2002

Misogynist posted:

Dumb question, but you're running these systems on the same subnet, right? The second you get layer-3 anything involved, your block performance will go to poo poo.

The SAN is separated onto its own VLAN but the two adapters are on the same subnet.

The only reason the throughput is at 80MBps right now is the multipathing. If I shut down an interface the bandwidth drops to only about 40MBps.
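
For what it's worth, a quick way to confirm both paths really are in play is something like this (the device ID is a placeholder again):

code:
  # confirm the device is claimed by Round Robin and list its paths
  esxcli storage nmp device list --device=naa.xxxxxxxxxxxxxxxx
  esxcli storage core path list --device=naa.xxxxxxxxxxxxxxxx
  # both paths should show as active; if one is dead or standby, only a
  # single 1Gb link is actually carrying traffic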

Vulture Culture
Jul 14, 2003

I was never enjoying it. I only eat it for the nutrients.

XMalaclypseX posted:

The SAN is separated onto its own VLAN but the two adapters are on the same subnet.
While many networking people will disagree, I've pretty generally found this to be a bad idea. Stick your interfaces onto the same subnet as your storage so you're not routing through your whole distribution layer, and see what happens to your performance. In our experience, introducing an L3 component into the mix dropped our performance at least 15%, in addition to making jumbo frames an incredibly difficult proposition.

Internet Explorer
Jun 1, 2005





Misogynist posted:

While many networking people will disagree, I've pretty generally found this to be a bad idea. Stick your interfaces onto the same subnet as your storage so you're not routing through your whole distribution layer, and see what happens to your performance. In our experience, introducing an L3 component into the mix dropped our performance at least 15%, in addition to making jumbo frames an incredibly difficult proposition.

I agree with this, but then again I'm not a big networking guy either. The EqualLogic SAN you are talking about only does iSCSI, so I would keep its traffic, plus any other iSCSI-related traffic, on the same subnet and same VLAN (VMware storage NICs, etc.). As long as both are separate from normal LAN traffic you should be good to go.
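
On the VMware side that usually means two vmkernel ports on the iSCSI subnet, each with a single active uplink, bound to the software iSCSI adapter; roughly this, with vmhba33/vmk1/vmk2 standing in for whatever your adapter and vmkernel ports are actually called:

code:
  # bind both iSCSI vmkernel ports to the software iSCSI adapter (ESXi 5.x)
  esxcli iscsi networkportal add --adapter=vmhba33 --nic=vmk1
  esxcli iscsi networkportal add --adapter=vmhba33 --nic=vmk2

  # confirm the binding
  esxcli iscsi networkportal list --adapter=vmhba33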

adorai
Nov 2, 2002

10/27/04 Never forget
Grimey Drawer

Misogynist posted:

While many networking people will disagree, I've pretty generally found this to be a bad idea. Stick your interfaces onto the same subnet as your storage so you're not routing through your whole distribution layer, and see what happens to your performance. In our experience, introducing an L3 component into the mix dropped our performance at least 15%, in addition to making jumbo frames an incredibly difficult proposition.
I agree, he should add vNICs to his VMware environment on that subnet, and remove layer 3 from the discussion.

Syano
Jul 13, 2005
Our environment is small enough that I can still afford dedicated switches for our SAN. I have a buddy who is an admin over on the other side of town who constantly rides me about wasting ports by dedicating switches just to SAN traffic, and how I should just be running all the traffic through the same top-of-rack switch and VLANing from there. Who cares, I say; as long as I can afford it, I am going to keep it simple and keep layer 3 off my SAN switches.

three
Aug 9, 2007

i fantasize about ndamukong suh licking my doodoo hole

Syano posted:

Our environment is small enough that I can still afford dedicated switches for our SAN. I have a buddy who is an admin over on the other side of town who constantly rides me about wasting ports by dedicating switches just to SAN traffic, and how I should just be running all the traffic through the same top-of-rack switch and VLANing from there. Who cares, I say; as long as I can afford it, I am going to keep it simple and keep layer 3 off my SAN switches.

You should ride him about being terrible. You're doing it the correct way. iSCSI should have its own dedicated network.

The only justification for not doing it that way is if he has no budget, and if so then maybe he should've gone NFS instead of iSCSI.

three fucked around with this message at 00:50 on Sep 11, 2012

adorai
Nov 2, 2002

10/27/04 Never forget
Grimey Drawer

Syano posted:

Our environment is small enough that I can still afford dedicated switches for our SAN. I have a buddy who is an admin over on the other side of town who constantly rides me about wasting ports by dedicating switches just to SAN traffic, and how I should just be running all the traffic through the same top-of-rack switch and VLANing from there. Who cares, I say; as long as I can afford it, I am going to keep it simple and keep layer 3 off my SAN switches.
A non-routed VLAN is not layer 3, it's still layer 2. Dedicating a switch (and NICs) to iSCSI is a waste.

three
Aug 9, 2007

i fantasize about ndamukong suh licking my doodoo hole

adorai posted:

A non-routed VLAN is not layer 3, it's still layer 2. Dedicating a switch (and NICs) to iSCSI is a waste.

It's silly to not pay ~$15,000 for a couple of stacked switches when it buys you increased stability; better performance (especially if the backplane of your top-of-rack switches can't support the full bandwidth of all the ports); higher reliability, since the environment is pristine; protection from the network team causing blips that really hurt iSCSI but not typical traffic; and easier management when pinpointing issues/changes/configuration problems.

I will concede that this is a debatable approach, but I can't believe any storage admin wouldn't dedicate NICs at the very minimum (especially at 1Gb; most environments don't need 10Gb anyway).

three fucked around with this message at 01:28 on Sep 11, 2012

Vulture Culture
Jul 14, 2003

I was never enjoying it. I only eat it for the nutrients.

three posted:

It's absurd to not pay ~$15,000 for a couple of stacked switches when it buys you increased stability; better performance (especially if the backplane of your top-of-rack switches can't support the full bandwidth of all the ports); higher reliability, since the environment is pristine; protection from the network team causing blips that really hurt iSCSI but not typical traffic; and easier management when pinpointing issues/changes/configuration problems.

If you want to utilize your same network, use NFS. It's much more forgiving.
At this point, it really depends on whether you're using (round-robin) 1 GbE or 10 GbE for your networking. There are some pretty major cost concerns associated with the extra 10 gigabit switches if you want any kind of reasonable port density at line rate. Lots of people do converged for their 10 gigabit infrastructure, but they haven't had the problems we've had, like unannounced network maintenance taking down my cluster on my wedding day because of isolation response :shobon:

three
Aug 9, 2007

i fantasize about ndamukong suh licking my doodoo hole

Misogynist posted:

At this point, it really depends on whether you're using (round-robin) 1 GbE or 10 GbE for your networking. There are some pretty major cost concerns associated with the extra 10 gigabit switches if you want any kind of reasonable port density at line rate. Lots of people do converged for their 10 gigabit infrastructure, but they haven't had the problems we've had, like unannounced network maintenance taking down my cluster on my wedding day because of isolation response :shobon:

The price goes up a bit if you're using 10Gb (which most people probably don't need), but it's the foundation for a SAN infrastructure that probably costs several hundred thousand dollars.

The common belief that iSCSI is worse than FC comes from people trying to implement it on their existing network infrastructure, and the problems that causes.

Vulture Culture
Jul 14, 2003

I was never enjoying it. I only eat it for the nutrients.

three posted:

The common belief that iSCSI is worse than FC comes from people trying to implement it on their existing network infrastructure, and the problems that causes.
At the same time, if you're buying all that networking infrastructure from scratch, the cost advantage of iSCSI relative to FC evaporates. iSCSI is mostly marketed as a cost savings versus FC, so it's not particularly surprising that it tends to be purchased on those grounds as well.

Syano
Jul 13, 2005

adorai posted:

Dedicating a switch (and NICs) to iSCSI is a waste.
Who cares if it's a waste? My port count is so low I can spend a few extra dollars and literally never have to worry about the other issues mentioned here.

YOLOsubmarine
Oct 19, 2004

When asked which Pokemon he evolved into, Kamara pauses.

"Motherfucking, what's that big dragon shit? That orange motherfucker. Charizard."

Syano posted:

Who cares if it's a waste? My port count is so low I can spend a few extra dollars and literally never have to worry about the other issues mentioned here.

The issues being mentioned are pretty overblown. If you've got the extra switches lying around then, by all means, use them for that purpose. But if you're buying switches with sufficient backplane bandwidth that you don't end up over-subscribed, and you're utilizing VLANs for traffic segregation, and you're doing VPCs or stacking to get physical redundancy, then you're really only left with user error as a concern.

Sure, if your network admins are dumb then they might take down your storage, but they might also take down the hosts, or they might take down your private storage network just as well.

Many places, especially those running 10G and trying to reduce the number of ports and switches they need to support, aren't going to buy dedicated switchgear just for storage when the justification for it is so thin. And if they do go that route then they will probably just buy FC anyway and have a generally more stable storage backbone.

Like Misogynist said, for most places going iSCSI is a cost issue, so anything that increases the price dilutes that value proposition and makes it a harder sell.

BnT
Mar 10, 2006

adorai posted:

Dedicating a switch (and NICs) to iSCSI is a waste.

In general I agree with you, but one of the major issues with non-dedicated iSCSI switches is that you then have to consider and design for STP convergence. A convergence event can have a nasty impact on iSCSI traffic, even if the failed switch doesn't touch your iSCSI traffic.

XMalaclypseX posted:

My question involves performance. I can consistently get about a 100MBps read rate and an 80MBps write rate. Something is telling me that these rates are particularly low. Any thoughts?

What's your current MTU? If you're not already using jumbo frames and are able to support them, you may find some serious gains there.
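
If you do go jumbo, it has to be enabled end to end: the array, the physical switch ports, the vSwitch and the vmkernel ports. A rough sketch of the host side plus a quick test (vSwitch1, vmk1 and the group IP are placeholders):

code:
  # raise the MTU on the iSCSI vSwitch and vmkernel port (ESXi 5.x)
  esxcli network vswitch standard set --vswitch-name=vSwitch1 --mtu=9000
  esxcli network ip interface set --interface-name=vmk1 --mtu=9000

  # verify jumbo frames survive the trip to the array
  # (8972 = 9000 minus 28 bytes of IP/ICMP headers, -d = don't fragment)
  vmkping -d -s 8972 192.168.100.10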

Vulture Culture
Jul 14, 2003

I was never enjoying it. I only eat it for the nutrients.

BnT posted:

In general I agree with you, but one of the major issues with non-dedicated iSCSI switches is that you then have to consider and design for STP convergence. A convergence event can have a nasty impact on iSCSI traffic, even if the failed switch doesn't touch your iSCSI traffic.
Are many people really running vanilla STP instead of RSTP or RPVST+ these days, though? Both of those protocols should take at most a few seconds to converge if the device at the other end of a link suddenly drops dead, which is well within the timeouts of any sane OS configuration.

three
Aug 9, 2007

i fantasize about ndamukong suh licking my doodoo hole

NippleFloss posted:

The issues being mentioned are pretty overblown.

[snip]

And if they do go that route then they will probably just buy FC anyway and have a generally more stable storage backbone.

Your post contradicts itself. You say the issues are overblown, then say FC is more stable. The reason FC is considered more stable is that it uses its own switches and people don't usually gently caress with it and gently caress it up. The issues mentioned are real-world scenarios, not hypothetical paradises where people don't do dumb things and break your storage network. (Hint: a lot of the people you'll work with are really bad at their jobs.)

Having a pristine, less-hosed-with, easily monitored and managed environment is so critical to stability. And the cost is so negligible given the cost of a quality Compellent, EMC, NetApp, etc. array. Don't cut corners on storage.

1000101
May 14, 2003

BIRTHDAY BIRTHDAY BIRTHDAY BIRTHDAY BIRTHDAY BIRTHDAY FRUITCAKE!

BnT posted:

In general I agree with you, but one of the major issues with non-dedicated iSCSI switches is that you then have to consider and design for STP convergence. A convergence event can have a nasty impact on iSCSI traffic, even if the failed switch doesn't touch your iSCSI traffic.

You can avoid this by turning on 'portfast' (assuming Cisco switches) to prevent iSCSI initiator/target ports from sending or responding to a TCN.

You can also opt to use rapid spanning tree (also with portfast), or just use a VLAN that doesn't have loops in the topology and shut off spanning tree for that specific VLAN.

Upping your SCSI timeouts to account for STP convergence is a good idea as well.
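
If it helps, the switch side of that looks roughly like this on Catalyst-type gear (the VLAN number and interface range are made up), plus the guest-side timeout bump on Linux:

code:
  ! Cisco IOS sketch: rapid spanning tree, edge ports for iSCSI hosts/arrays
  spanning-tree mode rapid-pvst
  interface range GigabitEthernet1/0/1 - 12
   description iSCSI initiator / EQL array ports
   switchport mode access
   switchport access vlan 100
   spanning-tree portfast

  # Linux guest: raise the SCSI disk timeout so a convergence blip doesn't
  # surface as an I/O error (sdb is a placeholder device)
  echo 60 > /sys/block/sdb/device/timeout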

quote:

The reason FC is considered more stable is that it uses its own switches and people don't usually gently caress with it and gently caress it up.

FC also scales better as your SAN gets larger (more initiators and targets). I would hate to have to manage 1000 hosts on my SAN using iSCSI, but it's not uncommon to see that many initiators on one SAN in larger datacenters. FC load balancing is also generally better overall (at least until more of these datacenter fabrics become commonplace). Four 8Gb FC links used as an ISL can generally be treated as a logical 32Gb link, whereas four 10Gb Ethernet links may not balance traffic evenly among all members of the bundle, so it's possible to not see even half of 40 gigabits.
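
The underlying reason is that an Ethernet port-channel hashes each flow onto a single member link, so one iSCSI session can never exceed one link's bandwidth no matter how the hash is tuned. Catalyst-style syntax, purely illustrative:

code:
  ! choose the hash inputs for member-link selection; a single src/dst IP
  ! pair (i.e. one iSCSI session) still always lands on exactly one member
  port-channel load-balance src-dst-ip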

Also FC switches are aware of all the other switches in the topology and where everyone is plugged in. This means you don't need to worry about things like loop prevention since it's inherent to the protocol.

Note when I say "SAN" I'm not just referring to the storage but the network, servers and the storage.

VMware MPIO (and EMC PowerPath/VE) sorts out the multipath issues on the host side, but once you have to get traffic to another switch you're beholden to all the rules of Ethernet and the baggage it carries.

edit: your point about "people don't tend to gently caress it up" holds true as well, since most FC deployments have two separate fabrics and it's not uncommon to change one and wait 24 hours before changing the other. So if you do gently caress it up, you're generally okay as long as you follow the process.

1000101 fucked around with this message at 06:12 on Sep 11, 2012

YOLOsubmarine
Oct 19, 2004

When asked which Pokemon he evolved into, Kamara pauses.

"Motherfucking, what's that big dragon shit? That orange motherfucker. Charizard."

three posted:

Your post contradicts itself. You say the issues are overblown, then say FC is more stable. The reason FC is considered more stable is that it uses its own switches and people don't usually gently caress with it and gently caress it up. The issues mentioned are real-world scenarios, not hypothetical paradises where people don't do dumb things and break your storage network. (Hint: a lot of the people you'll work with are really bad at their jobs.)

Having a pristine, less-hosed-with, easily monitored and managed environment is so critical to stability. And the cost is so negligible given the cost of a quality Compellent, EMC, NetApp, etc. array. Don't cut corners on storage.

FC is more stable because it is a protocol designed from the ground up to provide storage service. It handles flow control better, handles device and link failure better, has lower header overhead, has lower overhead at the packet-switching layer, etc. It's a storage protocol from the ground up, not a storage protocol layered on top of a multi-purpose networking protocol that was designed to allow the transmission and routing of large numbers of small data packets to many hosts.

The differences are pretty small, but they are there. And if you've already decided you want to pay extra for switchgear because the minor benefits of a segregated storage network are worth spending more on, then you're probably a good candidate for the incremental benefits that FC provides over iSCSI.

In a small business it might be feasible to buy a couple of cheap 1Gb switches to make a dedicated storage network, and when I worked for a small company I recommended just that. But when you're talking about hundreds of physical servers, each requiring a minimum of 2 NICs, plus a half dozen arrays with 4 active NICs, all running 10Gb, well, that's not a negligible amount of money.

"Don't cut corners on ______" is an empty statement, in general, because companies don't have unlimited IT budgets so corners will ALWAYS be cut. The question is how many thousands of dollars that fifth or sixth 9 of uptime is worth, and that is a business decision, not a technical one.

YOLOsubmarine fucked around with this message at 05:52 on Sep 11, 2012

1000101
May 14, 2003

BIRTHDAY BIRTHDAY BIRTHDAY BIRTHDAY BIRTHDAY BIRTHDAY FRUITCAKE!

NippleFloss posted:

The question is how many thousands of dollars that fifth or sixth 9 of uptime is worth, and that is a business decision, not a technical one.

Or when we factor reality in, that 3rd or 4th nine!

YOLOsubmarine
Oct 19, 2004

When asked which Pokemon he evolved into, Kamara pauses.

"Motherfucking, what's that big dragon shit? That orange motherfucker. Charizard."

1000101 posted:

Or when we factor reality in, that 3rd or 4th nine!

I can give you two nines, a seven, and two threes, but I'm not going to guarantee that they'll come in any specific order.

spoon daddy
Aug 11, 2004
Who's your daddy?
College Slice

Misogynist posted:

they haven't had the problems we've had, like unannounced network maintenance

oh god this, so many times over. I had to fight hard for a separate 10Gb switch infrastructure because our network guys insisted that converged networks were the way to go. I was proven right on several occasions.

Pile Of Garbage
May 28, 2007



NippleFloss posted:

FC is more stable because it is a protocol designed from the ground up to provide storage service. It handles flow control better, handles device and link failure better, has lower header overhead, has lower overhead at the packet-switching layer, etc. It's a storage protocol from the ground up, not a storage protocol layered on top of a multi-purpose networking protocol that was designed to allow the transmission and routing of large numbers of small data packets to many hosts.

I've always loved FC for exactly this reason, but as you already pointed out, it always comes down to price. It's always an uphill battle trying to convince the customer or your boss to take the plunge when the alternative is so competitively priced.

Of course it doesn't help when your sales guys insist on choosing 5m LC-LC fibre cables instead of the 1m ones when they draw up quotes (which was the case at my previous employer). I mean, bloody hell, your average 42U rack is just under 2m in height and I doubt you are really going to have your SAN or hosts more than 1m away from the switches. All it does is make cable management a nightmare. Plus the 1m ones are around $50 cheaper.

That's the end of my rant.

luminalflux
May 27, 2005



1m is usually too short if you want to have any semblance of cable management. It all depends on where you put your switches and SAN equipment, of course.

Plus, isn't the cost of fibre usually in the termination, not the length?

Pile Of Garbage
May 28, 2007



luminalflux posted:

1m is usually too short if you want to have any semblance of cable management. All depending on where you put your switches and SAN equip, of course.

Plus, isn't the cost of fibre usually in the termination, not the length?

Yeah, you're right. In my experience the SAN controllers and your host HBAs will usually come with SFPs; however, you'll still need to buy them for the switches (note that I've only really dealt with IBM gear, so it might be different with other vendors).

An 8-pack of 8Gb-SW 850nm SFP+s will cost around $4-5k, and depending on what FC switches you are using it can get considerably more expensive from there (Brocade 300-series FC switches usually only have the first 8 ports licensed, and you have to pay extra to activate more ports).

edit: 5m may not seem long until you start to rack everything, which is when you end up in a situation like this:


Pile Of Garbage fucked around with this message at 13:12 on Sep 11, 2012

evil_bunnY
Apr 2, 2003

People who buy narrow racks should be shot QTIYD.

Vulture Culture
Jul 14, 2003

I was never enjoying it. I only eat it for the nutrients.

cheese-cube posted:

Yeah, you're right. In my experience the SAN controllers and your host HBAs will usually come with SFPs; however, you'll still need to buy them for the switches (note that I've only really dealt with IBM gear, so it might be different with other vendors).

An 8-pack of 8Gb-SW 850nm SFP+s will cost around $4-5k, and depending on what FC switches you are using it can get considerably more expensive from there (Brocade 300-series FC switches usually only have the first 8 ports licensed, and you have to pay extra to activate more ports).

edit: 5m may not seem long until you start to rack everything, which is when you end up in a situation like this:


Most of that mess is your too-long, too-thick factory power cables, which can be replaced for $1.50 apiece. Clean that up and you'll have plenty of room to correctly dress your fiber at the side of the rack.

Mierdaan
Sep 14, 2004

Pillbug

Misogynist posted:

Most of that mess is your too-long, too-thick factory power cables, which can be replaced for $1.50 apiece. Clean that up and you'll have plenty of room to correctly dress your fiber at the side of the rack.

What power cables do you normally get? I've bought shorter sizes before so I could deal with length better, but I thought the thickness was pretty standard.

Pile Of Garbage
May 28, 2007



Misogynist posted:

Most of that mess is your too-long, too-thick factory power cables, which can be replaced for $1.50 apiece. Clean that up and you'll have plenty of room to correctly dress your fiber at the side of the rack.

Yeah, well, that was when I was at my last job, and the sales guys never considered the actual racking when putting quotes together. We were IBM partners, so I think they just threw together builds in x-config, ran them by our local distributor (Avnet) and then presented them to the customer. I asked several times to be involved in the process but they were always too busy trying to score sales.

I don't want to drag the thread off-topic, but I just wanted to mention that the re-branded Brocade FC gear that IBM sells isn't very practical rack-wise. In the middle of the photo are two IBM SAN24B-4 FC switches (re-branded Brocade 300-series), which are half-length and have the FC and power connectors on the front. However, just above them is a SAN06B-R MPR (re-branded Brocade 7800-series), which is full-length and has the FC and Ethernet connectors on one end and the power connectors on the other :confused: That's why there is 1RU of space above it: so that FC cables can run through from the FC switches to the MPR.

Mierdaan posted:

What power cables do you normally get? I've bought shorter sizes before so I could deal with length better, but I thought the thickness was pretty standard.

I should note that some of the thick cables down the side are 16A C19/C20-terminated cables, which connect the UPS at the bottom of the rack to the PDUs further up the rack. Also, that picture was taken before I went to town on the whole thing with double-sided velcro.


To try and bring things back on topic before I piss everyone off with my tl;dr derail: earlier in the thread the Netgear ReadyNAS range of devices was mentioned and I voiced my extreme dissatisfaction with their "higher-end" business models like the 4200 (basically they are shite).

However, some time ago I inherited one of the low-end ReadyNAS business/prosumer models (a ReadyNAS Pro 2) and I have to say it's a pretty nifty device. I built an HTPC for my dad a while back and he used to have an external USB HDD plugged into it where all the media was kept. This was slow and clunky, so I grabbed the ReadyNAS, configured it as a RAID0 array, created one big share on the array and configured it to present that as an iSCSI target. Then I just plugged it straight into the 1Gb Ethernet port of the HTPC, configured the Windows iSCSI initiator and bam: 3.7TB of portable storage that is a heck of a lot faster than USB 2.0.
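
For anyone wanting to copy the setup, the Windows side is just a couple of commands with the built-in initiator (the portal IP is whatever the ReadyNAS has; the IQN comes from its admin page or from ListTargets):

code:
  :: point the initiator at the ReadyNAS, then log in to the target
  iscsicli QAddTargetPortal 192.168.1.50
  iscsicli ListTargets
  iscsicli QLoginTarget <IQN reported by ListTargets>
After that the LUN just shows up in Disk Management like a local disk.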

So yeah, for home use it's an alright device. It took all of 10 minutes to configure it for iSCSI and get it hooked up, and given its portability I can see it being useful in a situation where you need to move a large amount of data between different locations (as in: plug it in at site A, load it up with data, take it over to site B, and copy the data off).

Pile Of Garbage fucked around with this message at 16:08 on Sep 11, 2012

Vulture Culture
Jul 14, 2003

I was never enjoying it. I only eat it for the nutrients.

Mierdaan posted:

What power cables do you normally get? I've bought shorter sizes before so I could deal with length better, but I thought the thickness was pretty standard.
You can buy power cables in any number of gauges. Which gauge you can use depends on the current you plan on pushing through the cable. The standard cables you get from a vendor are usually somewhere around 14 AWG, thick enough to push 15A through, which is way overkill for a standard server (typical draw of a 2-socket box at full load is just over 2A). You can buy C13-C14 cables (or NEMA 5-15, if your ghetto datacenter uses 120V) in 14, 16, or 18 AWG. Smaller wire means higher resistance (hence 18 AWG is only rated for 10A), so you're not getting quite as good efficiency out of your PDUs, but if your racks have a lot of power cords in them (1U servers, etc.), you'll make up some of the difference in cooling costs and equipment lifetimes, since your server fans (especially in your PSUs) aren't having to work quite so hard. Plus, working with them will make you not want to completely kill yourself.

It's not guaranteed that a thinner (higher-AWG) wire will have a proportionally thinner jacket, but if you're buying random cables off the Internet it's a pretty good rule of thumb.

FISHMANPET
Mar 3, 2007

Sweet 'N Sour
Can't
Melt
Steel Beams
Oh my God this P4000 that I've inherited is terrible. Does anybody know where I can find some documentation? I can't even begin to make heads or tails of what's going on.

Amandyke
Nov 27, 2004

A wha?

FISHMANPET posted:

Oh my God this P4000 that I've inherited is terrible. Does anybody know where I can find some documentation? I can't even begin to make heads or tails of what's going on.

Like here? http://h10010.www1.hp.com/wwpc/us/en/sm/WF05a/12169-304616-3930449-3930449-3930449-4118659.html?dnr=1


Number19
May 14, 2003

HOCKEY OWNS
FUCK YEAH


FISHMANPET posted:

Oh my God this P4000 that I've inherited is terrible. Does anybody know where I can find some documentation? I can't even begin to make heads or tails of what's going on.
What problems are you having?

We have one and it mostly does what I need it to quite reliably.
