priznat
Jul 7, 2009

Let's get drunk and kiss each other all night.
A pricier alternative is one of the StarTech disk duplicators; we use one at work for churning out boot disks for test servers, and it was a revelation. No PC required, and it takes about 5-10 minutes to fully dupe a drive.

http://www.startech.com/m/HDD/Duplicators/usb-3-esata-hdd-duplicator-dock~SDOCK2U33RE

The USB 2 version is cheaper but I would stick with the USB 3 one.

Totally worth it if you're doing it a lot, not so much if it's just an occasional thing. However, having a USB 3 to SATA interface can be handy for turning older drives lying around into cold storage.

priznat
Jul 7, 2009

Let's get drunk and kiss each other all night.
Pro tip: precondition yo drives before benchmarking!

priznat
Jul 7, 2009

Let's get drunk and kiss each other all night.

Travic posted:

Any ideas why my results are worse?


What's a good way to do this?

IOMeter is a free benchmarking tool that lets you set the read/write sizes. You can set an area to write to, do a bunch of 4K writes, and then test that area, or do the whole drive (although that'll wear it more).

Basically a fresh new drive will have the best performance and it drops into a steady state after some use.

A good presentation by SNIA (the Storage Networking Industry Association) has a primer on SSD benchmarking:

http://www.snia.org/sites/default/education/tutorials/2011/fall/SolidState/EstherSpanjer_The_Why_How_SSD_Performance_Benchmarking.pdf

Basically if you're testing fresh out of the box the performance will never be that good again.

It's for enterprise storage really but it's applicable to consumer drives too.
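
If you just want to see what the preconditioning step looks like, here's a minimal Python sketch (the file path and sizes are made up, and writing through a filesystem file isn't identical to hammering the raw device, but it shows the idea of filling a region with 4K writes before you benchmark it):

# Minimal preconditioning sketch: overwrite a test region with random 4K
# writes so later benchmark runs see steady-state behaviour instead of
# fresh-out-of-the-box numbers. Scratch file only; don't point it at data
# you care about.
import os
import random

PATH = "precondition.bin"      # hypothetical scratch file on the SSD under test
REGION_SIZE = 1 * 1024**3      # 1 GiB test region
BLOCK_SIZE = 4096              # 4K writes
PASSES = 2                     # overwrite the region a couple of times

blocks = REGION_SIZE // BLOCK_SIZE
fd = os.open(PATH, os.O_RDWR | os.O_CREAT)
try:
    os.ftruncate(fd, REGION_SIZE)
    for _ in range(PASSES):
        for _ in range(blocks):
            offset = random.randrange(blocks) * BLOCK_SIZE
            os.pwrite(fd, os.urandom(BLOCK_SIZE), offset)
        os.fsync(fd)
finally:
    os.close(fd)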

priznat
Jul 7, 2009

Let's get drunk and kiss each other all night.
One thing I've found on some high-end PCIe drives is that they require "Above 4G Decoding" to be enabled in the BIOS, if that option exists. It allows memory-mapped I/O for devices using 64-bit addressing. I'm not sure whether leaving it off would cause the Intel 750 to take an initial setup hit, but that's a possibility. I know systems do boot with the Intel 750/3x00 drives with it disabled, but I've never used one of those as a boot device (just as a storage device for system testing).

Definitely sounds like something in POST is hung up waiting for something though.

priznat
Jul 7, 2009

Let's get drunk and kiss each other all night.

movax posted:

I'm beginning work on a design using some NVMe SSDs with an ARM-based platform; anyone running them behind PCIe packet switches? I care more about having the drives available for sheer storage, not necessarily performance. Topology could be something akin to:

x4 Gen 2.0 Root Complex -> (PCIe Switch) -> (2 x4 Links) -> 2x Samsung 950 Pro/etc.

From a PCIe POV, the drives should show up as PCIe devices to Linux as normal, just under a different bus number than the RC. Or am I missing something fundamental about NVMe? Mentally, I am just thinking of them as regular PCIe devices.

I'm running NVMe devices (SSDs, NVRAM) behind a switch quite a bit, both on Windows (Server 2012 R2) and Linux (CentOS 7/Ubuntu 14.04). I think NVMe support was added somewhere in the 3.x kernels, so on older kernels you will probably see the PCIe devices (lspci) but there won't be any nvme devices in /dev.
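
For a quick sanity check from a script (a rough sketch, nothing distro-specific assumed), something like this shows whether the kernel actually exposed the drives as NVMe block devices rather than just PCIe functions:

# Rough sketch: an NVMe drive can show up in lspci even when the running
# kernel has no NVMe driver bound to it, in which case /dev/nvme* is empty.
import glob
import platform
import subprocess

print("kernel:", platform.release())
print("nvme device nodes:", glob.glob("/dev/nvme*") or "none")

# lspci lists the controller whether or not a driver attached to it
pci = subprocess.run(["lspci"], capture_output=True, text=True).stdout
for line in pci.splitlines():
    if "Non-Volatile memory controller" in line:
        print(line)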

What switch are you looking at using?

priznat
Jul 7, 2009

Let's get drunk and kiss each other all night.

movax posted:

Either PEX8615 or PEX8619; not sure if the DMA support is necessary, but seems like anything from the 8614 through 8619 is the same package size and probably are similar dies that have been laser cut / binned.

Definitely will be bandwidth limited by the upstream port, but I expect the SSDs to behave well and downtrain to Gen 2 without issue.

Yeah, they will be fine going down to Gen 2. We have some 950s for testing and run them across speeds, and outside of some wacky Gen 2 Fibre Channel cards, nothing has ever had an issue downtraining in PCIe speed.

You might want to check out the IDT devices (like http://www.idt.com/products/interfa...rconnect-switch); they have more ports, so you could have an x4 upstream port going to six x2s instead of being limited to all-x4 bifurcation like that Avago/PLX part. Avago is cheaper, though.

priznat
Jul 7, 2009

Let's get drunk and kiss each other all night.

movax posted:

Hmm, I've never used IDT's switches (used PLX heavily in the past). The PEX switches have some flexible lane configurations -- all I really need is the 2 x4 downstream, but since the 8619 is the same package, I figured I'd break out the extra four lanes it offers to another M.2 slot, maybe half-size for a wi-fi card, or some other test hardware.

Yah it totally depends on your use case, but if speed is less of an issue you could hook up a bunch at x1 or x2.

Wish I could recommend the switch I'm working on but it'll be available to tier 1 customers only for the time being. It has some pretty cool storage centric features.

priznat
Jul 7, 2009

Let's get drunk and kiss each other all night.

SlayVus posted:

Would anyone think $300 is even slightly reasonable for a PCI-E x16 to 4x M.2? That would make it $75 per M.2 slot. If you only have one M.2 slot or none, this would give users access to greater, faster storage expandability. Just for reference, a standard PCI-E 4x2 to m.2 is about $20. I haven't been able to find anything more than a PCI-E to single M.2.

That's probably as good as you'll get, because to drive the four M.2s from an x16 it will need a switch IC on there: a 32-lane switch with one x16 port on one side and four x4s on the other. There isn't much high-volume demand for such a card, so it'll be $$. Just poking around, a Gen 2 32-lane PLX device is about $100, so factor in the fabrication, the rest of the BOM cost, and profit margin, and the $300 doesn't seem too nutty.

Some motherboards can bifurcate a slot to x4x4x4x4, but that's mostly on server-type stuff from Supermicro or ASRock Rack.

The x4 to M.2 is cheap because it's just traces on the PCB with perhaps some passives.

priznat fucked around with this message at 06:19 on Apr 14, 2016

priznat
Jul 7, 2009

Let's get drunk and kiss each other all night.
https://news.samsung.com/global/sam...g-device-design

:stare: holy cow

priznat
Jul 7, 2009

Let's get drunk and kiss each other all night.

Ika posted:

Isn't the Intel P3600 SSD series pretty high-end and significantly better than their 750? The specs look better for everything except write IOPS. I'm asking because Amazon.de currently has the 800 gig model for 365 euros for whatever reason, presumably some sort of error, and it looks quite tempting.

E: For an idea of EU prices, the 850 pro 1TB is more expensive.

That seems like a super deal for a P3600; those are the enterprise drives vs the "prosumer" 750. Slot version, I'm guessing? That'd be a fantastic game drive; you might want to check whether your motherboard can boot from PCIe slots if you want to use it as an OS drive.

priznat
Jul 7, 2009

Let's get drunk and kiss each other all night.
One thing is that the Z97 boards only have Gen 2 PCIe on the chipset, so if you want Gen 3 you'll have to use the CPU lanes, which most likely means the video card connection will drop to x8 instead of x16.

So the tradeoff would be less video card bandwidth or less PCIe bandwidth to the SSD; not sure which would have the larger noticeable effect (if any).

priznat
Jul 7, 2009

Let's get drunk and kiss each other all night.
There are also NVRAM solutions like this one: http://www.tomsitpro.com/articles/pmc-sierra-flashtec-nvram-drive,2-954.html

An x8 Gen 3 PCIe connection to DDR, which is backed up to flash in the case of a power loss.

Not really something anyone outside the enterprise or hyperscale area would need, but a nifty niche product.

I hear 3D XPoint/Optane are hitting snags and delays, so this kind of solution is a good stopgap.

priznat
Jul 7, 2009

Let's get drunk and kiss each other all night.
The 750 isn't M.2 though, just the x4 slot version and SFF-8639 (U.2).

priznat
Jul 7, 2009

Let's get drunk and kiss each other all night.

blowfish posted:

Why did nobody think of that when making the drive? Also, what is the SPD information necessary for if the drive works without it?

Doesn't surprise me that much; I do a lot of interoperability testing on PCIe products, and there are some weird issues that can be encountered on different chipsets and even between different motherboard vendors. Still, for a prosumer product it should have been a configuration they checked, and it would have been a very easy fix prior to production (probably resistor population for setting the SMBus address).

The SMBus connection on a PCIe device is usually used for out-of-band configuration and monitoring, like temperature. It's totally optional on PCIe devices.

priznat
Jul 7, 2009

Let's get drunk and kiss each other all night.

redeyes posted:

That is actually an enterprise hard drive. Not even prosumer.

Oh, 3700s; I had assumed it was a 750 for some reason. Yeah, a 3700 in a consumer motherboard is surprising. It should work, though, really.

priznat
Jul 7, 2009

Let's get drunk and kiss each other all night.
I am looking forward to the day when all my local storage (including the local NAS) is flash-only.

Has anyone charted the cost per gigabyte/terabyte for spinning disk versus flash? Is the gap narrowing or staying about the same? It seems like flash is about 5x the cost of HDD (comparing 1TB drives).
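
Quick back-of-the-envelope version of the math, with made-up example prices just to show the arithmetic:

# Back-of-the-envelope $/GB comparison. Prices here are hypothetical
# examples, not quotes; plug in whatever the drives actually cost.
drives = {
    "1TB HDD (example price)": (50.0, 1000),    # ($, GB)
    "1TB SSD (example price)": (250.0, 1000),
}

cost_per_gb = {name: price / gb for name, (price, gb) in drives.items()}
for name, cpg in cost_per_gb.items():
    print(f"{name}: ${cpg:.3f}/GB")

print("flash premium: {:.1f}x".format(
    cost_per_gb["1TB SSD (example price)"] / cost_per_gb["1TB HDD (example price)"]))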

HDDs are fine for media storage etc, sure, but all flash would be pretty nifty.

priznat
Jul 7, 2009

Let's get drunk and kiss each other all night.
1,000,000 IOPS NVMe flash controller, nice

priznat
Jul 7, 2009

Let's get drunk and kiss each other all night.

Anime Schoolgirl posted:

With that amount of power it's likely to be a large controller chip for enterprise-level pcie drives only and won't fit on 2.5 or M.2 at all.

Yes, it's an enterprise-grade one, but it will come in 2.5" (U.2) form factors. The previous-gen part is in Micron and HGST drives (among others) in that form factor.

The next thing will be the mainstream version of the controller which should be pretty kick rear end.

priznat
Jul 7, 2009

Let's get drunk and kiss each other all night.
I personally wouldn't unplug an M.2 that is powered on; they are kind of weird, flimsy connectors. I don't like how you have to use a screw to hold one in place on motherboards.

It probably wouldn't hurt anything buuuuuut it's definitely not meant to be unplugged while powered on. (M.2 is not hot pluggable)

priznat
Jul 7, 2009

Let's get drunk and kiss each other all night.
Seagate with a 60TB SAS SSD in 3.5" form factor:

http://www.theregister.co.uk/2016/08/09/flashy_seagate_demonstrates_monster_60tb_ssd/

Also, the quad M.2 NVMe card in a single x16 slot is interesting. Apparently they don't use a PLX switch, so I have to wonder if the motherboard slot has to support x4x4x4x4 bifurcation.

priznat
Jul 7, 2009

Let's get drunk and kiss each other all night.

Arsten posted:

Wouldn't you have to run that in something like RAID0 to get the advertised 10GB/s?

These numbers are usually just based on what you can wring out of a pool of unallocated drives in something like IOMeter; it would probably take some careful setup to make them usable at those speeds in a real-world configuration.

But the bandwidth is there (PCIe Gen 3 at x16 is just shy of 16GB/sec at line rate), so they would be seriously stonkin' fast. I want to get one for the lab here and try it out! So far the speed kings we have are Gen 3 x8 NVDRAM devices, which hit about 5.1GB/sec each, and a few on a switch with an x16 uplink to the host get around 13.5GB/s.
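
The "just shy of 16GB/sec" number falls straight out of the per-lane math; quick sketch of the arithmetic:

# Where the Gen 3 x16 line-rate figure comes from.
RAW_GT_PER_S = 8.0          # PCIe Gen 3 raw rate per lane (gigatransfers/sec)
ENCODING = 128.0 / 130.0    # 128b/130b encoding overhead
LANES = 16

gbytes_per_lane = RAW_GT_PER_S * ENCODING / 8.0   # divide by 8 bits per byte
print(f"per lane: {gbytes_per_lane:.3f} GB/s")                    # ~0.985 GB/s
print(f"x{LANES} aggregate: {gbytes_per_lane * LANES:.2f} GB/s")  # ~15.75 GB/s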

priznat
Jul 7, 2009

Let's get drunk and kiss each other all night.

Arsten posted:

But they are basically 4 x4s glued together to fit into an x16 slot. Wouldn't that mean that you wouldn't get anywhere near the x16 speeds and would need to pull from the entire...er...'array' of SSDs (RAID0) to pull the rated 10GB/s?

Or am I misunderstanding the unswitched x4x4x4x4 configuration?

Bifurcating the slot can turn it into four x4 connections going straight to the processor, so lanes 0-3 go to one M.2, 4-7 to another, etc. It means each of the M.2s will have its full bandwidth. They could be set up in RAID 0 or JBOD, and with certain workloads they could potentially hit those max numbers.

There would be some situations (probably most, really) in real-life use where it would never hit that 10GB/s, but it's the theoretical maximum that matters to ad copy. If they can put it on a slot and have it hit an aggregate of 10GB/s, then job done! Not too hard, since it'd be "only" 2.5GB/s per M.2, which is slower than that top-end Samsung.

The weird thing is there was no mention of requiring bifurcated slots in the PR, but perhaps that's just something they'll mention later about which servers are supported (a lot of decent server motherboards support it these days). It's a nice solution for densifying the storage. I wonder if we'll see full-length PCIe cards with even more M.2s using higher-port-count switches; I'd only seen ones with 4 M.2 before, using a PLX 32-lane switch (x16 -> 4 x4 M.2).

priznat
Jul 7, 2009

Let's get drunk and kiss each other all night.
I was just looking into buying a bunch of NVMe SSDs for work and compared the Intel 750 with the 3700; holy poo poo, the write endurances are so different they use different metrics to measure them. 70GB/day of writes vs 10 Drive Writes Per Day (DWPD) on the 400GB. :stare:
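
Put in the same units, the gap is even starker; rough arithmetic using the ratings quoted above:

# Convert both endurance ratings to GB written per day so they're comparable.
intel_750_gb_per_day = 70.0      # 750's quoted rating
dwpd = 10.0                      # 3700's rating: drive writes per day
capacity_gb = 400.0              # 400GB model

p3700_gb_per_day = dwpd * capacity_gb        # 4000 GB/day
print(f"3700: {p3700_gb_per_day:.0f} GB/day")
print(f"ratio: {p3700_gb_per_day / intel_750_gb_per_day:.0f}x the 750's rating")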

Weirdly, the price isn't that far apart, $550 vs $700, although the 3700 is 45% off right now at Newegg. So we ordered 50 :getin:

priznat
Jul 7, 2009

Let's get drunk and kiss each other all night.
Yup, these are going in test setups where they will be slammed mercilessly, so longevity is key. I was just really surprised by how much better the 3700 was; huge difference.

For high performance testing we use NVRAM cards that smoke any SSDs currently available.

priznat
Jul 7, 2009

Let's get drunk and kiss each other all night.

Skandranon posted:

I'm a little annoyed at my Intel 750 in terms of that "should boot straight to login", in that it doesn't. Windows 10 seems to like to hang for a while with its window up, but seems to be doing nothing. As soon as its spinner comes up, it'll load everything else very quickly. Random too: sometimes it doesn't hang at all, sometimes it does. Benchmarks all look as fast as they should; Windows just seems to like looking at the drive or something...

Are you sure there's nothing in the startup that could be slowing things down, like a driver for something plugged in? You might want to peruse the system logs.

Also, I am wondering if Intel is doing any XPoint controllers at wider PCIe widths, especially if the NVDIMMs continue to be a headache.

Then when pcie gen4 shows up.. :getin:

priznat
Jul 7, 2009

Let's get drunk and kiss each other all night.
It's pretty cool to see Intel putting pressure on the market to bring down the price of NVMe PCIe drives.

Part of the strategy to eventually ditch SATA, I bet.

priznat
Jul 7, 2009

Let's get drunk and kiss each other all night.

BIG HEADLINE posted:

Yeah, and what has me sold is that it's Intel NAND again. Sure, it's not their controller, but it's their firmware.

Five year warranties on them, too.

Do you know whose controller it is? Annoyingly, the product isn't on the Ark pages yet. It was probably mentioned earlier, but I missed it.

priznat
Jul 7, 2009

Let's get drunk and kiss each other all night.

BIG HEADLINE posted:



SMI, but if you look very closely in the bottom right corner of the chip, you'll see the Intel logo.

Interesting; I don't see a PCIe part on SMI's site, so it must be exclusive to Intel for now.

Gonna have to order a couple of these for work to see how they do on our stress testing.
