necrobobsledder
Mar 21, 2005
Lay down your soul to the gods rock 'n roll
Nap Ghost

kimcicle posted:

So before I run out and buy an Intel 600p from Microcenter, would it be compatible with my motherboard? It's an Asus Z97I-PLUS, and all the documentation says that it supports M.2 and runs over PCIe. The 256GB model is "on sale" for $99, but I want to make sure that it will be fine, because the port is on the underside of the motherboard so I'll have to take apart my whole computer to install it.
I have an Asus Z97I-PLUS too (it's mini-ITX, right?), and the problem with that specific motherboard's M.2 slot is that it's quite limited in PCI-e lanes (it only gets 2), so you won't get anywhere close to the full benefit of an NVMe SSD for sustained reads / writes. Personally, I'd look for a SATA M.2 SSD to save some money, or if performance is that big of a deal, go with a different motherboard entirely. I'm looking to turn the machine into a Hackintosh and may stick with a SATA-based SSD for that anyway, given NVMe SSD compatibility on OS X is not perfect.

https://www.ramcity.com.au/blog/m.2-ngff-ssd-compatibility-list/189

quote:

ASUS Z97i-PLUS. Boots with both the XP941 and SM951 in the M.2 socket. Note, the M.2 socket only has 2 PCIe lanes (10Gb/s) feeding to it though. So expect a maximum sequential throughput around 650-700MB/s. Check your manual as this is quite common with ASUS Z97 boards.

necrobobsledder
Mar 21, 2005
Lay down your soul to the gods rock 'n roll
Nap Ghost
Z97 boards only give the M.2 slot two PCI-E 2.0 lanes, so you're talking about 1 GB/s, which is definitely below what most NVMe drives can sustain. Random IOPS probably matter a lot more in day-to-day tasks, though, and in that respect PCI-E 2.0 doesn't make much of a difference.
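
Quick napkin math on where those numbers come from (just a sketch; the ~65% real-world efficiency factor is my own rough assumption, not something from the board manual):

code:
# PCIe 2.0 x2 M.2 slot, back of the envelope
GT_PER_LANE = 5.0e9      # PCIe 2.0: 5 GT/s per lane
ENCODING = 8 / 10        # 8b/10b line encoding overhead
LANES = 2

usable_bytes_per_s = GT_PER_LANE * ENCODING * LANES / 8
print(f"theoretical: {usable_bytes_per_s / 1e9:.1f} GB/s")         # ~1.0 GB/s
print(f"realistic:   {usable_bytes_per_s * 0.65 / 1e6:.0f} MB/s")  # ~650 MB/s, right where ramcity lands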

necrobobsledder
Mar 21, 2005
Lay down your soul to the gods rock 'n roll
Nap Ghost

syntaxfunction posted:

How big is too big?
It's too big when you don't expect to fill it to about 80% (mostly for filesystem performance reasons rather than anything SSD-specific) before prices come down enough that you'd be upgrading anyway, so the decline in pricing never makes up for the sunk cost. In your position I'd figure out the smallest SATA SSD that covers day-to-day stuff and go up one size from there (e.g. if you can fit Windows, web browsing, and whatnot into 100 GB, go with a 256 GB), map larger bulk data onto the hard disk (Windows has junction points; see the sketch below), and expect to upgrade everything completely in another two years tops, at which point a 2 TB M.2 NVMe SSD could easily be under $250.
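
The junction-point trick, sketched out (paths here are made up; move the folder first, then junction the old location to its new home):

code:
# Move a bulk folder off the SSD, then junction the old path to the new home
# so apps never notice. Hypothetical paths; adjust to taste.
import shutil
import subprocess

src = r"C:\Users\goon\Videos"   # bulk folder currently on the SSD
dst = r"D:\Videos"              # its new home on the hard disk

shutil.move(src, dst)                                     # relocate the data
subprocess.run(["cmd", "/c", "mklink", "/J", src, dst],   # junction: old path -> new home
               check=True)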

necrobobsledder
Mar 21, 2005
Lay down your soul to the gods rock 'n roll
Nap Ghost
Because random IOPS aren't bottlenecked on how fast you can issue commands over the link, you don't see a proportional drop in performance there. I've got a Z97 board as well, and it'd be silly to say I wouldn't get a substantial improvement going from, say, an 850 Evo SATA to most of the newer NVMe SSDs despite that two-lane bottleneck.

necrobobsledder
Mar 21, 2005
Lay down your soul to the gods rock 'n roll
Nap Ghost
The write cycle advantage makes me think Optane is not a terrible fit for scenarios where SLC was a good idea (high transaction rate caches, which is typically an enterprise-only thing). Out of the gate it's not competitive with cutting-edge SSDs, but we should remember back to when SSDs were introduced maybe 8 years ago and cringe at how bad they were then, too. The real question is how quickly Optane-based drives will improve to fit the market they're aiming for.

necrobobsledder
Mar 21, 2005
Lay down your soul to the gods rock 'n roll
Nap Ghost
Sounds like a firmware bug similar to the kinds of problems Microsoft has been having with their Surface tablets, if there's nothing physically wrong with the defective units... unless Samsung's battery specs themselves are flawed and got replicated.

necrobobsledder
Mar 21, 2005
Lay down your soul to the gods rock 'n roll
Nap Ghost

PC LOAD LETTER posted:

I'd love to have something with the capacity of LTO7 or even LTO4 (with a modern SATAIII or USB3.1 interface) for a good price, but man, the prices on the drives and tapes just do not want to come down at ALL for anything that is less than 3 years old and/or in good condition. Yeah, you can find LTO4 drives used for $300 or less, but they're typically over 5 years old and usually look like they got dropped down the stairs, with no chance of warranty on eBay. No way I'll trust that.
You wind up with LTO being a self-obsoleting technology family: the tape drives themselves are only manufactured for so long, and they're the problem much more than the tapes are.

Honestly, from an archival standpoint, it starts to make sense to use DVD+Rs on gold media at a certain point, and I think the US Army uses archival-grade discs that are contractually guaranteed to last at least 50 years. We still have DVD drives from nearly two decades ago that work swell. The only problems are that most video assets would have to be spanned across several discs, and optical media takes way, way too long to write out for multi-petabyte archives (rough math below).
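
Rough math on why optical doesn't scale there (nominal 16x / single-layer figures, and it ignores disc swaps, verification, and parallel burners):

code:
# Burn-time sketch for a 1 PB archive on single-layer DVD+R at 16x
archive_bytes = 1e15          # 1 PB
dvd_capacity = 4.7e9          # bytes per single-layer disc
write_rate = 21e6             # ~21 MB/s at 16x, nominal

discs = archive_bytes / dvd_capacity
years = archive_bytes / write_rate / 86400 / 365
print(f"{discs:,.0f} discs, ~{years:.1f} years of continuous burning per drive")
# roughly 213,000 discs and ~1.5 years per petabyte, per burner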

necrobobsledder
Mar 21, 2005
Lay down your soul to the gods rock 'n roll
Nap Ghost
DVD-Rs specifically are worse than DVD+Rs for archival purposes, more or less down to how much error-correction padding each format uses. People used to abuse the extra space for overburning.

Also, a lot of factory-pressed CDs from back in the early '80s, even Bruce Springsteen's, are now basically defective due to oxidation.

Potato Salad posted:

Are there good reasons not to put archival tiers of storage on the cloud?
#1 is regulatory reasons. My company works with content creators, and we're not allowed to use certain cloud providers like Amazon, potentially because that would be giving money to a content competitor.
Second is cost at scale. $.01 / GB / mo is potentially still too much for a lot of these archives; they appear to need to archive so much that, even with storage pricing being very competitive, they're into the 50 petabyte range or more. 50 PB of coldline storage on GCP is $500k / mo, and I think they literally do not have the budget to pay $6 MM / yr for storage, and rising, because new content growth (think 8K+ pro-editing-level content, not your pixelated random pirate rips) is outpacing the really anemic decline in storage pricing (SSDs aside). The $.013 / GB cost cited for LTO6 is more like $.0095 / GB when compressed. I don't think Google and friends can really beat LTO, to be honest, so they're probably doing some compression with some extra redundancy to land somewhere between the average-case compression scenario and the offering cost.
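
The back-of-envelope version (decimal units and list pricing only; retrieval and egress fees would make the cloud side look even worse):

code:
# Studio archive cost sketch: cloud coldline vs. LTO6 media
capacity_gb = 50e6              # 50 PB in GB
coldline_per_gb_mo = 0.01       # $/GB/mo, coldline-class object storage
lto6_per_gb_compressed = 0.0095 # effective $/GB for LTO6 media, compressed

monthly = capacity_gb * coldline_per_gb_mo
print(f"cloud: ${monthly:,.0f}/mo = ${monthly * 12 / 1e6:.1f}M/yr")                  # $500,000/mo, $6.0M/yr
print(f"tape media (one-time): ${capacity_gb * lto6_per_gb_compressed / 1e6:.2f}M")  # ~$0.48M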

Furthermore, the $.01 / GB / mo that GCP asks is all opex, whereas with tape you buy the drive and the tapes as capex and the opex is basically facility and staffing costs at that point. Hollywood accounting may make capex easier to justify than opex, unlike most corporate environments.

necrobobsledder
Mar 21, 2005
Lay down your soul to the gods rock 'n roll
Nap Ghost
Unless the SSD has gone through the SSD torture test or something similar, I have doubts that someone selling one or two SSDs has used them to the point of semi-failure. The greater dangers are bulk sellers (they may have had a huge storage farm and cheaped out by not buying enterprise-grade SSDs) and bad controllers.

necrobobsledder
Mar 21, 2005
Lay down your soul to the gods rock 'n roll
Nap Ghost
All the high-end SSD performance characteristics fall squarely into “if you have to ask if you need / can benefit from it, you don’t need it or won’t benefit from it” territory. The closest thing to a gaming scenario that would materially benefit is if you’re multi boxing like 10 MMOs concurrently.

necrobobsledder
Mar 21, 2005
Lay down your soul to the gods rock 'n roll
Nap Ghost
Against conventional wisdom, I got an M.2 NVMe SSD (a Toshiba 1 TB XG3) and put it into an ASUS Z97 board (the Z97I-PLUS). I'm not sure what's going on, but the UEFI BIOS is not detecting the drive. I've tried following various guides for disabling CSM and the secure boot keys, and that doesn't seem to make a difference. Windows 10's installer is able to see the drive as long as I have a SATA device enabled (if I disable all SATA ports, the drive disappears in the installer). I've tried completely removing all drives, rebooting, and "enabling" the M.2 drive the way some people have had to with some motherboards (I can't even select the M.2 drive or controller to do it). I've tried using "RAID" mode for storage instead of AHCI, which seems to work for some people. As I understand it, Windows 10 includes a generic NVMe driver that Toshiba uses, but I haven't checked the firmware version, and trying to upgrade it is my next step. After that, I'm about to put this back on eBay, given I've spent way too much time on this already and hanging onto it when I can't even use it is asinine. Anyone have any ideas on what I might have overlooked?

necrobobsledder
Mar 21, 2005
Lay down your soul to the gods rock 'n roll
Nap Ghost
Asus stopped at a last BIOS update claiming NVMe support back in early 2016, and I'm on that version. I should have clarified: Windows works just fine with the drive once it boots from another drive, so the hardware seems to physically work. I can install Windows to the drive, but post-install the BIOS somehow isn't registering the drive's boot sector and refuses to boot from it.

I'm a bit pissed because a bunch of people seemed to do just fine with this motherboard and other NVMe SSDs. Spending $700 in parts over a $300 SSD seems ludicrous when the 4790K is working just fine. If this takes me more than another hour I may wind up using the drive in another machine like a server, given I've already burned 14 hours rebooting, removing the drat thing, and resetting the CMOS.

necrobobsledder
Mar 21, 2005
Lay down your soul to the gods rock 'n roll
Nap Ghost
I'm strongly considering it for a NAS / virtualization server that's lightly loaded but needs to unpack 15 - 30 GB at a time reasonably quickly. My trusty 1 TB WD Black hard drive has worked since 2009, but it's likely worth getting an SSD rather than jamming older SSDs (I have a Crucial M4 and a Mushkin that are like 8 years old now and still spank hard drives) into the tiny case with so little airflow. So alongside a 240 GB boot-and-VM SSD and a 1 TB scratch disk, a 2 TB SSD would work nicely.

Gosh, I guess my drives are living longer than ever. Most of the boot drives in my servers have been running since 2010 or 2011, so I guess I'm due for some upgrades. I've got that dumb 1 TB NVMe drive doing nothing, but I'm not buying a whole new machine over it either.

necrobobsledder
Mar 21, 2005
Lay down your soul to the gods rock 'n roll
Nap Ghost
Those ruler form factor SSDs can pack 256 TB into a 2U

necrobobsledder
Mar 21, 2005
Lay down your soul to the gods rock 'n roll
Nap Ghost
I bought the 2 TB SSD for my home storage and media application server, which runs a number of VMs and containers and is decompressing and downloading files at a 30%+ duty cycle. On the 7200 RPM WD Black I use, that takes three times as long, and with a rash of recent hard drive failures on my end I'd rather stick with SSDs even for lighter usage. This also tends to matter more with a higher-bandwidth connection than it would on some 10 Mbps download link.

I only have two real uses for magnetic storage anymore: lowest-cost bulk storage, primarily for archival, and a Kafka broker, which is specifically written around the sequential read and write patterns that suit rotational media. Life is too short to bother with slow computers unless cost is a big deal, like when money is tight or you're working at a scale where compute resources are actually more expensive than the people-time spent arguing about resource efficiency.

necrobobsledder
Mar 21, 2005
Lay down your soul to the gods rock 'n roll
Nap Ghost
PCIe SSDs from vendors like Fusion-io are what the major cloud providers have used in the past. Those things from 2014 still destroy consumer SSDs for server workloads, partly because the drivers were written by literal Linux kernel I/O scheduler authors, but also because the capacitors used on them are better suited to the higher queue depths and transaction patterns favored by SSD-aware databases.

necrobobsledder
Mar 21, 2005
Lay down your soul to the gods rock 'n roll
Nap Ghost
Also, if you shop on Rakuten you should be able to get some extra cash back from Ebates; I believe Ebates was acquired by Rakuten a while ago. I dunno, all I know is that I got some extra money from Ebates, and I'd use them even though they're both customers of the company I work for.

necrobobsledder
Mar 21, 2005
Lay down your soul to the gods rock 'n roll
Nap Ghost
Nation states would get your data before you destroy it, whether by getting an agent on your system, attacking your back-ups in some form of off-site storage, or paying a trash guy to throw your garbage into a special spot in the truck. All of those options are far cheaper and more effective than trying to reassemble anything. The point of going after a broken pile of parts is to figure out who your parts supplier is (Toshiba, Western Digital, etc.), but again, it's far cheaper to just get a trojan onto your machine and run a smaller version of CrystalDiskInfo or whatnot.

Nation states do not necessarily have an advantage in terms of pure technology, but they do have an advantage in terms of who they can boss around.

Watch James Mickens' talks on security, or read his brief paper: https://www.usenix.org/system/files/1401_08-12_mickens.pdf (in fact, read all his papers, because they are very much in the style of humor and writing that appeals to the SA community anyway).

necrobobsledder
Mar 21, 2005
Lay down your soul to the gods rock 'n roll
Nap Ghost
Anyone got ideas for what to do with older SSDs (circa 2010, like this SandForce-based 512 GB Mushkin SATA drive, plus lower capacities like this 128 GB Crucial from 2011)? With these Inland SSDs going for $100 I'm not sure it's worth the effort. I can't really use them in ZFS situations, the 128 GB is barely enough for like 4 AAA games, and I have an HTPC setup based around the Shield TV that I'm quite happy with. I run a Kubernetes cluster at home, so the last option I gave any thought to is boot volumes, but at that point I'd be better off spending $100 on a new SSD rather than another $400+ on 10 GbE.

necrobobsledder
Mar 21, 2005
Lay down your soul to the gods rock 'n roll
Nap Ghost
Uh, yeah. I do it professionally too, and for basic setups with little care for downtime it's pretty brain-dead, IMO. I'm just looking for creative solutions, because otherwise I was going to throw them away / e-cycle them or outright give them gratis to goons in need.

necrobobsledder
Mar 21, 2005
Lay down your soul to the gods rock 'n roll
Nap Ghost
I mostly like NVMe drives for reducing cabling in my machines, and I consider the small price premium the cost of that privilege.

Also, is the heatsink for that Sabrent drive worthwhile if you push those 4 GB/s and 300k IOPS for over an hour or two? I run deep learning stuff along with some intensive jobs over several TB of data at a time, where I get maybe 400 MB/s off my ZFS array, and I'm strongly considering the 2 TB SSD.

necrobobsledder
Mar 21, 2005
Lay down your soul to the gods rock 'n roll
Nap Ghost
Wish those cheap SSDs were useful as SATA DOMs, but the 2.5" form factor kind of kills it for me. Probably best off making some hilariously bad video like in the early days, when someone in Samsung marketing hooked up a RAID 0 of 26+ drives for a benchmark and got a bunch of YouTube hits.

necrobobsledder
Mar 21, 2005
Lay down your soul to the gods rock 'n roll
Nap Ghost
I wanted to use Optane SSDs for swap space or as an L1 cache of sorts for my MLC and TLC SSDs, because Optane's read/write cycle endurance is pretty drat good, but evidently adaptive caching is still patented and tech patents suck in practice. So goodbye to that random project anyway.

necrobobsledder
Mar 21, 2005
Lay down your soul to the gods rock 'n roll
Nap Ghost
I thought some people had come up with some weirdo universal adapter to secure M.2 drives without dealing with the screw everyone loses, like that stupid key lost in the opening scene of Saw, which bothered me so much I still remember it all these years later.

necrobobsledder
Mar 21, 2005
Lay down your soul to the gods rock 'n roll
Nap Ghost
That Leven SSD looks like it uses smartphone NAND, and from what I've read around, there are even some SSDs being shipped as QLC when advertised as TLC. Hard pass

necrobobsledder
Mar 21, 2005
Lay down your soul to the gods rock 'n roll
Nap Ghost
I don’t see how that’s possible when part of the MBA process is to be exsanguinated and your spinal column removed

necrobobsledder
Mar 21, 2005
Lay down your soul to the gods rock 'n roll
Nap Ghost

movax posted:

IMO, the ‘best’ (i.e., least amount of SW layers) is probably Thunderbolt so you can effectively get an external PCIe <-> SATA thing going… or if eSATA ports / add-in cards are still a thing. I like trying to get everything as native as possible, but I also generally distrust USB / try not to use it for anything outside of keyboard / mouse / webcam.
External PCI-e chassis are surprisingly expensive for what they provide, and the market has basically said nobody besides media professionals cares enough to go that far for their optical drives. Additionally, I can't think of a single eGPU external PCI-e Thunderbolt chassis that offers a 5.25" bay stock.

necrobobsledder
Mar 21, 2005
Lay down your soul to the gods rock 'n roll
Nap Ghost
The DRAM must use a bit more power as well, which matters significantly if your compute situation is mobile, but it's important to consider aggregate power usage per unit of work (read: the DRAM might be worth it if it makes total power for the compute lower). So something like the WD Black SN770 is potentially a better idea than the SK Hynix Platinum P41. I don't think performance is ever truly free in terms of TCO anyway.

necrobobsledder
Mar 21, 2005
Lay down your soul to the gods rock 'n roll
Nap Ghost

Klyith posted:

First, do you have an application where the SATA -> NVMe upgrade is a non-trivial performance gain on a current PC?
The latency difference appears to be pretty huge when it comes to ZFS allocation classes for small writes. I have a pile of SATA SSDs from work, and now I'm kind of bummed they're not as useful as I would have hoped. Seems like the best bet here really is QD1 random IOPS performance from ye olde Intel Optane drives. Might do some benchmarks to do proper comparisons for the workloads I have (along the lines of the sketch below), but for virtualization cases it was kind of staggering how much random IOPS matter once you reach the 128 GB RAM point in ZFS systems.
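
Something like this is what I have in mind for the comparison; the device paths are placeholders, it assumes fio is installed, and it will hammer whatever you point it at, so use scratch devices:

code:
# QD1 4K random-read comparison across a couple of drives via fio
import subprocess

for dev in ["/dev/sda", "/dev/nvme0n1"]:   # placeholder SATA vs. Optane/NVMe devices
    subprocess.run([
        "fio", "--name=qd1-randread",
        f"--filename={dev}",
        "--rw=randread", "--bs=4k",
        "--iodepth=1",            # QD1: latency-bound, the case that matters for metadata
        "--direct=1",             # bypass the page cache
        "--ioengine=libaio",
        "--runtime=60", "--time_based",
    ], check=True)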

necrobobsledder
Mar 21, 2005
Lay down your soul to the gods rock 'n roll
Nap Ghost

BIG HEADLINE posted:

It'd be interesting to see if it works with SK Hynix drives.
Solidigm is a rebrand of the Intel SSD product line via acquisition, and I'm not quite sure whether the SK Hynix series is compatible or not. What I'm getting at is I have sincere doubts there's cross-brand firmware compatibility despite them being the "same company."

WhyteRyce posted:

So it’s just doing some prefetching? Theoretically that can eat into potential concurrent write performance but these are client drives so probably won’t be a concern
Aligning I/O transactions to optimal buffer sizes and sequences that may not be visible to the generic I/O schedulers in different OSes, as well as avoiding performance regressions over time if the kernel changes schedulers for whatever reason. I know that for years and years the old-hat elevator scheduling algorithm for I/O on Linux was relatively abysmal, but that was a decade ago when I talked to the Fusion-io engineers, before NVMe was common.

necrobobsledder
Mar 21, 2005
Lay down your soul to the gods rock 'n roll
Nap Ghost

WhyteRyce posted:

Seems odd there would be a lot of unaligned I/O in Windows with modern day client games, apps and stuff but, again, I don't know much about this stuff.

Reading the white paper it seems like they do 3 things:
...
Those all matter, too, and smart, proprietary tricks to get an edge over the competition are always on the table for premium drives, as opposed to ones sold in bulk to work in as many machines as possible. Toshiba / Kioxia SSDs are a great example of maximum OEM compatibility balanced with very solid performance. One other thing I remembered about needing to be careful when writing drive firmware algorithms is that there are patents on the adaptive replacement cache [1], which is part of why ZFS and other filesystems are nowhere near as performant as they could be in general and why many SSDs leave performance on the table. Nobody wants to get into a legal battle with IBM, a legal and sales & marketing company that focuses on hyping up acquired technology.

WhyteRyce posted:

If I were a SKHynix engineer I'd be loving pissed. SMI engineers don't count they are used to never getting credit for anything
Presuming the SK Hynix engineers are likely Korean, it's par for the course historically for them not to be given credit while others with better marketing / branding reap most of the rewards. For starters, take a look at the animation industry (look at Dragon Ball and, uh... Ren & Stimpy).

[1] loving IBM. https://web.archive.org/web/20170624055115/http://patft1.uspto.gov/netacgi/nph-Parser?patentnumber=6996676

necrobobsledder
Mar 21, 2005
Lay down your soul to the gods rock 'n roll
Nap Ghost
I can definitely use Optane in my ZFS pools with allocation classes, having the Optane drives (with redundancy) take the metadata primarily, which provides a solid speed-up for lots of use cases where a TLC or QLC SSD would be a bad idea (rough sketch below). Some folks think it's better to have entirely separate pools for VMs and to avoid mixing workload classes, but a poor man's SAN is what a lot of folks are building in the prosumer category.
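
For reference, the allocation-class setup I mean looks something like this (pool and device names are placeholders, and the small_blocks bit is optional):

code:
# Add a mirrored Optane special vdev to an existing pool so metadata lands on it
import subprocess

subprocess.run(["zpool", "add", "tank", "special", "mirror",
                "/dev/nvme0n1", "/dev/nvme1n1"], check=True)   # mirrored: losing the special vdev loses the pool
# Optionally push small records onto the special vdev as well
subprocess.run(["zfs", "set", "special_small_blocks=32K", "tank"], check=True)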

necrobobsledder
Mar 21, 2005
Lay down your soul to the gods rock 'n roll
Nap Ghost
That video reminds me of the one Samsung put out many years ago now, where their interns put a bunch of SSDs in a RAID 0 setup and put it on YouTube back when it was still kind of a new site. The sequential performance they got with like 16 drives then is not that different from what a mid-range PCI-e 3.0 NVMe SSD does now.

necrobobsledder
Mar 21, 2005
Lay down your soul to the gods rock 'n roll
Nap Ghost

priznat posted:

How are the prices on those? I wonder if they're dropping as enterprise moves to all NVMe solutions.
They're roughly within 10 - 15% of mainstream consumer pricing. A vendor accidentally shipped me an 18 TB SAS drive rather than SATA, and the price difference was maybe $10 - $20 USD, so the spinning-rust side of the market seems to be following similar price curves to the mainstream. The 2.5" SAS SSDs I ordered last year (namely the Nitro series) were pretty close to consumer pricing at the time (read: not that great). Looking around now, they haven't followed suit with the NVMe price drops going on for about half a year; a Nitro 3750 400 GB drive is still $307 on CDW. We bought, I think, 24 of the 3.84 TB drives at about $800 each from CDW, which was pretty nice, but that's roughly the cost of a brand-new car these days and probably not appropriate for a home lab.

I'm not quite sure how these will work in terms of pricing as enterprises rotate out of SAS over the coming few years but I don't think it'll be a slam dunk win for home lab dirt cheap target pricing. The first generation Xeon D systems are still rather pricey for being about 7 - 8 years old now, for example.

necrobobsledder
Mar 21, 2005
Lay down your soul to the gods rock 'n roll
Nap Ghost
The Xeon D-1500 series, as I understand it, went primarily to hyperscalers in CDNs and folks who wanted that "edge computing" buzzword like 7 years ago, but that's kind of fallen off for reasons I'm not quite familiar with. I had a 1500 for a while and frankly it wasn't that much faster than my Xeon E3-1230, and after I went to solar power I saw less need for it. Additionally, these are old parts now, and with newer Xeon D chips finally supporting DDR5 and PCI-e 4+ it's silly to buy something that won't last all that long. Granted, I am now sitting on about 1 TB of RDIMMs (64 GB ones at that) with no idea wtf to do with them, so maybe I should have held onto that board.

My conjecture on the SAS vs. NVMe situation is that it's due to over-manufacturing on the NVMe side while SAS sits at replacement levels for the foreseeable future. With long-term enterprise contracts, such as with the DoD, there's not much reason to budge on SAS pricing when the downward price competition is mostly in NVMe. Unless manufacturers are willing to start writing off all that inventory, I don't see why they would try to convert it to price-dropping NVMe unless there's a huge warehouse glut, sort of like what happened with wholesale oil prices years ago. And because those long-term contracts are pretty stable, they should be able to reliably predict shipments.
