|
In my current rig I didn't even use any 2.5" drives; I put 2 m.2 drives in it. So much wasted space in the case, though not really I guess, as airflow is good
|
# ? Jul 13, 2020 18:03 |
|
codo27 posted:My current rig I didn't even use any 2.5" drives, I put 2 m.2 drives in it. So much waste space in the case, though not really I guess as airflow is good Yeah, that's why you can get nifty cases like the Corsair One series now--with no drives whatsoever to worry about other than the m.2 slot on the motherboard, you can collapse a looooot of that space down.
|
# ? Jul 13, 2020 18:33 |
|
Klyith posted:If this is a secondary drive for data storage you'd be fine with a QLC drive (Intel 660p, Crucial P1).

Remember I was going to combine 2x 2TB WD Blues with a 2TB Crucial MX500, because the Blues were going for a relatively cheap price on eBay (for UK prices, anyway)? Well, I ended up just buying a 3rd WD Blue so that they looked nice in the system logs.

I plugged them into a Windows PC first, one by one, so I could run the Western Digital drive software and check that they had zero hours on the clock and that the firmware was up to date, before they went into my home Linux server. All was good.

The RAID-Z1 pool I have them in (the ZFS equivalent of RAID-5) is blazingly fast compared to the 2 WD Reds I had in a ZFS mirror for a good few years. I have separate datasets, and one of them is for my home streaming library. I can flip backwards and forwards through a 1080p movie file that's downstairs in the ZFS array while sitting at my TV upstairs, and there's almost no perceptible lag. I can click forwards to, like, 1hr10m into a film and it's there straight away, flick back to 8 mins into the film and it's there straight away. Copying large files from the system drive (a 960 EVO NVMe that's getting on for 3 years old) is so fast. It's really nice to have, even if it's money that could have been spent far more parsimoniously. If these perform like this for 5 or 6 years (never mind the 10 I hope to get out of them) I'll be very happy.

Which brings me on to monitoring. Is anyone running regular long SMART tests on SSDs these days? Or should I just write a bash script that'll scan the SMART attributes every week and tell me if any of the values look askew?

apropos man fucked around with this message at 18:42 on Jul 13, 2020 |
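Since the question was about scripting it: a minimal sketch of the weekly-check idea, assuming `smartctl` from smartmontools is installed. The parsing is pulled into a function that just reads `smartctl -A`-style attribute lines on stdin, so the attribute names and thresholds are whatever your drive reports — nothing here is specific to these WD drives.

```shell
# Hypothetical weekly SMART sanity check: flag any attribute whose
# normalized VALUE has fallen to (or below) its failure THRESH.
# Classic ATA smartctl -A columns are:
#   ID# ATTRIBUTE_NAME FLAG VALUE WORST THRESH TYPE UPDATED WHEN_FAILED RAW_VALUE
check_smart_attrs() {
    # Reads attribute rows on stdin; prints the names of failing attributes.
    # Rows with THRESH 000 are informational (temperature etc.) and skipped.
    awk '$1 ~ /^[0-9]+$/ && ($6 + 0) > 0 && ($4 + 0) <= ($6 + 0) { print $2 }'
}

# Typical use, e.g. from a weekly cron job:
#   smartctl -A /dev/sda | check_smart_attrs
```

An empty result means nothing has tripped its threshold; anything printed is worth a closer look (or a `smartctl -t long`).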
# ? Jul 13, 2020 18:39 |
|
At present time, if we're talking about general home use (browsing, streaming, gaming, guess occasional file transfers), is there going to be a big difference between a sata ssd and an nvme ssd?
|
# ? Jul 18, 2020 05:56 |
|
Rinkles posted:At present time, if we're talking about general home use (browsing, streaming, gaming, guess occasional file transfers), is there going to be a big difference between a sata ssd and an nvme ssd? Nothing you would notice in almost all cases.
|
# ? Jul 18, 2020 06:15 |
|
Rinkles posted:At present time, if we're talking about general home use (browsing, streaming, gaming, guess occasional file transfers), is there going to be a big difference between a sata ssd and an nvme ssd? No. There is a possibility that future games will care about nvme vs sata speed, since the new consoles will be using super-nvme storage and are talking this up as their big new feature. But that's still the future, and IMO probably won't be a major factor in shipping games for at least 2 years from now. This console transition is gonna be slow.
|
# ? Jul 18, 2020 07:55 |
|
Rinkles posted:At present time, if we're talking about general home use (browsing, streaming, gaming, guess occasional file transfers), is there going to be a big difference between a sata ssd and an nvme ssd?

I'd put NVMe in the "nice to have" bracket. It's smaller and it's the enthusiast's choice, but when I fitted an NVMe drive in my gaming PC there wasn't an appreciable difference in game loading times. If you're using one of the 2.5" SSDs that are oft mentioned in this thread as the cream of the crop, then you're gonna have a hard time noticing a difference if you swap to NVMe.

I've just been watching a Level1Linux video on YouTube where he mentions running "fstrim -v /" to trim your SSD. The ones he's talking about in that video are the Phison controller ones. I have a 256GB MyDigitalSSD BPXP running the base OS on my Ryzen virt host, so I thought I'd run that trim command on it. It took a few unnerving seconds (I thought "Oh poo poo. Am I gonna have to reinstall the base OS here? Have I done something terrible?"), but after about 5-10 seconds it reported that I'd just trimmed 92.5G.

I've briefly read about trim before and, from what I remember, it's related to the fact that all blocks need to be erased completely before writing new data to them. I thought that the OS (CentOS 8) would have been doing this automatically, or that the drive controller would have been doing it. So have I just made my drive faster by cleaning 92.5G with the fstrim command? Do I need to put this in a weekly cronjob, or will doing that affect the lifetime of the drive? I've had the drive about 18 months and it's always performed fine.
|
# ? Jul 18, 2020 14:51 |
|
apropos man posted:I've just been watching a Level1Linux video on YouTube where he mentions running "fstrim -v /" to trim your SSD. The ones that he's talking about in that video are the Phison controller ones. I have a 256GB MyDigitalSSD BPXP running the base OS on my Ryzen virt host, so I thought I'd run that trim command on it. It took a few unnerving seconds (I thought "Oh poo poo. Am I gonna have to reinstall the base OS here? Have I done something terrible?". But after about 5-10 seconds it reported that I'd just trimmed 92.5G.

Trimming will improve performance, particularly on drives with low free space (for the drive as a whole, not any single partition). In theory, trimming too often will reduce the drive's lifespan, because it encourages the drive to re-write data. But a drive that's never trimmed is liable to have more write amplification when it hasn't garbage-collected the dirty blocks. This is why the happy medium recommendation is about once a week. Either way isn't a particular worry, because write endurance is nothing more than a theoretical lifespan for most people, even if you trim all day long.

IDK what the linux command means when it says "92.5G trimmed" though. That might be the whole filesystem's worth of in-use blocks, not how much data got re-written. If your linux version isn't running a trim by itself you should put it in the cron, but I find it hard to believe a modern linux distro isn't setting that up automatically.
|
# ? Jul 18, 2020 18:03 |
|
Klyith posted:... I'm not sure if that's exactly what the output was, and I have since closed that terminal window, so I'm sorry I wasn't being terribly accurate; I typed my question from memory of what it had said. A "df -h /" command reveals that I'm currently using 107G though, so I doubt it meant the amount of space currently in use. Even accounting for variation in the way that certain commands report disk space (1000 bytes per kilobyte vs 1024 bytes per kilobyte) I doubt that 107G would be equivalent to 92.5G. Thanks for the explanation though.

I'm gonna read up about CentOS and the default trim settings. I'm sure that if CentOS 8 has trim options available they would have already been in CentOS 7, as SSDs must have already made up a high proportion of preferred installation media when CentOS 7 was released. I'll run the command on my other server, which has a 500GB Samsung 960 Evo as the system drive. (I'm never quite sure whether to use G or GB. GB looks better on a forum but the terminal seems to want to use G most of the time.)

My other server has also got CentOS 8 on it, as I went through a complete server revamp a couple of weeks ago. I made sure my VM backups were on point, nuked and reinstalled from 7 to 8, and reinstated my VMs. It's the main reason I've got two home servers: it's really nice to be able to sync scheduled backups from one machine to the other in case one machine ever dies. I'll edit this post with the output of the other one in a moment...

e: Here's what I get on the other one: code:
apropos man fucked around with this message at 18:44 on Jul 18, 2020 |
# ? Jul 18, 2020 18:38 |
|
Here's a fairly easy article about using trim on Linux: https://bashtheshell.com/guide/ssd-trim-on-centos-7/

It turns out that CentOS didn't have it enabled by default on my systems: code:
This recent knowledge of the significance of fstrim got me wondering about how I would do it on my ZFS arrays, which are also on SSDs. It turns out that there's a zpool command to trim your ZFS array. From the man page (I've trimmed it down!): code:
code:
e: I'm now doing a manual trim with 'zpool trim <poolname>' code:
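For anyone automating this rather than running it by hand, a sketch of the scheduling side. Assumptions: systemd is present, and the pool is named `tank` (substitute your own). The stock `fstrim.timer` only covers ordinary mounted filesystems; ZFS needs its own `zpool trim` schedule, or (on ZFS-on-Linux 0.8 and later, I believe) the pool's `autotrim` property.

```shell
# Regular filesystems: enable the stock weekly systemd timer.
#   systemctl enable --now fstrim.timer
# ZFS alternative: let the pool trim continuously.
#   zpool set autotrim=on tank

# Or schedule a manual weekly trim: this helper builds the root
# crontab line (Sunday 03:00) for a given pool name.
weekly_zpool_trim_cronline() {
    printf '0 3 * * 0  /sbin/zpool trim %s\n' "$1"
}

# Typical use:
#   weekly_zpool_trim_cronline tank | crontab -
```

The commented commands are the actual switches; only the cron-line builder is plain shell.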
apropos man fucked around with this message at 19:53 on Jul 18, 2020 |
# ? Jul 18, 2020 19:45 |
|
Yeah, the lack of auto trim seems really odd since Windows and MacOS have supported it for years now. Like you, I can't really see any immediately obvious downside to running it on a regular schedule as the default.
|
# ? Jul 18, 2020 23:55 |
|
I just feel a bit silly for only realising the significance of trimming your SSDs at this moment in time. I've been using SSDs for a few years now, and I've been using ZFS on Linux for probably 3 years or more. I'd read about trim before when I'd seen people mention it on forums but never really gave it a second thought. Better late than never!

I've never really noticed any drastic performance issues while I've been using SSDs, but my two home servers are rarely pushed to their limits. I use the one with an older Xeon processor and UDIMM ECC for "proper" stuff, like storing my music collection, my video streaming library and all of the personal holiday photos and financial documents you collect over the years. The other one with the Ryzen CPU is more like a sandbox plaything, so if I see something that looks cool but I might get bored with, I'll spin up a VM on the Ryzen machine, gently caress about with it, and if it isn't to my taste I'll just destroy the VM. I try to keep the base OS on both machines relatively clean and do everything in VMs, but the Ryzen machine is much more of a Frankenstein's monster which I'm willing to take risks on.

It's cool to have two servers because, like I said earlier, the weekly backups of the VMs from one machine get synced across to the other machine and vice versa. So if one machine ever blows up and I need to replace a piece of hardware, I can just sync the VMs back across from the ZFS pool on the working machine and I'm back up and running again. The Ryzen one is only running an R7 1700 CPU and it does a great job of running 2 Windows VMs and 3 or 4 Linux ones at the same time. There's some really good compute power available if you're willing to run a server made of parts from a couple of generations ago.
|
# ? Jul 19, 2020 01:28 |
|
apropos man posted:I've never really noticed any drastic performance issues while I've been using SSD's, but my two home servers are rarely pushed to their limits.

Yeah, SSDs also have internal garbage collection that accomplishes the same thing, but slower and less efficiently. It can keep up as long as the drive has a decent amount of free space or data isn't being added and deleted constantly, which is why back in the day the guideline for SSDs was to partition them short with ~10% free space at the end. Trim is way better, so that advice has become superfluous.

Also, looking around the internet, the space-trimmed result from fstrim is *supposed* to represent free space on the device, because the process of trimming is to explicitly tell the drive which blocks are unused. But the individual filesystem is what reports that number back to fstrim, and they do whatever they want. Anyways, you should just ignore that.

DrDork posted:Yeah, the lack of auto trim seems really odd since Windows and MacOS have supported it for years now.

Ubuntu & Mint have had it set up automatically for years now. So it seems to be a thing where the expert distros assume you will sysadmin yourself.
|
# ? Jul 19, 2020 04:11 |
|
It probably has something to do with this line in the Arch wiki:

quote:Warning: Users need to be certain that their SSD supports TRIM before attempting to use it. Data loss can occur otherwise!

The kernel has a blacklist of SSDs with TRIM problems, but it's hard to be sure it includes them all. A whitelist would be too much work with so many new SSDs and firmware revisions. For MacOS it would be easy to whitelist the drives Apple has approved, and Microsoft is big enough that SSD manufacturers have to maintain white- and blacklists for them.
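If you want to check whether a particular drive actually advertises TRIM before enabling it, `lsblk --discard` reports each device's discard granularity and maximum; nonzero values mean the device accepts discards. A small sketch of parsing that output (the device names below are made up for illustration):

```shell
# lsblk --discard columns: NAME DISC-ALN DISC-GRAN DISC-MAX DISC-ZERO
# Nonzero DISC-GRAN ($3) and DISC-MAX ($4) mean TRIM/discard is supported.
supports_trim() {
    # $1 = device name; reads `lsblk --discard`-style lines on stdin.
    awk -v dev="$1" '$1 == dev {
        print (($3 != "0B" && $3 != "0" && $4 != "0B" && $4 != "0") ? "yes" : "no")
    }'
}

# Typical use:
#   lsblk --discard | supports_trim sda
```

This only tells you the device claims support; the kernel blacklist mentioned above exists precisely because some drives claim it and then misbehave.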
|
# ? Jul 19, 2020 15:08 |
|
I wonder when the last SSD was produced that didn't support TRIM, though. Maybe some OCZ model from 2012?
|
# ? Jul 19, 2020 16:13 |
|
DrDork posted:I wonder when the last SSD was produced that didn't support TRIM, though. Maybe some OCZ model from 2012? Well, that blacklist includes the Samsung 850, the 960GB Crucial M500 and several other drives I would have expected to be good, so never underestimate a manufacturer's capability to gently caress up.
|
# ? Jul 19, 2020 16:19 |
|
Klyith posted:Ubuntu & mint have it set up automatically, for years now. So it seems to be a thing where the expert distros assume you will sysadmin yourself.

I think that's a fair enough assumption, and I think it's good that Ubuntu and Mint enable it by default; those are the two distros I'd recommend to noobs who are just dipping their toes into Linux. I personally see Fedora as a good starter distro too, but knowing what I do now, I'd always point noobs towards Ubuntu.

Klyith posted:
Yeah, I guessed that it was the unused space it was reporting, after I'd run the command on my laptop and the little 64GB SanDisk m.2 drive in my HTPC. I thought I was doing something risky when I first ran it on my server yesterday, but I'm now of the opinion that the only harm it can do is add a tiny amount of writes over the lifetime of an SSD.

What I'm realising these days is that if you buy a modern SSD that has relatively good reviews, then that thing is gonna last a heck of a long time compared to spinning rust. Sure, there are always gonna be edge cases: you may buy a top-of-the-line Samsung SSD that fails after a month or two. But the hesitance we all had about the longevity of SSDs was just that they were a new thing and we expected the worst. As long as you're not using it to run an enterprise database or something, you're getting a lot of bang for your buck these days.
|
# ? Jul 19, 2020 16:41 |
|
Klyith posted:No. The best argument for M.2 NVMe adoption is that the markup isn't much anymore and you don't have to dick around with cables. Like a WD Blue SN550 1 TB is in the ballpark of some "basic" 1TB SATA SSDs so why not basically.
|
# ? Jul 20, 2020 21:51 |
|
sean10mm posted:The best argument for M.2 NVMe adoption is that the markup isn't much anymore and you don't have to dick around with cables. Like a WD Blue SN550 1 TB is in the ballpark of some "basic" 1TB SATA SSDs so why not basically.

Yeah, for most mid-range budget buyers that's pretty true: a WD SN550 is only $5 more than a WD Blue SATA. But for low-budget buyers you can get a cheap but acceptable SATA drive for $20 less than a WD Blue. Also there are still reasons to buy SATA drives, such as the fact that m.2 slots are a limited resource and the boards most people buy can't support more than 2 without adapters. (And some B_50 boards may be very limited even with adapters -- a 1x PCIe link is not much faster than SATA.) A 2.5" SATA drive is far more reusable.

Plus the factual answer is that no, NVMe isn't better in real-world results for any of those things right now. That doesn't mean people shouldn't buy NVMe, but it should inform whatever decision they're making.
|
# ? Jul 21, 2020 09:40 |
|
A lesson when buying Samsung: keep your original invoices. Samsung evidently likes to "lose" warranty information a few years in, and you can get stuck trying to *prove* your products are still covered.

For your consideration: a 1TB 850 EVO, purchased at Newegg during their Black Friday sale in the halcyon days of late November 2016. The drive was promptly registered upon receipt. Upon checking recently, I find the drive is ~just about~ to go off-warranty, when it should be covered until November 2021. So I copy a scan of the invoice off of the Newegg site and submit it for "Warranty Review." DENIED. Evidently, you must have ~all of these things~ on your proof of purchase or Samsung tells you no:

* Must not be in an editable format (ie word/excel/html); PDF, Jpeg, Gif, PNG format only.
* Must include the place of purchase on company letterhead, including the retailer's name, address, phone number or store number.
* Must include the date of purchase.
* Must include itemized pricing.
* Must confirm paid total; non-online purchases must include the payment method.
* The following are non-valid proofs of purchase, and as such, cannot update the date of purchase: credit/debit card statements, sales quotes or estimates, private purchases (item purchased from another customer or sold through an eBay type bidding site/used products), units sold As Is with the exception of Lowes or Famous Tate, or units refurbished through any vendor outside of Samsung.

Also, I just purchased a 500GB T5 Portable SSD last week. I tried registering it using the packing slip it came with, since I bought it online. DENIED. So on my way home from work I had to actually stop by an Office Depot, wasting their time and mine, to get a paper receipt I can submit as ~proper proof of purchase~.

In conclusion: gently caress Samsung, and maybe double-check that your warranty information is correct if you ever registered yours.
BIG HEADLINE fucked around with this message at 01:43 on Jul 22, 2020 |
# ? Jul 22, 2020 01:30 |
|
BIG HEADLINE posted:* Must not be in an editable format (ie word/excel/html); PDF, Jpeg, Gif, PNG format only.

Is there a reason you can't simply save your scan as one of those formats? I mean, I haven't dealt with Samsung in quite some time, but I've never had an issue just doing print-to-PDF and sending that in. Apparently the concept of any of Adobe's offerings isn't a big concern for them.

And, yeah, almost nowhere will accept a packing slip as proof of anything--always gotta send them something with the price paid on it.
|
# ? Jul 22, 2020 02:41 |
|
DrDork posted:Is there a reason you can't simply save your scan as one of those formats? I mean, I haven't dealt with Samsung in quite some time, but I've never had an issue just doing print to PDF and sending that in. Apparently the concept of any of Adobe's offerings isn't a big concern for them. I did change them into their requested format - they still kicked it because it didn't have Newegg's address on it. I guess I'll see next week if the Office Depot receipt is an ~invalid~ proof of purchase as well. -_- It still doesn't change the fact that Samsung apparently 'lost' my original registration information on the 850 EVO.
|
# ? Jul 22, 2020 03:08 |
|
https://www.amazon.com/TEAMGROUP-CARDEA-Liquid-Heatsink-TM8FP5001T0C119/dp/B07V6SDNZ5/
|
# ? Jul 22, 2020 03:12 |
|
Cygni posted:https://www.amazon.com/TEAMGROUP-CARDEA-Liquid-Heatsink-TM8FP5001T0C119/dp/B07V6SDNZ5/

I was gonna say "well, it's obviously useless since it doesn't connect to a CLC or anything, but hey, it looks kinda cool and it's actually priced reasonably." But then I thought about it--why would they have a capacity level marker for something that should never leak fluid? And then I read the review from Tweaktown.

quote:When you hold the drive against a light, you can see right through where the copper should make contact with the Phison E12 controller. This makes the cooler ineffective.

It's not just ineffective because it doesn't make good contact; it's literally not doing anything to cool the controller at all. Ouch.
|
# ? Jul 22, 2020 03:21 |
|
So it's not a water block, it's just a plastic thing full of liquid? In what world is a machined block of aluminium not cheaper and also a massively better performer?
|
# ? Jul 22, 2020 11:32 |
|
Thanks Ants posted:So it's not a water block, it's just a plastic thing full of liquid? In what world is a machined block of aluminium not cheaper and also a massively better performer? In the world where aftermarket heatsinks for nvme drives are pointless, and potentially worse for the drive lifespan. That thing is completely stupid, yet somehow less stupid than an actual nvme waterblock.
|
# ? Jul 22, 2020 11:53 |
|
Thanks Ants posted:So it's not a water block, it's just a plastic thing full of liquid? In what world is a machined block of aluminium not cheaper and also a massively better performer? Correct. And the aluminum heatsink part doesn't even contact the actual controller chip, meaning it's straight-up worse than a 5-cent heatspreader. It does look cool, though, which was clearly the only thing they were going for.
|
# ? Jul 22, 2020 14:28 |
|
I'm looking at moving my HDD to an SSD, and I wonder if there are any preparations I should do before I copy and move it? Like, do I need to defrag it or something like that?
|
# ? Jul 22, 2020 23:53 |
|
Tin Tim posted:I'm looking at moving my HD to an SD and I wonder if there's any preparations I should do before I copy and move it? Like do I need to defrag it or something like that? Nope
|
# ? Jul 23, 2020 00:06 |
|
Klyith posted:In the world where aftermarket heatsinks for nvme drives are pointless, and potentially worse for the drive lifespan. From what I've heard, a heatsink that *only* contacts the controller is preferable, since the NAND is designed to run hot.
|
# ? Jul 23, 2020 00:34 |
|
not to mention that you're never going to heatsink nand packages on the underside of the stick. Even a lot of very well designed, very high density enterprise PCIe nvme devices don't bother with heatsinking nand packages on the... uh, would you call it the ventral? side of the card
|
# ? Jul 23, 2020 00:41 |
|
Bob Morales posted:Nope
|
# ? Jul 23, 2020 00:45 |
|
these are devices that are intended to run in human occupancy spaces and reasonably specced data centers water cooling--and even component-specific air cooling--is extraneous in this design use case let the engineers who have to design systems for such applications as unshielded reentry, polar exploration, or the mud of groverhaus worry about nvme thermals edit: joking aside, nvme thermals are a hyperscaler's problem Potato Salad fucked around with this message at 17:19 on Jul 23, 2020 |
# ? Jul 23, 2020 00:46 |
|
BIG HEADLINE posted:From what I've heard, a heatsink that *only* contacts the controller is preferable, since the NAND is designed to run hot.

If you wanted to put a heatsink on an nvme drive, then yes. But you don't need to put a heatsink on an nvme drive. Full stop.

The one activity that makes an nvme controller overheat is maximum sustained sequential reads. The only things that do maximum sustained sequential reads for long enough to throttle the controller are benchmark software, and maybe something like checksumming very large files. Copying a big file doesn't do it unless your destination is a device that can write as fast as your nvme drive can read (nvme raid 0 array? giant ramdisk? /dev/null?). In normal, real-world use it is not an issue.
|
# ? Jul 23, 2020 10:27 |
|
Just check the temps your drives are operating at. If it's ever hitting throttle temps (75C or so? afaik), stick a ramsink on the controller. Ofc the real solution is to have case airflow
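Checking that from a script is easy enough: `smartctl -A` reports a temperature line on most drives. A small sketch, with the parsing separated out so it just reads smartctl-style output on stdin; the 75C figure is the rough throttle guess from above, not a spec value, and it handles both the NVMe-style line and the classic ATA attribute row.

```shell
# Print a drive's current temperature from smartctl -A output.
# NVMe log format:  "Temperature:    36 Celsius"
# ATA attribute:    "194 Temperature_Celsius ... RAW_VALUE" (10th field)
drive_temp() {
    awk '$1 == "Temperature:" { print $2; exit }
         $2 == "Temperature_Celsius" { print $10; exit }'
}

# Warn at/above an assumed ~75C throttle point, otherwise report OK.
warn_if_hot() {
    t=$(drive_temp)
    [ "${t:-0}" -ge 75 ] && echo "WARNING: drive at ${t}C" || echo "OK: ${t}C"
}

# Typical use:
#   smartctl -A /dev/nvme0 | warn_if_hot
```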
|
# ? Jul 23, 2020 15:18 |
|
What is a good S.M.A.R.T. / diagnostic tool for a Crucial SSD? Update 2004 took nine hours to install so I want to make sure the throughput is good and that I didn't get a lemon.
|
# ? Aug 5, 2020 19:35 |
|
Should most of these be basically interchangeable? Except the Kingston apparently has way worse speeds? (unless those are SATA2 numbers for some reason) This would be for a secondary computer, so it doesn't have to be top quality as long as it's not junk.
|
# ? Aug 5, 2020 20:31 |
|
Those are SATA2 speeds, and I don't believe any such SSD was ever made?
|
# ? Aug 5, 2020 20:43 |
|
Rinkles posted:Should most of these be basically interchangeable? At this price point every single one of these will be a DRAM-less drive. Spend more and get this: https://www.amazon.com/ADATA-SU800-256GB-3D-NAND-ASU800SS-256GT-C/dp/B01K8A2A0E
|
# ? Aug 5, 2020 20:48 |
|
|
BIG HEADLINE posted:At this price point every single one of these will be a DRAM-less drive. How do you check?
|
# ? Aug 5, 2020 20:51 |