|
Moey posted:Why is that? I thought everything related to SMR was handled by the drive's controller/firmware?

It is, but the firmware would still prefer to know the difference between "please write this data (all zeros)" and "this block no longer matters (you can reclaim it on the next pass)". SMR drives have block-rewrite characteristics similar to the way SSDs handle erases: they can't rewrite a single block in place, only a whole band of shingled tracks, just like SSDs have to erase and reprogram a whole block at a time. Or at least that's what I'm going with after hearing that SMR drives work well with TRIM.
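For intuition, here's a toy model of a shingled zone (purely illustrative - real SMR firmware works on zones with persistent caches and sector remapping, none of which is modeled here). Rewriting one track clobbers the tracks shingled over it, so every later track that still holds live data has to be rewritten too - unless the host has trimmed it:

```python
# Toy model of one SMR zone: tracks overlap, so rewriting track i forces a
# read-modify-write of every later track in the zone that still holds live
# data.  A trimmed track holds nothing, so it costs nothing to clobber.
class SmrZone:
    def __init__(self, tracks):
        self.data = [None] * tracks      # None = trimmed/empty
        self.rewrites = 0                # tracks physically written

    def trim(self, i):
        self.data[i] = None              # host says: this block is dead

    def write(self, i, value):
        self.data[i] = value
        self.rewrites += 1               # the track we actually wanted
        for j in range(i + 1, len(self.data)):
            if self.data[j] is not None:
                self.rewrites += 1       # read-modify-write of live neighbor

# Sequential fill is cheap: each write only touches its own track.
zone = SmrZone(8)
for i in range(8):
    zone.write(i, f"block{i}")
full_cost = zone.rewrites                # 8

# Now fill a second zone, trim the back half, and rewrite track 3.
zone2 = SmrZone(8)
for i in range(8):
    zone2.write(i, f"block{i}")
for i in range(4, 8):
    zone2.trim(i)
zone2.rewrites = 0
zone2.write(3, "new")
print("rewrite cost after TRIM:", zone2.rewrites)   # 1
```

With tracks 4-7 trimmed, rewriting track 3 costs one track write instead of five - which is roughly why the firmware cares about TRIM even though it hides everything else.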
|
# ? Aug 16, 2018 02:52 |
|
Moey posted:Why is that? I thought everything related to SMR was handled by the drives controller/firmware?
|
|
# ? Aug 16, 2018 11:05 |
|
Moey posted:Why is that? I thought everything related to SMR was handled by the drives controller/firmware? Star War Sex Parrot fucked around with this message at 12:03 on Aug 16, 2018 |
# ? Aug 16, 2018 12:01 |
|
D. Ebdrup posted:I'm not sure I'm the best to advise you when it comes to Linux - on FreeBSD I'd just throw Dtrace at the problem until I got some meaningful numbers for explaining what's wrong, but I got rid of my last Linux machine recently (hts/tvheadend plugin for kodi finally hit FreeBSD ports after being absent when the namechange and big version bump from Kodi which broke ABI compatibility was made).

I created the pool layout like that basically from the N^2+stripe belief in the second link. Originally the 4TB pool was 4 + /Z2, and I brought the 8s to be 8 + /Z2, but then changed my mind and redid the pools. (The original system is from 2014; I added the 8TB drives in January.)

OK, I did find some interesting things.

8TB drives, 8 in the external array and 2 internal:

pre:dd if=/dev/zero of=speetest bs=1M count=10000 conv=fdatasync
10000+0 records in
10000+0 records out
10485760000 bytes (10 GB, 9.8 GiB) copied, 18.906 s, 555 MB/s

(A backup with a sustained read of about 100 Mbit was running while doing this.)

4TB drives, all internal:

pre:dd if=/dev/zero of=speetest bs=1M count=10000 conv=fdatasync
10000+0 records in
10000+0 records out
10485760000 bytes (10 GB, 9.8 GiB) copied, 78.5028 s, 134 MB/s

iostat:

pre:Linux 4.15.17-2-pve (tso) 08/16/2018 _x86_64_ (8 CPU)

avg-cpu:  %user  %nice  %system  %iowait  %steal  %idle
           3.83   0.13     1.76     4.70    0.00  89.58

Device:  tps    kB_read/s  kB_wrtn/s  kB_read      kB_wrtn
sda      79.52    1910.64     242.72  12455506354  1582272316
sdh      79.50    1909.18     241.69  12445986450  1575556148
sdd      55.89     719.94     661.35   4693331612  4311375924
sdb      55.60     720.35     661.35   4696008204  4311383704
sdc      55.61     690.28     647.98   4499966944  4224173544
sde      54.46     693.81     648.10   4522962432  4225009008
sdg      55.42     690.90     648.01   4503982892  4224372704
sdf      55.46     721.06     661.35   4700600484  4311347472
sdi      21.54      27.79     302.22    181189027  1970191064
sdj      20.51      27.54     302.22    179533367  1970191064
sdk      67.08    1909.21     242.72  12446186494  1582275684
sdl      66.82    1907.71     241.72  12436424914  1575791488
sdm      66.83    1907.72     241.72  12436492590  1575786240
sdn      67.11    1908.93     242.71  12444371782  1582260820
sdo      67.04    1909.01     242.72  12444902186  1582268112
sdp      66.92    1907.79     241.72  12436961138  1575780244
sdq      67.12    1909.10     242.71  12445447754  1582257472
sdr      66.69    1907.60     241.72  12435723274  1575776352
zd0       1.66       1.98       4.82     12906872    31410620
zd16      0.00       0.00       0.00           20           4

zfs get all for the two datastores:

pre:NAME PROPERTY VALUE SOURCE
Main-Volume/subvol-103-disk-1 type filesystem -
Main-Volume/subvol-103-disk-1 creation Sun Jan 28 8:51 2018 -
Main-Volume/subvol-103-disk-1 used 11.9G -
Main-Volume/subvol-103-disk-1 available 30.3G -
Main-Volume/subvol-103-disk-1 referenced 1.69G -
Main-Volume/subvol-103-disk-1 compressratio 1.00x -
Main-Volume/subvol-103-disk-1 mounted yes -
Main-Volume/subvol-103-disk-1 quota none default
Main-Volume/subvol-103-disk-1 reservation none default
Main-Volume/subvol-103-disk-1 recordsize 128K default
Main-Volume/subvol-103-disk-1 mountpoint /Main-Volume/subvol-103-disk-1 default
Main-Volume/subvol-103-disk-1 sharenfs rw=@192.168.86.0/24 inherited from Main-Volume
Main-Volume/subvol-103-disk-1 checksum on default
Main-Volume/subvol-103-disk-1 compression off default
Main-Volume/subvol-103-disk-1 atime on default
Main-Volume/subvol-103-disk-1 devices on default
Main-Volume/subvol-103-disk-1 exec on default
Main-Volume/subvol-103-disk-1 setuid on default
Main-Volume/subvol-103-disk-1 readonly off default
Main-Volume/subvol-103-disk-1 zoned off default
Main-Volume/subvol-103-disk-1 snapdir hidden default
Main-Volume/subvol-103-disk-1 aclinherit restricted default
Main-Volume/subvol-103-disk-1 createtxg 131698 -
Main-Volume/subvol-103-disk-1 canmount on default
Main-Volume/subvol-103-disk-1 xattr sa local
Main-Volume/subvol-103-disk-1 copies 1 default
Main-Volume/subvol-103-disk-1 version 5 -
Main-Volume/subvol-103-disk-1 utf8only off -
Main-Volume/subvol-103-disk-1 normalization none -
Main-Volume/subvol-103-disk-1 casesensitivity sensitive -
Main-Volume/subvol-103-disk-1 vscan off default
Main-Volume/subvol-103-disk-1 nbmand off default
Main-Volume/subvol-103-disk-1 sharesmb on inherited from Main-Volume
Main-Volume/subvol-103-disk-1 refquota 32G local
Main-Volume/subvol-103-disk-1 refreservation none default
Main-Volume/subvol-103-disk-1 guid 15933573363077316406 -
Main-Volume/subvol-103-disk-1 primarycache all default
Main-Volume/subvol-103-disk-1 secondarycache all default
Main-Volume/subvol-103-disk-1 usedbysnapshots 10.3G -
Main-Volume/subvol-103-disk-1 usedbydataset 1.69G -
Main-Volume/subvol-103-disk-1 usedbychildren 0B -
Main-Volume/subvol-103-disk-1 usedbyrefreservation 0B -
Main-Volume/subvol-103-disk-1 logbias latency default
Main-Volume/subvol-103-disk-1 dedup off default
Main-Volume/subvol-103-disk-1 mlslabel none default
Main-Volume/subvol-103-disk-1 sync standard default
Main-Volume/subvol-103-disk-1 dnodesize legacy default
Main-Volume/subvol-103-disk-1 refcompressratio 1.00x -
Main-Volume/subvol-103-disk-1 written 780K -
Main-Volume/subvol-103-disk-1 logicalused 11.5G -
Main-Volume/subvol-103-disk-1 logicalreferenced 1.48G -
Main-Volume/subvol-103-disk-1 volmode default default
Main-Volume/subvol-103-disk-1 filesystem_limit none default
Main-Volume/subvol-103-disk-1 snapshot_limit none default
Main-Volume/subvol-103-disk-1 filesystem_count none default
Main-Volume/subvol-103-disk-1 snapshot_count none default
Main-Volume/subvol-103-disk-1 snapdev hidden default
Main-Volume/subvol-103-disk-1 acltype posixacl local
Main-Volume/subvol-103-disk-1 context none default
Main-Volume/subvol-103-disk-1 fscontext none default
Main-Volume/subvol-103-disk-1 defcontext none default
Main-Volume/subvol-103-disk-1 rootcontext none default
Main-Volume/subvol-103-disk-1 relatime off default
Main-Volume/subvol-103-disk-1 redundant_metadata all default
Main-Volume/subvol-103-disk-1 overlay off default

NAME PROPERTY VALUE SOURCE
datastore/Media type filesystem -
datastore/Media creation Sun Jan 21 8:38 2018 -
datastore/Media used 20.3T -
datastore/Media available 41.5T -
datastore/Media referenced 18.0T -
datastore/Media compressratio 1.00x -
datastore/Media mounted yes -
datastore/Media quota none default
datastore/Media reservation none default
datastore/Media recordsize 128K default
datastore/Media mountpoint /datastore/Media default
datastore/Media sharenfs rw=@192.168.86.0/24 inherited from datastore
datastore/Media checksum on default
datastore/Media compression off received
datastore/Media atime off received
datastore/Media devices on default
datastore/Media exec on default
datastore/Media setuid on default
datastore/Media readonly off default
datastore/Media zoned off default
datastore/Media snapdir hidden default
datastore/Media aclinherit passthrough inherited from datastore
datastore/Media createtxg 243 -
datastore/Media canmount on default
datastore/Media xattr on default
datastore/Media copies 1 default
datastore/Media version 5 -
datastore/Media utf8only off -
datastore/Media normalization none -
datastore/Media casesensitivity sensitive -
datastore/Media vscan off default
datastore/Media nbmand off default
datastore/Media sharesmb on inherited from datastore
datastore/Media refquota none default
datastore/Media refreservation none default
datastore/Media guid 9149317449859012954 -
datastore/Media primarycache all default
datastore/Media secondarycache all default
datastore/Media usedbysnapshots 2.28T -
datastore/Media usedbydataset 18.0T -
datastore/Media usedbychildren 0B -
datastore/Media usedbyrefreservation 0B -
datastore/Media logbias latency default
datastore/Media dedup off received
datastore/Media mlslabel none default
datastore/Media sync standard default
datastore/Media dnodesize legacy default
datastore/Media refcompressratio 1.00x -
datastore/Media written 37.4M -
datastore/Media logicalused 20.3T -
datastore/Media logicalreferenced 18.0T -
datastore/Media volmode default default
datastore/Media filesystem_limit none default
datastore/Media snapshot_limit none default
datastore/Media filesystem_count none default
datastore/Media snapshot_count none default
datastore/Media snapdev hidden default
datastore/Media acltype off default
datastore/Media context none default
datastore/Media fscontext none default
datastore/Media defcontext none default
datastore/Media rootcontext none default
datastore/Media relatime off default
datastore/Media redundant_metadata all default
datastore/Media overlay off default
datastore/Media org.freebsd.ioc:active yes inherited from datastore

The 4TBs are WDC WD40EFRX-68WT0N0. Not sure why the purely internal drives are 1/5th the speed of the mixed internal/external set. What the hell am I missing / what do I try or look at next?
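One caveat on those numbers: dd from /dev/zero measures a best-case single-threaded sequential write (compression is off on these datasets, but ARC caching still skews things). For a second opinion, a fio job along these lines gives steadier numbers - the job name and directory here are made up, point it at whichever pool you're testing:

```ini
; hypothetical fio job file - run with: fio seqwrite.fio
[seqwrite]
rw=write
bs=1M
size=10g
directory=/datastore/Media
ioengine=psync
end_fsync=1
```

end_fsync=1 makes fio flush before reporting, so the number isn't inflated by data still sitting in RAM.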
|
# ? Aug 16, 2018 15:04 |
Hughlander posted:I created the pool layout like that basically from the N^2+stripe belief in the second link. Originally the 4TB pool was 4 + /Z2, and I brought the 8s to be 8 + /Z2. But then changed my mind and redid the pools. (The original system is from 2014, in January I added the 8TB drives.)

What controller does the motherboard use? I've heard of Marvell controllers that either stall I/O when overloaded or, more problematically, actually drop connections - so depending on how Linux handles devices and how your pool is set up, you could try pulling the four internal disks and hooking them up over USB3 (at least on FreeBSD and Solaris, ZFS doesn't care how the disks are accessed) or to an HBA if you've got a spare one. Otherwise, I'm really drawing a blank here.
|
|
# ? Aug 16, 2018 18:42 |
|
D. Ebdrup posted:Ah, I thought you had two vdevs in the same pool, but you have two separate pools?

I have 2 zpools, each /Z1. One is 10 8TB Reds, the other is 6 4TB Reds. The 4TBs are directly connected to a Supermicro MBD-X10SL7-F-O, which has an LSI 2308 in IT mode. The 8TBs are attached via an LSI LSI00188 9200-8e PCI Express low-profile SATA/SAS controller card (also flashed to IT mode) and a Sans Digital TR8X6G JBOD enclosure. I should see if I have any PCI slots open - I could try another LSI in IT mode and see if that changes anything.
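If you do go shuffling cards around, it's worth confirming which controller each pool member actually sits behind first. On Linux the /sys/block symlinks encode the PCI path; here's a rough sketch (the parsing is naive, and the device names are whatever your system happens to use):

```python
import glob
import os

# Resolve each sdX entry under /sys/block; the real path threads through
# the PCI address of its controller (e.g. .../0000:02:00.0/...).
def controller_of(sysblock_path):
    real = os.path.realpath(sysblock_path)
    # PCI functions look like 0000:02:00.0 - two colons plus a dot,
    # which distinguishes them from SCSI target names like target0:0:0
    parts = [p for p in real.split('/') if p.count(':') == 2 and '.' in p]
    return parts[-1] if parts else 'unknown'

for dev in sorted(glob.glob('/sys/block/sd*')):
    print(os.path.basename(dev), '->', controller_of(dev))
```

If the slow pool's disks all resolve to one PCI function and the fast pool's to another, that points straight at the controller rather than the drives.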
|
# ? Aug 16, 2018 20:06 |
|
Storage goons, how long can a drive run in one orientation before it's risky to change it to another? I have a 10TB WD that's about a year old with only light usage (media drive for watching stuff) that I want to switch orientations in a new case, but I don't want to run into any issues or shorten its lifespan. Is there a threshold, either in hours powered on or data written, in a single orientation? Thanks in advance.
|
# ? Aug 17, 2018 01:03 |
|
Shrimp or Shrimps posted:Storage goons, how long after a drive has run in one orientation should it not be changed to another. You mean physical orientation? Just mount it. It's not a liquid, it isn't going to care.
|
# ? Aug 17, 2018 01:49 |
|
sharkytm posted:You mean physical orientation? Just mount it.

Yes, I mean swapping from horizontal label-up to vertical label-side. I thought I might have read that if a drive's been used for a long time in one orientation, moving it to a different one (horizontal -> vertical or vice versa) can end up shortening its lifespan because the bearings are "used to" one orientation, or whatever. I do know that when I swapped my 6+ year old 2TB spinner from horizontal to vertical, it started making much more noise than it used to - I figured that was just the drive going tits up, and I replaced it with my current 10TB spinner, which I've only been using for a year. I should say the 2TB spinner stopped making the extra noise when I mounted it back in the horizontal position.
|
# ? Aug 17, 2018 02:35 |
|
I have a good 10TB of data that's not likely to expand at a fast rate. I plan on doing zfs raid using ubuntu server. My goal is to use this as a plex/media/backup/storage server. What raid scheme is recommended? System specs:
|
# ? Aug 17, 2018 04:13 |
|
Shrimp or Shrimps posted:Yes, I mean swapping from horizontal label-up to vertical label-side.

Yeah, that's psychosomatic. Think about every laptop hard drive ever. Disks literally ride on a buffer of air from the platter spinning to keep the head from crashing. Orientation hardly matters.
|
# ? Aug 17, 2018 04:21 |
|
Rotate your drives monthly for even bearing wear! Don't.
|
# ? Aug 17, 2018 04:25 |
|
I keep my drives constantly rotating on all three axes for even wear.
|
# ? Aug 17, 2018 04:27 |
|
H110Hawk posted:Yeah that's psychosomatic. Think about every laptop hard drive ever.

Excellent, then I shall not worry and get the new PC case! Though I would suspect laptop hard drives don't really get used while vertical, and the x360-style laptops all use SSDs.

Edit: FWIW, the reason I asked is because I read this from WD Support: https://community.wd.com/t/change-mounting-orientation-after-long-usage/214990/4 Dated 2017. I know this was a "thing" back in the 2000s.

Shrimp or Shrimps fucked around with this message at 04:35 on Aug 17, 2018 |
# ? Aug 17, 2018 04:31 |
|
Shrimp or Shrimps posted:Excellent, than I shall not worry and get the new PC case! Laptop hard drives get shaken around in all directions all the time. It's the worst thing for them. SSDs are basically brand new in comparison. I don't know what a x360 laptop is but you're overthinking it.
|
# ? Aug 17, 2018 05:30 |
H110Hawk posted:Disks literally use a buffer of air from the platter spinning to hold the head off of crashing. Direction hardly matters.

The only thing you really shouldn't do with disks is rotate them about an axis perpendicular to the platter rotation while they're spinning, because of the gyroscopic precession this causes. As long as they stay within the vibration limits they're rated for, there's very little that can go wrong.
|
|
# ? Aug 17, 2018 17:59 |
|
PUBLIC TOILET posted:I'm hoping someone can shine some light on this. I posted about it on reddit (unfortunately.) I want to run bare-metal Hyper-V Server with a Server (GUI) VM. I have 2x 4TB drives for data storage that I want passed to the VM. The VM will maintain the shared folders, file downloads, etc. I don't know enough about Storage Spaces and Hyper-V disk management to determine the best approach. From all of the research I've done so far, it sounds like this may be the best approach, but it leaves a bunch of questions and ifs I have about it:

Pass-through would probably be the recommended approach; I'm running Hyper-V Server core and pass everything to a NAS VM. I have an SSD that the host/guests run off, which Hyper-V is in charge of, and the rest are passed through. I'm not entirely sure what you're asking, though, and didn't click on the Reddit post. Are you trying to run the guest on a pass-through disk or something? Because that wouldn't work.
|
# ? Aug 17, 2018 18:27 |
|
D. Ebdrup posted:Fun fact: the distance between the platter and the read head is actually something like 700nm - or possibly smaller since it's been a few years since I heard that number - so there's not a whole lot of air molecules functioning as a buffer.

Yeah, don't move the disk while it's operational. Laptop drives aggressively park their heads, which is one reason they can feel really slow to use if you aren't perfectly still with the laptop. It was fun to pull dead disks out of a NetApp by just ripping one out and twisting it around - you could feel the centripetal (I don't care if I'm wrong, you know what I mean) force of the platters still going at somewhere close to 10k rpm. Not that I would ever do that.
|
# ? Aug 17, 2018 19:10 |
|
How does the cadence of Synology's release cycle work? I'm thinking of pulling the trigger on a DS918+, primarily as a Plex/sabnzbd/associated-apps server. I guess I'm used to Apple products, where it's pretty well known when and how they release things and what time of year it's a good idea to wait to buy something (i.e. right now is not a great time to buy an iPhone). Is Synology similar?
|
# ? Aug 17, 2018 20:52 |
H110Hawk posted:Yeah don't move the disk while operational. Laptop drives aggressively park their heads, which is one reason they can feel really slow to use if you aren't perfectly still with the laptop. Unfortunately, that also means there's a potential for corrupt writes, if you're not using ZFS - how's that for getting back on-topic?
|
|
# ? Aug 17, 2018 21:55 |
|
So I encountered the dumbest thing. Synology 5-bay device - a standalone file server built for convenience. Boot it up, hit the button to start setup: "Failed to format the drive (35)". The solution is to... remove the drives and make sure to format them and remove their partitions on a separate PC. An all-in-one file server that can't automatically format? I've seen bash scripts do this better...
|
# ? Aug 18, 2018 03:09 |
|
re: orientation and wear, it might have been true once but I think it isn't now. HDD spindles once used ball bearings, which do wear out, and it's easy to imagine how ball bearings and their races might wear into a preferred orientation. However, it's been at least 10 years since anyone shipped a ball bearing HDD spindle. The whole industry switched over to fluid dynamic bearings (FDBs). The thing about FDBs is that there is no metal to metal contact, and, consequently, essentially zero metal wear. The bearing is a very thin film of oil that fully separates the metal surfaces. They probably won't take a set no matter what orientation you run them in. Take all the above with a grain of salt since I'm not a mechE
|
# ? Aug 18, 2018 10:20 |
There are more moving parts in spinning-rust disks than just the platters themselves; the whole contraption for moving the heads is only rated for a certain number of load/unload cycles, and when people insist on using power-saving while disks are idle, you sometimes find that those load/unload cycles are a lot fewer than you bargained for.
|
|
# ? Aug 18, 2018 12:16 |
|
A sure way to kill a drive is to operate it heavily and then drop the sucker on the floor in the middle of it. Pretty much how I killed an old external drive of mine while I was backing up files to it. The sounds it made were... not pleasant. Even though gyroscopes and such exist to suspend the drive’s mechanical functions quickly if it’s detected to be in free-fall, the rough impact alone can cause some serious damage.
|
# ? Aug 18, 2018 18:36 |
|
necrobobsledder posted:Even though gyroscopes and such exist to suspend the drive’s mechanical functions quickly if it’s detected to be in free-fall, the rough impact alone can cause some serious damage.

The vast majority of drives on the market are not equipped with internal free-fall sensors.
|
# ? Aug 18, 2018 19:07 |
|
Star War Sex Parrot posted:The vast majority of drives on the market are not equipped with internal free-fall sensors. This. That was a fad that quickly died out with SSDs.
|
# ? Aug 19, 2018 10:02 |
|
It's obnoxious to upgrade HP laptops from spinners to SSDs because you still have to install that stupid 3D DriveGuard driver to fully reconcile Device Manager.
|
# ? Aug 20, 2018 07:17 |
|
I'm currently running an old Synology DS214 from a few years back and am looking to upgrade. I'm running 2x 4TB WD Reds from 2017 and running out of space using Synology's SHR raid thing. Would it make sense to upgrade to the DS918+ if I'm looking to expand storage and maybe want the option of streaming Plex through it? I currently have a standalone system that does nothing except run a Crashplan VM and serve out Plex to my clients, so I'm hoping the 918+ would be able to do this too.
|
# ? Aug 21, 2018 14:31 |
|
Maybe others can chime in, but I'm really not a fan of running Plex off a purpose-built NAS like Synology. Don't get me wrong, I love their NASes and have two of them. They just aren't powerful enough to do any serious transcoding and if you start having clients that can't Direct Stream or Direct Play things can go south very quickly. They can do hardware transcoding, but again, not in an ideal fashion. I'd rather have my storage and compute separate. I personally use a Synology NAS for storage and a Shield for my main client and server. If I needed more power than that, I'd likely build a little server out of a NUC or something small with a ThreadRipper in it. Internet Explorer fucked around with this message at 15:10 on Aug 21, 2018 |
# ? Aug 21, 2018 15:01 |
|
Internet Explorer posted:Maybe others can chime in, but I'm really not a fan of running Plex off a purpose-built NAS like Synology. Don't get me wrong, I love their NASes and have two of them. They just aren't powerful enough to do any serious transcoding and if you start having clients that can't Direct Stream or Direct Play things can go south very quickly. They can do hardware transcoding, but again, not in an ideal fashion. I'd rather have my storage and compute separate.

For the longest time I wondered why you would want to turn on the Plex option that makes a streamable version of every single piece of media instead of transcoding on the fly. I'm ashamed it wasn't until I got a Synology last weekend that I realized those devices kind of suck at any heavy processing not related to NAS duties. I got this NAS to store my static files that aren't video/audio media, as a semi-expensive way to clear up space on my main Ubuntu ZFS file server. The Xeon in that box has power, but I put in a cheap Nvidia 730 GPU to enable Plex GPU transcoding and it loads so much faster on my phone. I read posts on Plex's forums about how people's CPUs are pegged whenever Plex transcodes anything, and it's like... well, the CPU in these NASes is much weaker than what's in your phone now. What did you expect?
|
# ? Aug 21, 2018 15:35 |
|
Synology underpower their boxes across the range, especially considering the claims they make. Even the business-oriented boxes will come with a quad-core Atom and 4GB RAM and choke as soon as you actually do any of the stuff that they advertise their OS as being well suited for.
|
# ? Aug 21, 2018 21:05 |
|
A ~$100 Reolink plus a $50 Surveillance Station license is the easiest way to add a camera to my Synology DS418?
|
# ? Aug 22, 2018 20:20 |
|
Hadlock posted:A ~$100 reolink plus a $50 surveillancestation license is the easiest way to go with adding a camera to my Synology DS418?
|
# ? Aug 22, 2018 22:12 |
|
Heck yeah. We're going on a trip, and the last time I left the cats to their own devices I rigged up a crude webcam that failed halfway through the trip. This seems like a more sensible solution. Looking on Amazon there are about eleventy billion security cameras available; finding the sweet spot of reputable, compatible, and reasonably priced is overwhelming.
|
# ? Aug 22, 2018 22:21 |
|
Hadlock posted:Heck yeah. We're going on a trip and the last time I left the cats to their own devices I rigged up a crude web cam that failed halfway through the trip. This seems like a more sensible solution.

Skip reputable, and segment it from the rest of your LAN and the internet at large. I've got a Foscam PTZ camera; it doesn't have internet access at the router level. https://smile.amazon.com/gp/product/B00I9M4HBO/ref=oh_aui_search_detailpage?ie=UTF8&psc=1 It phones home to China if allowed on the internet.
|
# ? Aug 22, 2018 22:41 |
|
How worried should I be about FreeNAS screaming critical errors about some random Python files on the boot USB? We've already had one USB completely die on us, but that did give us a good 'dry' run at restoring the config backup to a fresh ISO and that seemed to work fine. But now on this 2nd USB it almost immediately started throwing errors at 2 seemingly random files. Based on the file paths I can't tell if they're even being used, and as far as we can tell everything else about the NAS is working fine. The data HDs are coming back clean so I'm not super worried, but I don't like red lights flashing at me in the UI Unrelated question, is it normal for your NAS to just hang out consuming all its RAM? The first time I moved a significant amount of data over to the NAS it spiked its 8gb RAM for the duration of the transfer. So I picked up another 8gb off eBay and stuck it in, but then it just used 16gb instead of 8. The Reports tab is showing about 14.5gb Wired all the time, but I'm not sure what that means or if it's something to be concerned about. Everything else in the system (CPU, disk activity etc) almost never breaks double digits of % usage.
|
# ? Aug 24, 2018 01:18 |
|
Takes No Damage posted:How worried should I be about FreeNAS screaming critical errors about some random Python files on the boot USB? We've already had one USB completely die on us, but that did give us a good 'dry' run at restoring the config backup to a fresh ISO and that seemed to work fine. But now on this 2nd USB it almost immediately started throwing errors at 2 seemingly random files. Based on the file paths I can't tell if they're even being used, and as far as we can tell everything else about the NAS is working fine.

Sounds like a lovely USB port. If you mean the output of 'free', then look at the cached number - that's free memory too. Basically yes, that's how Linux/FreeBSD work.
|
# ? Aug 24, 2018 01:42 |
|
https://www.linuxatemyram.com/ Unused memory is wasted memory.
|
# ? Aug 24, 2018 02:39 |
|
Sheep posted:https://www.linuxatemyram.com/ This. Any decent file server is gonna use every last byte of RAM to cache files so it doesn't have to touch the disks unless it absolutely has to.
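On FreeNAS that "Wired" figure is mostly the ZFS ARC, which shrinks when something else needs memory, so ~14.5GB wired on a 16GB box is the system working as intended rather than a leak. The Linux flavor of the same check is /proc/meminfo - a rough sketch (field handling simplified):

```python
import os

# Parse /proc/meminfo: most "used" RAM is reclaimable page cache,
# so MemAvailable is the number that matters, not MemFree.
def mem_summary(path='/proc/meminfo'):
    info = {}
    with open(path) as f:
        for line in f:
            key, rest = line.split(':', 1)
            info[key] = int(rest.split()[0])  # values are in kB
    total = info['MemTotal']
    avail = info.get('MemAvailable', info['MemFree'])
    return total, avail, info.get('Cached', 0)

if os.path.exists('/proc/meminfo'):
    total, avail, cached = mem_summary()
    print(f'total={total} kB  available={avail} kB  page-cache={cached} kB')
```

On a busy file server, available stays healthy even while free looks tiny - the cache is handed back the moment anything else asks for it.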
|
# ? Aug 24, 2018 03:48 |
|
|
H110Hawk posted:Sounds like a lovely USB port. We are still trying to use the internal USB slot, since the mobo only has 2 more on the rear panel. But I can't think of what else we would need plugged in back there besides the keyboard so maybe for this next FreeNAS update I'll flash a fresh stick and try one of the back ports. Sheep posted:https://www.linuxatemyram.com/ Except it was my rear end, thanks.
|
# ? Aug 24, 2018 19:30 |