Saukkis
May 16, 2003

Another option would be to allow access to your Synology only from the IP address used at your work. That would still effectively block it from the rest of the internet, but I suspect the Google router doesn't support this.
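If the router can't filter by source address, the same restriction can usually be applied on the Synology itself. A minimal iptables sketch, assuming (hypothetically) that the work network's public IP is 198.51.100.10 and the web interface listens on port 5001:

    # Accept the web interface only from the work IP (placeholder address),
    # then drop the same port for everyone else on the internet.
    iptables -A INPUT -p tcp -s 198.51.100.10 --dport 5001 -j ACCEPT
    iptables -A INPUT -p tcp --dport 5001 -j DROP

DSM's own firewall settings can express the same rule without touching the command line.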


Saukkis
May 16, 2003


WWWWWWWWWWWWWWWWWW posted:

They are all Western Digital USB drives. You are correct that they are just SATA drives inside. I've already shucked two of them, but I can't do that to the rest because there's no more physical room in my case.

Well, you could always extend your case and build an external drive stand. That's what I did a decade ago when I was in your situation: a small wooden plank and four aluminum L-bars standing upright, drilled with holes for HDD screws. A bunch of hard drives screwed in, with the SATA and IDE cables running into the case.

Saukkis
May 16, 2003


WWWWWWWWWWWWWWWWWW posted:

I am in fact intrigued by your idea of the low-powered 2nd PC as a file server, though. Is there some sort of box that will take 8 or so hard drives that I would run Linux on or something? I know of NAS devices of course, but the reason I went with a Windows PC was because it's what I already had: I use it to run games, and I have it set up with all sorts of stuff like Handbrake, automated tasks, and Steam games. So while, yeah, it is taking up more electricity than a NAS box, it's doing a lot more, and I don't have a separate gaming PC on top of that like most people do, so I feel like it evens out. What would this 2nd low-powered PC with all my external drives shucked and installed into it accomplish anyway? I am not trying to be snarky; I am genuinely asking. Is there a benefit beyond it looking nicer?

You already made your decision, but I'd like to give my view on this question. The biggest reason for having a separate file server is not being limited to Windows. You would have all these options, Unraid/FreeNAS/Linux/MDADM/LVM/ZFS, and could choose the best storage method for your needs.

Another major benefit is the ability to put it in the basement or a closet, out of sight and out of earshot. Then it doesn't matter much how much noise and heat it generates or how unsightly it looks. You had trouble finding a case that could fit all your drives; I wouldn't even have tried. I would have just split the drives between two cases, removed the side panels, and put them facing each other. If I wanted to be fancy I might have bolted the cases together with hinges. Below is a picture of my setup from 11 years ago. Cheap, and it worked just fine for all those years. That case is still serving the same duty, but with bigger drives I was able to fit them all internally. I just cut a hole for a 14cm fan in front of the hard drive cage.

The PC doesn't need much, just enough SATA ports or PCIe slots. I've often used a leftover machine after upgrading my desktop. If you at some point move the large drives internally, you will have a bunch of smaller drives you can use to learn alternative systems.

Saukkis
May 16, 2003


Beaucoup Haram posted:

I've currently got a Chenbro 48-drive 4RU case and it's using a lot of power for a system with only 16 disks in it.

What alternatives are there that can do 24-48 drives using commodity parts (i.e. ATX PSU, 120mm fans, etc.)? Low power and low noise would be the focus.

A Supermicro with an SQ PSU would be the second-best option, but I'd prefer standard off-the-shelf parts if possible so I can replace anything that fails without hassle.

System is 2 x 2680v2 in an Intel board, 128GB DDR3 ECC, 280GB Optane PCIe + LSI HBA running virtualised Napp-It with a passthrough HBA. 16 x 3TB drives (Toshiba DTAwhatevers)

Do you need redundant power supplies? If not, a Storinator with a standard ATX PSU could be an option. Do you need all the I/O those drives could provide? If you could manage with no more than ten large drives you would have many more options. For transcoding duties you could use a Threadripper or GPU encoding, though you might need a pro card for multiple streams.

Saukkis
May 16, 2003


Fire Storm posted:

Would it be worth it from a strictly energy use and disk failure standpoint to use SSDs? Home use, 1 device will be streaming video files from it at a time, and data will pretty much be write once read many. Largely thinking of using disks from the Samsung 860 line (2tb)

SSDs are too expensive to be worthwhile for NAS use, but otherwise they wouldn't be a bad choice. As a compromise you could consider 2.5" hard drives and something like the Synology DS416slim.

Saukkis
May 16, 2003


CommieGIR posted:

I run the OpenVPN Virtual Appliance in my Xen hypervisor, but I'm running on 443 so I can bypass filtering for work.

A coworker of mine likes to run his VPN on the DNS port, 53/UDP. That gets you through even many captive portals at airports and such.
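For reference, that's only two directives in the OpenVPN server config; a minimal sketch, assuming nothing else on the host (like an actual DNS resolver) is already bound to port 53:

    # /etc/openvpn/server.conf (relevant lines only)
    proto udp   # DNS is mostly UDP, so this blends in with allowed traffic
    port 53     # listen on the port firewalls rarely block outbound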

Saukkis
May 16, 2003


IOwnCalculus posted:

Also, do yourself a favor and make a list of what drive is installed where, by serial number. That way you know which drive to pull instead of pulling each one until you get the right one. It can also be useful if the dead drive shits itself so bad that it won't identify itself via SMART, so you can make the server give you a list of what drives it sees and you can know from there which serial is missing.

I've printed stickers with the serial number and put them on both ends of each drive.
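If you'd rather read the serials from the OS than crawl behind the machine, lsblk prints them directly, and smartctl can query a single drive as long as it still answers. A quick sketch:

    # Map kernel device names to models and serial numbers
    lsblk -o NAME,MODEL,SERIAL

    # Or ask one drive directly
    smartctl -i /dev/sda | grep -i serial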

Saukkis
May 16, 2003


wargames posted:

How is HPE's update service? Do they still gate it behind a paywall?

Usually only BIOS updates and the full SPP package require a maintenance contract once you're outside of warranty. And I think some critical BIOS updates have been free as well. Of course, if you have one system under warranty or a maintenance contract, you can get updates for everything.

Saukkis
May 16, 2003


H110Hawk posted:

Don't support these dumb games by paying HPE money.

It's a minor hassle compared to something like Dell, which refuses to even sell maintenance after 7 years. Just at the end of last year we finally shut down a financial-system server that had been running for something like 12 years, all that time under an HPE maintenance contract. I would expect servers to be under warranty or a maintenance contract anyway, so the paywall is a non-issue. And if a decently long warranty has expired, the server is probably at such a late maintenance phase that paywalled BIOS updates seldom offer much of interest, and critical updates, like the recent Intel microcode one, seem to be free to download.

The HPE SPP package is such a convenience it is almost worth the expense; I wish Dell offered something like it. I had to create my own CentOS 7 USB boot stick just to update the firmware on our Dell servers running Ubuntu.

Saukkis
May 16, 2003


CopperHound posted:

How hard can it be to implement client side subtitle support?

I honestly don't know; I don't program anything more complex than Arduino stuff.

It can probably be quite a challenge if you want full-featured support for the whole multitude of subtitle formats. I vaguely remember a short scene in some anime showing a book with Japanese text. The book was overlaid with English subtitles that were aligned to it and conformed to its shape, and since they were soft subs you could turn them off. Example from Aegisub.

Saukkis
May 16, 2003


Former Human posted:

I have the same case; be advised that the hard drive trays are not designed for new large-capacity drives. The screw holes do not line up. I bought an HGST 12TB drive when it was on sale recently and I have only two of the four holes screwed into the tray. I haven't been able to find a replacement tray that both works in the case's drive cage and has the right hole placement.

There are two options: zip-tie part of the drive tightly to the tray to minimize vibration, or get a 3.5" to 5.25" adapter and install the drive in the optical bay.

If anyone in the thread knows of updated/universal trays that fit the Antec P280 I'd love the help.

Could you drill new holes in the correct spots?

Saukkis
May 16, 2003


Mecha posted:

Is there any advantage to running FreeBSD off a USB stick with the rest of the drives being the ZFS pools? Friend came through again with selling off a startup workstation: Supermicro X10SRM-F board and Xeon v4 in a cheap Lian-Li case, and it's already got a USB socket on one of the mobo headers.

Another option is the M.2 slot on your motherboard; based on the manual, it sounds like an NVMe drive won't disable any SATA ports. Intel Optane 16GB sticks are quite cheap.

Saukkis
May 16, 2003

Sometimes RAID5 is not enough. At work we have a Dell MD1200 disk shelf that is close to five years old: twelve 4TB SAS drives, 10 of which have been replaced at some point. In the past six months we've had three cases where two drives failed at exactly the same time, the last one a week ago. We have been dodging bullets like in The Matrix. Thankfully only two original drives remain, so when they fail in a few weeks the chance of a third simultaneous failure isn't that high.

It's a pretty incomprehensible case. We have loads of these disk shelves and I don't know of any other that has exhibited this behaviour, and I can't think of an external cause. Our electricity is very reliable, the server is behind a UPS, and the one power glitch I know of this year doesn't line up with any of the drive failures.

Saukkis
May 16, 2003

I have also gotten the feeling that 10GBASE-T is a dead-end technology; the higher latency and power use will hand the victory to SFP+. That has already happened in our datacenters: we used to have a mix of 10G SFP+ optics and Base-T, but future installations will be SFP+ only. When the world switched to 1G, datacenters and homes used basically the same technology; it just needed time to become cheaper and reach homes. But if the only users of 10GBASE-T are hardcore NAS or homelab builders, the use case will be such a niche that it will never reach economies of scale and wide acceptance.

On the other hand, SFP+ cages are too long, and motherboards don't have room for them, so integrated SFP+ can't become common. Not to mention the standard is complicated compared to Base-T. I'm a server admin who regularly buys this gear and spends a lot of time in the datacenter, and even I have only really come to understand the SFP+ options in the past few years.

Saukkis
May 16, 2003

What is the temperature of the other drives? If they stay at 44 degrees and the warning limit is 45, then maybe that one drive just occasionally runs warmer. Or the other drives may be different, cooler-running models.

Saukkis
May 16, 2003

That's an unnecessarily fancy install. When the fan died on my old GPU I just zip-tied a 12cm fan onto the card. Worked just fine.

Saukkis
May 16, 2003


GreenBuckanneer posted:

The purpose would really be having easy-to-remove drives, so when one fails I can just pop another one in without losing much of anything (and if I did, it wouldn't be the end of the world), having the device offload the encoding (if needed), and having a web interface so it's grabbable on the network. I suppose I could just build a mini-ATX box/server but :effort:

It does sound like what you want in the end is some type of redundant RAID: 1/5/6. With RAID0 you lose everything and have to go through the hassle of a restore. With JBOD you lose a random quarter of your files, some fragmented files possibly only partially, assuming the filesystem doesn't get corrupted and you lose everything anyway. And you would then need checksums of all your files just to figure out exactly what you lost, followed by a piecemeal restore, which is even more of a hassle than a full one.

Synology or QNAP is the minimal-effort solution to meet your needs, at not too much extra expense. An old desktop with FreeNAS/Unraid is the minor-expense solution. A Raspberry Pi 4 is the minimal-expense, lower-performance solution, but at least you wouldn't have to shuck the drives.


Saukkis
May 16, 2003

I guess you don't actually need to stripe the data over several drives just to use parity. You could have drives with completely unrelated data, one with Linux ISOs, a second with movies, a third with MP3s, and then a fourth drive holding parity calculated across the rest.
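This is essentially what SnapRAID does: every data drive keeps its own independent filesystem, and parity is computed across them onto a dedicated parity drive. A minimal sketch of an /etc/snapraid.conf, with placeholder mount points:

    # Dedicated parity drive, at least as large as the biggest data drive
    parity /mnt/parity/snapraid.parity

    # Content files record what lives where; keep copies on several drives
    content /var/snapraid.content
    content /mnt/disk1/snapraid.content

    # Independent data drives: ISOs, movies, mp3s...
    data d1 /mnt/disk1
    data d2 /mnt/disk2
    data d3 /mnt/disk3

After that, 'snapraid sync' computes the parity and 'snapraid fix' rebuilds a dead drive's contents.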

Saukkis
May 16, 2003


EpicCodeMonkey posted:

Ideally however, I want something closer to a JBOD, without striping, so if data isn't accessed all the disks don't spin up and waste power. If a disk is lost, I'd rather it just took out 1/3rd of the data (it's all replaceable, easily).

The easy solution for JBOD is LVM: create one volume group, add all the disks to it, create a large logical volume that spans all the disks, and format the volume with your preferred filesystem. But I wouldn't count on things working when one drive fails; how it behaves in that situation and what exactly gets lost depend very much on the filesystem. I've used a lot of LVM at work, even with spanned setups like this, but we haven't had a case where an LVM lost its middle disk. At least with torrents it is easy to figure out which data got lost or corrupted.
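A minimal sketch of that spanned setup, assuming three empty placeholder disks sdb, sdc and sdd:

    # Mark each disk as an LVM physical volume
    pvcreate /dev/sdb /dev/sdc /dev/sdd

    # One volume group containing all of them
    vgcreate media /dev/sdb /dev/sdc /dev/sdd

    # One linear (non-striped) logical volume spanning the whole group
    lvcreate -l 100%FREE -n storage media

    # Format and mount it like any other block device
    mkfs.ext4 /dev/media/storage
    mount /dev/media/storage /mnt/storage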

Saukkis
May 16, 2003


fletcher posted:

I don't actually need any redundancy on these drives, they won't be storing anything important and I'm not so concerned about availability. I just wanted two drives to show up as 1 big drive and thought RAID 0 with the onboard might be the easiest way to do that. I'm running Debian 10. Any suggestions for me?

If you don't need the speed increase from RAID0, the easiest way to achieve that is usually LVM. On the first disk create a 512+MB partition for /boot and another partition for LVM; give the second disk a single LVM partition and add it to the same volume group. This also has the benefit that you can create separate logical volumes for things like /home, /opt, /var, etc. and grow them as needed. And if you buy a new drive you can easily add it to the same volume group, which is nowhere near as easy with RAID.
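The grow-it-later part is only a couple of commands. A sketch, assuming a hypothetical volume group vg0, a new disk /dev/sdc, and a logical volume for /home:

    # Add the new disk to the existing volume group
    pvcreate /dev/sdc
    vgextend vg0 /dev/sdc

    # Grow /home into the new space; -r resizes the filesystem as well
    lvextend -r -l +100%FREE /dev/vg0/home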


Saukkis
May 16, 2003

Zip ties can work really well. This GTX560 served me for years like this. I had forgotten that I also used those screws to position it a bit up from the heatsink.


Saukkis
May 16, 2003


BlankSystemDaemon posted:

I almost laughed myself out of my chair because of that. :allears:


Also, in other news: corrective zfs receive, which makes it possible to heal corrupted data on pools, is only missing one test (which currently panics on FreeBSD) before it's ready for its final review.
This will finally mean that all those ZFS standard I/O streams that I've been writing to SMR disks as a way of backup (which is the only acceptable use of SMR), will be possible to restore from, without loading the whole chain of snapshots onto a new pool.

I just happened to read a blog post about why ZFS send streams aren't a good backup. This does seem to address the biggest shortcoming the post brought up, but reading the patch comment it still feels like a bit of a hack.

Storing ZFS send streams is not a good backup method

Saukkis
May 16, 2003


CommieGIR posted:

This, and enable Multifactor on your OpenVPN server. If you want to avoid having your OpenVPN blocked, run it on port 443 unless you have another webserver hogging that outbound port.

A coworker is running OpenVPN on the DNS port, 53/UDP. Supposedly this often gets him past captive portals on airport WiFi and such.

Saukkis
May 16, 2003


Twerk from Home posted:

If I wanted a fast NAS and was willing to splash out for a couple of terabytes of all-flash, what's a sane way to do that?

Is ZFS RAIDZ going to be a huge bottleneck for NVMe disks? Do SSDs fail so rarely that people just span them together with LVM or run RAID0? Are SATA disks still enough cheaper to justify 2.5" SATA SSDs instead of NVMe?

If you really want network-attached storage, then I think the main question is what your network setup looks like. Unless you have 10Gbit or better I don't think this is worth considering.

Saukkis
May 16, 2003


droll posted:

I thought NAS would make it easier to share content with my fellow travelers and family/friends along the way, rather than everyone having to connect and copy the data off. Is that the best RAID enclosure you recommend?

I feel that it very much depends on the usage scenario. USB enclosures are simple: everyone can use them and they just work. The biggest downside is that only one computer can use them at a time, unless that computer shares them over the network.

A NAS is more complicated and has more requirements, and if you are travelling with it you can't be sure every situation will satisfy them. For example, if you plugged the NAS into the network at my work you might be surprised when nothing happens: it would not get network access and would be inaccessible to you. Similarly if you plan to use it in hotels: most NAS devices are designed to be plugged into a wired ethernet network, and nowadays most hotel rooms won't have one, and homes may not either. Most places use wireless, and it's far from trivial to connect a NAS to a wireless network. You may also need to bring a router or switch for connecting the NAS and your computer, and in my experience it can be surprisingly difficult to connect a Win10 machine to WiFi for internet access and ethernet for the NAS at the same time.

Saukkis
May 16, 2003


Nam Taf posted:

I would guess it's a dying drive, rather than the cable.

If I were wedded to the data on it, I'd be using ddrescue to image the drive onto another, as best as it can. Then I'd be extracting all I could out of that image. Imaging the drive is a day+ affair and requires an equal-or-bigger drive spare, so if you're not wedded to it I'd consider how much effort you're willing to put in.

If the most important data is only a small portion, I might first try to copy single files and directories before moving on to ddrescue.
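When it does come to ddrescue, the usual two-pass invocation looks like this; a sketch with placeholder device and paths:

    # First pass: grab the easily readable data, skip bad areas (-n, no scraping)
    ddrescue -n /dev/sdX /mnt/spare/disk.img /mnt/spare/disk.map

    # Second pass: retry the bad areas a few times, resuming from the map file
    ddrescue -r3 /dev/sdX /mnt/spare/disk.img /mnt/spare/disk.map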

Saukkis
May 16, 2003


Sheep posted:

Not a Synology but I recently had an SSD die: a three months out of warranty Intel 535 series.

I used it as my OS drive for a few years so makes sense given how much crap gets written to %appdata% and such on a Windows box. Anything that has constant churn like that will fail far sooner than something just used for mostly storage.

That probably isn't what killed the drive; it was most likely a random failure. It takes serious effort to wear out an SSD, and using it as a normal OS drive won't do it.

Saukkis
May 16, 2003


BlankSystemDaemon posted:

If you've got Samba listening on the web, first of all why?
Second of all, you probably shouldn't.

I would assume most people won't have the fruit module enabled, and the way I understood the announcement, it wouldn't affect them.

Saukkis
May 16, 2003


BlankSystemDaemon posted:

Well, it's enabled by default - it's sort of hinted at by the workaround being to disable it, but I wish they'd spell it out explicitly.

Also, excuse me while I'm over here, shaking my head at implementing mdoc/groff in XML.

Yes, I read that, but that's not what it means. If you have the VFS object "fruit" enabled, then AAPL is also enabled by default. But fruit itself isn't enabled by default; if it were, there would be no way to disable it. The fruit module is in use only if your configuration has a "vfs objects" stanza that includes "fruit". If that stanza is missing, fruit isn't enabled.

quote:

As a workaround remove the "fruit" VFS module from the list of
configured VFS objects in any "vfs objects" line in the Samba
configuration smb.conf
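So on an affected system the smb.conf would contain something like the lines below, and removing "fruit" (and its options) from the "vfs objects" line is the whole workaround. A sketch of a share that has the module enabled:

    [share]
        path = /srv/share
        # fruit is active only because it is listed here
        vfs objects = catia fruit streams_xattr
        fruit:metadata = stream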

Saukkis
May 16, 2003


Boner Wad posted:

I only enabled TCP_NODELAY, didn't mess with the buffer sizes. Makes sense why it didn't help.
The file copy was 120MB/s. The iperf was 300 Mbits/sec.

I'd say both the network and storage are adequate and it's just a samba problem unless I'm missing something else here.

Is that 300Mbps over wireless or over a wired gigabit connection? It would be quite slow for gigabit. And please use accurate units: 120MB/s means 120 megabytes per second, which is almost a full gigabit per second.

Saukkis
May 16, 2003


CerealKilla420 posted:

I remember when smaller SSDs first started to get really affordable back in 2010 or so and everyone was worried that the drives would burn out after 2 years... Hell, I was worried about it myself, and I decided against getting one in favor of an upgraded 500GB 2.5in HDD for my laptop at the time.

To this day I have not even heard OF someone's 64GB drive burning out on them, honestly. I'm not saying it doesn't happen, but in that same time period (the past 12 years) I've had at least three 3.5in HDDs fail on me.

That said, I'm sure things are very different in a real production environment where the drives are responsible for something more important than delivering my 10-bit Chinese cartoons to my Chromecast or reading GameCube ISO files lol.

Just a while ago I checked the SSD status on a Moodle server at work. For the 800GB mixed-use-rated SAS drives storing the database, the parameter "Estimated Life Remaining based on workload to date" was around 21,746 days, or 59.5 years. For the 7.68TB read-intensive-rated SATA drives storing all the file data (which are filling up; another pair is on order as an extension), the same parameter was at 450,776 days, or 1,234 years.
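On plain SATA or NVMe hardware the rough equivalent is visible through SMART; a sketch (the exact attribute name varies by vendor, e.g. Media_Wearout_Indicator or Percent_Lifetime_Remain):

    # Dump all SMART attributes and pick out the wear-related ones
    smartctl -A /dev/sda | grep -iE 'wear|percent|life'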

Saukkis
May 16, 2003


MikeyTsi posted:

I'm running a 12-disk raid5 with hot spare on my primary array.

Wouldn't it be better to run a 12-disk RAID-6 without the hot spare?
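With Linux software RAID the comparison is easy to state; a sketch assuming twelve placeholder disks sdb through sdm:

    # 12-disk RAID6: two-disk redundancy on every array, all the time,
    # instead of one-disk redundancy plus a spare that only helps after a rebuild
    mdadm --create /dev/md0 --level=6 --raid-devices=12 /dev/sd[b-m]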

Saukkis
May 16, 2003


Smashing Link posted:

Anyone have any clever solutions for an OS disk for TrueNAS? I'm using a 256GB NVMe but it's overkill for the OS, and it uses the whole disk. I have read USB disks aren't good either, because TrueNAS does a lot of writing to the OS disk.

You can get 16GB Intel Optane NVMe sticks for $10 on eBay.

Saukkis
May 16, 2003


namlosh posted:

Thanks for the reply. Wow, I must have totally misremembered and messed that up hard when I set up the ghetto server and 8TB drive. How embarrassing… I used to know this stuff.

Either way, looks like I’ll be looking for a Linux distro that supports exFAT and samba so I can reinstall ghetto server a lot sooner than I thought.

Thx again

You can probably enable exFAT support from EPEL.

Is it possible to mount an exFAT filesystem on Red Hat Enterprise Linux?

And I wouldn't really say CentOS 8 was killed; they just ended support earlier than planned, and you need to upgrade to Stream, which may well be a suitable distro for your needs.
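A sketch of what that looks like, assuming a recent EL8/EL9 style system where EPEL carries the exfatprogs tools and the kernel's exFAT driver is available:

    # Enable EPEL and install the exFAT userspace tools
    dnf install -y epel-release
    dnf install -y exfatprogs

    # Mount the drive (placeholder device)
    mount -t exfat /dev/sdb1 /mnt/usb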

Saukkis
May 16, 2003

If you want enterprise, there's always the option of getting a developer account and using RHEL.

Saukkis
May 16, 2003

Just build a disk stand.


Saukkis
May 16, 2003


Computer viking posted:

Ha, it feels like the overlap between maxtor drives, noctua fan, that case, the non-modular PSU and the red SATA cables is enough to date this build very precisely.

(I'll guess 2006?)

Well what do you know, the picture is from October 2007.


priznat posted:

Yeah that’s what I was thinking of building, something like that! Need to source those rails somewhere. It’ll just live on a shelf in my basement so no one will ever see it.

They're just standard aluminum L-bars.

Saukkis
May 16, 2003


ilkhan posted:

Generators can also be refuelled if they run low.

I've been wondering if a high-power inverter would be a more practical alternative to a generator. A generator is big and requires quite a bit of maintenance if you want to rely on it working the one time a year, or every other year, that you need it. A car is in constant use and can be expected to work.

Running a car just to produce electricity would be inefficient, but for rare needs it wouldn't be too big an expense.

Saukkis
May 16, 2003


unknown posted:

You'd be surprised how often they can't.

Not because they physically can't, but you can't get the fuel - even if you have guaranteed delivery contracts. In large outages, governments have a habit of overriding business concerns and claiming all available fuel for things like hospitals and their own needs.

I guess that's irrelevant to the question, since any battery backup will run out long before generator fuel does. If your fuel is running low, you either acquire more or relocate your operation somewhere that has electricity. Trying to acquire electricity by trucking in charged UPS batteries isn't a practical option, unless you hire a fleet of F-150 Lightnings to come and plug in.


Saukkis
May 16, 2003


Lowen SoDium posted:

The maintenance is pretty low and I have never worked any where that had diesel generator backup where the generator wasn't run at least 20 minutes every 2 weeks for testing. They are pretty loving reliable and cost effective.

That certainly applies to the large static generators; I was mostly thinking of the small two-stroke gasoline generators meant for home use. Does anyone bother to run those even monthly?
