Hadlock
Nov 9, 2004

SwissArmyDruid posted:

Also QNAP seem to be the only ones pushing 2.5/5 gigabit interfaces on their NASes, or have the provisions to upgrade with the built-in PCIe slot, which appeals to my sense of futureproofing.

Which makes it all the more bizarre to me that the other NAS OEMs aren't doing the same, considering that the extra cost over gigabit is like... a banana.

I was gonna upgrade from my synology ds418@1gbps to a qnap with the 5gbps, but then I saw this JBOD for $50 on amazon and now I have 4x rando leftover SSDs in there @5gbps. Anything that needs to live for more than a week goes on the synology, all my random stuff gets thrown on the JBOD, and there's a daily cron job that makes a copy of anything that lives on the JBOD for more than a week over to the shr2 synology.
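
The cron job is nothing fancy, roughly this (paths are placeholders for my setup, rsync flags to taste):

# every night at 3am, copy anything that's been sitting on the JBOD for more than a week over to the synology
0 3 * * * find /mnt/jbod -type f -mtime +7 -printf '%P\0' | rsync -a --from0 --files-from=- /mnt/jbod/ /mnt/synology/archive/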

My guess is that Synology releases the 2022 models with native USB4 support, at which point you'll get 40gbps if you plug your computer directly into it. Generic USB4 JBODs are gonna eat Synology/QNAP for lunch, so they'll need to keep up at that point. As someone else pointed out, there's no reason to hand 5gbps to consumers right now. Same problem with digital cameras: there's no reason 2020 digital cameras couldn't have existed in 2010 other than the need for incremental improvements to drive sales.

corgski
Feb 6, 2007

Silly goose, you're here forever.

EssOEss posted:

I am considering it but don't really see many obvious options. For Windows there is DrivePool but that's some 1-man labor of love that I would not trust more than Storage Spaces. Everything else seems non-Windows and I kind of need to have Windows on this machine. Right now I think I will just keep on trucking with Storage Spaces. Or have I missed a Windows-compatible option I should consider?

Drivepool at least is a layer over native file systems so it’s less likely /all/ your data would become unrecoverable. That’s what ultimately pushed me to use that instead of storage spaces.

Less Fat Luke
May 23, 2003

Exciting Lemon

EssOEss posted:

I am considering it but don't really see many obvious options. For Windows there is DrivePool but that's some 1-man labor of love that I would not trust more than Storage Spaces. Everything else seems non-Windows and I kind of need to have Windows on this machine. Right now I think I will just keep on trucking with Storage Spaces. Or have I missed a Windows-compatible option I should consider?

Check out SnapRAID, it's a nice compromise. Lets you mix and match drive sizes with an arbitrary number of parity drives, and it's offline RAID, so you run it once a day or whatever to re-sync the parity. Works fine on Windows too!
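
If it helps, the whole setup is basically one config file plus a scheduled sync - something like this (disk paths are made up, point them at your own mounts):

# /etc/snapraid.conf -- one parity drive, two data drives
parity /mnt/parity1/snapraid.parity
content /var/snapraid/snapraid.content
content /mnt/disk1/snapraid.content
data d1 /mnt/disk1/
data d2 /mnt/disk2/

# then once a day from cron (or Task Scheduler on Windows):
snapraid sync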

BlankSystemDaemon
Mar 13, 2009



EssOEss posted:

I am considering it but don't really see many obvious options. For Windows there is DrivePool but that's some 1-man labor of love that I would not trust more than Storage Spaces. Everything else seems non-Windows and I kind of need to have Windows on this machine. Right now I think I will just keep on trucking with Storage Spaces. Or have I missed a Windows-compatible option I should consider?

Marshall Kirk McKusick, who wrote and still maintains UFS - which serves as the (if not the actual, then at least the learning) basis for most filesystems - is often quoted as saying something to the effect of: the hard part about being a filesystem developer is that once you lose someone's data, they aren't going to trust you with it again.

Honestly, that's been one of the weirdest parts about joining the FreeBSD project as a contributor: getting to just talk with people I've basically looked up to my entire life.

Smashing Link
Jul 8, 2003

I'll keep chucking bombs at you til you fall off that ledge!
Grimey Drawer
Does anyone have experience with Unraid CPU isolation acting flakey? In my case it became clear that my Plex docker (not pinned to any cores) was only using a single thread that was supposed to be isolated to one of my VMs. I fixed the problem by assigning Plex some other cores but wondering why this happened.

H110Hawk
Dec 28, 2006

Smashing Link posted:

Does anyone have experience with Unraid CPU isolation acting flakey? In my case it became clear that my Plex docker (not pinned to any cores) was only using a single thread that was supposed to be isolated to one of my VMs. I fixed the problem by assigning Plex some other cores but wondering why this happened.

Unless unraid is being clever, pinning one process to one core (vm0 to core9) doesn't prevent processN from being scheduled on core9, it merely discourages it. Can you be more specific about how you are configured? (I don't know the specifics of unraid cpu "isolation")

Smashing Link
Jul 8, 2003

I'll keep chucking bombs at you til you fall off that ledge!
Grimey Drawer

H110Hawk posted:

Unless unraid is being clever pinning one process to one core (vm0 to core9) does not prevent, it merely discourages, processN from being scheduled on core9. Can you be more specific about how you are configured? (I don't know the specifics of unraid cpu "isolation")

I have two VMs that each have 4c/8t "isolated" and also "pinned", which to my understanding means the OS doesn't send any processes there under any circumstances. Plex was not pinned to any cores so I assumed it would use up as many as it wanted, but instead it looked like it was only using one of the VM cores. Problem resolved when I pinned Plex to some different cores and it was able to use all of them effectively, tested with 5 streams at once.

H110Hawk
Dec 28, 2006

Smashing Link posted:

I have two VMs that each have 4c/8t "isolated" and also "pinned", which to my understanding means the OS doesn't send any processes there under any circumstances. Plex was not pinned to any cores so I assumed it would use up as many as it wanted, but instead it looked like it was only using one of the VM cores. Problem resolved when I pinned Plex to some different cores and it was able to use all of them effectively, tested with 5 streams at once.

It sounds like a bug or unintended consequence. Unraid might mask everything off to 1 core once you start "isolating" and not tell you, as some kind of awful default.

If you can get it back into that state and log in to the command prompt, somewhere in /proc/plexspid/something/deep/cpu/mask there is the execution mask. Cat it out and paste it here. I bet it's all 0's and a 1 instead of all 1's and a 0.
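
Actually, something like this should dig it out without hunting for the exact path (the process name is my guess at what the plex binary is called inside the docker):

pid=$(pgrep -f 'Plex Media Server' | head -n1)
grep Cpus_allowed /proc/$pid/status   # raw mask plus a readable _list version
taskset -pc $pid                      # same info via taskset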

IOwnCalculus
Apr 2, 2003





Smashing Link posted:

I have two VMs that each have 4c/8t "isolated" and also "pinned", which to my understanding means the OS doesn't send any processes there under any circumstances. Plex was not pinned to any cores so I assumed it would use up as many as it wanted, but instead it looked like it was only using one of the VM cores. Problem resolved when I pinned Plex to some different cores and it was able to use all of them effectively, tested with 5 streams at once.

Out of curiosity, why are you doing this in the first place? I migrated years ago from a nested ESXi monstrosity to a comparatively-simple docker deployment in Ubuntu because I was losing more in overhead trying to balance the CPU / RAM each VM needed than I was gaining by having everything firmly isolated like that.

BurgerQuest
Mar 17, 2009

by Jeffrey of YOSPOS
From my own experience: I run a Windows VM with GPU passthrough for some TV gaming, because my unraid box is in the home theatre cabinet.

Sometimes I also spin up a linux VM to gently caress around or test something. All the actual apps I run on unraid are in docker.

Smashing Link
Jul 8, 2003

I'll keep chucking bombs at you til you fall off that ledge!
Grimey Drawer

H110Hawk posted:

It sounds like a bug or unintended consequence. Unraid might mask off everything to 1 core once you start "isolating" and doesn't tell you as some kind of awful default.

If you can get it back into that state and login to the command prompt somewhere in /proc/plexspid/something/deep/cpu/mask there is the execution mask. Cat it out and paste it here. I bet it's all 0's and a 1 instead of all 1's and a 0.

I can't seem to find that exact directory but I will keep looking.

IOwnCalculus posted:

Out of curiosity, why are you doing this in the first place? I migrated years ago from a nested ESXi monstrosity to a comparatively-simple docker deployment in Ubuntu because I was losing more in overhead trying to balance the CPU / RAM each VM needed than I was gaining by having everything firmly isolated like that.

Just exploring Unraid's capabilities more than anything. I also thought it was a best practices kind of thing to do with VMs but for now I think I will just leave it alone.

H110Hawk
Dec 28, 2006

Smashing Link posted:

I can't seem to find that exact directory but I will keep looking.

Whoops, phone posting sent you down a rabbit hole with no end.

for i in $(ps -o pid,cmd ax | grep -i plex | awk '{print $1}') ; do taskset -pc $i ; done

Or just all of them:

for i in $(ps -o pid= ax) ; do taskset -pc $i ; done

harrygomm
Oct 19, 2004

can u run n jump?
.

harrygomm fucked around with this message at 00:15 on Jan 27, 2021

H110Hawk
Dec 28, 2006

harrygomm posted:

I was looking for a cloud-storage like backup solution but realized I don't really need to access any of the data remotely and I just wanted redundancy. I don't need much space, less than a TB, so from reading the last couple months of posts it looks like I should just go with a usb-attached external drive. My router doesn't have the capability to add it there. I'm also worried about redundancy. I'd like to remove the files from my primary machine so placing them on a single usb-attached drive seems like a good way to have the data lost when that single drive dies.

Do I go for a QNAP/Synology 2-drive enclosure? I kept reading that hardware RAID is bad, but it seems like a much more hands-off solution to buy an enclosure with hardware RAID, put in two drives, and connect it to my network. Also, since I need so little space, what are the downsides of going for SSD storage instead of spinners? Cost certainly, but in the long-term storage world, are SSDs up to this kind of task yet?

If you want detached external storage that just stores a second copy, a single USB disk is fine, but you also have to be diligent about making those backups.

If you want detached external storage that is safe from fire, flood, cryptolockers, and theft, I would go with something like Backblaze / Backblaze B2.

RAID is fine, RAID is not backup though. If you want online storage at home that isn't only on a single computer, a little 2-bay NAS can be just what you need. Then you still want to back that up somewhere, such as Backblaze B2.
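
For the backup leg, rclone pointed at B2 on a nightly cron is about as hands-off as it gets - roughly this, assuming you've already run rclone config and named the remote "b2" (bucket and paths are placeholders):

rclone sync /volume1/important b2:my-backup-bucket/important --transfers 8 --fast-list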

IOwnCalculus
Apr 2, 2003





For that little amount of storage, I would be very tempted to forego the local RAID and just store it on a single drive, and use the money saved there towards Backblaze or any other cloud solution.

RAID/RAID-like solutions only really get you two things - better performance, and a reduced likelihood of having to restore from backup. The latter matters a lot more if you're dealing with dozens of terabytes where restoration is going to take days at best / weeks-months at worst. For <1TB, if your single local drive dies you can restore that within a day on any decent broadband connection.
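
Napkin math if you want to sanity-check against your own connection speed:

# hours to pull 1 TB back down at a few different download speeds
for mbps in 100 300 1000; do
  awk -v m="$mbps" 'BEGIN { printf "%5d Mbit/s: %5.1f hours\n", m, (1e12 * 8) / (m * 1e6) / 3600 }'
done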

harrygomm
Oct 19, 2004

can u run n jump?
.

harrygomm fucked around with this message at 00:15 on Jan 27, 2021

Matt Zerella
Oct 7, 2002

Norris'es are back baby. It's good again. Awoouu (fox Howl)
Uh, just add 2 drives to your computer and make them software raid1?

E: I'm pretty sure symbology makes small 2 drive NAS devices.

Hughlander
May 11, 2005

harrygomm posted:

I want to move data off of my computer to a place where it will be redundantly stored. I want to do this without the need for another computer-sized solution and without using a cloud storage service.

A single usb attached drive doesn't accomplish redundant storage since it would be removing the single copy from my primary machine to a single usb attached drive. If I move to two usb attached drives it seems like it would be better to raid 1 in a nas for ease of use having them mirrored automatically. Maybe I'm misunderstanding the use case of raid in this situation.

My primary focus is easy to use, hands off (automated disk health checking and reporting a major plus) redundant local storage. Maybe it would have been better to start with that.

The WD My Book Duo is 2 drives of N size that will do RAID and plug in via USB-C.

EssOEss
Oct 23, 2006
128-bit approved

harrygomm posted:

I was looking for a cloud-storage like backup solution but realized I don't really need to access any of the data remotely and I just wanted redundancy.

What if there's a fire and both disks burn up? Cloud backups can save you in that situation. Just pointing out that large-scale physical damage is a valid fault to consider.

harrygomm
Oct 19, 2004

can u run n jump?
.

harrygomm fucked around with this message at 00:16 on Jan 27, 2021

DrDork
Dec 29, 2003
commanding officer of the Army of Dorkness

harrygomm posted:

This option looks pretty good but I am concerned that they are able to offer a NAS-like enclosure with 2 4TB drives for ~$250 less than a Synology + 2x1TB WD Red drives (or equivalent).

Easy. The WD My Book is just a mostly dumb enclosure and needs to be connected to a computer to do anything. Think of it like nothing more than a big USB drive with optional RAID, because that's what it is.

The Synology is a whole computer unto itself, and will happily keep serving up your files regardless of what any other computer on your network is doing.

Carbocation
Sep 2, 2006
I have a Pentium G4560/8GB DDR4/Geforce GT 1030/128GB SSD/Manjaro system that I use in the living room to watch movies and play games. I am looking for the cheapest way to turn it into a NAS, which seems to be around $150 for an LSI HBA + SAS cables + 5x3TB refurb SAS drives. I haven't bought anything yet, still trying to learn and have a few questions:

1. Is it possible to keep using this system as a movie/game PC while also using it as a NAS? Or does this defeat the stability of the NAS?
2. Would the above specs be enough to accomplish both functions? Not enough RAM?
3. What to use - NAS OS with a Linux desktop VM? Linux OS with a NAS VM? Or something else?
4. Not worth it, save up for dedicated NAS?

H110Hawk
Dec 28, 2006

Carbocation posted:

I have a Pentium G4560/8GB DDR4/Geforce GT 1030/128GB SSD/Manjaro system that I use in the living room to watch movies and play games. I am looking for the cheapest way to turn it into a NAS, which seems to be around $150 for an LSI HBA + SAS cables + 5x3TB refurb SAS drives. I haven't bought anything yet, still trying to learn and have a few questions:

1. Is it possible to keep using this system as a movie/game PC while also using it as a NAS? Or does this defeat the stability of the NAS?
2. Would the above specs be enough to accomplish both functions? Not enough RAM?
3. What to use - NAS OS with a Linux desktop VM? Linux OS with a NAS VM? Or something else?
4. Not worth it, save up for dedicated NAS?

NAS workloads are basically rounding errors. All you are adding is samba or nfs, plus heavy parity calculations during a rebuild. I wouldn't make a second VM for it. ARM CPUs are fast enough to cap gig ethernet over samba, with plenty of room to grow.
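
To give a sense of how little there is to it, the "NAS" part can literally be a samba share bolted onto the existing install - something like this in smb.conf (share name, path and user are placeholders), then restart smbd and add the user with smbpasswd -a:

[media]
   path = /srv/media
   read only = no
   valid users = youruser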

insta
Jan 28, 2009
The "obsolete technology" thread laughed at anybody who's using RAID5 -- and while I sorta laughed along with them, I quickly ran over here to see what I "should" be doing instead, or if what I have is good enough.

My current fileserver setup is:

* Ubuntu 19.04
* Athlon(tm) 5350 APU
* 16GB RAM
* LSI SAS2008 HBA in IT mode
* 4x (eventually 8x?) WD Red 6TB
* mdadm RAID-5
* LVM volume
* ext4 filesystem

I have Totally Cool and Totally Not :filez: things running, but not Plex.

Am I doing something wrong? :ohdear:

Kia Soul Enthusias
May 9, 2004

zoom-zoom
Toilet Rascal
If you get more Red 6TB drives, beware that you'll probably get SMR drives. You have to get Red Pro or Plus or whatever.

rufius
Feb 27, 2011

Clear alcohols are for rich women on diets.

insta posted:

The "obsolete technology" thread laughed at anybody who's using RAID5 -- and while I sorta laughed along with them, I quickly ran over here to see what I "should" be doing instead, or if what I have is good enough.

My current fileserver setup is:

* Ubuntu 19.04
* Athlon(tm) 5350 APU
* 16GB RAM
* LSI SAS2008 HBA in IT mode
* 4x (eventually 8x?) WD Red 6TB
* mdadm RAID-5
* LVM volume
* ext4 filesystem

I have Totally Cool and Totally Not :filez: things running, but not Plex.

Am I doing something wrong? :ohdear:

RAID5 is fine as long as you’re not running pre-spun drives or lovely cheap drives. RAID6 is usually what folks like if your drives are a little suspect.

I run RAID5 in my little 4-bay with the HGST Ultrastar drives. They’re the ones Backblaze has had the lowest failure rates on so I’m pretty confident I’ll have a shot at replacing a drive in a timely manner if I run into an issue.

The biggest point, as always, is that RAID is not backup. You should have a redundancy plan of some sort for your system. In my case, I’ve got weekly/daily backups of data to Backblaze and AWS. I only duplicate my ripped music/BluRays to Backblaze but my family photos get synced to AWS as well.
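
The AWS leg is just an s3 sync on a schedule, roughly (bucket name and storage class are whatever fits your budget and retrieval needs):

aws s3 sync /tank/photos s3://my-family-photos --storage-class STANDARD_IA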

insta
Jan 28, 2009

Charles posted:

If you get more Red 6tb beware that you'll probably get SMR drives. You have to get red pro or plus or whatever.

I don't need to match them, I'm just wondering if I should be doing something with btrfs/LVM to make better use of my equipment instead of these very segregated layers.


rufius posted:

I run RAID5 in my little 4-bay with the HGST Ultrastar drives. They're the ones Backblaze has had the lowest failure rates on so I'm pretty confident I'll have a shot at replacing a drive in a timely manner if I run into an issue.

Maybe I'll go with these next, I just want to make sure I've got the underlying setup good to go :)

Kia Soul Enthusias
May 9, 2004

zoom-zoom
Toilet Rascal
Yeah I just want you to be aware of the whole SMR thing if you're going to buy more drives. It can really degrade performance. The rest I don't know enough about :)

Saukkis
May 16, 2003

Unless I'm on the inside curve pointing straight at oncoming traffic the high beams stay on and I laugh at your puny protest flashes.
I am Most Important Man. Most Important Man in the World.
Sometimes RAID5 is not enough. At work we have a Dell MD1200 disk shelf that is close to five years old. Twelve 4TB SAS drives, 10 of which have been replaced at some point. In the past six months we've had three cases where 2 drives failed at exactly the same time, the last one a week ago. We have been dodging bullets like in The Matrix. Thankfully there are only two original drives remaining, so when they fail in a few weeks the chances of a third drive failure aren't that high.

This is a pretty incomprehensible case. We have loads of these disk shelves and I don't know of any other that has exhibited this behaviour. And I can't think of an external reason that could cause it. We have very reliable electricity, the server is behind a UPS, and the one power glitch I know of this year doesn't line up with any of the drive failures.

rufius
Feb 27, 2011

Clear alcohols are for rich women on diets.

Saukkis posted:

Sometimes RAID5 is not enough. At work we have a Dell MD1200 disk shelf that is close to five years old. Twelve 4TB SAS drives, 10 of which have been replaced at some point. In the past six months we've had three cases where 2 drives failed at exactly the same time, the last one a week ago. We have been dodging bullets like in The Matrix. Thankfully there are only two original drives remaining, so when they fail in a few weeks the chances of a third drive failure aren't that high.

This is a pretty incomprehensible case. We have loads of these disk shelves and I don't know of any other that has exhibited this behaviour. And I can't think of an external reason that could cause it. We have very reliable electricity, the server is behind a UPS, and the one power glitch I know of this year doesn't line up with any of the drive failures.

Ooh ya, if you’ve got more than 4 bays I’d run RAID6. I should have caveated that.

My next one will probably be 8 bay and I’m gonna run RAID6 in there, most likely.

insta
Jan 28, 2009
Just double-checking before I wander back out, my mdadm+LVM+ext4 setup is optimal-enough, and there isn't some major advantage I'd get from a combined volume-aware setup like btrfs or a LVM-RAID?

GnarlyCharlie4u
Sep 23, 2007

I have an unhealthy obsession with motorcycles.

Proof
Just found a cheap old Lenovo server with a SAS card and 8 3.5" bays that I want to turn into a NAS.
I have a couple quick questions though:
Where is the best place to find cheap (either new or used) SAS drives? I'm thinking 3-4TB is probably the sweet spot for overall cost and cost per terabyte, yeah? I likely won't need the performance of the SAS drives since the RAID card is only SAS 2 anyway, so it's not going to be significantly faster than any SATA drive I would get. But I'm thinking the SAS drives will likely turn out to be more gently used and more resilient than a used SATA drive.

Less of a question and more of a headcheck: I'll have to disable hardware RAID and use the controller for JBOD in order to run ZFS via FreeNAS or FreeBSD, correct?

ROJO
Jan 14, 2006

Oven Wrangler
Well poo poo. Went through the process of upgrading my 4x3TB drives in my Synology RS815 to 4x8TB drives, configured as RAID5. When I jam the 4th disk in, it delightfully notifies me that it only supports volumes up to 16TB. It didn't even cross my mind that with 4 disks I would have to worry about the total size of the volume exceeding some capability of the NAS. :doh:

Oh well, at least I have more than just 200GB of free space now, even if I have an effectively superfluous drive. Now I'm itching to upgrade....

Takes No Damage
Nov 20, 2004

The most merciful thing in the world, I think, is the inability of the human mind to correlate all its contents. We live on a placid island of ignorance in the midst of black seas of infinity, and it was not meant that we should voyage far.


Grimey Drawer

GnarlyCharlie4u posted:

Less of a question and more of a headcheck: I'll have to disable hardware RAID and use the controller for JBOD in order to run ZFS via FreeNAS or FreeBSD, correct?

Sounds like it, but even then HBAs and RAID controllers can differ wildly. I actually have an LSI in the mail (theoretically) right now, so I'm researching whether I'll need to do anything with its firmware, but basically it sounds like you don't want any other kind of logic sitting between ZFS and the physical drives.
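
The sanity checks I've seen suggested once the card is in are along these lines (sas2flash is LSI's own flash utility, and the drive path is just an example):

sas2flash -listall        # firmware product should read IT, not IR
lsblk                     # every disk should show up as a plain block device
smartctl -i /dev/sda      # SMART data readable straight off the drive, no controller in the way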

Also it seems that eBay has become the de facto marketplace for more niche used/refurb'd PC hardware.

GnarlyCharlie4u
Sep 23, 2007

I have an unhealthy obsession with motorcycles.

Proof
This all sorta hinges on not having to spend a bunch of money on a new controller, so I'm gonna pick up the server in the morning and see what I'm working with first. There was a craigslist listing for $20 3TB Constellation SAS drives (it's gone now), that's why I was asking. I was gonna pick up all 9 of them and cram as many in there as I could.
Looks like I'm not having much luck replicating that pricing online, so the point might be moot anyway.

I still like the idea of using SAS drives for their longevity, reliability and of course performance, since I'm not willing to buy a dozen 1TB SSDs.
I won't need this NAS for streaming so much, but I do require good random read/write for file transfers.
Obviously I'm not building a SAN so I'm not trying to run VMs off it (lol gently caress making GBS threads up my poor 1Gb network with that traffic) or anything, but I will be using it for sftp, project repositories, and data logging.

IOwnCalculus
Apr 2, 2003





insta posted:

The "obsolete technology" thread laughed at anybody who's using RAID5 -- and while I sorta laughed along with them, I quickly ran over here to see what I "should" be doing instead, or if what I have is good enough.

My current fileserver setup is:

* Ubuntu 19.04
* Athlon(tm) 5350 APU
* 16GB RAM
* LSI SAS2008 HBA in IT mode
* 4x (eventually 8x?) WD Red 6TB
* mdadm RAID-5
* LVM volume
* ext4 filesystem

I have Totally Cool and Totally Not :filez: things running, but not Plex.

Am I doing something wrong? :ohdear:

It's less "wrong" and more "there are better solutions". Mdraid will poo poo the entire array with one unrecoverable read error during a rebuild in your situation. RAID6 would turn that into two errors, at the expense of more capacity, and it's still going to mark each entire drive as failed as soon as it encounters a single error on that drive.

ZFS will mark blocks as corrupt but will still recover as much as it can. I've had at least two different events now where ZFS has saved me from a whole array rebuild, where mdraid would have puked.

In your shoes, when you get ready to add the other four disks, I'd set them up as a raidz, migrate the data over, then add the existing four disks as another raidz vdev. This would give you similar to a RAID50 arrangement with the total capacity of six of the disks.
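
In rough zpool terms it's just this (device names made up, you'd want /dev/disk/by-id paths in practice):

# pool on the four new disks first
zpool create tank raidz /dev/sdb /dev/sdc /dev/sdd /dev/sde
# ...copy everything over from the old mdadm array...
# then blow away the old array and grow the pool with a second raidz vdev
zpool add tank raidz /dev/sdf /dev/sdg /dev/sdh /dev/sdi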

Axe-man
Apr 16, 2005

The product of hundreds of hours of scientific investigation and research.

The perfect meatball.
Clapping Larry

ROJO posted:

Well poo poo. Went through the process of upgrading my 4x3TB drives in my Synology RS815 to 4x8TB drives, configured as RAID5. When I jam the 4th disk in, it delightfully notifies me that it only supports volumes up to 16TB. It didn't even cross my mind that with 4 disks I would have to worry about the total size of the volume exceeding some capability of the NAS. :doh:

Oh well, at least I have more than just 200GB of free space now, even if I have an effectively superfluous drive. Now I'm itching to upgrade....

I'm going to say, for Synology, if you are looking for something long term you gotta go with the Plus series, for that intel processor. They are always 64-bit, so you don't hit the low caps for stuff like that. :sad:

Steakandchips
Apr 30, 2009

So my Synology 1812+ should be able to handle me going from 8x2TB to 8x8TB, correct?

I've already switched out 1 of the batch of 2TB disks for an 8TB disk... I want to know before I buy 7 more...

This bit in the manual scares me:



https://global.download.synology.co...a_Sheet_enu.pdf

KozmoNaut
Apr 23, 2008

Happiness is a warm
Turbo Plasma Rifle


No, you would only be able to use 32TB of that 64TB.

E: At least according to the specs. Notice it doesn't say "max volume size", but "max internal capacity", which I take to mean an absolute maximum, not a maximum volume size.

KozmoNaut fucked around with this message at 14:51 on Jul 7, 2020

Sneeze Party
Apr 26, 2002

These are, by far, the most brilliant photographs that I have ever seen, and you are a GOD AMONG MEN.
Toilet Rascal

KozmoNaut posted:

No, you would only be able to use 32TB of that 64TB.

Could he do two mirrored 32TB arrays?
