feld
Feb 11, 2008

Out of nowhere its.....

Feldman



WindMinstrel posted:

All it needs is online capacity expansion and it's superior in every way to mdadm RAID-5.

I just did this today at work. One of our Oracle DBs needed a larger space for its data. Created a LUN, attached it to the system.

code:
feld@solaris:/# zpool add poolname devicename
Done. System didn't blink. Storage was available instantly.

Note: YMMV. We were using ZFS on a single LUN, so we weren't getting all its integrity features -- we're not going to run ZFS in a RAID5 setup on a High Availability SAN. ZFS's integrity features are really useful for a lot of cheap disks. Besides, ZFS would most likely never see the errors anyway, because our SAN would fix them first. And we'll only get an email after the manufacturer and the distributor get one. Hell, they'll be on the phone with us before we even READ that email. If something is seriously up they'll send out a team to check it out. (<3 Pillar SAN)


Anyway....
I'm not a ZFS zealot. It has its good sides and its bad sides. People have run into interesting bugs and performance issues and it is still in its infancy.

That said, when it works, it rocks. And open source? Hell yes. It's on FreeBSD now too so we don't have to use Solaris? HELL YES.


cycleback
Dec 3, 2004
The secret to creativity is knowing how to hide your sources

McRib Sandwich posted:


* Addonics Technologies Storage Tower - Base Model -- Lowest price: $119 (Addonics, 18 Mar 2008)




I have been thinking about buying one of these Addonics storage towers to use as a DAS and a portable offsite backup.

Can you comment on the quality of the enclosure and the power supply?

How noisy is the power supply?

Is there enough airflow over the drives to keep them cool? I am concerned that there might be airflow problems because of the mesh-looking sides.

Is it possible to change the backpanel once it is purchased?

cycleback
Dec 3, 2004
The secret to creativity is knowing how to hide your sources

feld posted:

I just did this today at work. One of our Oracle DBs needed a larger space for its data. Created a LUN, attached it to the system.

code:
feld@solaris:/# zpool add poolname devicename
Done. System didn't blink. Storage was available instantly.


I think that the previous poster is referring to the fact you can't currently expand a VDEV. This is pretty limiting for most home setups.

The Gay Bean
Apr 19, 2004


I'm looking to buy/build a NAS box. One of my biggest concerns is power draw and noise, which is why I'm a bit hesitant to buy a small case and build a computer. Otherwise that would be awesome, though.

I read in one of the reviews of the pre-built 4 drive models that its power draw was around 50W--is this typical? Have any of you hooked your homebrew NAS boxes up to Kill-A-Watts to check power draw?

Misogynist
Jul 14, 2003



Shalrath posted:

My current setup is about 5 drives concatenated through LVM for a total of about 800 GB (which now feels very outmoded, since geeks.com is having a sale on 750GB Barracudas for $124). Nothing special really. I haven't bothered with any sort of mirroring or striping since most of the drives are different sizes.
Just so you're aware, if you lose the drive that your FS's inode table is on, all of the data on the other drives is effectively orphaned, regardless of whether the files themselves are contiguous or not. As above, it's no worse than losing a single 750GB drive, but it still sucks.

1000101
May 14, 2003

BIRTHDAY BIRTHDAY BIRTHDAY BIRTHDAY BIRTHDAY BIRTHDAY FRUITCAKE!

teamdest posted:

RAID-0 (Suicide)

If you're looking to set up a RAID 0 (a.k.a. striping), stop and think seriously before continuing. Some reports severely undermine the oft-believed notion that double the drives means double the performance, and when you consider that doubling the number of drives DOES double the failure chance, the question "is it worth it" becomes well-nigh impossible to answer in the affirmative. The only situation where a RAID-0 array might be required is in high-availability environments where many people will be accessing one array at the same time, and even then it is almost always better to use RAID-5 (nearly the same performance, but with redundancy too!).


Just being pedantic, but RAID 0 is rarely used in HA environments unless coupled with RAID 1, and even then only when high performance is required. RAID 5 is slower than RAID 0, but I'll qualify that with the fact that no home user will ever notice.

You'll use RAID 0 (really RAID 10) with applications that need a lot of IOPS (exchange, very busy SQL databases, possibly clustered filesystems).
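To put a number on the "doubling drives doubles failure chance" point above, here's a minimal sketch (the 5% annual per-drive failure rate is a made-up illustrative figure, not a spec):

```python
# Probability that an n-drive RAID 0 array fails: the array is lost
# if ANY member drive fails, assuming independent drive failures.
def raid0_failure_probability(n_drives: int, p_drive: float) -> float:
    return 1 - (1 - p_drive) ** n_drives

# With a hypothetical 5% annual failure rate per drive:
one = raid0_failure_probability(1, 0.05)   # ~0.05
two = raid0_failure_probability(2, 0.05)   # ~0.0975, just shy of double
four = raid0_failure_probability(4, 0.05)  # ~0.185
```

Strictly speaking the risk slightly less than doubles (both drives can't "use up" the same failure), but for small per-drive rates it's close enough that "double the drives, double the risk" is a fair rule of thumb.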

Anyway, carry on.

McRib Sandwich
Aug 4, 2006
I am a McRib Sandwich

cycleback posted:

I have been thinking about buying one of these Addonics storage towers to use as a DAS and a portable offsite backup.

Can you comment on the quality of the enclosure and the power supply?

How noisy is the power supply?

Is there enough airflow over the drives to keep them cool? I am concerned that there might be airflow problems because of the mesh-looking sides.

Is it possible to change the backpanel once it is purchased?

Well, in a lot of respects the enclosure feels like a Shuttle case; the construction is pretty sturdy (enough to lug a 5-disk RAID around in, anyway). Unfortunately I can't really vouch for the PSU; it seemed to be of average quality. I think I remember the PSU itself containing two fans, one of which was noisier than the other. We ended up unplugging one of them to decrease the noise somewhat. This wasn't a problem in our case because the Areca subsystem has a dedicated fan to itself. I don't recall the PSU itself being terribly noisy on its own, though... the RAID was substantially louder.

If you put your own drives in there, as opposed to a dedicated disk enclosure unit, you may want to add some active cooling. As I remember, the bare box is just that; the only fans I recall coming with it were attached to the PSU. The website pictures are actually pretty good at giving you an idea of what the bare case looks like.

As far as the backpanel, you can order any of a few different ones from Addonics. It's literally just a plate that you screw into place, with prepunched holes for whatever config you ordered from them. If you have the right metalworking tools, you could make your own custom panel pretty easily.

Sorry for the underwhelming details, for all the research I did on this thing I didn't have it in my own hands for very long, as it was for someone else. Hope that helps. All in all, I couldn't find anything similar on the market, and it's hard to beat the modularity and portability of the thing.

McRib Sandwich fucked around with this message at Mar 19, 2008 around 06:14

MrMoo
Sep 14, 2000


Open-e provides an interesting low-end alternative: buy your NAS box, fill it up with disks, and use a USB flash storage device to run the NAS/iSCSI system. Competitors include Wasabi Systems, and you can get FreeNAS to operate similarly.

On software vs. hardware RAID: I think software RAID 1 is almost always faster. RAID 5 usually needs a big memory cache to get good performance due to bus speeds. Software RAID carries the bonus that you can swap the disks immediately into another server; hardware RAID needs the same vendor's card. Some hardware RAID cards go obsolete and are incompatible on-disk with future models, which is, unsurprisingly, not a highlighted feature.

If you lose one disk in a RAID 5, there's a real chance a remaining disk fails during the rebuild. This is why hot-spares and lower-capacity disks are preferred; 1 TB disks can take quite a long time to re-write. Following on from that, if you have a hot-spare you can be better off putting that disk to use by bumping the level up to RAID 5E or RAID 6, if supported.
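To illustrate why big-disk rebuilds are scary, a back-of-envelope sketch. The 1-error-per-1e14-bits unrecoverable read error (URE) rate is an assumed consumer-drive spec, not a measurement:

```python
import math

# Chance of hitting at least one unrecoverable read error (URE) while
# re-reading every surviving disk during a RAID 5 rebuild.
def rebuild_ure_probability(surviving_bytes: float, ure_per_bit: float = 1e-14) -> float:
    bits_read = surviving_bytes * 8
    # log1p/expm1 keep the arithmetic stable for tiny per-bit rates
    return -math.expm1(bits_read * math.log1p(-ure_per_bit))

# Rebuilding a 4 x 1 TB array re-reads the 3 surviving terabytes:
p = rebuild_ure_probability(3e12)  # roughly 0.21, about a 1-in-5 chance
```

That's for read errors alone, before you even count the odds of a second drive dying outright mid-rebuild, which is why RAID 6 or a hot-spare starts to look attractive at these capacities.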

Some NAS devices only support code-page encodings for file names, e.g. Buffalo, so you cannot use multiple written languages: e.g. Simplified & Traditional Chinese.

NAS performance varies drastically, from 2-3 MB/s for single-disk adapters to ~20MB/s for many current SOHO units. Jumbo frames are a way of bumping performance to ~40MB/s by using larger packets on the wire, but compatibility is limited and the technology is a bit of a dead end. Performance is directly linked to the processor; various methods, such as hardware RAID 5 calculations and TOE network interfaces, are used to avoid the more expensive processors. Raw disk speed, in comparison, is 60MB/s and above. This can mean RAID 5 is faster on SOHO devices than RAID 1 or RAID 10 (if supported).
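The jumbo-frame effect can be sketched with a toy cost model: a fixed per-packet CPU cost plus a per-byte copy cost. The 45 µs and 0.02 µs/byte figures below are illustrative fits chosen to reproduce the ~20 and ~40 MB/s numbers above, not measured values:

```python
# Toy model of a CPU-bound NAS: per-packet time = fixed overhead
# (interrupts, protocol processing) + a per-byte copy cost.
def throughput_mb_s(payload_bytes: int, fixed_us: float = 45.0, per_byte_us: float = 0.02) -> float:
    per_packet_us = fixed_us + per_byte_us * payload_bytes
    return payload_bytes / per_packet_us  # bytes per microsecond == MB/s

standard = throughput_mb_s(1500)  # ~20 MB/s with standard frames
jumbo = throughput_mb_s(9000)     # ~40 MB/s with jumbo frames
```

The fixed per-packet cost is why a bigger payload helps a weak CPU: six times fewer packets to process for the same data, even though each packet takes a little longer to copy.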

Vendors are starting to compete more on features, as performance is limited by cost. Qnap, Synology, and Thecus trail Infrant/NetGear's system in media support, but this year they've started up forums, SSH access, and more firmware updates than you usually see for consumer/SOHO hardware. Media compatibility includes iTunes servers, Xbox 360 & PS3 streaming, Slingbox, and other set-top-style radio devices.

Synology recently added an IP camera surveillance system similar to ZoneMinder, though it is unusually license-restricted to 5 cameras. This might be tied to performance limitations of the hardware, although it easily performs better than ZoneMinder, which is designed for direct-attached CCTV cameras.

MrMoo fucked around with this message at Mar 19, 2008 around 06:17

Shalrath
May 25, 2001

by elpintogrande


Mr. Heavy posted:

Just so you're aware, if you lose the drive that your FS's inode table is on, all of the data on the other drives is effectively orphaned, regardless of whether the files themselves are contiguous or not. As above, it's no worse than losing a single 750GB drive, but it still sucks.


On a similar note, I believe the inode table (or whatever NTFS uses) has gone bad on my laptop's Windows partition. Linux can't mount it, as ntfs or ntfs-3g. I can't seem to run chkdsk through DOSBox either. Kind of a conundrum. I'm preparing to install XP on another partition just so I can run chkdsk on the first partition.


Any tips for recovery?

King Nothing
Apr 26, 2005

Ray was on a stool when he glocked the cow.

MrMoo posted:

Synology recently added an IP camera surveillance system similar to ZoneMinder, though it is unusually license-restricted to 5 cameras. This might be tied to performance limitations of the hardware, although it easily performs better than ZoneMinder, which is designed for direct-attached CCTV cameras.

On that topic, when I was looking at hard drives some were advertised as being good for video feeds because they could write multiple data streams. Is that an actual feature, or some sort of marketing thing? Doesn't any drive with multiple platters have multiple read/write heads, and thus the ability to do that?

wolrah
May 8, 2006
what?


King Nothing posted:

On that topic, when I was looking at hard drives some were advertised as being good for video feeds because they could write multiple data streams. Is that an actual feature, or some sort of marketing thing? Doesn't any drive with multiple platters have multiple read/write heads, and thus the ability to do that?

It's just a firmware thing. The access patterns of a DVR or similar are different from those of most home users, so the firmware can be designed in a way that's optimized for that use, sacrificing performance in other areas.

In terms of physical hardware, the "DVR Edition" or whatever drives are identical to their standard use counterparts. They're just software-optimized to perform well with at least two "simultaneous" writes and one read going at any given time.

900ftjesus
Aug 10, 2003


wolrah posted:

It's just a firmware thing. The access patterns of a DVR or similar are different from those of most home users, so the firmware can be designed in a way that's optimized for that use, sacrificing performance in other areas.

In terms of physical hardware, the "DVR Edition" or whatever drives are identical to their standard use counterparts. They're just software-optimized to perform well with at least two "simultaneous" writes and one read going at any given time.

They're optimized for writing large chunks of contiguous data. You wouldn't want to use this in your computer:

7200 RPM 160GB WD SATA:
Average seek time: 8.7 ms

7200 RPM 160GB Seagate SATA, optimized to write multiple streams at once:
Average seek time: 17 ms
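Those seek figures translate directly into random-IO capability. A rough sketch, using the usual seek-plus-half-a-rotation estimate:

```python
# Rough random-access estimate: each small random IO costs one average
# seek plus half a rotation of rotational latency.
def random_iops(avg_seek_ms: float, rpm: int = 7200) -> float:
    half_rotation_ms = 0.5 * 60_000 / rpm  # ~4.17 ms at 7200 RPM
    return 1000 / (avg_seek_ms + half_rotation_ms)

desktop = random_iops(8.7)   # ~78 IOPS for the desktop drive
dvr = random_iops(17.0)      # ~47 IOPS for the stream-optimized drive
```

So the DVR-tuned drive gives up roughly 40% of its random-IO rate, which it never needed for a handful of big sequential streams, but which would hurt badly as an OS drive.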

Combat Pretzel
Jun 23, 2004

No, seriously... what kurds?!

Regarding RAID-Z, it's different in order to fix some issues with RAID-5 (the write hole). Sun can't go around claiming to have a filesystem that stays consistent after crashes and then use RAID-5 with a static stripe width loving up the data.

As far as RAID-1 read performance goes, given a good IO scheduler (which negates any need to "read in sync"), you can get near double read speeds.

I run two WD RAID Editions (WD5000ABYS) in a RAID-1. I can get up to 70MB/s off a single drive. In mirror configuration, up to 130MB/s. The IO scheduler involved here is ZFS's IO pipeline. Both measurements were taken by dd'ing a huge file on the filesystem to /dev/null, using the filesystem's record size as the block size.

What should be kept in mind with these numbers is that ZFS is a COW system with load balancing and whatnot. Anyone with a defragmentation fetish would weep blood.
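The near-doubling comes from the scheduler splitting a sequential read across both halves of the mirror. A minimal sketch; the 93% efficiency factor is just a fit to the 70 vs. 130 MB/s numbers in this post, not a measured constant:

```python
# With an IO scheduler issuing independent reads to each mirror half,
# sequential read speed approaches n_mirrors times a single drive's rate,
# minus some scheduling/merging overhead.
def mirror_read_mb_s(single_drive_mb_s: float, n_mirrors: int = 2, efficiency: float = 0.93) -> float:
    return single_drive_mb_s * n_mirrors * efficiency

combined = mirror_read_mb_s(70.0)  # ~130 MB/s from two 70 MB/s drives
```

Writes, of course, see no such benefit: every block still has to land on both drives.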

Shalrath posted:

On a similar note, I believe the inode table (or whatever NTFS uses) has gone bad on my laptop's windows partition.
It's called Master File Table (MFT) and it's mirrored on the drive. Something else may have broken, if chkdsk can't fix the MFT.

stephenm00 posted:

Why aren't ZFS and RAID-Z a more common option? There must be some disadvantage for home users, right?
The disadvantage is that it can have quite a memory footprint. Actually, that's not entirely correct; it's just that giving it a lot of memory can make it fly even more. The IO pipeline of ZFS takes huge advantage of a large ARC (adaptive replacement cache, which is what ZFS uses), because it can detect various read patterns and prefetch accordingly into the ARC.

The ARC resizes with memory pressure. At least it does in Solaris; I'm not sure if that works yet in FreeBSD or if it's still a fixed setting (I think it was 64MB). Anyway, you can manually set a limit, which would be stupid, but people get too impressed with code making gratuitous use of free unused memory (see the Vista Superfetch bullshitting).

Idiotic anecdotal reference: when I was new to Solaris and running ZFS, watching a movie from the hard disk in the background, I was wondering why the drive LED wasn't going at all and why I was having occasional sound skipping (lovely driver caving under load, fixed now). At some point while diagnosing, I checked the IO stats in ZFS, and it turned out ZFS had figured out I was doing linear reads and was actually reading 180-200MB at once every 15 minutes.

Combat Pretzel fucked around with this message at Mar 19, 2008 around 15:38

wolrah
May 8, 2006
what?


900ftjesus posted:

They're optimized for writing large chunks of contiguous data. You wouldn't want to use this in your computer:

7200 RPM 160GB WD SATA:
Average seek time: 8.7 ms

7200 RPM 160GB Seagate SATA, optimized to write multiple streams at once:
Average seek time: 17 ms

For a same-brand comparison, straight from the Seagate datasheets:

Barracuda 7200.11 (desktop/workstation) 1TB: 4.16ms
Barracuda ES.2 (nearline/NAS/SAN) 1TB: 4.16ms
DB35.3 (professional/security DVR) 1TB: Read <14ms, Write <15ms

They don't have numbers listed for the SV35 (consumer DVR) aside from the vague "up to 10 HDTV streams" and it's also a generation out of date (based on the Barracuda 7200.10), otherwise I'd have included that too. As far as I know the three drives I listed are all physically the same, just with different firmware for their intended application.

Migishu
Oct 22, 2005

I'll eat your fucking eyeballs if you're not careful



PowerLlama posted:

I have to second King Nothing's review. I have the D-Link Drandom numbers, and I love it.

I also must commend King Nothing and his review of the D-Link NAS box. I bought one of these 3 months ago and have been very happy with it. It works perfectly, draws very little power, and my girlfriend's Mac recognized it straight away.

We plugged a printer into it, and it recognized it straight away. It was absolutely brilliant. For something cheap and easy to use, with all the benefits of stability and customization, this is definitely worth the money.

Though I would LOVE to build me one of those DAS raid boxes, I just don't have the money. This works though, and I'm very happy with it.

kri kri
Jul 18, 2007



I would be really interested to hear any hands-on experiences with UnRaid. I am looking to move away from my WHS (performance, corruption bug) to UnRaid.

teamdest
Jul 1, 2007


kri kri posted:

I would be really interested to hear any hands-on experiences with UnRaid. I am looking to move away from my WHS (performance, corruption bug) to UnRaid.

I would too; if someone's got a good review I'll throw it into the OP. Also, any comments/suggestions to improve that?

stephenm00
Jun 28, 2006


cycleback posted:

I think that the previous poster is referring to the fact you can't currently expand a VDEV. This is pretty limiting for most home setups.


Can you explain this more?

Munkeymon
Aug 14, 2003

Motherfucker's got an
armor-piercing crowbar!
Rigoddamndicuλous.

You should know that if you plan on running ZFS and using Samba, you might have problems. My server, running Solaris, had stability issues for months, and then I stopped listening to music stored on the network. All of a sudden my server, which had a maximum uptime of about two weeks (I know, but I don't want to poo poo where I eat, so I didn't mess with it), stayed up for 3 months.

Ars thread with more information:
http://episteme.arstechnica.com/eve...67005626831/p/1

More recently, I installed some RAM that's either bad or the motherboard hates, and Solaris crashed and corrupted a file* that prevents it from booting at all, so I figured I'd try BSD and see if it worked. Unfortunately, it's refusing to recognise the zpool (which I admit I was not able to properly export). Back to Solaris now; hope the newest version of the Developer Edition is more stable. Also I hope it will import the array, because if not I'm really gonna

That said, I had a lovely SATA cable that was taking one of the disks in my array offline all the time (until I replaced it; I'm not totally helpless, damnit) and ZFS handled it pretty well, although sometimes the machine would freeze when the drive went away. Never lost any data, though.

*Disclaimer: Solaris was installed to a disk off the array and used a non-redundant filesystem. For some reason, I never thought to dump an image of it until today.

Munkeymon fucked around with this message at Mar 21, 2008 around 03:23

admiraldennis
Jul 22, 2003

I am the stone that builder refused
I am the visual
The inspiration
That made lady sing the blues


Is there a way to prevent md from starting an array in degraded mode?

I.e., if I disconnected a drive used in a RAID 5 and booted the machine, the array would simply not start, instead of starting and becoming degraded?

I found "md_mod.start_dirty_degraded=0" but that only prevents a degraded & dirty array from starting (and is the default), so that doesn't really help.

admiraldennis fucked around with this message at Mar 21, 2008 around 20:08

elcapjtk
Mar 14, 2005
Some people say I am a terrible person.


This is probably a dumb question, but I really haven't had much personal experience with building network storage type things.

I have 10x 160GB PATA hard drives that I want to shove into an array for home use. This won't hold anything really important, so redundancy really isn't that big of a deal for me. What I'd prefer is something cheap; what kinda stuff would I be looking for?

teamdest
Jul 1, 2007


elcapjtk posted:

This is probably a dumb question, but I really haven't had much personal experience with building network storage type things.

I have 10x 160GB PATA hard drives that I want to shove into an array for home use. This won't hold anything really important, so redundancy really isn't that big of a deal for me. What I'd prefer is something cheap; what kinda stuff would I be looking for?

The biggest issues you face are that the number of drives often dictates the price of your product, and that you haven't specified direct- or network-attached. For network-attached, you'd probably be best off with a computer and PATA controller cards, unless someone knows of some ridiculously cheap 10-bay PATA NAS device. I don't know as much about direct-attached solutions for that many PATA drives.

landoverbaptist
Sep 9, 2006

by Fistgrrl


I suggest getting an old corporate IDE disk array. I am using an UltraTrak RM8000 and I love it, except for the fact that it is slow across the network. From Vista to an XP box: 810KB/s!!! Vista SP1 to it: 10MB/s in RAID5.

Combat Pretzel
Jun 23, 2004

No, seriously... what kurds?!

Munkeymon posted:

More recently, I installed some RAM that's either bad or the motherboard hates and Solaris crashed and corrupted a file* that prevents it from booting at all, so I figured I'd try BSD and see if it worked. Unfortunately, it's refusing to recognise zpool (which I admit I was not able to properly export). Back to solaris now, hope the newest version of the developer edition is more stable. Also I hope it will import the array because if not I'm really gonna
FreeBSD only supports pool version 2. That and apparently GEOM's interfering. If it ain't either of these, you can supply the -f flag to zpool import. Exporting the pool is just a management semantic.

As far as Solaris not booting, if GRUB and failsafe mode still work, the boot archive is hosed. Ain't a biggie, since you can recreate (i.e. update) it in failsafe mode.

Munkeymon posted:

You should know that if you plan on running ZFS and using Samba, you might have problems. My server, running Solaris, had stability issues for months and then I stopped listening to music stored on the network.
Might consider looking into the most recent Nevada builds. They now come with a CIFS server written by Sun based on the actual Microsoft documentation. If you've another month's time, you should wait for the next Developer Edition based on snv_87, which apparently comes with an updated version of the Caiman installer that supports installing to ZFS and setting up ZFS boot (single-disk or mirror pool only). I'd figure that boot files on ZFS are more resilient to random crashes loving up your boot environment.

Combat Pretzel fucked around with this message at Mar 22, 2008 around 00:10

King Nothing
Apr 26, 2005

Ray was on a stool when he glocked the cow.

elcapjtk posted:

This is probably a dumb question, but I really haven't had much personal experience with building network storage type things.

I have 10x 160GB PATA hard drives that I want to shove into an array for home use. This won't hold anything really important, so redundancy really isn't that big of a deal for me. What I'd prefer is something cheap; what kinda stuff would I be looking for?

10 cheap USB/firewire enclosures is about the best you'll be able to do with that, unless you have an old full tower machine sitting around that can handle 10 drives. A better idea would be to sell those drives for maybe $20 each and put the money towards a single modern SATA drive and enclosure.

cypherks
Apr 8, 2003



Not sure if it's the right thread, but I can answer any questions about EMC CLARiiON and Celerra, iSCSI, and Fibre Channel switching. I've done a bunch of SAN work with ESX 3.0 and 3.5 installs as well.

teamdest
Jul 1, 2007


cypherks posted:

Not sure if it's the right thread, but I can answer any questions about EMC CLARiiON and Celerra, iSCSI, and Fibre Channel switching. I've done a bunch of SAN work with ESX 3.0 and 3.5 installs as well.

well then I'll take you up on that: could you give a basic overview of what a SAN is, how it is hooked up/accessed, etc? I've got my NAS and was thinking of building a SAN and realized I literally know nothing about them, and even iSCSI confuses me when I try to figure out what's going on. I believe I just fundamentally don't understand what a SAN is for.

swalk
Nov 20, 2004
bucka blaow

teamdest posted:

well then I'll take you up on that: could you give a basic overview of what a SAN is, how it is hooked up/accessed, etc? I've got my NAS and was thinking of building a SAN and realized I literally know nothing about them, and even iSCSI confuses me when I try to figure out what's going on. I believe I just fundamentally don't understand what a SAN is for.

I'm no SAN expert, but compared to NAS they generally offer speed, more storage space, and scalability. Plus, with SANs, storage can appear locally attached, so you can run servers with no hard drives (for example, run an ESX server directly off a SAN).

The Gunslinger
Jul 24, 2004

Do not forget the face of your father.

For whoever asked, I'm getting a few new drives this weekend and I'll be setting up UnRaid to see how it is. I'll do a little review after I've spent a few days with it and do the usual poo poo to sabotage/stress test it.

stephenm00
Jun 28, 2006


The Gunslinger posted:

For whoever asked, I'm getting a few new drives this weekend and I'll be setting up UnRaid to see how it is. I'll do a little review after I've spent a few days with it and do the usual poo poo to sabotage/stress test it.

Can't wait...

Munkeymon
Aug 14, 2003

Motherfucker's got an
armor-piercing crowbar!
Rigoddamndicuλous.

Toiletbrush posted:

FreeBSD only supports pool version 2. That and apparently GEOM's interfering. If it ain't either of these, you can supply the -f flag to zpool import. Exporting the pool is just a management semantic.

That doesn't make sense based on what I found on their wiki: http://wiki.freebsd.org/ZFSQuickStartGuide says they support features from at least version 8, though I can see that the features don't exactly stack and so could be skipped for lower-hanging fruit. Besides, the -f flag doesn't do anything when the system swears there are no pools, or if it simply can't start ZFS in the first place.

I thought I read somewhere that export wrote some extra metadata, but I could easily be wrong since all my research is a year old at this point.

quote:

As far as Solaris not booting, if GRUB and failsafe mode still work, the boot archive is hosed. Ain't a biggie, since you can recreate (i.e. update) it in failsafe mode.

I'd rather have the newer system going, and the only things I care about are the pool and the Azureus install, which is only valuable because it required a retarded amount of effort to get working.

quote:

Might consider looking into the most recent Nevada builds. They now come with a CIFS server written by Sun based on the actual Microsoft documentation. If you've another month's time, you should wait for the next Developer Edition based on snv_87, which apparently comes with an updated version of the Caiman installer that supports installing to ZFS and setting up ZFS boot (single-disk or mirror pool only). I'd figure that boot files on ZFS are more resilient to random crashes loving up your boot environment.

I'd much rather get an AMD64 build running, but that apparently means conjuring nforce drivers out of thin air, which I'm not up for. Maybe I will just get a minimal-effort system running and ride it out until the next version, if it's that close :\ I miss the warm, fuzzy feeling of having nightly automated backups.

Munkeymon fucked around with this message at Mar 22, 2008 around 09:40

Combat Pretzel
Jun 23, 2004

No, seriously... what kurds?!

Munkeymon posted:

That doesn't make sense based on what I found on their wiki: http://wiki.freebsd.org/ZFSQuickStartGuide they support features from at least version 8, though I can see that the features don't exactly stack, and so could be skipped for lower-hanging fruit. Besides, the -f flag doesn't do anything when the system swears there are no pools or if it simply can't start ZFS in the first place.
What I see is support up to version 5, which is gzip. The features above that are not really that important (yet), but I figure that zpool throws a fit if the version's higher. Actually, I don't even know how it'd behave on a pool version higher than what's supported. Silence might just be it, perhaps.

quote:

I thought I read somewhere that export wrote some extra metadata, but I could easily be wrong since all my research is a year old at this point.
Export only sets a flag in the pool that it's unused and removes its entry from zpool.cache.

quote:

I'd rather have the newer system going, and the only things I care about are the pool and the Azureus install, which is only valuable because it required a retarded amount of effort to get working.
I don't get what you mean. You've already set up FreeBSD? Fixing the boot archive is one single line. Actually, the more recent Nevada builds should notice it themselves when booting to failsafe and ask you if it should be updated.

bootadm update-archive -R /a

(Since in failsafe mode, it mounts your root fs to /a)

quote:

I'd much rather get an AMD64 build running, but that apparently means conjuring nforce drivers out of thin air, which I'm not up for. Maybe I will just get a minimal effort system running and ride it out untill the next version if it's that close :\ I miss the warm, fuzzy feeling of having nightly automated backups.
Uhm, I still run snv_76, half a year old, and it's pretty stable on my NForce4 mainboard. And it boots to 64-bit mode. They've shipped drivers for the Nvidia SATA controller since, I think, snv_72.

servo@bigmclargehuge:~ > modinfo | grep nv_sata
38 fffffffff7842000 5b88 189 1 nv_sata (Nvidia ck804/mcp55 HBA v1.1)

MrMoo
Sep 14, 2000


QNAP with its 409 looks to be competing with the ReadyNAS NV+ on looks, and with Synology on features. It doesn't have an LCD panel or as much media support, but they've bumped up the power to support RAID 6, which they've started to advertise in Asia, with 256MB of system memory and some random 500MHz processor.

cypherks
Apr 8, 2003



teamdest posted:

well then I'll take you up on that: could you give a basic overview of what a SAN is, how it is hooked up/accessed, etc? I've got my NAS and was thinking of building a SAN and realized I literally know nothing about them, and even iSCSI confuses me when I try to figure out what's going on. I believe I just fundamentally don't understand what a SAN is for.

With a SAN, you can pick and choose your disks and LUN sizes. Say you've got a tray of 15 disks. You can do RAID 1, 3, 5, 10, or 6. Once you have a RAID group, you can carve out LUNs. Let's say you have three 500GB disks. Put those in a single RAID 5 group. Out of that you could carve out, for example, ten 300MB LUNs. Present those LUNs to servers, and what each will see is the 300MB of storage.

Basically what a SAN does is manage storage for hosts. There's a lot of smoke and mirrors but what you end up with is a host that thinks it has local storage, when in fact it resides elsewhere.

iSCSI: all you're doing is using an IP network to transport SCSI commands. You set up storage on the SAN, give it an iSCSI IP address, and pick what host/server you'll present it to. On the host/server, you set up an iSCSI initiator, point it at the SAN, and voila, you have storage.

Let me know if you want more info or details, I'm keeping it pretty simple.

There is a ton of info available at http://www.snia.org/home - it's vendor neutral.

If anyone really wants to mess around, there is a SAN simulator, free, available from EMC. Not sure what the rules about 'filez' are these days otherwise I'd put up a link.

Munkeymon
Aug 14, 2003

Motherfucker's got an
armor-piercing crowbar!
Rigoddamndicuλous.

Toiletbrush posted:

What I see is up to version 5, which is gzip. The features above are not really that important (yet), but I figure that zpool throws a fit if the version's higher. Actually, I don't even know how it'd behave on a higher pool version than what's supported. Silence might just be it, perhaps.

Oh sorry, I read here: http://wiki.freebsd.org/ZFS that they have delegated admin going, which is a version 8 feature. The pool it failed to notice was at version 3, so I don't think it was a version problem.

quote:

I don't get what you mean. You've already set up FreeBSD? Fixing the boot archive is one single line. Actually, the more recent Nevada builds should notice it themselves when booting to failsafe and ask you if it should be updated.

bootadm update-archive -R /a

(Since in failsafe mode, it mounts your root fs to /a)

Yeah, I installed FreeBSD on a spare drive that I swapped in, and ZFS didn't work. I got an error message about it being unable to initialize the ZFS system, and I couldn't find anything helpful on Google, so I installed the latest Solaris Express Developer over it.

I did try updating my old install (~1 year old now), but the installer said there wasn't enough space on the drive. I don't see why because that drive has 62 GB free on slice 7, though I may be misunderstanding the update procedure.

quote:

Uhm. I still run snv_76, half a year old, and is pretty stable on my NForce4 mainboard. And it boots to 64bit mode. They ship drivers for the Nvidia SATA controller since I think snv_72.

servo@bigmclargehuge:~ > modinfo | grep nv_sata
38 fffffffff7842000 5b88 189 1 nv_sata (Nvidia ck804/mcp55 HBA v1.1)

All I know is that when I installed the AMD64 build last year, no disk plugged into a SATA controller would show up in /dev/dsk, and the only mention I could find of a similar situation (nforce 5 SATA controller not working) was 'solved' by giving up on the normal build and using the developer express package instead. I guess that's changed since then, so I'll probably have to give it another try. Did you have to do any special configuration for that, or did everything work right from the get-go?

On a side note, I can't believe you use > in your prompt. I'd constantly be checking to be sure I wasn't redirecting output into an executable file

teamdest
Jul 1, 2007


cypherks posted:

SAN Stuff

If it's free I don't think it's "files" really, so it should be fine. Someone correct me if I'm wrong, though.


Also, the SNIA stuff looks good; I'm reading it over now. I guess my primary questions would be: what do I really need to make a SAN in my "home" (Dorm Room), can I convert my NAS to a SAN, and would there be benefits beyond learning?

Combat Pretzel
Jun 23, 2004

No, seriously... what kurds?!

Munkeymon posted:

Oh sorry, I read here: http://wiki.freebsd.org/ZFS that they have delegated admin going, which is a version 8 feature. The pool it failed to notice was at version 3, so I don't think it was a version problem.
Must be that GEOM thing. I think ZFS on FreeBSD can't handle full-disk vdevs properly, since Solaris partitions a disk as GPT when you slap ZFS across the whole thing, and GEOM kind of trips over itself with those. At least I think that was a limitation some time ago: you had to set up a full-disk BSD slice first and make ZFS use that.
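If I remember the workaround of that era right, it looked something like this (the device name ad4 is made up): give the disk one slice, label it, and build the pool on the labeled partition instead of the raw disk:

```shell
# Sketch of the slice-first workaround for ZFS on FreeBSD 7-era systems
fdisk -BI ad4        # create one MBR slice spanning the whole disk
bsdlabel -wB ad4s1   # write a standard BSD label into that slice
zpool create tank ad4s1a   # build the pool on the partition, not the raw disk
```

That way GEOM sees a format it understands instead of Solaris-style GPT labeling.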

quote:

Yeah, I installed FreeBSD on a spare drive that I swapped in and ZFS didn't work. I got an error message about it being unable to initialize the ZFS system and I couldn't find anything helpfull on Google, so I installed the latest Solaris Express Developer over it.
The Developer Editions are actually pretty stable. The development process of Solaris Nevada is pretty cool (and I guess similar to FreeBSD's). Everything has to be tested, approved, and tested again before it can go into the main tree that becomes the Community and Developer editions. As I said, I'm running a Nevada build and it's mighty stable.

quote:

I did try updating my old install (~1 year old now), but the installer said there wasn't enough space on the drive. I don't see why because that drive has 62 GB free on slice 7, though I may be misunderstanding the update procedure.
Slice 7 is /export with the standard layout, and / with all the rest is slice 0. The loving dumb thing with the current Solaris installer is that it sizes that slice more or less to exactly what it needs to install the system. If you then try a regular update onto the same slice, you're out of luck.

If you still want to run Solaris and get rid of these silly hassles, wait a month for the snv_87 Community Edition, following the next upcoming Developer Edition. The new graphical installer (which you access under the Developer Edition boot option) will support ZFS root and boot. That way, you don't have to deal with mis-sized slices on upgrades anymore. Snap Upgrade will also be integrated, which is like Live Upgrade but built for ZFS and taking advantage of it.

(ZFS boot currently works only on single-disk or single-mirror pools, so you need a separate pool on your system disk.)
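That snapshot-and-clone magic is just plain ZFS underneath. Roughly, and with made-up dataset names, Snap Upgrade boils down to something like:

```shell
# Snapshot the current boot environment, clone it, upgrade the clone,
# then point the bootloader at the clone. Dataset names are hypothetical.
zfs snapshot rpool/ROOT/snv_87@pre-upgrade
zfs clone rpool/ROOT/snv_87@pre-upgrade rpool/ROOT/snv_88
# ...apply the upgrade to the clone, then activate it and reboot
```

The running system never gets touched, which is why the upgrade can happen during regular operation.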

I'm also waiting for that build. If you intend to use GUI stuff on your server (locally or over XDMCP), note that snv_88 will have Gnome 2.22 integrated; you don't want that, because the new GVFS stuff appears to make it crash happy.

quote:

Did you have to do any specail configuration for that or did everything work right from the get-go?
Nope. Pre snv_72, the SATA drives acted like regular IDE drives towards the system. I thought that was normal behaviour from the Nvidia SATA chipset. With snv_72, the Nvidia SATA driver was released and the disks act like SCSI disks now (which they're apparently supposed to).

quote:

On a side note, I can't believe you use > in your prompt. I'd constantly be checking to be sure I wasn't redirecting output into an executable file
It's colored, either green for normal user or red for root (additional indicator).

Actually, I could remove the drat user name, because there's just me and root.

Combat Pretzel fucked around with this message at Mar 22, 2008 around 23:16

Munkeymon
Aug 14, 2003

Motherfucker's got an
armor-piercing crowbar!
Rigoddamndicuλous.

Toiletbrush posted:

Must be that GEOM thing. I think ZFS on FreeBSD can't handle full-disk vdevs properly, since Solaris partitions a disk as GPT when you slap ZFS across the whole thing, and GEOM kind of trips over itself with those. At least I think that was a limitation some time ago: you had to set up a full-disk BSD slice first and make ZFS use that.

That sounds plausible, but it's kind of crappy when disk management is tripping around in the first place.

quote:

The Developer Editions are actually pretty stable. The development process of Solaris Nevada is pretty cool (and I guess similar to FreeBSD's). Everything has to be tested, approved, and tested again before it can go into the main tree that becomes the Community and Developer editions. As I said, I'm running a Nevada build and it's mighty stable.

Yeah, I read about that, but I just don't see that stability in the graphical environment. For example, I enabled a service the other night through the service management GUI, and as soon as I checked the box, the machine locked up; I had to reset it after 5 minutes of it not even responding to the Num Lock key. Maybe I should have spent more for 'real' server hardware, but that and the hard locks caused by using WinAmp over SMB look more like software problems to me.

quote:

Slice 7 is /export with the standard layout, and / with all the rest is slice 0. The loving dumb thing with the current Solaris installer is that it sizes that slice more or less to exactly what it needs to install the system. If you then try a regular update onto the same slice, you're out of luck.

I had assumed it would just move or copy everything on slice 0 to someplace on 7 and then merge new files in, but I guess the installer isn't that smart.

quote:

If you still want to run Solaris and get rid of these silly hassles, wait a month for the snv_87 Community Edition, following the next upcoming Developer Edition. The new graphical installer (which you access under the Developer Edition boot option) will support ZFS root and boot. That way, you don't have to deal with mis-sized slices on upgrades anymore. Snap Upgrade will also be integrated, which is like Live Upgrade but built for ZFS and taking advantage of it.

(ZFS boot currently works only on single-disk or single-mirror pools, so you need a separate pool on your system disk.)

I'm also waiting for that build. If you intend to use GUI stuff on your server (locally or over XDMCP), note that snv_88 will have Gnome 2.22 integrated; you don't want that, because the new GVFS stuff appears to make it crash happy.

I don't really want Gnome at all because I prefer KDE. I do use the GUI, though, for the torrent client.

Also, putting the system root in the pool isn't really something I care to do. I see the advantage of making it redundant, but I'm not confident in the updatability of that kind of setup. The built-in update manager never worked on my old install, for example, and it was easy to just swap out the hard disk and gently caress around with BSD without having to worry about stepping on other bootloaders.

Will the Community Edition ever come out in 64-bit, do you think? You seem way more knowledgeable about the Solaris community than I am. Oh, and what about the blurb on the download page that says it's 'unsupported'? Is that Sun-speak for 'you're pretty much on your own'?

quote:

Nope. Pre snv_72, the SATA drives acted like regular IDE drives towards the system. I thought that was normal behaviour from the Nvidia SATA chipset. With snv_72, the Nvidia SATA driver was released and the disks act like SCSI disks now (which they're apparently supposed to).

Well on my first try with a 64-bit build the system didn't seem to be able to see them at all, IDE or no.

My guess would be that the system doesn't expect PATA drives to be hot-swappable like SATA is and someone had to hack in something ugly to make that workable.

Combat Pretzel
Jun 23, 2004

No, seriously... what kurds?!

Munkeymon posted:

I had assumed it would just move or copy everything on slice 0 to someplace on 7 and then merge new files in, but I guess the installer isn't that smart.
I wouldn't know, it never worked at all for me, and I have 16GB root slices. Live Upgrade is also stupid in my general direction, so I'm hoping for Snap Upgrade to work well.

quote:

I don't really want Gnome at all because I prefer KDE. I do use the GUI, though, for the torrent client.
Sun's own distro will be Gnome based, due to their investments in it. There's at least one Sun engineer working with the KDE team to port KDE4 to Solaris. I guess once it's workable, and Project Indiana (the prototype OpenSolaris distro incl. distro constructor) reaches beta and/or release stage, it'll be available as option there. Not sure how it'll be handled with the SX*E's.

quote:

Also, putting the system root in the pool isn't really something I care to do.
What I was saying is that, if you were to use a Solaris build with the new revision of the installer, you should create a separate pool on the separate disk you run your system on. It won't be redundant, but you get the advantages of pooled storage, making the fixed-slices crap go away, plus Snap Upgrade, which employs snapshot-and-clone magic to update your system during regular operation (and makes the result available on reboot). That pool would be separate from your data pool.

Munkeymon posted:

Will the community edition ever come out in 64-bit, do you think? You seem way more knowledgeable the Solaris community than I am. Oh, and what about the blurb on the download page that says it's 'unsupported'? Is that Sun speak for 'you're pretty much on your own'?
On boot, it's decided whether the 32-bit or the 64-bit kernel gets loaded. The userland is mainly 32-bit, but ships 64-bit versions of most libraries. Components like Xorg are available in both versions; which one gets loaded is decided with isaexec magic.

Right now, it's the same argument as with Windows. There's no real point in a 64bit userland, except in places where it makes sense for the last bit of performance, i.e. kernel and Xorg.

And unsupported in the sense that they won't be liable if you run NASDAQ on it and then whoopsy-daisy, your data goes missing.
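If you want to see what your own install picked, isainfo will tell you. A couple of read-only commands, no configuration needed:

```shell
isainfo -k   # native instruction set of the running kernel (e.g. amd64)
isainfo -b   # address-space bits of the native instruction set: 32 or 64
```

On an nForce4 box booted into the 64-bit kernel you'd expect amd64 from the first and 64 from the second.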

quote:

Well on my first try with a 64-bit build the system didn't seem to be able to see them at all, IDE or no.
That's strange. I figured that with the old ATA chips, the situation was similar to SATA's AHCI, in that there's a generic way to operate them.


TheChipmunk
Sep 29, 2003

Eschew Obfuscation

ZFS on Mac.
http://trac.macosforge.org/projects...i/documentation
Big deal?

I'm currently using Ubuntu Server Edition on an old PC with two fairly large drives via LVM (basically JBOD).
I'm very new to the unix environment and I really have no idea what the hell I'm doing. LVM on Ubuntu Server Edition was quite the accomplishment for me.

Is OpenSolaris a respectable option for ZFS and homebrew NAS boxes? (By NAS I mean old computer + drives).

And also a rather bad question for RAID 5: let's say I have two 250GB hard drives and I put them in a RAID 5 configuration. Do I see 500GB available? Or do I only see 250? Seeing 500GB available blows my mind if it's true.
(I am obviously not speaking of ACTUAL availability, but is this theoretically sound?)
