FISHMANPET
Mar 3, 2007

Sweet 'N Sour
Can't
Melt
Steel Beams

adorai posted:

for cifs performance, freebsd is not going to beat opensolaris. If you do build it on opensolaris, the illumos project will probably provide an eventual upgrade path to get on one of the distros based on it.

Actually OpenIndiana would provide the upgrade, not illumos: illumos is the kernel, OpenIndiana is the distro. OpenIndiana already provides an online upgrade path from OpenSolaris b134 to b147 (the last OpenSolaris release; the code was published but never shipped as a build) and will continue releasing new builds using the illumos kernel and other new bits.


TerryLennox
Oct 12, 2009

There is nothing tougher than a tough Mexican, just as there is nothing gentler than a gentle Mexican, nothing more honest than an honest Mexican, and above all nothing sadder than a sad Mexican. -R. Chandler.

necrobobsledder posted:

There's two properties of low power drives in the industry that are of interest to those running RAID setups. The aggressive head parking behavior resulting in a likely shortened lifecycle due to just plain mechanical wear is one, and the TLER / timeout behavior is the other. These are not a problem in any drive line elsewhere. Otherwise, the behaviors and features are down to warranty, certain electro-mechanical longevity, and 4k / 512b sector size issues (this is relevant in storage systems overall).

If anyone wants a heads up, I'm about to post my 1.3 year old Thecus N4100Pro 4-bay NAS on SA-Mart for $250 with the most recent firmware as of August 2010. It's been a great NAS, but after I've spent a good while with my ZFS setup, I don't have a need for this anymore. Smoke free home :)

I don't have plat so I can't PM you but I'm interested. At that price I could finally have a suitable home for the MET-Art girls.

necrobobsledder
Mar 21, 2005
Lay down your soul to the gods rock 'n roll
Nap Ghost

Star War Sex Parrot posted:

Several years. And from what I can tell the green enterprise drives are just the consumer green drives with more RAID-friendly firmware, rather than the RE4 drives running slower.
That makes it even more of a rip-off to me. You can't possibly spend more than $180 PER DRIVE in cooling and power over the same 5 year period in an enterprise environment. It may be arguable if there's some overhead like man-hours replacing drives, but it's seriously a rip-off for firmware that turns off the head parking and turns on the TLER, which is algorithmically easier to write than for a regular consumer green drive.

TerryLennox posted:

I don't have plat so I can't PM you but I'm interested.
You can e-mail me: djk29a at gmail. I do have a bunch of pictures I'll be uploading later. The big downer to consider here is shipping because this is a little heavier than I thought, so I'll likely do UPS or Fedex.

MrMoo
Sep 14, 2000

Latest Thecus NAS supports ZFS; shame their website, much like their products, looks like rear end and is useless for finding any useful details.

http://www.thecus.com/products_spec.php?cid=10&pid=220&set_language=english

Allistar
Feb 14, 2002

Regulation, aisle 8. Nerf, aisle 15.
FreeNAS is a bit sluggish, development-wise, for me. I'm checking out OpenIndiana in a VM. If it works out well, I might install it on an HD for my NAS and see how easily I can import my ZFS pool.

I am running the embedded FreeNAS "firmware" right now off a USB stick and while it's great and all (plus the singular web interface for everything is grand), I'd like to be able to add other stuff.

IOwnCalculus
Apr 2, 2003





necrobobsledder posted:

That makes it even more of a rip-off to me. You can't possibly spend more than $180 PER DRIVE in cooling and power over the same 5 year period in an enterprise environment. It may be arguable if there's some overhead like man-hours replacing drives, but it's seriously a rip-off for firmware that turns off the head parking and turns on the TLER, which is algorithmically easier to write than for a regular consumer green drive.

Let's run some rough numbers then - RE4 2TB vs RE4-GP 2TB. From WD's site:

Read/write: 10.7W vs 6.8W
Idle: 8.1W vs 3.8W
Standby/Sleep: 1.5W vs 0.8W

In any given state, the GP draws roughly 35% to 55% less power than the regular 2TB drive; to make the numbers easier, we'll just deal with read/write (where the savings are smallest percentage-wise, but greatest in terms of Watt-hours conserved).

Over the course of five years of constant read/write, the standard drive will consume 468.66 kWh. The GP drive will consume 297.84 kWh. Of course, in a datacenter, a kWh consumed means heat to deal with, so on top of the actual power cost we need to multiply what it cost to cool that heat. The Uptime Institute figures an average datacenter's PUE is 2.5, so the actual kWh consumed over those five years becomes 1171.65 kWh and 744.60 kWh respectively.

So what does this cost? The US EIA puts the average power cost across all sectors in July 2010 at 10.5 cents per kWh. So to power and cool the RE4, we spent about $123.02; to power and cool the RE4-GP, about $78.18. Savings in this very rough and not entirely realistic situation? $44.84.
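Those figures are easy to sanity-check. A quick script using the same assumptions as above (constant read/write draw, PUE of 2.5, 10.5 cents per kWh):

```python
# Five-year power-and-cooling cost: WD RE4 vs RE4-GP 2TB,
# using the read/write wattages quoted above.
HOURS = 24 * 365 * 5   # five years of constant operation
PUE = 2.5              # Uptime Institute's average datacenter PUE
RATE = 0.105           # USD per kWh (US EIA average, July 2010)

def five_year(watts):
    """Return (drive kWh, kWh incl. cooling, total cost in USD)."""
    kwh = watts * HOURS / 1000
    total_kwh = kwh * PUE
    return kwh, total_kwh, total_kwh * RATE

re4 = five_year(10.7)  # standard RE4 at read/write
gp = five_year(6.8)    # RE4-GP at read/write

print(f"RE4:    {re4[0]:.2f} kWh, {re4[1]:.2f} kWh w/ cooling, ${re4[2]:.2f}")
print(f"RE4-GP: {gp[0]:.2f} kWh, {gp[1]:.2f} kWh w/ cooling, ${gp[2]:.2f}")
print(f"Savings: ${re4[2] - gp[2]:.2f}")  # about $44.84 over five years
```

(The last cent depends on rounding order, but it matches the figures above.)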

Real-world, that difference is highly variable. If your cost per kWh is higher than 10.5 cents, it'll be a lot bigger. If your datacenter is a lot more efficient than that, it'll be a lot smaller. If your regular RE4s can spend more time at idle than your GPs can because they read the data off faster, the difference will be smaller; but if your drives spent the majority of the time idle instead, the difference will be bigger.

There are other costs to consider here - across a large enough deployment of drives, GP vs regular can make a (small) difference in how much cooling capacity your datacenter needs in the first place. More likely, you may be able to pack more drives in a given chassis design, and/or utilize smaller power supplies in the servers. You may or may not see reduced drive failures due to temperatures (Google's data indicates it's not nearly as much of an issue as once thought, at least only at the temperatures you should see in a datacenter).

That said, a $180 premium per drive is a bit extreme. A quick look on Froogle shows a difference of under $20 per drive when comparing RE4 to RE4-GP at 2TB. I'd hope that anyone comparing RE4-GP drives to consumer drives of any speed, is doing so for home use and not a datacenter. I don't bother paying for 'enterprise' drives for my home setup, but I wouldn't ever tell a customer who relies on the data on an array for their business to use consumer drives instead.

necrobobsledder
Mar 21, 2005
Lay down your soul to the gods rock 'n roll
Nap Ghost

IOwnCalculus posted:

That said, a $180 premium per drive is a bit extreme. A quick look on Froogle shows a difference of under $20 per drive when comparing RE4 to RE4-GP at 2TB. I'd hope that anyone comparing RE4-GP drives to consumer drives of any speed, is doing so for home use and not a datacenter. I don't bother paying for 'enterprise' drives for my home setup, but I wouldn't ever tell a customer who relies on the data on an array for their business to use consumer drives instead.
I'd rather not be buying the storage equivalent of Monster cables (even if it's not my money) if I could help it, but without solid data instead of manufacturer's specs we take as gold (and thus become subject to marketing spin and "lying with statistics" as commonplace in modern business) we can't really do much but sigh and pay for the "supposed" best and to make certain assumptions, can we? Nobody got fired for buying IBM, and nobody got fired for paying for "enterprise" drives in their datacenter even if the failure rate might be identical in practice. Funny, I don't consider 7200RPM drives enterprise anyway, but that's out of scope of this thread.

EngineerJoe
Aug 8, 2004
-=whore=-



necrobobsledder posted:

I'd rather not be buying the storage equivalent of Monster cables (even if it's not my money) if I could help it, but without solid data instead of manufacturer's specs we take as gold (and thus become subject to marketing spin and "lying with statistics" as commonplace in modern business) we can't really do much but sigh and pay for the "supposed" best and to make certain assumptions, can we? Nobody got fired for buying IBM, and nobody got fired for paying for "enterprise" drives in their datacenter even if the failure rate might be identical in practice. Funny, I don't consider 7200RPM drives enterprise anyway, but that's out of scope of this thread.

I'm sure you'd get fired if you bought Monster cables for a recording studio though.

Methylethylaldehyde
Oct 23, 2004

BAKA BAKA

EngineerJoe posted:

I'm sure you'd get fired if you bought Monster cables for a recording studio though.

Does monster even make shielded studio cables?

IOwnCalculus
Apr 2, 2003





necrobobsledder posted:

I'd rather not be buying the storage equivalent of Monster cables (even if it's not my money) if I could help it, but without solid data instead of manufacturer's specs we take as gold (and thus become subject to marketing spin and "lying with statistics" as commonplace in modern business) we can't really do much but sigh and pay for the "supposed" best and to make certain assumptions, can we? Nobody got fired for buying IBM, and nobody got fired for paying for "enterprise" drives in their datacenter even if the failure rate might be identical in practice. Funny, I don't consider 7200RPM drives enterprise anyway, but that's out of scope of this thread.

Oh, I agree with you big time here across the board. In my mind, if you're paying for datacenter space, and you're using 7200RPM drives, you should hopefully only be doing so for a large, low-performance array with enough redundancy built in on the array level that it doesn't matter if you use an enterprise drive or not...but what you can get away with in real life and what you can get away with telling a customer to do are often two different things.

Now, for home users / people comparing consumer green drives versus consumer 7200RPM drives...I'd bet the power savings still sway in favor of green drives, though you do need to make sure you're not pairing them with a hardware RAID controller. I'd also argue that for a home setup, a hardware RAID controller is way overkill anyway. I'm more than happy with the performance I get out of Linux md-raid and 5400RPM SATA drives.

EngineerJoe
Aug 8, 2004
-=whore=-



Methylethylaldehyde posted:

Does monster even make shielded studio cables?

Why would they bother when they do pretty well selling nitrogen enriched cables with electron alignment technology!

adorai
Nov 2, 2002

10/27/04 Never forget
Grimey Drawer

IOwnCalculus posted:

Oh, I agree with you big time here across the board. In my mind, if you're paying for datacenter space, and you're using 7200RPM drives, you should hopefully only be doing so for a large, low-performance array with enough redundancy built in on the array level that it doesn't matter if you use an enterprise drive or not...but what you can get away with in real life and what you can get away with telling a customer to do are often two different things.
Sun (now Oracle) has been selling SANs with 7200 RPM drives for quite some time, and their performance is on par with the rest of the storage world. Throw enough cache at an array and spindle speed becomes relatively unimportant; with cheap enough drives it makes sense to add additional parity and hot spares.

FISHMANPET
Mar 3, 2007

Sweet 'N Sour
Can't
Melt
Steel Beams

adorai posted:

Sun (now oracle) has been selling SANs with 7200 RPM drives for quite some time, and their performance is on par with the rest of the storage world. Throw enough cache at an array, and spindle speed becomes relatively unimportant, and with cheap enough drives it makes sense to add additional parity and hot spares.

The thumper/thor also runs 7200 RPM drives. If Oracle gave a poo poo about Sun's hardware business, I bet they'd sell one with 5400 RPM drives too.

MrMoo
Sep 14, 2000

With 5400rpm you're looking at 1.8-2.5" form factors; they could probably ram 200 disks into one server for one amazing Super Thumper.

FISHMANPET
Mar 3, 2007

Sweet 'N Sour
Can't
Melt
Steel Beams
Welp, just got an email from Microcenter. I can preorder a 3TB Western Digital Caviar Green drive for $239.99 (regularly $309.99).

So it begins.

Falcon2001
Oct 10, 2004

Eat your hamburgers, Apollo.
Pillbug
Out of curiosity, can anyone recommend an opensolaris supported motherboard? Most of the ones I can find are out of manufacturing now; has anyone built a system lately and could recommend a base?

Specifically one for running an opensolaris based NAS system.

Falcon2001 fucked around with this message at 17:10 on Oct 19, 2010

Wanderer89
Oct 12, 2009

Falcon2001 posted:

Out of curiosity, can anyone recommend an opensolaris supported motherboard? Most of the ones I can find are out of manufacturing now; has anyone built a system lately and could recommend a base?

Specifically one for running an opensolaris based NAS system.

I just built one off of a 785g chipset (AM2) Asus board. The integrated ati 4200hd graphics don't work, but the basic vga driver suffices for my headless setup.

Have b134 running powering a minecraft server and a 6x1tb raidz :)

devilmouse
Mar 26, 2004

It's just like real life.

Falcon2001 posted:

Out of curiosity, can anyone recommend an opensolaris supported motherboard? Most of the ones I can find are out of manufacturing now; has anyone built a system lately and could recommend a base?

Specifically one for running an opensolaris based NAS system.

For Intel: For a small board, SUPERMICRO MBD-X8SIL-F http://www.newegg.com/Product/Product.aspx?Item=N82E16813182211 or for a full-size SUPERMICRO MBD-X8SIA-F http://www.newegg.com/Product/Product.aspx?Item=N82E16813182235

FISHMANPET
Mar 3, 2007

Sweet 'N Sour
Can't
Melt
Steel Beams
Yeah, you really can't go wrong with an Intel chipset, and you can't really go wrong with an AMD chipset either. You can putz around the HCL, find motherboards that work, and then find boards with the same chipset, but I think you'll be pretty golden with just about anything Intel or AMD makes. Solaris likes Nvidia graphics, but that's only really an issue if you want to use it as a desktop. A temp card can be used for the install, and I've got a 2MB PCI ATI Radeon card that I can use for the text console on my server.
E:

devilmouse posted:

For Intel: For a small board, SUPERMICRO MBD-X8SIL-F http://www.newegg.com/Product/Product.aspx?Item=N82E16813182211 or for a full-size SUPERMICRO MBD-X8SIA-F http://www.newegg.com/Product/Product.aspx?Item=N82E16813182235

That small board looks awesome. Dual Intel NICs (which are $40 apiece) and remote management!

FISHMANPET fucked around with this message at 17:43 on Oct 19, 2010

Telex
Feb 11, 2003

FISHMANPET posted:

Welp, just got an email from Microcenter. I can preorder a 3TB Western Digital Caviar Green drive for $239.99 (regularly $309.99).

So it begins.

Maybe it begins in a year. If 2TB is $109, and thus 4TB is $220, the price point still seems a bit off unless you're crunched for physical space that badly.

Falcon2001
Oct 10, 2004

Eat your hamburgers, Apollo.
Pillbug

FISHMANPET posted:

Yeah, you really can't go wrong with an Intel chipset. You also can't really go wrong with an AMD chipset either. You can putz around the HCL, find motherboards that work, and then find the same chipset. But I think you'll be pretty golden with just about anything Intel or AMD makes. Solaris likes Nvidia graphics, but that's only really an issue if you want to use it as a desktop. A temp card can be used to install, and I've got a 2mb PCI ATI Radeon card that I can use for the text console on my server.
E:


That small board looks awesome. Dual Intel Nics (which are $40 a piece) and remote management!

Is there any real reason to run dual nics if I'm connecting to a single LAN, just out of curiosity? I'm planning on running Nexenta, since it looks pretty cool and is solaris-based.

necrobobsledder
Mar 21, 2005
Lay down your soul to the gods rock 'n roll
Nap Ghost
NIC teaming is one benefit of multiple NICs on a single network segment, and it lets you get above, say, gigabit speeds without requiring you to buy a lot more expensive equipment. I'm planning on using it for my setups later on instead of running 10G Ethernet everywhere, and could use a card like this for my setup. After all, 10G Ethernet is really expensive and overkill for home use (even if you're running tons of stuff). Your switch (and also your OS) will need to support 802.3ad link aggregation (LACP), Cisco's EtherChannel, or some other vendor scheme for bonding multiple NICs into one logical interface.
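One caveat worth spelling out on the teaming point above: 802.3ad-style aggregation balances traffic per flow, not per packet, typically by hashing addresses onto a link. So a single SMB or NFS stream still tops out at one link's speed; you only beat gigabit in aggregate across several simultaneous clients. A toy sketch of the idea (the hash here is made up; real hardware XORs MAC/IP/port bits):

```python
# Toy model of per-flow link selection in link aggregation.
# A flow (src, dst pair) always hashes to the same physical link,
# so one flow can never use more than one link's bandwidth.
def pick_link(src: str, dst: str, n_links: int) -> int:
    # Made-up hash; real switches XOR MAC/IP/port bits.
    return hash((src, dst)) % n_links

N_LINKS = 2
flows = [("nas", "htpc"), ("nas", "desktop"), ("nas", "laptop")]
assignment = {f: pick_link(*f, N_LINKS) for f in flows}
print(assignment)

# The same flow always lands on the same link:
assert pick_link("nas", "htpc", N_LINKS) == assignment[("nas", "htpc")]
```

Several flows may still collide onto one link, which is why aggregate throughput only approaches n-times-gigabit with many concurrent clients.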

Falcon2001
Oct 10, 2004

Eat your hamburgers, Apollo.
Pillbug

necrobobsledder posted:

NIC teaming is one benefit of multiple NICs on a single network segment, and it lets you get above, say, gigabit speeds without requiring you to buy a lot more expensive equipment. I'm planning on using it for my setups later on instead of running 10g ethernet everywhere and could use a card like this for my setup. After all, 10g ethernet is really expensive and overkill for home use (even if you're running tons of stuff). Your switch (and also OS) will need to support either the 802.3 standard, Cisco's Etherchannel, or some other random standard to support multiple NICs demuxed to one logical network node.

Ah, I had entirely forgotten about NIC teaming. Unfortunately I'm running a DD-WRT'd Buffalo router as my gigabit switch right now, with the option of a Linksys gigabit switch to expand, and I doubt either of those will support 802.3ad or whatever. :(

FISHMANPET
Mar 3, 2007

Sweet 'N Sour
Can't
Melt
Steel Beams

Falcon2001 posted:

Is there any real reason to run dual nics if I'm connecting to a single LAN, just out of curiosity? I'm planning on running Nexenta, since it looks pretty cool and is solaris-based.

I wish my OpenSolaris server had dual NICs so I could run my VM off of one and the actual OS on another. I've had terrible luck with both Xen and VirtualBox and my NICs, where after a week or two of uptime the connection will drop out for anywhere from a few seconds to a few minutes. It makes streaming things and working on the machine pretty unbearable.

PopeOnARope
Jul 23, 2007

Hey! Quit touching my junk!
So, this is frustrating. With the beta of WHS v2 (Vail), I can't carry my data forward, and on top of that, the build put out on 8/16 is pretty crippled. Crippled how, you ask? It eats 445GB on a 4 x 1TB array with no data on it.

DLCinferno
Feb 22, 2003

Happy

FISHMANPET posted:

I wish my OpenSolaris server had dual NICs so I could run my VM off of one and the actual OS on another. I've had terrible luck with both Xen and VirtualBox and my NICs where after a week or two of uptime the connection will stop for a few seconds up to a few minutes. Makes streaming things and working on the machine pretty unbearable.

Why not try ESXi?

adorai
Nov 2, 2002

10/27/04 Never forget
Grimey Drawer

FISHMANPET posted:

I wish my OpenSolaris server had dual NICs so I could run my VM off of one and the actual OS on another. I've had terrible luck with both Xen and VirtualBox and my NICs where after a week or two of uptime the connection will stop for a few seconds up to a few minutes. Makes streaming things and working on the machine pretty unbearable.
I don't think dual NICs are the answer here, it's probably a driver issue.

uXs
May 3, 2005

Mark it zero!
I need some advice: my hard disks are filling up quite rapidly, and I'll need some extra space pretty soon.

Normally I'd just add a new one, but I'm out of sata ports. The smallest disks I have are 500GB, and I don't want to swap one of those out either.

So what would be the most cost effective option here? I need them primarily for movies and tv shows. I stream those over the network to my PS3 a lot, so they have to be able to keep up with that. I only have one PC, I don't need to be able to share the data with other computers.

As far as I know, I could:
-buy a new motherboard with more sata ports
-some kind of external (directly connected) hard drive, but I have no clue what the best option is here. I have the impression that usb would be way too slow for movies. My pc has a hard time keeping up with video above 720p as it is.
-some kind of nas, but I'm really in the dark here about what's what.

Any ideas?

H110Hawk
Dec 28, 2006
Do you have any free PCI slots? (32-bit, PCI-X, or PCIe.) Pick up a SATA card; the non-RAID ones are pretty cheap. Steer clear of the $20 Rosewills if you want durability, but otherwise you should be fine.

Farmer Crack-Ass
Jan 2, 2001

this is me posting irl
I don't think a USB-attached external hard drive would be too slow to stream an HD movie. Your computer currently not being able to keep up very well isn't a bottleneck at the hard drive, it's almost certainly a bottleneck at the CPU.

Wanderer89
Oct 12, 2009
Just wanted to get an update to any concerned raidz home users....

I did a complete overhaul on my opensolaris homeserver a month or two ago, and while it's been great at doing its job of running minecraft servers (what... java awesomeness across multi-platforms? who would've known we'd all be using it for this) ... and awesome performance via smb shared 6x1tb raidz, I was getting sick and tired of nothing being up to date, and having to cross my fingers whenever I build anything current for my b134 release.

So I updated to OpenIndiana last night (oi_147) and it has been wonderful. Painless in-place upgrade of my b134 install: it just created a new boot environment, and everything "just worked" when I booted up into oi_147, just as I'd left it. If you have any questions, feel free to ask.

FISHMANPET
Mar 3, 2007

Sweet 'N Sour
Can't
Melt
Steel Beams

DLCinferno posted:

Why not try ESXi?

I've considered it, but I don't want to lose my mirrored boot drives. Also, I think the RDM stuff is a pain in the rear end. Having OpenSolaris as my hypervisor makes the most sense because that's where all the heavy lifting is done, and I don't exactly have a workhorse of a CPU.

adorai posted:

I don't think dual NICs are the answer here, it's probably a driver issue.

I've got an Intel NIC, I'm not sure what kind of driver issue I would be running into. All the Intel drivers are built into the kernel, right?

Wanderer89 posted:

Just wanted to get an update to any concerned raidz home users....

I did a complete overhaul on my opensolaris homeserver a month or two ago, and while it's been great at doing its job of running minecraft servers (what... java awesomeness across multi-platforms? who would've known we'd all be using it for this) ... and awesome performance via smb shared 6x1tb raidz, I was getting sick and tired of nothing being up to date, and having to cross my fingers whenever I build anything current for my b134 release.

So I updated to openindiana last night (oi_147) and it has been wonderful. Painless in-place-upgrade of my b134 install, just created a new boot environment, and everything "just worked" when I booted up into oi_147, just as I'd left it. If you have any questions, feel free to ask.

I made the decision last night at 3AM when I couldn't sleep to do this in the future. I just moved so everything is complete chaos (though the first thing the lady had me set up was the server so we could watch some teev :3:), but I'm glad to hear it went well.

alo
May 1, 2005


FISHMANPET posted:

I made the decision last night at 3AM when I couldn't sleep to do this in the future. I just moved so everything is complete chaos (though the first thing the lady had me setup was the server so we could watch some teev :3:) but it's glad to hear it went well.

Correct me if I'm wrong, but isn't the XVM stuff removed from OpenIndiana since Oracle is no longer supporting it?

See also: http://opensolaris.org/jive/thread.jspa?threadID=134657

I have the same setup and will be moving to ESXi with a separate storage server in the near future. I'm not too crazy about having two machines where I used to have one. (Unless anyone has any experiences with passing drives directly to a VM in ESXi and the performance implications).

FISHMANPET
Mar 3, 2007

Sweet 'N Sour
Can't
Melt
Steel Beams

alo posted:

Correct me if I'm wrong, but isn't the XVM stuff removed from OpenIndiana since Oracle is no longer supporting it?

See also: http://opensolaris.org/jive/thread.jspa?threadID=134657

I have the same setup and will be moving to ESXi with a separate storage server in the near future. I'm not too crazy about having two machines where I used to have one. (Unless anyone has any experiences with passing drives directly to a VM in ESXi and the performance implications).

I was using Xen in my first attempt, but now I'm using virtualbox and I have the same problems.

adorai
Nov 2, 2002

10/27/04 Never forget
Grimey Drawer

alo posted:

I have the same setup and will be moving to ESXi with a separate storage server in the near future. I'm not too crazy about having two machines where I used to have one. (Unless anyone has any experiences with passing drives directly to a VM in ESXi and the performance implications).
There are two ways to do it: pass the disk controller through to the VM with IOMMU (only on newer high-end motherboards), or use RDM. Either one should have basically no performance penalty.

wang souffle
Apr 26, 2002
This had to have been discussed sometime in the past, but has anyone played around with btrfs at all? I'm not the biggest fan of running Solaris at home, and this seems to be the best reasonable alternative to ZFS in the near future.

FISHMANPET
Mar 3, 2007

Sweet 'N Sour
Can't
Melt
Steel Beams

wang souffle posted:

This had to have been discussed sometime in the past, but has anyone played around with btrfs at all? I'm not the biggest fan of running Solaris at home, and this seems to be the best reasonable alternative to ZFS in the near future.

From what I've heard it's just not ready yet. It only does mirroring, not RAID5, which is the biggest deal breaker for me.

There's also a kernel-level implementation of ZFS on Linux. Currently it only provides zvols, so you'd have to create an EXT4 filesystem on top of your zpool, and you'd lose some of the cool ZFS features.

Methylethylaldehyde
Oct 23, 2004

BAKA BAKA
Between Vbox, iSCSI, CIFS and NFS, there isn't really any reason not to use OpenIndiana for your media storage box, as long as you're building it with modern parts. Basically anything new has the virtualization extensions that make Vbox run fast enough to do basically whatever you need it to, and it's poo poo easy to get ZFS set up.

ufarn
May 30, 2009
I have a Western Digital My Book World II (something something long name), and I don't get the back-up.

Previously, I had it back up some folders. I subsequently removed some of the folders to be backed up.

It seems to back up by adding all new files and folders to the same back-up folder. My problem with this is that it never seems to delete anything. Sure, I can see the advantage of having it back up my photo collection etc., but I also just want a more general back-up for my computer as-is. The downside to this is that the back-up software just keeps incrementally adding files and folders, even as some are moved and deleted. This creates copious amounts of bloat and a messy back-up that doesn't even represent how my computer looks.

Is this the standard way NAS HDDs back up in their own incremental way? I see why it can be great as one way of backing up, but it's not doing me a lot of good as a one-trick pony back-up.

Is there any way I can instruct the software to synchronize instead of just adding new files and folders incrementally disregarding anything else that happens? Just like you would with something like Dropbox and bvckup.
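For the record, what ufarn describes wanting is usually called a mirror (one-way sync) rather than an incremental backup: copy new and changed files to the backup, then prune whatever the source no longer has. Whether the WD software exposes that mode I can't say, but the logic itself is simple. A bare-bones sketch (hypothetical paths, no error handling, assumes no file/directory name collisions):

```python
import filecmp
import shutil
from pathlib import Path

def mirror(src: Path, dst: Path) -> None:
    """Make dst an exact mirror of src: copy new or changed files,
    then remove anything in dst that src no longer has."""
    dst.mkdir(parents=True, exist_ok=True)
    src_names = {p.name for p in src.iterdir()}
    # Copy new or changed entries from src into dst.
    for entry in src.iterdir():
        target = dst / entry.name
        if entry.is_dir():
            mirror(entry, target)
        elif not target.exists() or not filecmp.cmp(entry, target, shallow=True):
            shutil.copy2(entry, target)
    # Prune entries that exist only on the backup side.
    for entry in dst.iterdir():
        if entry.name not in src_names:
            if entry.is_dir():
                shutil.rmtree(entry)
            else:
                entry.unlink()
```

Real tools do exactly this with far more care around errors and metadata: rsync with --delete, or robocopy with /MIR on Windows.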


Ziploc
Sep 19, 2006
MX-5
Cross posting from the Windows thread:

Is the built-in Windows 7 backup utility good for large backups? I need to back up ~1.5 terabytes to a 2TB external eSATA drive. Windows seems to be taking its sweet old time deciding what to put on there. The external hard drive seems to be doing very little work.

Not that I'm impatient. But it almost seems like it would be faster if I just dragged and dropped the contents of each drive.

  • Reply