Combat Pretzel
Jun 23, 2004

No, seriously... what kurds?!
I'm having a big argument with Peter Bright from Ars Technica on their forums. He's trying to convince me that on a block-based filesystem, doing filesystem-level checksumming on a per-file basis (i.e. one checksum for the whole file) is a good thing, and that RAID resiliency depends on how the filesystem's clusters/blocks are laid out on the logical disk rather than on the actual array doing the proper parity calculations.
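
To illustrate what I'm on about (throwaway Python, block size and hash choice picked arbitrarily): with per-block checksums you can point at the exact block that went bad and repair it from a mirror or parity, whereas a single whole-file checksum only tells you the file is toast.

code:

import hashlib

BLOCK_SIZE = 4096  # arbitrary block size for illustration

def block_checksums(data: bytes) -> list[str]:
    """One SHA-256 per block -- a corrupt block can be pinpointed and repaired."""
    return [hashlib.sha256(data[i:i + BLOCK_SIZE]).hexdigest()
            for i in range(0, len(data), BLOCK_SIZE)]

def file_checksum(data: bytes) -> str:
    """One SHA-256 for the whole file -- you only learn *that* it is corrupt."""
    return hashlib.sha256(data).hexdigest()

def find_bad_blocks(data: bytes, expected: list[str]) -> list[int]:
    """Compare per-block checksums; return indices of blocks that need repair."""
    return [i for i, cs in enumerate(block_checksums(data)) if cs != expected[i]]

# Example: flip one byte in block 2 of a five-block file.
original = bytes(5 * BLOCK_SIZE)
expected = block_checksums(original)
damaged = bytearray(original)
damaged[2 * BLOCK_SIZE] ^= 0xFF

assert file_checksum(bytes(damaged)) != file_checksum(original)  # "file is bad", nothing more
print(find_bad_blocks(bytes(damaged), expected))                 # [2] -- this block can be fixed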

:psyduck:

Combat Pretzel
Jun 23, 2004

No, seriously... what kurds?!
Well, nothing. What does a filesystem have to do with a RAID array's operation?

Combat Pretzel
Jun 23, 2004

No, seriously... what kurds?!

quote:

:words: Cairo, WinFS... ReFS? :words:
Not to spoil your ReFS bickering, but Cairo and WinFS have poo poo to do with ReFS.

Cairo and WinFS were supposed to be object stores. Not sure what the plan for Cairo was, being that old, but WinFS wasn't even a filesystem. It was a masked SQL Server instance with some ORM, running a bunch of database files hidden away in C:\System Volume Information. Two of the biggest reasons it failed were that they couldn't get FileStreams into SQL Server on time back then, and that WinFS' main APIs were pretty much .NET only. If something like WinRT had existed back then, it'd have made more sense, seeing how well .NET established itself for consumer application development (note: not really).

Anyway, ReFS is just a block filesystem like NTFS. Same upper-level APIs, different on-disk format. It's completely B+ tree based now and can do copy-on-write. As far as the other cut features go, I figure those that still make sense will slowly be added back. That's probably one of the reasons it won't ship to the client yet.
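
Rough sketch of the copy-on-write idea, in case anyone cares (toy Python, nothing to do with ReFS' actual on-disk structures): a live block is never overwritten in place; the new version goes to a free block and the metadata pointer gets flipped afterwards, so a crash mid-write still leaves the old consistent copy reachable.

code:

# Toy copy-on-write update: the "disk" is a dict of block number -> bytes,
# and a tiny "metadata" pointer records which block currently holds the data.
# Illustration of the general CoW idea only, not ReFS' actual layout.

disk: dict[int, bytes] = {0: b"old contents"}
metadata = {"live_block": 0}
next_free = 1

def cow_update(new_data: bytes) -> None:
    global next_free
    new_block = next_free
    next_free += 1
    disk[new_block] = new_data          # 1. write new data to an unused block
    metadata["live_block"] = new_block  # 2. atomically repoint the metadata
    # The old block is untouched; if step 2 never happens because of a crash,
    # readers still see the previous consistent version.

cow_update(b"new contents")
print(disk[metadata["live_block"]])  # b'new contents'
print(disk[0])                       # b'old contents' still intact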

Combat Pretzel
Jun 23, 2004

No, seriously... what kurds?!

szlevi posted:

You didn't spoil anything because you clearly didn't get the point.
What point? You were equating it to the efforts of Cairo and WinFS, which makes no sense, since it's not remotely comparable. ReFS isn't trying to be anything other than a block filesystem.

szlevi posted:

What you're talking about here is the last iteration before it's got whacked - but, as I wrote, WinFS started out well before that, as a fs (hence its name, y'know.)
The FS stood for Future Storage when it was unveiled, ages after Cairo. There was never any on-disk format other than NTFS in play, as far as is publicly known.

As for Cairo's OFS, there's no public information at all on how its development versions were implemented. It might just have been a similar layer on top of traditional NTFS as well.
--fake edit: If I am to believe Wikipedia, their object store attempts were almost all based on SQL Server since at least 1995. So yeah, NTFS all the way.

szlevi posted:

yeah, it's a rather crappy patched-up NTFS - but it's still the same idea except now they make even less effort to solve their inherent disadvantages when it comes to managing data; they just make some bulletpoint changes and that's it.
Unless I'm missing something, every filesystem actually in use works practically the same way: hierarchically organized, unstructured streams. If other semantics offered an advantage over that, some big solutions would have been established by now, if only in the open source space. Maybe I'm not seeing it, but the latest developments in filesystems, like ZFS and BTRFS, don't attempt to be different.

So I'm not sure what you mean by managing data. In what way does NTFS have disadvantages?

szlevi posted:

It will go through at least 2 more generations before it gets usable and by then who knows where the world will be...
Probably the same place it is now, just with flashier UIs. If Microsoft doesn't pull another Longhorn, two generations is pretty much just six years, since they've announced a three-year cadence for Windows development.

Combat Pretzel
Jun 23, 2004

No, seriously... what kurds?!

szlevi posted:

OK, let me reiterate again: the purpose of all these gymnastics was and still is to provide better management of the information.
The whole point in OFS (Cairo) was to introduce an object-based system and while WinFS was not a replacement in any way the aim of its development was focusing on the same thing (ie information management.)
In principle, WinFS would have been nice. It would, however, have failed thanks to the NIH syndrome most big software developers are prone to, either by them injecting their own non-inherited schemas or by refusing to share data via WinFS at all. And that's ignoring the fact that the public version of WinFS didn't ship with native APIs, which contributed to its failure.

szlevi posted:

Let me guess, you have not seen any object-based storage system... but if you have seen then please explain how they would work running NTFS and why that's not possible... :P
No, I haven't seen one, apparently. I still fail to see what advantage they'd have outside specific scenarios. Most data is still unstructured, or hard to structure. I'm happy to learn something, so give it a try. Maybe I don't understand object-based storage and how it differs from stream-based storage, because my definition of objects stems from programming.
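
Here's roughly how I'd frame the difference, so you can tell me where I'm wrong (made-up minimal interfaces in Python, not any real API): a stream-based filesystem gives you named byte streams and nothing else, while an object store keys opaque blobs by ID and lets you query structured metadata kept alongside them.

code:

# Two made-up, minimal interfaces just to contrast the access models.

# Stream-based (what every mainstream filesystem gives you):
with open("/tmp/report.doc", "wb") as f:   # a path names a byte stream
    f.write(b"...unstructured bytes...")   # any structure lives inside the app

# Object-based (hypothetical store: blobs keyed by ID, metadata is queryable):
store: dict[str, dict] = {}

def put(obj_id: str, data: bytes, **metadata) -> None:
    store[obj_id] = {"data": data, "meta": metadata}

def query(**criteria) -> list[str]:
    return [oid for oid, obj in store.items()
            if all(obj["meta"].get(k) == v for k, v in criteria.items())]

put("doc-1", b"...", author="szlevi", kind="report")
put("doc-2", b"...", author="pretzel", kind="report")
print(query(author="szlevi"))  # ['doc-1'] -- lookup by attributes, not by path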

szlevi posted:

You mean except the fact that NTFS lacks almost all the features of the other ones?
NTFS is about as capable as any other filesystem. I guess your reply relates to this:

szlevi posted:

Are you serious? No volume management, no RAID, no checksums whatsoever etc etc?
Integrated checksums aside, no filesystem does volume management, and consequently redundancy, with the exception of ZFS. The things you mention are usually implemented elsewhere in the storage stack, below the actual filesystem. You'll finally be getting it with Windows 8 and Storage Spaces, although not as extensive in the first release as one would want (no double or triple parity, no RAID10).
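
The single-parity math you do get at first is trivial, which is sort of the point (quick Python sketch, data made up): XOR across the data disks rebuilds any one failed disk, but a second simultaneous failure is unrecoverable, hence wanting double/triple parity.

code:

from functools import reduce

def xor_bytes(blocks: list[bytes]) -> bytes:
    """Bytewise XOR across equally sized blocks."""
    return reduce(lambda a, b: bytes(x ^ y for x, y in zip(a, b)), blocks)

# Three data disks plus one parity disk, RAID5-style single parity.
data = [b"AAAA", b"BBBB", b"CCCC"]
parity = xor_bytes(data)

# Disk 1 dies: XOR of the survivors plus parity gives the lost data back.
rebuilt = xor_bytes([data[0], data[2], parity])
assert rebuilt == data[1]

# Two disks dying at once is unrecoverable with a single parity block --
# hence the interest in double and triple parity schemes.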

szlevi posted:

I am not sure if I understand your PoV - these announcements have nothing to do with UI, my comment was about the fact that MS really lags behind others and very slow to roll out new features and new versions.
It was a generic comment as to where we'd be in the future. Given how Windows Server 8 is shaping up, Microsoft servers are also going to be flashy. But anyway, the main point was that ReFS and Storage Spaces are subject to Microsoft's release cadence, so you won't be seeing any worthwhile improvements until at least three years from then. With open source this is different: improvements in the Linux filesystems get rolled out as they come and are deemed usable or stable. In the time Microsoft needs to roll out a new Windows version, ZFS has seen tons of improvements and feature additions, available to anyone. I don't see Microsoft doing gradual updates with service packs.

Combat Pretzel
Jun 23, 2004

No, seriously... what kurds?!
So they've decided to implement volume management in BTRFS after all? That's hilarious, considering how much the Linux crowd shat on Sun in the past for doing this with ZFS.

Combat Pretzel
Jun 23, 2004

No, seriously... what kurds?!
So, RDMA on Intel, i.e. iWARP: are there actually products that support it? Google leads me to believe there are 10GbE products that do RDMA, but per Intel, none of their adapters support it.

Combat Pretzel
Jun 23, 2004

No, seriously... what kurds?!
This is probably the best thread for this type of question:

Are Intel network cards generally a pain in the rear end to tune? I have a Linux NAS here that runs iSCSI via LIO and Samba. The metric I've been paying the most attention to is single-threaded random 4K reads, and something curious happened. With my old Intel X520 cards, I managed to get 14MB/s on iSCSI and 12MB/s on Samba over the wire. Merely swapping the X520 cards for Mellanox ConnectX3 VPI cards took the iSCSI figure up to 34-40MB/s and Samba to 45-50MB/s. All these reads come straight from ZFS' ARC, so directly from memory.

Given all the values listed, I don't think 10GbE vs 40GbE should make a difference. No RDMA involved.
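
For reference, this is roughly how I measure it (crude single-threaded Python stand-in; the path, file size and duration are placeholders for whatever sits on the iSCSI or Samba mount): issue one 4K read at a random offset at a time and see what MB/s falls out.

code:

import os, random, time

# Rough single-threaded random 4K read benchmark -- path and duration are
# placeholders; point it at a file on the share or iSCSI volume being tested.
PATH = "/mnt/nas/testfile"   # hypothetical mount of the target
BLOCK = 4096
DURATION = 10.0              # seconds

fd = os.open(PATH, os.O_RDONLY)
blocks = os.fstat(fd).st_size // BLOCK

done = 0
start = time.monotonic()
while time.monotonic() - start < DURATION:
    offset = random.randrange(blocks) * BLOCK
    os.pread(fd, BLOCK, offset)   # one outstanding 4K read at a time
    done += 1
os.close(fd)

elapsed = time.monotonic() - start
print(f"{done * BLOCK / elapsed / 1e6:.1f} MB/s single-threaded random 4K reads")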

Combat Pretzel
Jun 23, 2004

No, seriously... what kurds?!
It's been a while since I ran iperf on the Intel cards, but I think it took multiple threads to get them to full rate, whereas the Mellanox cards reach 12GBit on a single thread; with four I get them to 28GBit. With large copies, the Intel card reached its maximum transmission speed easily on SMB3/Samba without drop-outs (at least until the 32GB of RAM on the NAS filled up with buffered writes). It's just that this specific scenario didn't do well on Intel, but flies on Mellanox.
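
By "multiple threads" I mean something like iperf's parallel streams, roughly this (crude Python stand-in; host, port and stream count are placeholders, and the other end just needs something accepting and draining the connections):

code:

import socket, threading, time

# Crude parallel-stream throughput test, a rough stand-in for running iperf
# with several streams. HOST/PORT/STREAMS/DURATION are placeholders.
HOST, PORT = "192.0.2.10", 5201
STREAMS, DURATION = 4, 10.0
CHUNK = memoryview(bytes(1 << 20))   # 1 MiB send buffer

sent = [0] * STREAMS

def stream(i: int) -> None:
    """Push data over one TCP connection for DURATION seconds."""
    with socket.create_connection((HOST, PORT)) as s:
        end = time.monotonic() + DURATION
        while time.monotonic() < end:
            s.sendall(CHUNK)
            sent[i] += len(CHUNK)

threads = [threading.Thread(target=stream, args=(i,)) for i in range(STREAMS)]
for t in threads: t.start()
for t in threads: t.join()

print(f"{sum(sent) * 8 / DURATION / 1e9:.1f} Gbit/s across {STREAMS} streams")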

I don't know what the offload situation is on Linux, or what the drivers generally do on either Windows or Linux. Maybe it bypasses TCP on a successful connection and implicitly does RDMA.

Also, in regard to RDMA, I wish Samba would be a little more transparent about its roadmap. It seems you have to piece everything together from outdated webpages, stale development branches and presentations hosted on third-party sites.
