|
I'm in the middle of a big argument with Peter Bright from Ars Technica on their forums, where he's trying to convince me that on a block-based filesystem, doing filesystem-level checksumming on a per-file basis is a good thing (i.e. one checksum for the whole file), and that RAID resiliency depends on how the filesystem's clusters/blocks are laid out on the logical disk rather than on the actual array doing proper parity calculations.
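For what it's worth, the difference is easy to demonstrate. Here's a minimal Python sketch (the 4 KiB block size and SHA-256 are arbitrary picks for illustration, not what any particular filesystem uses): with one checksum per block, a scrub can point at the exact block that went bad and ask the redundancy layer (mirror or parity) to rebuild just that block, while a single per-file checksum only tells you the file is broken somewhere.

```python
import hashlib

BLOCK_SIZE = 4096  # arbitrary block size for this sketch

def per_block_checksums(data: bytes) -> list:
    """One checksum per block, the granularity I'm arguing for."""
    return [hashlib.sha256(data[i:i + BLOCK_SIZE]).hexdigest()
            for i in range(0, len(data), BLOCK_SIZE)]

def find_corrupt_blocks(data: bytes, stored: list) -> list:
    """Return the indices of blocks that no longer match their stored checksum.
    These are exactly the blocks you'd ask the redundancy layer to rebuild."""
    return [i for i, cs in enumerate(per_block_checksums(data)) if cs != stored[i]]

# Toy demonstration: flip one byte in the second block of a three-block file.
original = bytes(3 * BLOCK_SIZE)
stored = per_block_checksums(original)
damaged = original[:BLOCK_SIZE] + b"\xff" + original[BLOCK_SIZE + 1:]
print(find_corrupt_blocks(damaged, stored))  # -> [1]

# A single whole-file checksum, by contrast, only gives a yes/no answer:
print(hashlib.sha256(damaged).hexdigest() == hashlib.sha256(original).hexdigest())  # -> False
```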
|
# ¿ Jan 24, 2012 17:56 |
|
Well, nothing. What does a filesystem have to do with a RAID array's operation?
|
# ¿ Jan 24, 2012 19:09 |
|
quote:Cairo, WinFS... ReFS?

Cairo and WinFS were supposed to be object stores. I'm not sure what the plan for Cairo was, being that old, but WinFS wasn't even a filesystem: it was a masked SQL Server instance with some ORM on top, running a bunch of database files hidden away in C:\System Volume Information. Two of the biggest reasons it failed were that they couldn't get FileStreams out in time in SQL Server back then, and that WinFS's main APIs were pretty much .NET-only. If something like WinRT had existed back then, it would have made more sense, given how well .NET established itself for consumer application development (note: not really).

Anyway, ReFS is just a block filesystem like NTFS: same upper-level APIs, different on-disk format. It's all B+ trees now and can do copy-on-write. As for the other cut features, I figure those that still make sense will slowly be added back, which is one of the reasons it won't ship to the client yet.
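To show what copy-on-write means in practice, here's a toy Python sketch (a dict stands in for the disk; this is not ReFS's actual on-disk B+ tree layout, just the general idea): an update never overwrites the live block in place, it writes the new data elsewhere and then repoints the metadata, so a crash in between leaves the old, consistent version intact.

```python
class CowFile:
    """Toy copy-on-write file: block ids map to data in a dict 'disk'."""

    def __init__(self, disk, block_ids):
        self.disk = disk              # block id -> bytes
        self.block_ids = block_ids    # the file's logical block list (the "metadata")
        self._next_free = max(disk) + 1 if disk else 0

    def write_block(self, index, data):
        # Step 1: write the new contents to a previously unused block.
        new_id = self._next_free
        self._next_free += 1
        self.disk[new_id] = data
        # Step 2: repoint the metadata at the new block.
        # A crash before this point leaves the old version untouched.
        self.block_ids[index] = new_id

disk = {0: b"old contents"}
f = CowFile(disk, [0])
f.write_block(0, b"new contents")
print(f.block_ids)  # [1] -- the file now points at the new block
print(disk[0])      # b'old contents' is still there until space is reclaimed
```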
|
# ¿ Jan 29, 2012 14:08 |
|
szlevi posted:You didn't spoil anything because you clearly didn't get the point.

szlevi posted:What you're talking about here is the last iteration before it's got whacked - but, as I wrote, WinFS started out well before that, as a fs (hence its name, y'know.)

As for Cairo's OFS, there's no public information at all on how its development versions were implemented. It might just have been a similar layer on top of traditional NTFS as well. --fake edit: If Wikipedia is to be believed, their object store attempts were almost all based on SQL Server since at least 1995. So yeah, NTFS all the way.

szlevi posted:yeah, it's a rather crappy patched-up NTFS - but it's still the same idea except now they make even less effort to solve their inherent disadvantages when it comes to managing data; they just make some bulletpoint changes and that's it.

I'm not sure what you mean by managing data. In what way does NTFS have disadvantages?

szlevi posted:It will go through at least 2 more generations before it gets usable and by then who knows where the world will be...
|
# ¿ Jan 30, 2012 00:17 |
|
szlevi posted:OK, let me reiterate again: the purpose of all these gymnastics was and still is to provide better management of the information.

szlevi posted:Let me guess, you have not seen any object-based storage system... but if you have seen then please explain how they would work running NTFS and why that's not possible... :P

szlevi posted:You mean except the fact that NTFS lacks almost all the features of the other ones?

szlevi posted:Are you serious? No volume management, no RAID, no checksums whatsoever etc etc?

szlevi posted:I am not sure if I understand your PoV - these announcements have nothing to do with UI, my comment was about the fact that MS really lags behind others and very slow to roll out new features and new versions.
|
# ¿ Feb 4, 2012 19:15 |
|
So they've decided to implement volume management in BTRFS after all? That's hilarious, considering how much the Linux crowd shat on Sun in the past for doing this with ZFS.
|
# ¿ Feb 5, 2012 00:23 |
|
So, RDMA on Intel, i.e. iWARP: are there actually products that support it? Google leads me to believe that there are 10GbE products that do RDMA, but according to Intel, none of their adapters support it.
|
# ¿ Jun 5, 2017 20:42 |
|
This is probably the best thread for this type of question: are Intel network cards generally a pain in the rear end to tune? I have a Linux NAS here that runs iSCSI via LIO and Samba. The metric I've been paying the most attention to is single-threaded random 4K reads, and something curious happened. With my old Intel X520 cards, I managed to get 14MB/s over iSCSI and 12MB/s over Samba across the wire. Merely swapping the X520 cards for Mellanox ConnectX-3 VPI cards pushed iSCSI up to 34-40MB/s and Samba to 45-50MB/s. All these reads come straight out of ZFS' ARC, so directly from memory. Given the values involved, I don't think 10GbE vs 40GbE should make a difference. No RDMA involved.
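For anyone who wants to reproduce the measurement, the workload is literally one outstanding 4 KiB read at a random offset at a time. A quick Python sketch of that access pattern is below (the mount path, file and sample count are made up; a tool like fio is the usual way to do this properly, and for honest over-the-wire numbers you'd also want to defeat the client's page cache between runs):

```python
import os
import random
import time

PATH = "/mnt/nas/testfile.bin"  # hypothetical file on the mounted share/LUN
BLOCK = 4096
SAMPLES = 5000

fd = os.open(PATH, os.O_RDONLY)
size = os.fstat(fd).st_size

# Pre-compute aligned random offsets so the timed loop does nothing but read.
offsets = [random.randrange(0, size // BLOCK) * BLOCK for _ in range(SAMPLES)]

start = time.monotonic()
for off in offsets:
    os.pread(fd, BLOCK, off)  # one outstanding 4 KiB read at a time
elapsed = time.monotonic() - start
os.close(fd)

print(f"{SAMPLES * BLOCK / elapsed / 1e6:.1f} MB/s single-threaded random 4K read")
```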
|
# ¿ Jul 22, 2017 02:48 |
|
It's been a while since I ran iperf on the Intel cards, but I think it took multiple threads to get them to full rate, whereas the Mellanox cards reach 12 GBit on a single thread; with four I get to 28 GBit. With large copies, the Intel card easily reached its maximum transmission speed on SMB3/Samba without drop-outs (at least until the 32GB of RAM on the NAS had filled up with buffered writes). It's just that this specific scenario did poorly on Intel and simply flies on Mellanox. I don't know what the offload situation is on Linux, or what the drivers generally do on either Windows or Linux. Maybe it bypasses TCP on a successful connection and implicitly does RDMA. Also, with regard to RDMA, I wish Samba would be a little more transparent about its roadmap. It seems you have to piece everything together from outdated webpages, stale development branches and presentations hosted on third-party sites.
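For reference, the single-stream vs. multi-stream comparison boils down to the following kind of test (plain TCP in Python; host, port and stream count are placeholders, and it assumes something on the receiving end accepts the connections and drains them, which is what an iperf server does; iperf itself is the right tool for real numbers, this just shows the one knob that changed between the two measurements):

```python
import socket
import threading
import time

HOST, PORT = "nas.local", 5201  # placeholder receiver (e.g. an iperf-style sink)
STREAMS = 4                     # set to 1 for the single-thread case
DURATION = 5                    # seconds per run
CHUNK = b"\0" * (1 << 20)       # 1 MiB send buffer

def sender(totals, idx):
    sock = socket.create_connection((HOST, PORT))
    sent = 0
    deadline = time.monotonic() + DURATION
    while time.monotonic() < deadline:
        sent += sock.send(CHUNK)  # count what the kernel actually accepted
    sock.close()
    totals[idx] = sent

totals = [0] * STREAMS
threads = [threading.Thread(target=sender, args=(totals, i)) for i in range(STREAMS)]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(f"{sum(totals) * 8 / DURATION / 1e9:.2f} Gbit/s over {STREAMS} stream(s)")
```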
|
# ¿ Jul 22, 2017 13:56 |