|
Anyone have any ideas on a (all things being relative) "slow" CIFS mount in Ubuntu 17.04/17.10?

Situation: several VMs accessing a separate FreeNAS array over a point-to-point 10GbE link. Works great otherwise; a Windows VM can read at 500-600MB/sec and write at 900MB/sec (weird, but whatever). Close-ish to line speed, at least for writes, so that pretty much eliminates the ESXi portion as far as I'm concerned. The Ubuntu VMs are using an fstab cifs mount with vers=3.02 specified, verified on the FreeNAS side with smbstatus; that's the protocol level they're connecting at. iperf tests to one of those VMs show 10GbE speeds, but an actual large file copy never goes above 220MB/sec.

I've got these additional parameters set in smb.conf:

socket options = TCP_NODELAY SO_RCVBUF=524288 SO_SNDBUF=524288 IPTOS_LOWDELAY

But that didn't seem to change anything. Anything else I could look into? I'm just baffled as to why it's 2x faster than gigabit but no better, and CPU/memory/disk isn't the issue either. All the Ubuntu VMs behave this way in my setup.
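For reference, the mount in question looks roughly like this (server name, share, mountpoint, and credentials file here are placeholders, not the actual setup):

```shell
# /etc/fstab entry (one line); vers=3.02 pins the SMB dialect as described above
//freenas.example/tank  /mnt/tank  cifs  vers=3.02,credentials=/root/.smbcred,_netdev  0  0

# mount it, then confirm the negotiated dialect with smbstatus on the FreeNAS side
sudo mount /mnt/tank
```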
|
# ¿ Jan 11, 2018 22:35 |
|
|
Keito posted:
Isn't SMB known for being slow? Why aren't you using NFS between *nix systems?

I also access those shares from Windows boxes, and mixing NFS/SMB operations on the same data is bad. SMB 3.x isn't bad at all, usually. Besides, the Windows VM is close enough to line speed, and it's using SMB for access.

Edit: For clarity, the FreeNAS share is an SMB share that the Ubuntu VMs connect to and add/modify/delete data on. Most of the clients are Windows and also need ACL-controlled access to the share, so NFS isn't an option for this share.

insularis fucked around with this message at 15:38 on Jan 12, 2018
# ¿ Jan 12, 2018 15:27 |
|
insularis posted:
Anyone have any ideas on a (all things being relative) "slow" CIFS mount in Ubuntu 17.04/17.10?

No one had anything, but I figured this out. It was related to the NFS queue depth in ESXi, even though it presented as an SMB issue; setting it below 64 on the ESXi host solved it. I had the same point-to-point network mounted as an NFS storage share, and the combined traffic from the vmkernel NIC and the guest-mounted storage was causing timeouts and huge latency.

insularis fucked around with this message at 05:55 on Jan 18, 2018
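For anyone hitting the same thing: the ESXi advanced setting involved is NFS.MaxQueueDepth, set from the host shell along these lines (the value shown is illustrative; per the above, anything below 64 cleared it up, and the host needs a reboot for the change to apply):

```shell
# on the ESXi host: cap the per-datastore NFS queue depth
esxcli system settings advanced set -o /NFS/MaxQueueDepth -i 32

# confirm the current value
esxcli system settings advanced list -o /NFS/MaxQueueDepth
```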
# ¿ Jan 18, 2018 05:51 |
|
General_Failure posted:
This addresses the other replies too. Re-reading it I can see I was a little unclear. It's only accessible on the home network. Using cloud based solutions would suck hairy balls. #1. I'm Australian, and have Australian Internet access. #2 Need something easy for other family members to get to. Preferably in a read only manner.

I use an nginx reverse proxy with user/pass auth to serve Calibre to the outside. The Calibre web server is really nice (can download or read in the browser, very pretty, good search), but yeah, it should not be directly exposed to the web.
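A minimal sketch of that kind of proxy, assuming calibre-server is listening on localhost:8080 and you've already made an htpasswd file; hostname, port, and paths are placeholders:

```nginx
# /etc/nginx/sites-available/calibre -- reverse proxy with basic auth
server {
    listen 443 ssl;
    server_name library.example.com;

    ssl_certificate     /etc/letsencrypt/live/library.example.com/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/library.example.com/privkey.pem;

    location / {
        auth_basic           "Library";
        auth_basic_user_file /etc/nginx/.htpasswd;   # created with htpasswd -c
        proxy_pass           http://127.0.0.1:8080;
        proxy_set_header     Host $host;
        proxy_set_header     X-Forwarded-For $proxy_add_x_forwarded_for;
    }
}
```

Basic auth over TLS keeps it simple enough for family members while keeping the Calibre server itself off the open web.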
|
# ¿ Jan 30, 2018 14:20 |
|
Sometimes, I think I'm pretty dumb. I just set up SaltStack on my home infrastructure (about 20 Linux VMs) and played around with centralized updates and maintenance because management was getting very tedious. This is ... so much easier. Are you kidding me? Why am I so dumb? Why did I wait so long to try this? Anyway, yeah, I'm pretty happy with Salt.
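For anyone curious, the day-to-day stuff that makes it feel this easy is one-liners from the salt master like these (the targeting patterns are illustrative):

```shell
# ping every minion to check connectivity
salt '*' test.ping

# refresh package metadata and upgrade all packages on every VM
salt '*' pkg.refresh_db
salt '*' pkg.upgrade

# or target a subset, e.g. only the web boxes
salt 'web*' cmd.run 'systemctl restart nginx'
```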
|
# ¿ Feb 1, 2018 20:20 |
|
LochNessMonster posted:
Maybe the tool that gives your certs the rating thinks that using subject alternative names makes it slightly less secure?

At the end of February, LE supports wildcards in production; right now it's staging-only. Soooooo looking forward to it.
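Once it lands, a wildcard issuance should look something like this with certbot (wildcards require the ACMEv2 endpoint and a DNS-01 challenge; the domain is a placeholder):

```shell
# wildcard certs need the DNS-01 challenge: a TXT record must be added
# when prompted (or automated via a DNS plugin for your provider)
certbot certonly \
  --manual --preferred-challenges dns \
  --server https://acme-v02.api.letsencrypt.org/directory \
  -d 'example.com' -d '*.example.com'
```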
|
# ¿ Feb 2, 2018 20:20 |
|
Methanar posted:
In the future if you suspect that you've got an IP address conflict, arpwatch is a good tool to find it

In a similar vein, I just ran nmap against my management VLAN today because I couldn't remember the management port for an XMPP server.
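That kind of sweep is a one-liner; for example (subnet and host addresses are placeholders):

```shell
# scan the management subnet for listening services and identify them;
# XMPP typically sits on 5222 (client) and 5269 (server-to-server)
nmap -sV -p 5222,5269 192.168.10.0/24

# or just sweep all TCP ports on the one host you suspect
nmap -p- 192.168.10.5
```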
|
# ¿ Feb 14, 2018 02:54 |
|
Could also try disabling the deeper C-states in the BIOS in case one of them is being mishandled. I had a laptop that did that when it picked up the C6 state.
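If fiddling in the BIOS isn't an option, roughly the same thing can be done from the kernel side on an Intel box; a sketch (the cap value here is an assumption, tune to taste):

```shell
# limit the deepest C-state the intel_idle driver will enter, e.g. C1:
# add   intel_idle.max_cstate=1   to GRUB_CMDLINE_LINUX in /etc/default/grub
sudo update-grub && sudo reboot

# after the reboot, check which C-states are still exposed
grep . /sys/devices/system/cpu/cpu0/cpuidle/state*/name
```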
|
# ¿ Feb 16, 2018 02:02 |
|
|
Is there a clever way to upgrade PHP and PHP-FPM from 7.2 to 7.3, but retain all your configuration modifications without redoing them all?
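Not a fully automatic one that I know of, but on Ubuntu with the ondrej/php PPA both versions' configs live side by side, so something like this works as a rough sketch (package names assume that PPA; not a tested recipe):

```shell
# install 7.3 alongside 7.2; configs land in /etc/php/7.3, 7.2's stay put
sudo apt install php7.3 php7.3-fpm php7.3-cli

# see what you changed in 7.2 relative to what 7.3 shipped with
diff -ru /etc/php/7.2 /etc/php/7.3 | less

# port the changes by hand (or copy whole files where the defaults
# didn't change between versions), then switch FPM over
sudo systemctl disable --now php7.2-fpm
sudo systemctl enable --now php7.3-fpm
```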
|
# ¿ Jun 11, 2019 23:34 |