Saurabh Nanda
2019-Feb-08 13:26 UTC
[Samba] 32 seconds vs 72 minutes -- expected performance difference?
## QUESTION

I am sharing a 120GB folder with lots of files via Samba on a LAN (1Gbps
connection).

1) Doing an `ls -lR` on the server (on this folder) takes ~32 seconds,
compared with **72 minutes** on the client. Is this difference in
performance expected (due to network and protocol overhead)?

2) While the client is executing an `ls -lR`, one smbd process on the
server uses about 30-40% of a single CPU (on a 4c/8t machine). Is this
much CPU load expected? (Also, the client ends up consuming all 1Gbps of
network bandwidth.)

Is Samba transferring all the files from the server to the client for this
operation?

## CONFIGURATION

Server and client are both on Ubuntu 18.04 and are using the stock
versions of the kernel, Samba, etc. The version being reported:

    # smbd --version
    Version 4.7.6-Ubuntu

smb.conf:

    [global]
    smb encrypt = required
    disable netbios = yes
    case sensitive = yes
    preserve case = yes
    short preserve case = yes
    workgroup = WORKGROUP
    server string = %h server (Samba, Ubuntu)
    wins support = no
    dns proxy = no
    interfaces = {{ internal_ip }}
    bind interfaces only = yes
    log file = /var/log/samba/log.%m
    max log size = 1000
    syslog = 0
    panic action = /usr/share/samba/panic-action %d
    server role = standalone server
    passdb backend = tdbsam
    obey pam restrictions = no
    unix password sync = no
    pam password change = no
    map to guest = never
    usershare allow guests = yes

    [myshare]
    path = /samba/uploaded-files
    browseable = no
    read only = no
    valid users = [redacted]

-- Saurabh.
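For reference, the two timings can be reproduced with something like the
following sketch (the server path is taken from the smb.conf above; the
client mount point, /home/myuser/shared, appears later in this thread):

    # on the server, directly against the shared directory
    time ls -lR /samba/uploaded-files > /dev/null

    # on the client, against the CIFS mount of the same share
    time ls -lR /home/myuser/shared > /dev/null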
Jeremy Allison
2019-Feb-08 23:51 UTC
[Samba] 32 seconds vs 72 minutes -- expected performance difference?
On Fri, Feb 08, 2019 at 06:56:48PM +0530, Saurabh Nanda via samba wrote:

> ## QUESTION
>
> I am sharing a 120GB folder with lots of files via Samba on a LAN (1Gbps
> connection).

Define "lots of files"? What does `ls | wc -l` say?
Saurabh Nanda
2019-Feb-09 01:52 UTC
[Samba] 32 seconds vs 72 minutes -- expected performance difference?
> Define "lots of files"? What does `ls | wc -l` say?

Number of files + directories:

    # find . | wc -l
    2651882

Number of files:

    # find . -type f | wc -l
    1057105

Number of directories:

    # find . -type d | wc -l
    1594777

-- Saurabh.
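Scaled against the 32-second and 72-minute timings from the first message,
these counts give a rough per-entry rate:

    # server: 2,651,882 entries / 32 s    ~= 82,900 entries/s
    # client: 2,651,882 entries / 4,320 s ~= 614 entries/s (~1.6 ms/entry)
    # the client-side rate suggests one or more network round trips per entry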
Aurélien Aptel
2019-Feb-11 09:35 UTC
[Samba] 32 seconds vs 72 minutes -- expected performance difference?
Saurabh Nanda via samba <samba at lists.samba.org> writes:

> ## QUESTION
>
> I am sharing a 120GB folder with lots of files via Samba on a LAN (1Gbps
> connection).
>
> 1) Doing an `ls -lR` on the server (on this folder) takes ~32 seconds,
> compared with **72 minutes** on the client. Is this difference in
> performance expected (due to network and protocol overhead)?

Unless you upload a network capture of you mounting and doing the ls -lR
on the client, it's hard to say what is really going on. I understand you
might not want to make it public... but if you do, you can make a capture
using:

    tcpdump -s 0 -w capture.pcap port 445 &
    tracepid=$!
    mount.cifs //foo/bar /mnt -o ....
    ls -lR /mnt
    # ..wait or kill ls..
    kill $tracepid

Note that you can probably kill tcpdump after 10 minutes or so; that
should provide enough data. If you end up doing it, the capture will
probably be quite large, so try compressing it as well.

> 2) While the client is executing an `ls -lR`, one smbd process on the
> server uses about 30-40% of a single CPU (on a 4c/8t machine). Is this
> much CPU load expected? (Also, the client ends up consuming all 1Gbps of
> network bandwidth.)
>
> Is samba transferring all the files from the server to the client for this
> operation?

No, it shouldn't.

> ## CONFIGURATION
>
> Server & client, both, are on Ubuntu 18.04 and are using the stock version
> of the kernel, samba, etc. The version being reported:

How are you mounting your share (which mount options)?

I can't comment on the server side, but with a very recent kernel version
like 4.20 or above (which at the moment is 5.0-rc6 :/) you will get
compounding, which basically bundles multiple SMB operations into one
packet and saves a lot of round trips as a result. Note that compounding
requires mounting with SMB2 or later; use the vers= mount option, cf. the
mount.cifs man page.

I suspect it won't solve the problem, but it's something.

-- 
Aurélien Aptel / SUSE Labs Samba Team
GPG: 1839 CB5F 9F5B FB9B AA97 8C99 03C8 A49B 521B D5D3
SUSE Linux GmbH, Maxfeldstraße 5, 90409 Nürnberg, Germany
GF: Felix Imendörffer, Jane Smithard, Graham Norton, HRB 21284 (AG Nürnberg)
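If a capture is made, one quick way to see which SMB2 operations dominate
it would be something like this (a sketch, assuming tshark from the
wireshark package is available):

    # tally the SMB2 command codes seen in the capture
    tshark -r capture.pcap -Y smb2 -T fields -e smb2.cmd | sort | uniq -c | sort -rn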
Saurabh Nanda
2019-Feb-14 12:04 UTC
[Samba] 32 seconds vs 72 minutes -- expected performance difference?
> Unless you upload a network capture of you mounting and doing the ls -lR
> on the client, it's hard to say what is really going on. I understand you
> might not want to make it public... but if you do

This is the last thing I'll try, after I've exhausted all the other options.

> How are you mounting your share (which mount options)?

Something weird is going on with the mount options. I hadn't used vers= in
my fstab earlier, and noticed the following in syslog:

    No dialect specified on mount. Default has changed to a more secure
    dialect, SMB2.1 or later (e.g. SMB3), from CIFS (SMB1). To use the less
    secure SMB1 dialect to access old servers which do not support SMB3 (or
    SMB2.1) specify vers=1.0 on mount.

Which implies that the server & client auto-negotiated a protocol version
greater than SMB2.1, right?

However, to be sure, I manually specified vers in fstab, but something
strange happened. While `man mount.cifs` claims that the following values
are allowed -- 1.0, 2.0, 2.1, 3.0, 3.1.1 (or 3.11) -- a few of them failed
with strange errors:

    # mount -t cifs -o rw,username=myuser,password=[REDACTED],uid=myuser,gid=myuser,vers=3.11 //[REDACTED]/uploaded_files /home/myuser/shared
    mount error(11): Resource temporarily unavailable
    Refer to the mount.cifs(8) manual page (e.g. man mount.cifs)

    # tail /var/log/syslog
    [...]
    Feb 14 12:57:04 prod-backoffice kernel: [105926.746067] CIFS VFS: failed to connect to IPC (rc=-11)
    Feb 14 12:57:04 prod-backoffice kernel: [105926.767443] CIFS VFS: session 0000000044187aeb has no tcon available for a dfs referral request
    Feb 14 12:57:04 prod-backoffice kernel: [105926.770039] CIFS VFS: cifs_mount failed w/return code = -11

    # mount -t cifs -o rw,username=myuser,password=[REDACTED],uid=myuser,gid=myuser,vers=3.1 //[REDACTED]/uploaded_files /home/myuser/shared
    mount error(22): Invalid argument
    Refer to the mount.cifs(8) manual page (e.g. man mount.cifs)

    # tail /var/log/syslog
    [...]
    Feb 14 12:58:54 prod-backoffice kernel: [106036.706942] CIFS VFS: Unknown vers= option specified: 3.1

    # mount -t cifs -o rw,username=myuser,password=[REDACTED],uid=myuser,gid=myuser,vers=3.0 //[REDACTED]/uploaded_files /home/myuser/shared
    [ finally worked! ]

It seems that vers=3.0 worked, but how do I confirm that the mount
ACTUALLY happened with vers=3.0?

Further, `mount` is showing a LOT of options that I did not specify, and I
do not fully understand their impact:

    //[REDACTED]/uploaded_files on /home/myuser/shared type cifs (rw,relatime,vers=3.0,cache=strict,username=vl,domain=,uid=1000,forceuid,gid=1000,forcegid,addr=95.216.67.179,file_mode=0755,dir_mode=0755,soft,nounix,serverino,mapposix,rsize=1048576,wsize=1048576,echo_interval=60,actimeo=1)

Namely: forceuid, forcegid, nounix, serverino, & mapposix. Could any of
these be causing the problems that I'm observing?

PS: vers=3.0 didn't fix the problem!

-- Saurabh.
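One way to confirm the negotiated dialect, assuming the cifs kernel module
is loaded: the kernel's mount table and the module's debug file both report
it.

    # shows the options the kernel actually applied, including vers=
    mount | grep cifs

    # per-connection details from the cifs module, including the SMB dialect
    cat /proc/fs/cifs/DebugData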
Saurabh Nanda
2019-Feb-18 11:38 UTC
[Samba] 32 seconds vs 72 minutes -- expected performance difference?
> (Also, the client ends up consuming all 1Gbps of network bandwidth.)

This statement is grossly incorrect. The client is not consuming 1Gbps,
but 1MBps -- I was confusing bits per second with bytes per second, it
seems.

I am replicating the scenario with NFS to see if it's an issue with Samba,
or something to do with the underlying network stack.

-- Saurabh.
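The unit conversion behind that correction, as a quick shell sanity check:

    # 1 Gbps link capacity expressed in bytes per second
    echo $((1000 * 1000 * 1000 / 8))   # 125000000, i.e. 125 MB/s
    # the observed ~1 MB/s is therefore ~8 Mbps, under 1% of the link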
Saurabh Nanda
2019-Feb-18 13:40 UTC
[Samba] 32 seconds vs 72 minutes -- expected performance difference?
> > (Also, the client ends up consuming all 1Gbps of network bandwidth.)
>
> This statement is grossly incorrect. The client is not consuming 1Gbps,
> but 1MBps -- I was confusing bits per second with bytes per second, it
> seems.
>
> I am replicating the scenario with NFS to see if it's an issue with Samba,
> or something to do with the underlying network stack.

Here's NFS vs Samba:

## Samba - uses about 1MBps throughout

    real    87m9.739s
    user    2m9.222s
    sys     17m21.828s

## NFS - uses about 1.5-2.0 MBps throughout

    real    34m3.902s
    user    0m55.798s
    sys     6m36.227s

I now have no clue what is going on. Is the **number** of files and
directories so large that this is kind of expected?

-- Saurabh.
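Scaled against the ~2.65 million directory entries counted earlier in the
thread, those wall-clock times work out roughly as follows:

    # Samba: 2,651,882 entries / 5,230 s ~= 507 entries/s   (~2.0 ms/entry)
    # NFS:   2,651,882 entries / 2,044 s ~= 1,298 entries/s (~0.8 ms/entry)
    # both look bound by per-entry metadata round trips rather than bandwidth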
Götz Reinicke
2019-Feb-22 08:04 UTC
[Samba] 32 seconds vs 72 minutes -- expected performance difference?
> Am 08.02.2019 um 14:26 schrieb Saurabh Nanda via samba <samba at lists.samba.org>:
>
> ## QUESTION
>
> I am sharing a 120GB folder with lots of files via Samba on a LAN (1Gbps
> connection).
>
> 1) Doing an `ls -lR` on the server (on this folder) takes ~32 seconds,
> compared with **72 minutes** on the client. Is this difference in
> performance expected (due to network and protocol overhead)?
>
> 2) While the client is executing an `ls -lR`, one smbd process on the
> server uses about 30-40% of a single CPU (on a 4c/8t machine). Is this
> much CPU load expected? (Also, the client ends up consuming all 1Gbps of
> network bandwidth.)
>
> Is samba transferring all the files from the server to the client for this
> operation?

Hi,

I've been faced with similar situations in the past - "lots" of small
files on a network share like SMB/NFS. Besides the software configuration,
a major factor for me was in most cases the type of storage hosting the
files, e.g. RAID level and spinning disk vs. SSD.

So, what kind of storage do you use?

Regards, Götz
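One way to check whether storage metadata latency is a factor would be to
re-run the server-side listing with cold caches (a sketch, using the share
path from the posted smb.conf; dropping caches requires root):

    # drop the page cache, dentries and inodes, then re-time the listing
    sync
    echo 3 > /proc/sys/vm/drop_caches
    time ls -lR /samba/uploaded-files > /dev/null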
Saurabh Nanda
2019-Feb-22 09:23 UTC
[Samba] 32 seconds vs 72 minutes -- expected performance difference?
> So, what kind of storage do you use?

NVMe/SSD on the server side.

-- Saurabh.