Linda W
2014-Feb-13 21:18 UTC
[Samba] samba w/vfs notify_fam perf hit of 25% on writes noticed...?
I was running some benchmarks and trying to tune speed between a Samba 3.6.22 server and Win7.

My primary benchmark uses 'dd' on Windows to read/write device files in my home directory, to eliminate effects of disk latency. So for reads, I transfer from h:/zero, and for writes I write to h:/null, where h: is my unix home dir. (For the other end of the transfer, I use /dev/null and /dev/zero, respectively, under cygwin.)

I don't remember seeing this during previous benchmarks, which is why I was more than a little curious.

When I'm doing the client writes (writing 4G to the server), I used to see 98-99% cpu usage from the smbd that was servicing my session. Now, I'm seeing about that amount from famd, and only the upper 80%'s for smbd.

Transfer-wise, I'm losing 25% on writes, dropping them down to around the same speed as, or slightly less than, reads. (Writes have normally been higher, because a writer with a large TCP window can get ahead of where the reader is, but a reader can never be ahead of what the writer has sent...)

So why is famd being pegged? My default write size is 8M, with a count of 512. Even if samba called famd once per write, that'd only be 512 times/second, which should be negligible, but it's acting more like it is getting called with each packet? Even that shouldn't be horrible, as I use a 9K packet size to cut packet overhead by 5/6ths...

So I'm wondering if anyone else has seen this, or has had experience with famd -- especially in the most recent 3.6.22 series?

Thanks...
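For concreteness, the benchmark pattern described above looks roughly like the following under cygwin. The block size and count are the values from the post; the /cygdrive/h path mapping and the exact invocation are assumptions, so adjust for your setup:

    # Write test (sketch): push 4G (8M x 512) from the client into
    # the server-side null device on the H: share.
    dd if=/dev/zero of=/cygdrive/h/null bs=8M count=512

    # Read test (sketch): pull from the server-side zero device and
    # discard the data locally.
    dd if=/cygdrive/h/zero of=/dev/null bs=8M count=512

Because both endpoints are device files, neither side's disk is ever touched, so the measurement isolates the network plus smbd's per-request overhead.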
Volker Lendecke
2014-Feb-17 10:34 UTC
[Samba] samba w/vfs notify_fam perf hit of 25% on writes noticed...?
Hi!

smbd should not call famd at all for the write code path. It might be that, due to an open notify (an open explorer window on that directory?), famd must send many change notifies to smbd and then to the client, but the write code path itself does not touch famd at all.

With best regards,

Volker Lendecke

--
SerNet GmbH, Bahnhofsallee 1b, 37081 Göttingen
phone: +49-551-370000-0, fax: +49-551-370000-9
AG Göttingen, HRB 2816, GF: Dr. Johannes Loxen
http://www.sernet.de, mailto:kontakt at sernet.de
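A quick way to test that theory is to take change notify out of the picture for the benchmarked share. Roughly, in smb.conf, something like the sketch below; this is a diagnostic configuration only, the [homes] share name is taken from the h: mapping described above, and "change notify" is the 3.6-era share-level parameter, so check your build's smb.conf(5) for availability:

    [homes]
        # Diagnostic only: stop answering client change-notify
        # requests on this share, so famd should never be consulted.
        change notify = no

If write throughput recovers with this set, the slowdown is coming from the notify path (something holding a watch on the directory, e.g. an open Explorer window) rather than from the write path itself.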