The "shared mem size" option defaults to 102400, and changing it is
discouraged:
The man page (1.9.18p10) says:
This parameter is only useful when Samba has been compiled
with FAST_SHARE_MODES. It specifies the size of the shared
memory (in bytes) to use between smbd processes. You should
never change this parameter unless you have studied the
source and know what you are doing. This parameter defaults
to 1024 multiplied by the setting of the maximum number of
open files in the file local.h in the Samba source code.
MAX_OPEN_FILES is normally set to 100, so this parameter
defaults to 102400 bytes.
and John Blair's excellent book indicates it is a hacker option.
As far as I can see there is no other documentation on this option.
However, I have a server (used for software distribution) serving 100+ NT
clients, and it clearly ran out of shared memory, e.g.:
Jan 7 14:55:37 cedar.brunel.ac.uk smbd[27486]: ERROR:set_share_mode shmops->shm_alloc fail!
Jan 7 14:55:37 cedar.brunel.ac.uk smbd[27486]: ERROR shm_alloc : alloc of 56 bytes failed
Jan 7 14:55:37 cedar.brunel.ac.uk smbd[27486]: ERROR:set_share_mode shmops->shm_alloc fail!
Jan 7 14:55:41 cedar.brunel.ac.uk smbd[28154]: PANIC ERROR:del_share_mode hash bucket 3 empty
Can I just up the amount of shared memory?
Is there any guideline for the amount consumed per oplock?
I am running under Solaris 2.5.1 if that helps.
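For what it's worth, if simply raising the limit is safe, I assume the
setting would go in the [global] section of smb.conf, something like this
(the value 409600, i.e. 4x the default, is only a guess on my part, not a
tested recommendation):

```
[global]
   ; shared mem size is in bytes; default is 1024 * MAX_OPEN_FILES (100)
   shared mem size = 409600
```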
Thanks,
--
-----------------------------------------------------------------------------
| Peter Polkinghorne, Computer Centre, Brunel University, Uxbridge, UB8 3PH,|
| Peter.Polkinghorne@brunel.ac.uk +44 1895 274000 x2561 UK |
-----------------------------------------------------------------------------