Hello,
I work for a computer manufacturer and we are using Linux/Samba in our
production as file servers for our end-user software installation. We have
done this now for over a year but recently we needed to re-engineer the
system to accommodate the larger software installations. Here are the
problems.
Bandwidth Issues:
I am using Samba to provide connectivity to DOS clients. This environment is
very data intensive and I am having bandwidth problems. In my first setup I
have two hosts with 6 clients attached (Figure 1). I use a ping-pong
approach to load balancing: when a client attaches to one share, it toggles
a flag to point the next client at the other share. This approach worked
well until I interconnected more than one set of "clusters" (Figure 2). The
performance of the "Slave" drops drastically when the interconnection is
made.
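For reference, the toggle logic above can be sketched as a small shell
helper. This is a hypothetical sketch, not my actual implementation: the
flag-file path and the share names "master"/"slave" are assumptions.

```shell
# next_share: print which share the next client should use, then flip
# the flag file so the following client is pointed at the other share.
# Hypothetical sketch -- flag path and share names are assumptions.
next_share() {
    flag="$1"
    # Default to "master" when the flag file does not exist yet.
    next=$(cat "$flag" 2>/dev/null || echo master)
    if [ "$next" = master ]; then
        echo slave > "$flag"
    else
        echo master > "$flag"
    fi
    echo "$next"
}

# Example: next_share /var/run/pingpong.flag
```

As noted below, a weakness of this scheme is that a reset leaves the flag
stale, so the distribution drifts toward one share.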
Here is my setup:
An isolated cluster of a Master, Slave and 6 DOS clients. Typical transfer
rate 130 MB/min per client as measured by Ghost.
"Master"
* dhcp server on eth0
* samba server only installed
* two shares
* Domain logon
"Slave"
* samba server and client
* static ip on eth0
* one share
Upgraded to the following setup:
Independent clusters networked together, using an NT server as a
factory-wide repository for all images.
"Master"
* dhcp server on eth0
* dhcp client on eth1
* samba server and client installed
* two shares
* Domain logon
* Occasional mounting of NT share using mount -t smbfs (the
reason for installing smbclient)
"Slave"
* samba server and client
* static ip on eth0
* one share
This is when the performance degraded on the "Slave". I then restructured
the shares so that the "Master" system mounted the Slave via NFS and then
provided this as a share through smb.conf. This improved the performance,
but there is still a serious degradation of performance if 3 clients attach
to the Slave.
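For completeness, the NFS leg of that restructuring could be pinned down in
/etc/fstab on the "Master". This is only a sketch: the hostname "Slave",
the paths, and the read size are taken from the mount command quoted in the
smb.conf excerpt at the end of this mail.

```
# /etc/fstab on the Master -- mount the Slave's image tree read-only
# so it can be re-exported through the [slave] Samba share.
Slave:/images  /slave  nfs  ro,rsize=8192  0  0
```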
The ping-pong logic is susceptible to resets and leads to having many
clients attached to the slowest share. This leads me to my next question.
I am looking for a way to programmatically return the number of users
attached to a given service.
Are there any system calls that would give this information? Smbstatus comes
close by displaying all users and the services they are attached to, but I
need this information programmatically to manage the load balancing.
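Until someone points out a proper system call, one workaround I am
considering is parsing smbstatus output. A minimal sketch, assuming the
connection table lists the service name in the first column (the exact
column layout varies between Samba versions, so this would need checking):

```shell
# count_users: count connections to a given service in smbstatus
# output read from stdin. Assumes the first whitespace-separated
# field of each connection line is the service name.
count_users() {
    awk -v svc="$1" '$1 == svc { n++ } END { print n + 0 }'
}

# Usage: smbstatus | count_users images
```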
I have also tried software raid but the PCs we are using cannot handle this.
I will also try a new HW platform that has HW raid but I would really like
to understand the degradation issue as I seem to only be pushing the problem
around and not solving it. I make changes and it crops up at a different
point in the process.
Also, the performance degradation occurs even when the "cluster" has no
physical connection to the second network. The resolution of this problem is
getting critical with the increase in OS size due to Microsoft's pushing of
"dual" installations at the factory.
Thank you in advance for your assistance.
Setup 1
Smb.conf (master)
[master]
comment = Master Image Store
path = /images
public = yes
read only = yes
writable = no
[slave]
comment = Slave Image Store
# The following line does not work at all.
# preexec = csh -c 'echo mount -t nfs -o exec,suid,ro,rsize=8192 Slave:/images /slave > /download/%m.QUE'
path = /slave
public = yes
read only = yes
writable = no
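On the preexec line that does not work: plain "preexec" runs as the
connecting user, who usually cannot mount filesystems, and the csh quoting
is fragile once the line wraps. One possible rework, an untested sketch
rather than a confirmed fix, is to let "root preexec" perform the mount
directly instead of queueing the command to a file:

```
# Untested sketch: mount the Slave's images from "root preexec",
# which runs as root, rather than queueing the mount command from
# plain "preexec", which runs as the connecting (unprivileged) user.
root preexec = /bin/sh -c 'mount -t nfs -o ro,rsize=8192 Slave:/images /slave'
root postexec = /bin/sh -c 'umount /slave'
```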
[download]
comment = Process Software
path = /download
public = yes
read only = no
writable = yes
Smb.conf (slave)
[global]
# workgroup = NT-Domain-Name or Workgroup-Name
netbios name = Slave
workgroup = workgroup
# server string is the equivalent of the NT Description field
server string = Slave Share
max log size = 0
socket options = TCP_NODELAY SO_RCVBUF=8192 SO_SNDBUF=8192
remote announce = 10.0.0.1
[images]
comment = Image Library
path = /images
public = yes
read only = no
writeable = yes
Jeff Powell
NEC CustomTechnica