Hi,

I invoked several rsync processes simultaneously. Each invocation of the rsync script reads from a file containing a list of filesystems to be synced from a client machine to an NFS fileserver. The client machine and the NFS fileserver are in separate NIS domains, which have been made to trust one another; the two NIS domains are also in separate geographical locations. However, it looks like the rsync processes may have caused the local disk on the NFS fileserver to hit 100% capacity. The filesystems to be copied from the client machine had already been mounted on the fileserver.

Below is the rsync script:

# cat sync.sh
for i in `cat datafile.txt`
do
    echo rsync -avz --dry-run --rsync-path=/usr/bin/rsync --delete $i myfileserver.pg.willow.com:$i
    /usr/intel/bin/rsync -av --rsync-path=/usr/bin/rsync --delete $i myfileserver.pg.willow.com:$i
done

# cat datafile.txt
/f1/my_schematics
/f2/layout_design
/f3/clock_time
/f5/data_padding
.... (list continues...)

Each text file read by the script contains about 8-10 filesystem entries to be synced, and there are 5 such text files serving as input to the script; hence I invoked 5 rsync processes. Each filesystem is between 10GB and 15GB in size, and the entries are specified in the input text files as absolute paths. The network bandwidth is approximately 0.8GB/hr.

These filesystems were mounted as follows on both the client and the NFS server:

/f1/my_schematics
/f2/layout_design
/f3/clock_time
/f5/data_padding

After synchronizing the filesystems to the NFS fileserver, I discovered duplicate copies of the copied filesystems, with one of each pair residing on the / filesystem. For example:

# cd /f1
# ls -l
drwxrws--- .. root engineering .... my_schematics
drwxr-xr-x .. root system      .... my_schematics

Note that the group ownership and permissions differ. I had invoked the script as root, belonging to the engineering group.

# cd /f1
# df -k *
/dev/vg32lv02 17670144 ... /f1/my_schematics
/dev/hda4       393216 ... /                  (mounted on /)

I wasn't sure which of the two I should remove, so I did:

# cd /f1
# ls -lafd *
my_schematics
. .. design README circuits
my_schematics
. ..
# cd my_schematics^D
my_schematics/  my_schematics^M/

I then removed my_schematics^M/ by doing rm -rf my_schematics?.

This problem occurred only for a few of the filesystems in the file lists; not all of the copied filesystems caused / to become 100% full. My questions are:

1) Is there a limit to the number of rsync processes that can run simultaneously, or a maximum number of channels that rsync is capable of handling?

2) The rsync on the client machine is version 1.7.1, protocol version 17. The NFS server has version 2.4.4, protocol version 24. Are they compatible?

3) Some filesystems on the source machine were themselves already duplicated, with an additional "?" appended to the name; could rsync differentiate which of them to sync? Other filesystems were not duplicated on the source and did not contain the "?", yet when copied over they produced a duplicate copy with the hidden character appended to the name. This made the troubleshooting task more ambiguous.

Could someone kindly help me out? Thanks.
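For reference, the my_schematics^M duplicates can be reproduced like this. This is only a sketch of one possible cause, assuming datafile.txt picked up DOS (CRLF) line endings at some point, which I have not confirmed; the one-entry list below is hypothetical:

```shell
# Assumption: datafile.txt has CRLF line endings (a guess based on the ^M).
# Create a hypothetical one-entry list with a DOS line ending:
printf '/f1/my_schematics\r\n' > datafile.txt

# The loop in sync.sh splits on spaces, tabs and newlines only, so the
# carriage return stays attached to $i, and rsync would create a destination
# directory literally named "my_schematics<CR>" (shown by ls as "?" or ^M):
for i in `cat datafile.txt`
do
    printf '%s\n' "$i" | cat -v    # prints /f1/my_schematics^M
done

# Stripping carriage returns before the loop yields clean path names:
for i in `tr -d '\r' < datafile.txt`
do
    printf '%s\n' "$i" | cat -v    # prints /f1/my_schematics
done
```

If this is the cause, it would also explain why rsync writes into a directory under / on the server: "/f1/my_schematics^M" does not match the real mount point "/f1/my_schematics", so the copy lands on the root filesystem instead of the mounted one.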