similar to: AIX 5.1 rsync large file

Displaying 20 results from an estimated 2000 matches similar to: "AIX 5.1 rsync large file"

2010 Jan 08
1
Rsync performance with very large files
We're having a performance issue when attempting to rsync a very large file. Transfer rate is only 1.5MB/sec. My issue looks very similar to this one: http://www.mail-archive.com/rsync at lists.samba.org/msg17812.html In that thread, a 'dynamic_hash.diff' patch was developed to work around this issue. I applied the 'dynamic_hash' patch included in the 2.6.7 src, but it
2006 Mar 21
3
Rsync 4TB datafiles...?
I need to rsync 4 TB of datafiles to a remote server and clone them into a new Oracle database. I have about 40 drives that contain this 4 TB of data. I would like to rsync at the directory level by using the --files-from=FILE option. But the problem is: if the network connection fails, the whole rsync will fail, right? rsync -a srchost:/ / --files-from=dbf-list and dbf-list would contain this:
2002 Apr 19
2
out of memory in build_hash_table
I recently installed rsync 2.5.5 on both the rsync server and the client. I installed the latest version because I was having problems with rsync stalling under version 2.4.6 (I read that 2.5.5 was supposed to clear this up, or at least give more appropriate errors). I am still having problems with rsync stalling even after upgrading to 2.5.5. It only stalls in the "/home" tree
2006 Mar 29
2
Help -- rsync Causing High Load Averages
This is my situation, and I am running into dead ends. We have a server with about 400GB of data that we are trying to back up with rsync. On the content1 server we had rsyncd.conf as:

[content1]
path = /
comment = Backup
list = no
read only = yes
hosts allow = 192.168.22.181
hosts deny = *
uid = root
gid = root

and on the backup server we had a crontab entry as follows:
2003 Oct 05
2
Possible security hole
Maybe security-related mails should be sent elsewhere? I didn't notice any such address, so here it goes. In sender.c:receive_sums():

s->count = read_int(f);
..
s->sums = (struct sum_buf *)malloc(sizeof(s->sums[0])*s->count);
if (!s->sums) out_of_memory("receive_sums");
for (i=0; i < (int) s->count; i++) {
    s->sums[i].sum1 = read_int(f);
2004 Jan 26
1
How match.c hash_search works with multiple checksums that have identical tags
I am trying to understand how match.c works. I am reading the code, and something doesn't look quite right. This is usually a sign that I am missing something obvious. Here is what I see: build_hash_table uses qsort to order targets in ascending order of //tag,index// into the array of checksums. It then accesses the targets in ascending order and writes the index at the tag's location in
2004 Jan 27
1
Init array to -1 with memset()?
The match.c code has a loop that initializes an array to -1. I'm considering changing this to a memset() of 0xFF over all the array's bytes, but that depends on the system representing -1 as "all bits on". Should I be anal about this and add a configure check to make sure that we're not running on some weird system where this is not true? Or should I just let
2007 Sep 01
1
RHEL 5.1 beta, Dovecot 1.0.3: error while loading shared libraries?
Hello, I'm trying to run dovecot 1.0.3 on RHEL 5.1 beta. As soon as I start it, I get the following output, and it's impossible to log in over IMAP across the network. I've been following the list for a while, and this seems to ring a bell about setting the correct ulimit, but I can't find the thread anymore, so I need to ask. :( Here's the debug output I've put together. I've not been able
2007 Jan 08
1
Extremely poor rsync performance on very large files (near 100GB and larger)
I've been playing with rsync and very large files approaching and surpassing 100GB, and have found that rsync has very poor performance on these files, and the performance appears to degrade the larger the file gets. The problem only appears when the file is being "updated", that is, when it already exists on the receiving side. For instance,
2006 Mar 15
0
WHAM as dtrace
In the past I used a tool called WHAM to collect system and process information from Solaris. It looks like dtrace could replace WHAM. Below are the counters that I got from WHAM; I got cumulative and instantaneous counts. Is there some existing dtrace script that will give me all of these, plus even more? Also, WHAM was distributed, that is, it could collect these counters from multiple
2018 Jan 14
2
Lmtp Memory Limit
Hi, I am using dovecot 2.2.33.2 on CentOS 7.4. Since I upgraded from CentOS 7.2 to CentOS 7.4 (without upgrading dovecot), my dovecot sieve-pipe scripts crash with Out of memory: Out of memory (allocated 262144) (tried to allocate 8793 bytes). Are there memory limits in dovecot or sieve? Can I change this value? Kernel limits: [root at xxx software]# ulimit -a core file size
2020 Sep 16
1
dovecot 2.2.36.4 problem with ulimit
Hi, perhaps this?

> with new debian9:
> open files (-n) 1024

Regards, Urban

On 16.09.20 at 12:57, Maciej Milaszewski wrote:
> Hi
> Limits:
>
> Where everything was working fine:
>
> core file size          (blocks, -c) 0
> data seg size           (kbytes, -d) unlimited
> scheduling priority             (-e) 0
> file size               (blocks,
2018 Jan 15
0
Lmtp Memory Limit
On 14.01.2018 09:11, Thomas Manninger wrote:
> Hi,
>
> I am using dovecot 2.2.33.2 on CentOS 7.4.
>
> Since I upgraded from CentOS 7.2 to CentOS 7.4 (without upgrading dovecot), my dovecot sieve-pipe scripts crash with Out of memory:
> Out of memory (allocated 262144) (tried to allocate 8793 bytes)
>
> Are there memory limits in dovecot or sieve? Can I change
2020 Sep 16
0
dovecot 2.2.36.4 problem with ulimit
Hi
Limits:

Where everything was working fine:

core file size          (blocks, -c) 0
data seg size           (kbytes, -d) unlimited
scheduling priority             (-e) 0
file size               (blocks, -f) unlimited
pending signals                 (-i) 257970
max locked memory       (kbytes, -l) 64
max memory size         (kbytes, -m) unlimited
open files                      (-n) 65536
pipe
2018 Jan 15
1
Aw: Re: Lmtp Memory Limit
2001 Oct 05
0
"File size limit exceeded" when running /sbin/mke2fs -j /dev/sdb1
Hi! I have a problem making an ext3 FS on a new disk. When I run mke2fs, it stops and gives me: "File size limit exceeded". Is this a known issue? I'm running linux-2.4.10 with the ext3 patch and a freshly compiled e2fsprogs-1.25. Cheers, Vita. Appended are the outputs of the following programs: bash /usr/src/linux/scripts/ver_linux; /sbin/mke2fs -m0 -v -j /dev/sdb1; fdisk -l /dev/sdb; strace
2009 Jul 22
11
Request for feedback
A number of years back it became necessary to limit the size of messages that could be posted to the samba mailing list. The current limit is 64 KBytes. While it continues to be desirable to block large spam messages, I believe it is time to ask current subscribers for their preferences. This list is here to serve the wishes and needs of our subscribers. We wonder if the time is right to review
2003 Aug 01
2
Bandwidth Monitor
Does anybody know of a bandwidth meter to use in Bering? This is a script I built; it works well, but it's not very nice!!! =P

#!/bin/bash
# Bandwidth Monitor
device=eth0
bytes=`grep $device /proc/net/dev | cut -f 2 -d : | cut -d ' ' -f 2`
kbytes=`expr $bytes / 1024`
actual=$kbytes
i=1
x=0
total=0
while [ $i -le 2 ]
do
x=`expr $x + 1`
2008 Aug 04
1
pam max locked memory issue after updating to 5.2 and rebooting
We were previously running 5.1 x86_64 and recently updated to 5.2 using yum. Under 5.1 we were having problems when running jobs using torque, and the solution had been to add the following items to the files noted: "* soft memlock unlimited" in /etc/security/limits.conf, and "session required pam_limits.so" in /etc/pam.d/{rsh,sshd}. This changed the max
2017 Mar 29
2
cannot login to imap under load
Hi Steffen, On 29-03-17 12:38, Steffen Kaiser wrote: > On Tue, 28 Mar 2017, Gerard Ranke wrote: > >> dovecot: master: Error: service(imap): fork() failed: Resource >> temporarily unavailable >> dovecot: master: Error: service(imap): command startup failed, >> throttling for 2 secs > > check out the ulimits for the Dovecot process. > > -- Steffen Kaiser