similar to: Extremely poor rsync performance on very large files (near 100GB and larger)

Displaying 20 results from an estimated 6000 matches similar to: "Extremely poor rsync performance on very large files (near 100GB and larger)"

2010 Jan 08
1
Rsync performance with very large files
We're having a performance issue when attempting to rsync a very large file. The transfer rate is only 1.5 MB/sec. My issue looks very similar to this one: http://www.mail-archive.com/rsync@lists.samba.org/msg17812.html In that thread, a 'dynamic_hash.diff' patch was developed to work around the issue. I applied the 'dynamic_hash' patch included in the 2.6.7 src, but it
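(Not from the thread itself, but a common stopgap on rsync builds that lack the dynamic-hash fix is to raise the checksum block size so a near-100 GB file produces far fewer blocks for the sender to search; the host, path, and 64 KB value below are illustrative only, not recommendations from the list.)

    # Hypothetical invocation: a larger --block-size means fewer checksum
    # blocks and a much smaller hash table on the sending side.
    rsync -av --partial --block-size=65536 /data/bigfile.dat remotehost:/data/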
2006 Mar 23
1
AIX 5.1 rsync large file
Hello, I have inherited a setup where there are 2 AIX 5.1 systems at 2 separate sites. There are large database files that are backed up from each site to the other via rsync. Currently, it is using rsync version 2.5.4. It does it via ssh with the options -avz. This has all been merrily plodding along for some time. There is one file that is over 45 GB, and it started having trouble with that
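(A sketch only, with placeholder hosts and paths: the job described above boils down to an invocation like the one below. For a 45 GB database file it can be worth testing without -z, since compressing poorly compressible data may dominate the transfer time, and --partial keeps an interrupted file so the next run can build on it instead of starting from nothing.)

    # Placeholders throughout -- this is the shape of the job described,
    # not the actual command from either site.
    # -a preserves attributes, -v lists files, -e ssh matches the setup above,
    # --partial keeps a partially transferred file for the next run to reuse.
    rsync -av --partial -e ssh /oradata/bigfile.dbf othersite:/oradata/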
2006 Mar 21
3
Rsync 4TB datafiles...?
I need to rsync 4 TB of datafiles to a remote server and clone them into a new Oracle database. I have about 40 drives that contain this 4 TB of data. I would like to do the rsync at the directory level by using the --files-from=FILE option. But the problem is: if the network connection fails, will the whole rsync fail? rsync -a srchost:/ / --files-from=dbf-list and dbf-list would contain this:
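(A hedged sketch of the approach, with made-up datafile names since the real contents of dbf-list are cut off above: paths in the --files-from list are read relative to the source root given on the command line, and if the connection drops, re-running the same command continues with the files that have not yet arrived rather than redoing the whole 4 TB.)

    # Datafile names below are placeholders, not the poster's real list.
    printf '%s\n' \
        'u01/oradata/PROD/system01.dbf' \
        'u02/oradata/PROD/users01.dbf' > dbf-list

    # Paths in dbf-list are taken relative to srchost:/ ; --partial keeps any
    # half-copied file so a rerun after a network failure can build on it.
    rsync -a --partial --files-from=dbf-list srchost:/ /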
2004 Jul 15
0
rsyncing very large files - extremely slow at end
Hello, I am trying to rsync a tree containing very large files. The first one encountered is a file that is 127 GB on the remote (authoritative) end and around 80 GB on the receiving end. The rsync version is 2.6.2, patched to fix the bug mentioned at: https://bugzilla.samba.org/show_bug.cgi?id=1529 But even so, the transfer is extremely slow towards the end. It will sync approximately (or exactly,
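(Not a fix reported in the thread, just a related workaround worth noting: when the two copies differ as much as 127 GB versus 80 GB and the link is fast, skipping the delta algorithm entirely can beat letting it grind through the checksum search on a file this size. Host and path are placeholders.)

    # --whole-file disables the delta algorithm and simply streams the file;
    # worth testing when bandwidth is plentiful and the existing copy differs
    # heavily.  --progress shows whether the transfer stalls near the end.
    rsync -av --whole-file --progress remotehost:/archive/huge.img /archive/huge.img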
2004 Nov 29
0
Re: Mode context extremely poor performance and varyio
Stephen, all, Thank you very much for your answer, and I wish you a happy Thanksgiving. We have currently tried migrating our database to RAW devices, already using e41smp, SecurePath (HP) 3C and QLogic 7.00. The average response time of our SQL request on ia32 is 0.32 sec, very consistently. On our Itanium, on OCFS it varies randomly between 1 and 15 sec, and on RAW between <1 sec and 3 sec. David is an
2007 Jan 13
2
Extremely poor ZFS perf and other observations
I'm observing the following behavior in our environment (Sol10U2, E2900, 24x96, 2x2Gbps, ...) - I've a compressed ZFS filesystem where I'm creating a large tar file. I notice that the tar process is running fine (accumulating CPU, truss shows writes, ...) but for whatever reason the timestamp on the file doesn't change nor does the file size change. The same is
2004 Jul 16
6
[Bug 1529] 32bit rollover problem rsyncing files greater than 4GB in size
https://bugzilla.samba.org/show_bug.cgi?id=1529 wayned@samba.org changed: Status: NEW -> RESOLVED; Resolution: FIXED. Additional comments from wayned@samba.org, 2004-07-14 09:55
2004 Jul 16
0
[Bug 1529] New: 32bit rollover problem rsyncing files greater than 4GB in size
https://bugzilla.samba.org/show_bug.cgi?id=1529 Summary: 32bit rollover problem rsyncing files greater than 4GB in size; Product: rsync; Version: 2.6.2; Platform: x86; OS/Version: Linux; Status: NEW; Severity: normal; Priority: P3; Component: core; AssignedTo: wayned@samba.org
2005 Jul 19
2
Feature request for rsync for running scripts
I was wondering if others might find it useful to have a parameter in the rsync daemon config that would allow running a command on the server at session start or at successful rsync completion. For instance, this would allow a webpage to be automatically maintained (by a script called by this method) with the timestamps of the last successful rsync completion (no errors, all files transferred),
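(For what it's worth, later rsync releases added daemon parameters along exactly these lines, "pre-xfer exec" and "post-xfer exec"; a minimal rsyncd.conf sketch, with a made-up module name and script paths:)

    # Sketch only: module name, path, and script locations are placeholders.
    [web]
        path = /srv/www
        read only = false
        # runs on the server before a client transfer for this module starts
        pre-xfer exec = /usr/local/bin/note-rsync-start.sh
        # runs after the transfer; the script can check RSYNC_EXIT_STATUS to
        # decide whether to update a "last successful sync" web page
        post-xfer exec = /usr/local/bin/update-last-sync.sh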
2013 Mar 29
4
How do you graph data when you have lots of small values but few extremely large values?
I was thinking of splitting the y-axis into two. Is this possible? Thanks -- Shane
2006 Mar 29
2
Help -- rsync Causing High Load Averages
This is my situation, and I am running into dead ends. We have a server with about 400 GB of data that we are trying to back up with rsync. On the content1 server we had rsyncd.conf as: [content1] path = / comment = Backup list = no read only = yes hosts allow = 192.168.22.181 hosts deny = * uid = root gid = root and on the backup server we had a crontab entry as follows:
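(The module definition quoted in that excerpt, reflowed as it would sit in rsyncd.conf; the crontab entry itself is cut off in the snippet, so nothing is reconstructed for it.)

    [content1]
        path = /
        comment = Backup
        list = no
        read only = yes
        hosts allow = 192.168.22.181
        hosts deny = *
        uid = root
        gid = root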
2011 Sep 07
1
predictive modeling and extremely large data
Hi, I am new to R, and here is what I am doing in it now. I am using a machine learning technique (svm) to do predictive modeling. The data that I am using is bound to grow perpetually. What I want to know is: say I feed a data set with 5000 data points to svm initially. The algorithm derives a certain intelligence (i.e., output) based on these 5000 data points. I have an additional
2004 Jun 18
2
[Bug 1463] poor performance with large block size
https://bugzilla.samba.org/show_bug.cgi?id=1463 Additional comments from wayned@samba.org, 2004-06-18 14:45: Created attachment id=543 (https://bugzilla.samba.org/attachment.cgi?id=543&action=view), the suggested patch from Craig Barratt. Wallace Matthews confirmed that this alleviates the poor performance. Just need to confirm that the window isn't getting too
2011 Sep 10
0
npreg: plotting out of sample, extremely large bandwidths
Hello r-help, I am using the excellent np package to conduct a nonparametric kernel regression and am having some trouble plotting the results. I have 2 covariates, x1 and x2, and a continuous outcome variable y. I am conducting a nonparametric regression of y on x1 and x2. The one somewhat unusual feature of these data is that, to be included in the dataset, x1 must be at least as large as x2.
2008 Sep 26
3
Dealing With Extremely Large Files
Hi, I'm sure that a large fixed-width file, such as one with 300 million rows and 1,000 columns, is too large for R to handle on a PC, but are there ways to deal with it? For example, is there a way to combine some sampling method with read.fwf so that you can read in a sample of 100,000 records? Something like this may make analysis possible. Once analyzed, is there a way to, say, read
2004 Nov 22
3
Mode context extremely poor performance.
Hi all, I currently have a big problem. One request (listed above) using context takes up to 1000 times longer than on a RAW or ext2 database. I have run this request on a single IA32 machine with Red Hat and the dbf on ext2. The average response time is less than a second. The same request on a 4-node RAC cluster on RAW takes the same average time; likewise on ext2. But on OCFS it took up to 15 sec randomly
2010 May 25
1
Need Help! Poor performance about randomForest for large data
Hi all, I am processing some data with 60 columns and 286,730 rows. Most columns are numerical, and some columns are categorical. It turns out that when ntree is set to the default value (500), it says "can not allocate a vector of 1.1 GB size"; and when I set ntree to a very small number like 10, it runs for hours. I use the (x, y) interface rather than the (formula, data) interface.
2007 Nov 27
7
DO NOT REPLY [Bug 5109] New: poor performance on large drives with big bandwidth
https://bugzilla.samba.org/show_bug.cgi?id=5109 Summary: poor performance on large drives with big bandwidth; Product: rsync; Version: 2.6.6; Platform: x86; OS/Version: Windows XP; Status: NEW; Severity: normal; Priority: P3; Component: core; AssignedTo: wayned@samba.org; ReportedBy: