similar to: rsync alternatives for large mirrors?

Displaying 20 results from an estimated 9000 matches similar to: "rsync alternatives for large mirrors?"

2001 Nov 30
1
Rsync: Re: patch to enable faster mirroring of large filesystems
I, too, was disappointed with rsync's performance when no changes were required (23 minutes to verify that a system of about 3200 files was identical). I wrote a little client/server Python app which does the verification, and then hands rsync the list of files to update. This reduced the optimal-case compare time to under 30 seconds. Here's what it does, and forgive me if these sound
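A minimal sketch of that verify-then-hand-off idea (the poster's app is client/server and isn't shown in the post; the paths, the local metadata mirror, and the size-plus-mtime check below are all assumptions): walk the tree, collect only the paths that differ, and feed them to rsync on stdin.

    import os
    import subprocess

    SRC = "/data/src"          # hypothetical source tree
    DEST = "remote:/data/src"  # hypothetical rsync destination
    MIRROR = "/data/mirror"    # assumed local replica of the destination's metadata

    def changed_files(src_root, mirror_root):
        """Yield paths (relative to src_root) whose size or mtime differ."""
        for dirpath, _dirs, files in os.walk(src_root):
            for name in files:
                src = os.path.join(dirpath, name)
                rel = os.path.relpath(src, src_root)
                dst = os.path.join(mirror_root, rel)
                try:
                    s, d = os.stat(src), os.stat(dst)
                    if s.st_size == d.st_size and int(s.st_mtime) == int(d.st_mtime):
                        continue  # unchanged: the fast path, no rsync involvement
                except FileNotFoundError:
                    pass  # missing on the mirror: include it
                yield rel

    def sync():
        paths = list(changed_files(SRC, MIRROR))
        if not paths:
            return  # nothing changed: no rsync run at all
        # Hand rsync just the changed files instead of the whole tree.
        subprocess.run(
            ["rsync", "-a", "--files-from=-", SRC, DEST],
            input="\n".join(paths).encode(),
            check=True,
        )

    if __name__ == "__main__":
        sync()

The win is that rsync never walks the unchanged files at all; it only sees the short list on stdin.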
2014 Apr 11
1
rsync performance on large files strongly depends on the files' (dis)similarity
Hi list, I've found this post on rsync's expected performance for large files: https://lists.samba.org/archive/rsync/2007-January/017033.html I have a related but different observation to share: with files in the multi-gigabyte range, I've noticed that rsync's runtime also depends on how much the source and destination diverge, i.e., synchronization is faster if the files are
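A quick way to reproduce the observation (the file size, block size, and mutation pattern below are arbitrary choices, not from the post): build a base file, derive copies that diverge by different amounts, and time rsync against each.

    import os
    import shutil
    import subprocess
    import time

    SIZE = 128 * 1024 * 1024   # 128 MiB test file; sizes are arbitrary
    BLOCK = 1024 * 1024

    def mutate(path, fraction):
        """Overwrite roughly `fraction` of the file's 1 MiB blocks."""
        step = max(1, round(1 / fraction))
        with open(path, "r+b") as f:
            for i in range(0, SIZE // BLOCK, step):
                f.seek(i * BLOCK)
                f.write(os.urandom(BLOCK))

    def timed_rsync(src, dst):
        t0 = time.monotonic()
        # --no-whole-file forces the delta algorithm even for local paths,
        # where rsync would otherwise just copy the file outright.
        subprocess.run(["rsync", "--no-whole-file", src, dst], check=True)
        return time.monotonic() - t0

    with open("base.img", "wb") as f:
        f.write(os.urandom(SIZE))

    for fraction in (0.01, 0.5, 1.0):
        shutil.copy("base.img", "dst.img")   # destination starts identical
        shutil.copy("base.img", "src.img")
        mutate("src.img", fraction)          # then the source diverges
        print(f"{fraction:>4.0%} changed: {timed_rsync('src.img', 'dst.img'):.1f}s")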
2010 Dec 16
1
use parted to create a "raw partition"?
We have CentOS 5.5 on x86. I tried to create a "raw partition" (NOT a filesystem) on a disk, but it continues to show "ext3". How can I get rid of it?

=== procedure ===
# parted /dev/sde
GNU Parted 1.8.1
Using /dev/sde
Welcome to GNU Parted! Type 'help' to view a list of commands.
(parted) p
Model: DELL PERC
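The usual explanation: parted only rewrites the partition table, while the old ext3 superblock still sits in the partition's first blocks, so probing tools keep reporting ext3. Zeroing those blocks clears the stale signature. A hedged sketch (the device path is a placeholder; be certain of it before writing to any disk):

    import os

    DEV = "/dev/sdXn"    # placeholder for the partition in question; triple-check!
    WIPE = 1024 * 1024   # ext2/3 superblocks sit within the first blocks

    # Equivalent to: dd if=/dev/zero of=/dev/sdXn bs=1M count=1
    # (on newer systems, `wipefs -a /dev/sdXn` does the same more surgically)
    fd = os.open(DEV, os.O_WRONLY)
    try:
        os.write(fd, b"\x00" * WIPE)
        os.fsync(fd)
    finally:
        os.close(fd)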
2001 Nov 30
0
Rsync: Re: patch to enable faster mirroring of large filesystems
In my particular case, it is reasonable to assume that the size and timestamp will change when the file is updated. (We are looking at it as a patching mechanism.) Right now it actually uses update time only; I should modify it to check the file size as well. Is there a way you could query your database to tell you which extents have data that has been modified within a certain timeframe?
2001 Nov 30
0
Rsync: Re: patch to enable faster mirroring of large filesystems
Keating, Tim [TKeating@origin.ea.com] writes:
> Is there a way you could query your database to tell you which
> extents have data that has been modified within a certain timeframe?

Not in any practical way that I know of. It's not normally a major hassle for us since rsync is used for a central backup that occurs on a large enough time scale that the timestamp does normally change
2004 Sep 07
1
large blocksize for transferring files from one filesystem to another on the same disk?
Is there any way of increasing rsync's block transfer size for when you need to transfer data from one filesystem to another on the same disk? Using a large blocksize tends to decrease track-to-track seeks. If I put a large-blocksize dd or something in between two tars in such a situation, I see over double the throughput on an AIX system (1M vs 64K). Thanks. -- Dan Stromberg DCS/NACS/UCI
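The same pipeline the poster describes, sketched with Python's subprocess (paths are placeholders): tar streams the tree out, dd in the middle re-blocks the stream into 1 MiB chunks, and a second tar unpacks it, so reads and writes to the same spindle come in larger, less seek-happy units.

    import subprocess

    SRC = "/data/old_fs"   # placeholder source directory
    DST = "/data/new_fs"   # placeholder destination on the same disk

    # tar -C SRC -cf - . | dd bs=1024k | tar -C DST -xf -
    # dd re-blocks the stream into 1 MiB units (bs=1024k is the portable
    # spelling; GNU dd also accepts bs=1M).
    reader = subprocess.Popen(["tar", "-C", SRC, "-cf", "-", "."],
                              stdout=subprocess.PIPE)
    rebuf = subprocess.Popen(["dd", "bs=1024k"],
                             stdin=reader.stdout, stdout=subprocess.PIPE)
    reader.stdout.close()  # so tar sees SIGPIPE if dd exits early
    writer = subprocess.Popen(["tar", "-C", DST, "-xf", "-"],
                              stdin=rebuf.stdout)
    rebuf.stdout.close()
    writer.wait()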
2003 Jul 08
2
problems mirroring a disk
I'm trying to use rsync to mirror my server boot volume to another disk of the same size (120 GB) on the same machine:

% sudo rsync -avxuH --delete --progress /./ /volumes/cbc.server2/

Although all of the files from the source volume appear to copy to the destination, if I try to boot from the destination volume the system doesn't recognize it as a bootable disk. Anyone have any
2001 Nov 20
2
patch to enable faster mirroring of large filesystems
I have attached a patch that adds 4 options to rsync that have helped me to speed up my mirroring. I hope this is useful to someone else, but I fear that my relative inexperience with rsync has caused me to miss a way to do what I want without having to patch the code. So please let me know if I'm all wet. Here's my story: I have a large filesystem (around 20 gigabytes of data) that
2015 Oct 13
2
transferring large encrypted images.
Hi Folks, I was wondering if I could ask this question here. Initially when I was thinking up how to do this I was expecting block encryption to stay consistent from one 'encryption run' to the next, but I found out later that most schemes randomize the result by injecting a random block or seed at the beginning and basing all other encrypted data on that. In order to prevent
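The effect the poster describes, in miniature (this sketch assumes the third-party pycryptodome package; the cipher and mode are illustrative, not from the thread): with a fresh random IV, the same plaintext encrypts to completely different bytes each run, leaving rsync's rolling-checksum matching nothing to reuse.

    # pip install pycryptodome  (assumed dependency, not from the thread)
    import os
    from Crypto.Cipher import AES

    key = os.urandom(32)
    plaintext = os.urandom(4096) * 256  # 1 MiB of stable "disk image" data

    def encrypt(data):
        iv = os.urandom(16)              # fresh random IV on every run
        cipher = AES.new(key, AES.MODE_CBC, iv)
        return iv + cipher.encrypt(data)

    a, b = encrypt(plaintext), encrypt(plaintext)
    # Same key, same plaintext, but the random IV cascades through CBC:
    # the two ciphertexts share essentially nothing rsync could match on.
    same = sum(x == y for x, y in zip(a, b))
    print(f"identical bytes: {same}/{len(a)}")  # ~1/256 of positions, i.e. chance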
2006 Sep 07
2
Alternatives to merge for large data sets?
Hello, I am trying to merge two very large data sets, via

pubbounds.prof <- merge(x=pubbounds, y=prof, by.x="user", by.y="userid", all=TRUE, sort=FALSE)

which gives me the error

Error: cannot allocate vector of size 2962 Kb

I am reasonably sure that this is correct syntax. The trouble is that pubbounds and prof are large; they are data frames which take up 70M and 11M
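One standard workaround for a join that won't fit in memory is to push it into an on-disk database and stream the result. A sketch of that idea using Python's sqlite3 rather than R (a deliberate swap, since the thread is about R's merge(); the key columns user/userid come from the post, while the value columns and rows are stand-ins):

    import sqlite3

    con = sqlite3.connect("join.db")   # on disk, so the join isn't RAM-bound
    con.execute("CREATE TABLE pubbounds (user TEXT, value REAL)")
    con.execute("CREATE TABLE prof (userid TEXT, value REAL)")

    # Bulk-load each source in chunks; executemany never holds the full set.
    con.executemany("INSERT INTO pubbounds VALUES (?, ?)",
                    [("alice", 1.0), ("bob", 2.0)])       # stand-in data
    con.executemany("INSERT INTO prof VALUES (?, ?)",
                    [("alice", 9.0), ("carol", 3.0)])
    con.execute("CREATE INDEX idx_prof ON prof(userid)")  # index the join key

    # merge(..., all=TRUE) is a full outer join; SQLite only gained FULL JOIN
    # in 3.39, so older versions need a union of two left joins. Left join shown:
    for row in con.execute("""SELECT * FROM pubbounds
                              LEFT JOIN prof ON pubbounds.user = prof.userid"""):
        print(row)   # rows stream from disk one at a time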
2001 Nov 20
1
Rsync: Re: patch to enable faster mirroring of large filesystems
Is there any chance this can be added to the distribution? It sounds really nifty. Another suggestion, unless I have misread the following: would it be useful to have a command-line option in rsync to generate the file list by doing the "find" and outputting it in a standard format? (As this would make it less OS-specific or kludgy.) Cheers, Lachlan. At 16:06 19/11/01 -0500, you wrote:
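Later rsync releases grew exactly this hook: --files-from reads an externally generated list, so find can do the walking. A sketch of the loop (the paths and the timestamp-file convention are assumptions):

    import subprocess

    SRC = "/bigfs"                # hypothetical tree being mirrored
    DEST = "backup:/bigfs"        # hypothetical destination
    STAMP = "/var/tmp/last-sync"  # seed with `touch` once before the first run

    # find prints paths newer than the stamp file; rsync reads them on stdin.
    finder = subprocess.Popen(
        ["find", ".", "-type", "f", "-newer", STAMP, "-print"],
        cwd=SRC, stdout=subprocess.PIPE)
    subprocess.run(["rsync", "-a", "--files-from=-", SRC, DEST],
                   stdin=finder.stdout, check=True)
    finder.stdout.close()
    if finder.wait() == 0:
        subprocess.run(["touch", STAMP], check=True)  # mark this run's cutoff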
2003 Jul 30
2
Large files and symlinks
Hi, I'm mirroring a single server to multiple clients. Currently I'm using scp, but I (think I) want to use rsync. The files I'm mirroring are large, c. 4 GB (video data). Each client has a different set of these files. The transfer is done over the internet, and may fail (regularly!). I set up a separate directory for each client, and put in symlinks to the actual files (maximum
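For this setup, the two rsync options that likely matter are --copy-links, so each client's symlink farm is sent as real file contents, and --partial, so a failed multi-gigabyte transfer resumes rather than restarts. A hedged sketch with placeholder host and paths:

    import subprocess

    CLIENT_DIR = "/exports/clients/clientA"  # placeholder per-client symlink farm
    DEST = "clientA.example.com:/videos/"    # placeholder destination

    subprocess.run([
        "rsync",
        "-av",
        "--copy-links",   # follow the per-client symlinks, send real file data
        "--partial",      # keep partially transferred files so a retry resumes
        "--timeout=300",  # give up on a dead connection instead of hanging
        CLIENT_DIR + "/",
        DEST,
    ], check=True)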
2015 Oct 13
3
transferring large encrypted images.
On Tue, Oct 13, 2015 at 12:54 PM, Xen <list at xenhideout.nl> wrote:
> Hi Folks,
>
> I was wondering if I could ask this question here.
>
> Initially when I was thinking up how to do this I was expecting block
> encryption to stay consistent from one 'encryption run' to the next, but I
> found out later that most schemes randomize the result by injecting a
2012 Jan 03
0
Biglm source code alternatives (E.g. Call to Fortran)
Hi everyone, I have been looking at bigglm (which fits generalised linear models to big data, from the biglm package) and I have done some profiling on this code. I found that a GLM on a 100 MB file (a 9-million-row by 5-column matrix, mostly randomly generated 0s, 1s and 2s) took about 2 minutes on a Linux machine with 8 GB of RAM and 4 cores.
2004 Aug 10
1
rsync erroring out when syncing a large tree
I'm trying to sync a Mandrake Linux tree (~120 GB) but it bombs out after a while with this error:

rsync: connection unexpectedly closed (289336107 bytes read so far)
rsync error: error in rsync protocol data stream (code 12) at io.c(189)
rsync: writefd_unbuffered failed to write 4092 bytes: phase "unknown": Broken pipe
rsync error:
2016 Nov 30
2
[PATCH kernel v5 5/5] virtio-balloon: tell host vm's unused page info
On 11/30/2016 12:43 AM, Liang Li wrote:
> +static void send_unused_pages_info(struct virtio_balloon *vb,
> +				unsigned long req_id)
> +{
> +	struct scatterlist sg_in;
> +	unsigned long pos = 0;
> +	struct virtqueue *vq = vb->req_vq;
> +	struct virtio_balloon_resp_hdr *hdr = vb->resp_hdr;
> +	int ret, order;
> +
> +	mutex_lock(&vb->balloon_lock);
> +
2009 Mar 10
3
Cannot get CentOS to install
I used Google but did not come up with anything current to solve my problem. I have not tried any other search engines yet. There is no search for the CentOS list, so I am creating a new post. I have tested the CentOS 5.2 and RHEL 5.3 CDs and they passed. I had to use linux mediacheck ide=nodma to get them to pass. The newest releases seem to fail most of the time with just linux mediacheck. I
2014 May 20
4
"EDD Load error" on btrfs, how to debug?
On Tue, May 20, 2014 1:52 pm, Anatol Pomozov wrote:
> Ok, we've figured out a potential cause of the problem. The next
> question is how to minimize the size of ldlinux.sys?
>
> BTW, looking at the official (?) binary
> https://www.kernel.org/pub/linux/utils/boot/syslinux/Testing/6.03/syslinux-6.03-pre11.tar.xz
> I see that their size is also more than 64K

Actually, there is *no*
2015 Oct 13
0
transferring large encrypted images.
Why are you encrypting the files and not the filesystem and the channel?

On Tue, Oct 13, 2015 at 6:54 PM, Xen <list at xenhideout.nl> wrote:
> Hi Folks,
>
> I was wondering if I could ask this question here.
>
> Initially when I was thinking up how to do this I was expecting block
> encryption to stay consistent from one 'encryption run' to the next, but I