Displaying 20 results from an estimated 5000 matches similar to: "Memory consumption for rsync -axv --delete"
2016 Mar 25
0
Memory consumption for rsync -axv --delete
If you were using --link-dest to make multiple backups you wouldn't
need --delete because the target is always a new empty directory (with
--link-dest pointing to the previous backup run).
So, you get the benefit of having multiple backups to restore from and
rsync doesn't have to --delete. When you run low on space you just rm
-rf some
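A minimal sketch of the rotation being described (host and paths are
placeholders, not from the thread): every run goes into a fresh dated
directory, --link-dest hard-links unchanged files to the previous run, and
--delete is never needed because the destination starts out empty.
# one backup run per day; /backups/latest points at the previous run
today=$(date +%F)
rsync -ax --link-dest=/backups/latest /data/ backupbox:/backups/$today/
ssh backupbox "ln -sfn /backups/$today /backups/latest"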
2016 Mar 25
4
Memory consumption for rsync -axv --delete
Hi,
I have been using rsync for many years and never had any kind of problem.
Lately I am running out of RAM trying to do an incremental backup to a box
that only has 2G of RAM. The entire directory structure I'm mirroring is
about 200G of files. A minority of subdirectories have many files.
Is there a way to do an incremental backup with the --delete option that does
not use as much memory? Is
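For the memory question itself, one thing worth checking (a hedged aside,
since the rsync versions in play are not stated in the excerpt) is whether
both ends run rsync 3.x and whether anything disables its incremental file
list, which keeps memory roughly proportional to the largest single directory
instead of the whole tree. Paths and host below are placeholders.
# --delete (--delete-during under protocol 30) keeps the incremental file
# list; --delete-before and --delete-after force the old whole-tree scan
rsync -axv --delete-during /source/ backupbox:/mirror/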
2016 Mar 27
0
Memory consumption for rsync -axv --delete
You will have an old backup dir and a new backup dir. The new one
will contain all the current stuff. The old one will contain what was
current the last time you ran rsync. Just rm -rf the old one. Or
keep a few. Or a few dozen.
On 03/27/2016 02:54 AM, John Long wrote:
> Thanks I'll look this up. There is still the issue of how to get
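A sketch of the "keep a few" cleanup, assuming the dated-directory layout
from the --link-dest example above (names are placeholders): list the
snapshot directories oldest-first and remove all but the newest seven.
cd /backups || exit 1
ls -1d 20??-??-?? | head -n -7 | xargs -r rm -rf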
2016 Mar 27
0
Memory consumption for rsync -axv --delete
You misunderstand the purpose of --link-dest. Yes, it gives you
multiple complete backups, but each only consumes the disk space
needed to store files that are unique to that backup. Files that are
the same in two backup runs are actually the same file hard-linked into
multiple directories, so only one copy needs to be stored.
On 03/27/2016 02:39 AM, John Long
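A quick way to see the hard-linking at work (a sketch; dates and paths are
placeholders): identical files in two runs share one inode, so ls shows a
link count greater than 1, and du counts each hard-linked file only once
across the whole invocation, so the second run appears to add only what
actually changed.
ls -li /backups/2016-03-26/etc/hosts /backups/2016-03-27/etc/hosts
du -sh /backups/2016-03-26 /backups/2016-03-27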
2016 Mar 27
2
Memory consumption for rsync -axv --delete
Hi,
On Fri, Mar 25, 2016 at 11:16:47AM -0400, Kevin Korb wrote:
> If you were using --link-dest to make multiple backups you wouldn't
> need --delete because the target is always a new empty directory (with
> --link-dest pointing to the previous backup run).
The source is around 200G and the target box only has 500G total and some of
it is used for other data. What I want to do
2016 Mar 27
2
Memory consumption for rsync -axv --delete
Thanks I'll look this up. There is still the issue of how to get the target
box cleaned up since I can no longer run --delete.
/jl
On Sun, Mar 27, 2016 at 02:49:02AM -0400, Kevin Korb wrote:
> You misunderstand the purpose of --link-dest. Yes, it gives you
> multiple complete backups, but each only consumes the disk space
2018 Mar 25
4
Rsync between 2 datacenters not working
You could try using an automounter, like autofs, in combination with
sshfs. It'll be slower, possibly a lot slower, but it should be more
reliable over an unreliable connection.
I've been using:
remote -fstype=fuse,allow_other,nodev,noatime,reconnect,ServerAliveInterval=15,ServerAliveCountMax=40,uid=0,gid=0,ro,nodev,noatime
:sshfs\#root@remote.host.com\:/
BTW, I'm not sure
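For completeness, a map line like that has to be referenced from auto.master;
a minimal sketch (the mount root /mnt and the map file name /etc/auto.sshfs
are assumptions, not from the post):
# /etc/auto.master
/mnt    /etc/auto.sshfs    --timeout=300
automount then mounts /mnt/remote on first access and unmounts it after the
idle timeout, which together with the reconnect option above is what makes
the setup more tolerant of connection drops.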
2011 May 31
2
Samba serving sshfs shares: can't delete files
Hello!
I have a Samba share on my sshfs-mounted folder. Everything works just fine
except that I can't delete files from sshfs unless they are in a 0777-chmodded
directory, even if the files were put there through smbclient. I can read
files and write files (regardless of their directory permissions) but not
delete them.
Here is my share config:
[myshare]
comment = share over sshfs
path = /home/kli/work/remotes/dev
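One hedged guess about the cause (not stated in the excerpt): by default
sshfs only grants access to the user who mounted it and does its own
permission handling, so a Samba process serving the share can hit surprises
on operations that depend on directory permissions, such as unlink.
Remounting with allow_other and default_permissions and mapping ownership to
the share user is the usual first thing to try; names below are placeholders.
# allow_other usually needs user_allow_other in /etc/fuse.conf (or a root mount)
sshfs -o allow_other,default_permissions \
      -o uid=$(id -u kli),gid=$(id -g kli) \
      devhost:/srv/project /home/kli/work/remotes/dev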
2013 Mar 21
1
sshfs -o rellinks (module option) rejected by fuse
New to sshfs and new to this mailing list so please guide me if required.
Is this a bug? When sshfs is given the option -o rellinks, it responds with
fuse: unknown option `rellinks'
According to my understanding of the sshfs man page and --help output
this option a) is valid and b) should be passed to the module, not to fuse.
Versions:
SSHFS version 2.4
FUSE library version: 2.8.5
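One way to narrow this down (a suggestion, not from the thread): check
whether the installed sshfs build lists the option at all, since rellinks may
simply not exist in a 2.4 build; host and paths below are placeholders.
# does this sshfs build know the option?
sshfs --help 2>&1 | grep -i link
# older option with a similar purpose: rewrite absolute symlink targets so
# they stay inside the mount
sshfs -o transform_symlinks user@host:/dir /mnt/dir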
2019 Jun 09
2
[Bug 13991] New: rsync --delete --one-file-system skips deletes after crossing filesystems on destination.
https://bugzilla.samba.org/show_bug.cgi?id=13991
Bug ID: 13991
Summary: rsync --delete --one-file-system skips deletes after
crossing filesystems on destination.
Product: rsync
Version: 3.1.3
Hardware: All
OS: Linux
Status: NEW
Severity: normal
Priority: P5
2013 Apr 10
5
[Bug 9783] New: please don't use client-server model for local copies
https://bugzilla.samba.org/show_bug.cgi?id=9783
Summary: please don't use client-server model for local copies
Product: rsync
Version: 3.0.9
Platform: All
URL: http://lwn.net/Articles/400489/
OS/Version: Linux
Status: NEW
Severity: enhancement
Priority: P5
Component: core
2020 May 18
4
how does autofs deal with stuck NFS mounts and suspending to RAM?
Hi,
after trying to use sshfs to mount a remote file system from a server, with
the result that sshfs sooner or later gets stuck and requires a reboot of the
client, I'm fed up with it and am looking for alternatives.
So next I would like to use NFS over a VPN connection instead. To minimize
the instances of the NFS mount getting stuck, it might be helpful to use
autofs.
What happens when the
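A sketch of what that could look like (server, export and numbers are
placeholders, not from the post): an autofs-managed NFS mount with a short
idle timeout and soft-ish options, so the mount only exists while it is in
use and a dead server cannot hang the client indefinitely.
# /etc/auto.master
/mnt/nfs    /etc/auto.nfs    --timeout=60
# /etc/auto.nfs
data    -fstype=nfs4,soft,timeo=100,retrans=3    fileserver:/export/data
Note that soft mounts trade hangs for possible I/O errors on a flaky link,
which may or may not be acceptable for the data in question.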
2020 May 19
2
how does autofs deal with stuck NFS mounts and suspending to RAM?
On Tuesday, May 19, 2020 1:36:03 AM CEST Warren Young wrote:
> On May 18, 2020, at 5:13 AM, hw <hw at gc-24.de> wrote:
> > Is there a better alternative for mounting remote file systems over
> > unreliable connections?
>
> I don't have a good answer for you, because if you'd asked me without all
> this backstory whether NFS or SSHFS is more tolerant of bad
2011 Oct 04
6
zvol space consumption vs ashift, metadata packing
I sent a zvol from host a to host b, twice. Host b has two pools,
one ashift=9, one ashift=12. I sent the zvol to each of the pools on
b. The original source pool is ashift=9, and an old revision (2009_06
because it's still running xen).
I sent it twice, because something strange happened on the first send,
to the ashift=12 pool. "zfs list -o space" showed figures at
2006 Nov 18
1
cannot get fuse-ssh to operate from a batch script - but does from cmd line
Hi there
I want to call sshfs (auth via DSA keys) from an rsync pre-xfer bash
script, and cannot get something right. If I run it from the command line:
env - sshfs usern@server:/share /dir/path -o IdentityFile=/tmp/id_dsa
it mounts it just fine. (note the "env -" - I specifically tested with
no environment to try to make the two situations identical). If I put
that sole line
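A hedged guess at what differs between the two situations (not from the
post): when rsync runs a pre-xfer script there is usually no HOME and no TTY,
so ssh cannot find known_hosts and cannot prompt about an unknown host key.
Making everything explicit in the script tends to remove the difference; all
paths below are placeholders.
#!/bin/sh
# hypothetical pre-xfer script; sshfs passes unknown -o options through to ssh
/usr/bin/sshfs usern@server:/share /dir/path \
    -o IdentityFile=/tmp/id_dsa \
    -o UserKnownHostsFile=/var/lib/backup/known_hosts \
    -o StrictHostKeyChecking=yes \
    -o BatchMode=yes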
2006 Sep 14
3
Anyone using fuse and/or sshfs under Centos 4.4?
Hi
A search of Google failed to show any prebuilt RPMs for sshfs and fuse.
I do see that fuse support is in the 2.6.14 kernel, which isn't a whole lot
of help.
Before I dive headlong into this, has anyone successfully built fuse/sshfs
against CentOS 4.4?
If so, would you share your experience?
Thanks
Daveh
2008 Nov 05
1
Bug+bugfix in sftp-server : failed to rename file on sshfs mount
Hello,
Renaming a file via sftp on an sshfs mount resulted in a failure with
error code 38 (ENOSYS).
This is reproducible with OpenSSH releases 4.9p1 & 5.1p1 in combination with
sshfs 2.2 (the latest releases). Investigation revealed that sshfs only
implements the rename()-call and not the link()-call (used by
sftp-server).
Attached is a patch to perform the rename()-call upon a failed link().
The
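A sketch of the failure path being described (host and file names are
placeholders): the directory served by sftp-server is itself an sshfs mount,
so the rename goes through sftp-server's link()+unlink() sequence and fails
until the fallback to rename() is in place.
# on the server: the exported directory is an sshfs mount
sshfs otherhost:/export /srv/data
# from a client: rename a file inside that mount over sftp; without the
# fallback this fails because sftp-server's link() gets ENOSYS from sshfs
sftp -b - user@server <<'EOF'
rename /srv/data/old.txt /srv/data/new.txt
EOF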
2018 Jan 16
1
sshfs mounting on Centos 6.9
Hi all,
I am trying to mount an sshfs filesystem at boot.
I have tried this in /etc/fstab:
backup@myserver.com:/home/backup/myserver /backup fuse.sshfs
nonempty,allow_other 0 2
but it only works when the network is already up.
I have also tried this in my crontab:
@reboot sshfs -o idmap=backup myserver.com:/home/backup/myserver /backup
but doesn't seem to work either.
What else can I try?
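On CentOS 6 the usual missing piece for a boot-time sshfs mount is making it
wait for the network; a sketch (the key path and option values are
assumptions, not from the post) using _netdev so the netfs init script mounts
it once networking is up, plus an explicit key so root can authenticate
unattended:
# /etc/fstab
backup@myserver.com:/home/backup/myserver  /backup  fuse.sshfs  _netdev,allow_other,IdentityFile=/root/.ssh/id_rsa  0 0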
2005 Nov 24
2
FUSE/SSHFS RPM Packages.
All,
Does anyone know if there are any reputable repositories out there that
contain packages for fuse/sshfs?
Best Regards,
Camron
--
Camron W. Fox
Hilo Office
High Performance Computing Group
Fujitsu America, INC.
E-mail: cwfox at us.fujitsu.com
2008 Dec 30
1
Set connection timeouts?
Hello,
Perhaps you could give some information here or redirect me, because it was
not clear while reading the manuals: how can a connection timeout be set for
sshd?
The problem is that when a system is hibernated and then resumes, its
connections are dead. I have mostly worked around it, but it would be nice to
know...
Also, which version of ssh(d) supports df on sshfs?
I hope it is not a problem to enlighten
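On the timeout part of the question, the standard knobs are the keepalive
settings on both ends; a sketch with arbitrary values:
# /etc/ssh/sshd_config (server side): probe every 60s, drop the connection
# after 3 unanswered probes, so sessions from hibernated clients go away
ClientAliveInterval 60
ClientAliveCountMax 3
# ~/.ssh/config (client side): the same idea from the other end
Host *
    ServerAliveInterval 60
    ServerAliveCountMax 3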