similar to: rsync behavior on copy-on-write filesystems

Displaying 20 results from an estimated 200 matches similar to: "rsync behavior on copy-on-write filesystems"

2013 Jan 11
4
Does rsync need a --ignore-unreadable-files option?
I work on software that archives gigabytes of files to multiple sites. Occasionally one or two files have no read permissions: % ls -l dir/foo --w-------+ 1 abcserve myusers 11222 Jan 10 03:14 The error message is: rsync: send_files failed to open "/dir/foo" (in xxx): Permission denied (13) rsync error: some files/attrs were not transferred (see previous errors) (code 23) at
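One workaround that avoids a new option is to pre-filter the unreadable files and hand them to rsync as excludes; a minimal sketch, assuming GNU find (for the -readable test) and the dir/foo layout from the post:

    # list files the sending user cannot read, relative to the source directory
    find dir -type f ! -readable -printf '%P\n' > /tmp/unreadable.txt
    # exclude them so the run exits cleanly instead of with code 23
    rsync -a --exclude-from=/tmp/unreadable.txt dir/ remote:/archive/dir/

This skips the offending files without touching permissions on the archive itself.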
2004 Aug 06
0
alias mount points
On Wed, 2004-05-05 at 08:51, Warren J. Beckett wrote: > I want to set up a site where I can allocate 2 hour time slots to anyone > that wants to play their tunes. My idea was to set up a number of mount > points, each with a username and password, and dish these out, along with > an allocated time slot, to the DJs that have booked. This can all be done > with a wee bit of CGI - No
2018 Sep 18
0
Duplicate mails on pop3 expunge with dsync replication on 2.2.35 (2.2.33.2 works)
Hi, Has anyone any idea how to solve or further debug this issue? It seems that it was indeed introduced in 2.2.34 and is still there in 2.3.2.1. I found a couple of posts about this on the mailing list and elsewhere, but no solution: When a message is retrieved and immediately expunged, it gets replicated back from the other dsync node. This usually happens with POP3, but it can happen with IMAP as well,
2009 Oct 09
6
disk I/O problems and Solutions
Hey folks, CentOS / PostgreSQL shop over here. I'm hitting 3 of my favorite lists with this, so here's hoping that the BCC trick is the right way to do it :-) We've just discovered thanks to a new Munin plugin http://blogs.amd.co.at/robe/2008/12/graphing-linux-disk-io-statistics-with-munin.html that our production DB is completely maxing out in I/O for about a 3 hour stretch from
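Before tuning anything, it helps to confirm which device and which side (reads or writes) is saturating during that window; a minimal sketch with sysstat's iostat, plus iotop if it is installed (neither tool is mentioned in the post):

    # extended per-device statistics every 5 seconds; watch %util, await, and r/s vs w/s
    iostat -x 5
    # per-process view of who is issuing the I/O (needs a reasonably recent kernel)
    iotop -o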
2008 Dec 12
2
Really slow performance
I am seeing extremely slow performance with glusterfs. OS: CentOS 5 glusterfs version: glusterfs-1.3.9-1 Server configuration: ############################################## ### GlusterFS Server Volume Specification ## ############################################## #### CONFIG FILE RULES: ### "#" is comment character. ### - Config file is case sensitive ### - Options within a
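The volume spec is cut off above; for context, client-side specs of that era usually stacked performance translators over protocol/client, and their absence was the usual first suggestion for slow runs. A rough sketch in the 1.3-style vol-file syntax (hostname, volume names, and the aggregate size are invented for illustration):

    volume remote
      type protocol/client
      option transport-type tcp/client
      option remote-host 192.168.1.10
      option remote-subvolume brick
    end-volume

    volume writebehind
      type performance/write-behind
      option aggregate-size 1MB
      subvolumes remote
    end-volume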
2016 Mar 03
1
Live migration - backing file
Hi! I'm testing live migration on libvirt + KVM; the VMs use non-shared local storage only. If I run a live migration with --copy-storage-full, the final disk file on the remote host has the full specified size (10G in my case) after the migration, instead of the few MB it occupied on the source host before the migration. Running qemu-img I can see that the ref for the backing
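qemu-img shows whether the backing-file reference survived the copy; a minimal sketch of how that is usually checked, with paths invented for illustration:

    # on the source and destination: compare 'backing file' and 'disk size'
    qemu-img info /var/lib/libvirt/images/vm1.qcow2
    # pre-creating a thin overlay on the destination that points at an already
    # transferred base image keeps the copied data small
    qemu-img create -f qcow2 -b /var/lib/libvirt/images/base.img /var/lib/libvirt/images/vm1.qcow2

Combined with virsh migrate's incremental storage copy (--copy-storage-inc), this avoids re-sending the full 10G.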
2013 Jan 18
8
migrate from physical disk problems in xen
I've been trying to migrate a win nt 4 machine to a xen domu for the past few months with no success. However, on my current attempt, the original hardware no longer boots, so I'm trying to resolve the issues with xen properly, or else take a long holiday... Anyway, the physical machine had a 9G drive (OS drive), a 147 G drive (not in use) and a 300G drive (all SCSI Ultra320 on
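For reference, the usual route for a P2V like this is to image each physical disk into a file (or LV) and point the domU config at the images; a rough sketch, with device names and paths invented for illustration:

    # image the 9G OS disk into a file the HVM domU can boot from
    dd if=/dev/sda of=/var/lib/xen/images/nt4-os.img bs=1M conv=noerror,sync

    # domU config fragment
    disk = [ 'file:/var/lib/xen/images/nt4-os.img,hda,w',
             'file:/var/lib/xen/images/nt4-data.img,hdb,w' ]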
2013 Jun 10
1
Re: virsh snapshot-create and blockcopy
Am 10.06.13 10:40, schrieb Kashyap Chamarthy: > On 06/10/2013 01:20 PM, Thomas Stein wrote: >> Am 10.06.13 09:07, schrieb Kashyap Chamarthy: >>> On 06/09/2013 02:46 PM, Thomas Stein wrote: >>>> Hello. >>>> >>>> I just tried the following: >>>> >>>> virsh dumpxml --security-info gentoo-template > gentoo-template.xml
2012 Jun 19
1
CentOS 6.2 on partitionable mdadm RAID1 (md_d0) - kernel panic with either disk not present
Environment: CentOS 6.2 amd64 (min. server install) 2 virtual hard disks of 10GB each Linux KVM Following the instructions on CentOS Wiki <http://wiki.centos.org/HowTos/Install_On_Partitionable_RAID1> I installed a min. server in Linux KVM setup (script shown below) <script> #!/bin/bash nic_mac_addr0=00:07:43:53:2b:bb kvm \ -vga std \ -m 1024 \ -cpu core2duo \ -smp 2,cores=2 \
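The snippet cuts off before the array itself is built; for reference, a partitionable md device of the kind that wiki page uses is created with mdadm's --auto=mdp. A sketch, with the two disk names assumed from the 2-disk KVM setup:

    # create a partitionable RAID1 across the two virtual disks
    mdadm --create /dev/md_d0 --auto=mdp --level=1 --raid-devices=2 /dev/sda /dev/sdb
    # partitions then show up as /dev/md_d0p1, /dev/md_d0p2, ...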
2007 Dec 06
0
LVM2: large volume problem?
Hi all, I'm having problems creating/resizing an LV up to 1T (in fact I can't even reach 300G). My system is CentOS 5.1 x86_64 on a Dell 2950 with 6x500G SATA (RAID5, approx. 2.5T) [root at Mugello ~]# fdisk -l Disk /dev/sda: 2497.7 GB, 2497791918080 bytes 255 heads, 63 sectors/track, 303672 cylinders Units = cylinders of 16065 * 512 = 8225280 bytes Device Boot Start
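For reference, the plain sequence for carving a ~1T LV out of a single large PV looks like the sketch below (partition, volume group, and LV names are placeholders, not from the thread):

    pvcreate /dev/sda3
    vgcreate vg_data /dev/sda3
    lvcreate -L 1T -n lv_big vg_data
    mkfs.ext3 /dev/vg_data/lv_big

Note that an msdos partition table cannot describe a partition that ends beyond 2TB, which on a ~2.5T /dev/sda is a common reason such setups stop short; labelling the disk GPT (parted /dev/sda mklabel gpt) or handing the whole unpartitioned device to pvcreate avoids that limit.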
2010 Aug 24
0
Booting CentOS 5.5 (KVM) from a second disk
Hi all! Doing some tests with CentOS 5.5 on a KVM virtual machine, after doing the installation, I added a second disk. But when trying to boot from it, I get the following error: --------------------------------------------------------------------- root (hd1,0) Error 21: Selected disk does not exist --------------------------------------------------------------------- The two disks are
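Error 21 from GRUB legacy means the firmware (here, KVM's BIOS) does not expose the disk GRUB was pointed at; a common fix is to install GRUB onto the disk that is actually booted and address it as (hd0). A sketch from the grub shell, with the device path assumed:

    grub> device (hd0) /dev/sdb    # treat the second disk as BIOS disk 0
    grub> root (hd0,0)             # the /boot partition on that disk
    grub> setup (hd0)              # write stage1 into its MBR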
2013 Jun 10
0
Re: virsh snapshot-create and blockcopy
On 06/10/2013 01:20 PM, Thomas Stein wrote: > Am 10.06.13 09:07, schrieb Kashyap Chamarthy: >> On 06/09/2013 02:46 PM, Thomas Stein wrote: >>> Hello. >>> >>> I just tried the following: >>> >>> virsh dumpxml --security-info gentoo-template > gentoo-template.xml >>> virsh snapshot-create gentoo-template >>> virsh undefine
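For context, blockcopy in that libvirt generation only works on a transient domain, which is why the undefine appears in the quoted sequence; a minimal sketch of the remaining steps, with the destination path invented:

    virsh undefine gentoo-template            # make the running domain transient
    virsh blockcopy gentoo-template vda \
        /var/lib/libvirt/images/gentoo-copy.img --wait --verbose --finish
    virsh define gentoo-template.xml          # restore the persistent definition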
2010 Aug 24
1
Booting CentOS 5.5 (KVM) from a second disk
Hi all! Doing some tests with CentOS 5.5 on a KVM virtual machine, after doing the installation, I added a second disk. But when trying to boot from it, I get the following error: --------------------------------------------------------------------- root (hd1,0) Error 21: Selected disk does not exist
2011 Feb 01
3
centos 4.8 or centos 5.5 for server is great?
hi, I am new to the Linux world. I made a server (CentOS 5.5, 8G RAM, 2 x 300G 15k SAS hard discs), but a friend of mine who has used Linux for several years advised me to use CentOS 4.8; he said it is much better than CentOS 5.5. This troubles me: is the newest always better than the older one? Please give me some advice. Thanks all
2011 Jun 27
3
unofficial ext3 and ext4 compare
I have something like 300G that I routinely back up. This includes some large 12Gig images and other files. I had been using ext3 on an external USB disk for part of the process. Under ext3, "rsync -a /home /mnt/external_back/backup.jun.27.2011" took 200 minutes. I took the same computer and the same external HD and reformatted it for ext4 (mkfs.ext4 /dev/sdd1). I then started the same
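To make the comparison repeatable, the same rsync run can be timed against each freshly made filesystem; a sketch, with the device and paths taken loosely from the post:

    mkfs.ext4 /dev/sdd1
    mount /dev/sdd1 /mnt/external_back
    time rsync -a /home /mnt/external_back/backup.jun.27.2011

Dropping the page cache between runs (echo 3 > /proc/sys/vm/drop_caches) keeps the second run from being flattered by cached source data.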
2020 Oct 07
1
dovecot 2.3.11.3 namespace/ACL shared folder not accessible in sharing-user's Mail folder tree? have a working config?
I'm running dovecot --version 2.3.11.3 (502c39af9) I'm setting up folder sharing. Following https://wiki.dovecot.org/SharedMailboxes/Shared I've configured a folder to be shared, but it's not seen/accessible in the target user's Mail folder tree. My config includes, mail_plugins = virtual acl protocol imap { mail_plugins = $mail_plugins imap_acl imap_quota
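The config shown cuts off before any namespace block; per the wiki page referenced, a shared namespace along the following lines (maildir layout and dict path are assumptions, not from the poster's config) also has to be present before shared folders appear in the other user's tree:

    namespace {
      type = shared
      separator = /
      prefix = shared/%%u/
      location = maildir:%%h/Maildir:INDEXPVT=~/Maildir/shared/%%u
      subscriptions = no
      list = children
    }

    plugin {
      acl = vfile
      acl_shared_dict = file:/var/lib/dovecot/shared-mailboxes
    }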
2023 Apr 12
2
Matrix scalar operation that saves memory?
Hi all, I am currently working with a quite large matrix that takes 300G of memory. My computer only has 512G of memory. I need to do some arithmetic on it with a scalar value. My current code looks like this: mat <- 100 - mat However such code quickly uses up all of the remaining memory and gets the R script killed by the OOM killer. Is there a more memory-efficient way of doing
2008 Apr 21
1
rejecting I/O to offline device (PERC woes)
Haven't gotten any tips on a solution to the problem below. It happened again this weekend. My next test steps (order not determined): 1. Downgrade to CentOS 4 2. Swap out PERC controller with a spare I have never had a problem with the PERC4/DC controllers on our other machines (RHEL3/4, CentOS 4). Although, I've no other machine that has 5 300G Fujitsu SCSI drives either. Any
2023 Apr 12
1
Matrix scalar operation that saves memory?
I doubt that R's basic matrix capabilities can handle this, but have a look at the Matrix package, especially if your matrix is of some special form. Bert On Tue, Apr 11, 2023, 19:21 Shunran Zhang <szhang at ngs.gen-info.osaka-u.ac.jp> wrote: > Hi all, > > I am currently working with a quite large matrix that takes 300G of > memory. My computer only has 512G of memory. I
2023 Apr 12
1
Matrix scalar operation that saves memory?
The example given does not leave room for even a single copy of your matrix, so yes, you need alternatives. Your example is fairly trivial, as all you want to do is subtract each value from 100 and replace it. Something like squaring a matrix, by contrast, cannot be done trivially without multiple copies that won't fit. One technique that might work is a nested loop that changes one
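A sketch of that blockwise idea in R, assuming mat is an ordinary numeric matrix; it only ever materialises one block of columns as a temporary instead of a second full copy:

    # overwrite 'mat' one block of columns at a time (block size is arbitrary)
    block <- 1000L
    for (start in seq(1L, ncol(mat), by = block)) {
      end <- min(start + block - 1L, ncol(mat))
      mat[, start:end] <- 100 - mat[, start:end]
    }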