Displaying 20 results from an estimated 700 matches similar to: "rsync remote raw block device with --inplace"
2018 Dec 31
0
Aw: Re: rsync remote raw block device with --inplace
These responses have been very useful. Thanks especially to *Roland* (devzero at web.de), because... I'm installing diskrsync <https://github.com/dop251/diskrsync>. So
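A minimal sketch of how diskrsync is typically invoked (the device, user and host names are made up, and the exact flags should be checked against the project's README):
    # push a raw partition to a sparse image file on a remote host over ssh
    diskrsync --verbose /dev/sda3 backupuser@backuphost:/backups/sda3.img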
2018 Dec 30
3
Aw: Re: rsync remote raw block device with --inplace
> There have been addons to rsync in the past to do that but rsync really
> isn't the correct tool for the job.
why not the correct tool?
if rsync can do a great job of keeping two large files in sync between source and destination
(using --inplace), why should it (generally speaking) not also be used to keep two
block devices in sync?
maybe these links are interesting in that context:
2018 Dec 30
0
Aw: Re: rsync remote raw block device with --inplace
It was brought up before indeed:
https://lists.samba.org/archive/rsync/2012-June/027680.html
On 12/30/18 9:50 PM, devzero--- via rsync wrote:
>> There have been addons to rsync in the past to do that but rsync really
>> isn't the correct tool for the job.
> why not the correct tool?
>
> if rsync can do a great job of keeping two large files in sync between source and destination
>
2018 Sep 07
3
how "safe" is blockcommit ?
Hi,
currently i'm following https://wiki.libvirt.org/page/Live-disk-backup-with-active-blockcommit. I'm playing around with it and it seems to be quite nice.
What i want is a daily consistent backup of my image file of the guest.
I have the idea of the following procedure:
- Shutdown the guest (i can live with a downtime of a few minutes, it will happen in the night).
And i think
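A rough sketch of the snapshot/blockcommit cycle that wiki page describes (domain, disk and overlay file names here are hypothetical):
    # create a temporary external snapshot so the base image stops changing
    virsh snapshot-create-as --domain guest1 backup-snap \
        --diskspec vda,file=/var/lib/libvirt/images/guest1-overlay.qcow2 \
        --disk-only --atomic --no-metadata
    # ... copy the now-quiescent base image to the backup target ...
    # merge the overlay back into the base image and pivot the guest onto it
    virsh blockcommit guest1 vda --active --pivot
    rm -f /var/lib/libvirt/images/guest1-overlay.qcow2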
2018 Dec 30
2
rsync remote raw block device with --inplace
It would be very nice to be able to rsync the raw data content of, e.g., a
non-mounted disk partition, particularly in combination with --inplace.
Our reality: several dual-boot machines running Windows during the day and
Linux at night, during backups. Windows is very tedious and iffy to
reinstall without a raw disk image to start from. Disks fail, and the
ensuing downtime must be
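For comparison, the file-to-file case that --inplace already handles well looks roughly like this (paths and host are hypothetical); syncing the block device itself, without first imaging it to a file, is exactly what stock rsync does not do and what this thread asks about:
    # keep a large raw image in sync, rewriting only changed blocks in place
    rsync --inplace --no-whole-file --partial --progress \
        /backup/windows-sda3.img backupuser@backuphost:/images/windows-sda3.img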
2012 Mar 09
2
iotop :: OSError: Netlink error: Invalid argument (22)
Hi! I have a problem with iotop:
root at alien: ~ # iotop
Traceback (most recent call last):
File "/usr/bin/iotop", line 16, in ?
main()
File "/usr/lib/python2.4/site-packages/iotop/ui.py", line 567, in main
main_loop()
File "/usr/lib/python2.4/site-packages/iotop/ui.py", line 557, in <lambda>
main_loop = lambda: run_iotop(options)
File
2018 Sep 08
1
Re: how "safe" is blockcommit ?
Il 07-09-2018 21:26 Eric Blake ha scritto:
> We're also trying to add support for incremental backups into a future
> version of libvirt on top of the qemu 3.0 feature of persistent
> bitmaps in qcow2 images, which could indeed guarantee that you
> transfer only the portions of the guest disk that were touched since
> the last backup. But as that's still something I'm
2020 May 01
0
io_uring cause data corruption
On 2020-04-30 22:56, Jeremy Allison via samba wrote:
> On Thu, Apr 30, 2020 at 10:25:49AM +0200, A L wrote:
>
>> So I did some more tests. smbclient mget does not copy in the same way
>> Windows Explorer does. When copying in Windows Explorer, there are many
>> concurrent threads used to transfer the files. With smbclient mget
>> there are no corruptions,
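For reference, the kind of single-connection smbclient transfer being compared here looks roughly like this (server, share and user are hypothetical; behaviour of the toggles may vary slightly by Samba version):
    # toggle recursion on and prompting off, then fetch everything sequentially
    smbclient //fileserver/share -U someuser -c 'recurse; prompt; mget *'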
2016 Dec 05
0
Huge write amplification with thin provisioned logical volumes
Hi,
I've noticed a huge write amplification problem with thinly provisioned
logical volumes and I wondered if anyone can explain why it happens and if
and how it can be fixed. The behavior is the same on CentOS 6.8 and CentOS
7.2.
I have an NVMe card (Intel DC P3600, 2 TB) on which I create a thinly
provisioned logical volume:
pvcreate /dev/nvme0n1
vgcreate vgg /dev/nvme0n1
lvcreate
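The truncated lvcreate step typically involves two commands, roughly like the following (pool/volume names and sizes are illustrative, not taken from the original post):
    # carve a thin pool out of the VG, then a thin volume inside it
    lvcreate --type thin-pool -L 1.5T -n pool0 vgg
    lvcreate --type thin -V 1T -n thinvol --thinpool pool0 vgg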
2019 Nov 11
0
cli Checking disk i/o
On 11/11/19 1:37 PM, Robert Moskowitz wrote:
> OK. That is interesting. I am assuming tps is transfers per sec?
>
> I would have to get a stopwatch, but it seems to go for a bit of time, and
> then a write.
>
> Is there something that would accumulate this and give me a summary over
> some period of time? Of course it better NOT be doing its own IOs...
I like iostat -x 4
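To get a summary over a period rather than a rolling display, iostat's interval/count form can be used (the first report is the average since boot, each later report covers only the preceding interval):
    # one since-boot report, then one report averaged over a 5-minute window
    iostat -x 300 2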
2009 Jan 27
1
paravirtualized vs HVM disk interference (85% vs 15%)
Hi,
We have found that there is a huge degradation in performance when doing I/O to disk images contained in single files from a paravirtualized domain and from an HVM at the same time.
The problem was found on a Xen box with Fedora 8 x86_64 binaries installed (Xen 3.1.0 + dom0 Linux 2.6.21). The test hardware was a rack-mounted server with two 2.66 GHz Xeon X5355 CPUs (4 cores each, 128 Kb L1
2011 May 04
4
Finding which files are written to
Hi !
I have a server (CentOS 5) that is using a pair of SAS drives to store the
data. (Mail server) They are on an Adaptec RAID controller with a battery
backup and write-back cache active.
From time to time, I have severe peak I/O to those data disks (> 400 to 500
iops, > 70 to 100 megs/sec).
With iostat, I find that it's almost entirely a write I/O problem. How can I find to
which files
2014 May 06
0
poor write performance or locking issues with ocfs2
Hello all,
I've got serious trouble with my ocfs2 environment. The cluster filesystem worked fine for about 3-6 weeks after the initial setup, but for the past week performance issues have been occurring. I've already searched a long time on Google and on this mailing list, but I wasn't able to find any solution. I've found a lot of posts with the "same" problems but without the magic answer :-)
2020 Sep 10
0
Btrfs RAID-10 performance
"Miloslav" == Miloslav H?la <miloslav.hula at gmail.com>
<miloslav.hula at gmail.com> writes:
Miloslav> Dne 09.09.2020 v 17:52 John Stoffel napsal(a):
Miloslav> There is one PCIe RAID controller in a chassis: an AVAGO
Miloslav> MegaRAID SAS 9361-8i, with 16x SAS 15k drives connected to
Miloslav> it. Because the controller does not support pass-through for
2013 Feb 02
1
shutting down a windows guest takes ages..... (20-34 minutes here)
hi,
i use libvirt 1.0.2 (-r1 , gentoo linux).
when i create a vm with win7 guest, virtio nic, virtio hdd, all is
running fine.
but when i shut down the windows guest, it sometimes takes 20-34
minutes !!!
iotop shows me writing at 2.xx MB/sec the whole time.
is the complete machine "rewritten" to disk?
i use disk images on a raid6, but hey, dd shows me it can write there
with
2013 Jun 11
1
btrfs-transacti:1014 blocked for more than 120 seconds.
Hey,
I've a 2x4TB RAID1 setup with btrfs on kernel 3.8.0. Under high I/O load
(BackupPC dump or writing a large file over gigabit) I get messages in
syslog such as the one mentioned in the subject.
The full non-logcheck-ignored log is under [1].
A BackupPC dump between the same exact machines onto a 2TB ext4 volume
takes 90 minutes on average; the process on the btrfs volume took 465
2006 Apr 07
1
dtrace: invalid probe specifier
Hello,
I'm a newbie to dtrace and have just installed the DTrace toolkit v0.92 on
a core Solaris 10 1/06 installation on a SUN v40z.
I have tried the following commands:
iotop
iosnoop
but I get the message
dtrace: invalid probe specifier
and a lot of code. At the end it says:
: in action list: failed to resolve uid: Unknown variable name
Could it be that there are some
2019 Feb 27
1
performance issue with UID SEARCH
Hi,
I'm running dovecot 2.2.x and I'm having an issue where I see many
dovecot processes use all the available IO on a server. According to
iotop the worst offenders seem to be in this state (NOTE: I swapped in
phony username & IP info):
dovecot/imap [someusername 123.456.789.012 UID SEARCH]
The server in question is running with Maildirs on top of an XFS
filesystem. Is there
2020 Feb 03
3
Hard disk activity will not die down
I updated my backup server this weekend from CentOS 7 to CentOS 8.
OS disk is SSD, /dev/md0 are two 4TB WD mechanical drives.
No hardware was changed.
1. wiped all drives
2. installed new copy of 8 on system SSD
3. re-created the 4TB mirror /dev/md0 with the same WD mechanical drives
4. created the largest single partition possible on /dev/md0 and formatted
it ext4
5. waited several hours for the
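A sketch of what steps 3 and 4 typically look like on the command line (the member device names and the GPT label are assumptions, not taken from the original post):
    # re-create the two-disk mirror, then partition and format it
    mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sdb /dev/sdc
    parted -s /dev/md0 mklabel gpt mkpart primary ext4 0% 100%
    mkfs.ext4 /dev/md0p1
    cat /proc/mdstat   # the initial resync after --create can take hours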