similar to: rsync, --sparse and VM disk images

Displaying 20 results from an estimated 3000 matches similar to: "rsync, --sparse and VM disk images"

2009 Mar 18
1
Is it possible to make rsync aware of VMware split .vmdk's?
Hi, I am using rsync so that my customers have off-site disaster recovery of files from a VMware Server (under Linux). All works very well, but when I defragment the VMs (once a week), or when Exchange defragments its datastore, the disk layout of course changes, sometimes a lot. What do I do: - I am making a local copy with vmware-vdiskmanager to a USB disk in the split
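A minimal sketch of the rsync step being described, with hypothetical paths; --inplace makes rsync rewrite only the changed blocks of a large .vmdk on the receiver instead of recreating the whole file:

# sync the local vmware-vdiskmanager copy to the off-site host (paths hypothetical)
rsync -av --partial --inplace /mnt/usb/vm-staging/ backup@offsite:/backups/vm/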
2013 Jun 09
2
Re: libguestfs support snapshot based virutal disk analysis?
I've used QEMU to read .vmdk snapshots. The VM directory layout in my case (Fusion, and I've seen Workstation do the same) created a .vmdk file per snapshot, and qemu-img could use that .vmdk file and the base .vmdk to convert the disk image to a raw image. IIRC there is a manifest file that ties .vmdk files to the snapshot they represent. So, from my experience, qemu does read disk
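A sketch of the conversion described above, with hypothetical file names; qemu-img follows the snapshot's backing-file chain down to the base .vmdk on its own:

# flatten a snapshot .vmdk plus its base into a single raw image
qemu-img convert -O raw vm-000002.vmdk vm.raw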
2013 Jun 10
1
Re: libguestfs support snapshot based virutal disk analysis?
(Sorry, Rich, I managed to miss reply-all.) These VMDK files are difference files from a baseline VMDK file. I'm not familiar with ESX's storage, but the size indicators on these tell me they aren't raw. For a virtual disk that has 3.2GB of data in its current state, against 3.0GB of data in its baseline, the current state's .vmdk file is 200MB. --Alex On Mon, Jun 10, 2013 at
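To confirm that a given .vmdk is a difference file rather than a full copy, qemu-img can print its backing chain (file name hypothetical):

qemu-img info snap-delta.vmdk
# a "backing file:" line in the output points at the baseline .vmdk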
2013 Jun 10
0
Re: libguestfs support snapshot based virutal disk analysis?
On Sun, Jun 09, 2013 at 05:49:44PM -0400, Alexander Nelson wrote:
> I've used QEMU to read .vmdk snapshots. The VM directory layout in my case
> (Fusion, and I've seen Workstation do the same) created a .vmdk file per
> snapshot, and qemu-img could use that .vmdk file and the base .vmdk to
> convert the disk image to a raw image. IIRC there is a manifest file that
> ties
2012 Mar 02
1
nocow flags
I set the C (NOCOW) and z (Not_Compressed) flags on a folder, but the extent counts of files contained there keep increasing. Said files are large and frequently modified but not changing in size. This does not happen when the filesystem is mounted with nodatacow. I'm using this as a workaround since subvolumes can't be mounted with different options simultaneously, i.e. one with
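For reference, the C flag is set with chattr, and it only affects files created (or fully rewritten) after the flag is set on the directory; files that already exist keep doing CoW. A sketch with hypothetical paths:

chattr +C /data/vm-images      # new files created here will be NOCOW
lsattr -d /data/vm-images      # verify: a 'C' should appear in the flags
# existing files must be recreated to pick the flag up, e.g.:
cp --reflink=never disk.img disk.img.new && mv disk.img.new disk.img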
2013 Aug 01
3
filefrag and btrfs filesystem defragment and maybe snapshots
While exploring some btrfs maintenance with respect to defragmenting I ran the following commands:
# filefrag /path/to/34G.file /path/to/5.7G.file
/path/to/34G.file: 2406 extents found
/path/to/5.7G.file: 572 extents found
Thinking those mostly static files could be less fragmented, I ran:
# btrfs filesystem defragment -c /path/to/34G.file
# btrfs filesystem defragment -c /path/to/5.7G.file
and
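One caveat when reading such numbers: with btrfs compression (the -c used above), filefrag counts each compressed chunk (roughly 128K) as its own extent, so a compressed file can report many extents even when laid out contiguously. A hedged way to look closer, paths as above:

filefrag -v /path/to/34G.file | head   # per-extent layout, not just the count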
2009 Aug 07
0
sparse files patch question
Hello, I just came across the sparse-block patch. I'm using rsync to store VMware .vmdk virtual disks on a ZFS filesystem. vmdk files have large portions of zeroed data, and when thin provisioned (not yet used) they may even be sparse. On the target, after writing to ZFS, the zeroes are always stored/compressed efficiently, i.e. they take no additional space on ZFS. Is this patch worth a
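For context, a minimal sketch of the stock --sparse usage the patch builds on (paths hypothetical); --sparse makes the receiver seek over runs of zeroes instead of writing them out:

rsync -av --sparse /vm/guest.vmdk backup@nas:/tank/vmdk/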
2011 May 29
22
[Bug 8177] New: Problems with big sparsed files
https://bugzilla.samba.org/show_bug.cgi?id=8177
Summary: Problems with big sparsed files
Product: rsync
Version: 3.0.8
Platform: x64
OS/Version: Linux
Status: NEW
Severity: normal
Priority: P5
Component: core
AssignedTo: wayned at samba.org
ReportedBy: joluinfante at gmail.com
2010 Oct 31
6
Horrible btrfs performance due to fragmentation
On Mon, Oct 11, 2010 at 6:46 PM, Calvin Walton <calvin.walton@gmail.com> wrote:
> On Mon, 2010-10-11 at 03:30 +0300, Felipe Contreras wrote:
>> I use btrfs on most of my volumes on my laptop, and I've always felt
>> booting was very slow, but what's definitely slow is starting up
>> Google Chrome:
>>
>> encrypted ext4: ~20s
>> btrfs: ~2:11s
2010 Feb 08
1
Big send/receive hangs on 2009.06
So, I was running my full backup last night, backing up my main data pool zp1, and it seems to have hung. Any suggestions for additional data gathering?
-bash-3.2$ zpool status zp1
  pool: zp1
 state: ONLINE
status: The pool is formatted using an older on-disk format. The pool can still be used, but some features are unavailable.
action: Upgrade the pool using 'zpool
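Since the question is about additional data gathering: on a 2009.06 (OpenSolaris) system, a few standard tools can show where a hung send/receive sits. The exact invocations below are a sketch, not taken from the thread:

zpool status -v zp1                      # any I/O errors or stalled operations?
pgrep -fl zfs                            # find the hung send/recv PIDs
pstack <pid>                             # userland stack of the hung process
echo "::threadlist -v" | pfexec mdb -k   # kernel thread stacks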
2011 Mar 24
1
2.6.38 defragment compression oops...
I found that I'm able to provoke undefined behaviour on 2.6.38 with extent defragmenting plus recompression, e.g.:
mkfs.btrfs /dev/sdb
mount /dev/sdb /mnt
cp -xa / /mnt
find /mnt -print0 | xargs -0 btrfs filesystem defragment -vc
After a short time, I was seeing what looked like a secondary effect [1]. Reproducing with lock instrumentation reported recursive spinlock acquisition, probably
2012 Aug 03
2
no space left on device
Hi, I am new to btrfs, and just installed a new system with SLED 11 SP2 a few days ago. However, the system now seems to be in a really sad state, saying there is no space left on the device even though about 8GB remain free on the / filesystem. A defragment sometimes works, but then it goes back to the original state a day or two later. Other times the command just won't respond.
2011 Jul 19
6
[PATCH 0/6] Move the info for the help/man page in the source
The following series implements a way to generate the help messages and the btrfs man page from comments in the sources of the "btrfs" command. The syntax and the detailed help of every subcommand are stored in the comments before the function which implements that subcommand. The fact that the help messages and the man page are generated from the same source should help to avoid
2023 Feb 17
1
[PATCH] ocfs2: fix defrag path triggering jbd2 ASSERT
code path:
ocfs2_ioctl_move_extents
  ocfs2_move_extents
    ocfs2_defrag_extent
      __ocfs2_move_extent
        + ocfs2_journal_access_di
        + ocfs2_split_extent     // sub-paths call jbd2_journal_restart
        + ocfs2_journal_dirty    // crash by jbd2 ASSERT
crash stacks:
PID: 11297  TASK: ffff974a676dcd00  CPU: 67  COMMAND: "defragfs.ocfs2"
#0 [ffffb25d8dad3900] machine_kexec at ffffffff8386fe01
2017 Apr 04
0
100% CPU freeze on read of large files with --sparse
Hello, While restoring a large data backup that contained some big sparse-ish files (VMDK files, to be precise) using rsync 3.1.1, I found that adding the --sparse option can permanently wedge the rsync processes. I performed a few basic checks while it was happening (at one point I left it for a few days, so I suspect it can last more or less forever). * strace didn't show
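When strace shows nothing, the process is usually spinning in userspace, so a stack snapshot narrows it down; a generic sketch, not taken from the report:

# grab a one-shot backtrace of the spinning rsync
gdb -batch -ex 'bt' -p "$(pgrep -n rsync)"
# or sample it live if perf is available
perf top -p "$(pgrep -n rsync)"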
2007 Feb 06
4
Mongrel service will not start on win32 w/ --prefix option
All, I am in need of some help. I've run into a problem that I am not able to fix or even troubleshoot. I am trying to run Mongrel as a service on Win32. Running Mongrel as a service works fine until I change the configuration (using service::remove and service::install) to use --prefix. I must have this as I am running multiple webapps and app
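For anyone following along, the reconfiguration being described looks roughly like this under mongrel_service (service name, path, port, and prefix are all hypothetical):

mongrel_rails service::remove -N myapp
mongrel_rails service::install -N myapp -c c:\rails\myapp -p 4001 -e production --prefix /myapp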
2006 Mar 29
0
Strange Panic During Reboot
I'm getting the following panic during shutdown of a 6.1-PRERELEASE system cvsupped yesterday. Strangely, enabling DDB in the kernel "fixes" the problem. The computer is an old P120 (which I should probably replace). It panics every time DDB is disabled, but with DDB enabled it does not.
All buffers synced.
Uptime: 19m16s
(da0:ahc0:0:0:0): SYNCHRONIZE CACHE. CDB: 35
2008 Jul 22
1
NFS V4?
Looks like just starting the nfs service turns on V2, 3, and 4 (based on reading the script, reading the man pages, and looking at the ports using netstat -l). However, I can connect using -t nfs in the mount, while -t nfs4 fails. I don't believe this is a firewall issue; internal IPs are fully open to each other according to an early rule in iptables.
$ sudo mount host03:/home/ddb /mnt/ddb
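One common reason -t nfs4 fails where -t nfs works on servers of that era: NFSv4 resolves paths relative to the pseudo-root exported with fsid=0, not against the server's absolute path. A hedged sketch of both sides (paths hypothetical):

# server /etc/exports: a v4 pseudo-root, with the real tree bound under it
/exports        *(ro,fsid=0,sync)
/exports/home   *(rw,sync,nohide)

# client: the v4 mount path is relative to the fsid=0 root
sudo mount -t nfs4 host03:/home /mnt/ddb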
2013 May 11
4
Defragmentation of large files
Hi list, I have a few large image files (VMware Workstation VMDKs and TrueCrypt containers) which I routinely back up over the network to a btrfs raid10 volume via bigsync (https://code.google.com/p/bigsync/). The VM images in particular get really fragmented due to CoW, which is expected. I haven't yet switched off CoW on the backups directory, mainly to experiment and see what
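The usual knob for what is already fragmented is a recursive defragment; a sketch with hypothetical paths, with the caveat that on a volume holding snapshots, defragmenting unshares extents and can grow space usage:

btrfs filesystem defragment -r -t 32M /mnt/raid10/backups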
2007 Mar 29
1
mongrel and vista
Thanks for your reply, Luis
> Will be very helpful if you provide the logs (servicefb.log and
> mongrel_service.log) both located into ruby\bin directory.
---------------------------------------------------------------------------------
# Logfile created on 28/03/2007 17:54:32
native/process.bas:44, fb.process.spawn: Spawn() init
native/process.bas:50, fb.process.spawn: Success in