similar to: Filefrag

Displaying 20 results from an estimated 8000 matches similar to: "Filefrag"

2013 Aug 01
3
filefrag and btrfs filesystem defragment and maybe snapshots
While exploring some btrfs maintenance with respect to defragmenting I ran the following commands: # filefrag /path/to/34G.file /path/to/5.7G.file /path/to/34G.file: 2406 extents found /path/to/5.7G.file: 572 extents found Thinking those mostly static files could be less fragmented I ran: # btrfs filesystem defragment -c /path/to/34G.file # btrfs filesystem defragment -c /path/to/5.7G.file and
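A minimal follow-up sketch, reusing the paths from the report: filefrag -v prints every extent rather than just the count, and btrfs filesystem defragment takes a target extent size (-t) and a compression type (-c). Keep in mind that defragmenting a file that is also referenced by snapshots unshares its extents and so costs extra space.

    filefrag -v /path/to/34G.file                                    # list each extent, not just the total
    btrfs filesystem defragment -v -t 32M -czlib /path/to/34G.file   # larger target extents, zlib compression
    filefrag /path/to/34G.file                                       # re-check the count afterwards
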
2009 Feb 27
3
ext3 heavy file fragmentation with NFS write
Hello, Does anybody know how to avoid the file fragmentation when a file is created over NFSv3? A file created locally is OK: dd bs=32k if=/dev/zero of=test count=32x1024 conv=fsync filefrag test test: 10 extents found, perfection would be 9 extents When I create the file in the same dir, but from another machine, mounted over NFS: filefrag test test: 4833 extents found, perfection would be
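To see where the NFS-written file actually landed (rather than just the extent count), filefrag -v on the server is the quickest check; a sketch assuming the export is mounted at /mnt/nfs on the client and lives under /export on the server (both paths illustrative):

    # client: write the file over NFS as in the report
    dd bs=32k if=/dev/zero of=/mnt/nfs/test count=32x1024 conv=fsync
    # server: inspect the on-disk layout of the same file
    filefrag -v /export/test | tail -n 5
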
2011 Feb 16
2
ZFS utility like filefrag on Linux to help analyze the extent mapping
Hello All, I'd like to know if there is a utility like `filefrag' (shipped with e2fsprogs on Linux), which is used to fetch the extent mapping info of a file (especially a sparse file) located on ZFS? I am working on efficient sparse file detection and backup through lseek(SEEK_DATA/SEEK_HOLE) on ZFS, and I need to verify the result by comparing the original sparse file and
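Short of a real extent map, one coarse way to verify that the sparse regions survived the backup is to compare apparent size with allocated blocks on both copies; a sketch using GNU coreutils (file names illustrative, and the exact stat flags differ on Solaris):

    stat -c 'size=%s blocks=%b blocksize=%B' original.img backup.img
    du -B1 --apparent-size original.img backup.img   # apparent sizes
    du -B1 original.img backup.img                   # space actually allocated
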
2013 Feb 21
5
BTRFS fails defragging
Hi folks, I'm using Ubuntu 12.10 Quantal with # uname -r 3.5.0-24-generic And it seems I cannot defrag: # filefrag /boot/initrd.img-3.5.0-24-generic /boot/initrd.img-3.5.0-24-generic: 3 extents found # btrfs filesystem defrag /boot/initrd.img-3.5.0-24-generic # echo $? 20 # filefrag /boot/initrd.img-3.5.0-24-generic /boot/initrd.img-3.5.0-24-generic: 3 extents found Any clue
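One thing worth ruling out: btrfs filesystem defrag only operates on files that live on btrfs, and on many installs /boot is a separate ext filesystem, in which case the ioctl fails and the command exits non-zero. A quick hedged check:

    stat -f -c %T /boot                                                # print the filesystem type backing /boot
    btrfs filesystem defragment -v /boot/initrd.img-3.5.0-24-generic   # -v makes any error visible
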
2013 May 11
4
Defragmentation of large files
Hi list, I have a few large image files (VMware workstation VMDKs and TrueCrypt containers) which I routinely back up over the network to a btrfs raid10 volume via bigsync (https://code.google.com/p/bigsync/). The VM images in particular get really fragmented due to CoW, which is expected. I haven't yet switched off CoW on the backups directory mainly to experiment and see what
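The usual way to stop CoW-driven fragmentation for this kind of workload is to mark the backup directory nodatacow before the images are written; the attribute is only picked up by files created afterwards, so existing images have to be copied in again. A sketch with a hypothetical path:

    mkdir /mnt/backups/nocow
    chattr +C /mnt/backups/nocow   # new files created here are nodatacow
    lsattr -d /mnt/backups/nocow   # verify the 'C' attribute
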
2011 Oct 08
5
defrag makes fragmentation worse
Kernel 3.1-rc8 btrfs-progs-0.19 mount options: noatime,autodefrag (space_cache is enabled) There are snapshots present on the filesystem. When I do a btrfs fi defrag on a file, the file becomes much more fragmented. The end result can be a file with 20k times more fragments than before. Initially I thought the extents were just smaller but were next to each other, so I checked with both
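Before trusting the raw count, two hedged checks: filefrag -v shows whether consecutive extents are at least physically adjacent, and if compression is involved btrfs caps compressed extents at 128KiB, which inflates the extent count without implying a worse layout ('somefile' stands in for the file in question):

    filefrag -v somefile | head -n 30   # compare the physical offsets of consecutive extents
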
2013 Jan 31
3
/home on BTRFS on SSD, now highly fragmenting virtuoso database - use autodefrag?
Hi! Today I converted my /home from Ext4 to BTRFS by reformatting and copying everything over again. I created the filesystem with -l 16384 -n 16384 -d single -m single on a logical volume on an Intel SSD 320 and mount with compress=lzo,spacecache. Current state: merkaba:~> btrfs filesystem show failed to read /dev/sr0 Label: 'home' uuid: […] Total devices 1 FS bytes used
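Autodefrag can be turned on for the existing mount without a reboot; a sketch keeping the options already in use (note the canonical option name is space_cache, and the database path below is purely illustrative):

    mount -o remount,compress=lzo,space_cache,autodefrag /home
    chattr +C /home/user/virtuoso-db   # optional: nodatacow for the DB directory; affects only files created afterwards
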
2010 Nov 18
1
driver type for .vdi images?
Hello libvirt experts, I use libvirt (0.7.5-5ubuntu27.7) with KVM / qemu (0.12.3+noroms-0ubuntu9.2). Some machines use raw images but some use VirtualBox images (.vdi). Since the last Ubuntu 10.04 LTS update I cannot start the vdi images. The "driver name= type=" option was new. <devices> <emulator>/usr/bin/kvm</emulator> <disk type='file'
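For what it's worth, a disk element for a VDI-backed guest typically looks like the fragment below, assuming the qemu build in use has the vdi block driver (path and target device are illustrative):

    <disk type='file' device='disk'>
      <driver name='qemu' type='vdi'/>
      <source file='/var/lib/libvirt/images/guest.vdi'/>
      <target dev='vda' bus='virtio'/>
    </disk>
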
2005 Feb 07
2
mke2fs options for very large filesystems
Wow, it takes a really long time to make a 2TB ext2fs. Are there better-than-default options that could be used for a large filesystem? mke2fs 1.34 (25-Jul-2003) Filesystem label= OS type: Linux Block size=4096 (log=2) Fragment size=4096 (log=2) 244203520 inodes, 488382016 blocks 24419100 blocks (5.00%) reserved for the super user First data block=0 14905 block groups 32768 blocks per group,
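Most of the mkfs time goes into writing the inode tables up front, so the usual levers are fewer inodes and no reserved blocks; a sketch against a hypothetical device (the -T types come from /etc/mke2fs.conf):

    mke2fs -T largefile4 -m 0 /dev/sdb1         # one inode per 4 MiB instead of the default, no root-reserved blocks
    mkfs.ext4 -E lazy_itable_init=1 /dev/sdb1   # on current e2fsprogs/ext4, defer inode table init to first mount
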
2012 Apr 04
7
Driver domains communication protocol proposal
During some discussions and handwaving, including discussions with some experts on the Xenserver/XCP storage architecture, we came up with what we think might be a plausible proposal for an architecture for communication between toolstack and driver domain, for storage at least. I offered to write it up. The abstract proposal is as I understand the consensus from our conversation. The concrete
2011 Feb 23
1
Using Solaris iSCSI target in VirtualBox iSCSI Initiator
Hello, I'm using ZFS to export some iSCSI targets for the VirtualBox iSCSI initiator. It works OK if I install the guest OS manually. However, I'd like to be able to import my already prepared guest OS vdi images into the iSCSI devices, but I can't figure out how to do it. Each time I have tried, it cannot boot. It only works if I save the manually installed guest OS and re-instate the same
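One way to seed an already-prepared guest onto such a LUN, assuming the work is done from the VirtualBox host (file and device names illustrative): flatten the VDI to raw and copy it onto the block device the initiator exposes, then boot the VM from that disk instead of reinstalling.

    VBoxManage clonehd prepared-guest.vdi prepared-guest.raw --format RAW
    dd if=prepared-guest.raw of=/dev/sdX bs=1M conv=fsync   # /dev/sdX = the attached iSCSI LUN; double-check the device
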
2011 Apr 15
1
Errors attaching VBDs to dom0 VM
Hi, I've been touring XCP and making my way around it for the last few days, and am having problems with some reasonably straightforward actions. I've poured several hours into different ways of resolving the issue, but am coming up short. Here's the main issue. The following command (plugging a VBD into the dom0 VM), when run, hangs for several minutes, and then
2017 Oct 08
4
Re: Virtualbox vdi Input Format and man pages
[stef204 sent me the full log since it contains sensitive information] The log says that virt-v2v cannot see anything at all on the 34.1 GB disk (as if the disk is blank). However I think the actual problem is that you've given the wrong disk type in the XML: <disk type='file' device='disk'> <driver name='qemu' type='raw'/>
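If the file really is a VDI, the declared type in the XML has to match the real format; alternatively the XML can be skipped entirely by pointing virt-v2v at the disk image itself (paths illustrative):

    virt-v2v -i disk /path/to/guest.vdi -o local -os /var/tmp -of qcow2
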
2011 Nov 22
1
[file PATCH] Properly detect .vdi (VirtualBox disk image) files
The current test for .vdi files is incorrect. It tries to detect the string "<<< Sun xVM VirtualBox Disk Image >>>". However this string is just free text and .vdi files often contain different strings (ref: [1]). The correct test in this patch looks for the magic number at offset 0x40 in the file (ref: [2]). Example: Upstream 'file' without the patch on an
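For reference, the value being matched is the 32-bit signature 0xbeda107f at offset 0x40 (stored little-endian), so a manual check and the corresponding magic(5)-style rule look roughly like this (file name illustrative):

    xxd -s 0x40 -l 4 disk.vdi   # expect the bytes: 7f 10 da be
    # magic(5) rule, roughly:  0x40  lelong  0xbeda107f  VirtualBox Disk Image
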
2017 Aug 19
3
Virtualbox vdi Input Format and man pages
Hi, I am new to v2v/libguestfs. I need to convert a 30 GB virtual machine running Windows 7 64-bit (a guest on a Linux system) from VirtualBox vdi format to qcow2 (or raw/img, another debate in itself) so I can use libvirt/qemu/kvm to run it and completely migrate away from VirtualBox. The vdi machine is a mission-critical production environment and I cannot afford to mess it up, etc. Will keep
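For the format conversion itself (virt-v2v, discussed elsewhere in this thread, additionally handles the in-guest driver changes), the low-risk route is to convert a copy rather than the original; qemu-img reads VDI natively. File names below are illustrative:

    cp win7.vdi win7-copy.vdi                                      # work on a copy, never the production image
    qemu-img convert -p -f vdi -O qcow2 win7-copy.vdi win7.qcow2
    qemu-img info win7.qcow2                                       # sanity-check before defining a guest around it
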
2011 Jun 27
1
how to share vdi among several xcp pools
Hi, I have several XCP pools, each with NFS shared storage as the default SR, and I want to do the following in this arrangement: share VDIs among all the pools, i.e. for example detach a VDI from a VM in pool-1 and attach it to a VM in pool-2. Is this possible? VDI export is one solution, but I want to do it seamlessly, just like it happens within the same pool, or
2012 Jan 05
7
Blocking countries with shorewall
I'm currently getting a huge number of (failed) attempts to access my home server at UDP port 27845. I think most if not all the attacks come from China or Korea. I see there is a list of Chinese and Korean networks at <http://www.countryipblocks.net/country-blocks/>. Is there a standard way of using such a list in shorewall? -- Timothy Murphy e-mail: gayleard /at/ eircom.net
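One common approach, assuming a Shorewall release with blrules and ipset support and that cn.zone/kr.zone are the downloaded CIDR lists: load the networks into an ipset and reference the set from Shorewall instead of generating thousands of individual rules.

    ipset create geoblock hash:net
    for net in $(cat cn.zone kr.zone); do ipset add geoblock "$net"; done
    # then in /etc/shorewall/blrules (illustrative):  DROP  net:+geoblock  all
    shorewall restart
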
2013 Jun 09
2
libguestfs support snapshot based virtual disk analysis?
Hi all: I have browsed almost all of the libguestfs architecture and use it on Fedora 18. It really seems great! However: 1. No virtual-disk (*.vdi, *.vmdk...) based snapshot analysis code. Current libguestfs can recognize a virtual disk format (e.g. *.vdi) and the file system inside it (e.g. NTFS), but I find no support for snapshot virtual disks. Virtual machine software such as VirtualBox
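A hedged workaround while snapshot chains are not understood directly: flatten the chain with VirtualBox's own tooling into a standalone image that libguestfs can open (the snapshot file name is illustrative; VirtualBox keeps differencing VDIs under the VM's Snapshots/ directory):

    VBoxManage clonehd "Snapshots/{xxxx}.vdi" flattened.raw --format RAW
    guestfish --ro -a flattened.raw -i
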
2013 Oct 19
7
Lots of trouble hanging when rm files with many extents
Hello folks, I reported a bug here: https://bugzilla.kernel.org/show_bug.cgi?id=63071 but I am not sure if that was the right thing to do. This is producing OOM issues and leading to system crashes (including eventual panics) with such alarming frequency that I wonder if perhaps there is something different about my setup compared to others'. In a nutshell, I originally made the mistake of storing a
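A hedged mitigation sketch, not taken from the thread: shrink the file in stages before unlinking it, so the extent tree is torn down over several commits instead of one enormous transaction (path illustrative):

    for sz in 300G 200G 100G 0; do truncate -s "$sz" /path/to/hugefile; sync; done
    rm /path/to/hugefile
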
2017 Jun 08
1
[PATCH] lib: create: Allow any [[:alnum:]]+ string as a backingfmt parameter.
If you use the libguestfs tools which open disk images read-only (eg. virt-df), with formats such as 'vdi', then you will see an error: error: invalid value for backingformat parameter 'vdi' This is because opening a disk image read-only will try to create a qcow2 file with the original image as a backing file. However the list of permitted backing formats was very restrictive
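The overlay that the read-only tools create is essentially the following (a hand-rolled equivalent, file names illustrative), which is why the backing format string has to pass the validation being relaxed here:

    qemu-img create -f qcow2 -o backing_file=disk.vdi,backing_fmt=vdi overlay.qcow2
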