similar to: ZFS Metadata on-disk grouping

Displaying 20 results from an estimated 1000 matches similar to: "ZFS Metadata on-disk grouping"

2010 Nov 11
8
zpool import panics
Hi, I just had my Dell R610 reboot with a kernel panic when I threw a couple of zfs clone commands at it in the terminal. Now, after the system rebooted, ZFS will no longer import my pool; instead the kernel panics again. I have had the same symptom on my other host, for which this one is basically the backup, so this one is my last line of defense. I tried to run zdb -e
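A recovery attempt in this situation might look like the sketch below, assuming a pool named tank (the name is an example). zpool import -F rewinds the pool to an earlier transaction group, -n makes it a dry run, and zdb -e examines an exported pool without importing it:

    # Dry-run recovery import: report what rewinding would discard.
    zpool import -F -n tank
    # If that looks sane, perform the actual rewind import.
    zpool import -F tank
    # Or inspect the exported pool without importing it at all.
    zdb -e tank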
2009 Oct 30
1
internal scrub keeps restarting resilvering?
After several days of trying to get a 1.5 TB drive to resilver while it continually restarted, I eliminated all of the snapshot-taking facilities which were enabled, and
2009-10-29.14:58:41 [internal pool scrub done txg:567780] complete=0
2009-10-29.14:58:41 [internal pool scrub txg:567780] func=1 mintxg=3 maxtxg=567354
2009-10-29.16:52:53 [internal pool scrub done txg:567999] complete=0
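One way to rule out the snapshot facilities is to disable the OpenSolaris auto-snapshot SMF instances for the duration of the resilver; the instance names below are the usual time-slider ones and may differ by build:

    # List and disable the auto-snapshot instances during the resilver.
    svcs -a | grep auto-snapshot
    for i in frequent hourly daily weekly monthly; do
        svcadm disable svc:/system/filesystem/zfs/auto-snapshot:$i
    done
    # Then watch whether the resilver still restarts.
    zpool status -v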
2006 Oct 31
0
6217236 pcfs module has incorrect FAT signature check (fix lint)
Author: wyllys
Repository: /hg/zfs-crypto/gate
Revision: c300512c0d2609e729a85ce9f318a9133e7a8b1e
Log message:
6217236 pcfs module has incorrect FAT signature check (fix lint)
6310335 mkfs_pcfs FAT size computation is wrong. (fix lint)
Files:
update: usr/src/cmd/fs.d/pcfs/mkfs/mkfs.c
update: usr/src/uts/common/fs/pcfs/pc_vfsops.c
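For context, the FAT signature being checked is the 0x55AA boot-sector marker at byte offset 510; a quick manual check (device path is an example) looks like:

    # A valid FAT boot sector ends its first 512 bytes with 55 aa.
    dd if=/dev/rdiskette0 bs=1 skip=510 count=2 2>/dev/null | od -An -tx1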
2006 Dec 18
1
zfs/fstyp slows down recognizing pcfs formatted floppies
I've noticed that fstyp on a floppy media formatted with "pcfs" now needs somewhere between 30 and 100 seconds to find out that the floppy is formatted with "pcfs". E.g. on sparc snv_48, I currently observe this:
% time fstyp /vol/dev/rdiskette0/nomedia
pcfs
0.01u 0.10s 1:38.84 0.1%
ZFS's /usr/lib/fs/zfs/fstyp.so.1 seems to add about 40 seconds to that
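To confirm the zfs prober is the slow part, one can time the pcfs module on its own; on Solaris each filesystem ships its own fstyp under /usr/lib/fs, so something like this (device path from the post) isolates it:

    # Probe with only the pcfs fstyp instead of the generic wrapper,
    # which tries every /usr/lib/fs/*/fstyp.so module in turn.
    time /usr/lib/fs/pcfs/fstyp /vol/dev/rdiskette0/nomedia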
2004 Dec 01
0
FreeBSD Security Advisory FreeBSD-SA-04:17.procfs
FreeBSD-SA-04:17.procfs Security Advisory
The FreeBSD Project
Topic: Kernel memory disclosure in procfs and linprocfs
Category: core
Module: sys
Announced:
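Assuming the standard mitigation for this class of advisory (taking the affected filesystems offline until a patched kernel is installed), the interim workaround would be roughly:

    # Unmount procfs and linprocfs until the kernel is patched;
    # also remove their /etc/fstab entries so they stay unmounted.
    umount /proc
    umount /compat/linux/proc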
2014 Feb 20
2
[PATCH] NTFS: fragmented $MFT file was not handled
NTFS $MFT file may be fragmented by itself (and actually is in most cases). However, such a situation was not handled. This patch add support for fragmented $MFT file. Signed-off-by: Andy Alex <andy at r-tt.com> --- diff -uprN syslinux-6.02.orig/core/fs/ntfs/ntfs.c syslinux-6.02/core/fs/ntfs/ntfs.c --- syslinux-6.02.orig/core/fs/ntfs/ntfs.c 2013-10-13 21:59:03.000000000 +0400 +++
2010 Sep 10
3
zpool upgrade and zfs upgrade behavior on b145
Not sure what the best list to send this to is right now, so I have selected a few; apologies in advance. A couple of questions. First, I have a physical host (call him bob) that was just installed with b134 a few days ago. I upgraded to b145 using the instructions on the Illumos wiki yesterday. The pool has been upgraded (27) and the zfs file systems have been upgraded (5). chris at bob:~# zpool
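The version numbers reported (pool 27, filesystem 5) can be cross-checked against what the running build supports; a quick sketch:

    # List every pool/filesystem version this build knows about...
    zpool upgrade -v
    zfs upgrade -v
    # ...and report anything still running an older version.
    zpool upgrade
    zfs upgrade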
2016 Mar 30
0
[PATCH 1/3] Rename icat command in download_inode
The "icat" name comes from the employed command line tool which might be replaced at any time with a different implementation. The command name is a bit confusing because it's similar to "cat" but act as "donwload". download_inode is more clear and descriptive. Signed-off-by: Matteo Cafasso <noxdafox@gmail.com> --- daemon/sleuthkit.c | 2
2007 Sep 14
3
space allocation vs. thin provisioning
Short question: I'm curious as to how ZFS manages space (free and used) and how its usage interacts with the thin provisioning provided by HDS arrays. Is there any effort to minimize the number of provisioned disk blocks that get writes, so as not to negate any space benefits that thin provisioning may give? Background & more detailed questions: In Jeff Bonwick's blog[1], he
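On the ZFS side, the allocation the array ultimately sees can be compared with what the datasets reference; a sketch on current builds (pool name is an example):

    # Pool-level allocation vs. dataset-level usage; blocks ZFS has
    # freed may still count as provisioned on a thin-provisioned array.
    zpool list -o name,size,alloc,free tank
    zfs list -o space tank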
2004 Dec 01
1
FreeBSD Security Advisory FreeBSD-SA-04:17.procfs
FreeBSD-SA-04:17.procfs Security Advisory
The FreeBSD Project
Topic: Kernel memory disclosure in procfs and linprocfs
Category: core
Module: sys
Announced:
2008 Sep 05
0
raidz pool metadata corrupted nexenta-core -> freenas 0.7 -> nexenta-core
I made a bad judgment call and now my raidz pool is corrupted. I have a raidz pool running on OpenSolaris b85. I wanted to try out FreeNAS 0.7 and tried to add my pool to FreeNAS. After adding the zfs disk, vdev and pool, I decided to back out and went back to OpenSolaris. Now my raidz pool will not mount, and I got the following errors. I hope some expert can help me recover from this.
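A first diagnostic step in this situation, assuming the device path below stands in for one of the raidz members, is to check whether the vdev labels are still intact:

    # zdb -l dumps the four vdev labels; readable labels mean the
    # pool configuration survives even if "zpool import" refuses it.
    zdb -l /dev/rdsk/c1t2d0s0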
2012 Feb 15
0
[GIT PULL] NTFS features and fixes
Hi all,
- One thing worth noting about these changes is the handling of the $ATTRIBUTE_LIST attribute, which is a rare attribute. When there are a lot of attributes and there is no more space in the MFT record, all those attributes that can be made non-resident are moved out of the MFT; this is where $ATTRIBUTE_LIST comes in.
- Most people must have seen the ugly "EDD
2016 Feb 29
0
[PATCH 2/2] added ntfscat_i tests
The test is based on the file signature: it checks whether the extracted file is the $MFT.
Signed-off-by: Matteo Cafasso <noxdafox@gmail.com>
---
Makefile.am | 1 +
configure.ac | 1 +
generator/actions.ml | 6 ++++-
tests/ntfscat/Makefile.am | 26 +++++++++++++++++++++
tests/ntfscat/test-ntfscat.sh | 53 +++++++++++++++++++++++++++++++++++++++++++
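The check the test performs can be reproduced by hand along these lines (image name is an example); every MFT record, including the $MFT itself at inode 0, begins with the magic bytes "FILE":

    # Extract inode 0 ($MFT) and confirm its record signature.
    guestfish --ro -a ntfs.img run : ntfscat-i /dev/sda1 0 /tmp/mft
    dd if=/tmp/mft bs=1 count=4 2>/dev/null   # should print: FILE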
2007 Apr 22
1
Metaslab allocation control?
I was wondering whether it's planned to give the user some control over metaslab allocation. What I have in mind is an attribute on a ZFS filesystem that acts as a modifier to the allocator. Scenarios for this would be directly controlling performance characteristics, e.g. having system and application files allocated on the inner side of the platter while pushing
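There is no user-facing modifier today, but the current metaslab layout and how full each one is can at least be observed, e.g. (pool name is an example):

    # Dump per-vdev metaslab usage to see where the allocator
    # is currently placing data.
    zdb -m tank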
2006 Oct 31
0
6410698 ZFS metadata needs to be more highly replicated (ditto blocks)
Author: billm
Repository: /hg/zfs-crypto/gate
Revision: 33640e100342f4a847c599f1a1671dda6faf4e05
Log message:
6410698 ZFS metadata needs to be more highly replicated (ditto blocks)
6410700 zdb should support reading raw blocks out of storage pool
6410709 ztest: spa config can change before pool export
Files:
update: usr/src/cmd/mdb/common/modules/zfs/zfs.c
update: usr/src/cmd/zdb/zdb.c
update:
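The zdb raw-block feature from the second bug in this log can be exercised roughly like this (pool name and offsets are made-up examples; offsets and sizes are in hex):

    # Read 0x200 bytes at offset 0x400000 of vdev 0 from pool "tank".
    zdb -R tank 0:400000:200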
2016 Mar 01
1
[PATCH] tests: move ntfs tests in a single directory
Move test-ntfscat.sh and test-ntfsclone.sh into a single ntfs directory, much like the tests for the other filesystems.
---
Makefile.am | 3 +-
configure.ac | 3 +-
tests/ntfs/Makefile.am | 27 +++++++++++++++++
tests/ntfs/test-ntfscat.sh | 53 +++++++++++++++++++++++++++++
tests/ntfs/test-ntfsclone.sh | 62
2011 Jul 10
3
How create a FAT filesystem on a zvol?
The `lofiadm' man page describes how to export a file as a block device and then use `mkfs -F pcfs' to create a FAT filesystem on it. Can't I do the same thing by first creating a zvol and then creating a FAT filesystem on it? Nothing I've tried seems to work. Isn't the zvol just another block device? -- -Gary Mills- -Unix Group-
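The zvol is indeed just another block device; the usual catch is that mkfs_pcfs wants fdisk geometry. A sketch that sidesteps this with nofdisk and an explicit size in 512-byte sectors (pool and volume names are examples):

    # Create a 100 MB zvol and lay a FAT filesystem directly on it.
    zfs create -V 100m tank/fatvol
    mkfs -F pcfs -o nofdisk,size=204800 /dev/zvol/rdsk/tank/fatvol
    mount -F pcfs /dev/zvol/dsk/tank/fatvol /mnt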
2012 Jun 18
1
Restore destroyed snapshot ???
OK, I am a butt-head and accidentally destroyed my last snapshot of a replicated ZFS dataset. The dataset is NOT mounted and, other than a resilver going on, there is no I/O to this dataset. Is there any way to roll back and get my latest snapshot back? From zpool history -i:
2012-06-18.10:34:00 zfs destroy xxx@1339668001
2012-06-18.10:34:00 [internal destroy txg:2213852] dataset =
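There is no supported way to undo a zfs destroy of a snapshot. A preventive measure for next time is a user hold, which makes destroy fail until the hold is released (names are examples):

    # A held snapshot cannot be destroyed until the hold is released.
    zfs hold keepme tank/ds@1339668001
    zfs destroy tank/ds@1339668001      # now fails: dataset is busy
    zfs release keepme tank/ds@1339668001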
2007 Jul 10
1
ZFS pool fragmentation
I have a huge problem with ZFS pool fragmentation. I started investigating the problem about two weeks ago: http://www.opensolaris.org/jive/thread.jspa?threadID=34423&tstart=0 I have found a workaround for now (changing recordsize), but I want a better solution. The best solution would be a defragmentation tool, but I can see that it is not easy. When a ZFS pool is fragmented: 1. the spa_sync function is
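The recordsize workaround mentioned above looks like the sketch below; note it only affects blocks written after the change, so existing fragmented data is untouched (dataset name and value are examples):

    # Match recordsize to the application's write size for new data.
    zfs set recordsize=8k tank/data
    zfs get recordsize tank/data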