search for: dnodes

Displaying 20 results from an estimated 43 matches for "dnodes".

2007 Feb 06
4
The ZFS MOS and how DNODES are stored
...not a combination of the two. This seems sensible to me, but the description of object sets beginning on page 26 of the ZFS On-Disk Specification states that the DNODE type DMU_OT_DNODE (the type of the DNODE that's included in the 1KB objset_phys_t structure) will have a data payload of an array of DNODES allocated in 128KB blocks, and the picture (Illustration 12 in the spec) shows these blocks as containing 1024 DNODES. Since DNODES are 512 bytes, it would not be possible to fit the 1024 DNODES depicted in the illustration, and if DNODES did live in such an array then they could not be atomically...
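A quick sanity check of the arithmetic in that post; a minimal sketch, assuming sizeof (dnode_phys_t) == 512 as the spec states:

#include <stdio.h>

int main(void) {
    const int dnode_size = 512;          /* bytes per dnode_phys_t, per the spec */
    const int block_size = 128 * 1024;   /* metadnode array block size */

    /* 131072 / 512 = 256, not the 1024 shown in Illustration 12 */
    printf("%d dnodes per 128K block\n", block_size / dnode_size);
    return 0;
}

The illustration's count of 1024 would only work out if dnodes were 128 bytes each; at 512 bytes, a 128KB block holds 256.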
2007 Jul 23
12
GRUB, zfs-root + Xen: Error 16: Inconsistent filesystem structure
Hi Lin, In addition to bug 6541114... Bug ID 6541114 Synopsis GRUB/ZFS fails to load files from a default compressed (lzjb) root ... I found yet another way to get the "Error 16: Inconsistent filesystem structure" from GRUB. This time when trying to boot a Xen Dom0 from a zfs bootfs Synopsis: grub/zfs-root: cannot boot xen from a zfs root
2010 Sep 17
3
ZFS Dataset lost structure
After a crash, some datasets in my zpool tree report this when I do an ls -la: brwxrwxrwx 2 777 root 0, 0 Oct 18 2009 mail-cts. The same happens if I do zfs set mountpoint=legacy on the dataset and then mount it at another location. Before the crash the directory tree was only: dataset - vdisk.raw. The file was a backing device of a Xen VM, but I cannot access the directory structure of this dataset. However I
2006 Jan 03
4
zfs object sets and datasets
Hi All, I am looking at trying to understand the conceptual model of how data is grouped and partitioned within a storage pool. Looking at the on-disk document that Tabriz sent out a few weeks ago, I see that object sets are the grouping which ZFS uses to collect related objects, specifically to aid in the format and layout of like objects in a set. So, for example 1 potential object
2004 Jun 17
2
using "= matrix (...)" in .C calls
Dear R-devel, I am trying to alter rpart so that it makes additional calculations when growing the tree. In "rpart.s" there is a call to the C routine: rp <- .C("s_to_rp2", as.integer(nobs), as.integer(nsplit), as.integer(nodes), as.integer(ncat),
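For context on the .C interface this post is using: every as.integer() argument arrives in C as an int*, and .C cannot allocate results itself, so any output matrix must be pre-allocated on the R side. A hedged sketch of what the C entry point could look like (the signature beyond the four quoted arguments is unknown; this is not the real rpart source):

/* Hypothetical counterpart of .C("s_to_rp2", ...): each as.integer()
 * argument is passed as a pointer into a copied buffer. */
void s_to_rp2(int *nobs, int *nsplit, int *nodes, int *ncat)
{
    /* Results must be written into caller-supplied buffers;
     * "name = matrix(...)" on the R side pre-allocates that storage
     * and names the element of the list that .C returns, with the
     * matrix dim attribute preserved on the copied-back value. */
    for (int i = 0; i < *nobs; i++) {
        /* ... fill the output arrays here ... */
    }
}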
2006 Nov 02
11
ZFS and memory usage.
ZFS runs really stably on FreeBSD, but my biggest problem is how to control ZFS memory usage. I've no idea how to leash that beast. FreeBSD has a backpressure mechanism: I can register my function so it will be called when there are memory problems, which I do; I use it for the ARC layer. Even with this in place, under heavy load the kernel panics, because memory with KM_SLEEP
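The back-pressure registration described above looks roughly like this on FreeBSD; a sketch assuming the stock vm_lowmem eventhandler, with arc_kmem_reclaim() standing in for whatever reclaim routine the port actually calls:

#include <sys/param.h>
#include <sys/eventhandler.h>

static eventhandler_tag arc_lowmem_tag;

static void
arc_kmem_reclaim(void)
{
    /* placeholder: evict cached buffers from the ARC */
}

static void
arc_lowmem(void *arg __unused, int howto __unused)
{
    /* invoked by the VM system when it detects memory pressure */
    arc_kmem_reclaim();
}

static void
arc_lowmem_register(void)
{
    arc_lowmem_tag = EVENTHANDLER_REGISTER(vm_lowmem, arc_lowmem,
        NULL, EVENTHANDLER_PRI_FIRST);
}

As the post implies, the hook is advisory: it may fire too late to save an allocation that has already committed the kernel to sleeping for memory.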
2012 Feb 26
3
zfs diff performance
I had high hopes of significant performance gains using zfs diff in Solaris 11 compared to my home-brew stat-based version in Solaris 10. However, the results I have seen so far have been disappointing. Testing on a reasonably sized filesystem (4TB), a diff that listed 41k changes took 77 minutes. I haven't tried my old tool, but I would expect the same diff to take a couple of
2006 Aug 31
3
Find the difference between two snapshots
Hi everyone, Is there an easy way to find out which files have changed between two snapshots? Currently I'm doing a # rsync -arvn <snapshot1> <snapshot2> and it creates a list, but rsync needs to go through the whole filesystem and compare files. It would be nice if zfs had this option built in. Regards, Nickus
2002 Apr 25
1
understanding and resolving seg faults
Dear r-devel, I am mutating rpart to do calculations on trees. I am trying to extract information from the tree. However, I got a seg fault. This is the offending line in "rpmatrix.c": deltaI[0][0] = spl->improve; (commenting it out cures the seg fault). I would like some advice on how to debug this. I have allocated memory with calloc and deltaI[0][0] should be
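A frequent cause of exactly this crash is allocating deltaI as one flat block but indexing it as deltaI[i][j]: a double** needs the row-pointer array and each row allocated separately. A minimal sketch (deltaI's real dimensions are not in the post):

#include <stdlib.h>

double **
alloc_matrix(size_t nrow, size_t ncol)
{
    double **m = calloc(nrow, sizeof(*m));     /* row pointers */
    if (m == NULL)
        return NULL;
    for (size_t i = 0; i < nrow; i++) {
        m[i] = calloc(ncol, sizeof(**m));      /* one row of doubles */
        if (m[i] == NULL) {                    /* unwind on failure */
            while (i > 0)
                free(m[--i]);
            free(m);
            return NULL;
        }
    }
    return m;    /* m[0][0] is now a valid lvalue */
}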
2012 Jan 17
0
ZDB returning strange values
Hello all, I have a question about what output "zdb -dddddd" should produce in the L0 DVA fields. I expected there to be one or more same-sized references to data blocks stored in top-level vdevs (one vdev, #0, in my 6-disk raidz2 pool), as confirmed by the source: http://src.illumos.org/source/xref/illumos-gate/usr/src/cmd/zdb/zdb.c#sprintf_blkptr_compact And I do see that for some of my
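For reference when reading those L0 lines: each DVA is a 128-bit pair packing vdev, offset, and allocated size. A sketch of the decoding, modeled on the DVA_GET_* macros in sys/spa.h; the field widths here are taken from the on-disk spec and should be treated as assumptions:

#include <stdio.h>
#include <stdint.h>

typedef struct dva { uint64_t dva_word[2]; } dva_t;

/* word 0: [63:32] vdev id, [31:24] GRID, [23:0] asize in 512-byte sectors */
/* word 1: [63] gang bit, [62:0] offset in 512-byte sectors               */
static uint64_t dva_vdev(const dva_t *d)   { return d->dva_word[0] >> 32; }
static uint64_t dva_asize(const dva_t *d)  { return (d->dva_word[0] & 0xffffff) << 9; }
static uint64_t dva_offset(const dva_t *d) { return (d->dva_word[1] & ~(1ULL << 63)) << 9; }

int main(void) {
    dva_t d = { { (0ULL << 32) | 0x10, 0x2000 } };  /* vdev 0, asize 8K, offset 4M */
    printf("<%llu:0x%llx:0x%llx>\n",
        (unsigned long long)dva_vdev(&d),
        (unsigned long long)dva_offset(&d),
        (unsigned long long)dva_asize(&d));
    return 0;
}

Note too that DVA offsets are relative to the start of the vdev's allocatable space, past the front labels and boot block, so they will not line up with raw byte offsets on the disk.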
2002 Jan 25
0
rpart subsets
A few weeks back I posted that the subset feature of rpart was not working when predicting a categorical variable. I was able to figure out a simple solution to the problem that I hope can be included in future editions of rpart. I also include a fix for another related problem. The basic problem is that when predicting a categorical variable using a subset, the subset may not have all the categories
2008 Feb 19
32
storing SOM epoch in EA
Good day, some time ago we discussed that it would be very helpful to store the epoch in the inode on the MDS. The perfect solution would be to store the epoch in the old inode body, but there is not much space for this in the body, and with DMU we'll have this problem again. Given that the minimal inode size we use on the MDS is 512 bytes, we can store up to 13 stripes in the body; larger EAs go to a dedicated block.
2002 Jan 28
0
rpart subset fix
(Apparently, I posted this to the wrong place. I am hopefully posting it in the correct place now. If not, please advise.) A few weeks back I posted that the subset feature of rpart was not working when predicting a categorical variable. I was able to figure out a simple solution to the problem that I hope can be included in future editions of rpart. I also include a fix for another related
2008 Mar 20
7
ZFS panics solaris while switching a volume to read-only
Hi, I just found out that ZFS triggers a kernel panic while switching a mounted volume into read-only mode. The system is attached to a Symmetrix, and all zfs I/O goes through PowerPath. I ran some I/O-intensive stuff on /tank/foo and switched the device into read-only mode at the same time (symrdf -g bar failover -establish). ZFS went 'bam' and triggered a panic: WARNING: /pci at
2006 Mar 03
5
flag day: ZFS on-disk format change
Summary: If you use ZFS, do not downgrade from build 35 or later to build 34 or earlier. This putback (into Solaris Nevada build 35) introduced a backwards-compatible change to the ZFS on-disk format. Old pools will be seamlessly accessed by the new code; you do not need to do anything special. However, do *not* downgrade from build 35 or later to build 34 or earlier. If you do so, some of
2007 Nov 09
3
Major problem with a new ZFS setup
We recently installed a 24-disk SATA array with an LSI controller attached to a box running Solaris 10 x86 Release 4. The drives were set up in one big pool with raidz, and it worked great for about a month. On the 4th, we had the system kernel panic and crash, and it's now behaving very badly. Here's what diagnostic data I've been able to collect so far: In the
2013 Oct 26
2
[PATCH] 1. changes for vdiskadm on illumos based platform
2. update ZFS in libfsimage from illumos for pygrub diff -r 7c12aaa128e3 -r c2e11847cac0 tools/libfsimage/Rules.mk --- a/tools/libfsimage/Rules.mk Thu Oct 24 22:46:20 2013 +0100 +++ b/tools/libfsimage/Rules.mk Sat Oct 26 20:03:06 2013 +0400 @@ -2,11 +2,19 @@ include $(XEN_ROOT)/tools/Rules.mk CFLAGS += -Wno-unknown-pragmas -I$(XEN_ROOT)/tools/libfsimage/common/
2017 Dec 20
0
[PATCH v4 3/4] virtio_vop: don't kfree device on register failure
As mentioned in drivers/base/core.c: /* * NOTE: _Never_ directly free @dev after calling this function, even * if it returned an error! Always use put_device() to give up the * reference initialized in this function instead. */ so we don't free vdev until vdev->vdev.dev.release is called. Signed-off-by: weiping zhang <zhangweiping at didichuxing.com> ---
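A sketch of the error-path rule this patch enforces; this is not the actual vop driver code, and my_register/my_vdev_release are illustrative names. Once the embedded struct device has been handed to register_virtio_device(), the device core owns the allocation, so a registration failure must drop the reference rather than kfree() it:

#include <linux/device.h>
#include <linux/slab.h>
#include <linux/virtio.h>

static void my_vdev_release(struct device *dev)
{
    struct virtio_device *vdev = dev_to_virtio(dev);

    kfree(vdev);    /* the only place the allocation is freed */
}

static int my_register(void)
{
    struct virtio_device *vdev;
    int err;

    vdev = kzalloc(sizeof(*vdev), GFP_KERNEL);
    if (!vdev)
        return -ENOMEM;
    vdev->dev.release = my_vdev_release;
    /* ... set vdev->config, vdev->id, the parent device, etc. ... */

    err = register_virtio_device(vdev);
    if (err) {
        put_device(&vdev->dev);   /* never kfree(vdev) after this point */
        return err;
    }
    return 0;
}

The point of the rule is that the refcount, not the caller, decides when the memory goes away: ->release fires exactly once, when the last reference is dropped.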
2017 Dec 21
0
[PATCH v5 3/4] virtio_vop: don't kfree device on register failure
As mentioned in drivers/base/core.c: /* * NOTE: _Never_ directly free @dev after calling this function, even * if it returned an error! Always use put_device() to give up the * reference initialized in this function instead. */ so we don't free vdev until vdev->vdev.dev.release is called. Signed-off-by: weiping zhang <zhangweiping at didichuxing.com> Reviewed-by: Cornelia Huck
2006 Oct 31
0
6389368 fat zap should use 16k blocks (with backwards compatibility)
Author: ahrens Repository: /hg/zfs-crypto/gate Revision: 0fdac67554fe0f4938120fb4f0cb35cbbcd38c0b Log message: 6389368 fat zap should use 16k blocks (with backwards compatibility) Files: update: usr/src/uts/common/fs/zfs/dbuf.c update: usr/src/uts/common/fs/zfs/dmu_tx.c update: usr/src/uts/common/fs/zfs/dnode.c update: usr/src/uts/common/fs/zfs/sys/zap_impl.h update: