Displaying 20 results from an estimated 700 matches similar to: "6389368 fat zap should use 16k blocks (with backwards compatability)"
2006 Mar 03
5
flag day: ZFS on-disk format change
Summary: If you use ZFS, do not downgrade from build 35 or later to
build 34 or earlier.
This putback (into Solaris Nevada build 35) introduced a backwards-
compatible change to the ZFS on-disk format. Old pools will be
seamlessly accessed by the new code; you do not need to do anything
special.
However, do *not* downgrade from build 35 or later to build 34 or
earlier. If you do so, some of
2006 Oct 31
0
6407444 unhandled i/o error from dnode_next_offset_level()
Author: ahrens
Repository: /hg/zfs-crypto/gate
Revision: 2515f06e22b3263851614e08b37dd2736737f102
Log message:
6407444 unhandled i/o error from dnode_next_offset_level()
6411780 unhandled i/o error from dnode_sync_free() due to faulty pre-read logic
Files:
update: usr/src/lib/libzfs/common/libzfs_dataset.c
update: usr/src/uts/common/fs/zfs/dmu_tx.c
update: usr/src/uts/common/fs/zfs/dnode.c
2006 Oct 31
0
6397264 zfs-s10-0311:assertion failed:((&dnp->dn_blkptr[0])->blk_birth == 0)
Author: ahrens
Repository: /hg/zfs-crypto/gate
Revision: 2f3e2b378c0e7958796b026a1d3ee28f3329221a
Log message:
6397264 zfs-s10-0311:assertion failed:((&dnp->dn_blkptr[0])->blk_birth == 0)
6397267 assertion failed: (link->list_next == 0) == (link->list_prev == 0)
Files:
update: usr/src/uts/common/fs/zfs/dmu_tx.c
update: usr/src/uts/common/fs/zfs/dnode_sync.c
2006 Oct 31
0
6407842 zfs panic when closing a file
Author: maybee
Repository: /hg/zfs-crypto/gate
Revision: e9d162c151b1d4186acb7ee8fb89bc7791633212
Log message:
6407842 zfs panic when closing a file
6410836 zfs umount hang during ZFS stress testing.
Files:
update: usr/src/uts/common/fs/zfs/arc.c
update: usr/src/uts/common/fs/zfs/dbuf.c
update: usr/src/uts/common/fs/zfs/dmu_tx.c
update: usr/src/uts/common/fs/zfs/zfs_vnops.c
2007 Feb 06
4
The ZFS MOS and how DNODES are stored
ZFS documentation lists snapshot limits on any single file system in a pool at 2**48 snaps, and that seems to logically imply that a snap on a file system does not require an update to the pool's currently active uberblock. That is to say, that if we take a snapshot of a file system in a pool, and then make any changes to that file system, the copy on write behavior induced by the changes will
2007 Jul 23
12
GRUB, zfs-root + Xen: Error 16: Inconsistent filesystem structure
Hi Lin,
In addition to bug 6541114...
Bug ID 6541114
Synopsis GRUB/ZFS fails to load files from a default compressed (lzjb) root
... I found yet another way to get the "Error 16: Inconsistent filesystem
structure" from GRUB. This time when trying to boot a Xen Dom0 from a
zfs bootfs
Synopsis: grub/zfs-root: cannot boot xen from a zfs root
2013 Oct 26
2
[PATCH] 1. changes for vdiskadm on illumos based platform
2. update ZFS in libfsimage from illumos for pygrub
diff -r 7c12aaa128e3 -r c2e11847cac0 tools/libfsimage/Rules.mk
--- a/tools/libfsimage/Rules.mk Thu Oct 24 22:46:20 2013 +0100
+++ b/tools/libfsimage/Rules.mk Sat Oct 26 20:03:06 2013 +0400
@@ -2,11 +2,19 @@ include $(XEN_ROOT)/tools/Rules.mk
CFLAGS += -Wno-unknown-pragmas -I$(XEN_ROOT)/tools/libfsimage/common/
2002 Apr 25
1
understanding and resolving seg faults
Dear r-devel,
I am mutating rpart to do calculations on trees.
I am trying to extract information from the tree.
However, I got a seg. fault.
This is the offending line in "rpmatrix.c":
deltaI[0][0] = spl->improve;
(Commenting it out cures the seg fault.)
I would like some advice on how to debug this. I have allocated memory
with calloc, and deltaI[0][0] should be
2010 Sep 17
3
ZFS Dataset lost structure
After a crash, some datasets in my zpool tree report this when I do an ls -la:
brwxrwxrwx 2 777 root 0, 0 Oct 18 2009 mail-cts
The same happens if I set
zfs set mountpoint=legacy dataset
and then mount the dataset at another location.
Before, the directory tree was only:
dataset
- vdisk.raw
The file was the backing device of a Xen VM, but I cannot access the directory structure of this dataset.
However I
2004 Jun 17
2
using "= matrix (...)" in .C calls
Dear R-devel,
I am trying to alter rpart so that it makes additional calculations when
growing the tree.
In the "rpart.s" there is a call to the C routine:
rp <- .C("s_to_rp2",
as.integer(nobs),
as.integer(nsplit),
as.integer(nodes),
as.integer(ncat),
2012 Jan 17
0
ZDB returning strange values
Hello all, I have a question about what output "ZDB -dddddd" should
produce in L0 DVA fields. I expected there to be one or more
same-sized references to data blocks stored in top-level vdevs
(one vdev #0 in my 6-disk raidz2 pool), as confirmed by the source:
http://src.illumos.org/source/xref/illumos-gate/usr/src/cmd/zdb/zdb.c#sprintf_blkptr_compact
And I do see that for some of my
2002 Jan 25
0
rpart subsets
A few weeks back I posted that the subset feature of rpart was not working
when predicting a categorical variable. I was able to figure out a simple
solution to the problem that I hope can be included in future editions of
rpart. I also include a fix for another related problem.
The basic problem is that when predicting a categorical using a subset, the
subset may not have all the categories
2002 Jan 28
0
rpart subset fix
(Apparently, I posted this to the wrong place. I am hopefully posting it
in the correct place now. If not, please advise.)
A few weeks back I posted that the subset feature of rpart was not working
when predicting a categorical variable. I was able to figure out a simple
solution to the problem that I hope can be included in future editions of
rpart. I also include a fix for another related
2007 Nov 09
3
Major problem with a new ZFS setup
We recently installed a 24 disk SATA array with an LSI controller attached
to a box running Solaris X86 10 Release 4. The drives were set up in one
big pool with raidz, and it worked great for about a month. On the 4th, we
had the system kernel panic and crash, and it's now behaving very badly.
Here's what diagnostic data I've been able to collect so far:
In the
2013 Nov 26
0
[LLVMdev] LLVM Backwards-Compatibility
Hi,
I understand that backend support for Alpha was taken out a few years back,
but I was wondering if anyone was aware of a workaround for the following
use case: I would like to be able to generate Alpha binaries, but I have
transforms that were written using the structure of the latest release
(3.3).
I have tried generating IR using r3.3, and understandably, neither the
static compiler nor
2006 Oct 31
0
6416794 zfs panics in dnode_reallocate during incremental zfs restore
Author: maybee
Repository: /hg/zfs-crypto/gate
Revision: fc64f8534e67547081ada693f0184cc19b89f5c9
Log message:
6416794 zfs panics in dnode_reallocate during incremental zfs restore
6425740 assertion failed: new_state != old_state
Files:
update: usr/src/uts/common/fs/zfs/arc.c
update: usr/src/uts/common/fs/zfs/dnode.c
2011 Dec 18
0
Scrub found error in metadata:0x0, is that always fatal? No checksum errors now...
2011-12-17 21:59, Steve Gonczi wrote:
> Coincidentally, I am pretty sure entry 0 of these meta dnode objects is
> never used,
> so the block with the checksum error never comes into play.
> Steve
I wonder if this is true indeed - seems so, because the pool
seems to work regardless of the seemingly deep metadata error.
Now, can someone else please confirm this guess? If I were
to
2017 Dec 20
0
[PATCH v4 3/4] virtio_vop: don't kfree device on register failure
As mentioned at drivers/base/core.c:
/*
* NOTE: _Never_ directly free @dev after calling this function, even
* if it returned an error! Always use put_device() to give up the
* reference initialized in this function instead.
*/
so we don't free vdev until vdev->vdev.dev.release is called.
Signed-off-by: weiping zhang <zhangweiping at didichuxing.com>
---
2017 Dec 21
0
[PATCH v5 3/4] virtio_vop: don't kfree device on register failure
As mentioned at drivers/base/core.c:
/*
* NOTE: _Never_ directly free @dev after calling this function, even
* if it returned an error! Always use put_device() to give up the
* reference initialized in this function instead.
*/
so we don't free vdev until vdev->vdev.dev.release is called.
Signed-off-by: weiping zhang <zhangweiping at didichuxing.com>
Reviewed-by: Cornelia Huck
2005 Aug 01
4
Backwards compatibility
In doing my testing I'm wondering if maintaining backwards compatibility
for existing applications is important. The question boils down to
this: are there sufficient applications using wxRuby (pre
swig) that we should expect to support all/most without changes,
or should we expect that most applications will need to learn the 'new'
ways things