similar to: traverse_dataset()

Displaying 20 results from an estimated 300 matches similar to: "traverse_dataset()"

2006 Jan 03
4
zfs object sets and datasets
Hi All, I am trying to understand the conceptual model of how data is grouped and partitioned within a storage pool. Looking at the on-disk document that Tabriz sent out a few weeks ago, I see that object sets are the grouping ZFS uses to collect related objects, specifically to aid in the format and layout of like objects into a set. So, for example 1 potential object
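One way to see this grouping in practice is zdb, which can dump the objects (dnodes) contained in a dataset's object set. A minimal sketch, assuming a hypothetical pool/dataset tank/home:

    # list every object in the object set backing this dataset
    # (repeat the d for more per-object detail, e.g. -ddd)
    $ zdb -dd tank/home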
2008 Nov 08
9
How does zfs COW deal with '..' in a sibling directory?
Hi Matt, I have some problems understanding the zfs COW implementation. Suppose b and c are both child dirs of a; if c changes, there will be new versions of both a and c, namely c' and a'. a a' b c c' Because '..' in b points to a before this change, shall we modify b to let '..' point to a'? If yes,
2013 Feb 17
13
zfs raid1 error resilvering and mount
hi, I have a ZFS raid1 (mirror) pool with 2 devices. The first device died, and booting from the second is not working... I fetched the http://mfsbsd.vx.sk/ rescue flash image and booted from it; zpool import shows http://puu.sh/2402E . When I load zfs.ko and opensolaris.ko I see this message: Solaris: WARNING: Can't open objset for zroot/var/crash Solaris: WARNING: Can't open objset for zroot/var/crash zpool status:
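A typical rescue-media recovery sequence looks roughly like this; a sketch only, with the pool name zroot taken from the post and the device name da0p3 as a placeholder:

    # load ZFS support in the mfsbsd rescue environment
    $ kldload opensolaris
    $ kldload zfs
    # force-import the degraded pool under an alternate root
    $ zpool import -f -R /mnt zroot
    # inspect the mirror, then detach the dead half
    $ zpool status zroot
    $ zpool detach zroot da0p3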
2008 Mar 27
4
dsl_dataset_t pointer during 'zfs create' changes
I've noticed that the dsl_dataset_t that points to a given dataset changes during the lifetime of a 'zfs create' command. We start out with one dsl_dataset_t* during dmu_objset_create_sync(), but by the time we are later mounting the dataset we have a different in-memory dsl_dataset_t* referring to the same dataset. This causes me a big issue with per-dataset
2006 Jan 04
8
Using same ZFS under different kernel versions
I built two zfs filesystems using b29 (from brandz). I then re-installed Solaris Express b28, preserving the zfs filesystems. When I tried to "zpool import" my zfs filesystems I got a kernel panic:
> debugging crash dump vmcore.0 (32-bit) from blackbird
> operating system: 5.11 snv_28 (i86pc)
> panic message:
> ZFS: bad checksum (read on /dev/dsk/c1d0p0 off 24d5e000: zio
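When a pool moves between builds it is worth checking on-disk versions first, since a newer build can write a pool format an older kernel cannot read. An illustrative check (run on the older system before importing):

    # list the pool versions this kernel understands
    $ zpool upgrade -v
    # show pools whose on-disk version differs from the kernel's
    $ zpool upgrade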
2012 Oct 17
24
[zfs] portable zfs send streams (preview webrev)
We have finished a beta version of the feature. A webrev for it can be found here: http://cr.illumos.org/~webrev/sensille/fits-send/ It adds a command 'zfs fits-send'. The resulting streams can currently only be received on btrfs, but more receivers will follow. It would be great if anyone interested could give it some testing and/or review. If there are no objections,
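Usage presumably mirrors plain 'zfs send'; in the sketch below only the 'zfs fits-send' subcommand name comes from the post, while the pool, snapshot name, and redirection are assumptions:

    $ zfs snapshot tank/data@backup1
    $ zfs fits-send tank/data@backup1 > /backup/data.fits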
2010 Sep 17
3
ZFS Dataset lost structure
After a crash, some datasets in my zpool tree report this when I do an ls -la: brwxrwxrwx 2 777 root 0, 0 Oct 18 2009 mail-cts The same happens even if I set mountpoint=legacy on the dataset and then mount it at another location. Before, the directory tree was only: dataset - vdisk.raw The file was a backing device of a Xen VM, but I cannot access the directory structure of this dataset. However I
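For reference, the legacy-mount step described above looks like this on Solaris (dataset name and mountpoint here are hypothetical):

    $ zfs set mountpoint=legacy tank/mail-cts
    $ mount -F zfs tank/mail-cts /mnt/recover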
2006 Jul 20
1
tracking an error back to a file
Hi. I'm in the process of writing an introductory paper on ZFS. The paper is meant to be something that could be given to a systems admin at a site to introduce ZFS and document common procedures for using ZFS. In the paper, I want to document the method for identifying which file has a checksum error. In previous discussions on this alias, I've used the following
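The usual method is 'zpool status -v', which maps permanent checksum errors back to file paths; illustrative output with a hypothetical pool and file:

    $ zpool status -v tank
      ...
    errors: Permanent errors have been detected in the following files:
            /tank/home/user/file.dat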
2010 May 26
14
creating a fast ZIL device for $200
Recently, I've been reading through the ZIL/slog discussion and have the impression that a lot of folks here are (like me) interested in a viable solution for a cheap, fast and reliable ZIL device. I think I can provide such a solution for about $200, but it involves a lot of development work. The basic idea: the main problem when using a HDD as a ZIL device is the cache flushes
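For context, a dedicated log device is attached to an existing pool with 'zpool add ... log'; a sketch with a hypothetical device path:

    # dedicate a fast device to the pool's intent log (slog)
    $ zpool add tank log /dev/dsk/c2t0d0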
2010 Jun 04
5
Depth of Scrub
Hi, I have a small question about the depth of scrub in a raidz/2/3 configuration. I'm quite sure scrub does not check spares or unused areas of the disks (though it could check whether the disks detect any errors there). But what about the parity? Obviously it has to be checked, but I can't find any indication of it in the literature. The man page only states that the data is being
2006 Jul 20
2
How can I watch IO operations with dtrace on zfs?
I have been using the iosnoop script (see http://www.opensolaris.org/os/community/dtrace/scripts/) written by Brendan Gregg to look at the IO operations of my application. When I was running my test program on a UFS filesystem I could see both read and write operations like:
UID    PID  D BLOCK   SIZE  COMM   PATHNAME
203803 4436 R 6016592 16384 diskio <none>
203803 4436 W 3448432
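ZFS issues most physical I/O from the pool rather than in the context of the calling process, so the path-level field often comes back as <none>. The io provider one-liner below, an illustrative sketch, shows the raw events that iosnoop is built on:

    $ dtrace -n 'io:::start {
        printf("%s %d bytes %s", args[1]->dev_statname,
            args[0]->b_bcount, args[2]->fi_pathname);
    }'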
2013 Feb 13
1
[PATCH] Btrfs: fix crash in log replay with qgroups enabled
When replaying a log tree with qgroups enabled, tree_mod_log_rewind does a sanity-check of the number of items against the maximum possible number. It calculates that number with the nodesize of fs_root. Unfortunately fs_root is not yet set at this stage. So instead use the nodesize from tree_root, which is already initialized. Signed-off-by: Arne Jansen <sensille@gmx.net> ---
2008 Dec 15
15
Need Help Invalidating Uberblock
I have a ZFS pool that has been corrupted. The pool contains a single device which was actually a file on UFS. The machine was accidentally halted and now the pool is corrupt. There are (of course) no backups and I've been asked to recover the pool. The system panics when trying to do anything with the pool.
root@:/$ zpool status
panic[cpu1]/thread=fffffe8000758c80: assertion failed:
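Before touching uberblocks, zdb can at least show what is on disk; a sketch with hypothetical device and pool names (-e examines a pool that is not imported):

    # print the vdev labels of the backing device
    $ zdb -l /dev/dsk/c1t0d0s0
    # display the current uberblock of the unimported pool
    $ zdb -eu zfspool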
2012 Oct 12
9
[PATCH] Fits: tool to parse stream
Simple tool to parse a fits-stream from stdout. Signed-off-by: Arne Jansen <sensille@gmx.net> --- The idea of the btrfs send stream format was to generate it in a way that it is easy to receive on different platforms. Thus the proposed name FITS, for Filesystem Incremental Backup Stream. We should also build the tools to receive the stream on different platforms. As a place to collect
2012 Aug 12
2
"Masked by GlobalEnv"
Hello everyone, I am having problems with graph plotting. When I attach the file after adding color attributes to my data set, I get a "GlobalEnv" problem and the following objects are masked, like this:
> attach(machm)
The following object(s) are masked _by_ '.GlobalEnv':
    coll, sp
The following object(s) are masked from 'mach':
    angle, area, dis, plot, sp
2011 Oct 06
26
[PATCH v0 00/18] btrfs: Subvolume Quota Groups
This is a first draft of a subvolume quota implementation. It is possible to limit subvolumes and any group of subvolumes and also to track the amount of space that will get freed when deleting snapshots. The current version is functionally incomplete, with the main missing feature being the initial scan and rescan of an existing filesystem. I put some effort into writing an introduction into
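In the form that later shipped in btrfs-progs, the user-facing interface looks roughly like this (illustrative; these commands postdate this RFC posting, and the paths are hypothetical):

    # cap a subvolume's referenced space at 10 GiB
    $ btrfs qgroup limit 10G /mnt/subvol
    # show per-qgroup usage
    $ btrfs qgroup show /mnt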
2011 May 02
5
[PATCH v3 0/3] btrfs: quasi-round-robin for chunk allocation
In a multi device setup, the chunk allocator currently always allocates chunks on the devices in the same order. This leads to a very uneven distribution, especially with RAID1 or RAID10 and an uneven number of devices. This patch always sorts the devices before allocating, and allocates the stripes on the devices with the most available space, as long as there is enough space available. In a low
2012 Oct 04
3
[PATCH] btrfs ulist use rbtree instead
From: Rock <zimilo@code-trick.com>
---
 fs/btrfs/backref.c |  10 ++--
 fs/btrfs/qgroup.c  |  16 +++---
 fs/btrfs/send.c    |   2 +-
 fs/btrfs/ulist.c   | 154 +++++++++++++++++++++++++++++++++++++---------------
 fs/btrfs/ulist.h   |  45 ++++++++++++---
 5 files changed, 161 insertions(+), 66 deletions(-)
diff --git a/fs/btrfs/backref.c b/fs/btrfs/backref.c
index ff6475f..a5bebc8
2007 Feb 11
0
unable to mount legacy vol - panic in zfs:space_map_remove - zdb crashes
I have a 100gb SAN lun in a pool; it had been running OK for about 6 months, then panicked the system this morning. The system was running S10U2. In the course of troubleshooting I've installed the latest recommended bundle including kjp 118833-36 and zfs patch 124204-03. Created as:
zpool create zfspool01 /dev/dsk/emcpower0c
zfs create zfspool01/nb60openv
zfs set mountpoint=legacy zfspool01/nb60openv
2008 Jun 26
3
[Bug 2334] New: zpool destroy panics after zfs_force_umount_stress
http://defect.opensolaris.org/bz/show_bug.cgi?id=2334
Summary: zpool destroy panics after zfs_force_umount_stress
Classification: Development
Product: zfs-crypto
Version: unspecified
Platform: Other
OS/Version: Solaris
Status: NEW
Severity: major
Priority: P2
Component: other
AssignedTo: