search for: musant

Displaying 14 results from an estimated 14 matches for "musant".

2009 Feb 02
8
ZFS core contributor nominations
...ibutor level, except for those with a "*". Those with a "*" are no longer involved with ZFS and we should let their grants expire. I am nominating the following to be new Core Contributors of ZFS: Jonathan W. Adams (jwadams) Chris Kirby Lin Ling Eric C. Taylor (taylor) Mark Musante Rich Morris George Wilson Tim Haley Brendan Gregg Adam Leventhal Pawel Jakub Dawidek Ricardo Correia For Contributor I am nominating the following: Darren Moffat Richard Elling I am voting +1 for all of these (including myself) Feel free to nominate others for Contributor or Core Contributor....
2007 Apr 10
15
Poor man's backup by attaching/detaching mirror drives on a _striped_ pool?
Hi, one quick & dirty way of backing up a pool that is a mirror of two devices is to zpool attach a third one, wait for the resilvering to finish, then zpool detach it again. The third device can then be used as a poor man's simple backup. Has anybody tried it yet with a striped mirror? What if the pool is composed of two mirrors? Can I attach devices to both mirrors, let them
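A minimal sketch of the attach/resilver/detach cycle for a single mirror (the pool and device names here are illustrative, not from the original post):

# zpool attach tank c0t0d0 c0t2d0    (attach the backup disk as a third side of the mirror)
# zpool status tank                  (wait until the resilver reports complete)
# zpool detach tank c0t2d0           (split the backup disk back off)

For a pool built from two mirrors, the same attach and detach would presumably have to be repeated against a member of each mirror, giving two backup disks per cycle.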
2008 Nov 03
4
cleaning user properties
I have a little question about user properties. I have two filesystems, rpool/export/home/luca and rpool/export/home/luca/src, and in these two I have one user property, set with:

zfs set net.morettoni:test=xyz rpool/export/home/luca
zfs set net.morettoni:test=123 rpool/export/home/luca/src

Now I need to *clear* (remove) the property from the rpool/export/home/luca/src filesystem, but if I use the
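The usual way to clear a user property is zfs inherit. Note that because the parent dataset here also sets net.morettoni:test, the child will then show the parent's value as inherited rather than no value at all:

# zfs inherit net.morettoni:test rpool/export/home/luca/src

Running zfs inherit on the parent as well removes the property from both datasets.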
2009 Jan 22
3
Failure to boot from zfs on Sun v880
Hi. I am trying to move the root volume from an existing SVM mirror to a ZFS root. The machine is a Sun V880 (SPARC) running nv_96, with OBP version 4.22.34, which is AFAICT the latest. The SVM mirror was constructed as follows:

/      d4   m  18GB  d14
       d14  s  35GB  c1t0d0s0
       d24  s  35GB  c1t1d0s0
swap   d3
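For builds of that vintage, the documented path for an SVM-root-to-ZFS-root migration is Live Upgrade: create a new boot environment on a ZFS root pool and activate it. A sketch, assuming a pool named rpool has already been created on a suitable slice (names are illustrative):

# lucreate -n zfsBE -p rpool
# luactivate zfsBE
# init 6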
2009 Apr 09
8
ZIL SSD performance testing... -IOzone works great, others not so great
Hi folks, I would appreciate it if someone can help me understand some weird results I'm seeing while trying to do performance testing with an SSD-offloaded ZIL. I'm attempting to improve my infrastructure's burstable write capacity (ZFS-based WebDAV servers), and naturally I'm looking at implementing SSD-based ZIL devices. I have a test machine with the
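For context, a separate ZIL device is added to a pool as a log vdev; a sketch with illustrative names:

# zpool add tank log c2t0d0          (dedicated slog on the SSD)
# zpool status tank                  (the device appears under a separate "logs" section)

Note that the slog only absorbs synchronous writes, so benchmark results depend heavily on whether the test tool actually issues synchronous I/O.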
2007 Sep 24
7
zfs chattiness at boot time
Hi all, I recently started seeing zfs chattiness at boot time: "reading zfs config" and something like "mounting zfs filesystems (n/n)". Is this really necessary? I thought that with SMF the days when every script announced its existence were gone (and a good thing, too). Can't we print something only if it goes wrong? Michael -- Michael Schuster Recursion,
2007 Oct 29
9
zpool question
hello folks, I am running Solaris 10 U3 and I have a small problem that I don't know how to fix... I had a pool of two drives:

bash-3.00# zpool status
  pool: mypool
 state: ONLINE
 scrub: none requested
config:

        NAME          STATE   READ WRITE CKSUM
        mypool        ONLINE     0     0     0
          emcpower0a  ONLINE     0     0     0
          emcpower1a  ONLINE
2007 Apr 12
10
How to bind the oracle 9i data file to zfs volumes
Experts, I'm installing Oracle 9i on Solaris 10 11/06 (update 3). I created some zfs volumes which will be used for Oracle data files, as:

# zfs create -V 200m ora_pool/controlfile01_200m
# zfs create -V 800m ora_pool/system_800m
...
# ls -l /dev/zvol/rdsk/ora_pool
lrwxrwxrwx 1 root root 39 Apr 11 12:23 controlfile01_200m -> ../../../../devices/pseudo/zfs@0:1c,raw
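Once created, each volume is normally referenced by its raw device path, so the control file volume above would be handed to Oracle as (path follows from the ls output above):

/dev/zvol/rdsk/ora_pool/controlfile01_200m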
2007 Apr 10
3
Renaming a pool?
Hi all, I have a pool called tank/home/foo and I want to rename it to tank/home/bar. What's the best way to do this (the zfs and zpool man pages don't have a "rename" option)? One way I can think of is to create a clone of tank/home/foo called tank/home/bar, and then destroy the former. Is that the best (or even only) way? TIA, -- Rich Teer, SCSA, SCNA, SCSECA,
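tank/home/foo is a dataset rather than a pool, and datasets can be renamed directly with zfs rename:

# zfs rename tank/home/foo tank/home/bar

Renaming an actual pool is done by exporting it and re-importing it under a new name, e.g. zpool export tank; zpool import tank newname.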
2007 Apr 19
14
Permanently removing vdevs from a pool
Is it possible to gracefully and permanently remove a vdev from a pool without data loss? The type of pool in question here is a simple pool without redundancies (i.e. JBOD). The documentation mentions for instance offlining, but without going into the end results of doing that. The thing I'm looking for is an option to evacuate, for the lack of a better word, the data from a specific
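For reference: on ZFS of this era, top-level data vdevs could not be removed at all; zpool remove only worked for hot spares and, later, cache and log devices. A sketch of the distinction (names illustrative):

# zpool remove tank c1t2d0           (works for a spare/cache/log device; fails for a data vdev)
# zpool offline tank c1t2d0          (only takes the device out of service; no data is evacuated)

Genuine evacuation of a top-level data vdev only arrived in much later OpenZFS releases.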
2009 Apr 09
3
vdev_disk_io_start() sending NULL pointer in ldi_ioctl()
Hi All, I have a corefile where we see a NULL pointer dereference panic, because we have (deliberately) sent a NULL pointer for the return value. In vdev_disk_io_start():

...
error = ldi_ioctl(dvd->vd_lh, zio->io_cmd,
    (uintptr_t)&zio->io_dk_callback, FKIOCTL, kcred, NULL);

ldi_ioctl() expects the last parameter as an
2009 Sep 24
5
Checksum property change does not change pre-existing data - right?
My understanding is that if I "zfs set checksum=<different>" to change the algorithm, this will change the checksum algorithm for all FUTURE data blocks written, but will not in any way change the checksum for previously written data blocks. I need to corroborate this understanding. Could someone please point me to a document that states this? I have searched and searched
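That understanding matches how ZFS properties generally behave: checksum (like compression) takes effect only for blocks written after the change. A sketch of forcing existing data through the new algorithm by rewriting it (dataset names are illustrative):

# zfs set checksum=sha256 tank/data
# zfs snapshot tank/data@move
# zfs send tank/data@move | zfs recv tank/data_new   (rewritten blocks carry the new checksum)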
2010 Mar 27
16
zpool split problem?
Zpool split is a wonderful feature and it seems to work well, and the choice of which disk got which name was perfect! But there seems to be an odd anomaly (at least with b132).

Started with c0t1d0s0 running b132 (root pool is called rpool)
Attached c0t0d0s0 and waited for it to resilver
Rebooted from c0t0d0s0
zpool split rpool spool
Rebooted from c0t0d0s0; both rpool and spool were mounted
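For context, the half that zpool split carves off is left exported by default and is only supposed to appear after an explicit import; a sketch:

# zpool split rpool spool            (spool receives one disk from each mirror)
# zpool import spool                 (only after this should spool be mounted)

The anomaly described here, both pools coming up mounted after a reboot, would not be expected from that sequence.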
2011 Mar 04
13
cannot replace c10t0d0 with c10t0d0: device is too small
In 2007 I bought 6 WD1600JS 160GB SATA disks and used 4 to create a raidz storage pool, then shelved the other two for spares. One of the disks failed last night, so I shut down the server and replaced it with a spare. When I tried to zpool replace the disk I get:

# zpool replace tank c10t0d0
cannot replace c10t0d0 with c10t0d0: device is too small

The 4 original disk partition tables look like
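A common first step when diagnosing this is to compare the usable capacity of the old and new disks directly, since nominally identical drives can differ by a few sectors; a sketch with illustrative device names:

# prtvtoc /dev/rdsk/c10t0d0s2        (compare sector counts / accessible cylinders between disks)
# zpool replace tank c10t0d0         (retry once the replacement is at least as large as the original)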