similar to: Expanding a raidz vdev in zpool

Displaying 20 results from an estimated 200 matches similar to: "Expanding a raidz vdev in zpool"

2008 Jul 28
1
zpool status my_pool shows a pulled disk c1t6d0 as ONLINE ???
New server build with Solaris 10 u5/08 on a SunFire T5220; this is our first rollout of ZFS and zpools. We have 8 disks; the boot disk is hardware mirrored (c1t0d0 + c1t1d0). I created zpool my_pool as RAID-Z using 5 disks + 1 spare: c1t2d0, c1t3d0, c1t4d0, c1t5d0, c1t6d0, and spare c1t7d0. I am working on alerting & recovery plans for disk failures in the zpool. As a test, I have pulled disk
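A minimal check sequence for this kind of pull test, assuming the pool and device names above (exact output varies by Solaris release); ZFS often keeps reporting ONLINE until it actually tries to touch the device, so forcing I/O with a scrub is the quickest way to make the fault visible:

zpool status -x my_pool      # report only pools that have problems
cfgadm -al | grep c1t6d0     # does the OS still think the slot is connected/configured?
fmadm faulty                 # any FMA fault records logged for the pulled disk?
zpool scrub my_pool          # force I/O to every vdev so ZFS notices the missing disk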
2015 Aug 22
1
Configuration file not found when using non-standard installation path
Installing with: syslinux --directory otherdir -i my_unmounted_device will install the bootloader in the desired directory ("otherdir") under the root directory of the desired unmounted device ("my_unmounted_device"). All the corresponding syslinux-related files are located in the same installation directory. When booting this device, SYSLINUX fails to find a
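For reference, a sketch of the intended layout, with a hypothetical FAT partition /dev/sdX1 and mount point /mnt; SYSLINUX is expected to look for its configuration file in the same directory it was installed into, so the file should end up next to ldlinux.sys:

syslinux --directory otherdir -i /dev/sdX1
mount /dev/sdX1 /mnt
cp syslinux.cfg /mnt/otherdir/syslinux.cfg   # config alongside the installed files
umount /mnt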
2013 Jul 14
2
(9.2) panic under disk load (gam_server / knlist_remove_kq)
9.2-PRERELEASE (today) / amd64. Hello, I'm seeing a panic while trying to build a poudriere repository. As far as I can see it always happens when gam_server is started (i.e. xfce is running) and under disk load (a poudriere bulk build). (That is something new; the box was pretty stable.) The complete crash dump (core.0.txt) is here: http://user.lamaiziere.net/patrick/panic_gam_server.txt Fatal
2011 Apr 01
15
Zpool resize
Hi, a LUN is connected to Solaris 10u9 from a NetApp FAS2020a over iSCSI. I'm changing the LUN size on the NetApp, and Solaris format sees the new value, but zpool still shows the old value. I tried zpool export and zpool import but that didn't resolve my problem.
bash-3.00# format
Searching for disks...done

AVAILABLE DISK SELECTIONS:
       0. c0d1 <DEFAULT cyl 6523 alt 2 hd 255 sec 63>
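Once the bigger LUN is visible in format, ZFS still needs the disk label refreshed and an explicit nudge to grow into the new space; a sketch assuming the pool is called mypool (the name is hypothetical) and sits on c0d1:

zpool set autoexpand=on mypool    # on releases that have the property
zpool online -e mypool c0d1       # -e expands the vdev to use the enlarged LUN
zpool list mypool                 # SIZE should now reflect the new LUN size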
2010 Jul 19
6
Performance advantages of zpool with 2x raidz2 vdevs vs. single vdev
Hi guys, I am about to reshape my data zpool and am wondering what performance difference I can expect from the new config vs. the old. The old config is a pool with a single vdev of 8 disks in raidz2. The new pool config is 2 vdevs of 7 disks each in raidz2, in a single pool. I understand it should be better, with higher I/O throughput and better read/write rates, but I am interested to hear the science
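For reference, a sketch of the two layouts being compared, with hypothetical device names. As a rough rule, each raidz2 vdev delivers random IOPS on the order of a single disk, so two vdevs should roughly double random I/O, while streaming throughput scales more with the total number of data disks:

zpool create tank raidz2 c2t0d0 c2t1d0 c2t2d0 c2t3d0 c2t4d0 c2t5d0 c2t6d0 c2t7d0   # old layout: one 8-disk raidz2
zpool create tank raidz2 c2t0d0 c2t1d0 c2t2d0 c2t3d0 c2t4d0 c2t5d0 c2t6d0 \
                  raidz2 c2t7d0 c2t8d0 c2t9d0 c2t10d0 c2t11d0 c2t12d0 c2t13d0      # new layout: two 7-disk raidz2 vdevs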
2009 Aug 04
0
zfs remove vdev
Does anyone know when Solaris 10 will have the bits to allow removal of vdevs from a pool to shrink the storage? Thanks, Brian
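As far as I know Solaris 10 never got this; in current OpenZFS the operation exists as top-level device removal (mirrors and single-disk vdevs only, not raidz), roughly as in the sketch below with a hypothetical pool and vdev name:

zpool remove tank mirror-1    # evacuates the data, then drops the top-level vdev
zpool status tank             # removal progress and the resulting indirect mapping show up here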
2010 Mar 17
0
checksum errors increasing on "spare" vdev?
Hi, one of my colleagues was confused by the output of 'zpool status' on a pool where a hot spare is being resilvered in after a drive failure:
$ zpool status data
  pool: data
 state: DEGRADED
status: One or more devices is currently being resilvered.  The pool will
        continue to function, possibly in a degraded state.
action: Wait for the resilver to complete.
 scrub:
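A minimal way to double-check what the spare is actually doing, assuming the pool name from the post:

zpool status -v data            # -v also lists any files affected by checksum errors
fmdump -eV | grep -i checksum   # the raw FMA ereports behind the CKSUM counters
zpool clear data                # reset the counters once the resilver has completed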
2007 Nov 13
0
in a zpool consisting of regular files, when I remove a file vdev, why can zpool status not detect it?
I made a file-backed zpool like this:
bash-3.00# zpool status
  pool: filepool
 state: ONLINE
 scrub: none requested
config:
        NAME              STATE     READ WRITE CKSUM
        filepool          ONLINE       0     0     0
          /export/f1.dat  ONLINE       0     0     0
          /export/f2.dat  ONLINE       0     0     0
          /export/f3.dat  ONLINE       0     0     0
        spares
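Because the pool holds its backing files open, unlinking one does not immediately break anything: I/O keeps going to the still-open vnode. The gap usually shows up only once the pool has to reopen its vdevs, roughly as sketched below with the file names from the post:

rm /export/f2.dat                  # unlink one backing file; the pool still has it open
zpool status filepool              # still ONLINE: I/O to the open file keeps succeeding
zpool export filepool
zpool import -d /export filepool   # on reopen the missing file vdev is finally detected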
2008 May 14
2
vdev cache - comments in the source
Hello zfs-code, http://cvs.opensolaris.org/source/xref/onnv/onnv-gate/usr/src/uts/common/fs/zfs/vdev_cache.c 72 * All i/os smaller than zfs_vdev_cache_max will be turned into 73 * 1<<zfs_vdev_cache_bshift byte reads by the vdev_cache (aka software 74 * track buffer). At most zfs_vdev_cache_size bytes will be kept in each 75 * vdev's vdev_cache. While it
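For anyone wanting to experiment with these knobs on Solaris, they are settable as zfs module tunables in /etc/system (the values below are just the documented defaults, not recommendations, and a reboot is needed for them to take effect):

echo 'set zfs:zfs_vdev_cache_max=16384'     >> /etc/system   # reads below 16 KB are inflated...
echo 'set zfs:zfs_vdev_cache_bshift=16'     >> /etc/system   # ...to 1<<16 = 64 KB vdev-cache reads
echo 'set zfs:zfs_vdev_cache_size=10485760' >> /etc/system   # 10 MB of vdev cache per vdev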
2007 Sep 19
2
import zpool error when using a loop device as a vdev
Hey guys, I just ran a test using loop devices as vdevs for a zpool. Procedure as follows: 1) mkfile -v 100m disk1; mkfile -v 100m disk2 2) lofiadm -a disk1 /dev/lofi; lofiadm -a disk2 /dev/lofi 3) zpool create pool_1and2 /dev/lofi/1 /dev/lofi/2 4) zpool export pool_1and2 5) zpool import pool_1and2 Error info here: bash-3.00# zpool import pool1_1and2 cannot import
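zpool import only scans /dev/dsk by default on Solaris, so pools built on lofi devices have to be pointed at the right directory; a sketch using the names from the post:

zpool import -d /dev/lofi               # list importable pools found under /dev/lofi
zpool import -d /dev/lofi pool_1and2    # then import the pool by name from that directory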
2007 Nov 13
3
zpool status cannot detect the removed vdev?
I made a file-backed zpool like this:
bash-3.00# zpool status
  pool: filepool
 state: ONLINE
 scrub: none requested
config:
        NAME              STATE     READ WRITE CKSUM
        filepool          ONLINE       0     0     0
          /export/f1.dat  ONLINE       0     0     0
          /export/f2.dat  ONLINE       0     0     0
          /export/f3.dat  ONLINE       0     0     0
        spares
2012 Jul 25
8
online increase of zfs after LUN increase?
Hello, there is a feature of ZFS (autoexpand, or zpool online -e) by which it can consume an increased LUN immediately and grow the zpool size. That would be a very useful (vital) feature in an enterprise environment. But when I tried to use it, it did not work. The LUN expanded and is visible in format, but the zpool did not grow. I found a bug, SUNBUG:6430818 (Solaris Does Not Automatically
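When autoexpand alone does nothing, the usual fallback is an explicit expand of the affected device after the label has been refreshed; the pool and device names below are hypothetical:

zpool set autoexpand=on mypool
zpool online -e mypool c2t0d0     # -e re-reads the label and grows the vdev into the new space
zpool list mypool                 # SIZE should grow once the expansion succeeds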
2008 Oct 11
5
questions about replacing a raidz2 vdev disk with a larger one
I'd like to replace/upgrade two 500GB disks in a RaidZ2 vdev with 1TB disks, but I have some preliminary questions/concerns before trying 'zpool replace dpool ?'. Will ZFS permit this replacement? Will ZFS use the extra space in a heterogeneous RaidZ2 vdev, or is the size limited by the smallest disk in the vdev? Thanks in advance, Vizzini. The system is currently running
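Short answers: yes, the replacement is permitted, and no, the extra space is not usable until every member of the raidz2 vdev has been upgraded; capacity stays limited by the smallest disk. A hedged sketch, with hypothetical device names for the old and new disks:

zpool replace dpool c1t2d0 c1t8d0   # resilver a 1TB disk in place of a 500GB one
zpool status dpool                  # wait for the resilver to finish before swapping the next disk
zpool set autoexpand=on dpool       # on releases with autoexpand, the vdev grows once all members are 1TB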
2009 Nov 10
3
[PATCH 1/8] virtio: console: comment cleanup
Remove old lguest-style comments. Signed-off-by: Rusty Russell <rusty at rustcorp.com.au> --- drivers/char/virtio_console.c | 30 ++++++------------------------ 1 file changed, 6 insertions(+), 24 deletions(-) diff --git a/drivers/char/virtio_console.c b/drivers/char/virtio_console.c --- a/drivers/char/virtio_console.c +++ b/drivers/char/virtio_console.c @@ -1,18 +1,5 @@ -/*D:300 - *
2011 Apr 24
2
zfs problem vdev I/O failure
Good morning, I have a problem with ZFS: ZFS filesystem version 4, ZFS storage pool version 15. Yesterday my machine running FreeBSD 8.2-RELENG shut down with an 'ad4 detached' error while I was copying a big file, and after the reboot two WD Green 1TB drives said goodbye. One of them died and the other shows ZFS errors: Apr 24 04:53:41 Flash root: ZFS: vdev I/O failure, zpool=zroot path= offset=187921768448 size=512 error=6
2010 Mar 02
11
Expand zpool capacity
Hello, experts. I've got a problem: I'm trying to expand my main zpool (rpool), but I don't know how to do that (I'm a 100% newbie in the non-Windows world). I use OpenSolaris under VMware on Windows. I had a pretty small virtual HDD of only 12 GB. Yesterday I decided to expand my virtual drive to 20 GB. (After several tries to upgrade the OS to the newest dev releases and
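A rough outline of what usually has to happen once the virtual disk itself has been grown, assuming rpool lives on slice 0 of the disk (the device name below is hypothetical):

format                              # grow slice 0 to cover the enlarged disk (SMI label for rpool)
zpool set autoexpand=on rpool       # if the running build has the property
zpool online -e rpool c7d0s0        # expand the root pool vdev into the resized slice
zpool list rpool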
2009 Apr 27
23
Raidz vdev size... again.
Hi, I'm new to the list, so please bear with me. This isn't an OpenSolaris-related problem, but I hope it's still the right list to post to. I'm in the process of moving a backup server to ZFS-based storage, but I don't want to spend too many drives on parity (the 16 drives are attached to a 3ware RAID controller, so I could also just use RAID-6 there). I
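For comparison, one common compromise for 16 drives is two 8-disk raidz2 vdevs, i.e. 4 of the 16 drives going to parity; a sketch with hypothetical device names (the 3ware controller would have to export the disks individually for this):

zpool create backup \
    raidz2 c3t0d0 c3t1d0 c3t2d0 c3t3d0 c3t4d0 c3t5d0 c3t6d0 c3t7d0 \
    raidz2 c3t8d0 c3t9d0 c3t10d0 c3t11d0 c3t12d0 c3t13d0 c3t14d0 c3t15d0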
2010 Mar 27
4
Mixed ZFS vdev in same pool.
I have a question about using mixed vdevs in the same zpool and what the community's opinion is on the matter. Here is my setup: I have four 1TB drives and two 500GB drives. When I first set up ZFS I was under the assumption that it does not really care much about how you add devices to the pool and assumes you are thinking things through. But when I tried to create a pool (called group) with four
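ZFS does accept mixed vdev types in one pool, but it flags the mismatched replication levels and wants to be forced; a sketch with hypothetical device names matching the drive mix described:

zpool create group raidz c1t0d0 c1t1d0 c1t2d0 c1t3d0 mirror c1t4d0 c1t5d0
                                  # refused: mismatched replication level (raidz vs mirror)
zpool create -f group raidz c1t0d0 c1t1d0 c1t2d0 c1t3d0 mirror c1t4d0 c1t5d0   # -f overrides the check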
2010 Oct 20
5
Myth? 21 disk raidz3: "Don't put more than ___ disks in a vdev"
In a discussion a few weeks back, it was mentioned that the Best Practices Guide says something like "Don't put more than ___ disks into a single vdev." At first, I challenged this idea, because I see no reason why a 21-disk raidz3 would be bad. It seems like a good thing. I was operating on the assumption that resilver time was limited by the sustainable throughput of the disks, which