similar to: zpool iostat question

Displaying 20 results from an estimated 300 matches similar to: "zpool iostat question"

2010 Feb 27
1
slow zfs scrub?
hi all I have a server running snv_131 and the scrub is very slow. I have a cron job starting it every week, and now it's been running for a while and it is very, very slow: scrub: scrub in progress for 40h41m, 12.56% done, 283h14m to go The configuration is listed below, consisting of three raidz2 groups with seven 2TB drives each. The root fs is on a pair of X25-M (gen 1)
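A minimal sketch of the commands involved in monitoring and controlling a scrub, assuming a pool named "tank" (the poster's pool name is not shown in the excerpt):

    # show scrub progress (this is where the "scrub: scrub in progress" line comes from)
    zpool status tank

    # cancel a scrub that is dragging on
    zpool scrub -s tank

    # start it again later, e.g. from the weekly cron job
    zpool scrub tank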
2007 Sep 08
1
zpool degraded status after resilver completed
I am curious why zpool status reports a pool to be in the DEGRADED state after a drive in a raidz2 vdev has been successfully replaced. In this particular case drive c0t6d0 was failing, so I ran zpool offline home c0t6d0 and zpool replace home c0t6d0 c8t1d0, and after the resilvering finished the pool reports a degraded state. Hopefully this is incorrect. At this point is the vdev in question now has
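A hedged sketch of the usual replacement sequence and the follow-up checks, reusing the pool and device names from the post; whether the detach is needed depends on whether the old disk is still listed under a "replacing" vdev after the resilver:

    zpool offline home c0t6d0          # take the failing disk out of service
    zpool replace home c0t6d0 c8t1d0   # resilver onto the new disk
    # after the resilver completes:
    zpool status -v home               # see which vdev member is still flagged
    zpool detach home c0t6d0           # drop the old disk if it is still attached
    zpool clear home                   # clear any lingering error counters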
2010 Dec 05
4
Zfs ignoring spares?
Hi all I have installed a new server with 77 2TB drives in 11 7-drive RAIDZ2 VDEVs, all on WD Black drives. Now, it seems two of these drives were bad: one of them had a bunch of errors, the other was very slow. After offlining these and then replacing them with the online spares, the resilver ended and I thought it'd be OK. Apparently not. Although the resilver succeeds, the pool status
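For context, a sketch of how hot spares are normally retired once they have resilvered in, assuming a pool named "tank" and placeholder device names (the real ones are not in the excerpt); until the failed disk is detached, zpool status keeps showing the spare as INUSE and the pool as degraded:

    zpool status tank              # spare shows up as INUSE under the raidz2 vdev
    zpool detach tank c5t3d0       # detach the failed disk; the spare is promoted in place
    # or, once the bad disk has been physically swapped:
    zpool replace tank c5t3d0      # resilver onto the new disk in the original slot
    zpool detach tank c9t0d0       # return the spare to AVAIL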
2010 Oct 19
8
Balancing LVOL fill?
Hi all I have this server with some 50TB disk space. It originally had 30TB on WD Greens and was filled quite full, then another storage chassis was added. Now the space problem is gone, fine, but what about speed? Three of the VDEVs are quite full, as indicated below. VDEV #3 (the one with the spare active) just spent some 72 hours resilvering a 2TB drive. Now, those green drives suck quite hard, but not
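A short sketch for inspecting how unevenly the vdevs are filled, assuming the pool is named "dpool" (placeholder). ZFS does not rebalance data that is already written; it only biases new writes toward the emptier vdevs, so rewriting the data (copying it, or zfs send/receive into the same pool) is what actually levels the fill:

    zpool iostat -v dpool 10     # per-vdev allocated/free space plus ongoing I/O load
    zfs list -o space -r dpool   # where the space is used at the dataset level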
2013 Mar 23
0
Drives going offline in Zpool
Hi, I have a Dell MD1200 connected to two heads (Dell R710). The heads have a PERC H800 card and the drives are configured as RAID0 virtual disks in the RAID controller. One of the drives had crashed and was replaced by a spare. Resilvering was triggered but fails to complete because drives go offline. I have to reboot the head (R710) before the drives come online again. This happened repeatedly when
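The commands below are only the ZFS side of recovering from disks that dropped out; they do not address the controller/virtual-disk layer that is likely causing the dropouts. A sketch assuming a pool named "tank" and a placeholder device name:

    zpool status -x              # list unhealthy pools and the devices that went away
    zpool online tank c2t5d0     # bring a disk that dropped out back into service
    zpool clear tank             # clear error counters so the resilver can restart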
2009 Oct 14
14
ZFS disk failure question
So, my Areca controller has been complaining via email of read errors for a couple of days on SATA channel 8. The disk finally gave up last night at 17:40. I have to say I really appreciate the Areca controller taking such good care of me. For some reason, I wasn't able to log into the server last night or in the morning, probably because my home dir was on the zpool with the failed disk
2010 Apr 10
21
What happens when unmirrored ZIL log device is removed ungracefully
Due to recent experiences, and discussion on this list, my colleague and I performed some tests: Using Solaris 10, fully upgraded (zpool version 15 is the latest, which does not have the log device removal that was introduced in zpool version 19). If you lose an unmirrored log device in any way possible, the OS will crash, and the whole zpool is permanently gone, even after reboots. Using OpenSolaris,
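A sketch of the usual mitigation on pool versions that cannot remove a log device: check the version, and mirror the slog so losing a single device is survivable. The pool name and SSD device names are placeholders:

    zpool upgrade -v            # pool versions supported; log device removal is pool version 19
    zpool get version tank      # version the pool is actually running
    # mirror the separate log so one device can die without taking the pool:
    zpool add tank log mirror c4t0d0 c4t1d0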
2010 May 08
5
Plugging in a hard drive after Solaris has booted up?
Hi guys, I have a quick question. I am playing around with ZFS and here's what I did. I created a storage pool with several drives. I unplugged 3 out of 5 drives from the array; currently:
NAME        STATE     READ WRITE CKSUM
gpool       UNAVAIL      0     0     0  insufficient replicas
  raidz1    UNAVAIL      0     0     0  insufficient replicas
    c8t2d0  UNAVAIL      0     0
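Once the drives are physically plugged back in, a hedged sketch of getting Solaris and ZFS to notice them again, using the pool and device names shown above:

    devfsadm -Cv                 # rebuild /dev links after re-plugging the disks
    cfgadm -al                   # confirm the SATA attachment points see the disks
    zpool clear gpool            # let ZFS retry the now-present devices
    zpool online gpool c8t2d0    # or bring individual disks online explicitly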
2010 Oct 16
4
resilver question
Hi all I'm seeing some rather bad resilver times for a pool of WD Green drives (I know, bad drives, but leave that). Does resilver go through the whole pool or just the VDEV in question? -- Vennlige hilsener / Best regards roy -- Roy Sigurd Karlsbakk (+47) 97542685 roy at karlsbakk.net http://blogg.karlsbakk.net/ -- In all pedagogy it is essential that the syllabus is presented
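One way to answer this empirically is to watch per-vdev activity while a resilver runs; only the disks in the top-level vdev being resilvered should show heavy I/O. A sketch assuming a pool named "tank":

    zpool status tank           # which disk is resilvering, and the progress estimate
    zpool iostat -v tank 5      # per-disk I/O every 5 seconds; the other vdevs stay mostly idle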
2011 Aug 10
9
zfs destroy snapshot takes hours.
Hi, I am facing an issue with zfs destroy: it takes almost 3 hours to delete a snapshot of 150G. Could you please help me resolve this and explain why zfs destroy takes this much time? Taking a snapshot completes within a few seconds. I have tried removing an older snapshot instead, but the problem is the same. =========================== I am using: Release: OpenSolaris
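A sketch of the first things worth checking; on OpenSolaris-era systems the usual culprit for multi-hour snapshot destroys is dedup, because freeing deduplicated blocks has to update the dedup table, which may not fit in RAM. The dataset name "datapool/fs" below is a placeholder:

    zfs get dedup,compression datapool/fs    # is dedup enabled on the dataset?
    zpool get dedupratio datapool            # is there deduplicated data in the pool at all?
    zfs get usedbysnapshots datapool/fs      # how much data the snapshots are pinning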
2007 Sep 28
5
ZFS Boot Won't work with a straight or mirror zfsroot
Using build 70, I followed the zfsboot instructions at http://www.opensolaris.org/os/community/zfs/boot/zfsboot-manual/ to the letter. I tried first with a mirrored zfsroot; when I try to boot to zfsboot the screen is flooded with "init(1M) exited on fatal signal 9". Then I tried with a simple zfs pool (not mirrored) and it just reboots right away. If I try to set up grub
2011 Dec 08
1
Can't create striped replicated volume
Hi, I'm trying to create a striped replicated volume but am getting this error: gluster volume create cloud stripe 4 replica 2 transport tcp nebula1:/dataPool nebula2:/dataPool nebula3:/dataPool nebula4:/dataPool wrong brick type: replica, use <HOSTNAME>:<export-dir-abs-path> Usage: volume create <NEW-VOLNAME> [stripe <COUNT>] [replica <COUNT>]
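Two things are typically at play here: striped replicated volumes are only supported by the CLI from GlusterFS 3.3 onwards, and the brick count must be a multiple of stripe x replica (stripe 4 replica 2 would need 8 bricks, not 4). A hedged sketch that fits the four bricks shown above, assuming a new enough GlusterFS:

    gluster volume create cloud stripe 2 replica 2 transport tcp \
        nebula1:/dataPool nebula2:/dataPool nebula3:/dataPool nebula4:/dataPool
    gluster volume start cloud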
2014 Oct 20
0
2.2.14 Panic in imap_fetch_more()
This panic happens with different users, and it also occurred in 2.2.13. Panic: file imap-fetch.c: line 556 (imap_fetch_more): assertion failed: (ctx->client->output_cmd_lock == NULL || ctx->client->output_cmd_lock == cmd) hmk GNU gdb 6.8 Copyright (C) 2008 Free Software Foundation, Inc. License GPLv3+: GNU GPL version 3 or later <http://gnu.org/licenses/gpl.html> This is
2010 Jul 09
2
snapshot out of space
I am getting the following error message when trying to do a zfs snapshot: root@pluto# zfs snapshot datapool/mars@backup1 cannot create snapshot 'datapool/mars@backup1': out of space root@pluto# zpool list
NAME      SIZE   USED  AVAIL  CAP  HEALTH  ALTROOT
datapool  556G   110G   446G  19%  ONLINE  -
rpool     278G  12.5G   265G   4%  ONLINE  -
Any ideas???
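Since zpool list shows plenty of free space, the usual explanation is a quota or reservation on the dataset itself: a snapshot needs the dataset to be allowed to grow, so a quota that is already full is reported as "out of space" even when the pool is mostly empty. A sketch using the dataset name from the post:

    zfs get quota,refquota,reservation,refreservation datapool/mars
    zfs list -o space -r datapool     # where the space is actually accounted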
2017 Jul 25
1
memory snapshot save error
libvirt version: 3.4.0 architecture: x86_64 ubuntu16.04-server hypervisor: kvm,qemu When I want to make a memory snapshot of a VM I call virsh save, but it gives me this error: error: Failed to save domain 177 to /datapool/mm.img error: operation failed: domain save job: unexpectedly failed xml configure: <domain type='kvm' id='177'> <name>virt23</name>
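A few hedged checks that commonly explain a failing virsh save to a non-default path like /datapool: enough free space for the whole guest RAM image, write permission for the libvirt/qemu user, and (on Ubuntu) AppArmor confinement of libvirt blocking the target directory. virsh dump --memory-only is a possible alternative for capturing just the memory image; the domain ID and paths below are the poster's:

    df -h /datapool                                   # room for the full RAM image?
    ls -ld /datapool                                  # writable by the libvirt/qemu user?
    virsh dump 177 /datapool/mm.core --memory-only    # memory-only core dump instead of a full save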
2014 Oct 20
1
2.2.14 Panic in sync_expunge_range()
I am getting some panics after upgrading from 2.2.13 to 2.2.14. This panic happens for one user only; he is subscribed to 86 folders, and on two of them this panic happens quite often, several times a day. The mbox folders seem OK, less than 30M with 30 and 200 messages. Panic: file mail-index-sync-update.c: line 250 (sync_expunge_range): assertion failed: (count > 0) hmk GNU gdb 6.8
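For reporting this upstream, the usual next step is a full backtrace from the core dump of the crashed imap process; the paths below are placeholders and depend on how Dovecot was installed:

    gdb /usr/libexec/dovecot/imap /path/to/core
    (gdb) bt full

A common workaround for a per-folder index problem is to delete that folder's dovecot.index* files (their location depends on mail_location) and let Dovecot rebuild them, but that is a guess without seeing the backtrace.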
2014 Oct 29
2
2.2.15 Panic in mbox_sync_read_next_mail()
It might not be a fault in dovecot, as the user is accessing the folder locally with alpine while also running IMAP sessions. However, a more graceful action than a panic would have been nice. The panic is preceded by Error: Next message unexpectedly corrupted in mbox file PATH Panic: file mbox-sync.c: line 152 (mbox_sync_read_next_mail): assertion failed:
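If the corruption really comes from alpine and Dovecot writing the same mbox concurrently, making both sides agree on locking is the usual mitigation; a sketch of the relevant dovecot.conf settings (these are roughly the 2.2.x defaults, shown here as the thing to verify rather than as a known fix):

    # alpine/c-client normally uses dotlock plus fcntl on mbox files
    mbox_write_locks = dotlock fcntl
    mbox_read_locks = fcntl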
2011 Dec 13
1
question regarding samba permissions
I want to make a subfolder read-only for certain users. For example: /data/pool is public rwx for all users, and now I would like to make /data/pool/subfolder rwx only for user1 and grant read-only permissions to user2 and user3. How do I do this? Any links or direct tips on that? My suggestion would be something like this, but as you can imagine it didn't work: # The general datapool
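One way to do this below Samba, with POSIX ACLs on the filesystem (assuming the filesystem is mounted with ACL support and Samba is left to honour the underlying permissions); the user names are the ones from the post:

    chown user1 /data/pool/subfolder
    chmod 700 /data/pool/subfolder
    setfacl -m u:user2:rX,u:user3:rX /data/pool/subfolder
    setfacl -d -m u:user2:rX,u:user3:rX /data/pool/subfolder   # default ACL for new entries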
2011 Dec 14
1
Fwd: Re: question regarding samba permissions
wouldn't work because all the users are in one group anyway, and I am not allowed to give read rights to "any" (i.e. 755). But is there really no option in smb.conf like "read only users = " or something like that? On 13.12.2011 17:56, Raffael Sahli wrote: > On Tue, 13 Dec 2011 16:38:41 +0100, "skull" <skull17 at gmx.ch> wrote: >> I want to
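smb.conf does have per-share options close to what is being asked for: "read list" and "write list" (plus "valid users"). A hedged sketch that exposes the subfolder as its own share, reusing the example user names:

    [subfolder]
        path = /data/pool/subfolder
        valid users = user1 user2 user3
        write list = user1
        read list = user2 user3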
2010 May 31
3
zfs permanent errors in a clone
$ zfs list -t filesystem
NAME                                  USED  AVAIL  REFER  MOUNTPOINT
datapool                              840M  25.5G    21K  /datapool
datapool/virtualbox                   839M  25.5G   839M  /virtualbox
mypool                               8.83G  6.92G    82K  /mypool
mypool/ROOT                          5.48G  6.92G    21K  legacy
mypool/ROOT/May25-2010-Image-Update
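For the "permanent errors" part of the subject, the standard way to see which files or snapshots hold the damaged blocks, using the pool names shown above, would be something like:

    zpool status -v mypool       # lists the files/objects with permanent errors
    zpool status -v datapool
    # after restoring or deleting the affected files (or destroying the snapshot/clone
    # that still references the bad blocks), a completed scrub clears the stale entries:
    zpool scrub mypool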