
Displaying 20 results from an estimated 5000 matches similar to: "zpool list No known data errors"

2010 Mar 19
3
zpool I/O error
Hi all, I'm trying to delete a zpool and when I do, I get this error:
# zpool destroy oradata_fs1
cannot open 'oradata_fs1': I/O error
#
The pools I have on this box look like this:
# zpool list
NAME         SIZE   USED  AVAIL  CAP  HEALTH    ALTROOT
oradata_fs1  532G   119K   532G   0%  DEGRADED  -
rpool        136G  28.6G   107G  21%  ONLINE    -
#
Why
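Before forcing anything on a DEGRADED pool that won't open, it usually pays to see which device is throwing the I/O errors; a hedged pair of Solaris-side checks, using the pool name from the snippet:

zpool status -v oradata_fs1   # per-vdev state and any files with permanent errors
fmdump -eV | tail -40         # recent FMA error telemetry from the underlying devices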
2009 Oct 01
1
cachefile for snail zpool import mystery?
Hi, We are seeing more long delays in zpool import, say 4~5 or even 25~30 minutes, especially when backup jobs are running in the FC SAN where the LUNs reside (no iSCSI LUNs yet). On the same node, for LUNs of the same array, some pools take a few seconds to import but others take minutes; the pattern seems random to me so far. It was first noticed soon after being upgraded to Solaris 10 U6
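One variable worth ruling out, hedged since the message is cut off: an import without a cachefile has to probe every LUN the host can see, which can crawl while SAN backups are running; importing via the cachefile skips the device scan (pool name illustrative):

zpool set cachefile=/etc/zfs/zpool.cache mypool   # record the pool in the default cache
zpool import -c /etc/zfs/zpool.cache -a           # import from the cache instead of scanning every visible LUN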
2007 Nov 13
3
this command can cause zpool coredump!
In Solaris 10 U4, type:
-bash-3.00# zpool create -R filepool mirror /export/home/f1.dat /export/home/f2.dat
invalid alternate root ''
Segmentation Fault (core dumped)
-- This message posted from opensolaris.org
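Whatever the core dump itself is, the command as typed hands -R the pool name instead of a path; -R expects an absolute alternate-root directory before the pool name. A sketch of the presumably intended invocation, with /mnt as an arbitrary example altroot:

mkfile 100m /export/home/f1.dat /export/home/f2.dat                            # backing files for a throwaway test pool
zpool create -R /mnt filepool mirror /export/home/f1.dat /export/home/f2.dat   # -R wants an absolute altroot path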
2011 Apr 01
15
Zpool resize
Hi, A LUN is connected to Solaris 10u9 from a NetApp FAS2020a over iSCSI. I'm changing the LUN size on the NetApp and Solaris format sees the new value, but zpool still shows the old value. I tried zpool export and zpool import, but that didn't resolve my problem.
bash-3.00# format
Searching for disks...done
AVAILABLE DISK SELECTIONS:
0. c0d1 <DEFAULT cyl 6523 alt 2 hd 255 sec 63>
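A hedged sketch of the usual next step, assuming the pool version on 10u9 is new enough to support expansion (pool name is illustrative, the device name is taken from the format output above):

zpool set autoexpand=on mypool   # let the pool use newly grown LUN space automatically
zpool online -e mypool c0d1      # expand this vdev onto the resized LUN right away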
2007 Apr 14
1
Move data from the zpool (root) to a zfs file system
Hi List, As a ZFS newbie, I foolishly copied my data set to the root zpool file system (a large iSCSI SAN array). Thus:
# zpool create -f iscsi c4t19d0 c4t20d0 c4t21d0 c4t22d0 c4t23d0 c4t24d0
# zpool list
NAME   SIZE   USED  AVAIL  CAP  HEALTH  ALTROOT
iscsi  9.53T  64.5K  5.34T   0%  ONLINE  -
# zfs set mountpoint=/mydisks/iscsi iscsi
Then copied
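A common way out, sketched under the assumption that the files sit directly in the pool's root dataset and that 'data' is a hypothetical name of your choosing: create a child file system and move the data into it. The move crosses a dataset boundary, so it is a real copy rather than a rename:

zfs create iscsi/data                                   # child file system, mounts at /mydisks/iscsi/data
cd /mydisks/iscsi
for f in *; do [ "$f" = data ] || mv "$f" data/; done   # move everything except the new mountpoint itself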
2007 Apr 30
4
need some explanation
Hi, OS: Solaris 10 11/06. zpool list doesn't reflect pool usage stats instantly. Why?
# ls -l
total 209769330
-rw------T 1 root root 107374182400 Apr 30 14:28 deleteme
# zpool list
NAME  SIZE  USED  AVAIL  CAP  HEALTH  ALTROOT
wo    136G  100G  36.0G  73%  ONLINE  -
# rm deleteme
# zpool list
NAME  SIZE
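If the question is why the USED column lags the rm, the short answer is that ZFS frees blocks asynchronously after the deleting transaction group commits; a hedged way to watch the accounting catch up on the same pool:

rm deleteme
sync            # push the current transaction group to disk
sleep 5         # freed blocks are reclaimed asynchronously over the next few txgs
zpool list wo   # usage should now have dropped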
2009 Jan 25
2
Unable to destroy a pool
# zpool list
NAME            SIZE  USED  AVAIL  CAP  HEALTH  ALTROOT
jira-app-zpool  272G  330K   272G   0%  ONLINE  -
The following command hangs forever, and if I reboot the box, zpool list again shows the pool as ONLINE, as in the output above.
# zpool destroy -f jira-app-zpool
How can I get rid of this pool and any reference to it?
bash-3.00# zpool status
pool: jira-app-zpool
state: UNAVAIL
2008 Aug 22
2
zpool autoexpand property - HowTo question
I noted this PSARC thread with interest: Re: zpool autoexpand property [PSARC/2008/353 Self Review], because it so happens that during a recent disk upgrade on a laptop I migrated a zpool off of one partition onto a slightly larger one, and I'd like to somehow tell ZFS to grow the zpool to fill the new partition. So, what's the best way to do this? (and is it
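A sketch of the two usual routes, hedged because the behaviour depends on the build, and with a hypothetical pool name: on builds with the PSARC 2008/353 bits the property makes growth automatic, while on older builds re-opening the devices has commonly been used to pick up the larger partition.

zpool set autoexpand=on mypool               # newer builds: grow onto the larger partition automatically
zpool export mypool && zpool import mypool   # older builds: re-open the devices so the new size is seen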
2006 Jun 12
3
ZFS + Raid-Z pool size incorrect?
I'm seeing odd behaviour when I create a ZFS raidz pool using three disks. The output of "zpool status" shows the pool size as the size of the three disks combined (as if it were a RAID-0 volume). This isn't expected behaviour, is it? When I create a mirrored volume in ZFS, everything is as one would expect: the pool is the size of a single drive. My setup: Compaq
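For what it's worth, this matches the documented reporting split rather than a RAID-0 layout: for raidz, zpool list shows raw capacity including parity, while zfs list shows the space datasets can actually use. Comparing the two views makes the difference obvious (pool name hypothetical):

zpool list tank   # raw capacity of all three disks, parity space included
zfs list tank     # usable capacity, roughly two disks' worth for a 3-disk raidz1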
2008 Mar 27
5
[Bug 871] New: 'zpool key -l' core dumped with keysource=hex,prompt and unmatched entered in
http://defect.opensolaris.org/bz/show_bug.cgi?id=871
Summary: 'zpool key -l' core dumped with keysource=hex,prompt and unmatched entered in
Classification: Development
Product: zfs-crypto
Version: unspecified
Platform: Other
OS/Version: Windows
Status: NEW
Severity: minor
2008 Jul 15
1
Cannot share RW, "Permission Denied" with sharenfs in ZFS
Hi everyone, I have just installed Solaris and have added a 3x500GB raidz drive array. I am able to use this pool ('tank') successfully locally, but when I try to share it remotely, I can only read; I cannot execute or write. I didn't do anything other than the default 'zfs set sharenfs=on tank'... how can I get it so that any allowed user can access
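Two things worth checking, sketched with a hypothetical client host name since the real one isn't in the snippet: sharenfs accepts the same options as share_nfs, so read-write access (and root mapping, if wanted) can be granted explicitly, and ordinary UNIX permissions still apply on top of the share.

zfs set sharenfs='rw,root=client1' tank   # explicit read-write share options; client1 is a hypothetical NFS client
ls -ld /tank                              # confirm the directory permissions allow the remote users to write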
2009 Dec 15
7
ZFS Dedupe reporting incorrect savings
Hi, Created a zpool with a 64k recordsize and enabled dedup on it:
zpool create -O recordsize=64k TestPool device1
zfs set dedup=on TestPool
I copied files onto this pool over NFS from a Windows client. Here is the output of zpool list:
Prompt:~# zpool list
NAME      SIZE  ALLOC  FREE  CAP  DEDUP  HEALTH  ALTROOT
TestPool  696G  19.1G  677G   2%  1.13x  ONLINE  -
When I ran a
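One detail that often makes the savings look wrong, hedged since the rest of the message is cut off: ALLOC is the post-dedup (physical) figure, so the logical amount written is roughly ALLOC multiplied by the dedup ratio. These commands show the ratio and the dedup-table breakdown behind it:

zpool get dedupratio TestPool   # the same 1.13x ratio shown in the DEDUP column
zdb -DD TestPool                # dedup table histogram: unique vs. duplicated blocks and their sizes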
2011 Jun 06
3
Available space confusion
I recently created a raidz of four 2TB disks and moved a bunch of movies onto them. And then I noticed that I've somehow lost a full TB of space. Why?
nebol@filez:/$ zfs list tank2
NAME   USED   AVAIL  REFER  MOUNTPOINT
tank2  3.12T   902G  32.9K  /tank2
nebol@filez:/$ zpool list tank2
NAME   SIZE   USED   AVAIL  CAP  HEALTH  ALTROOT
tank2  5.44T  4.18T  1.26T  76%  ONLINE  -
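A hedged reading of those numbers: zpool list reports raw capacity including parity, while zfs list deducts it, and for a 4-disk raidz1 the parity share is one disk in four, so nothing has actually been lost:

echo '5.44 * 3 / 4' | bc -l   # raw size minus one disk of parity in a 4-disk raidz1: about 4.08T
echo '3.12 + 0.902' | bc -l   # the zfs list view, USED plus AVAIL: about 4.02T, the same space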
2010 Sep 10
3
zpool upgrade and zfs upgrade behavior on b145
Not sure what the best list to send this to is right now, so I have selected a few; apologies in advance. A couple of questions. First, I have a physical host (call him bob) that was just installed with b134 a few days ago. I upgraded to b145 using the instructions on the Illumos wiki yesterday. The pool has been upgraded (27) and the zfs file systems have been upgraded (5). chris@bob:~# zpool
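For anyone retracing the steps, these are the stock commands for checking what got upgraded and what a given build supports (pool name illustrative):

zpool upgrade -v          # list every pool version this build understands
zpool get version rpool   # current version of a given pool
zfs upgrade               # list file systems still running older ZFS versions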
2011 Aug 14
4
Space usage
I'm just uploading all my data to my server and the space used is much more than what I'm uploading:
Documents = 147MB
Videos = 11G
Software = 1.4G
By my calculations, that equals 12.547G, yet zpool list is showing 21G as being allocated:
NAME   SIZE   ALLOC  FREE   CAP  DEDUP  HEALTH  ALTROOT
dpool  27.2T  21.2G  27.2T   0%  1.00x  ONLINE  -
It doesn't look like
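If the extra ~9G isn't accounted for by raidz parity and metadata overhead, the per-dataset space breakdown usually points at snapshots or reservations; a quick, hedged place to look:

zfs list -o space -r dpool      # split USED into data, snapshots, reservations and children
zfs list -t snapshot -r dpool   # any snapshots quietly holding on to space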
2007 Oct 14
1
odd behavior from zpool replace.
I've got a little zpool with a naughty raidz vdev that won't take a replacement that, as far as I can tell, should be adequate. A history: this could well be some bizarro edge case, as the pool doesn't have the cleanest lineage. Initial creation happened on NexentaCP inside VMware on Linux. I had given the virtual machine raw device access to 4 500GB drives and 1 ~200GB
2010 Jan 21
1
Zpool is a bit Pessimistic at failures
Hello, Has anyone else noticed that zpool is kind of negative when reporting back from some error conditions? Like:
cannot import 'zpool01': I/O error
Destroy and re-create the pool from a backup source.
or even worse:
cannot import 'rpool': pool already exists
Destroy and re-create the pool from a backup source.
The first one I
2008 Jun 01
1
capacity query
Hi, My swap is on raidz1. df -k and swap -l are showing almost no usage of swap, while zfs list and zpool list are showing 96% capacity. Which should I believe? Justin
# df -hk
Filesystem          size  used  avail  capacity  Mounted on
/dev/dsk/c3t0d0s1    14G  4.0G    10G       28%  /
/devices              0K    0K     0K        0%  /devices
ctfs
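A likely reconciliation, assuming swap lives on a zvol in that raidz1 pool (the dataset name below is hypothetical): a swap zvol carries a refreservation for its full size, so zfs list and zpool list count that space as used even while swap -l and df show it nearly empty.

zfs get volsize,refreservation,usedbyrefreservation pool/swap   # dataset name hypothetical
swap -l                                                         # actual swap usage, in 512-byte blocks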
2010 Feb 10
5
zfs receive: is this expected?
amber ~ # zpool list data
NAME  SIZE  ALLOC  FREE  CAP  DEDUP  HEALTH  ALTROOT
data  930G   295G  635G  31%  1.00x  ONLINE  -
amber ~ # zfs send -RD data@prededup | zfs recv -d ezdata
cannot receive new filesystem stream: destination 'ezdata' exists
must specify -F to overwrite it
amber ~ # zfs send -RD data@prededup | zfs recv -d ezdata/data
cannot receive:
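The first failure is what the error text says it is: with -d the stream's top-level file system maps onto the existing 'ezdata', so recv wants -F before it will overwrite it. A hedged sketch of that variant, noting that -F rolls the destination back and discards any changes made there since the last common snapshot:

zfs send -RD data@prededup | zfs recv -dF ezdata   # -F overwrites the existing destination; use with care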
2008 Jun 17
6
mirroring zfs slice
Hi All, I had a slice with a ZFS file system which I want to mirror. I followed the procedure mentioned in the admin guide, but I am getting this error. Can you tell me what I did wrong?
root # zpool list
NAME    SIZE  USED  AVAIL  CAP  HEALTH  ALTROOT
export  254G  230K   254G   0%  ONLINE  -
root # echo | format
Searching for disks...done
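For reference, turning a single-slice pool into a mirror is done with zpool attach, existing device first and a new, equal-or-larger slice second; since the actual error and device names are cut off in the snippet, the slice names below are hypothetical:

zpool attach export c1t0d0s7 c1t1d0s7   # existing slice first, new slice second (names hypothetical)
zpool status export                     # watch the resilver run to completion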