Displaying 20 results from an estimated 9000 matches similar to: "Administration Guide bug?"
2007 Nov 13 (3)  zpool status cannot detect a removed vdev?
I created a file-backed zpool like this:
bash-3.00# zpool status
  pool: filepool
 state: ONLINE
 scrub: none requested
config:

        NAME              STATE     READ WRITE CKSUM
        filepool          ONLINE       0     0     0
          /export/f1.dat  ONLINE       0     0     0
          /export/f2.dat  ONLINE       0     0     0
          /export/f3.dat  ONLINE       0     0     0
        spares
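For anyone wanting to reproduce this kind of file-backed test pool, a minimal sketch; the file names and sizes here are illustrative, and it assumes the build accepts files as hot spares:

# create four 128 MB backing files (names are made up for this sketch)
mkfile 128m /export/f1.dat /export/f2.dat /export/f3.dat /export/f4.dat

# build the pool from the first three files and add the fourth as a hot spare
zpool create filepool /export/f1.dat /export/f2.dat /export/f3.dat \
    spare /export/f4.dat

# the spare should appear under a "spares" section in the status output
zpool status filepool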
2010 Apr 05 (3)  no hot spare activation?
While testing a zpool with a different storage adapter using my "blkdev"
device, I ran a test that made one disk unavailable -- all attempts to
read from it report EIO.
I expected my configuration (a 3-disk test, with 2 disks in a RAIDZ and a
hot spare) to activate the hot spare automatically. But I'm finding that
ZFS does not behave this way
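A sketch of the layout described above, with hypothetical device names; whether the spare is pulled in automatically depends on the FMA zfs-retire agent actually diagnosing the disk as faulted:

# two-disk raidz plus one hot spare (device names are hypothetical)
zpool create testpool raidz c1t0d0 c1t1d0 spare c1t2d0

# after the EIO errors start, check what FMA has diagnosed and
# whether the spare shows up as INUSE
fmadm faulty
zpool status -x testpool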
2007 Jun 09 (2)  zfs bug
dd if=/dev/zero of=sl1 bs=512 count=256000
dd if=/dev/zero of=sl2 bs=512 count=256000
dd if=/dev/zero of=sl3 bs=512 count=256000
dd if=/dev/zero of=sl4 bs=512 count=256000
zpool create -m /export/test1 test1 raidz /export/sl1 /export/sl2 /export/sl3
zpool add -f test1 /export/sl4
dd if=/dev/zero of=sl4 bs=512 count=256000
zpool scrub test1
Panic, with a message like the one in the attached image.
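If the pool survives long enough to be inspected before the panic, a quick check of what the scrub found (using the pool name from the commands above):

# report pool health plus any files with unrecoverable errors
zpool status -v test1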
2006 Jun 22 (1)  zfs snapshot restarts scrubbing?
Hi,
yesterday I implemented a simple hourly snapshot on my filesystems. I also
regularly initiate a manual "zpool scrub" on all my pools. Usually the
scrubbing will run for about 3 hours.
But after enabling hourly snapshots I noticed that the scrub is always
restarted whenever a new snapshot is created - so it basically never gets
the chance to finish:
# zpool scrub scratch
# zpool
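The interaction being reported can be reproduced with something like the following sketch; the snapshot name is made up, and on the affected builds the status output shows the scrub progress reset after the snapshot:

# start a long-running scrub
zpool scrub scratch

# create a snapshot while the scrub is still in progress
zfs snapshot scratch@hourly-test    # snapshot name is illustrative

# on the affected builds the scrub is shown restarting from the beginning
zpool status scratch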
2006 Mar 30 (39)  Proposal: ZFS Hot Spare support
As mentioned last night, we've been reviewing a proposal for hot spare
support in ZFS. Below you can find a current draft of the proposed
interfaces. This has not yet been submitted for ARC review, but
comments are welcome. Note that this does not include any enhanced FMA
diagnosis to determine when a device is "faulted". This will come in a
follow-on project, of which some
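For context, the spare-handling commands that eventually shipped look roughly like this; the draft interfaces in the proposal itself may have differed, and the pool and device names here are hypothetical:

# add a disk to an existing pool as a hot spare
zpool add tank spare c3t0d0

# manually bring the spare in for a failed device
zpool replace tank c1t4d0 c3t0d0

# return the spare to the spare list once the original disk is fixed
zpool detach tank c3t0d0

# remove an unused spare from the pool entirely
zpool remove tank c3t0d0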
2009 Oct 14 (14)  ZFS disk failure question
So, my Areca controller has been complaining via email about read errors on SATA channel 8 for a couple of days. The disk finally gave up last night at 17:40. I have to say, I really appreciate the Areca controller taking such good care of me.
For some reason, I wasn't able to log into the server last night or this morning, probably because my home dir was on the zpool with the failed disk
2006 Jan 27 (2)  Do I have a problem? (longish)
Hi,
To keep the story short, here is the situation: I have 4 disks in a ZFS/SVM config:
c2t9d0 9G
c2t10d0 9G
c2t11d0 18G
c2t12d0 18G
c2t11d0 is divided in two:
selecting c2t11d0
[disk formatted]
/dev/dsk/c2t11d0s0 is in use by zpool storedge. Please see zpool(1M).
/dev/dsk/c2t11d0s1 is part of SVM volume stripe:d11. Please see metaclear(1M).
/dev/dsk/c2t11d0s2 is in use by zpool storedge. Please
2007 Sep 13 (11)  How do I get my pool back?
After having to replace an internal RAID card in an X2200 (S10U3 in
this case), I can see the disks just fine - and can boot, so the data
isn't completely missing.
However, my zpool is gone.
# zpool status -x
  pool: storage
 state: FAULTED
status: One or more devices could not be opened.  There are insufficient
        replicas for the pool to continue functioning.
action: Attach the
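The truncated "action:" line aside, one common recovery path when device paths change after a controller swap is to export the pool and let ZFS rediscover its labels on import; a hedged sketch:

# drop the stale configuration, then rescan /dev/dsk for the pool's labels
zpool export storage
zpool import -d /dev/dsk storage

# confirm the pool came back healthy
zpool status storage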
2006 Jan 30 (4)  Adding a mirror to an existing single-disk zpool
Hello All,
I'm transitioning data off my old UFS partitions onto ZFS. I don't have a lot of space to duplicate the data, so I created a zpool, rsync'ed the data from UFS to the ZFS mount, and then repartitioned the UFS drive to have partitions that match the cylinder count of the ZFS one. The idea here is that once the data is over I wipe out UFS and then attach that partition to the
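The attach step being described would look something like this; the pool and partition names are hypothetical:

# turn the single-disk pool into a two-way mirror by attaching the
# freed-up ex-UFS partition to the existing device
zpool attach tank c0t0d0s7 c0t1d0s7

# resilvering starts automatically; watch it complete
zpool status tank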
2008 Aug 03 (1)  Scrubbing only checks used data?
Hi there,
I am currently evaluating OpenSolaris as a replacement for my Linux installations. I installed it as a Xen domU, so there is a remote chance that my observations are caused by Xen.
First, my understanding of "zpool scrub" is "OK, go ahead and rewrite each block of each device of the zpool".
Whereas "resilvering" means "Make
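For what it's worth, a scrub reads and verifies the checksums of allocated blocks rather than rewriting every block of every device; a quick way to see the "used data only" behaviour, assuming a pool named tank:

# note how much space is actually allocated in the pool
zpool list tank

# start a scrub; the progress reported by status covers only allocated data,
# not the raw size of the underlying devices
zpool scrub tank
zpool status tank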
2006 Oct 26 (2)  experiences with zpool errors and glm flipouts
Tonight I've been moving some of my personal data around on my
desktop system and have hit some on-disk corruption. As you may
know, I'm cursed, and so this had a high probability of ending badly.
I have two SCSI disks and use live upgrade, and I have a partition,
/aux0, where I tend to keep personal stuff. This is on an SB2500
running snv_46.
The upshot is that I have a slice
2006 Oct 24 (3)  determining raidz pool configuration
Hi all,
Sorry for the newbie question, but I've looked at the docs and haven't
been able to find an answer for this.
I'm working with a system where the pool has already been configured and
want to determine what the configuration is. I had thought that'd be
with zpool status -v <poolname>, but it doesn't seem to agree with the
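The usual commands for this, assuming a pool named tank; zpool status prints the vdev tree, so raidz groups and their member disks are shown nested under the pool:

# show the vdev layout (raidz groups, mirrors, spares, etc.)
zpool status -v tank

# capacity and health summary
zpool list tank

# on newer builds, the command history includes the original "zpool create" line
zpool history tank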
2006 May 09 (3)  Possible corruption after disk hiccups...
I'm not sure exactly what happened with my box here, but something caused a hiccup on multiple SATA disks...
May 9 16:40:33 sol scsi: [ID 107833 kern.warning] WARNING: /pci@0,0/pci10de,5c@9/pci-ide@a/ide@0 (ata6):
May 9 16:47:43 sol scsi: [ID 107833 kern.warning] WARNING: /pci@0,0/pci-ide@7/ide@1 (ata3):
May 9 16:47:43 sol timeout: abort request, target=0
2006 Jun 15 (4)  devid support for EFI partition improved zfs usability
Hi, guys,
I have added devid support for EFI (not putback yet) and tested it with a
ZFS mirror; now the mirror can recover even when a USB hard disk is unplugged
and replugged into a different USB port.
But there are still some things that need improvement. I'm far from a ZFS
expert, so correct me if I'm wrong.
First, ZFS should sense the hotplug event.
I use zpool status to check the status of the
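Until the hotplug handling discussed here is in place, the manual recovery steps look roughly like this; the pool and device names are hypothetical:

# if the replugged disk came back under the same path, just reopen it
zpool online mirrorpool c5t0d0

# if it came back under a different device name, point the mirror at the new one
zpool replace mirrorpool c5t0d0 c6t0d0

zpool status mirrorpool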
2009 Oct 23 (7)  cryptic vdev name from fmdump
This morning we got a fault management message from one of our production servers stating that a fault in one of our pools had been detected and fixed. Looking into the error using fmdump gives:
fmdump -v -u 90ea244e-1ea9-4bd6-d2be-e4e7a021f006
TIME                 UUID                                 SUNW-MSG-ID
Oct 22 09:29:05.3448 90ea244e-1ea9-4bd6-d2be-e4e7a021f006 FMD-8000-4M Repaired
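To get past the summary line, a hedged sketch: the verbose error log carries the pool and vdev GUIDs, which can then be matched against the pool configuration:

# full payload of the underlying error reports, including vdev GUIDs
fmdump -eV | less

# compare the GUIDs against the current pool layout
zpool status -v
zdb -C            # dumps the cached pool configuration, GUIDs included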
2006 Oct 18 (5)  ZFS and IBM sdd (vpath)
Hello, I am trying to configure ZFS with IBM SDD. SDD is a multipathing driver like PowerPath, MPxIO, or VxDMP.
Here is the error message when I try to create my pool:
bash-3.00# zpool create tank /dev/dsk/vpath1a
warning: device in use checking failed: No such device
internal error: unexpected error 22 at line 446 of ../common/libzfs_pool.c
bash-3.00# zpool create tank /dev/dsk/vpath1c
cannot open
2009 Jan 20 (2)  hot spare not so hot?
I have configured a test system with a mirrored rpool and one hot spare. I
powered the system off and pulled one of the disks from rpool to simulate a
hardware failure.
The hot spare is not activating automatically. Is there something more I
should have done to make this work?
  pool: rpool
 state: DEGRADED
status: One or more devices could not be opened.  Sufficient replicas exist
        for
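If the spare never activates on its own, it can be brought in by hand; a sketch with hypothetical slice names (automatic activation relies on the FMA zfs-retire agent noticing the faulted disk):

# see whether FMA has actually diagnosed the pulled disk as faulted
fmadm faulty

# manually substitute the hot spare for the missing mirror half
zpool replace rpool c1t1d0s0 c1t2d0s0

# the spare should now show as INUSE and resilver
zpool status rpool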
2007 Apr 27 (2)  Scrubbing a zpool built on LUNs
I'm building a system with two Apple RAIDs attached. I have hardware RAID5 configured, so no RAIDZ or RAIDZ2, just a basic zpool pointing at the four LUNs representing the four RAID controllers. For ongoing maintenance, will a zpool scrub be of any benefit? From what I've read, with this layer of abstraction ZFS is only maintaining the metadata and not the actual data on the
2006 May 16 (8)  ZFS recovery from a disk losing power
Running b37 on amd64. After removing power from a disk configured as part of
a mirror, 10 minutes have passed and ZFS has still not offlined it.
# zpool status tank
  pool: tank
 state: ONLINE
status: One or more devices has experienced an unrecoverable error.  An
        attempt was made to correct the error.  Applications are unaffected.
action: Determine if the device needs to be replaced, and clear
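Until the fault diagnosis catches up, the dead mirror half can be taken out of service by hand; a sketch with a hypothetical device name:

# explicitly offline the disk that lost power
zpool offline tank c2t1d0

# once it has power again, bring it back and clear the error counters
zpool online tank c2t1d0
zpool clear tank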
2011 Apr 24 (2)  zfs problem: vdev I/O failure
Good morning, I have a problem with ZFS:
ZFS filesystem version 4
ZFS storage pool version 15
Yesterday my machine running FreeBSD 8.2-RELENG shut down with an "ad4 detached" error while I was copying a big file...
and after the reboot two WD Green 1 TB drives said goodbye. One of them died and the other shows ZFS errors:
Apr 24 04:53:41 Flash root: ZFS: vdev I/O failure, zpool=zroot path= offset=187921768448 size=512 error=6