Displaying 19 results from an estimated 19 matches for "0h0m".
2006 Jun 22
1
zfs snapshot restarts scrubbing?
...ing will run for about 3 hours.
But after enabling hourly snapshots I noticed that zfs scrub is always
restarted if a new snapshot is created - so basically it will never have the
chance to finish:
# zpool scrub scratch
# zpool status scratch | grep scrub
scrub: scrub in progress, 1.90% done, 0h0m to go
# zfs snapshot scratch@test
# zpool status scratch | grep scrub
scrub: scrub stopped with 0 errors on Thu Jun 22 18:15:49 2006
# zpool status scratch | grep scrub
scrub: scrub in progress, 7.36% done, 0h1m to go
# zfs snapshot scratch@test2
# zpool status scratch | grep scrub
scru...
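Since `zpool status` reports scrub progress as plain text, a minimal sketch for pulling the percent-done figure out in a script, assuming the line format shown in the session above (in practice the line would come from `zpool status <pool> | grep scrub` rather than a hard-coded sample):

```shell
# Extract the percent-done figure from a 'zpool status' scrub line.
# The sample line is copied from the session above; live use would
# capture it with:  zpool status scratch | grep scrub
status_line='scrub: scrub in progress, 1.90% done, 0h0m to go'
pct=$(printf '%s\n' "$status_line" | sed -n 's/.*, \([0-9.]*\)% done.*/\1/p')
echo "$pct"   # prints 1.90
```

A cron job creating hourly snapshots could check this value first and skip the snapshot while a scrub is still in flight.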
2010 Oct 04
8
Can I "upgrade" a striped pool of vdevs to mirrored vdevs?
Hi,
some time ago I created a zpool of single vdevs without any mirroring. Now I wonder if it's possible to add disks and mirror the currently existing vdevs.
Thanks,
budy
--
This message posted from opensolaris.org
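The usual answer here is `zpool attach`, which pairs a new disk with an existing single-disk vdev and turns it into a two-way mirror. Pool and device names below are hypothetical, and the commands are echoed as a dry run so the sketch is safe to paste:

```shell
# Hypothetical pool/device names, echoed as a dry run.
# 'zpool attach <pool> <existing> <new>' converts a single-disk vdev
# into a two-way mirror; repeat once per existing vdev.
# $pair is left unquoted on purpose so it splits into two arguments.
for pair in 'c1t0d0 c2t0d0' 'c1t1d0 c2t1d0'; do
  echo zpool attach mypool $pair
done
```

Removing the leading `echo` runs the real commands; each vdev then resilvers onto its new mirror half.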
2009 Jul 13
7
OpenSolaris 2008.11 - resilver still restarting
...the error. Applications are unaffected.
action: Determine if the device needs to be replaced, and clear the errors
using 'zpool clear' or replace the device with 'zpool replace'.
see: http://www.sun.com/msg/ZFS-8000-9P
scrub: resilver in progress for 0h0m, 0.00% done, 57h3m to go
config:
NAME STATE READ WRITE CKSUM
rc-pool DEGRADED 0 0 0
mirror DEGRADED 0 0 0
c4t1d0 ONLINE 0 0 0 5.56M resilvered
replacing DEGRADED...
2008 May 04
3
Some bugs/inconsistencies.
...pool offline test <disk3>
cannot offline <disk3>: no valid replicas
3. Resilver reported without a reason.
# zpool create test mirror disk0 disk1
# zpool offline test disk0
# zpool export test
# zpool import test
# zpool status test | grep scrub
scrub: resilver completed after 0h0m with 0 errors on Sun May 4 15:57:47 2008
What does ZFS try to resilver here? I verified that disk0 is not touched
(which is expected behaviour).
4. Inconsistent 'zpool status' output for log vdevs.
(I'll show only relevant parts of 'zpool status'....
2008 Aug 03
1
Scrubbing only checks used data?
...anding or a terrible case of RTFM?
Another irritating observation was that scrubbing starts, then stalls for a minute or so at 0.4-something percent, and then continues.
Any ideas / pointers / ... ?
Jens
---
bash-3.2# zpool status tank
pool: tank
state: ONLINE
scrub: scrub completed after 0h0m with 0 errors on Sun Aug 3 18:46:51 2008
config:
NAME STATE READ WRITE CKSUM
tank ONLINE 0 0 0
c4d1 ONLINE 0 0 0
errors: No known data errors
bash-3.2# uname -X
System = SunOS
Node = opensolaris
Release = 5.11
KernelID = snv_94
Machine =...
2010 Jul 05
5
never ending resilver
...i list,
Here's my case:
pool: mypool
state: DEGRADED
status: One or more devices is currently being resilvered. The pool will
continue to function, possibly in a degraded state.
action: Wait for the resilver to complete.
scrub: resilver in progress for 147h19m, 100.00% done, 0h0m to go
config:
NAME STATE READ WRITE CKSUM
filerbackup13 DEGRADED 0 0 0
raidz2 DEGRADED 0 0 0
c0t8d0 ONLINE 0 0 0
replacing DEGRADED 0 0 0
c0t9d0 OFFLI...
2008 Nov 24
2
replacing disk
...ne or more devices could not be opened. Sufficient replicas exist for
the pool to continue functioning in a degraded state.
action: Attach the missing device and online it using 'zpool online'.
see: http://www.sun.com/msg/ZFS-8000-2Q
scrub: resilver completed after 0h0m with 0 errors on Mon Nov 24 20:06:48 2008
config:
NAME STATE READ WRITE CKSUM
mypooladas DEGRADED 0 0 0
raidz2 DEGRADED 0 0 0
c4t2d0 ONLINE 0 0 0...
2009 Oct 14
14
ZFS disk failure question
So, my Areca controller has been complaining via email of read errors for a couple of days on SATA channel 8. The disk finally gave up last night at 17:40. I have to say, I really appreciate the Areca controller taking such good care of me.
For some reason, I wasn't able to log into the server last night or in the morning, probably because my home dir was on the zpool with the failed disk
2011 Feb 05
0
40MB repaired on a disk during scrub but no errors
...is is a 6 drive mirrored pool with a single SSD for L2Arc cache. Pool version 26 under Nexenta Core Platform 3.0 with a LSI 9200-16E and SATA disks.
$ zpool status bigboy
pool: bigboy
state: ONLINE
scan: scrub in progress since Sat Feb 5 02:22:18 2011
3.74T scanned out of 3.74T at 141M/s, 0h0m to go
37.9M repaired, 99.88% done
[-----config snip - all columns 0, one drive on the right has "(repairing)"----]
errors: No known data errors
And then once the scrub completes:
$ zpool status bigboy
pool: bigboy
state: ONLINE
scan: scrub repaired 37.9M in 7h42m with 0 errors o...
2013 Jul 18
0
Seeing data corruption with g_multipath utility
...tempt was made to correct the error. Applications are unaffected.
action: Determine if the device needs to be replaced, and clear the errors
using 'zpool clear' or replace the device with 'zpool replace'.
see: http://www.sun.com/msg/ZFS-8000-9P
scan: resilvered 27.2M in 0h0m with 0 errors on Thu Jul 4 19:47:44 2013
config:
NAME STATE READ WRITE CKSUM
mypool1 ONLINE 0 0 0
mirror-0 ONLINE 0 12 0
multipath/newdisk4 ONLINE 0 27 0
multipath/newdisk2 ONLINE...
2010 Apr 24
6
Extremely slow raidz resilvering
Hello everyone,
As one of the steps of improving my ZFS home fileserver (snv_134) I wanted
to replace a 1TB disk with a newer one of the same vendor/model/size because
this new one has 64MB cache vs. 16MB in the previous one.
The removed disk will be used for backups, so I thought it's better to
have a 64MB cache disk in the on-line pool than in the backup set sitting
off-line all
2010 Mar 19
3
zpool I/O error
Hi all,
I'm trying to delete a zpool and when I do, I get this error:
# zpool destroy oradata_fs1
cannot open 'oradata_fs1': I/O error
#
The pools I have on this box look like this:
#zpool list
NAME SIZE USED AVAIL CAP HEALTH ALTROOT
oradata_fs1 532G 119K 532G 0% DEGRADED -
rpool 136G 28.6G 107G 21% ONLINE -
#
Why
2010 May 02
8
zpool mirror (dumb question)
Hi there!
I am new to the list, and to OpenSolaris, as well as to ZFS.
I am creating a zpool/zfs to use on my NAS server, and basically I want some
redundancy for my files/media. What I am looking to do, is get a bunch of
2TB drives, and mount them mirrored, and in a zpool so that I don't have to
worry about running out of room. (I know, pretty typical I guess).
My problem is that
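The setup described, mirrored pairs of drives pooled together, maps directly onto ZFS commands. A dry-run sketch with hypothetical device names (the commands are echoed; drop the `echo` to run them):

```shell
# Hypothetical device names, echoed as a dry run. 'zpool create'
# builds a pool from one mirrored pair; 'zpool add' later grows the
# pool with another mirrored pair. Capacity is striped across the
# mirror vdevs, so the pool simply gets bigger as pairs are added.
echo zpool create tank mirror c1t0d0 c1t1d0
echo zpool add tank mirror c2t0d0 c2t1d0
```

Each pair survives one disk failure, and filesystems in the pool draw from the combined space without any manual partitioning.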
2010 Apr 05
3
no hot spare activation?
While testing a zpool with a different storage adapter using my "blkdev"
device, I did a test which made a disk unavailable -- all attempts to
read from it report EIO.
I expected my configuration (which is a 3 disk test, with 2 disks in a
RAIDZ and a hot spare) to work where the hot spare would automatically
be activated. But I'm finding that ZFS does not behave this way
2010 Oct 14
0
AMD/Supermicro machine - AS-2022G-URF
...0
c7t0d5 ONLINE 0 0 0
c7t0d6 ONLINE 0 0 0
c7t0d7 ONLINE 0 0 0
logs
c19d1 ONLINE 0 0 0
errors: No known data errors
pool: xx2
state: ONLINE
scan: scrub repaired 0 in 0h0m with 0 errors on Thu Oct 7 17:32:11 2010
config:
NAME STATE READ WRITE CKSUM
xx2 ONLINE 0 0 0
raidz2-0 ONLINE 0 0 0
c4t50014EE2AF54BB46d0 ONLINE 0 0...
2008 Jan 15
4
Moving zfs to an iSCSI Equallogic LUN
We have a mirror setup in ZFS that's 73GB (two internal disks on a Sun Fire V440). We are going to attach this system to an Equallogic box, and will attach an iSCSI LUN of about 200GB from the Equallogic box to the V440. The Equallogic box is configured as a hardware RAID 50 (two hot spares for redundancy).
My question is: what's the best approach to moving the ZFS
2008 Dec 17
11
zpool detach on non-mirrored drive
I'm using zfs not to have a fail-safe backed-up system, but to easily manage my file system. I would like to be able, as I buy new hard drives, to simply replace the old ones. I'm very environmentally conscious, so I don't want to leave old drives in there consuming power when they've already been replaced by larger ones. However, ZFS
2010 Nov 23
14
ashift and vdevs
zdb -C shows an ashift value on each vdev in my pool; I was just wondering if
it is vdev specific, or pool wide. Google didn't seem to know.
I'm considering a mixed pool with some "advanced format" (4KB sector)
drives, and some normal 512B sector drives, and was wondering if the ashift
can be set per vdev, or only per pool. Theoretically, this would save me
some size on
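For reference, ashift is just the base-2 logarithm of the sector size a vdev is created with (512-byte sectors give ashift=9, 4KB "advanced format" sectors give ashift=12), and `zdb -C <pool>` prints it for each vdev. A tiny sketch of the relationship:

```shell
# ashift = log2(sector size). Computed by repeated halving since
# POSIX shell arithmetic has no log2. 512B -> 9, 4KB -> 12.
sector=4096
ashift=0
s=$sector
while [ "$s" -gt 1 ]; do
  s=$((s / 2))
  ashift=$((ashift + 1))
done
echo "sector=$sector ashift=$ashift"   # prints: sector=4096 ashift=12
```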
2011 Jul 15
22
Zil on multiple usb keys
This might be a stupid question, but here goes... Would adding, say, four 4GB or 8GB USB keys as a ZIL make enough of a difference for writes on an iSCSI shared volume?
I am finding reads are not too bad (40-ish MB/s over GigE on 2 500GB drives striped) but writes top out at about 10 and drop a lot lower... If I were to add a couple of USB keys for ZIL, would it make a difference?
Thanks.
Sent from a
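For context, a separate log (slog) device only absorbs synchronous writes, which is exactly what iSCSI traffic tends to generate, so the idea isn't unreasonable in principle; the question is whether USB keys are fast enough. The command itself is simple. A dry-run sketch with hypothetical device names (echoed, not executed):

```shell
# Hypothetical device names, echoed as a dry run. 'zpool add ... log'
# attaches a separate intent-log device; mirroring the slog guards
# against losing in-flight synchronous transactions if one key dies.
echo zpool add tank log mirror c5t0d0 c5t1d0
```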