Displaying 7 results from an estimated 7 matches for "c9t3d0".
2010 Feb 27
1
slow zfs scrub?
        ...c8t6d0    ONLINE       0     0     0
           c8t7d0    ONLINE       0     0     0
         raidz2-2    ONLINE       0     0     0
           c9t0d0    ONLINE       0     0     0
           c9t1d0    ONLINE       0     0     0
           c9t2d0    ONLINE       0     0     0
           c9t3d0    ONLINE       0     0     0
           c9t4d0    ONLINE       0     0     0
           c9t5d0    ONLINE       0     0     0
           c9t6d0    ONLINE       0     0     0
        logs
         mirror-3    ONLINE       0     0     0
          c10d1s0    ONLINE       0     0     0
          c11d0...
2011 Feb 06
1
Drive id confusion
...confused about drive IDs. The "c5t0d0"
names are very far removed from the real world, and possibly they've
gotten screwed up somehow. Is devfsadm supposed to fix those, or does
it only delete excess?
Reason I believe it's confused:
zpool status shows mirror-0 on c9t3d0, c9t2d0, and c9t5d0. But format
shows the one remaining Seagate 400GB drive at c5t0d0 (my initial pool
was two of those; I replaced one with a Samsung 1TB earlier today). Now
the mirror with three drives in it is my very first mirror, which has to
have the one remaining Seagate drive in it (give...
2011 Feb 07
0
zfs-discuss Digest, Vol 64, Issue 21
...m running
> snv_134. I'm still pretty badly lost in the whole repository /
> package thing with Solaris, most of my brain cells were already
> occupied with Red Hat, Debian, and Perl package information :-( .
> Where do I look?
>
> Are the controller port IDs, the "C9T3D0" things that ZFS likes,
> reasonably stable? They won't change just because I add or remove
> drives, right; only maybe if I change controller cards?
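As an aside on the naming scheme the poster is asking about: a Solaris cXtYdZ name encodes the controller driver instance, the SCSI target ID, and the disk/LUN number (with an optional sN slice, and no tN part for IDE-attached disks such as the c10d1s0 log device above). A minimal sketch that splits such a name into its parts; the helper `parse_ctd` is hypothetical, not a Solaris tool:

```python
import re

def parse_ctd(name):
    """Split a Solaris cXtYdZ(sN) device name into its components.
    The tN target part is optional (IDE disks are named cXdYsN)."""
    m = re.fullmatch(r"c(\d+)(?:t(\d+))?d(\d+)(?:s(\d+))?", name)
    if m is None:
        raise ValueError(f"not a cXtYdZ name: {name}")
    ctrl, tgt, disk, sl = m.groups()
    return {
        "controller": int(ctrl),                    # controller instance
        "target": int(tgt) if tgt else None,        # SCSI target ID
        "disk": int(disk),                          # disk / LUN number
        "slice": int(sl) if sl is not None else None,
    }

print(parse_ctd("c9t3d0"))
# {'controller': 9, 'target': 3, 'disk': 0, 'slice': None}
```

Because the leading number is the controller's driver instance, the name generally survives adding or removing disks, but can indeed change if controllers are added, removed, or re-enumerated.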
2010 Nov 06
10
Apparent SAS HBA failure-- now what?
...1 data errors, use '-v' for a list
pool: uberdisk2
state: DEGRADED
scrub: scrub in progress for 3h3m, 32.26% done, 6h24m to go
config:
        NAME         STATE     READ WRITE CKSUM
        uberdisk2    DEGRADED     0     0     0
          raidz2     DEGRADED     0     0     0
            c9t3d0   ONLINE       0     0     0
            c9t4d0   ONLINE       0     0     0
            c9t5d0   ONLINE       0     0     0
            c10t3d0  REMOVED      0     0     0
            c10t4d0  REMOVED      0     0     0
            c11t3d0  ONLINE       0     0     0
            c11t4d0  ONLINE       0     0     0...
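Output in this shape is regular enough to check mechanically. A sketch (not a tool from the thread) that maps each name in a zpool status config block to its STATE column, using the excerpt above as sample data:

```python
# Sample config block, copied from the zpool status excerpt above.
ZPOOL_CONFIG = """\
NAME STATE READ WRITE CKSUM
uberdisk2 DEGRADED 0 0 0
raidz2 DEGRADED 0 0 0
c9t3d0 ONLINE 0 0 0
c9t4d0 ONLINE 0 0 0
c9t5d0 ONLINE 0 0 0
c10t3d0 REMOVED 0 0 0
c10t4d0 REMOVED 0 0 0
c11t3d0 ONLINE 0 0 0
c11t4d0 ONLINE 0 0 0
"""

def device_states(config_text):
    """Map each name in a 'zpool status' config section to its STATE."""
    states = {}
    for line in config_text.splitlines():
        fields = line.split()
        # skip the header row and bare section labels like 'logs'
        if len(fields) >= 2 and fields[0] != "NAME":
            states[fields[0]] = fields[1]
    return states

not_online = [name for name, state in device_states(ZPOOL_CONFIG).items()
              if state != "ONLINE"]
print(not_online)  # ['uberdisk2', 'raidz2', 'c10t3d0', 'c10t4d0']
```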
2010 Oct 19
8
Balancing LVOL fill?
          ...3 1.22M 56.1K
            c8t7d0     -      -     34      3  1.22M  56.1K
          raidz2    12.0T   631G    101      7  6.50M   294K
            c9t0d0     -      -     39      3  1.56M  58.7K
            c9t1d0     -      -     39      3  1.56M  58.7K
            c9t2d0     -      -     39      3  1.56M  58.7K
            c9t3d0     -      -     39      3  1.56M  58.7K
            spare      -      -    472     42  7.16M  83.9K
              c9t4d0   -      -     39      3  1.56M  58.7K
              c9t7d0   -      -      0    259      2  6.85M
            c9t5d0     -      -     39      3  1.56M  58.7K
            c9t6d0     -      -     38...
2010 Oct 16
4
resilver question
Hi all
I'm seeing some rather bad resilver times for a pool of WD Green drives (I know, bad drives, but leave that). Does resilver go through the whole pool or just the VDEV in question?
--
Vennlige hilsener / Best regards
roy
--
Roy Sigurd Karlsbakk
(+47) 97542685
roy at karlsbakk.net
http://blogg.karlsbakk.net/
--
In all pedagogy it is essential that the curriculum is presented
2010 Nov 01
6
Excruciatingly slow resilvering on X4540 (build 134)
Hello,
I'm working with someone who replaced a failed 1TB drive (50% utilized),
on an X4540 running OS build 134, and I think something must be wrong.
Last Tuesday afternoon, zpool status reported:
scrub: resilver in progress for 306h0m, 63.87% done, 173h7m to go
and a week being 168 hours, that put completion at sometime tomorrow night.
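For what it's worth, the numbers ZFS reported are internally consistent: with 306 hours elapsed at 63.87% done, a constant-rate extrapolation gives almost exactly the quoted "173h7m to go". A quick check (plain arithmetic, not part of the original post):

```python
# Figures reported by zpool status in the post above.
elapsed_h = 306.0   # hours of resilver so far
done = 0.6387       # 63.87% complete

# Assuming a constant resilver rate:
remaining_h = elapsed_h * (1 - done) / done
total_h = elapsed_h / done

print(f"{remaining_h:.1f}h to go, {total_h / 24:.1f} days total")
# 173.1h to go, 20.0 days total
```

Nearly three weeks to resilver one 1TB drive is what prompted the "something must be wrong" above.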
However, he just reported zpool status shows: