Displaying 5 results from an estimated 5 matches for "rincebrain".
2009 May 13
2
With RAID-Z2 under load, machine stops responding to local or remote login
Hi world,
I have a 10-disk RAID-Z2 system with 4 GB of DDR2 RAM and a 3 GHz Core 2 Duo.
It's exporting ~280 filesystems over NFS to about half a dozen machines.
Under some loads (in particular, any attempts to rsync between another
machine and this one over SSH), the machine's load average sometimes
goes insane (27+), and it appears to all be in kernel-land (as nothing
in
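One way to confirm that the time really is going to the kernel, sketched here with standard Solaris observability tools and nothing specific to this box, is to watch per-CPU system time and per-process microstates while the rsync runs:

# mpstat 5
# prstat -m 5

A high sys column in mpstat, combined with processes spending most of their time in SYS or LAT under prstat -m, points at kernel-side work or contention rather than anything in user space.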
2009 Jul 16
1
An amusing scrub
Today, I ran a scrub on my rootFS pool.
I received the following lovely output:
# zpool status larger_root
  pool: larger_root
 state: ONLINE
 scrub: scrub completed after 307445734561825856h29m with 0 errors on
        Wed Jul 15 21:49:02 2009
config:

        NAME          STATE     READ WRITE CKSUM
        larger_root   ONLINE       0     0     0
          c4t1d0s0    ONLINE       0     0     0
errors: No
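The size of that figure is a clue in itself. 307445734561825856 hours and 29 minutes works out to exactly 2^64 - 227 minutes, which is what a small negative elapsed time (about minus 227 minutes) becomes once it is read back as an unsigned 64-bit minute count. That suggests the recorded scrub end time landed slightly before the start time and the subtraction wrapped around; this is an inference from the arithmetic rather than anything confirmed in the thread, and it is easy to check with bc:

# echo '(2^64 - 227) / 60' | bc
307445734561825856
# echo '(2^64 - 227) % 60' | bc
29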
2006 Oct 31
1
ZFS thinks my 7-disk pool has imaginary disks
Hi all,
I recently created a RAID-Z1 pool out of a set of 7 SCSI disks, using the
following command:
# zpool create magicant raidz c5t0d0 c5t1d0 c5t2d0 c5t3d0 c5t4d0 c5t5d0
c5t6d0
It worked fine, but I was slightly confused by the size yield (99 GB vs the
116 GB I had on my other RAID-Z1 pool of same-sized disks).
I thought one of the disks might have been to blame, so I tried swapping it
out
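Before blaming a disk, one quick sanity check (assuming the pool imported cleanly) is to compare the raw and usable views of the pool. zpool list counts the parity device in its SIZE column, while zfs list shows the space left over after parity, so for a 7-disk raidz1 the usable figure should come out to roughly 6/7 of the raw one, and two pools reported by different tools can look mismatched even when both are healthy:

# zpool list magicant
# zfs list magicant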
2009 Apr 23
1
Unexpectedly poor 10-disk RAID-Z2 performance?
Hail, Caesar.
I've got a 10-disk RAID-Z2 backed by the 1.5 TB Seagate drives
everyone's so fond of. They've all received a firmware upgrade (the
sane one, not the one that caused your drives to brick if the internal
event log hit the wrong number on boot).
They're attached to an ARC-1280ML, a reasonably good SATA controller,
which has 1 GB of ECC DDR2 for
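A generic first check, independent of the controller, is whether the slowness is spread evenly across the ten drives or concentrated on one or two of them, since a single slow disk drags every raidz2 stripe down with it. The per-device views show this directly:

# zpool iostat -v 5
# iostat -xn 5

One drive with a much higher asvc_t or %b than its siblings in the iostat output is the usual suspect.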
2006 Nov 01
0
RAID-Z1 pool became faulted when a disk was removed.
So I have attached to my system two 7-disk SCSI arrays, each of 18.2 GB
disks.
Each of them is a RAID-Z1 zpool.
I had a disk I thought was a dud, so I pulled the fifth disk in my array and
put the dud in. Sure enough, Solaris started spitting errors like there was
no tomorrow in dmesg, and wouldn't use the disk. Ah well. Remove it, put the
original back in - hey, Solaris still thinks
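For completeness, the generic recovery path when a raidz1 member has been pulled and reinserted, assuming the original disk itself is healthy and with the pool and device names below left as placeholders, is to bring the device back online or resilver onto it explicitly:

# zpool status -v
# zpool online <pool> <device>
# zpool replace <pool> <device>

zpool online asks ZFS to put the device back into service if its label still checks out, and zpool replace forces a resilver onto it when ZFS no longer trusts what is on the disk.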