Displaying 16 results from an estimated 16 matches for "c4t4d0".
2006 Oct 24
3
determining raidz pool configuration
...0
c1t6d0 ONLINE 0 0 0
c1t7d0 ONLINE 0 0 0
c4t0d0 ONLINE 0 0 0
c4t1d0 ONLINE 0 0 0
c4t2d0 ONLINE 0 0 0
c4t3d0 ONLINE 0 0 0
c4t4d0 ONLINE 0 0 0
c4t5d0 ONLINE 0 0 0
c4t6d0 ONLINE 0 0 0
c4t7d0 ONLINE 0 0 0
c5t1d0 ONLINE 0 0 0
c5t2d0 ONLINE 0 0 0
c5t3d0 ONLINE 0...
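A minimal sketch of how a raidz layout like this can be inspected, assuming a placeholder pool name "tank"; each raidz/raidz2 line in the status output groups the disks of one top-level vdev:
# show how disks are grouped into vdevs
zpool status tank
# per-vdev I/O statistics show the same grouping
zpool iostat -v tank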
2010 Jan 22
0
Removing large holey file does not free space 6792701 (still)
...eopened, but I post it here since some people were seeing something similar.
Example and attached zdb output:
filer01a:/$ uname -a
SunOS filer01a 5.11 snv_130 i86pc i386 i86pc Solaris
filer01a:/$ zpool create zpool01 raidz2 c4t0d0 c4t1d0 c4t2d0 c4t4d0 c4t5d0 c4t6d0
filer01a:/$ zfs list zpool01
NAME USED AVAIL REFER MOUNTPOINT
zpool01 123K 5.33T 42.0K /zpool01
filer01a:/$ df -h /zpool01...
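A hedged way to reproduce the check on the pool created above: make a sparse ("holey") file, remove it, and compare the space accounting before and after; the file name and size are placeholders.
# create a sparse file without allocating disk blocks (-n), then remove it
mkfile -n 100g /zpool01/holey
rm /zpool01/holey
# USED/AVAIL should return to their previous values once the delete completes
zfs list zpool01
df -h /zpool01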
2008 Nov 24
2
replacing disk
...STATE READ WRITE CKSUM
mypooladas DEGRADED 0 0 0
raidz2 DEGRADED 0 0 0
c4t2d0 ONLINE 0 0 0
c4t3d0 ONLINE 0 0 0
c4t4d0 ONLINE 0 0 0
c4t5d0 ONLINE 0 0 0
c4t8d0 UNAVAIL 0 0 0 cannot open
c4t9d0 ONLINE 0 0 0
c4t10d0 ONLINE 0 0...
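A minimal sketch of the usual replacement step for the unavailable disk; c4t11d0 is a hypothetical name for a replacement that appears at a different target.
# replace in place once a new disk sits at the same target
zpool replace mypooladas c4t8d0
# or name the new device explicitly if it shows up elsewhere
zpool replace mypooladas c4t8d0 c4t11d0
# then watch the resilver
zpool status mypooladas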
2010 Jan 17
1
raidz2 import, some slices, some not
...versions.
scrub: none requested
config:
NAME STATE READ WRITE CKSUM
tank ONLINE 0 0 0
raidz2-0 ONLINE 0 0 0
c5t4d0 ONLINE 0 0 0
c3t5d0p0 ONLINE 0 0 0
c4t4d0 ONLINE 0 0 0
c3t2d0p0 ONLINE 0 0 0
c4t6d0 ONLINE 0 0 0
c5t6d0p0 ONLINE 0 0 0
c4t7d0p0 ONLINE 0 0 0
errors: No known data errors
Thanks for any help.
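One hedged thing to try when some members import as p0 slices is to re-import the pool while pointing zpool at the device directory explicitly; "tank" is the pool name from the status output above.
# export (if currently imported) and re-import, searching /dev/dsk for labels
zpool export tank
zpool import -d /dev/dsk tank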
2009 Feb 12
1
strange 'too many errors' msg
...0 0
spare DEGRADED 0 0 0
c6t6d0 DEGRADED 0 0 0 too many errors
c4t0d0 ONLINE 0 0 0
c7t6d0 ONLINE 0 0 0
...
spares
c4t0d0 INUSE currently in use
c4t4d0 AVAIL
The strange thing is that for more than 3 months not a single error was
logged against any drive. IIRC, before u4 I've occasionally seen a bad
checksum error message, but that was obviously the result of the
well-known race condition in the marvell driver when heavy writes took plac...
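If the drive turns out to be healthy after checking cabling and firmware, a hedged cleanup sketch looks like this; <pool> is a placeholder since the pool name is cut off above.
# clear the error counters on the suspect drive
zpool clear <pool> c6t6d0
# once c6t6d0 is back online and clean, return the spare to the spare list
zpool detach <pool> c4t0d0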
2007 Apr 11
0
raidz2 another resilver problem
...c1t2d0 ONLINE 0 0 0
c5t2d0 ONLINE 0 0 0
c6t2d0 ONLINE 0 0 0
c7t2d0 ONLINE 0 0 0
c0t4d0 ONLINE 0 0 0
c1t4d0 ONLINE 0 0 0
c4t4d0 ONLINE 0 0 0
c6t4d0 ONLINE 0 0 0
c7t4d0 ONLINE 0 0 0
c0t3d0 ONLINE 0 0 0
c1t3d0 ONLINE 0 0 0
raidz2 ONLINE 0 0 0
c4t3d0...
2009 Jul 13
7
OpenSolaris 2008.11 - resilver still restarting
Just look at this. I thought all the restarting resilver bugs were fixed, but it looks like something odd is still happening at the start:
Status immediately after starting resilver:
# zpool status
pool: rc-pool
state: DEGRADED
status: One or more devices has experienced an unrecoverable error. An
attempt was made to correct the error. Applications are unaffected.
action: Determine
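A simple hedged way to see whether the resilver really restarts is to poll the progress line; the one-minute interval is arbitrary.
# log the resilver progress line once a minute
while true; do
  date
  zpool status rc-pool | grep 'scrub:'
  sleep 60
done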
2008 Apr 02
1
delete old zpool config?
...0
c7t3d0 ONLINE 0 0 0
raidz1 ONLINE 0 0 0
c0t4d0 ONLINE 0 0 0
c0t5d0 ONLINE 0 0 0
c1t4d0 ONLINE 0 0 0
c1t5d0 ONLINE 0 0 0
c4t4d0 ONLINE 0 0 0
c4t5d0 ONLINE 0 0 0
c5t5d0 ONLINE 0 0 0
c6t4d0 ONLINE 0 0 0
c6t5d0 ONLINE 0 0 0
c7t4d0 ONLINE 0 0 0
c7t5d0 ONLINE 0...
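If the stale configuration still shows up in the output of zpool import, one hedged way to clear it out is to import and destroy it; "oldpool" is a placeholder, and -f is only needed if the pool was never exported.
# list old pools whose labels are still on disk
zpool import
# import the stale pool and destroy it so it no longer appears
zpool import -f oldpool
zpool destroy oldpool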
2009 Jan 13
12
OpenSolaris better than Solaris10u6 with regard to ARECA Raid Card
...state: ONLINE
scrub: none requested
config:
NAME STATE READ WRITE CKSUM
backup ONLINE 0 0 0
raidz1 ONLINE 0 0 0
c4t2d0 ONLINE 0 0 0
c4t3d0 ONLINE 0 0 0
c4t4d0 ONLINE 0 0 0
c4t5d0 ONLINE 0 0 0
c4t6d0 ONLINE 0 0 0
c4t7d0 ONLINE 0 0 0
c4t8d0 ONLINE 0 0 0
raidz1 ONLINE 0 0 0
c4t9d0 ONLINE...
2010 Dec 05
4
Zfs ignoring spares?
...c8t35d0 ONLINE 0 0 0
c4t0d0 ONLINE 0 0 0
raidz2-5 ONLINE 0 0 0
c4t1d0 ONLINE 0 0 0
c4t2d0 ONLINE 0 0 0
c4t3d0 ONLINE 0 0 0
c4t4d0 ONLINE 0 0 0
c4t5d0 ONLINE 0 0 0
c4t6d0 ONLINE 0 0 0
c4t7d0 ONLINE 0 0 0
raidz2-6 ONLINE 0 0 0
c4t8d0 ONLINE 0 0 0
c4t9d0...
2008 Mar 12
3
Mixing RAIDZ and RAIDZ2 zvols in the same zpool
...ONLINE 0 0 0
> c4t7d0 ONLINE 0 0 0
> c5t7d0 ONLINE 0 0 0
> c6t7d0 ONLINE 0 0 0
> c7t7d0 ONLINE 0 0 0
> spares
> c4t4d0 AVAIL
> c7t4d0 AVAIL
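For reference, a hedged sketch of how a pool mixing both vdev types is created; zpool refuses mismatched replication levels unless -f is given, and the device names below are placeholders.
# mixing raidz and raidz2 top-level vdevs requires -f
zpool create -f tank \
  raidz  c4t0d0 c5t0d0 c6t0d0 c7t0d0 \
  raidz2 c4t1d0 c5t1d0 c6t1d0 c7t1d0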
2008 Jan 17
9
ATA UDMA data parity error
...only 12 of them all populated
- system has three AOC-SAT2-MV8 cards plugged into 6 mini-sas backplanes
- card1 ("c3")
- bp1 (c3t0d0, c3t1d0)
- bp2 (c3t4d0, c3t5d0)
- card2 ("c4")
- bp1 (c4t0d0, c4t1d0)
- bp2 (c4t4d0, c4t5d0)
- card3 ("c5")
- bp1 (c5t0d0, c5t1d0)
- bp2 (c5t4d0, c5t5d0)
- system has one Barcelona Opteron (step BA)
- the one with the potential look-aside cache bug...
- though it's not clear this is related...
My First Thought To...
2007 Feb 27
16
understanding zfs/thumper "bottlenecks"?
Currently I'm trying to figure out the best zfs layout for a thumper with respect to read AND write performance.
I did some simple mkfile 512G tests and found that on average ~500 MB/s seems to be the maximum one can reach (tried the initial default setup, all 46 HDDs as R0, etc.).
According to http://www.amd.com/us-en/assets/content_type/DownloadableAssets/ArchitectureWP_062806.pdf I would
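A hedged sketch of the kind of test described above, assuming a pool mounted at /tank; ptime reports elapsed time so the throughput can be computed by hand.
# sequential write test: create a 512 GB file and time it
ptime mkfile 512g /tank/testfile
# throughput in MB/s = 512 * 1024 / elapsed seconds
rm /tank/testfile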
2009 Jun 19
8
x4500 resilvering spare taking forever?
...'zpool replace'.
see: http://www.sun.com/msg/ZFS-8000-9P
scrub: resilver in progress, 4.66% done, 12h16m to go
config:
NAME STATE READ WRITE CKSUM
bigpool ONLINE 0 0 0
raidz2 ONLINE 0 0 0
c4t4d0 ONLINE 0 0 0
c7t4d0 ONLINE 0 0 0
c6t4d0 ONLINE 0 0 0
c1t4d0 ONLINE 0 0 0
c0t4d0 ONLINE 0 0 0
c4t0d0 ONLINE 0 0 0
c7t0d0 ON...
2007 Oct 08
16
Fileserver performance tests
...ith 8 GB of RAM and two Sun SAS controllers, attached two SAS JBODs with 8 SATA HDDs each, and created a zfs pool as a raid 10 by doing something like the following:
zpool create zfs_raid10_16_disks mirror c3t0d0 c4t0d0 mirror c3t1d0 c4t1d0 mirror c3t2d0 c4t2d0 mirror c3t3d0 c4t3d0 mirror c3t4d0 c4t4d0 mirror c3t5d0 c4t5d0 mirror c3t6d0 c4t6d0 mirror c3t7d0 c4t7d0
then I set "noatime" and ran the following filebench tests:
root at sun1 # ./filebench
filebench> load fileserver
12746: 7.445: FileServer Version 1.14 2005/06/21 21:18:52 personality successfully loaded
12746: 7...
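For completeness, a hedged sketch of the remaining interactive filebench steps after the personality loads; the target directory (the pool's default mountpoint) and the 60-second run length are assumptions.
filebench> set $dir=/zfs_raid10_16_disks
filebench> run 60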
2006 Nov 09
16
Some performance questions with ZFS/NFS/DNLC at snv_48
Hello.
We're currently using a Sun Blade1000 (2x750MHz, 1 GB RAM, 2x160MB/s mpt
scsi buses, skge GigE network) as a NFS backend with ZFS for
distribution of free software like Debian (cdimage.debian.org,
ftp.se.debian.org) and have run into some performance issues.
We are running SX snv_48 and have run with a 7x300G raidz2 for a
while now, and just added another 7x300G raidz2 today but
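For reference, a hedged sketch of how a second raidz2 vdev is added to an existing pool; the pool and device names are placeholders.
# grow the pool with a second 7-disk raidz2 top-level vdev
zpool add tank raidz2 c2t0d0 c2t1d0 c2t2d0 c2t3d0 c2t4d0 c2t5d0 c2t6d0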