search for: c3t0d0

Displaying 18 results from an estimated 18 matches for "c3t0d0".

2006 Jul 18
1
file access algorithm within pools
Hello, What is the access algorithm used within multi-component pools for a given pool, and does it change when one or more members of the pool become degraded? examples: zpool create mtank mirror c1t0d0 c2t0d0 mirror c3t0d0 c4t0d0 mirror c5t0d0 c6t0d0 or: zpool create ztank raidz c1t0d0 c2t0d0 c3t0d0 raidz c4t0d0 c5t0d0 c6t0d0 As files are created on the filesystem within these pools, are they distributed round-robin across the components, or do they stay with the first component till full then go to the next,...
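For anyone digging into the same question, the way writes actually spread across the top-level vdevs can be watched directly; a minimal sketch, assuming the mtank pool from the example above:

    # create the mirrored pool from the example
    zpool create mtank mirror c1t0d0 c2t0d0 mirror c3t0d0 c4t0d0 mirror c5t0d0 c6t0d0
    # then report per-vdev read/write operations and bandwidth every 5 seconds while writing files
    zpool iostat -v mtank 5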
2008 Oct 08
1
Troubleshooting ZFS performance with SIL3124 cards
...is okay, but on the second the disks are at 100 %w... and it stays at 100 %w for a few seconds. us sy wt id 0 34 0 66 extended device statistics r/s w/s kr/s kw/s wait actv wsvc_t asvc_t %w %b device 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0 0 c3t0d0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0 0 c4t0d0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0 0 c0t0d0 0.0 400.9 0.0 49560.1 14.1 0.5 35.2 1.2 47 48 c5t0d0 0.0 156.0 0.0 18327.1 4.6 0.2 29.5 1.1 17 18 c5t1d0 0.0 7.0 0.0 13...
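The %w and asvc_t figures quoted above are iostat's extended device statistics; a sketch of how to collect them while reproducing the stall (the 5-second interval is an assumption):

    # extended per-device statistics with cXtYdZ names, repeated every 5 seconds
    iostat -xn 5
    # add -z to suppress devices that show no activity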
2009 Dec 16
27
zfs hanging during reads
...tended device statistics r/s w/s kr/s kw/s wait actv wsvc_t asvc_t %w %b device 1.5 2.7 98.1 29.4 0.1 0.0 18.6 8.3 1 2 c0d0 3.9 0.6 220.9 57.9 0.0 0.0 5.6 1.2 0 0 c6d1 8.1 0.0 456.8 0.0 0.0 0.0 0.4 0.5 0 0 c3t0d0 12.8 0.0 717.7 0.0 5.5 0.2 433.1 14.6 18 19 c3t1d0 4.9 0.0 279.1 0.0 0.0 0.0 0.7 0.5 0 0 c3t2d0 4.9 0.0 279.0 0.0 0.0 0.0 0.9 0.6 0 0 c3t3d0 c0d0 is the rpool. c6d1 is a 1.5TB drive connected to a Sil3114 controller (backup1 pool) c3t...
2008 Jul 15
1
Cannot share RW, "Permission Denied" with sharenfs in ZFS
...1.36T 46.1G 1.31T 3% ONLINE - root@mosasaur:/# zpool status pool: tank state: ONLINE scrub: none requested config: NAME STATE READ WRITE CKSUM tank ONLINE 0 0 0 raidz1 ONLINE 0 0 0 c3t0d0 ONLINE 0 0 0 c3t1d0 ONLINE 0 0 0 c3t2d0 ONLINE 0 0 0 errors: No known data errors root@mosasaur:/# zfs get all tank NAME PROPERTY VALUE SOURCE tank type filesystem - tank creati...
2008 Jul 15
2
Cannot share RW, "Permission Denied" with sharenfs in ZFS
...1.36T 46.1G 1.31T 3% ONLINE - root@mosasaur:/# zpool status pool: tank state: ONLINE scrub: none requested config: NAME STATE READ WRITE CKSUM tank ONLINE 0 0 0 raidz1 ONLINE 0 0 0 c3t0d0 ONLINE 0 0 0 c3t1d0 ONLINE 0 0 0 c3t2d0 ONLINE 0 0 0 errors: No known data errors root@mosasaur:/# zfs get all tank NAME PROPERTY VALUE SOURCE tank type filesystem - tank creati...
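Not part of the original posts, but the usual first checks for a "Permission Denied" on NFS writes from ZFS are the sharenfs options and the actual export list; a sketch, assuming the pool name tank and a hypothetical client host client1:

    # share read/write; root= additionally maps root access from that client
    zfs set sharenfs=rw tank
    zfs set sharenfs='rw,root=client1' tank
    # confirm what the server is really exporting
    share | grep tank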
2009 Jan 21
8
cifs performance
...5G 81 19 4.90M 63.9K ---------- ----- ----- ----- ----- ----- ----- tagra 2.58T 41.9G 0 0 0 0 raidz2 2.58T 41.9G 0 0 0 0 c5d1 - - 0 0 0 0 c7d0 - - 0 0 0 0 c3t0d0 - - 0 0 0 0 c4d0 - - 0 0 0 0 c4d1 - - 0 0 0 0 c5d0 - - 0 0 0 0 c8d0 - - 0 0 0 0 c8d1 - - 0 0 0...
2008 Jan 17
9
ATA UDMA data parity error
...was `cp -r /cdrom /tank/sxce_b78_disk1` - but this also fails `cp -r /usr /tank/usr` - system has 24 sata/sas drive bays, but only 12 of them are populated - system has three AOC-SAT2-MV8 cards plugged into 6 mini-sas backplanes - card1 ("c3") - bp1 (c3t0d0, c3t1d0) - bp2 (c3t4d0, c3t5d0) - card2 ("c4") - bp1 (c4t0d0, c4t1d0) - bp2 (c4t4d0, c4t5d0) - card3 ("c5") - bp1 (c5t0d0, c5t1d0) - bp2 (c5t4d0, c5t5d0) - system has one Barcelona Opteron (step BA...
2007 Jul 31
0
controller number mismatch
...0 $ zpool iostat -v 6 capacity operations bandwidth pool used avail read write read write ----------- ----- ----- ----- ----- ----- ----- zpool 930G 566G 16 114 41.5K 915K raidz1 631G 185G 11 71 30.9K 573K c3t0d0 - - 0 38 38.8K 120K c3t1d0 - - 0 42 38.8K 115K c3t2d0 - - 0 44 49.9K 120K c3t3d0 - - 0 43 33.3K 115K c3t4d0 - - 0 41 44.4K 120K c3t5d0 - - 0...
2007 Mar 07
0
anyone want a Solaris 10u3 core file...
...0 c3t2d0 ONLINE 0 0 0 raidz1 DEGRADED 0 0 0 c0t2d0 ONLINE 0 0 0 c0t3d0 ONLINE 0 0 0 c2t2d0 ONLINE 0 0 0 c3t3d0 ONLINE 0 0 0 c3t0d0 UNAVAIL 0 0 0 cannot open errors: No known data errors James Dickens uadmin.blogspot.com
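As a general note (not from the thread), a device reported UNAVAIL / cannot open is normally either brought back online or replaced; a sketch, with <pool> and <new-device> as placeholders:

    # if the disk reappeared (cabling, controller reset), try bringing it back
    zpool online <pool> c3t0d0
    # if the disk is dead, resilver onto a replacement
    zpool replace <pool> c3t0d0 <new-device>
    # verify the pool returns to a healthy state
    zpool status -x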
2011 Jul 21
0
Templates and self-knowledge
...sk all solaris all > > boot_device any preserve > > filesys rootdisk.s1 16384 swap > > filesys rootdisk.s0 40960 / > > filesys rootdisk.s7 free /export > > <% elsif zfs_root == "c3s" %> > > pool rootpool auto 16g 16g mirror c3t0d0s0 c3t4d0s0 > > fdisk c3t0d0 solaris all > > fdisk c3t4d0 solaris all > > <% else %> > > pool rootpool auto 16g 16g mirror <%= zfs_root %>t0d0s0 <%= zfs_root %>t1d0s0 > > fdisk <%= zfs_root %>t0d0 solaris all > > fdisk <%= zfs_r...
2006 Apr 06
15
A few Newbie questions about RAIDZ
1. I have a 4x18GB drive setup as RAIDZ. Now when thinking about it in terms of RAID5 I would expect to get (4-1)x18 worth of drive space, but df -h shows 4x18. Is this a bug or do I not understand? 2. Once again thinking in RAID5 terms if I have 4x18GB and 12x9GB drives and I want to make a RAIDZ of all of them I would expect the 18GB to be treated as 9GB so the RAIDZ would be 16x9GB. Is
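As a rough rule of thumb (not from the thread), raidz1 usable space is about (N - 1) times the smallest member; a worked sketch of the arithmetic, treating the sizes as exact:

    # 4 x 18 GB raidz1:             (4 - 1) x 18 GB = 54 GB usable
    # 4 x 18 GB + 12 x 9 GB raidz1: every member counts as the smallest (9 GB),
    #                               so (16 - 1) x 9 GB = 135 GB usable
    # note: zpool list reports raw capacity including parity, which can look larger than the usable space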
2012 Jun 13
0
ZFS NFS service hanging on Sunday morning problem
...; > : > scan: none requested > > : > config: > > : > > > : > NAME STATE READ WRITE CKSUM > > : > pptank ONLINE 0 0 0 > > : > raidz1-0 ONLINE 0 0 0 > > : > c3t0d0 ONLINE 0 0 0 > > : > c3t1d0 ONLINE 0 0 0 > > : > c3t2d0 ONLINE 0 0 0 > > : > c3t3d0 ONLINE 0 0 0 > > : > c3t4d0 ONLINE 0 0 0 > > : >...
2007 Jul 12
9
Again ZFS with expanding LUNs!
...then the resize of the LUN was performed within the SAN > format -e > Searching for disks...done > > > AVAILABLE DISK SELECTIONS: > 0. c1t0d0 <drive type unknown> > /pci@0,0/pci108e,cb84@2/storage@5/disk@0,0 > 1. c3t0d0 <DEFAULT cyl 8872 alt 2 hd 255 sec 63> > /pci@0,0/pci1022,7458@11/pci1000,3060@4/sd@0,0 > 2. c5t600508B4000104ED0001600001430000d0 <COMPAQ-HSV110 (C)COMPAQ-3028-20.00GB> > /scsi_vhci/disk@g600508b4000104ed0001600001430000 >...
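For what it's worth, on later ZFS releases growing a pool onto a resized LUN is usually a relabel followed by an expand; a sketch that assumes the autoexpand property is available (it postdates this thread):

    # relabel the device so the new size becomes visible
    format -e          # select the disk and write a new label
    # then let the pool grow into the added space
    zpool set autoexpand=on <pool>
    zpool online -e <pool> c3t0d0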
2007 Oct 08
16
Fileserver performance tests
...ion will be the Helios UB+ fileserver suite. I installed the latest Solaris 10 on an x4200 with 8 GB of RAM and two Sun SAS controllers, attached two SAS JBODs with 8 SATA HDDs each and created a zfs pool as a raid 10 by doing something like the following: zpool create zfs_raid10_16_disks mirror c3t0d0 c4t0d0 mirror c3t1d0 c4t1d0 mirror c3t2d0 c4t2d0 mirror c3t3d0 c4t3d0 mirror c3t4d0 c4t4d0 mirror c3t5d0 c4t5d0 mirror c3t6d0 c4t6d0 mirror c3t7d0 c4t7d0 then I set "noatime" and ran the following filebench tests: root@sun1 # ./filebench filebench> load fileserver 12746:...
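A small aside on the "noatime" step mentioned above: on ZFS it is a dataset property rather than a mount option; a sketch, assuming the pool name used in the post:

    # disable access-time updates on the top-level dataset (inherited by children)
    zfs set atime=off zfs_raid10_16_disks
    zfs get atime zfs_raid10_16_disks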
2008 Dec 17
12
disk utilization is over 200%
Hello, I use Brendan's sysperfstat script to see the overall system performance and found that the disk utilization is over 100%: 15:51:38 14.52 15.01 200.00 24.42 0.00 0.00 83.53 0.00 15:51:42 11.37 15.01 200.00 25.48 0.00 0.00 88.43 0.00 ------ Utilisation ------ ------ Saturation ------ Time %CPU %Mem %Disk %Net CPU Mem
2006 Mar 30
39
Proposal: ZFS Hot Spare support
As mentioned last night, we've been reviewing a proposal for hot spare support in ZFS. Below you can find a current draft of the proposed interfaces. This has not yet been submitted for ARC review, but comments are welcome. Note that this does not include any enhanced FMA diagnosis to determine when a device is "faulted". This will come in a follow-on project, of which some
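For context, the hot-spare interfaces that eventually shipped look roughly like the following; a sketch, not a quote from the proposal draft:

    # create a pool with a shared hot spare
    zpool create tank mirror c1t0d0 c2t0d0 spare c3t0d0
    # add a spare to an existing pool
    zpool add tank spare c4t0d0
    # a spare attaches automatically on failure; after the bad disk is repaired or replaced,
    # detach whichever device should leave the mirror
    zpool detach tank c3t0d0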
2006 Jul 31
20
ZFS vs. Apple XRaid
...Raid-5 disk, I tried to create a single zpool on that disk. Again, the same performance problems as noted earlier. Then I partitioned the disk into a 100 GB partition and tried to create a zpool on that. Again, no luck. Performance still stinks. FWIW, format reports the xraid disks as: 2. c3t0d0 <APPLE-Xserve RAID-1.50-2.73TB> /pci@0,0/pci1022,7450@b/pci1000,1010@1,1/sd@0,0 3. c3t1d0 <APPLE-Xserve RAID-1.50-2.73TB> /pci@0,0/pci1022,7450@b/pci1000,1010@1,1/sd@1,0 4. c3t2d0 <APPLE-Xserve RAID-1.26-745.21GB>...
2008 Jul 25
18
zfs, raidz, spare and jbod
...repaired. scrub: resilver in progress, 0.02% done, 5606h29m to go config: NAME STATE READ WRITE CKSUM ef1 DEGRADED 0 0 0 raidz2 DEGRADED 0 0 0 spare ONLINE 0 0 0 c3t0d0p0 ONLINE 0 0 0 c3t1d2p0 ONLINE 0 0 0 c3t0d1p0 ONLINE 0 0 0 c3t0d2p0 ONLINE 0 0 0 c3t0d0p0 FAULTED 35 1.61K 0 too many errors c3t0d4p0 ONLINE 0 0...
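As a general follow-up (not from the thread), once the resilver onto the spare completes, the faulted device is normally detached and the error counters cleared; a sketch, assuming the ef1 pool from the output:

    # wait until the resilver reports completed
    zpool status ef1
    # drop the faulted device, which promotes the spare to a permanent member
    zpool detach ef1 c3t0d0p0
    # reset the pool's error counters
    zpool clear ef1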