search for: c0t0d0s7

Displaying 6 results from an estimated 6 matches for "c0t0d0s7".

2006 Jan 30 · 4 messages · Adding a mirror to an existing single disk zpool
Hello All, I'm transitioning data off my old UFS partitions onto ZFS. I don't have a lot of duplicate space, so I created a zpool, rsync'ed the data from UFS to the ZFS mount, and then repartitioned the UFS drive to have partitions that match the cylinder count of the ZFS. The idea here is that once the data is over I wipe out UFS and then attach that partition to the
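The final step this poster is working toward — attaching the freed slice so the single-disk vdev becomes a two-way mirror — can be sketched as below. The pool and device names are hypothetical, and the script is a dry run that only prints the commands rather than executing them:

```shell
#!/bin/sh
# Dry-run sketch of converting a single-disk zpool into a mirror.
# Names are hypothetical: pool "space" currently on c1t0d0s7, with the
# freed, same-sized UFS slice at c0t0d0s7.
run() { echo "$*"; }   # dry run: prints the command; swap in "$@" to execute

# Attach the new slice to the existing one; ZFS resilvers automatically,
# and the vdev becomes a two-way mirror once resilvering completes.
run zpool attach space c1t0d0s7 c0t0d0s7

# Watch resilver progress.
run zpool status space
```

`zpool attach` requires the new device to be at least as large as the existing one, which is why the poster matched the cylinder counts first.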
2007 Jul 26 · 8 messages · Read-only (forensic) mounts of ZFS
Hi, I'm looking into forensic aspects of ZFS, in particular ways to use ZFS tools to investigate ZFS file systems without writing to the pools. I'm working on a test suite of file system images within VTOC partitions. At the moment, these only have 1 file system per pool per VTOC partition for simplicity's sake, and I'm using Solaris 10 6/06, which may not
2010 Jan 13 · 3 messages · Recovering a broken mirror
...way to recover from this or are they SOL? Thanks in advance

# zpool status
no pools available
# zpool import
# ls /etc/zfs
# ls /dev/dsk
c0t0d0s0 c0t0d0s1 c0t0d0s2 c0t0d0s3 c0t0d0s4 c0t0d0s5 c0t0d0s6 c0t0d0s7
c1t0d0s0 c1t0d0s1 c1t0d0s2 c1t0d0s3 c1t0d0s4 c1t0d0s5 c1t0d0s6 c1t0d0s7
c1t1d0s0 c1t1d0s1 c1t1d0s2 c1t1d0s3 c1t1d0s4 c1t1d0s5 c1t1d0s6 c1t1d0s7
c1t2d0   c1t2d0s0 c1t2d0s1 c1t2d0s2 c1t2d0s3 c1t2d0s4 c1t2d0s5 c1t2d0s6
c1t3d0   c1t3d0s0 c1t3d0s1 c1t3d0s2 c1t3d0s3 c1t3d0s4 c1t3d0s5 c1t3d0s6
# format
Searching for disks...done
AVAILABLE DISK SELECTIONS:...
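When `zpool status` reports no pools but the device nodes still exist, as in the transcript above, the usual first step is to scan the devices for pool labels. A dry-run sketch (the pool name "tank" is hypothetical):

```shell
#!/bin/sh
# Dry-run sketch of the usual recovery steps when no pools are visible
# but device nodes remain. Pool name "tank" is an assumption.
show() { echo "$*"; }   # dry run: print the command instead of running it

# Scan all device nodes under /dev/dsk for ZFS pool labels; this lists
# any importable pools and their health without importing anything.
show zpool import -d /dev/dsk

# If the pool is listed, import it; -f forces the import when the pool
# was last used by another host and was never cleanly exported.
show zpool import -f tank
```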
2005 Oct 26 · 1 message · Error message with fbt::copen:entry probe
All, The attached script is causing the following error message ...

bash-3.00# ./zmon_bug.d
dtrace: error on enabled probe ID 2 (ID 4394: fbt:genunix:copen:entry): invalid address (0xfd91747f) in predicate at DIF offset 120
dtrace: error on enabled probe ID 2 (ID 4394: fbt:genunix:copen:entry): invalid address (0xfef81a3f) in predicate at DIF offset 120

Any ideas? Thanks, Joe
2006 Jun 20 · 3 messages · nevada_41 and zfs disk partition
...untar it on the zfs filesystem and the machine is brought to its knees. At times it appears that the system has hung. A Sol10 version of top shows that most of the CPU time is in the kernel (not surprising). The steps I used to create the pool/fs are basically the following:

# zpool create space /dev/dsk/c0t0d0s7
# zfs create space/src
# cd /space/src/
# gtar xzf thunderbird.tar.gz

Any ideas on how I can try and do a little debugging of this? Has anyone else seen this behavior? This message posted from opensolaris.org
2008 Mar 13 · 12 messages · 7-disk raidz achieves 430 MB/s reads and 220 MB/s writes on a $1320 box
...ool iostat -v 2"

                 capacity     operations    bandwidth
pool            used  avail   read  write   read  write
------------   -----  -----  -----  -----  -----  -----
tank           2.54T  2.17T  3.38K      0   433M      0
  raidz1       2.54T  2.17T  3.38K      0   433M      0
    c0t0d0s7       -      -  1.02K      0  61.9M      0
    c0t1d0s7       -      -  1.02K      0  61.9M      0
    c0t2d0s7       -      -  1.02K      0  62.0M      0
    c0t3d0s7       -      -  1.02K      0  62.0M      0
    c1t0d0s7       -      -  1.01K      0  61.9M      0
    c2t0d0s7       -      -  1.02K...