
Displaying 20 results from an estimated 1000 matches similar to: "ZFS vs UFS performance Using Different Raid Configurations"

2006 Jul 19
1
Q: T2000: raidctl vs. zpool status
Hi all, IHACWHAC (I have a colleague who has a customer - hello, if you're listening :-) who's trying to build and test a scenario where he can salvage the data off the (internal?) disks of a T2000 in case the system board, and with it the on-board RAID controller, dies. If I understood correctly, he replaces the motherboard, does some magic to get the raid config back, but even
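If the internal disks are presented to Solaris as plain devices (no hardware RAID volume) and the pool lives entirely on them, one hedged sketch of the salvage path looks like the following; the pool name "tank" is a placeholder, not taken from the thread:

# on the original T2000: confirm the disks are not bound into a hardware RAID volume
raidctl -l
# record the pool layout for reference
zpool status -v tank
# after the system board swap (new hostid), force-import the pool from the same disks
zpool import -f tank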
2007 Jan 11
4
Help understanding some benchmark results
G'day, all, So, I've decided to migrate my home server from Linux+swRAID+LVM to Solaris+ZFS, because it seems to hold much better promise for data integrity, which is my primary concern. However, naturally, I decided to do some benchmarks in the process, and I don't understand why the results are what they are. I thought I had a reasonable understanding of ZFS, but now
2010 Sep 07
3
zpool create using whole disk - do I add "p0"? E.g. c4t2d0 or c4t2d0p0
I have seen conflicting examples on how to create zpools using full disks. The zpool(1M) page uses "c0t0d0" but OpenSolaris Bible and others show "c0t0d0p0". E.g.:

zpool create tank raidz c0t0d0 c0t1d0 c0t2d0 c0t3d0 c0t4d0 c0t5d0
zpool create tank raidz c0t0d0p0 c0t1d0p0 c0t2d0p0 c0t3d0p0 c0t4d0p0 c0t5d0p0

I have not been able to find any discussion on whether (or when) to
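Giving zpool the bare name (c0t0d0) is the documented "whole disk" form: ZFS writes an EFI label and manages the disk itself, while c0t0d0p0 is the x86 fdisk-level view of the same disk and is not normally needed. A small sketch of how to confirm what a pool ended up with, reusing the pool name "tank" from the example above:

# after "zpool create tank raidz c0t0d0 ...", vdevs show up without a slice/partition suffix
zpool status tank
# the ZFS label written onto the EFI-labelled disk can be inspected directly
zdb -l /dev/dsk/c0t0d0s0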
2007 Apr 11
0
raidz2 another resilver problem
Hello zfs-discuss, One of the disks started to behave strangely:

Apr 11 16:07:42 thumper-9.srv sata: [ID 801593 kern.notice] NOTICE: /pci@1,0/pci1022,7458@3/pci11ab,11ab@1:
Apr 11 16:07:42 thumper-9.srv  port 6: device reset
Apr 11 16:07:42 thumper-9.srv scsi: [ID 107833 kern.warning] WARNING: /pci@1,0/pci1022,7458@3/pci11ab,11ab@1/disk@6,0 (sd27):
Apr 11 16:07:42 thumper-9.srv
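A hedged set of commands for checking whether the misbehaving disk (sd27 / disk@6 here) is actually accumulating errors before deciding to replace it; the pool name is not shown in the snippet, so -x is used to report only unhealthy pools:

# pool-level view of errors and the resilver state
zpool status -xv
# per-device soft/hard/transport error counters
iostat -En
# fault management telemetry for the disk and SATA port
fmdump -eV | tail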
2006 May 23
1
iostat numbers for ZFS disks, build 39
I updated an i386 system to b39 yesterday, and noticed this when running iostat:

r/s  w/s   kr/s  kw/s         wait  actv  wsvc_t  asvc_t  %w  %b  device
0.0  0.5   0.0   10.0         0.0   0.0   0.0     0.5     0   0   c0t0d0
0.0  0.5   0.0   10.0         0.0   0.0   0.0     0.6     0   0   c0t1d0
0.0  65.1  0.0   119640001.5  0.0   0.0   0.0     0.3     0   2   c0t2d0
0.0  65.1  0.0   119640090.2  0.0
2008 Apr 02
1
delete old zpool config?
Hi experts, zpool import shows some weird config of an old zpool:

bash-3.00# zpool import
  pool: data1
    id: 7539031628606861598
 state: FAULTED
status: One or more devices are missing from the system.
action: The pool cannot be imported. Attach the missing devices and try again.
   see: http://www.sun.com/msg/ZFS-8000-3C
config:
        data1  UNAVAIL  insufficient replicas
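The stale configuration lives in the ZFS labels on the disk itself, so the usual fix (a sketch only; the device behind the old "data1" pool is not shown above, c1t1d0 is a placeholder) is to overwrite those labels. Newer releases have zpool labelclear for this; older ones fall back to dd:

# newer builds: wipe the ZFS labels directly (be sure this is really the old data1 disk)
zpool labelclear -f /dev/dsk/c1t1d0s0
# older builds: zero the front of the device, which holds two of the four labels;
# the other two labels sit at the very end of the device and need the same treatment
dd if=/dev/zero of=/dev/rdsk/c1t1d0s0 bs=1024k count=1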
2009 Sep 26
5
raidz failure, trying to recover
Long story short, my cat jumped on my server at my house, crashing two drives at the same time. It was a 7-drive raidz (next time I'll do raidz2). The server crashed complaining about a drive failure, so I rebooted into single-user mode, not realizing that two drives had failed. I put in a new 500 GB replacement and had ZFS start a replace operation, which failed at about 2% because there were two broken
2006 Oct 24
3
determining raidz pool configuration
Hi all, Sorry for the newbie question, but I've looked at the docs and haven't been able to find an answer for this. I'm working with a system where the pool has already been configured and want to determine what the configuration is. I had thought that'd be with zpool status -v <poolname>, but it doesn't seem to agree with the
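zpool status -v does show the vdev tree (each raidz1/raidz2 group and its member disks); if more detail is needed, zdb can dump the cached configuration. A short sketch, with "tank" standing in for the unnamed pool:

# vdev tree: one raidz1/raidz2 group per line, members indented beneath it
zpool status -v tank
# cached on-disk configuration, including vdev GUIDs
zdb -C tank
# capacity and usage summary
zpool list tank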
2009 Aug 21
0
bug: zpool create allows the raw device of a full partition as a pool member
If you run Solaris or OpenSolaris, you might use c0t0d0 (for a SCSI disk) or c0d0 (for an IDE/SATA disk) as the system disk. By default, Solaris x86 and OpenSolaris use the raw device c0t0d0s0 (/dev/rdsk/c0t0d0s0) as the member device of rpool. In fact, there can be more than one Solaris2 fdisk partition on a hard disk, so we can also use a raw device like c0t0d0p1 (/dev/rdsk/c0t0d0p1)
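A hedged illustration of the two device forms being discussed, with made-up device and pool names: the slice device inside the Solaris2 fdisk partition that the installer normally uses, versus a whole fdisk-partition device:

# the usual x86 layout: a pool built on a slice inside the Solaris2 fdisk partition
zpool create testpool c0t0d0s0
# the behaviour being reported: a whole fdisk-partition device is also accepted as a member
zpool create testpool c0t0d0p1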
2007 Sep 08
1
zpool degraded status after resilver completed
I am curious why zpool status reports a pool to be in the DEGRADED state after a drive in a raidz2 vdev has been successfully replaced. In this particular case drive c0t6d0 was failing, so I ran:

zpool offline home c0t6d0
zpool replace home c0t6d0 c8t1d0

and after the resilvering finished the pool reports a degraded state. Hopefully this is incorrect. At this point, the question is whether the vdev in question now has
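For this sort of situation, one hedged cleanup sequence: if the replace completed but the old, offlined disk still appears under the raidz2 vdev, detaching it (or clearing the pool's error state) usually returns the pool to ONLINE. Device and pool names follow the message above:

# see exactly which device is keeping the vdev degraded
zpool status -v home
# if c0t6d0 still shows up alongside c8t1d0, detach the old disk
zpool detach home c0t6d0
# clear any lingering error counters on the pool
zpool clear home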
2008 Apr 11
0
How to replace root drive if ZFS data is on it?
Hi, Experts: A customer has an X4500 with the boot drives (c5t0d0s0 and c5t4d0s0) mirrored by SVM. ZFS uses two other partitions on these drives (c5t0d0s3 and c5t4d0s3). If we need to replace the disk drive c5t0d0, do we need to do anything on the ZFS side (c5t0d0s3 and c5t4d0s3) first, or just follow the regular boot drive replacement procedure? Below is the summary of their current ZFS
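One common order of operations, sketched under assumptions (the metadevice names d0/d10, the metadb slice s7, and the pool name "zpool1" are placeholders, since the customer's actual names aren't shown): take both the SVM submirrors and the ZFS slice on c5t0d0 out of service before pulling the disk, then rebuild both afterwards.

# ZFS side: take the failing slice out of service first
zpool offline zpool1 c5t0d0s3
# SVM side: detach this disk's submirror(s) and any state database replicas on it
metadetach d0 d10
metadb -d c5t0d0s7          # only if replicas live on this disk
# ...physically replace c5t0d0, then copy the label from the surviving drive
prtvtoc /dev/rdsk/c5t4d0s2 | fmthard -s - /dev/rdsk/c5t0d0s2
# make the new disk bootable again (X4500 is x86, so grub)
installgrub /boot/grub/stage1 /boot/grub/stage2 /dev/rdsk/c5t0d0s0
# rebuild both halves
metadb -a c5t0d0s7
metattach d0 d10
zpool replace zpool1 c5t0d0s3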
2008 Mar 12
3
Mixing RAIDZ and RAIDZ2 zvols in the same zpool
I have a customer who has implemented the following layout: As you can see, he has mostly raidz vdevs but has one raidz2 in the same zpool. What are the implications here? Is this a bad thing to do? Please elaborate. Thanks, Scott Gaspard Scott.J.Gaspard at Sun.COM

> NAME        STATE   READ WRITE CKSUM
> chipool1    ONLINE     0     0     0
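The main implication is mismatched redundancy between top-level vdevs: the pool is only as resilient as its weakest vdev, and zpool itself flags this, since adding a raidz2 vdev to a pool of raidz1 vdevs requires -f. A hedged illustration with made-up device names:

# without -f, zpool add refuses and warns about the mismatched replication level
zpool add chipool1 raidz2 c6t0d0 c6t1d0 c6t2d0 c6t3d0
# the warning can be overridden, which is presumably how this layout came about
zpool add -f chipool1 raidz2 c6t0d0 c6t1d0 c6t2d0 c6t3d0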
2008 Aug 25
5
Unable to import zpool since system hang during zfs destroy
Hi all, I have a RAID-Z zpool made up of 4 x SATA drives running on Nexenta 1.0.1 (OpenSolaris b85 kernel). It has on it some ZFS filesystems and a few volumes that are shared to various Windows boxes over iSCSI. On one particular iSCSI volume, I discovered that I had mistakenly deleted some files from the FAT32 partition that is on it. The files were still in a ZFS snapshot that was made earlier
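Setting the import hang aside, the usual way to get files back out of a snapshot of an iSCSI-shared volume is to clone the snapshot and present the clone as another LUN; a rough sketch with hypothetical dataset and snapshot names, assuming a build of that era that still uses the shareiscsi property:

# list snapshots of the volume (names here are placeholders)
zfs list -t snapshot -r tank/iscsivol
# clone the snapshot into a new, writable volume
zfs clone tank/iscsivol@before-delete tank/iscsivol-recovered
# share the clone over iSCSI, then mount its FAT32 partition from the Windows side
zfs set shareiscsi=on tank/iscsivol-recovered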
2009 Jun 19
8
x4500 resilvering spare taking forever?
I've got a Thumper running snv_57 and a large ZFS pool. I recently noticed a drive throwing some read errors, so I did the right thing and zfs replaced it with a spare. Everything went well, but the resilvering process seems to be taking an eternity:

# zpool status
  pool: bigpool
 state: ONLINE
status: One or more devices has experienced an unrecoverable error. An attempt was
2011 Jan 28
2
ZFS root clone problem
(For some reason I cannot find my original thread, so I'm reposting it.) I am trying to move my data off of a 40 GB 3.5" drive to a 40 GB 2.5" drive. This is in a Netra running Solaris 10. Originally what I did was: zpool attach -f rpool c0t0d0 c0t2d0. Then I did an installboot on c0t2d0s0. It didn't work. I was not able to boot from my second drive (c0t2d0). I cannot remember
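For a ZFS root pool on SPARC, the attach normally has to reference slices on an SMI-labelled disk rather than the whole disk, and the boot block goes on with installboot. A hedged sketch of the usual sequence, assuming the root pool lives in s0 on both drives:

# give the new disk an SMI label and copy the partition table from the old one
prtvtoc /dev/rdsk/c0t0d0s2 | fmthard -s - /dev/rdsk/c0t2d0s2
# attach the slice, not the whole disk, to the root pool
zpool attach -f rpool c0t0d0s0 c0t2d0s0
# after the resilver completes, install the ZFS boot block on the new disk
installboot -F zfs /usr/platform/`uname -i`/lib/fs/zfs/bootblk /dev/rdsk/c0t2d0s0
# then boot the second drive from the OBP prompt, e.g. "boot disk1"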
2010 Mar 17
0
checksum errors increasing on "spare" vdev?
Hi, One of my colleagues was confused by the output of ''zpool status'' on a pool where a hot spare is being resilvered in after a drive failure: $ zpool status data pool: data state: DEGRADED status: One or more devices is currently being resilvered. The pool will continue to function, possibly in a degraded state. action: Wait for the resilver to complete. scrub:
2007 Dec 03
2
Help replacing dual identity disk in ZFS raidz and SVM mirror
Hi, We have a number of 4200s set up using a combination of an SVM 4-way mirror and a ZFS raidz stripe. Each disk (of 4) is divided up like this:

/       6 GB    UFS   s0
swap    8 GB          s1
/var    6 GB    UFS   s3
metadb  50 MB   UFS   s4
/data   48 GB   ZFS   s5

For SVM we do a 4-way mirror on /, swap, and /var, so we have 3 SVM mirrors:
d0 = root (submirrors d10, d20, d30, d40)
d1 = swap (submirrors d11, d21, d31, d41)
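A hedged outline of the double-sided replacement, driven from the layout above; the disk name c1t2d0 is a placeholder, d0 and d1 are the mirrors listed above, and d3 for the /var mirror is an assumption since the listing is truncated before it:

# detach this disk's submirrors from each SVM mirror (disk 2's submirrors shown)
metadetach d0 d20
metadetach d1 d21
metadetach d3 d23
# remove its state database replicas from s4
metadb -d c1t2d0s4
# take the ZFS slice out of the raidz before pulling the disk ("datapool" is a placeholder)
zpool offline datapool c1t2d0s5
# ...swap the drive, then relabel it to match a surviving disk...
prtvtoc /dev/rdsk/c1t0d0s2 | fmthard -s - /dev/rdsk/c1t2d0s2
# rebuild: metadbs, submirrors, then the ZFS slice
metadb -a c1t2d0s4
metattach d0 d20
metattach d1 d21
metattach d3 d23
zpool replace datapool c1t2d0s5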
2009 Nov 17
1
upgrading to the latest zfs version
Hi guys, after reading the mailings yesterday I noticed someone was after upgrading to zfs v21 (deduplication). I'm after the same. I installed osol-dev-127 earlier, which comes with v19, and then followed the instructions on http://pkg.opensolaris.org/dev/en/index.shtml to bring my system up to date; however, the system reports that no updates are available and stays at zfs v19. Any ideas?
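One likely explanation is simply that the dev repository had not yet published a build newer than 127; dedup arrived with pool version 21 in build 128, so until that build is available image-update has nothing newer to offer. Also note that the pool version shown by zpool upgrade is separate from the filesystem version shown by zfs upgrade. A hedged check-and-upgrade sequence:

# confirm which publisher the image is pinned to and look for a newer build
pkg publisher
pkg image-update -v
# after booting the new BE, see which pool/filesystem versions the binaries support
zpool upgrade -v
zfs upgrade -v
# upgrade all pools and filesystems once v21+ is supported
zpool upgrade -a
zfs upgrade -a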
2011 Jan 04
0
zpool import hangs system
Hello, I've been using NexentaStor Community Edition with no issues for a while now. Last week I was going to rebuild a different system, so I started to copy all the data off it to a raidz2 volume on my CE system. This was going fine until I noticed that the copy had stalled and the entire system was non-responsive. I let it sit for several hours with no
2008 Mar 12
5
[Bug 752] New: zfs set keysource no longer works on existing pools
http://defect.opensolaris.org/bz/show_bug.cgi?id=752
Summary: zfs set keysource no longer works on existing pools
Classification: Development
Product: zfs-crypto
Version: unspecified
Platform: Other
OS/Version: Solaris
Status: NEW
Severity: blocker
Priority: P1
Component: other
AssignedTo: