search for: sol10u8

Displaying 7 results from an estimated 7 matches for "sol10u8".

2010 Jun 18
6
WD Caviar/mpt issues
...generating lots of retryable read errors, spitting out lots of the beloved "Log info 31080000 received for target" messages, and just generally not working right. (SM 836EL1 and 836TQ chassis - though I have several variations on the theme depending on date of purchase: 836EL2s, 846s and 847s - sol10u8, 1.26/1.29/1.30 LSI firmware on LSI retail 3801 and 3081E controllers. Not that it works any better on the brace of 9211-8is I also tried these drives on.) Before signing up for the list, I "accidentally" bought a wad of Caviar Black 2TBs. No, they are new enough to not respond to WDTL...
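For anyone seeing the same mpt noise, a quick way to gauge how often these events fire is to tally them in the system log; a minimal sketch (the message text is taken verbatim from the post, and /var/adm/messages is the Solaris default log location):

    # count mpt "Log info" events per day across the rotated system logs
    cat /var/adm/messages* | \
        grep 'Log info 31080000 received for target' | \
        awk '{print $1, $2}' | sort | uniq -c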
2010 Apr 10
21
What happens when an unmirrored ZIL log device is removed ungracefully
Due to recent experiences, and discussion on this list, my colleague and I performed some tests: Using Solaris 10, fully upgraded. (zpool version 15 is the latest, which does not have the log device removal that was introduced in zpool version 19.) If you lose an unmirrored log device in any way possible, the OS will crash, and the whole zpool is permanently gone, even after reboots. Using OpenSolaris,
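Given that failure mode, the usual precaution on a pre-version-19 pool is to attach the slog as a mirror rather than as a single device; a minimal sketch, with hypothetical pool and device names:

    # add the ZIL as a mirrored pair instead of a lone log device
    zpool add tank log mirror c4t0d0 c4t1d0
    # confirm the log vdev is listed as a mirror
    zpool status tank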
2011 Jun 15
1
ZFS Filesystem Quota under Solaris 10 and SPARC
Hello. Filesystem quotas used to work well under Solaris 9 and UFS filesystems on the SPARC platform, even with two rules for the folders in the users' home directories /home/group/user and the separate filesystem /var/mail holding the inboxes: plugin { quota = fs:Home-Verzeichnis:noenforcing quota2 = fs:INBOX:noenforcing:mount=/var/mail } Since we upgraded last year to Solaris 10
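For readability, here are the quota rules quoted above as they would sit in dovecot.conf; this is only a re-layout of the fragment in the post (Home-Verzeichnis, "home directory", is the poster's own quota-root name):

    plugin {
      # filesystem quota on the home directories, reporting only
      quota = fs:Home-Verzeichnis:noenforcing
      # second quota root for the inboxes on the separate /var/mail filesystem
      quota2 = fs:INBOX:noenforcing:mount=/var/mail
    }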
2010 Aug 30
5
pool died during scrub
I have a bunch of sol10U8 boxes with ZFS pools, almost all raidz2 8-disk stripes. They're all Supermicro-based with retail LSI cards. I've noticed a tendency for things to go a little bonkers during the weekly scrub (they all scrub over the weekend), and that's when I'll lose a disk here an...
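A weekend scrub like the one described is typically just a root cron job; a minimal sketch with a hypothetical pool name:

    # root's crontab: start the weekly scrub every Saturday at 02:00
    0 2 * * 6 /usr/sbin/zpool scrub tank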
2010 Apr 27
42
Performance drop during scrub?
Hi all, I have a test system with snv_134 and 8 x 2TB drives in RAIDz2, and currently no ZIL or L2ARC. I noticed the I/O speed to NFS shares on the testpool drops to something hardly usable while scrubbing the pool. How can I address this? Will adding a ZIL or L2ARC help? Is it possible to tune down the scrub's priority somehow? Best regards, roy -- Roy Sigurd Karlsbakk (+47) 97542685 roy at
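One commonly cited knob from that era is the scrub I/O throttle; whether a given build exposes it varies, so treat the tunable name as an assumption to verify before writing anything:

    # check that the tunable exists and read its current value
    echo 'zfs_scrub_delay/D' | mdb -k
    # raise the per-I/O scrub delay (decimal 8) to deprioritize scrubbing
    echo 'zfs_scrub_delay/W0t8' | mdb -kw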
2009 Dec 02
10
Separate ZIL on HDD?
Hi all, I have a home server based on snv_127 with 8 disks: a 2 x 500GB mirrored root pool and a 6 x 1TB raidz2 data pool. This server performs a few functions: NFS for several 'lab' ESX virtual machines, NFS for MythTV storage (videos, music, recordings etc), and Samba for home directories for all networked PCs. I back up the important data to an external USB HDD each day. I previously had
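Trying a dedicated log device on a layout like this is a one-line experiment; a sketch with hypothetical device names (and note the April thread above: on pools without log-device removal, an unmirrored slog cannot be taken out again, so the mirrored variant is the safer test):

    # single log device on the data pool
    zpool add datapool log c3t2d0
    # or, safer on pools that cannot remove log devices: mirror it
    zpool add datapool log mirror c3t2d0 c3t3d0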
2011 Jun 24
13
Fixing txg commit frequency
Hi All, I'd like to ask whether there is a method to enforce a certain txg commit frequency on ZFS. I'm doing a large amount of video streaming from a storage pool while also slowly, continuously writing a constant volume of data to it (using a normal file descriptor, *not* in O_SYNC). When reading volume goes over a certain threshold (and average pool load over ~50%), ZFS
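The usual lever for commit frequency is the global txg timeout; a hedged sketch (zfs_txg_timeout is a real tunable, but its default and companion tunables changed across builds, so confirm the behavior for the release in use):

    # /etc/system: force a txg commit at least every 5 seconds (applies at boot)
    set zfs:zfs_txg_timeout = 5

    # or adjust it live on a running kernel
    echo 'zfs_txg_timeout/W0t5' | mdb -kw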
Hi All, I''d like to ask about whether there is a method to enforce a certain txg commit frequency on ZFS. I''m doing a large amount of video streaming from a storage pool while also slowly continuously writing a constant volume of data to it (using a normal file descriptor, *not* in O_SYNC). When reading volume goes over a certain threshold (and average pool load over ~50%), ZFS