search for: s10u4

Displaying 20 results from an estimated 37 matches for "s10u4".

2008 Mar 13
4
Disabling zfs xattr in S10u4
Hi, I want to disable extended attributes in my zfs on s10u4. I found out that the command to do this is zfs set xattr=off <poolname>. But, I do not see this option in s10u4. How can I disable zfs extended attributes on s10u4? I'm not in the zfs-discuss alias. Please respond to me directly. Thanks Balaji
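For reference, on a release whose zfs(1M) does support the property, disabling extended attributes looks like the following (a sketch; "tank/home" is a hypothetical dataset name, and the property was indeed absent from early Solaris 10 updates, as the poster found):

```shell
# Turn off extended attributes on a dataset; the property takes a
# dataset (pool or filesystem) name as its argument.
zfs set xattr=off tank/home

# Verify the property value and its source.
zfs get xattr tank/home
```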
2007 Sep 25
2
ZFS speed degraded in S10U4 ?
...ng with Blade 6300 to check performance of compressed ZFS with an Oracle database. After some really simple tests I noticed that the default (well, not really default, some patches applied, but definitely no one bothered to tweak the disk subsystem or anything else) installation of S10U3 is actually faster than S10U4, and a lot faster. Actually it's even faster on compressed ZFS with S10U3 than on uncompressed ZFS with S10U4. My configuration: default Update 3 LiveUpgraded to Update 4 with the ZFS filesystem on a dedicated disk, and I'm working with the same files, which are on the same physical cylinders, so...
2007 Oct 24
1
S10u4 in kernel sharetab
There was a lot of talk about ZFS and NFS shares being a problem when there was a large number of filesystems. There was a fix that in part included an in-kernel sharetab (I think :) Does anyone know if this has made it into S10u4? Thanks, BlueUmp This message posted from opensolaris.org
2008 Mar 25
11
Failure to install S10U4 HVM at SNV85 Dom0
System config:- bash-3.2# ifconfig -a lo0: flags=2001000849<UP,LOOPBACK,RUNNING,MULTICAST,IPv4,VIRTUAL> mtu 8232 index 1 inet 127.0.0.1 netmask ff000000 rge0: flags=201004843<UP,BROADCAST,RUNNING,MULTICAST,DHCP,IPv4,CoS> mtu 1500 index 2 inet 192.168.1.53 netmask ffffff00 broadcast 192.168.1.255 ether 0:1e:8c:25:cc:a5 lo0:
2007 Sep 18
5
ZFS panic in space_map.c line 125
One of our Solaris 10 update 3 servers paniced today with the following error: Sep 18 00:34:53 m2000ef savecore: [ID 570001 auth.error] reboot after panic: assertion failed: ss != NULL, file: ../../common/fs/zfs/space_map.c, line: 125 The server saved a core file, and the resulting backtrace is listed below: $ mdb unix.0 vmcore.0 > $c vpanic() 0xfffffffffb9b49f3() space_map_remove+0x239()
2007 Sep 19
3
Solaris 10 Update 4 support for VNICs
...multiple co-located zones on the same VLAN. Our current deployment OS is Solaris 10 Update 4. I'd like to use exclusive IP zones, but I gather the only way to currently have multiple exclusive zones with interfaces on the same VLAN is through the use of VNICs. From my initial look at S10u4, it seems VNIC support has not yet been integrated into that release. Is this correct? Thanks - Andres
2007 Sep 28
4
Sun 6120 array again
...s, Last April, in this discussion... http://www.opensolaris.org/jive/thread.jspa?messageID=143517 ...we never found out how (or if) the Sun 6120 (T4) array can be configured to ignore cache flush (sync-cache) requests from hosts. We're about to reconfigure a 6120 here for use with ZFS (S10U4), and the evil tuneable zfs_nocacheflush is not going to serve us well (there is a ZFS pool on slices of internal SAS drives, along with UFS boot/OS slices). Any pointers would be appreciated. Thanks and regards, Marion
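For context, the "evil tuneable" mentioned above is a system-wide /etc/system setting of roughly this shape (a sketch from that era's ZFS tuning lore; it takes effect only after a reboot, and as the poster notes it is unsafe when non-ZFS consumers such as UFS slices share the same disks, since it suppresses cache flushes for everything):

```shell
# /etc/system fragment: tell ZFS not to issue cache-flush (sync-cache)
# commands to devices. Global, reboot required, and dangerous unless
# every device has battery-backed or otherwise safe write cache.
set zfs:zfs_nocacheflush = 1
```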
2007 Oct 08
2
safe zfs-level snapshots with a UFS-on-ZVOL filesystem?
I had some trouble installing a zone on ZFS with S10u4 (bug in the postgres packages) that went away when I used a ZVOL-backed UFS filesystem for the zonepath. I thought I'd push on with the experiment (in the hope Live Upgrade would be able to upgrade such a zone). It's a bit unwieldy, but everything worked reasonably well - perfor...
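A ZVOL-backed UFS filesystem of the kind described can be sketched like this (hypothetical pool, volume, and mount-point names; the size is arbitrary):

```shell
# Create an 8 GB ZFS volume (ZVOL) to act as a block device.
zfs create -V 8g tank/zonevol

# Lay a UFS filesystem down on the raw ZVOL device...
newfs /dev/zvol/rdsk/tank/zonevol

# ...and mount it where the zonepath will live.
mkdir -p /zones/myzone
mount /dev/zvol/dsk/tank/zonevol /zones/myzone
```

ZFS-level snapshots of tank/zonevol then capture the UFS image underneath, which is what makes the "safe snapshots" question in the subject line interesting: UFS has no idea a snapshot was taken.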
2007 Sep 15
1
ZFS and Live Upgrade
Is there any update/work-around/patch/etc as of the S10u4 WOS for the bugs that existed with respect to LU, Zones, and ZFS? More specifically, the following: 6359924 live upgrade needs to include support for zfs I can't even find that bug ID on bugs.opensolaris.org (or via sunsolve when I'm logged in for that matter) anymore. Basica...
2008 Feb 28
1
DTrace Toolkit tcpsnoop
...the data from the packets. Does the dtrace provider allow access to the entire data packet? Is it possible to modify tcpsnoop to dump out the data? I don't know dtrace at all so any help you can offer to point me in the right direction would be appreciated. ------ I'm running S10U4 on this machine, if it matters.
2008 Apr 18
1
lots of small, twisty files that all look the same
...changed until removed. No Z RAID'ing is used. The storage device is a 3510 FC array with 5+1 RAID5 in hardware. I would like to triage this if possible. Would changing the recordsize to something much smaller like 8k and tuning down vdev_cache to something like 8k be of initial benefit (S10U4)? Any other ideas gratefully accepted. bill
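The two triage steps the poster floats would look roughly like this (hypothetical dataset name; note that recordsize only affects blocks written after the change, not existing files, and the vdev cache tunable is global and needs a reboot):

```shell
# Shrink the maximum block size for a small-file workload.
# Applies only to newly written data on this dataset.
zfs set recordsize=8k tank/smallfiles

# /etc/system fragment: cap the vdev cache read size at 8k
# (assumption: the zfs_vdev_cache_max tunable of that era).
# set zfs:zfs_vdev_cache_max = 8192
```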
2008 Mar 14
8
xcalls - mpstat vs dtrace
Hi, T5220, S10U4 + patches. mdb -k > ::memstat While the above is working (takes some time; ideally ::memstat -n 4 to use 4 threads could be useful), mpstat 1 shows: CPU minf mjf xcal intr ithr csw icsw migr smtx srw syscl usr sys wt idl 48 0 0 1922112 9 0 0 8 0 0 0 15254 6 94...
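One common way to attribute cross-calls like the ~1.9M/s xcal figure in that mpstat output is to aggregate kernel stacks at the DTrace sysinfo provider's xcalls probe (a sketch; needs root, and output is system-specific):

```shell
# Count cross-calls by the kernel stack that triggered them;
# Ctrl-C prints the aggregation, hottest stacks last.
dtrace -n 'sysinfo:::xcalls { @[stack()] = count(); }'
```

Comparing that aggregation against mpstat's per-CPU xcal column is exactly the "mpstat vs dtrace" exercise the subject line suggests.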
2007 Mar 26
4
Testing IP Instances in 61
I just BFU'ed up to B61 and started to play with IP Instances. I'm having trouble making my zone happy: root at aeon zones$ zoneadm -z testing1 boot zoneadm: zone 'testing1': WARNING: unable to hold network interface 'skge0'.: Invalid argument root at aeon zones$ dladm show-dev nge0 link: up speed: 100Mb duplex: full
2007 Mar 21
3
zfs send speed
Howdy folks. I've a customer looking to use ZFS in a DR situation. They have a large data store where they will be taking snapshots every N minutes or so, sending the difference of the snapshot and previous snapshot with zfs send -i to a remote host, and in case of DR firing up the secondary. However, I've seen a few references to the speed of zfs send being, well, a bit
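The replication loop described would look something like this each cycle (hypothetical dataset, snapshot, and host names):

```shell
# Take the new snapshot for this interval.
zfs snapshot tank/data@snap2

# Send only the delta between the previous and new snapshots
# to the DR host, applying it there with zfs receive.
zfs send -i tank/data@snap1 tank/data@snap2 | \
    ssh drhost zfs receive tank/data

# Once received, snap1 can be destroyed locally and snap2
# becomes the base for the next incremental.
```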
2008 May 21
9
Slow pkginstalls due to long door_calls to nscd
Hi all, I am installing a zone onto two different V445s running S10U4 and the zones are taking hours to install (about 1000 packages), that is, the problem is identical on both systems. A bit of trussing and dtracing has shown that the pkginstalls being run by the zoneadm install are making door_call calls to nscd that are taking very long, so far observed to be 5 to...
2008 May 20
4
Ways to speed up ''zpool import''?
...to do all of the 'zpool import's in parallel doesn't seem to speed the collective set of them up relative to doing them sequentially.) My test environment currently has 132 iSCSI LUNs (and 132 pools, one per LUN, because I wanted to test with extremes) on an up to date S10U4 machine. A truss of a 'zpool import' suggests that it spends most of its time opening various disk devices and reading things from them and most of the rest of the time doing modctl() calls and ZFS ioctls(). (Also, using 'zpool import -d' with a prepared direct...
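Since most of the time is spent probing devices, narrowing the search path with zpool import -d is the usual lever; a sketch (device and pool names here are hypothetical, not from the post):

```shell
# Restrict the import scan to a directory containing links to
# only the devices that actually back the pool, instead of
# letting zpool probe everything under /dev/dsk.
mkdir /tmp/zdevs
ln -s /dev/dsk/c2t0d0s0 /tmp/zdevs/
zpool import -d /tmp/zdevs pool01
```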
2008 Jan 30
18
ZIL controls in Solaris 10 U4?
Is it true that Solaris 10 u4 does not have any of the nice ZIL controls that exist in the various recent Open Solaris flavors? I would like to move my ZIL to solid state storage, but I fear I can't do it until I have another update. Heck, I would be happy to just be able to turn the ZIL off to see how my NFS on ZFS performance is affected before spending the $'s. Anyone
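For the "turn the ZIL off" experiment, the era's mechanism was a global /etc/system tunable rather than a per-dataset property (a sketch, assuming the old zil_disable tunable of that period; reboot required, and it weakens synchronous-write semantics, so it is for benchmarking only):

```shell
# /etc/system fragment: disable the ZFS intent log globally.
# Strictly a measurement aid -- an NFS server with the ZIL off
# can lose acknowledged synchronous writes on a crash.
set zfs:zil_disable = 1
```

Separate log devices (the "move my ZIL to SSD" part, zpool add ... log ...) arrived in a later release, which matches the poster's fear.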
2007 Apr 19
14
Experience with Promise Tech. arrays/jbod's?
Greetings, In looking for inexpensive JBOD and/or RAID solutions to use with ZFS, I've run across the recent "VTrak" SAS/SATA systems from Promise Technologies, specifically their E-class and J-class series: E310f FC-connected RAID: http://www.promise.com/product/product_detail_eng.asp?product_id=175 E310s SAS-connected RAID:
2008 Apr 29
0
zpool attach vs. zpool iostat
Hello zfs-discuss, S10U4+patches, SPARC If I attach a disk to vdev in a pool to get mirrored configuration then during resilver zpool iostat 1 will report only reads being done from pool and basically no writes. If I do zpool iostat -v 1 then I can see it is writing to new device however on a pool and mirror/vde...
2008 Jan 22
0
zpool attach problem
On a V240 running s10u4 (no additional patches), I had a pool which looked like this: > # zpool status > pool: pool01 > state: ONLINE > scrub: none requested > config: > > NAME STATE READ WRITE CKSUM > pool01...