search for: s10u3

Displaying 20 results from an estimated 28 matches for "s10u3".

2007 Jun 07
2
plockstat/dtrace core dump S10U3
Hey, I'm able to reproduce a crash from plockstat every time I'm tracing a JVM pid. I do recall a problem related to this one, but it's not clear if this has been fixed in U3 or is planned for U4. Any BugID opened for this stack trace: > ::stack libc.so.1`strlen+0x50(100003faa, ffffffff7ffff5c8, ffffffff7eca1114, ffffffff7fffec79, 0, 100003fa9) libc.so.1`snprintf+0x88(ffffffff7ffff8f0, 0,
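For context, a crash of this kind is typically provoked by a single plockstat invocation against a running JVM. A minimal sketch, assuming the JVM shows up as a process named java:

    # trace all user-level lock events on the JVM for 10 seconds
    plockstat -A -e 10 -p `pgrep -x java`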
2008 Mar 13
4
Disabling zfs xattr in S10u4
Hi, I want to disable extended attributes in my zfs on s10u4. I found out that the command to do this is zfs set xattr=off <poolname>. But I do not see this option in s10u4. How can I disable zfs extended attributes on s10u4? I'm not on the zfs-discuss alias. Please respond to me directly. Thanks Balaji
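A minimal sketch of what the poster is attempting; whether the property is available at all depends on the installed ZFS version, which is exactly the question here (<poolname> is a placeholder as above):

    # check whether the running zfs exposes the property at all
    zfs get all <poolname> | grep xattr
    # if present, it can be turned off per filesystem
    zfs set xattr=off <poolname>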
2007 Sep 25
2
ZFS speed degraded in S10U4?
Hi guys, I'm playing with a Blade 6300 to check the performance of compressed ZFS with an Oracle database. After some really simple tests I noticed that a default installation of S10U3 (well, not really default: some patches applied, but definitely no one bothered to tweak the disk subsystem or anything else) is actually faster than S10U4, and a lot faster. Actually it's even faster on compressed ZFS with S10U3 than on uncompressed with S10U4. My configuration: default Update 3 LiveUpgraded to Update 4 with a ZFS filesystem on a dedicated disk, and I'm working with the same files which are o...
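A rough way to reproduce this kind of comparison on one box, assuming a pool named tank (hypothetical); note that /dev/zero compresses unrealistically well, so a copy of real database files gives a fairer test:

    zfs create -o compression=on tank/comp
    zfs create -o compression=off tank/nocomp
    # time a simple sequential write to each filesystem
    ptime dd if=/dev/zero of=/tank/comp/testfile bs=128k count=8192
    ptime dd if=/dev/zero of=/tank/nocomp/testfile bs=128k count=8192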
2007 Sep 17
1
Strange behavior with zfs and Solaris cluster
Hi All, Two- and three-node clusters with SC3.2 and S10u3 (120011-14). If a node is rebooted when using SCSI3-PGR, the node is not able to take the zpool via HAStoragePlus due to a reservation conflict. SCSI2-PGRE is okay. Using the same SAN LUNs in a metaset (SVM) with HAStoragePlus works okay with both PGR and PGRE (both SMI- and EFI-labeled disks). If using scsh...
2007 Jul 05
4
ZFS receive issue running multiple receives and rollbacks
Hi, all, Environment: S10U3 running as a VMware Workstation 6 guest; Fedora 7 is the VMware host, 1 GB RAM. I'm creating a solution in which I need to be able to save off state on one host, then restore it on another. I'm using ZFS snapshots with ZFS receive and it's all working fine, except for some...
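The basic save-on-one-host, restore-on-another pattern being described, as a sketch with hypothetical dataset and host names:

    zfs snapshot tank/state@save
    zfs send tank/state@save | ssh otherhost zfs receive tank/state
    # on the receiving side, discard later changes and return to the saved state
    zfs rollback -r tank/state@save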
2007 Jan 09
2
ZFS Hot Spare Behavior
I physically removed a disk (c3t8d0, used by ZFS 'pool01') from a 3310 JBOD connected to a V210 running s10u3 (11/06), and 'zpool status' reported this: # zpool status pool: pool01 state: DEGRADED status: One or more devices could not be opened. Sufficient replicas exist for the pool to continue functioning in a degraded state. action: Attach the missing device and online it u...
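For reference, the usual follow-up once the disk is physically reinserted, sketched with the device names from the report above:

    # bring the reattached disk back into the pool
    zpool online pool01 c3t8d0
    # or, if it needs rebuilding in place, resilver onto the same device
    zpool replace pool01 c3t8d0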
2009 Mar 25
3
anonymous dtrace?
...ppen when I do a simple reboot. The bootpath property set in this file is getting changed after the machine boots up for the first time in the newly created BE, resulting in a kernel panic (gives error: cannot mount root path /pci@0,0/pci-ide@7/ide@0/cmdk@0,0:e). The primary boot env is S10u3 and the ABE is S10u6. I want to see at what point in time, and which process, is writing to the bootenv.rc file. How can I achieve it using dtrace? Thanks in advance. Regards, Nishchaya
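One way to answer this with dtrace, as a sketch; the bootenv.rc path is an assumption (on x86 it usually lives under /boot/solaris), and the fds[] translator maps the write(2) file descriptor to a pathname:

    dtrace -n 'syscall::write:entry
        /fds[arg0].fi_pathname == "/boot/solaris/bootenv.rc"/
        { printf("%s (pid %d)", execname, pid); }'

For writes that happen before a login shell is available, the same enabling can be installed as an anonymous tracing job with dtrace -A and the results collected after boot with dtrace -a.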
2007 Feb 27
3
2-way mirror or RAIDZ?
I have a shiny new Ultra 40 running S10U3 with 2 x 250 GB disks. I want to make the best use of the available disk space and have some level of redundancy without impacting performance too much. What I am trying to figure out is: would it be better to have a simple mirror of an identical 200 GB slice from each disk, or split each disk into 2...
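For the mirror option, the setup amounts to one command; the device and slice names here are placeholders:

    # mirror an identical slice from each of the two disks
    zpool create tank mirror c1t0d0s4 c1t1d0s4

A two-device raidz has the same usable capacity as a mirror, which is why slicing each disk into more pieces comes up at all.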
2007 Apr 22
7
slow sync on zfs
Hello zfs-discuss, Relatively low traffic to the pool, but sync takes too long to complete and other operations are also not that fast. Disks are on a 3510 array. zil_disable=1.

    bash-3.00# ptime sync
    real     1:21.569
    user        0.001
    sys         0.027

During the sync, zpool iostat and vmstat look like:

    f3-1    504G   720G    370    859   995K  10.2M
    misc   20.6M   52.0G      0      0
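For reference, zil_disable in that era was a global tunable rather than a per-dataset property; a sketch of the two common ways it was set (use with care, since it changes synchronous write semantics):

    # persistent, takes effect at next boot: add to /etc/system
    set zfs:zil_disable = 1

    # live toggle on a running system (value in decimal)
    echo zil_disable/W0t1 | mdb -kw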
2007 Oct 11
0
zfs as zone root
Hello, I surely made a mistake by configuring our zones with a zfs root: patches are no longer possible (without disabling the zones in /etc/zones/index) in S10u3! My questions are: 1. Does S10u4 have support for a zone root on zfs? 2. Will it be possible to patch my _existing_ zfs-rooted zones when such zones are supported? 3. Sun itself seems to recommend zfs for the zone root for easy cloning of zones! I'm somewhat confused; can someone give me a hin...
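For reference, the configuration being discussed is simply a zone whose zonepath lives on a ZFS dataset; a minimal sketch with hypothetical names:

    zfs create -o mountpoint=/zones/z1 tank/zones_z1
    chmod 700 /zones/z1
    zonecfg -z z1 'create; set zonepath=/zones/z1'

The limitation the poster hit is that the S10u3-era patch tools did not support zones with such a zonepath.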
2007 Sep 17
2
zpool create -f not applicable to hot spares
...A B C spare D E' and D or E contains a UFS filesystem, then despite -f the zpool command will complain that there is a UFS filesystem on D. Workaround: create a test pool with -f on D and E, destroy it, and then create the first pool with D and E as hot spares. I've tested it on s10u3 + patches - can someone confirm it on the latest nv? -- Best regards, Robert Milkowski mailto:rmilkowski@task.gda.pl http://milek.blogspot.com
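The described workaround, as a sketch (A through E stand for the placeholder devices from the report):

    zpool create -f testpool D E        # -f is honored for a normal vdev
    zpool destroy testpool              # leaves D and E freshly labeled
    zpool create tank A B C spare D E   # now succeeds without complaint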
2007 Jun 26
0
Oracle DB giving zpools a wedgie
...nt to detach that side of the mirror from the zpool. What happened next was peculiar: the 'zpool detach ...' command hung, and continued to do so until Oracle itself was shut down... then the 'zpool detach' command completed. The machine in question is s10u3 + some patches. Has anyone seen this type of behavior before? /dale
2007 Sep 13
11
How do I get my pool back?
After having to replace an internal raid card in an X2200 (S10U3 in this case), I can see the disks just fine - and can boot, so the data isn't completely missing. However, my zpool is gone. # zpool status -x pool: storage state: FAULTED status: One or more devices could not be opened. There are insufficient replicas for the pool to conti...
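A common first step in this situation, sketched with the pool name from the report (whether it helps depends on why the devices could not be opened):

    # force a fresh device scan by exporting and re-importing the pool
    zpool export storage
    zpool import storage
    # or point the scan at a specific device directory
    zpool import -d /dev/dsk storage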
2007 Jul 25
3
Any fix for zpool import kernel panic (reboot loop)?
My system (a laptop with ZFS root and boot, SNV 64A) on which I was trying OpenSolaris now has the zpool-related kernel panic reboot loop. Booting into failsafe mode or another Solaris installation and attempting 'zpool import -F rootpool' results in a kernel panic and reboot. A search shows this type of kernel panic has been discussed on this forum over the last year.
2007 Jan 13
2
Extremely poor ZFS perf and other observations
I'm observing the following behavior in our environment (Sol10U2, E2900, 24x96, 2x2Gbps, ...). I have a compressed ZFS filesystem where I'm creating a large tar file. I notice that the tar process is running fine (accumulating CPU, truss shows writes, ...) but for whatever reason the timestamp on the file doesn't change, nor does the file size change. The same is
2007 Sep 08
1
zpool degraded status after resilver completed
...e home/c0t6d0 zpool replace home c0t6d0 c8t1d0 and after the resilvering finished the pool reports a degraded state. Hopefully this is incorrect. At this point, does the vdev in question now have full raidz2 protection even though it is listed as "DEGRADED"? P.S. This is on a pool created on S10U3 and upgraded to ZFS version 4 after upgrading the host to S10U4. Thanks. # zpool status pool: home state: DEGRADED status: One or more devices has been taken offline by the administrator. Sufficient replicas exist for the pool to continue functioning in a degraded state. actio...
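Since the status output says a device was taken offline by the administrator, the leftover OFFLINE state from the pre-replace step is the likely culprit; a sketch (the exact device is whatever zpool status lists as OFFLINE):

    # bring the previously offlined device back online and recheck
    zpool online home <offlined-device>
    zpool status home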
2007 Apr 19
14
Experience with Promise Tech. arrays/JBODs?
Greetings, In looking for inexpensive JBOD and/or RAID solutions to use with ZFS, I've run across the recent "VTrak" SAS/SATA systems from Promise Technologies, specifically their E-class and J-class series: E310f FC-connected RAID: http://www.promise.com/product/product_detail_eng.asp?product_id=175 E310s SAS-connected RAID:
2007 Sep 18
1
zfs-discuss Digest, Vol 23, Issue 34
...ool back? > > To: zfs-discuss@opensolaris.org > > Message-ID: > > <df1347730709130719p60c955cetef1f52db57a66e1a@mail.gmail.com> > > Content-Type: text/plain; charset=ISO-8859-1 > > > > After having to replace an internal raid card in an X2200 (S10U3 in > > this case), I can see the disks just fine - and can boot, so the data > > isn't completely missing. > > > > However, my zpool is gone. > > > > # zpool status -x > > pool: storage > > state: FAULTED > > status: One or more devi...
2007 Mar 21
4
HELP!! I can't mount my zpool!!
Hi all. One of our servers had a panic and now can't mount the zpool anymore! Here is what I get at boot: Mar 21 11:09:17 SERVER142 ^Mpanic[cpu1]/thread=ffffffff90878200: Mar 21 11:09:17 SERVER142 genunix: [ID 603766 kern.notice] assertion failed: ss->ss_start <= start (0x670000b800 <= 0x6700009000), file: ../../common/fs/zfs/space_map.c, line: 126 Mar 21 11:09:17 SERVER142
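For the record, the tunables commonly suggested on this list for exactly this space_map assertion panic, so the pool can be imported long enough to copy data off; this is a recovery hack, not a fix, and the entries should be confirmed with support before use:

    * /etc/system additions for one recovery boot
    set zfs:zfs_recover = 1
    set aok = 1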
2007 Mar 22
13
migration/acl4 problem
Hi, S10U3: It seems that UFS POSIX ACLs are not properly translated to ZFS ACL4 entries when one transfers a directory tree from UFS to ZFS. Test case: assuming one has users A and B, both belonging to group G and having their umask set to 022: 1) On UFS - as user A do: mkdir /dir chmod...
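A sketch of how the translation can be inspected end to end; the filenames are hypothetical and Solaris cp -p is assumed to carry the ACLs across:

    setfacl -m user:B:rwx /dir/file        # POSIX ACL on UFS, as user A
    getfacl /dir/file                      # confirm the UFS-side entry
    cp -p /dir/file /zfsdir/file           # transfer to a ZFS filesystem
    ls -V /zfsdir/file                     # inspect the resulting ACL4 entries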