Displaying 9 results from an estimated 9 matches for "doorfs".
2005 Apr 29
2
[Bug 2670] rsync does not support Solaris' doorfs
https://bugzilla.samba.org/show_bug.cgi?id=2670
wayned@samba.org changed:
           What    |Removed                     |Added
----------------------------------------------------------------------------
           Severity|normal                      |enhancement
             Status|NEW                         |ASSIGNED
------- Additional Comments From wayned@samba.org 2005-04-29 12:42
2005 Apr 29
0
[Bug 2670] New: rsync does not support Solaris' doorfs
https://bugzilla.samba.org/show_bug.cgi?id=2670
Summary: rsync does not support Solaris' doorfs
Product: rsync
Version: 2.6.4
Platform: Sparc
OS/Version: Solaris
Status: NEW
Severity: normal
Priority: P3
Component: core
AssignedTo: wayned@samba.org
ReportedBy: zosh@ife.ee.ethz.ch
QAContac...
2008 May 21
9
Slow pkginstalls due to long door_calls to nscd
Hi all,
I am installing a zone onto two different V445s running S10U4, and the
zones are taking hours to install (about 1000 packages); the problem is
identical on both systems. A bit of trussing and dtracing has shown that
the pkginstalls being run by the zoneadm install are making door_call
calls to nscd that are taking a very long time, so far observed to be
5 to 40 seconds, but always in
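A minimal DTrace sketch of how such door_call latency could be confirmed; door_call(3C) goes through the doorfs() syscall on Solaris, and the execname filter here is an assumption, not a detail from the post:

#!/usr/sbin/dtrace -s
#pragma D option quiet

/*
 * Sketch: quantize the latency of doorfs() calls (the syscall behind
 * door_call(3C)) made by the packaging commands.  The execname filter
 * is an assumption; substitute whatever processes truss showed.
 */
syscall::doorfs:entry
/execname == "pkginstall" || execname == "pkgadd"/
{
        self->ts = timestamp;
}

syscall::doorfs:return
/self->ts/
{
        @lat[execname] = quantize(timestamp - self->ts);
        self->ts = 0;
}

Run while a package is installing; multi-second outliers in the histograms would match what truss showed.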
2007 Nov 27
5
Dtrace probes for voluntary and involuntary context switches
...ads, I printed whenever a system call is invoked and whenever a context switch happens. I am profiling the system calls and context switches inside critical sections (while some lock is being held).
But I see something unexpected. I see
* Voluntary context switches occur almost every time due to the doorfs()
system call. They occur a few times due to lwp_park() and very
rarely due to yield().
* Involuntary context switches happen at any time (lwp_park(), read(), fstat(), putmsg(),
gtime() and sometimes without any system call!!)
Does anyone have any idea what could be the reason for this unexpected behavior...
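A minimal sketch of one way to break such switches down by the system call in progress; the SSLEEP test for "voluntary" is a common heuristic, and running against a single pid with -p is an assumption, neither is from the original post:

#!/usr/sbin/dtrace -s
#pragma D option quiet

/*
 * Sketch: count context switches by the system call in progress.
 * Heuristic: a thread that goes off-CPU in state SSLEEP is blocking
 * (voluntary); anything else is treated here as involuntary (preempted).
 * Run against one process: dtrace -s thisscript.d -p <pid>
 */
syscall:::entry
/pid == $target/
{
        self->insys = 1;
        self->sys = probefunc;
}

syscall:::return
/pid == $target/
{
        self->insys = 0;
        self->sys = 0;
}

sched:::off-cpu
/pid == $target && self->insys && curlwpsinfo->pr_state == SSLEEP/
{
        @vol[self->sys] = count();
}

sched:::off-cpu
/pid == $target && self->insys && curlwpsinfo->pr_state != SSLEEP/
{
        @invol[self->sys] = count();
}

END
{
        printf("voluntary (blocked) switches by syscall:\n");
        printa(@vol);
        printf("\ninvoluntary (preempted) switches by syscall:\n");
        printa(@invol);
}

Switches that happen outside any system call are not counted by this sketch; dropping the self->insys predicate on the second off-cpu clause would show those too.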
2005 Aug 29
14
Oracle 9.2.0.6 on Solaris 10
How can I tell if this is normal behaviour? Oracle imports are horribly slow, an order of magnitude slower than on the same hardware with a slower disk array and Solaris 9. What can I look for to see where the problem lies?
The server is 99% idle right now, with one database running. Each sample is about 5 seconds. I've tried setting kernel parameters despite the docs saying that
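A minimal sketch of one first check under these symptoms, assuming the question is whether the time is going into disk I/O; the io provider probes are standard, but nothing in this script comes from the original post:

#!/usr/sbin/dtrace -s
#pragma D option quiet

/*
 * Sketch: quantize physical I/O latency per device while an import runs.
 * If the array is the problem, the histograms should show it; if the
 * disks look fine, the time is being lost elsewhere (for example in
 * door calls to name services, as in other threads on this page).
 */
io:::start
{
        start[arg0] = timestamp;
}

io:::done
/start[arg0]/
{
        @lat[args[1]->dev_statname] = quantize(timestamp - start[arg0]);
        start[arg0] = 0;
}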
2007 Mar 13
0
about using dtrace to analyze tomcat's situation
...fstat64                      714
setsockopt                      843
setcontext                      946
ioctl                           959
lwp_mutex_wakeup               1105
lwp_cond_broadcast             1430
close                          1431
connect                        2503
fsat                           2505
getdents64                     3894
fcntl                          6785
doorfs                         7043
resolvepath                    8684
access                        22627
send                          73748
write                        102515
stat64                       126195
lwp_mutex_timedlock          339137
accept                    273694273
read                      285744641
pollsys                  1213647291
lwp_cond_wait            1553123938
can see lwp_con...
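The excerpt does not show the script that produced this table, but it looks like a per-syscall aggregation; a minimal sketch that would produce similar output, summing elapsed time per syscall, assuming the numbers above are nanoseconds and that the target pid is passed with -p:

#!/usr/sbin/dtrace -s
#pragma D option quiet

/*
 * Sketch: total time spent in each system call by one process.
 * printa() prints the aggregation sorted by value, ascending, which
 * matches the ordering of the table above.
 * Run as: dtrace -s thisscript.d -p <tomcat-pid>
 */
syscall:::entry
/pid == $target/
{
        self->ts = timestamp;
}

syscall:::return
/pid == $target && self->ts/
{
        @time[probefunc] = sum(timestamp - self->ts);
        self->ts = 0;
}

END
{
        printa("%-24s %@d\n", @time);
}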
2006 Sep 21
1
Dtrace script compilation error.
Hi All,
One of the customers is seeing the following error messages.
He is using an S10U1 system with all the latest patches.
Dtrace script is attached.
I am not seeing these error messages on S10U2 system.
What could be the problem?
Thanks,
Gangadhar.
------------------------------------------------------------------------
rroberto@njengsunu60-2:~$ dtrace -AFs /lib/svc/share/kcfd.d
dtrace:
2006 Oct 31
0
PSARC/2002/762 Layered Trusted Solaris
...usr/src/uts/common/c2/audit_kevents.h
update: usr/src/uts/common/c2/audit_record.h
update: usr/src/uts/common/c2/audit_start.c
update: usr/src/uts/common/c2/audit_token.c
update: usr/src/uts/common/disp/thread.c
update: usr/src/uts/common/fs/autofs/auto_vfsops.c
update: usr/src/uts/common/fs/doorfs/door_vnops.c
update: usr/src/uts/common/fs/lofs/lofs_vfsops.c
update: usr/src/uts/common/fs/nfs/nfs3_vfsops.c
update: usr/src/uts/common/fs/nfs/nfs4_srv.c
update: usr/src/uts/common/fs/nfs/nfs4_subr.c
update: usr/src/uts/common/fs/nfs/nfs4_vfsops.c
update: usr/src/uts/common/fs/nfs/nfs_subr.c...
2007 Dec 09
8
zpool kernel panics.
Hi Folks,
I've got a 3.9 TB zpool, and it is causing kernel panics on my Solaris
10 280r (SPARC) server.
The message I get on panic is this:
panic[cpu1]/thread=2a100a95cc0: zfs: freeing free segment
(offset=423713792 size=1024)
This seems to come about when the zpool is being used or being
scrubbed - about twice a day at the moment. After the reboot, the
scrub seems to have