Displaying 20 results from an estimated 200 matches similar to: "High CPU utilization on Solaris 10"
2007 Feb 20
3
1.0.rc23 released
http://dovecot.org/releases/dovecot-1.0.rc23.tar.gz
http://dovecot.org/releases/dovecot-1.0.rc23.tar.gz.sig
Documentation is probably the only important thing left before v1.0.
* deliver no longer exits with Dovecot's internal exit codes;
all of its internal exit codes are changed to EX_TEMPFAIL.
* mbox: X-Delivery-ID header is now dropped when saving mails.
* mbox: If
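For context on the deliver change above, EX_TEMPFAIL is the standard sysexits.h code that MTAs interpret as "defer and retry later", which is what makes it a safe catch-all exit status:

$ grep TEMPFAIL /usr/include/sysexits.h
#define EX_TEMPFAIL     75      /* temp failure; user is invited to retry */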
2006 Mar 22
1
Busyloop in dovecot-auth
SunOS pop01.unix 5.10 Generic_118844-26 i86pc i386 i86pc
dovecot-1.0.beta3
Dovecot itself runs well and was easy to configure with LDAP. However, I am
seeing a CPU busy-loop in dovecot-auth. Is this a known issue, or do you know
why it happens? I can dig deeper if required.
PID USERNAME LWP PRI NICE SIZE RES STATE TIME CPU COMMAND
26440 root 1 10 0 4232K
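A spinning daemon like this can usually be narrowed down with stock Solaris tools before filing a report; a minimal diagnostic sketch (26440 is the PID from the prstat output above):

# prstat -L -p 26440 5     # per-LWP view: which thread is burning CPU
# truss -c -p 26440        # per-syscall counts; interrupt after ~10s
# pstack 26440             # user-level stack of the spinning thread

If truss shows a tight loop of the same syscall returning an error, the pstack output usually points at the code path responsible.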
2007 Nov 21
2
dovecot-auth consumes 100% CPU time on Solaris 10
Has the problem with the CPU load been solved?
I have the same problem: dovecot-auth eats one of my cores.
I'm using Dovecot 1.0.7 on Solaris 10 SPARC.
I tried using auth_bind as well as the standard scheme with a separate bind
user, and got the same result.
The problem occurs only with LDAP authentication; on some other systems I use
PostgreSQL and MySQL authentication and don't have this problem.
Using PAM
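For reference, the two LDAP schemes mentioned correspond roughly to these dovecot-ldap.conf fragments; a sketch for the 1.0.x series, with placeholder DNs:

# scheme 1: bind as the authenticating user
auth_bind = yes
auth_bind_userdn = uid=%u,ou=people,dc=example,dc=com

# scheme 2: search using a dedicated bind user
dn = cn=dovecot,dc=example,dc=com
dnpass = secret
base = ou=people,dc=example,dc=com
pass_filter = (uid=%u)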
2006 May 31
1
Dovecot 1.0beta8 dovecot-auth consumes 100% CPU time on Solaris 10 amd64
Hello. I hope someone out there can help with this. It is getting pretty urgent.
I am running a Solaris 10 server on Opteron (amd64) hardware and have compiled
Dovecot 1.0beta8 from source. It has openssl compiled in (after much mucking
around with various environment variables and modifying the Makefile), and was
built with:
$ ./configure --with-ldap --with-ssl-dir=/etc/ssl
$ make
# make
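With a self-compiled build it is worth confirming which LDAP and SSL libraries dovecot-auth actually linked against; a quick check, assuming the default --prefix of /usr/local:

$ ldd /usr/local/libexec/dovecot/dovecot-auth | egrep 'ldap|ssl'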
2005 Nov 15
6
Oracle 9 process on Sol 10 container, doing a pollsys, using high CPU
We're running a Solaris 10 container with an Oracle 9.2.0.4 database. Every 5-10 minutes an Oracle process shoots up (using 20%+ CPU) and then drops back down in CPU %, doing a pollsys (seen via dtruss). I tried using some of the trace scripts in the DTraceToolkit to see what the process is doing, but without any luck; I also tried the following, but the dtrace process goes up to 30%
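One way to see what the process is doing around those pollsys calls while keeping dtrace overhead down is to aggregate user stacks at syscall entry instead of tracing every event; a sketch, assuming the process's execname is "oracle":

# dtrace -n 'syscall::pollsys:entry /execname == "oracle"/ { @[ustack()] = count(); }'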
2016 Apr 18
8
[Bug 2565] New: High baud rate gets sent, solaris closes pty
https://bugzilla.mindrot.org/show_bug.cgi?id=2565
Bug ID: 2565
Summary: High baud rate gets sent, solaris closes pty
Product: Portable OpenSSH
Version: 7.1p2
Hardware: Sparc
OS: Solaris
Status: NEW
Severity: minor
Priority: P5
Component: sshd
Assignee: unassigned-bugs at
2005 Aug 29
14
Oracle 9.2.0.6 on Solaris 10
How can I tell whether this is normal behaviour? Oracle imports are horribly slow, an order of magnitude slower than on the same hardware with a slower disk array and Solaris 9. What can I look for to see where the problem lies?
The server is 99% idle right now, with one database running. Each sample is about 5 seconds. I've tried setting kernel parameters despite the docs saying that
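Before reaching for kernel parameters, comparing per-device service times between the two arrays would rule the disks in or out; a minimal sketch with stock tools:

$ iostat -xnz 5     # asvc_t = average service time per device, in ms
$ vmstat 5          # b column = kernel threads blocked waiting for I/O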
2008 May 20
7
[Bug 1986] New: 'zfs destroy' hangs on encrypted dataset
http://defect.opensolaris.org/bz/show_bug.cgi?id=1986
Summary: 'zfs destroy' hangs on encrypted dataset
Classification: Development
Product: zfs-crypto
Version: unspecified
Platform: Other
OS/Version: Solaris
Status: NEW
Severity: major
Priority: P2
Component: other
2010 Jan 22
1
libvirtd remote access
Hi,
I can't seem to get libvirtd to accept remote connections. Both systems are built using genunix's b130.
It seems that connections originating from the xvm0 server itself are fine, but as soon as I go on to the other box and run the same Python script (or simply virsh), the connection gets dropped immediately. Telnetting to port 16509 confirms that it drops the connection
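For comparison, remote TCP access to libvirtd normally needs both the daemon flag and the config switch; a sketch of the usual checklist (auth disabled here only for testing, and the connection URI depends on the hypervisor driver, so the one below is a guess for this xVM host). In /etc/libvirt/libvirtd.conf:

listen_tcp = 1
auth_tcp = "none"     # testing only; use sasl in production

Then restart the daemon with listening enabled and test from the remote box:

# libvirtd --listen -d
$ virsh -c xen+tcp://xvm0/ list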
2009 Apr 01
4
ZFS Locking Up periodically
I've recently re-installed an X4500 running Nevada b109 and have been
experiencing ZFS lock-ups regularly (perhaps once every 2-3 days).
The machine is a backup server and receives hourly ZFS snapshots from
another thumper - as such, the amount of zfs activity tends to be
reasonably high. After about 48 - 72 hours, the file system seems to lock
up and I'm unable to do anything
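When the pool wedges like this, a kernel-level thread listing captured during the hang is usually the first thing asked for; a sketch, assuming mdb -k is usable on the box:

# echo "::threadlist -v" | mdb -k > /var/tmp/threads.txt   # stacks of all kernel threads
# echo "::spa -v" | mdb -k                                 # pool state as the kernel sees it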
2013 Aug 12
2
Odd Samba 4 ("4.2.0pre1-GIT-b505111"; actually only using client) behaviour #2 - "accept: Software caused connection abort".
Good day, oh technical ones.
I was running Samba 4 (client only, not using it as a DC so
effectively running Samba 3 code from the Samba 4 tree) and, other than a
little "Gotcha!" regarding decoding Kerberos PACs, it was all working
perfectly.
Then recently I had to upgrade to "4.2.0pre1-GIT-b505111"
(I had to upgrade the OS on the server
2005 Nov 09
1
Where & Why is my process sleeping a lot?
I have a program where the process seems to be sleeping a lot (waiting
on something).
What would be the right approach to figure out via dtrace where it is
sleeping and why it is sleeping?
In my current process, using truss -D shows that it reaches pollsys and
the whole process sleeps for 1.5-1.8 seconds before it wakes again.
It's the only significant process running on this two-CPU
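The sched provider is the usual way to answer this: record a timestamp when the process goes off-CPU and attribute the elapsed time to the user stack when it comes back on. A sketch (substitute the process id for PID):

# dtrace -n '
  sched:::off-cpu /pid == $target/ { self->ts = timestamp; }
  sched:::on-cpu /self->ts/ {
      @[ustack()] = sum(timestamp - self->ts);  /* ns spent off-CPU per stack */
      self->ts = 0;
  }' -p PID

The stacks with the largest sums show where the process sleeps; what it is blocked on (a descriptor in pollsys, a lock, etc.) then follows from the stack itself.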
2008 May 12
12
[Bug 1463] New: Running nohup sleep 70 & and then exiting shell, hangs ssh
https://bugzilla.mindrot.org/show_bug.cgi?id=1463
Summary: Running nohup sleep 70 & and then exiting shell, hangs ssh
Classification: Unclassified
Product: Portable OpenSSH
Version: 5.0p1
Platform: Sparc
OS/Version: Solaris
Status: NEW
Severity: normal
Priority: P2
Component:
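The classic cause of this hang is the backgrounded process keeping the session's stdout/stderr open, so sshd waits for EOF on the channel before closing; the usual workaround is to detach all three descriptors explicitly (a general sketch, not this bug's eventual resolution):

$ nohup sleep 70 > /dev/null 2>&1 < /dev/null &
$ exit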
2010 Jun 28
23
zpool import hangs indefinitely (retry post in parts; too long?)
Now 36 hours since the zdb process started, and:
PID USERNAME SIZE RSS STATE PRI NICE TIME CPU PROCESS/NLWP
827 root 4936M 4931M sleep 59 0 0:50:47 0.2% zdb/209
Idling at 0.2% CPU for nearly the past 24 hours... it feels very stuck. Thoughts on how to determine where and why?
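For a userland process that looks stuck, snapshotting its stacks a few seconds apart shows whether it is making any progress at all (827 is the zdb PID from the prstat line above):

# pstack 827 > /var/tmp/zdb.1 ; sleep 10 ; pstack 827 > /var/tmp/zdb.2
# diff /var/tmp/zdb.1 /var/tmp/zdb.2     # identical stacks suggest a genuine hang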
2006 Jul 07
3
Mongrel & irbrc
Why does mongrel_rails insist on loading ~/.irbrc with each request?
a) I'm curious why it loads it at all (I assume there's no way of
getting an inline breakpointer??)
b) Why re-load it? It causes problems with any constants that are
used in .irbrc... (alternatively, how do I avoid re-assigning to a
constant?)
Jon
2015 Apr 05
2
nutdrv_qx hangs after send: QS
Thank you for the rapid response. I will try to get answers to some of your
points, but I'm a little new to Solaris so I'll need some time. Glancing at
the configure output, it looks like it built against v0.1.7 of libusb (yes, I
think that is derived from the one you mention),
checking for libusb version via pkg-config... 0.1.7 found
checking for libusb cflags...
checking
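When a NUT driver stalls on one query like QS, running it in the foreground with debugging enabled shows the exact exchange with the UPS; a sketch, where the driver path and ups name depend on the install prefix and ups.conf:

# /usr/local/ups/bin/nutdrv_qx -a myups -DDDD    # more -D flags = more verbosity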
2007 Aug 29
6
How do I look up syscall name
I'm using an fbt probe where I get a system call id as an argument; how do I look up its name? At the moment I'm post-processing the output using /etc/name_to_sysnum, but that doesn't feel right :)
cheers,
/Martin
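For what it's worth, the syscall provider sidesteps the numeric lookup entirely, since probefunc is already the syscall name; where only an fbt probe and a number are available, mapping through /etc/name_to_sysnum in post-processing remains the stock answer. A one-liner sketch of the former:

# dtrace -n 'syscall:::entry { @[probefunc] = count(); }'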
2013 Jul 18
7
[Bug 10035] New: rsync hangs in solaris
https://bugzilla.samba.org/show_bug.cgi?id=10035
Summary: rsync hangs in solaris
Product: rsync
Version: 3.0.9
Platform: All
OS/Version: All
Status: NEW
Severity: normal
Priority: P5
Component: core
AssignedTo: wayned at samba.org
ReportedBy: nestor.urquiza at gmail.com
QAContact:
2009 Apr 15
3
MySQL On ZFS Performance (fsync) Problem?
Hi, all
I did some tests of MySQL's insert performance on ZFS and ran into a big
performance problem; I'm not sure what the cause is.
Environment: 2x Intel X5560 (8 cores), 12GB RAM, 7 SLC SSDs (Intel).
A Java client runs 8 threads concurrently inserting into one InnoDB table:
~600 qps when sync_binlog=1 & innodb_flush_log_at_trx_commit=1
~600 qps when sync_binlog=10
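The usual first-round ZFS tuning for InnoDB, for framing numbers like these (a sketch; dataset names are placeholders, and 16k matches InnoDB's default page size):

# zfs set recordsize=16k tank/mysql/data         # avoid read-modify-write on 16k pages
# zfs set primarycache=metadata tank/mysql/data  # InnoDB has its own buffer pool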