search for: sigkills

Displaying 20 results from an estimated 402 matches for "sigkills".

Did you mean: sigkill
2002 Sep 05
7
sshd and SIGKILL
On the command # kill -9 `cat /var/run/sshd.pid`, sshd leaves its pid file behind! sshd.c code:
===============
....
/*
 * Arrange to restart on SIGHUP. The handler needs
 * listen_sock.
 */
signal(SIGHUP, sighup_handler);
signal(SIGTERM, sigterm_handler);
signal(SIGQUIT, sigterm_handler);
....
===============
The missing line is: signal(SIGKILL, sigterm_handler);
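(Editor's note: as the ioemu entry further down also states, SIGKILL can be neither caught nor ignored, so the "missing line" proposed above would be rejected by the kernel; the pid file has to be removed from a catchable-signal handler or at the next startup. A minimal C sketch, with a hypothetical pid-file path, illustrating both points:)

    #include <signal.h>
    #include <stdio.h>
    #include <stdlib.h>
    #include <unistd.h>

    static void sigterm_handler(int sig)
    {
        unlink("/var/run/example.pid");   /* hypothetical pid-file path */
        _exit(128 + sig);
    }

    int main(void)
    {
        if (signal(SIGTERM, sigterm_handler) == SIG_ERR)
            perror("signal(SIGTERM)");

        /* POSIX forbids catching SIGKILL, so this always fails: */
        if (signal(SIGKILL, sigterm_handler) == SIG_ERR)
            perror("signal(SIGKILL)");    /* prints: Invalid argument */

        pause();                          /* wait for a signal */
        return 0;
    }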
2017 Sep 08
3
GlusterFS as virtual machine storage
2017-09-08 13:44 GMT+02:00 Pavel Szalbot <pavel.szalbot at gmail.com>:
> I did not test SIGKILL because I suppose if graceful exit is bad, SIGKILL
> will be as well. This assumption might be wrong. So I will test it. It would
> be interesting to see the client keep working in case of a crash (SIGKILL)
> and not in case of a graceful exit of glusterfsd.
Exactly. If this happens, probably there
2019 Aug 08
2
another bizarre thing...
Is this on both EL6 and EL7? If only EL7, it could be control groups causing the issue. The idea of cgroups is to prevent zombie processes, but if you need your program to spawn another process then restart itself while the other process continues to run, you need to launch it in a different control group, or the shutdown of the parent process will also kill the child. In my case, we have an
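(Editor's note: a minimal C sketch of the workaround described above, under the assumption of cgroup v2; the cgroup path is hypothetical and would normally be created with suitable permissions first. setsid() detaches the child from the parent's session, and writing the child's pid to another cgroup's cgroup.procs moves it out of the parent's control group:)

    #include <fcntl.h>
    #include <stdio.h>
    #include <stdlib.h>
    #include <unistd.h>

    int main(void)
    {
        pid_t pid = fork();
        if (pid == 0) {                 /* child that must outlive the parent */
            setsid();                   /* leave the parent's session */

            /* Hypothetical, pre-created cgroup-v2 directory: */
            int fd = open("/sys/fs/cgroup/survivor/cgroup.procs", O_WRONLY);
            if (fd >= 0) {
                dprintf(fd, "%d\n", (int)getpid());  /* move self into it */
                close(fd);
            }
            execl("/bin/sleep", "sleep", "60", (char *)NULL);
            _exit(127);
        }
        return 0;                       /* parent can now restart or exit */
    }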
2017 Sep 08
2
GlusterFS as virtual machine storage
2017-09-08 13:21 GMT+02:00 Pavel Szalbot <pavel.szalbot at gmail.com>:
> Gandalf, isn't possible server hard-crash too much? I mean if reboot
> reliably kills the VM, there is no doubt network crash or poweroff
> will as well.
IIUP, the only way to keep I/O running is to gracefully exit glusterfsd. killall should send signal 15 (SIGTERM) to the process; maybe a bug in signal
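(Editor's note: the distinction being tested in this thread, as a small C sketch: SIGTERM, signal 15 and killall's default, gives the process a chance to shut down cleanly, while SIGKILL (killall -9) cannot be handled at all:)

    #include <signal.h>
    #include <stdio.h>
    #include <sys/wait.h>
    #include <unistd.h>

    static void graceful(int sig)
    {
        (void)sig;                      /* flush/close things here, then: */
        _exit(0);
    }

    int main(void)
    {
        pid_t pid = fork();
        if (pid == 0) {                 /* child stands in for glusterfsd */
            signal(SIGTERM, graceful);
            for (;;)
                pause();
        }
        sleep(1);                       /* crude: let the handler get installed */
        kill(pid, SIGTERM);             /* graceful: handler runs */
        /* kill(pid, SIGKILL) would end it with no chance to clean up */
        int status;
        waitpid(pid, &status, 0);
        printf("child exited %s\n", WIFEXITED(status) ? "cleanly" : "by signal");
        return 0;
    }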
2017 Nov 06
2
Log reopen broken in 2.2.33?
Has re-opening the logfiles broken between the 2.2.32 and 2.2.33 releases? Using the same config that I've had for the past 10 or so point releases, /usr/bin/doveadm log reopen works perfectly up until 2.2.32, but with 2.2.33 (and .1 and .2) no new logfiles are created; file descriptors still have the original files open and keep writing to those logs. On an idling test instance, I get: master:
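(Editor's note: for readers unfamiliar with the mechanism, this is the generic reopen-on-signal pattern the report is about, sketched in C rather than taken from Dovecot's source; the log path and the use of SIGUSR1 are illustrative:)

    #include <signal.h>
    #include <stdio.h>
    #include <unistd.h>

    static volatile sig_atomic_t reopen_requested;

    static void on_sigusr1(int sig) { (void)sig; reopen_requested = 1; }

    int main(void)
    {
        const char *path = "/tmp/example.log";   /* illustrative path */
        FILE *log = fopen(path, "a");
        if (!log)
            return 1;
        signal(SIGUSR1, on_sigusr1);

        for (;;) {
            if (reopen_requested) {
                reopen_requested = 0;
                /* Release the (possibly rotated-away) old file and open the
                 * path again, so writes go to the fresh logfile: */
                FILE *fresh = freopen(path, "a", log);
                if (fresh)
                    log = fresh;
            }
            fprintf(log, "tick\n");
            fflush(log);
            sleep(1);
        }
    }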
2014 Feb 08
1
Failed to terminate process X with SIGKILL: Device or resource busy
Hi, I'm using libvirt/kvm with openstack. When a VM has a CPU stall, the instance can't be destroyed. I have asked the question on the openstack user mailing list and received no answers in weeks. Here is one example:
# virsh destroy instance-00000085
error: Failed to destroy domain instance-00000085
error: Failed to terminate process 61222 with SIGKILL: Device or resource busy
2017 Sep 08
0
GlusterFS as virtual machine storage
On Sep 8, 2017 13:36, "Gandalf Corvotempesta" <gandalf.corvotempesta at gmail.com> wrote:
2017-09-08 13:21 GMT+02:00 Pavel Szalbot <pavel.szalbot at gmail.com>:
> Gandalf, isn't possible server hard-crash too much? I mean if reboot
> reliably kills the VM, there is no doubt network crash or poweroff
> will as well.
IIUP, the only way to keep I/O running is to
2017 Oct 05
2
dovecot: master: Warning: Sent SIGKILL to 100 imap-login processes
Hi, I am using Dovecot 2.2.32 (dfbe293d4). I noticed lots of messages like:
dovecot: master: Warning: Sent SIGKILL to 100 imap-login processes
in /var/log/maillog. I commented out "process_limit":
service imap {
  # Most of the memory goes to mmap()ing files. You may need to increase this
  # limit if you have huge mailboxes.
  #vsz_limit = $default_vsz_limit
  # Max. number of IMAP
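(Editor's note: warnings like the one above come from a master process escalating from SIGTERM to SIGKILL when children do not exit within a grace period. A generic C sketch of that pattern, not Dovecot's actual master code:)

    #include <signal.h>
    #include <stdio.h>
    #include <sys/wait.h>
    #include <unistd.h>

    static void shutdown_children(pid_t *pids, int n, unsigned grace_secs)
    {
        for (int i = 0; i < n; i++)
            kill(pids[i], SIGTERM);     /* polite request first */

        sleep(grace_secs);              /* crude grace period */

        int killed = 0;
        for (int i = 0; i < n; i++) {
            if (waitpid(pids[i], NULL, WNOHANG) == 0) {  /* still alive */
                kill(pids[i], SIGKILL);
                killed++;
            }
        }
        if (killed)
            fprintf(stderr, "Warning: Sent SIGKILL to %d processes\n", killed);
    }

    int main(void)
    {
        pid_t pid = fork();
        if (pid == 0) {                 /* a child that ignores SIGTERM */
            signal(SIGTERM, SIG_IGN);
            for (;;)
                pause();
        }
        shutdown_children(&pid, 1, 2);
        return 0;
    }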
2018 Dec 29
5
2.3.4 doesnt compile on FreeBSD 11.2 using clang
extract below; this has already been reported a while back but still no new patch, so this email is to serve as a reminder. If someone manually fixes it for the ports tree, I don't consider that a fix; ideally we need this fixed in the source code, as not everyone will install it from ports. Chris "clang40 -DHAVE_CONFIG_H -I. -I../.. -I../../src/lib -I../../src/lib-dns -I../../src/lib-test
2019 Aug 06
13
another bizarre thing...
Hi all! I'm stuck on something really bizarre that is happening to a product I "own" at work. It's a C program, built on CentOS, runs on CentOS or RHEL, has been in circulation since the early 00's, and is in use at hundreds of sites. Recently, at multiple customer sites it has started just going away: no core file (yes, ulimit is configured), nothing in any of its (several)
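(Editor's note: one way to chase a disappearance like this is a tiny wrapper that reports the terminating signal; the victim itself cannot log a SIGKILL, and SIGKILL never dumps core, but the parent still sees WTERMSIG. A sketch:)

    #define _GNU_SOURCE                 /* for WCOREDUMP on glibc */
    #include <stdio.h>
    #include <string.h>
    #include <sys/wait.h>
    #include <unistd.h>

    int main(int argc, char **argv)
    {
        if (argc < 2) {
            fprintf(stderr, "usage: %s program [args...]\n", argv[0]);
            return 2;
        }
        pid_t pid = fork();
        if (pid == 0) {
            execvp(argv[1], &argv[1]);  /* run the real program */
            perror("execvp");
            _exit(127);
        }
        int status;
        waitpid(pid, &status, 0);
        if (WIFSIGNALED(status))
            fprintf(stderr, "killed by signal %d (%s), core %s dumped\n",
                    WTERMSIG(status), strsignal(WTERMSIG(status)),
                    WCOREDUMP(status) ? "was" : "was not");
        return 0;
    }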
2017 Sep 08
4
GlusterFS as virtual machine storage
Gandalf, SIGKILL (killall -9 glusterfsd) did not stop I/O after a few minutes. SIGTERM on the other hand causes a crash, but this time it is not a read-only remount, but around 10 IOPS tops and 2 IOPS on average. -ps
On Fri, Sep 8, 2017 at 1:56 PM, Diego Remolina <dijuremo at gmail.com> wrote:
> I currently only have a Windows 2012 R2 server VM in testing on top of
> the gluster storage,
2023 May 22
2
[PATCH 3/3] fork, vhost: Use CLONE_THREAD to fix freezer/ps regression
On Sun, May 21, 2023 at 09:51:24PM -0500, Mike Christie wrote:
> When switching from kthreads to vhost_tasks two bugs were added:
> 1. The vhost worker tasks now show up as processes, so scripts doing ps
> or ps a would now incorrectly detect the vhost task as another process.
> 2. kthreads disabled freezing by setting PF_NOFREEZE, but vhost tasks
> didn't disable or
2008 Feb 04
3
[PATCH] ioemu: use SIGHUP instead of SIGKILL
The stub domain device model needs to trap the termination signal so as to actually destroy the stub domain. SIGKILL can't be trapped, SIGTERM is caught by SDL and so may go unnoticed. SIGHUP can be trapped and is not caught by SDL (and by default causes a process termination without a core). Signed-off-by: Samuel Thibault <samuel.thibault@eu.citrix.com> diff -r 2407a61c0d30
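(Editor's note: a minimal C sketch of the approach the patch describes, trapping SIGHUP as the termination request since SIGKILL cannot be trapped; the cleanup step is a stand-in for the stub-domain teardown:)

    #include <signal.h>
    #include <stdio.h>
    #include <string.h>
    #include <unistd.h>

    static volatile sig_atomic_t got_hup;

    static void on_sighup(int sig) { (void)sig; got_hup = 1; }

    int main(void)
    {
        struct sigaction sa;
        memset(&sa, 0, sizeof sa);
        sa.sa_handler = on_sighup;
        sigaction(SIGHUP, &sa, NULL);   /* trappable, unlike SIGKILL */

        while (!got_hup)
            pause();

        puts("SIGHUP received, tearing down");  /* destroy stub domain here */
        return 0;
    }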
2019 Aug 07
0
another bizarre thing...
On Mon, Aug 05, 2019 at 08:57:45PM -0400, Fred Smith wrote:
> Hi all!
>
> I'm stuck on something really bizarre that is happening to a product
> I "own" at work. It's a C program, built on CentOS, runs on CentOS or
> RHEL, has been in circulation since the early 00's, is in use at
> hundreds of sites.
>
> Recently, at multiple customer sites it has
2017 Sep 08
0
GlusterFS as virtual machine storage
I currently only have a Windows 2012 R2 server VM in testing on top of the gluster storage, so I will have to take some time to provision a couple of Linux VMs with both ext4 and XFS to see what happens on those. The Windows server VM is OK with killall glusterfsd, but when the 42-second timeout goes into effect, it gets paused and I have to go into RHEVM to un-pause it.
Diego
On Fri, Sep 8, 2017
2023 May 22
1
[PATCH 3/3] fork, vhost: Use CLONE_THREAD to fix freezer/ps regression
On 05/22, Mike Christie wrote:
>
> On 5/22/23 7:30 AM, Oleg Nesterov wrote:
> >> + /*
> >> + * When we get a SIGKILL our release function will
> >> + * be called. That will stop new IOs from being queued
> >> + * and check for outstanding cmd responses. It will then
> >> + * call vhost_task_stop to tell us to return and exit.
>
2018 Nov 23
3
v2.3.4 released
On 23.11.2018 15.20, Brad Smith wrote:
> On Fri, Nov 23, 2018 at 02:29:22PM +0200, Timo Sirainen wrote:
>> https://dovecot.org/releases/2.3/dovecot-2.3.4.tar.gz
>> https://dovecot.org/releases/2.3/dovecot-2.3.4.tar.gz.sig
>> Binary packages in https://repo.dovecot.org/
>>
>> * The default postmaster_address is now "postmaster@<user domain or
2019 Aug 08
0
another bizarre thing...
On Thu, Aug 08, 2019 at 05:06:06PM +0000, Young, Gregory wrote:
> Is this on both EL6 and EL7? If only EL7, it could be control groups causing
> the issue. The idea of cgroups is to prevent zombie processes, but if you need
> your program to spawn another process then restart itself while the other
> process continues to run, you need to launch it in a different control group,
> or the shutdown of the
2023 May 22
1
[PATCH 1/3] signal: Don't always put SIGKILL in shared_pending
When get_pending detects the task has been marked to be killed, we try to clean up the SIGKILL by doing a sigdelset and recalc_sigpending, but we still leave it in shared_pending. If the signal is being short-circuit delivered, there is no need to put it in shared_pending, so this adds a check in complete_signal. This patch was modified from Eric Biederman's <ebiederm at xmission.com> original
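(Editor's note: a userspace companion to this kernel discussion, not the patch itself: SIGKILL can be neither blocked nor caught, which is what makes short-circuit delivery possible in the first place. A sketch:)

    #include <errno.h>
    #include <signal.h>
    #include <stdio.h>
    #include <string.h>

    int main(void)
    {
        sigset_t set, blocked;
        sigemptyset(&set);
        sigaddset(&set, SIGKILL);
        sigprocmask(SIG_BLOCK, &set, NULL);      /* silently ignored for SIGKILL */

        sigprocmask(SIG_BLOCK, NULL, &blocked);  /* read back the mask */
        printf("SIGKILL blocked? %s\n",
               sigismember(&blocked, SIGKILL) ? "yes" : "no");   /* "no" */

        struct sigaction sa;
        memset(&sa, 0, sizeof sa);
        sa.sa_handler = SIG_IGN;
        if (sigaction(SIGKILL, &sa, NULL) == -1 && errno == EINVAL)
            puts("sigaction(SIGKILL) rejected: Invalid argument");
        return 0;
    }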
2011 Dec 08
5
Master repeatedly killing workers due to timeouts
Hi, We're using unicorn as a Rails server on Solaris, and it's been running great for several months. We've recently been having a few problems and I'm at a loss what might cause it. A number of times in the past few days, our unicorn slaves keep timing out & the master keeps restarting them. unicorn.log looks something like:
E,
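(Editor's note: unicorn's timeout mechanism, in which workers touch a heartbeat and the master SIGKILLs any worker whose heartbeat goes stale, sketched generically in C rather than taken from unicorn's Ruby source:)

    #include <signal.h>
    #include <stdio.h>
    #include <sys/mman.h>
    #include <sys/wait.h>
    #include <time.h>
    #include <unistd.h>

    int main(void)
    {
        /* One shared heartbeat slot, visible to master and worker: */
        time_t *beat = mmap(NULL, sizeof *beat, PROT_READ | PROT_WRITE,
                            MAP_SHARED | MAP_ANONYMOUS, -1, 0);
        *beat = time(NULL);

        pid_t pid = fork();
        if (pid == 0) {                 /* worker */
            for (int i = 0; i < 3; i++) {
                *beat = time(NULL);     /* heartbeat while healthy */
                sleep(1);
            }
            for (;;)                    /* then simulate a stuck request */
                pause();
        }

        const int timeout = 3;          /* like unicorn's :timeout, in seconds */
        for (;;) {                      /* master loop */
            sleep(1);
            if (time(NULL) - *beat > timeout) {
                fprintf(stderr, "worker %d timed out, sending SIGKILL\n", (int)pid);
                kill(pid, SIGKILL);
                waitpid(pid, NULL, 0);  /* a real master would re-spawn here */
                break;
            }
        }
        return 0;
    }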