similar to: xend memory leakage

Displaying 20 results from an estimated 30000 matches similar to: "xend memory leakage"

2020 Nov 03
0
[PATCH v3 2/2] vhost-vdpa: fix page pinning leakage in error path
On 10/29/2020 2:53 PM, Michael S. Tsirkin wrote: > On Thu, Oct 15, 2020 at 01:17:14PM -0700, si-wei liu wrote: >> On 10/15/2020 6:11 AM, Michael S. Tsirkin wrote: >>> On Thu, Oct 15, 2020 at 02:15:32PM +0800, Jason Wang wrote: >>>> On 2020/10/14 7:42, si-wei liu wrote: >>>>>> So what I suggest is to fix the pinning leakage first and do the
2013 Nov 15
0
[qemu-upstream-unstable test] 21952: regressions - FAIL
flight 21952 qemu-upstream-unstable real [real] http://www.chiark.greenend.org.uk/~xensrcts/logs/21952/ Regressions :-( Tests which did not succeed and are blocking, including tests which could not be run: test-amd64-i386-qemuu-rhel6hvm-intel 7 redhat-install fail REGR. vs. 20054 Tests which did not succeed, but are not blocking: test-amd64-amd64-xl-qemuu-win7-amd64 13 guest-stop
2009 Mar 22
0
[LLVMdev] Possible memory leakage in the LLVM JIT Engine
Hi, Was this ever resolved? I'm curious; I'm also in a situation where there may be many (very many) JITted functions over the history of an application (which may be running for many days). Thanks On Mar 20, 2009, at 7:34 AM, George Giorgidze wrote: > Hi, > > In my application I am JITing thousands of functions, though I am > doing it sequentially and running only
2013 Nov 14
0
[qemu-upstream-unstable test] 21930: regressions - FAIL
flight 21930 qemu-upstream-unstable real [real] http://www.chiark.greenend.org.uk/~xensrcts/logs/21930/ Regressions :-( Tests which did not succeed and are blocking, including tests which could not be run: test-amd64-i386-qemuu-rhel6hvm-intel 7 redhat-install fail REGR. vs. 20054 Tests which are failing intermittently (not blocking): test-amd64-i386-xl-qemuu-winxpsp3-vcpus1 7
2013 Nov 18
0
[qemu-upstream-unstable test] 21993: regressions - FAIL
flight 21993 qemu-upstream-unstable real [real] http://www.chiark.greenend.org.uk/~xensrcts/logs/21993/ Regressions :-( Tests which did not succeed and are blocking, including tests which could not be run: test-amd64-i386-qemuu-rhel6hvm-intel 7 redhat-install fail REGR. vs. 20054 Tests which are failing intermittently (not blocking): test-amd64-amd64-xl-qemuu-win7-amd64 8
2009 Mar 20
2
[LLVMdev] Possible memory leakage in the LLVM JIT Engine
Hi, In my application I am JITing thousands of functions, though I am doing it sequentially and running only one at a time. So it is crucial to be able to properly clean up the memory from an old JITed function when JITing and running the new one. I am using the Haskell binding of LLVM and my application works OK. However, memory usage increases and never decreases during the run time of my
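
(Editor's sketch: the cleanup this thread is after was exposed by the legacy, pre-MCJIT LLVM JIT through its C API. A minimal, hypothetical example follows; LLVMFreeMachineCodeForFunction and LLVMDeleteFunction existed in that era's llvm-c headers, but the surrounding harness here is made up and is not the poster's actual code.)

    /* Sketch only: reclaim memory for a function after running it once
     * under the legacy (pre-MCJIT) LLVM JIT.  Engine and function setup
     * are assumed to happen elsewhere. */
    #include <llvm-c/Core.h>
    #include <llvm-c/ExecutionEngine.h>

    static void run_once_and_release(LLVMExecutionEngineRef engine,
                                     LLVMValueRef fn)
    {
        /* Run the JITed function (no arguments in this sketch). */
        LLVMGenericValueRef result = LLVMRunFunction(engine, fn, 0, NULL);
        LLVMDisposeGenericValue(result);

        /* Free the machine code the JIT emitted for this function... */
        LLVMFreeMachineCodeForFunction(engine, fn);

        /* ...and drop the IR body too if it will never be re-JITed. */
        LLVMDeleteFunction(fn);
    }

(The C++ equivalent in that era was ExecutionEngine::freeMachineCodeForFunction(F); whether the Haskell binding exposed it at the time is not something the excerpt answers.)
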
2017 Jul 27
1
Memory Leakage in Gluster 3.10.2-1
Are you still facing the problem? If so, can you please provide the workload, cmd_log_history file, log files, etc.? Regards Rafi KC On 06/23/2017 02:06 PM, shridhar s n wrote: > Hi All, > > We are using GlusterFS 3.10.2 (upgraded from 3.7.0 last week) on > CentOS 7.x. > > We continue to see memory utilization going up once every 3 days. The > memory utilization of
2017 Jun 23
1
Memory Leakage in Gluster 3.10.2-1
Hi All, We are using GlusterFS 3.10.2 (upgraded from 3.7.0 last week) on CentOS 7.x. We continue to see memory utilization going up once every 3 days. The memory utilization of the server daemon (glusterd) keeps increasing. In about 30+ hours, the memory utilization of the glusterd service alone reaches 70% of available memory. Since we have alarms for this threshold, we get notified
2019 Oct 24
1
[PATCH] virtio_ring: fix packed ring event may missing
On 2019/10/24 11:26, Liu, Yong wrote: > >> -----Original Message----- >> From: Jason Wang [mailto:jasowang at redhat.com] >> Sent: Tuesday, October 22, 2019 9:06 PM >> To: Liu, Yong <yong.liu at intel.com>; mst at redhat.com; Bie, Tiwei >> <tiwei.bie at intel.com> >> Cc: virtualization at lists.linux-foundation.org >> Subject: Re: [PATCH]
2003 May 03
2
Memory leakage?
Hi, all: I'm using R 1.7.0 on WinXP under SDI mode. However, very often after I closed all R windows, my CPU usage was still 100%. By checking the task manager, I found one or several "Rgui.exe" processes still running and taking all the CPU. I had to close them one by one manually. This happened to me with R 1.6.1 and R 1.6.2 as well, and also on Win2K. Remember there was a
2020 Nov 03
0
[PATCH 1/2] Revert "vhost-vdpa: fix page pinning leakage in error path"
On 2020/10/30 3:45, Si-Wei Liu wrote: > This reverts commit 7ed9e3d97c32d969caded2dfb6e67c1a2cc5a0b1. > > Signed-off-by: Si-Wei Liu <si-wei.liu at oracle.com> > --- > drivers/vhost/vdpa.c | 119 +++++++++++++++++++++------------------------ > 1 file changed, 48 insertions(+), 71 deletions(-) I saw this has been reverted there
2019 Oct 22
0
[PATCH] virtio_ring: fix packed ring event may missing
On 2019/10/22 2:48, Liu, Yong wrote: > Hi Jason, > My answers are inline. > >> -----Original Message----- >> From: Jason Wang [mailto:jasowang at redhat.com] >> Sent: Tuesday, October 22, 2019 10:45 AM >> To: Liu, Yong <yong.liu at intel.com>; mst at redhat.com; Bie, Tiwei >> <tiwei.bie at intel.com> >> Cc: virtualization at
2005 Aug 26
1
Memory leakage/violation?
Hi, I've spotted a possible memory leakage/violation in the latest R v2.1.1 patched and R v2.2.0dev on Windows XP Pro SP2 Eng. I first caught it deep down in a nested svd algorithm when subtracting a double 'c' from an integer vector 'a', where both had finite values, but assigning 'a <- a - c' would report NaNs whereas (a - c) alone would not. Different runs
2020 Oct 01
0
[PATCH] vhost-vdpa: fix page pinning leakage in error path
Pinned pages are not properly accounted, particularly when a mapping error occurs on IOTLB update. Clean up dangling pinned pages for the error path. As the in-flight pinned pages, specifically for a memory region that strides across multiple chunks, would need more than one free page for bookkeeping and accounting, pin pages for all memory in the IOVA range in one go for simplicity rather than have
2020 Oct 01
0
[PATCH v2] vhost-vdpa: fix page pinning leakage in error path
Pinned pages are not properly accounted, particularly when a mapping error occurs on IOTLB update. Clean up dangling pinned pages for the error path. As the in-flight pinned pages, specifically for a memory region that strides across multiple chunks, would need more than one free page for bookkeeping and accounting, pin pages for all memory in the IOVA range in one go for simplicity rather than have
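
(Editor's sketch: the pattern this commit message describes — undo every page pinned so far when a later step of the IOTLB update fails — looks roughly like the following in kernel C. pin_user_pages() and unpin_user_pages() are the real GUP helpers of that era; map_one_page() is a hypothetical stand-in for the mapping step that can fail. This is not the actual vhost-vdpa code.)

    /* Error-path cleanup sketch: never leave pages pinned on failure. */
    #include <linux/mm.h>

    static int pin_and_map(unsigned long uaddr, unsigned long npages,
                           struct page **pages)
    {
            long pinned;
            unsigned long i;
            int ret;

            pinned = pin_user_pages(uaddr, npages,
                                    FOLL_WRITE | FOLL_LONGTERM, pages, NULL);
            if (pinned < 0)
                    return pinned;
            if (pinned != npages) {
                    ret = -ENOMEM;
                    goto err_unpin;
            }

            for (i = 0; i < npages; i++) {
                    ret = map_one_page(pages[i]);   /* hypothetical step */
                    if (ret)
                            goto err_unpin;  /* undo ALL pins, not just [0, i) */
            }
            return 0;

    err_unpin:
            /* A full version would also unmap the pages already mapped above. */
            unpin_user_pages(pages, pinned);
            return ret;
    }
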
2010 Apr 26
0
Experiencing memory leak of xend/qemu-dm
We have created more than 200 VMs and run them for almost one year. There have been no xend memory problems with the servers which contain only Linux VMs. Recently, we decided to use Windows VMs. After that, we have experienced a lack of dom0 memory several times because dom0 memory usage (due to xend) increased linearly and dom0 ran out of memory. (Eventually the OOM killer made the server
2004 Jun 20
0
key management with ssh-agent, IdentityFile and info leakage
Editor's note: I just now found something about IdentitiesOnly that might do the trick; there's some other stuff in here too. This is about preventing info leakage [keys for other sites] from appearing in the client<-->server key negotiation with ssh-agent and IdentityFile. ssh/config:IdentityFile seems to indicate that only the specified key will be tried, and if that key fails, no other keys
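
(Editor's note: the configuration the poster is describing usually takes this shape in ~/.ssh/config; the host name and key path below are made up:)

    Host git.example.com
        IdentityFile ~/.ssh/id_example
        IdentitiesOnly yes

(With IdentitiesOnly yes, the client offers only the identities configured for that host even when ssh-agent holds additional keys, which is precisely the cross-site key leakage the post worries about.)
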
2019 Oct 27
1
[PATCH] virtio_ring: fix stalls for packed rings
From: Marvin Liu <yong.liu at intel.com> When VIRTIO_F_RING_EVENT_IDX is negotiated, virtio devices can use virtqueue_enable_cb_delayed_packed to reduce the number of device interrupts. At the moment, this is the case for virtio-net when the napi_tx module parameter is set to false. In this case, the virtio driver selects an event offset and expects that the device will send a
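
(Editor's note: the wrap-safe comparison at the heart of the event-index mechanism is, for split rings, the vring_need_event() helper from include/uapi/linux/virtio_ring.h; the packed-ring variant this patch touches adds wrap counters but follows the same idea:)

    /* Signal the other side only if new_idx has moved past event_idx
     * since `old`; unsigned 16-bit arithmetic makes the test wrap-safe. */
    static inline int vring_need_event(__u16 event_idx, __u16 new_idx,
                                       __u16 old)
    {
            return (__u16)(new_idx - event_idx - 1) < (__u16)(new_idx - old);
    }
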
2006 Nov 14
4
Samba 3.0.14 (Debian Sarge) Memory Leakage
Hi! Our Samba file server seems to have a memory leak. We are using Samba as a file server out of the box (Debian Sarge) on kernel 2.6.16.31. After a while, users who have some shares and files open acquire more and more memory until the smbd dies. Here is a small excerpt from top: PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND 13843 gxxxxx 16 0 51792 44m 2552 S
2000 Mar 09
1
[Galen Hancock <galen@veribox.net>] Information leakage in sshd
Hi, Thought I'd just forward this here, because I don't have time to look into it right now, and am off skiing next week. I'd guess that we should be checking for username = ``root'' before going off to do password checks, and rejecting it on that basis first. Cheers, Phil. -- Mind-numbingly stupid UK law alert! Act now to stop it! http://www.stand.org.uk/