Displaying 6 results from an estimated 6 matches for "pid_namespace".
2014 May 16
2
[LLVMdev] [llvmlinux] [LLVMLinux] Regression: rev 208833/208834 break linux kernel build in ASM handling
....file 44 "/src/linux/include/linux" "poll.h"
> .file 45 "/src/linux/include/linux" "pid.h"
> .file 46 "/src/linux/include/linux" "kref.h"
> .file 47 "/src/linux/include/linux" "pid_namespace.h"
> .file 48 "/src/linux/include/linux" "slub_def.h"
> .file 49 "/src/linux/include/asm-generic" "atomic-long.h"
> .file 50 "/src/linux/include/linux" "workqueue.h"
> .file 51 "...
2014 May 16
2
[LLVMdev] [LLVMLinux] Regression: rev 208833/208834 break linux kernel build in ASM handling
Hi!
I reproduced it on the file init/main.c
The invocation, log and main.i / main.s is attached.
--
Dipl.-Ing.
Jan-Simon Möller
jansimon.moeller at gmx.de
On Friday, 16 May 2014 at 14:25:47, Renato Golin wrote:
> On 16 May 2014 14:01, Jan-Simon Möller <dl9pf at gmx.de> wrote:
> > A bisection points to
> >
> > git-svn-id:
2013 Nov 19
5
xenwatch: page allocation failure: order:4, mode:0x10c0d0 xen_netback:xenvif_alloc: Could not allocate netdev for vif16.0
Cache                     Num   Total   Size   Pages
...                       336     336     72      56
ext4_extent_status       7361    7446     40     102
ext3_inode_cache            0       0   1312      24
ext3_xattr               3726    3956     88      46
dquot                       0       0    384      21
kioctx                      0       0    896      18
pid_namespace               0       0   2208      14
posix_timers_cache        108     108    296      27
UNIX                      198     198   1472      22
UDP-Lite                    0       0   1280      25
ip_fib_trie               438     438     56...
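For context on the failure above: an "order:4" request asks the page allocator for 2^4 = 16 physically contiguous pages, i.e. 64 KiB on a 4 KiB-page system. The sketch below is a hypothetical helper (not from the thread) that computes that size and tallies slab-dump lines of the shape shown, assuming the columns are name, active objects, total objects, object size, and objects per slab:

```python
# Hypothetical helper for reading slab-cache dump lines like the one above.
# Assumes a 4 KiB page size; adjust PAGE_SIZE for other architectures.
PAGE_SIZE = 4096

def order_bytes(order):
    """Bytes in a physically contiguous allocation of the given order."""
    return PAGE_SIZE * (1 << order)

def parse_slab_lines(text):
    """Parse lines of 'name active total objsize objs_per_slab' into a dict."""
    caches = {}
    for line in text.strip().splitlines():
        parts = line.split()
        if len(parts) != 5:
            continue  # skip headers and truncated lines
        name, active, total, objsize, per_slab = parts
        try:
            caches[name] = {
                "active": int(active),
                "total": int(total),
                "objsize": int(objsize),
                "objs_per_slab": int(per_slab),
            }
        except ValueError:
            continue
    return caches

dump = """\
ext4_extent_status 7361 7446 40 102
pid_namespace 0 0 2208 14
UNIX 198 198 1472 22
"""

caches = parse_slab_lines(dump)
print(order_bytes(4))                      # 65536: an order-4 request is 64 KiB
print(caches["pid_namespace"]["objsize"])  # 2208
```

An order-4 failure usually means fragmentation, not exhaustion: plenty of free pages may exist without 16 of them being contiguous.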
2012 Nov 15
3
Likely mem leak in 3.7
Starting with 3.7 rc1, my workstation seems to lose RAM.
Up until (and including) 3.6, used-(buffers+cached) was roughly the same
as sum(rss) (taking shared into account). Now there is an approx 6G gap.
When the box first starts, it is clearly less swappy than with <= 3.6; I
can't tell whether that is related. The reduced swappiness persists.
It seems to get worse when I update
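The accounting the reporter describes can be sketched numerically. The figures below are hypothetical (not from the report) and only illustrate the comparison: memory used minus buffers and cache should roughly match the summed RSS of all processes, so a large leftover points at a kernel-side leak:

```python
# Hypothetical meminfo-style figures in MiB, chosen only to illustrate
# the "used - (buffers + cached) vs sum(rss)" comparison from the report.
total   = 16384
free    = 1024
buffers = 512
cached  = 4096

used = total - free                       # what the box reports as in use
unaccounted = used - (buffers + cached)   # memory not explained by page cache

rss_sum = 4500  # hypothetical sum of per-process RSS (shared pages once)
gap = unaccounted - rss_sum
print(gap)  # a large positive gap is the symptom the reporter saw
```

On a healthy system the gap stays near zero; the reporter's approximately 6 GB gap is what suggested a kernel memory leak rather than process growth.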
2010 Apr 19
20
Lustre Client - Memory Issue
Hi Guys,
My users are reporting some issues with memory on our Lustre 1.8.1 clients.
It looks like when they submit a single job at a time, the run time is about
4.5 minutes. However, when they ran multiple jobs (10 or fewer) on a client
with 192 GB of memory on a single node, the run time for each job
exceeded 3-4X the run time for the single process. They also noticed that
the swap space
2013 Apr 19
14
[GIT PULL] (xen) stable/for-jens-3.10
Hey Jens,
Please in your spare time (if there is such a thing at a conference)
pull this branch:
git://git.kernel.org/pub/scm/linux/kernel/git/konrad/xen.git stable/for-jens-3.10
for your v3.10 branch. Sorry for being so late with this.
<blurb>
It has the 'feature-max-indirect-segments' implemented in both backend
and frontend. The current problem with the backend and