similar to: Dom0 crash at 40Mbps Iperf traffic with only 80% CPU utilization ??

Displaying 19 results from an estimated 3000 matches similar to: "Dom0 crash at 40Mbps Iperf traffic with only 80% CPU utilization ??"

2005 Dec 09
0
RE: nodebytes and leafwords
Hi Kuhlen, what you said is correct. I am talking about how you are going to arrange these codewords into an array, i.e. in the function _make_decode_table. There he uses node bytes and leaf words for memory management. I have a 24-bit platform, so if I assume that the maximum possible codeword length is 24 bits, can I allocate memory of (2 * used entries - 2) to arrange the whole tree in
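That "(2 * used entries - 2)" figure is easy to sanity-check. Below is a minimal arithmetic sketch in Python, assuming (my assumption, not stated in the thread) that the decode tree is a full binary tree whose leaves are the used codebook entries:

# Back-of-envelope check for the "(2 * used entries - 2)" figure above.
# Assumption: the decode tree is a full binary tree, so every internal
# node has exactly two children.

def tree_slots(used_entries):
    # A full binary tree with n leaves has n - 1 internal nodes,
    # i.e. 2n - 1 nodes in total.
    leaves = used_entries
    internal = leaves - 1
    total = leaves + internal
    # If the root is kept implicitly (a common trick), one slot is saved,
    # which would give the 2n - 2 allocation mentioned above.
    return total - 1

print(tree_slots(256))  # 510 slots for a 256-entry codebook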
2010 Feb 01
0
[LLVMdev] Crash in PBQP register allocator
On Sun, 2010-01-31 at 13:28 +1100, Lang Hames wrote: > Hi Sebastian, > > It boils down to this: The previous heuristic solver could return > infinite cost solutions in some rare cases (despite finite-cost > solutions existing). The new solver is still heuristic, but it should > always return a finite cost solution if one exists. It does this by > avoiding early reduction of
2010 Jan 31
2
[LLVMdev] Crash in PBQP register allocator
Hi Sebastian, It boils down to this: The previous heuristic solver could return infinite cost solutions in some rare cases (despite finite-cost solutions existing). The new solver is still heuristic, but it should always return a finite cost solution if one exists. It does this by avoiding early reduction of infinite spill cost nodes via R1 or R2. To illustrate why the early reductions can be a
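To make the failure mode concrete, here is a toy illustration (my own construction, not LLVM's PBQP code or Lang's actual example) of why committing a node to its locally cheapest option before looking at infinite edge costs can return an infinite-cost solution even though a finite one exists:

import itertools, math

INF = math.inf

# Toy PBQP instance: two nodes, two options each.
node_costs = {"A": [0.0, 1.0], "B": [0.0, 0.0]}
# edge_cost[i][j]: cost of picking option i for A together with option j for B.
edge_cost = [[INF, INF],
             [0.0, 0.0]]

def total(a, b):
    return node_costs["A"][a] + node_costs["B"][b] + edge_cost[a][b]

# "Early reduction": commit A to its locally cheapest option, ignoring the edge.
greedy_a = min(range(2), key=lambda i: node_costs["A"][i])
greedy = min(total(greedy_a, b) for b in range(2))

# Exhaustive search over both nodes.
best = min(total(a, b) for a, b in itertools.product(range(2), repeat=2))

print(greedy)  # inf -- the early commitment walked into an infinite edge cost
print(best)    # 1.0 -- a finite solution existed all along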
2009 Nov 12
0
Is NUMA working correctly?
Hi Everyone, I'm trying to bind CPU and memory usage to particular cores using some quad-CPU, 16-core Opterons. They have 64GB RAM, 16 GB per node. It seems that xm info shows it is not working as expected though; below are details for the first node: Name ID Mem VCPUs State Time(s) Domain-0 0 4096 2
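One quick way to cross-check the binding, sketched below under the assumption that the classic xm toolstack is in use: xm vcpu-list reports a "CPU Affinity" column, and "any cpu" there means the VCPU is not actually pinned.

import subprocess

# Flag every VCPU that is still free to float across all physical CPUs.
out = subprocess.run(["xm", "vcpu-list"], capture_output=True, text=True).stdout
for line in out.splitlines()[1:]:          # skip the header row
    if "any cpu" in line:
        fields = line.split()
        print("not pinned:", fields[0], "vcpu", fields[2])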
2011 Jan 27
7
[PATCH]: xl: fix broken cpupool-numa-split
Hi, the implementation of xl cpupool-numa-split is broken. It basically deals with only one poolid, but there are two to consider: the one from the original root CPUpool, the other from the newly created one. On my machine the current output looks like: root@dosorca:/data/images# xl cpupool-numa-split libxl: error: libxl.c:2803:libxl_create_cpupool Could not create cpupool error on creating
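For readers unfamiliar with the command, here is a schematic in Python (not the actual xl implementation) of what cpupool-numa-split is meant to achieve, and where the two pool ids come in; the topology below is made up:

# One cpupool per NUMA node, each holding exactly that node's CPUs. Two pool
# ids are in play, as described above: the root pool that CPUs are removed
# from, and each newly created per-node pool.
node_to_cpus = {0: [0, 1, 2, 3], 1: [4, 5, 6, 7]}  # made-up topology

ROOT_POOL_ID = 0
pools = {ROOT_POOL_ID: [c for cpus in node_to_cpus.values() for c in cpus]}

for node, cpus in node_to_cpus.items():
    new_id = max(pools) + 1           # id of the freshly created pool
    for c in cpus:
        pools[ROOT_POOL_ID].remove(c) # shrink the root pool...
    pools[new_id] = list(cpus)        # ...and grow the new one

print(pools)  # {0: [], 1: [0, 1, 2, 3], 2: [4, 5, 6, 7]}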
2012 Jun 21
1
echo 0 > /proc/sys/kernel/hung_task_timeout_secs and others error, Part II
The first problem is as below: One issue is that files copied to the device can't be listed on node2 using ls -al on the mounted directory. But using debug.ocfs2 on node2, it is OK to list the copied files. After a remount of the device on node2, the files can be listed. The second is that node1 is in the ocfs2 cluster, but using debug.ocfs2 and the mounted.ocfs2 -f command, can not list the node1
2012 Nov 14
2
How to filter xml value in R?
Hi, I have one xml file. <Class> <Node1 code="1"> First node </Node1> <Node2 code="1"> Second node </Node2> <Node3 code="1"> Third node </Node3> <Node1 code="2"> Fourth node </Node1> </Class> for (i in 1:xmlSize(Class)) { print(Class[[i]]) # how can I filter Node1? } by
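The usual answer in R is an XPath query (e.g. getNodeSet or xpathApply from the XML package). The same filtering is sketched below with Python's standard library, just to show the shape of the solution; the document literal is copied from the question:

import xml.etree.ElementTree as ET

doc = ET.fromstring("""
<Class>
  <Node1 code="1"> First node </Node1>
  <Node2 code="1"> Second node </Node2>
  <Node3 code="1"> Third node </Node3>
  <Node1 code="2"> Fourth node </Node1>
</Class>
""")

# Keep only the Node1 children, regardless of their position.
for node in doc.findall("Node1"):
    print(node.get("code"), node.text.strip())
# 1 First node
# 2 Fourth node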
2013 Dec 17
1
Speed issue in only one direction
Hi all, I'm back again with my speed issues. The past issues were dependent on the network I used. Now I run my tests in a lab, with 2 configurations linked by a Gigabit switch: node1: Intel Core i5-2400 with Debian 7.2 node2: Intel Core i5-3570 with Debian 7.2 Both have AES and PCLMULQDQ announced in /proc/cpuinfo. I use Tinc 1.1 from Git. When I run an iperf test from node2 (client) to
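A small harness along these lines can help compare the two directions; a minimal sketch, assuming iperf 2 is already running in server mode (iperf -s) on both machines, and using the node names from the post:

import subprocess

# Run a 10-second client test towards each node and print the raw reports,
# so an asymmetry between the directions shows up side by side.
for server in ("node1", "node2"):
    result = subprocess.run(["iperf", "-c", server, "-t", "10"],
                            capture_output=True, text=True)
    print(f"--- towards {server} ---")
    print(result.stdout)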
2010 Oct 27
2
Why is cpu-to-node mapping different between Xen 4.0.2-rc1-pre and Xen 4.1-unstable?
My system is a dual Xeon E5540 (Nehalem) HP Proliant DL380G6. When switching between Xen 4.0.2-rc1-pre and Xen 4.1-unstable I noticed that the NUMA info as shown by the Xen 'u' debug-key is different. More specifically, the CPU-to-node mapping is alternating for 4.0.2 and grouped sequentially for 4.1. This difference affects the allocation (wrt node/socket) of pinned VCPUs to the
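The difference is easier to see spelled out. Below is a sketch of the two enumeration schemes for a 2-node, 16-CPU box (sizes chosen to match a dual E5540 with hyperthreading; the real layout comes from the 'u' debug-key output):

NCPUS, NNODES = 16, 2

# Alternating (4.0.2-style) vs. grouped-sequential (4.1-style) mappings.
alternating = {cpu: cpu % NNODES for cpu in range(NCPUS)}
grouped     = {cpu: cpu // (NCPUS // NNODES) for cpu in range(NCPUS)}

print([alternating[c] for c in range(NCPUS)])  # [0, 1, 0, 1, ...]
print([grouped[c] for c in range(NCPUS)])      # [0, 0, ..., 1, 1]
# A VCPU pinned to, say, CPUs 0-7 lands on both nodes under the first
# scheme but on a single node under the second.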
2018 Sep 14
0
Re: NUMA issues on virtualized hosts
Hello, OK, I found that the CPU pinning was wrong, so I corrected it to be 1:1. The issue with iozone remains the same. The spec is running; however, it runs slower than in the 1-NUMA case. The corrected XML looks as follows: <cpu mode='host-passthrough'><topology sockets='8' cores='4' threads='1'/><numa><cell cpus='0-3'
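For reference, here is one way the truncated <numa> section might continue, generated with ElementTree; the 8x4 topology matches the fragment above, while the per-cell memory value is a made-up placeholder:

import xml.etree.ElementTree as ET

cpu = ET.Element("cpu", mode="host-passthrough")
ET.SubElement(cpu, "topology", sockets="8", cores="4", threads="1")
numa = ET.SubElement(cpu, "numa")
for node in range(8):
    first = node * 4
    ET.SubElement(numa, "cell",
                  cpus=f"{first}-{first + 3}",  # 4 cores per cell
                  memory="16777216")            # KiB; placeholder value
print(ET.tostring(cpu, encoding="unicode"))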
2012 Sep 12
4
[LLVMdev] Nice nodes dumping patch
Hi all. Currently, if you launch some tool with the "-debug" option, you get a pretty detailed dump, though the SelectionDAG nodes are dumped as their pointer values: 0xa1d7258: i32 = GlobalAddress<void (i32, ...)* @f> 0 0xa1d7368: i32 = undef [ORD=1] 0xa1d73f0: i32 = TargetConstant<12> [ORD=1] ... That is good if you want to look at memory contents by address. But if you
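The generic idea behind the patch, sketched outside LLVM (this is not the patch itself): assign small, stable per-dump ids to nodes instead of printing raw pointer values:

import itertools

class Node:
    def __init__(self, label):
        self.label = label

nodes = [Node("GlobalAddress"), Node("undef"), Node("TargetConstant<12>")]

# Number the nodes in dump order instead of using their addresses.
ids = {}
counter = itertools.count()
for n in nodes:
    ids[id(n)] = next(counter)

for n in nodes:
    # "0xa1d7258: i32 = ..." becomes "t0: i32 = ..." -- easy to read and diff.
    print(f"t{ids[id(n)]}: i32 = {n.label}")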
2013 Jun 26
2
HI Guys
Hi, I recently configured a 2-node replica glusterfs, and I am having a couple of issues. 1. As soon as I reboot node2, the glusterfs on node1 is not available, but when I reboot/shutdown node1 the glusterfs is available on node 0, so please let me know if you guys have encountered the same issue. 2. I am not able to mount the glusterfs mount at the time of reboot; I had to do it manually
2011 Apr 08
1
Clustered Samba: Every 24 hours "There are Currently No Logon Servers Available"
All, I have this very weird and annoying problem in my clustered setup: every ~24 hours the Vista clients can't log in, or even unlock their screens anymore. The error they receive is "currently no logon servers available". This is very odd, because I have 2 Samba 3.5.8 servers available, running and configured to handle login requests. In the meantime, the people that are logged in
2010 Oct 05
0
WG: HOWTO samba4 centos5.5 named dnsupdate drbd simple failover
CentOS 5.5/samba4/named: here is a short guide to setting it up so it works. I added TSIG for bind-master and bind-slave. An update to samba4 alpha13 has been added (installing git on CentOS 5.5). If you follow this howto right now you will start with samba4 alpha13, so you do not need the update section. But you do need git for your installation, because the rsync approach is broken! First of all do not install the bind
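For the TSIG part, a key is just a shared base64 secret declared identically on master and slave; a minimal sketch of generating one (the key name and algorithm here are placeholders, not taken from the howto):

import base64, os

# Generate a random shared secret and print a named.conf-style key stanza.
# Any name/algorithm pair will do as long as master and slave match.
secret = base64.b64encode(os.urandom(32)).decode()
print(f'key "transfer-key" {{ algorithm hmac-sha256; secret "{secret}"; }};')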
2010 Sep 15
0
problem with gfs_controld
Hi, We have two nodes with centos 5.5 x64 and cluster+gfs offering samba and NFS services. Recently one node displayed the following messages in log files: Sep 13 08:19:07 NODE1 gfs_controld[3101]: cpg_mcast_joined error 2 handle 2846d7ad00000000 MSG_PLOCK Sep 13 08:19:07 NODE1 gfs_controld[3101]: send plock message error -1 Sep 13 08:19:11 NODE1 gfs_controld[3101]: cpg_mcast_joined error 2
2010 Aug 16
1
WG: HOWTO samba4 centos5.5 named dnsupdate drbd simple failover
CentOS 5.5/samba4/named: here is a short guide to setting it up so it works. First of all, do not install the bind package coming with CentOS 5.5! Install what samba needs: yum install libacl* gnutls* readline* python* gdb* autoconf* Named installation: here is a description of what to do: http://jason.roysdon.net/2009/10/16/building-bind-9-6-on-rhel5-centos5-for-dnssec-nsec3-support/ The steps, yum
2016 Mar 10
0
different uuids, but still "Attempt to migrate guest to same host" error
Background: ---------- I'm trying to debug a two-node pacemaker/corosync cluster where I want to be able to do live migration of KVM/qemu VMs. Storage is backed via dual-primary DRBD (yes, fencing is in place). When moving the VM between nodes via 'pcs resource move RES NODENAME', the live migration fails although pacemaker will shut down the VM and restart it on the other node.
2010 Aug 02
0
HOWTO centOS 5.5 samba4 dns dynamic update/Replication
Dear all, after the feedback, I am renewing this HOWTO with replication to a second SAMBA 4 PDC. We have 2 CentOS 5.5 servers on which we build a SAMBA4 forest with 2-server replication. We have one host called "node1" and the second "node2". Step 1: On node1: do not install the named coming with CentOS; that version cannot do dns updates! Install what samba needs. yum
2010 Aug 09
2
HOWTO samba4 centos5.5 named dnsupdate drbd simple failover
CentOS 5.5/samba4/named: here is a short guide to setting it up so it works. First of all, do not install the bind package coming with CentOS 5.5! Install what samba needs: yum install libacl* gnutls* readline* python* gdb* autoconf* Named installation: here is a description of what to do: http://jason.roysdon.net/2009/10/16/building-bind-9-6-on-rhel5-centos5-for-dnssec-nsec3-support/ The steps, yum