Displaying 20 results from an estimated 4000 matches similar to: "Problem with creating constraints"
2010 Oct 27
2
Why is cpu-to-node mapping different between Xen 4.0.2-rc1-pre and Xen 4.1-unstable?
My system is a dual Xeon E5540 (Nehalem) HP Proliant DL380G6. When
switching between Xen 4.0.2-rc1-pre and Xen 4.1-unstable I noticed
that the NUMA info as shown by the Xen 'u' debug-key is different.
More specifically, the CPU to node mapping is alternating for 4.0.2
and grouped sequentially for 4.1. This difference affects the
allocation (wrt node/socket) of pinned VCPUs to the
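For illustration, on a dual quad-core E5540 with hyperthreading (16 logical CPUs) the two mappings described above would look roughly like this (the exact numbering is an assumption, not taken from the poster's machine):

    Xen 4.0.2 (alternating):   node0: 0,2,4,...,14    node1: 1,3,5,...,15
    Xen 4.1   (sequential):    node0: 0-7             node1: 8-15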
2007 Oct 04
0
Dom0 crash at 40Mbps Iperf traffic with only 80% CPU utilization ??
Hello
I have a 3 node experiment as detailed below to estimate the bridged
networking performance in para-virtualized Xen 3.0 on Emulab
(https://www.emulab.net/)
Here is how the 3-node topology looks (This topology is specified by
means of an NS-2 file)
topology:
Node0:eth0 ------ Node1:eth0
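A minimal way to reproduce such a bridged-throughput measurement with iperf (hostnames and the UDP rate are assumptions for illustration):

    # on Node2 (sink):
    iperf -s -u
    # on Node0 (source), send 40Mbps UDP through the Node1 bridge for 60s:
    iperf -c node2 -u -b 40M -t 60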
2011 Apr 08
1
Clustered Samba: Every 24 hours "There are Currently No Logon Servers Available"
All,
I have this very weird and annoying problem in my clustered setup: every ~24
hours the Vista clients can't log in, or even unlock their screens anymore.
The error they receive is "currently no logon services available".
This is very odd, because I have 2 Samba 3.5.8 servers available, running
and configured to handle login requests.
In the meantime, the people that are logged in
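For reference, a minimal sketch of the NT4-style domain-logon settings involved in such a setup (all values are placeholders, not the poster's actual config):

    [global]
        workgroup = EXAMPLE
        security = user
        domain logons = yes
        domain master = yes
        clustering = yes        ; ctdb-backed cluster

    [netlogon]
        path = /srv/samba/netlogon
        read only = yes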
2008 Mar 19
0
RE: [Xen-ia64-devel] New error trying to create a domain (using latest xend-unstable)
Hi Keir,
The CS# 17131 which I wrote to bind a guest to a NUMA node via cpu affinity
missed one condition existing on some machines, where a node has no
cpus but only memory. Under this condition it fails to set
cpu_affinity because the affinity parameter is empty. I handle this condition in
the new patch and slightly change the method of finding a suitable
node to bind the guest to. When
2011 Oct 17
1
brick out of space, unmounted brick
Hello Gluster users,
Before I put Gluster into production, I am wondering how it determines whether a byte can be written, and where I should look in the source code to change these behaviors. My experiences are with glusterfs 3.2.4 on CentOS 6 64-bit.
Suppose I have a Gluster volume made up of four 1 MB bricks, like this
Volume Name: test
Type: Distributed-Replicate
Status: Started
Number of
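A test volume of that shape could be created like this (hostnames and brick paths are placeholders; syntax per the gluster 3.2 CLI):

    gluster volume create test replica 2 transport tcp \
        server1:/bricks/b1 server2:/bricks/b1 \
        server1:/bricks/b2 server2:/bricks/b2
    gluster volume start test
    gluster volume info test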
2016 Aug 31
0
status of Continuous availability in SMB3
On 2016-08-31 at 08:13 +0000, zhengbin.08747 at h3c.com wrote:
> Hi Michael Adam:
> Thanks for your work on Samba. Here I am looking for some advice and your help.
> I have been stuck on continuous availability in samba 4.3.9 for two weeks. Continuous availability in SMB3 is an attractive feature and I am struggling to enable it.
>
> smb.conf, ctdb.conf are attached. Cluster file
2010 Apr 22
0
harddisk offline frequently when running vm
Without starting any VM, we ran a decompression stress test and the
hard disk did not go offline.
When some VMs are running, the same decompression stress test makes the
hard disk go offline frequently.
We suspect that lost interrupts cause the problem.
Has anybody met this problem, or does anyone have an idea?
Our env:
hardware:
Intel(R) Xeon(R) CPU E5520 @ 2.27GHz
SCSI storage
2010 Feb 01
0
[LLVMdev] Crash in PBQP register allocator
On Sun, 2010-01-31 at 13:28 +1100, Lang Hames wrote:
> Hi Sebastian,
>
> It boils down to this: The previous heuristic solver could return
> infinite cost solutions in some rare cases (despite finite-cost
> solutions existing). The new solver is still heuristic, but it should
> always return a finite cost solution if one exists. It does this by
> avoiding early reduction of
2010 Apr 19
1
Attach CDROM to windows
Hi Guys,
Now I am using Xen-4.0.0. I need to attach a CDROM from ISO files to virtual
machines while they are running. The VMs run RHEL4.6, Windows Server 2003
32bit and Windows Server 2006 64bit.
The disk configuration of the VMs is:
disk = [ 'tap:vhd:/guest/SS_test/vhd/hmi-100130.vhd,hda,w' ] (1)
When I attach a CDROM to RHEL4.6 VM with the command:
virsh
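The command is cut off here; a hedged sketch of attaching an ISO with virsh attach-disk (domain name and ISO path are placeholders):

    virsh attach-disk <domain> /guest/iso/example.iso hdc \
        --type cdrom --mode readonly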
2010 Jan 31
2
[LLVMdev] Crash in PBQP register allocator
Hi Sebastian,
It boils down to this: The previous heuristic solver could return
infinite cost solutions in some rare cases (despite finite-cost
solutions existing). The new solver is still heuristic, but it should
always return a finite cost solution if one exists. It does this by
avoiding early reduction of infinite spill cost nodes via R1 or R2.
To illustrate why the early reductions can be a
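The message is cut off here; a contrived two-node, two-register illustration of the general hazard (not the allocator's exact internals):

    cost(A) = ( 0, inf )       A can only take r1
    cost(B) = ( 0,  1  )       B locally prefers r1
    edge(A,B) = | inf   0 |    same register => interference (inf)
                |  0  inf |

Reducing B early on its local costs picks B=r1, which forces the infinite edge entry for A=r1, so the total is infinite. Deferring B leaves the finite solution A=r1, B=r2 with total cost 1.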
2016 Aug 31
3
status of Continuous availability in SMB3
Hi Michael Adam:
Thanks for your work on Samba. Here I am looking for some advice and your help.
I have been stuck on continuous availability in samba 4.3.9 for two weeks. Continuous availability in SMB3 is an attractive feature and I am struggling to enable it.
smb.conf and ctdb.conf are attached. The cluster file system is cephfs, mounted at /CephStorage
client: Windows 8 Pro
root at node0:~# samba
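For context, the settings usually cited as prerequisites for durable handles in clustered Samba look like the sketch below; whether Samba 4.3 actually advertised full SMB3 continuous availability is exactly what this thread is asking, so treat this as an assumption-laden sketch, not a recipe:

    [global]
        clustering = yes

    [share]
        path = /CephStorage
        durable handles = yes
        kernel oplocks = no
        kernel share modes = no
        posix locking = no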
2010 Apr 19
0
redhat4.6-32bit DomU with pv driver can't be saved
Hi all,
I have a problem with xm save/restore in Xen-4.0.0 & linux-2.6.31.13.
First, /etc/init.d/xendomains does not seem to work properly because of the bash
version, so I modified it as follows:
root@r02k08027 # diff -up /etc/init.d/xendomains /etc/init.d/xendomains_31
--- /etc/init.d/xendomains 2010-04-08 00:12:04.000000000 +0800
+++ /etc/init.d/xendomains_31 2010-04-19
2014 Oct 08
2
Does memoryBacking support the 'share' and 'mem-path' parameters?
Hi,
I want to use the qemu option '-object memory-backend-file,id=mem,size=2048M,mem-path=/mnt/huge,share=on' with libvirt, but I can't find 'mem-path' and 'share' in the documentation.
Because the vhost-user backend depends on the 'share=on' parameter and libvirt supports vhostuser, I guess there may be another way to set this parameter?
This is my xml:
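The XML itself is cut off. In libvirt releases newer than the one discussed here, the memory-backend-file options map onto <memoryBacking> roughly like this (a sketch, assuming a libvirt new enough to support <source> and <access>):

    <memoryBacking>
      <hugepages/>
      <source type='file'/>
      <access mode='shared'/>
    </memoryBacking>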
2013 Mar 04
1
Qemu-1.4.0-2 and Qemu-1.4.0-1 are broken on virt-preview and rawhide
All,
Our testing seems to indicate that Qemu-1.4.0-2 and Qemu-1.4.0-1 are
broken on virt-preview and rawhide.
I'm getting "undefined symbol: usbredirparser_send_start_bulk_receiving"
when running qemu-system-x86_64 -version.
Do others see this issue? Is there a known solution?
Harald
Failing cases:
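One way to confirm such an undefined-symbol problem is to compare what the binary needs against what the installed library exports (the library path is an assumption):

    ldd /usr/bin/qemu-system-x86_64 | grep usbredir
    nm -D /usr/lib64/libusbredirparser.so.1 | grep start_bulk_receiving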
2014 Oct 09
1
Re: Does memoryBacking support the 'share' and 'mem-path' parameters?
On 2014/10/8 16:57, Martin Kletzander wrote:
> On Wed, Oct 08, 2014 at 10:03:47AM +0800, Linhaifeng wrote:
>> Hi,
>>
>> I want to use the qemu option '-object memory-backend-file,id=mem,size=2048M,mem-path=/mnt/huge,share=on' with libvirt, but I can't find 'mem-path' and 'share' in the documentation.
>> because the vhost-user backend
2017 Aug 09
1
Gluster performance with VM's
Hi, community
Please help me with my problem.
I have 2 Gluster nodes, with 2 bricks on each.
Configuration:
Node1 brick1 replicated on Node0 brick0
Node0 brick1 replicated on Node1 brick0
Volume Name: gm0
Type: Distributed-Replicate
Volume ID: 5e55f511-8a50-46e4-aa2f-5d4f73c859cf
Status: Started
Snapshot Count: 0
Number of Bricks: 2 x 2 = 4
Transport-type: tcp
Bricks:
Brick1:
2010 Sep 07
2
remus failure -xen 4.0.1: xc_domain_restore cannot pin page tables
Hardware: Dell Poweredge R510 (32G RAM, 8-CPU Xeon)
64bit - xen 4.0.1 stable
64bit - 2.6.32.18 dom0 (.config attached) running Ubuntu 10.04
32 bit - 2.6.18.8 domU (.config attached) running ubuntu 8.04
domU has 3 tap2 disks, on lvm snapshots.
domU has 2G mem, 2 VCPU
workload on domU - ssh + top running, then destroy the domain -- this works.
But if I run a heavier workload, say a postgres db (just
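For reference, a Remus run of this kind is typically started like the sketch below (the interval flag and hostname are recalled/assumed, not taken from the poster):

    # checkpoint domU to backup-host every 100 ms
    remus -i 100 domU backup-host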
2011 Jan 27
7
[PATCH]: xl: fix broken cpupool-numa-split
Hi,
the implementation of xl cpupool-numa-split is broken. It basically
deals with only one poolid, but there are two to consider: the one from
the original root CPUpool, and the other from the newly created one.
On my machine the current output looks like:
root@dosorca:/data/images# xl cpupool-numa-split
libxl: error: libxl.c:2803:libxl_create_cpupool Could not create cpupool
error on creating
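Once fixed, the expected behavior is one cpupool per NUMA node; with real xl subcommands (output omitted rather than invented):

    xl cpupool-numa-split    # split the root pool into one pool per NUMA node
    xl cpupool-list -c       # verify the cpu assignment of each pool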