Displaying 20 results from an estimated 1000 matches similar to: "Unknown Error"
2012 Feb 21
3
How many virtual guest 'cpus' can a core duo 'quad' core support
CentOS-6.2
What is the maximum number of cpus I can configure for a
single vm guest running on a host with this hardware?
# lscpu
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Byte Order: Little Endian
CPU(s): 4
On-line CPU(s) list: 0-3
Thread(s) per core: 1
Core(s) per socket: 4
CPU socket(s): 1
NUMA node(s): 1
Vendor ID:
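For a host like this, the hard limits are easy to query directly; a short sketch with standard virsh commands (none of this is quoted from the original thread):
virsh maxvcpus kvm     # maximum vCPUs the KVM driver reports for this connection
virsh nodeinfo         # physical sockets, cores and threads on the host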
2019 May 08
2
failed to build llvm since 25de7691a0e27c29c8d783a22373cc265571f5e9 on AMD platform
Hi
We have observed that the errors below occur on the AMD platform since commit 25de7691a0e27c29c8d783a22373cc265571f5e9:
root at lkp-opteron1 /opt/rootfs/llvm_project/src/build# cmake -DCMAKE_BUILD_TYPE=release -DLLVM_ENABLE_PROJECTS=clang -G "Unix Makefiles" ../llvm -DCMAKE_INSTALL_PREFIX=/opt/cross/
-- clang project is enabled
-- clang-tools-extra project is disabled
-- compiler-rt project is disabled
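Since the failure shows up at configure time, a clean re-run that keeps the full log is the usual first step; a sketch reusing the same paths and flags as in the message:
cd /opt/rootfs/llvm_project/src
rm -rf build && mkdir build && cd build
cmake -DCMAKE_BUILD_TYPE=release -DLLVM_ENABLE_PROJECTS=clang \
      -G "Unix Makefiles" ../llvm -DCMAKE_INSTALL_PREFIX=/opt/cross/
less CMakeFiles/CMakeError.log   # the compiler-detection errors end up here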
2020 Sep 14
0
Re: [ovirt-users] Re: Testing ovirt 4.4.1 Nested KVM on Skylake-client (core i5) does not work
On Mon, Sep 14, 2020 at 8:42 AM Yedidyah Bar David <didi@redhat.com> wrote:
>
> On Mon, Sep 14, 2020 at 12:28 AM wodel youchi <wodel.youchi@gmail.com> wrote:
> >
> > Hi,
> >
> > Thanks for the help, I think I found the solution using this link : https://www.berrange.com/posts/2018/06/29/cpu-model-configuration-for-qemu-kvm-on-x86-hosts/
> >
> >
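The linked post is about picking a CPU model that exposes the needed flags; for nested KVM specifically, a common first check on an Intel host looks like this (standard kvm_intel module paths, not quoted from the thread):
cat /sys/module/kvm_intel/parameters/nested                     # Y/1 means nested virt is enabled
echo "options kvm_intel nested=1" | sudo tee /etc/modprobe.d/kvm-nested.conf
sudo modprobe -r kvm_intel && sudo modprobe kvm_intel           # reload with nesting on (no guests running)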
2011 Jul 02
1
Bug#632397: xen: /proc/uptime show idle bigger than uptime
Package: xen
Version: 4.0.1-2
Severity: normal
/proc/uptime shows idle bigger than uptime:
dom0:
% cat /proc/uptime
518389.91 944378.70
%
one domU:
% cat /proc/uptime
417536.22 764826.15
%
another domU:
% cat /proc/uptime
426960.17 795800.89
%
This would be normal on a multicore / HT CPU, but this is an old single-core AMD:
% lscpu
Architecture: i686
CPU(s): 1
Thread(s) per core: 1
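The second field of /proc/uptime is idle time summed over all CPUs, so it can legitimately exceed the first field by up to the CPU count; on a single-CPU box it should not. A quick way to check the ratio (standard tools, not from the bug report):
awk '{ printf "uptime=%s  idle=%s  idle/uptime=%.2f\n", $1, $2, $2/$1 }' /proc/uptime
nproc    # number of online CPUs; idle/uptime should not exceed this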
2018 Sep 14
0
Re: NUMA issues on virtualized hosts
Hello,
OK, I found that the CPU pinning was wrong, so I corrected it to be 1:1. The issue
with iozone remains the same.
The spec is running; however, it runs slower than in the 1-NUMA case.
The corrected XML looks like follows:
<cpu mode='host-passthrough'><topology sockets='8' cores='4' threads='1'/><numa><cell cpus='0-3'
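For reference, a 1:1 pinning like the one described can also be applied from the shell with virsh. This is only a sketch, with a hypothetical domain name guest1 and the 32 vCPUs implied by the 8x4x1 topology above:
for v in $(seq 0 31); do
    virsh vcpupin guest1 $v $v --live --config   # pin vCPU N to host CPU N, 1:1
done
virsh vcpupin guest1                             # show the resulting pinning map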
2017 Mar 21
0
Re: CPU Pinning Help
2019 May 09
3
failed to build llvm since 25de7691a0e27c29c8d783a22373cc265571f5e9 on AMD platform
The LKP framework guarantees that the software environment is identical on the AMD and Intel platforms.
The Intel platform always works well; after reverting this patch, AMD works well too.
We tried the commits below on AMD:
1) 25de7691a0e27c29c8d783a22373cc265571f5e9: bad
2) a82235843b102202766115e10003c9465a8b83ae: good
The error logs (build/CMakeFiles/CMakeError.log) show no difference between 1) and 2) on the AMD platform.
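With a known good and a known bad commit, the range can be confirmed mechanically; a sketch using the two hashes from this message:
cd /opt/rootfs/llvm_project/src
git bisect start
git bisect bad  25de7691a0e27c29c8d783a22373cc265571f5e9
git bisect good a82235843b102202766115e10003c9465a8b83ae
# rebuild at each step, then mark it with: git bisect good / git bisect bad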
2020 Feb 28
1
kvm presenting wrong CPU Topology for cache
Folks,
I am having a major performance issue with my Erlang application running
on an OpenStack KVM hypervisor, and after many tests I found something
wrong with my KVM guest CPU topology.
This is KVM host - http://paste.openstack.org/show/790120/
This is KVM guest - http://paste.openstack.org/show/790121/
If you carefully observe the output of both host and guest, you can see
that the guest machine's threads have
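One way to compare what the guest sees against the host, using standard lscpu/sysfs interfaces rather than the pastebin links (index3 only exists where an L3 cache is exposed):
lscpu -e                                                        # per-CPU core/socket/cache layout
cat /sys/devices/system/cpu/cpu0/topology/thread_siblings_list  # which CPUs are HT siblings of cpu0
cat /sys/devices/system/cpu/cpu0/cache/index3/shared_cpu_list   # which CPUs share cpu0's L3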
2018 Sep 14
1
Re: NUMA issues on virtualized hosts
Hello again,
when the iozone writes are slow, this is how slabtop looks
(columns: OBJS  ACTIVE  USE  OBJ SIZE  SLABS  OBJ/SLAB  CACHE SIZE  NAME):
62476752 62476728 0% 0.10K 1601968 39 6407872K buffer_head
1000678 999168 0% 0.56K 142954 7 571816K radix_tree_node
132184 125911 0% 0.03K 1066 124 4264K kmalloc-32
118496 118224 0% 0.12K 3703 32 14812K kmalloc-node
73206 56467 0% 0.19K 3486 21
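Given that buffer_head dominates the slab here, it may help to watch how much dirty/writeback data accumulates while iozone runs; a hedged sketch using standard /proc interfaces:
grep -E 'Dirty|Writeback' /proc/meminfo                           # pages waiting to be written back
cat /proc/sys/vm/dirty_ratio /proc/sys/vm/dirty_background_ratio  # current writeback thresholds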
2018 Sep 14
3
NUMA issues on virtualized hosts
Hello,
I have a cluster with AMD EPYC 7351 CPUs, two CPUs per node, and I am running
an 8-NUMA configuration for performance:
This is from the hypervisor:
[root@hde10 ~]# lscpu
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Byte Order: Little Endian
CPU(s): 64
On-line CPU(s) list: 0-63
Thread(s) per core: 2
Core(s) per socket: 16
Socket(s): 2
NUMA
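Alongside lscpu, the NUMA layout itself is usually easier to read from numactl (assuming the numactl package is installed on the hypervisor):
numactl --hardware      # nodes, their CPU ranges and memory sizes/free
numastat                # per-node allocation hit/miss counters
lscpu | grep -i numa    # quick summary of node count and CPU ranges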
2020 Apr 04
0
how to pick cipher for AES-NI enabled AMD GX-412TC SOC tincd at 100% CPU
Hello everybody,
Thank you Fufu Fang for your quick reply:
With tinc version 1.0.35 and the options below, at 100% CPU load I get
about 10 MB/s...
PMTU = 1400
PMTUDiscovery = yes
#Cipher = none
Cipher = chacha20-poly1305
Digest = blake2b512
I tried Cipher = none as well and also got 10 MB/s with 100% CPU on one
thread; the other three available threads are idle.
With tinc_1.1~pre17-1.1_amd64.deb
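Two quick sanity checks that tend to help here (standard tools; tincd is just the daemon name, nothing else is assumed): whether the AES-NI flag is actually exposed, and which tincd thread is the one that is pegged:
grep -m1 -o -w aes /proc/cpuinfo    # prints "aes" when the AES-NI instruction flag is exposed
top -H -p "$(pidof tincd)"          # per-thread CPU usage of the tinc daemon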
2017 Jul 19
3
creating new vm with virt-manager, existing disk failure
Hello,
I rsynced a KVM VM from one host to another.
I start virt-manager and tell it to use the existing disk.
I set the CPU to Haswell, which matches the host.
The option to customize the configuration before starting is set, and I click "Begin Installation".
I get this output from virt-manager:
Unable to complete install: 'internal error: process exited while
connecting to monitor: 2017-07-19T09:27:10.861928Z
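virt-manager truncates this message; the full reason is usually in libvirt's per-domain QEMU log (standard location; <vm-name> is a placeholder for whatever the guest is called):
sudo tail -n 50 /var/log/libvirt/qemu/<vm-name>.log   # full "process exited while connecting to monitor" reason
journalctl -u libvirtd --since "10 min ago"           # libvirtd-side errors around the failed start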
2020 Apr 04
3
how to pick cipher for AES-NI enabled AMD GX-412TC SOC tincd at 100% CPU
Hello everybody,
First, a big thanks for tinc-vpn; I am still using it next to WireGuard
and OpenVPN.
I have a setup where the tinc Debian appliance is at 100% CPU load
doing about 7.5 MB/s.
Compression = 9
PMTU = 1400
PMTUDiscovery = yes
Cipher = aes-128-cbc
How can I pick the cipher that is fastest for my CPU and doesn't create a
CPU bottleneck at 100%?
Kind regards,
Jelle de Jong
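A common way to answer this is to benchmark the candidate ciphers through OpenSSL's EVP layer, one per run (a sketch, assuming a reasonably recent OpenSSL; tinc's own throughput will be lower, but the relative ordering is usually indicative):
openssl speed -evp aes-128-cbc
openssl speed -evp aes-128-gcm
openssl speed -evp chacha20-poly1305
Note that Compression = 9 (zlib's highest level) is CPU-heavy on its own, so it is also worth re-testing with compression lowered or disabled.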
2009 Dec 09
2
PCI: Not using MMCONFIG, leave system completely hung while booting Xen 3.4.1
Hi Everyone,
I am trying to set up Xen 3.4.1 with FC12 (xen dom0) on top of a freshly
installed FC12. I am using an Intel DQ965GF motherboard with Intel VT-x enabled.
I have installed everything properly, but the Xen boot gets stuck at this point:
xenbus_probe_init ok
ACPI: bus type pci registered
PCI: MCFG configuration 0: base e0000000 segment 0 buses 0 - 255
PCI: Not using MMCONFIG.
I did a bit of googling
2019 Sep 15
3
virsh -c lxc:/// setvcpus and <vcpu> configuration fails
Hi folks!
I created a server with this XML file:
<domain type='lxc'>
<name>lxctest1</name>
<uuid>227bd347-dd1d-4bfd-81e1-01052e91ffe2</uuid>
<metadata>
<libosinfo:libosinfo
xmlns:libosinfo="http://libosinfo.org/xmlns/libvirt/domain/1.0">
<libosinfo:os id="http://centos.org/centos/6.9"/>
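For what it's worth, the sequence one would normally try from the shell looks like this; whether the LXC driver accepts it at all is exactly what this thread is asking about (domain name taken from the XML above, flags are standard virsh ones):
virsh -c lxc:/// setvcpus lxctest1 2 --config --maximum   # raise the <vcpu> maximum in the stored XML
virsh -c lxc:/// setvcpus lxctest1 2 --config             # set the current count in the stored XML
virsh -c lxc:/// vcpucount lxctest1                       # verify maximum/current for config and live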
2016 Dec 06
1
Re: How to best I/O performance for Window2008 and MSSQL guest VM
On 12/06/2016 06:06 AM, Blair Bethwaite wrote:
> Hi Roberto,
Hi Blair
> What is the cpu and memory configuration of your guest?
I've set it to copy the host configuration (16 cores), and memory is set to 24GB; the host has 64GB.
The guest is the 64-bit version of Windows 2012.
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Byte Order: Little Endian
CPU(s): 16
2017 May 08
1
Latest AMD CPUs and AM4 Motherboards
More info here
https://www.servethehome.com/amd-ryzen-with-ubuntu-here-is-what-you-have-to-do-to-fix-constant-crashes/
On 8 May 2017 11:53:40 a.m. "peter.winterflood"
<peter.winterflood at ossi.co.uk> wrote:
>
>
> Was looking into getting one. Ideally it needs a 4.10 kernel. The fix may be
> backported; did not check.
2012 Jun 21
1
echo 0 > /proc/sys/kernel/hung_task_timeout_secs and others error, Part II
The first problem is as follows:
Files are copied to the device, but they can't be listed on node2 with ls -al on the mounted directory.
However, using debug.ocfs2 on node2 it is possible to list the copied files, and after remounting the device on node2 the files can be listed.
The second problem is that:
Node1 is in the ocfs2 cluster, but using debug.ocfs2 and the mounted.ocfs2 -f command, node1 cannot be listed