Displaying 20 results from an estimated 5000 matches similar to: "Missing CPU Cores from 2nd Socket"
2006 Apr 17
6
DO NOT REPLY [Bug 3692] New: regression: symlinks are created as hardlinks with --link-dest
https://bugzilla.samba.org/show_bug.cgi?id=3692
Summary: regression: symlinks are created as hardlinks with --link-dest
Product: rsync
Version: 2.6.7
Platform: x86
URL: http://rsync.samba.org
OS/Version: FreeBSD
Status: NEW
Severity: major
Priority: P3
Component: core
2011 May 26
0
CentOS-announce Digest, Vol 75, Issue 9
Send CentOS-announce mailing list submissions to
centos-announce at centos.org
To subscribe or unsubscribe via the World Wide Web, visit
http://lists.centos.org/mailman/listinfo/centos-announce
or, via email, send a message with subject or body 'help' to
centos-announce-request at centos.org
You can reach the person managing the list at
centos-announce-owner at centos.org
When
2015 Aug 06
0
CEBA-2015:1535 CentOS 7 numactl BugFix Update
CentOS Errata and Bugfix Advisory 2015:1535
Upstream details at : https://rhn.redhat.com/errata/RHBA-2015-1535.html
The following updated files have been uploaded and are currently
syncing to the mirrors: ( sha256sum Filename )
x86_64:
244a9d0b6a14c344d18aaf31e296b1d4b502cdfffe8439b8608138a76b092480 numactl-2.0.9-5.el7_1.x86_64.rpm
2016 Feb 17
0
CEBA-2016:0186 CentOS 7 numactl BugFix Update
CentOS Errata and Bugfix Advisory 2016:0186
Upstream details at : https://rhn.redhat.com/errata/RHBA-2016-0186.html
The following updated files have been uploaded and are currently
syncing to the mirrors: ( sha256sum Filename )
x86_64:
c930170b1194e984b60207d30b229e21f5468b7c99ad19715744e0d3149e609e numactl-2.0.9-6.el7_2.x86_64.rpm
2019 Dec 03
0
CEBA-2019:3977 CentOS 7 numactl BugFix Update
CentOS Errata and Bugfix Advisory 2019:3977
Upstream details at : https://access.redhat.com/errata/RHBA-2019:3977
The following updated files have been uploaded and are currently
syncing to the mirrors: ( sha256sum Filename )
x86_64:
eaab6b4fef776974a6f30f5655f298bc6128c318ab69cdbfcc5797311fba221d numactl-2.0.12-3.el7_7.1.x86_64.rpm
2011 May 25
0
CEBA-2011:0825 CentOS 5 x86_64 numactl Update
CentOS Errata and Bugfix Advisory 2011:0825
Upstream details at : https://rhn.redhat.com/errata/RHBA-2011-0825.html
The following updated files have been uploaded and are currently
syncing to the mirrors: ( md5sum Filename )
x86_64:
e639e64f2584c7d9d450a4cc19cb56dd numactl-0.9.8-12.el5_6.i386.rpm
19049844fdffec608ee3a5dc69c3a47f numactl-0.9.8-12.el5_6.x86_64.rpm
2016 Mar 10
0
different uuids, but still "Attempt to migrate guest to same host" error
Background:
----------
I'm trying to debug a two-node pacemaker/corosync cluster where I
want to be able to do live migration of KVM/qemu VMs. Storage is
backed via dual-primary DRBD (yes, fencing is in place).
When moving the VM between nodes via 'pcs resource move RES NODENAME',
the live migration fails although pacemaker will shut down the VM
and restart it on the other node.
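One way to take pacemaker out of the picture when debugging this is to attempt the live migration directly through libvirt and compare the error. A minimal dry-run sketch (the domain name "myvm" and peer hostname "node2" are hypothetical placeholders, not values from the thread):

```shell
# Dry-run sketch: build the direct libvirt live-migration command so the
# libvirt-level error can be observed without pacemaker in the loop.
# "myvm" and "node2" are hypothetical placeholders.
domain="myvm"
dest="node2"
cmd="virsh migrate --live $domain qemu+ssh://$dest/system"
echo "$cmd"    # print instead of execute (dry run)
```

If the direct `virsh migrate` succeeds while `pcs resource move` fails, the problem is in the cluster resource agent's migration parameters rather than in libvirt itself.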
2003 Jul 31
1
smtp over ssh probs
before I start, two notes. I already sent this mail to the other
mailing list, but no answer has come back. Also, I'm not subscribed to
this list, so please cc me the answers. Now, to business.
I have a problem, but I don't know exactly what. Or rather, why. The
scheme is like this: I don't have a direct connection to the inet,
except for ssh to certain range of ip's at a
2011 May 25
0
CEBA-2011:0825 CentOS 5 i386 numactl Update
CentOS Errata and Bugfix Advisory 2011:0825
Upstream details at : https://rhn.redhat.com/errata/RHBA-2011-0825.html
The following updated files have been uploaded and are currently
syncing to the mirrors: ( md5sum Filename )
i386:
6e9805b07c044e8370765a8a8d44055e numactl-0.9.8-12.el5_6.i386.rpm
a58f1a250b5fc931077931a7e45b6331 numactl-devel-0.9.8-12.el5_6.i386.rpm
Source:
2015 Jul 24
0
Unbound nodes using numactl
Dear Centos Users
We were wondering how we can change the default policy of numactl.
In other words, our cluster has 10 servers, each server has two nodes, and
each node has 16 cores.
Recently, we observed a poor performance of our cluster when a couple of
jobs run on the same server. By investigating the problem, we found that
each job moves randomly among the 32 cores of the server. This returns
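The usual fix for jobs wandering across sockets is to bind each job's CPUs and memory to a single NUMA node with numactl. A minimal dry-run sketch, assuming jobs are launched from a wrapper script ("./my_job input.dat" is a hypothetical placeholder command):

```shell
# Sketch: pin a job's CPUs and memory to one NUMA node so the scheduler
# cannot bounce it across the two sockets of the server.
# "./my_job input.dat" is a hypothetical placeholder command.
node=0                              # target NUMA node (0 or 1 on a two-node server)
job="./my_job input.dat"
cmd="numactl --cpunodebind=$node --membind=$node $job"
echo "$cmd"                         # print instead of execute (dry run)
```

Alternating `node` between 0 and 1 per job keeps two concurrent 16-core jobs on separate sockets, with memory allocated locally to each.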
2018 Apr 17
0
Re: can't find how to solve "QEMU guest agent is not connected"
On Tue, Apr 17, 2018 at 07:54:14PM +0900, Matt wrote:
> I am trying to make Qemu agent work with libvirt thanks to
> https://github.com/NixOS/nixops/pull/922 with libvirt 4.1.0. I've been
> trying to make it work for quite some time but I still haven't the
> slightest idea of what is wrong, I keep seeing "Guest agent is not
> responding: QEMU guest agent is not
2009 Jul 02
1
RHEL 5.4 Beta Package Changes
it's strange, since this kernel doesn't have kvm support, and no qemu,
qemu-kvm, or kvm package has been added, even though it was said that
5.4 would support kvm. :-(
Tom "spot" Callaway wrote:
> New Packages in RHEL 5.4 Beta:
> ********************************
> blktrace-1.0.0-6.el5.src.rpm
> celt051-0.5.1.3-0.el5.src.rpm
> etherboot-5.4.4-10.el5.src.rpm
>
2006 Aug 17
0
RDF - Carmen?
I didn't want to hijack Martin's thread, but Carmen said a couple of
interesting things [original thread below].
Carmen - since you've switched off of SQL, what do you use as your
data-store? Have you integrated AR?
I've been seeing the world in 3+NF for more than 15 years now. Would
you mind speaking to this topic a little, and if you've any rails
2018 Apr 17
2
can't find how to solve "QEMU guest agent is not connected"
I am trying to make Qemu agent work with libvirt thanks to
https://github.com/NixOS/nixops/pull/922 with libvirt 4.1.0. I've been
trying to make it work for quite some time but I still haven't the
slightest idea of what is wrong, I keep seeing "Guest agent is not
responding: QEMU guest agent is not connected" as the program I use
(nixops) calls the libvirt python API.
I
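The most common cause of "QEMU guest agent is not connected" is a domain XML that lacks the virtio-serial channel the agent communicates over. A minimal fragment in standard libvirt syntax (placed inside `<devices>`; the guest must also be running the qemu-guest-agent service):

```xml
<!-- Inside <devices> of the domain XML (virsh edit DOMAIN);
     org.qemu.guest_agent.0 is the standard channel name libvirt expects. -->
<channel type='unix'>
  <target type='virtio' name='org.qemu.guest_agent.0'/>
</channel>
```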
2013 Jun 25
1
dmidecode Output
Hey Y'all,
How much can I trust the output of dmidecode?
The manual that came with my MB says that my MB can support up to 2 GB
of DDR2 RAM. dmidecode seems to tell me that I can load up to 8 GB on this MB.
As 4 GB DDR2 sticks cost about $100 each, I figured maybe I should ask:
what are the chances that I can plug in two 4 GB sticks in this machine
and actually have it work?
[root at
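The field in question is the "Maximum Capacity" line that `dmidecode -t 16` reports for the Physical Memory Array. A minimal way to pull it out (the sample output is hard-coded here for illustration, since dmidecode needs root and real hardware; the value shown is not from the poster's board):

```shell
# Illustrative excerpt of `dmidecode -t 16` output (captured sample,
# not live data); the awk line extracts the board's advertised ceiling.
sample='Physical Memory Array
    Location: System Board Or Motherboard
    Maximum Capacity: 8 GB
    Number Of Devices: 2'

max=$(printf '%s\n' "$sample" | awk -F': ' '/Maximum Capacity/ {print $2}')
echo "$max"
```

Whether the board actually accepts high-density sticks up to that ceiling also depends on the chipset and BIOS, so `dmidecode -t 17` (per-slot device info) and the vendor's qualified-memory list are worth checking too.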
2013 Jun 28
0
CEBA-2013:0989 CentOS 6 python-dmidecode Update
CentOS Errata and Bugfix Advisory 2013:0989
Upstream details at : https://rhn.redhat.com/errata/RHBA-2013-0989.html
The following updated files have been uploaded and are currently
syncing to the mirrors: ( sha256sum Filename )
i386:
ec8625fbf908361af91cffc693a6adf0e81a0f77674bc40f1fea8c595bad9d63 python-dmidecode-3.10.13-3.el6_4.i686.rpm
x86_64:
2014 Mar 25
0
CEEA-2014:0326 CentOS 6 dmidecode Update
CentOS Errata and Enhancement Advisory 2014:0326
Upstream details at : https://rhn.redhat.com/errata/RHEA-2014-0326.html
The following updated files have been uploaded and are currently
syncing to the mirrors: ( sha256sum Filename )
i386:
ea3a44b1efa0d5e6626e7096081dccef8ffab3fc5fe0404f0abb2b9f8ccc69b6 dmidecode-2.12-5.el6_5.i686.rpm
x86_64:
2014 Sep 30
0
CEEA-2014:1208 CentOS 5 dmidecode Enhancement Update
CentOS Errata and Enhancement Advisory 2014:1208
Upstream details at : https://rhn.redhat.com/errata/RHEA-2014-1208.html
The following updated files have been uploaded and are currently
syncing to the mirrors: ( sha256sum Filename )
i386:
ba71ab589f800f898b16240e357c49a0718da4606c94a1aa3d4af97bb21e407f dmidecode-2.12-1.el5.i386.rpm
x86_64:
2015 Jun 16
0
CEBA-2015:1119 CentOS 6 dmidecode BugFix Update
CentOS Errata and Bugfix Advisory 2015:1119
Upstream details at : https://rhn.redhat.com/errata/RHBA-2015-1119.html
The following updated files have been uploaded and are currently
syncing to the mirrors: ( sha256sum Filename )
i386:
57b7fdb152e4d62f1b59a5b8cc686eb98cbbf6ad60ed6cd381420cecc1f76c6a dmidecode-2.12-5.el6_6.1.i686.rpm
x86_64:
2010 Sep 17
0
CentOS-announce Digest, Vol 67, Issue 5