similar to: migrating from xend to libxl after xen 4.6.1

Displaying 20 results from an estimated 5000 matches similar to: "migrating from xend to libxl after xen 4.6.1"

2016 Apr 18
1
migrating from xend to libxl after xen 4.6.1
> On Sat, Apr 16, 2016 at 5:40 PM, rgritzo <rgritzo at gmail.com> wrote: > > So I guess I was not paying too close attention and upgraded to Xen 4.6.1 before I migrated my domU configurations to libxl :{ > > Just FYI, as a fall-back you can always move yourself to the Xen 4.4 "track" by: > 1.
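For context, a minimal sketch of how a leftover xm-style domU config might be tried under the xl toolstack; the path /etc/xen/guest.cfg is only a placeholder, xl accepts most xm config syntax unchanged, and the -n dry-run flag is worth checking against the local xl man page:

    xl create -n /etc/xen/guest.cfg   # dry run: parse the config without creating the domain
    xl create /etc/xen/guest.cfg      # create the domU under libxl
    xl list                           # confirm it is running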
2014 Oct 31
2
Re: reboot problem with libxl
I was sort of hoping that it was something simple like setting the "do_the_right_thing" flag. libvirtd kicks out: 2014-10-31 11:58:57.111+0000: 8741: error : virRegisterNetworkDriver:549 : driver in virRegisterNetworkDriver must not be NULL 2014-10-31 11:59:29.379+0000: 8840: error : virRegisterNetworkDriver:549 : driver in virRegisterNetworkDriver must not be NULL 2014-10-31
2014 Oct 30
2
reboot problem with libxl
If I reboot a single VM through libvirt/libxl, the system reboots normally. If several VMs reboot at the same time, they go into a paused state and do not reboot. I then have to kill them via xl and restart them. -- Alvin Starr || voice: (905)513-7688 Netvel Inc. || Cell: (416)806-0133 alvin@netvel.net ||
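A minimal sketch of the manual recovery described above, assuming a stuck domU with the placeholder name guest1:

    xl list                          # look for domains stuck in the paused (p) state
    xl destroy guest1                # forcibly tear down the stuck guest
    xl create /etc/xen/guest1.cfg    # start it again from its config file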
2015 Sep 08
2
Beta CentOS 7 Xen packages available
Firstly, CentOS is primarily a RHEL clone. This means that the primary design decisions are to be as RHEL-like as possible. After that there are additions and upgrades. Secondly, Fedora does not actively support Xen. As a long-time Xen and RH/Fedora user I have spent lots of time building/rebuilding broken/missing packages in Fedora. Quite frankly, Xen under Fedora is somewhat broken. Libvirt
2015 Sep 08
1
Beta CentOS 7 Xen packages available
On 09/08/2015 10:58 AM, Konrad Rzeszutek Wilk wrote: > On Tue, Sep 08, 2015 at 10:50:57AM -0400, Alvin Starr wrote: >> Firstly, CentOS is primarily a RHEL clone. >> This means that the primary design decisions are to be as RHEL-like as >> possible. >> After that there are additions and upgrades. >> >> Secondly, Fedora does not actively support Xen. >
2017 Jul 26
3
Re: Xen died - Fedora upgrade from 21 to 26
Jim, Thanks for that. I had manually installed libvirt-daemon-driver-xen, but also needed to install libvirt-daemon-driver-libxl. I can now create VMs and convert config formats. However, the daemon still fails to start at boot. It starts fine when I manually start it with "systemctl start libvirtd", but setting it to autostart with "systemctl enable libvirtd" seems
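A sketch of the steps discussed in this thread, assuming a Fedora host using dnf; the package names are the ones mentioned above:

    dnf install libvirt-daemon-driver-xen libvirt-daemon-driver-libxl
    systemctl enable --now libvirtd   # enable autostart at boot and start it immediately
    systemctl status libvirtd         # check whether it came up
    journalctl -u libvirtd -b         # inspect why it failed during boot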
2017 Jul 26
1
Re: Xen died - Fedora upgrade from 21 to 26
2018 Feb 08
2
georeplication over ssh.
That makes for an interesting problem. I cannot open port 24007 to allow RPC access. On 02/07/2018 11:29 PM, Kotresh Hiremath Ravishankar wrote: > Hi Alvin, > > Yes, geo-replication sync happens via SSH. The server port 24007 is > that of glusterd. > glusterd will be listening on this port and all volume management > communication > happens via RPC. > > Thanks, >
2015 Jan 31
3
libvirt errors after applying RPMS from 2015:X002
Thanks for the info. I am trying to connect to the Xen hypervisor via a localhost connection defined in the virt-manager configuration. Here is the detail provided in the error dialog: Unable to open a connection to the Xen hypervisor/daemon. Verify that: - A Xen host kernel was booted - The Xen service has been started internal error: DBus support not compiled into this
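A few commands that might help narrow down this kind of failure; treating the legacy xen:/// connection URI as an assumption, this is only a sketch:

    rpm -qf "$(command -v libvirtd)"   # confirm which package the running libvirtd binary comes from
    libvirtd --version
    virsh -c xen:/// list --all        # test the Xen connection from the CLI, bypassing virt-manager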
2018 Feb 08
0
georeplication over ssh.
CCing the glusterd team for information. On Thu, Feb 8, 2018 at 10:02 AM, Alvin Starr <alvin at netvel.net> wrote: > That makes for an interesting problem. > > I cannot open port 24007 to allow RPC access. > > On 02/07/2018 11:29 PM, Kotresh Hiremath Ravishankar wrote: > > Hi Alvin, > > Yes, geo-replication sync happens via SSH. The server port 24007 is that of >
2018 Feb 07
2
georeplication over ssh.
I am running gluster 3.8.9 and trying to set up a geo-replicated volume over SSH. It looks like the volume create command is trying to directly access the server over port 24007. The docs imply that all communications are over SSH. What am I missing? -- Alvin Starr || land: (905)513-7688 Netvel Inc. || Cell: (416)806-0133 alvin at netvel.net
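For reference, a simplified sketch of a geo-replication session setup with hypothetical volume and host names (mastervol, slavehost, slavevol), skipping the SSH key and pem prerequisites; as the other messages in this thread note, the management traffic still goes to glusterd on port 24007 even though the data sync itself runs over SSH:

    gluster volume geo-replication mastervol slavehost::slavevol create push-pem
    gluster volume geo-replication mastervol slavehost::slavevol start
    gluster volume geo-replication mastervol slavehost::slavevol status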
2016 Feb 08
3
KVM
> If you run top what are you seeing on the %Cpu(s) line? 20% On Mon, Feb 8, 2016 at 9:30 PM, Alvin Starr <alvin at netvel.net> wrote: > Slow disks will show up as higher I/O wait times. > If you're seeing 99% CPU usage then you're likely looking at some other problem. > > If you run top what are you seeing on the %Cpu(s) line? > > > On 02/08/2016 02:20 PM, Gokan Atmaca
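A quick, non-interactive way to capture the numbers being asked about here; the "wa" field is the I/O-wait share of CPU time:

    top -bn1 | grep '%Cpu'   # batch mode: print one snapshot of the CPU summary line
    vmstat 1 5               # the "wa" column shows time spent waiting on disk I/O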
2018 Feb 08
0
georeplication over ssh.
Hi Alvin, Yes, geo-replication sync happens via SSH. The server port 24007 is that of glusterd. glusterd will be listening on this port and all volume management communication happens via RPC. Thanks, Kotresh HR On Wed, Feb 7, 2018 at 8:29 PM, Alvin Starr <alvin at netvel.net> wrote: > I am running gluster 3.8.9 and trying to set up a geo-replicated volume > over SSH. > > It looks
2015 Jan 29
1
libvirt errors after applying RPMS from 2015:X002
Folks: after applying the updated RPMs from advisory 2015:X002 I am having problems with libvirtd and with virt-manager. If I run libvirtd in the foreground and look at the error messages, the error I see is 2015-01-29 04:45:27.342+0000: 6477: error : virDBusGetSystemBus:1742 : internal error: DBus support not compiled into this binary, and virt-manager is unable to connect to the hypervisor.
2017 Mar 20
2
grub-bootxen.sh
This is not a big issue, just a minor annoyance. I use Foreman to provision my systems, and to keep control I remove all the default *.repo files and keep away from installing more *.repo files so I can control the content via the Foreman (Katello) provided redhat.repo. I would argue that the *-release-*.rpm should not contain any setup code, just the stuff in /etc/yum.repos.d. -- Alvin
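A sketch of how to inspect what a *-release package actually ships and runs; centos-release-xen is assumed here as the package carrying grub-bootxen.sh, so substitute the release RPM in question:

    rpm -ql centos-release-xen            # list every file the package installs
    rpm -q --scripts centos-release-xen   # show the %post/%preun setup code being objected to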
2016 Sep 07
2
Fwd: Centos 6 AMI does not support c4-8xlarge
I have done that, but the point of the request is that we would like to have an official upstream AMI that we can use as the basis for our work. I'm guessing that the reason for the blacked-out instance type is that early 6.x kernels didn't have the patches necessary to support the 36 vCPUs present in the c4.8xlarge instance. John On Wed, Sep 7, 2016 at 10:46 AM, Alvin Starr <alvin
2016 Sep 07
2
Fwd: Centos 6 AMI does not support c4-8xlarge
One thing that is suboptimal with Marketplace images is that the author can limit which instance types are allowed with the AMI, and there is no way to override that. We are using CentOS 6.8 for our deployments and need to move to the c4.8xlarge type, but that is not a permitted option for the "CentOS 6 (x86_64) - with Updates HVM" AMI. Is there any way we could get that image
2016 Feb 08
3
KVM
> I'm guessing you're using standard 7,200rpm platter drives? You'll need > to share more information about your environment in order for us to > provide useful feedback. Usually though, the answer is 'caching' and/or > 'faster disks'. Yes, 7.2k rpm disks, a 2 TB software mirror. In fact, I preferred slightly more capacity. Unfortunately very
2015 Sep 08
5
Beta CentOS 7 Xen packages available
On 09/08/2015 06:41 AM, George Dunlap wrote: > On Mon, Sep 7, 2015 at 11:40 AM, Johnny Hughes <johnny at centos.org> wrote: >> What we really need is to make the REAL Xen RPMs .. the ones produced in >> this SIG .. work with systemd. These RPMs are produced by Citrix, so we >> need to get them right. > > Just to be clear -- RPMs are produced by the CentOS Virt
2019 Jul 31
2
Re: OVS / KVM / libvirt / MTU
In general, if no MTU is set on interface creation, the default value is 1500. On OVS the bridge MTU is automatically set to the smallest port MTU, so you just have to set the MTU of each port of the bridge. Take a look at: https://bugzilla.redhat.com/show_bug.cgi?id=1160897 It is a bit of a pain to read, but it seems to confirm the statement about the OVS MTU value being set by the MTU of the
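A minimal sketch of the per-port approach described above, using placeholder names (vnet0 for a port, ovsbr0 for the bridge):

    ip link set dev vnet0 mtu 9000      # repeat for every port attached to the bridge
    ovs-vsctl get Interface vnet0 mtu   # confirm what OVS reports for that port
    ip link show ovsbr0                 # the bridge MTU should now follow the smallest port MTU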