Displaying 20 results from an estimated 7000 matches similar to: "Is there a way to short circuit the georeplication process?"
2018 Feb 08
2
georeplication over ssh.
That makes for an interesting problem.
I cannot open port 24007 to allow RPC access.
On 02/07/2018 11:29 PM, Kotresh Hiremath Ravishankar wrote:
> Hi Alvin,
>
> Yes, geo-replication sync happens via SSH. The server port 24007 is
> that of glusterd.
> glusterd listens on this port, and all volume management
> communication happens via RPC.
>
> Thanks,
>
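A quick way to check whether glusterd's management port is actually reachable from the master side (the hostname below is a placeholder, not taken from the thread):

    # test TCP reachability of glusterd's RPC port
    nc -z -w 3 slave.example.com 24007 && echo "24007 open" || echo "24007 blocked"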
2018 Feb 08
0
georeplication over ssh.
CCing the glusterd team for information.
On Thu, Feb 8, 2018 at 10:02 AM, Alvin Starr <alvin at netvel.net> wrote:
> That makes for an interesting problem.
>
> I cannot open port 24007 to allow RPC access.
>
> On 02/07/2018 11:29 PM, Kotresh Hiremath Ravishankar wrote:
>
> Hi Alvin,
>
> Yes, geo-replication sync happens via SSH. The server port 24007 is that of
>
2018 Feb 08
0
georeplication over ssh.
Hi Alvin,
Yes, geo-replication sync happens via SSH. The server port 24007 is that of
glusterd.
glusterd listens on this port, and all volume management
communication happens via RPC.
Thanks,
Kotresh HR
On Wed, Feb 7, 2018 at 8:29 PM, Alvin Starr <alvin at netvel.net> wrote:
> I am running gluster 3.8.9 and trying to set up a geo-replicated volume
> over ssh.
>
> It looks
2018 Feb 07
2
georeplication over ssh.
I am running gluster 3.8.9 and trying to set up a geo-replicated volume
over ssh.
It looks like the volume create command is trying to directly access the
server over port 24007.
The docs imply that all communications are over ssh.
What am I missing?
--
Alvin Starr || land: (905)513-7688
Netvel Inc. || Cell: (416)806-0133
alvin at netvel.net
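For context, geo-replication setup on gluster 3.x typically looks roughly like the sketch below (volume and host names are placeholders); per the thread, the create step appears to still need RPC access to glusterd on port 24007, even though the data sync itself runs over ssh:

    # generate and distribute the pem keys used for the ssh-based sync
    gluster system:: execute gsec_create
    # register the slave volume on the master side
    gluster volume geo-replication mastervol slave.example.com::slavevol create push-pem
    # start syncing over ssh
    gluster volume geo-replication mastervol slave.example.com::slavevol start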
2024 Dec 07
1
GlusterFS over LVM (thick not thin!)
I am afraid of LVM in the face of disk failure.
If I have 2 disks and one crashes, I think the whole LVM setup goes bad!
So, I have decided to use individual disks.
Thanks
On Fri, Nov 22, 2024 at 13:39, Gilberto Ferreira <
gilberto.nunes32 at gmail.com> wrote:
> Hi Allan!
>
> Thanks for your feedback.
>
> Cheers
>
> On Fri, Nov 22,
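For what it's worth, whether a two-disk LVM setup survives a disk failure depends on the LV type; a hedged sketch, with placeholder names and sizes:

    # linear (thick) LV across two PVs: losing either disk loses the LV
    lvcreate -L 500G -n brick vg_gluster
    # raid1 LV: survives a single disk failure, at half the usable capacity
    lvcreate --type raid1 -m 1 -L 500G -n brick vg_gluster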
2016 Feb 08
1
KVM
>>> If you run top, what are you seeing on the %Cpu(s) line?
http://i.hizliresim.com/NrmV9Y.png
On Mon, Feb 8, 2016 at 10:53 PM, Alvin Starr <alvin at netvel.net> wrote:
> You need to provide more information.
> 20% is which number?
> There are something like 6 numbers on that line.
>
>
> On 02/08/2016 02:56 PM, Gokan Atmaca wrote:
>>>
>>> If you
2016 Feb 08
0
KVM
You need to provide more information.
20% is which number?
There are something like 6 numbers on that line.
On 02/08/2016 02:56 PM, Gokan Atmaca wrote:
>> If you run top, what are you seeing on the %Cpu(s) line?
> %20
>
>
> On Mon, Feb 8, 2016 at 9:30 PM, Alvin Starr <alvin at netvel.net> wrote:
>> Slow disks will show up as higher I/O wait times.
>> If you're
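For reference, the %Cpu(s) line has eight fields (the numbers below are illustrative); "wa" is the I/O wait figure discussed in this thread:

    %Cpu(s):  2.3 us,  0.9 sy,  0.0 ni, 88.1 id,  8.4 wa,  0.0 hi,  0.3 si,  0.0 st
    # us=user, sy=system, ni=nice, id=idle, wa=iowait,
    # hi=hardware irq, si=software irq, st=steal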
2020 Jun 12
0
Re: Is it possible to configure libvirt's MAC generation?
One solution would be to dump the XML for the domain, run something like
sed on it to change whatever you want, and then update the domain.
Of course, that will only work if the domain is stopped.
On 6/12/20 2:00 PM, Ian Pilcher wrote:
> On 6/12/20 12:28 PM, Peter Crowther wrote:
>> Specify the MAC address as part of the domain XML for the bootstrap
>> node. See
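A minimal sketch of that dump/edit/redefine workflow (domain name and MAC addresses are placeholders; the domain must be shut off first):

    # dump the domain XML, rewrite the MAC, and redefine the domain
    virsh dumpxml mydomain > /tmp/mydomain.xml
    sed -i 's/52:54:00:aa:bb:cc/52:54:00:de:ad:01/' /tmp/mydomain.xml
    virsh define /tmp/mydomain.xml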
2024 Nov 22
1
GlusterFS over LVM (thick not thin!)
Hi Allan!
Thanks for your feedback.
Cheers
On Fri, Nov 22, 2024 at 00:49, Alvin Starr <alvin at netvel.net>
wrote:
> On 2024-11-21 16:19, Gilberto Ferreira wrote:
>
> Hi there.
>
> Any problem with using 2 NVMe drives joined together via LVM, and then, on
> top of that, creating a Gluster volume?
>
> Cheers
>
> As a general rule this is ok.
> We have
2015 Sep 08
1
Beta CentOS 7 Xen packages available
On 09/08/2015 10:58 AM, Konrad Rzeszutek Wilk wrote:
> On Tue, Sep 08, 2015 at 10:50:57AM -0400, Alvin Starr wrote:
>> Firstly, CentOS is primarily a RHEL clone.
>> This means that the primary design decisions are to be as RHEL like as
>> possible.
>> After that there are additions and upgrades.
>>
>> Secondly, Fedora does not actively support Xen.
>
2016 Sep 07
0
Fwd: Centos 6 AMI does not support c4-8xlarge
Hi John
On 07/09/16 15:50, John Peacock wrote:
> I have done that, but the point of the request is that we would like to
> have an official upstream AMI that we can use as the basis for our
> work. I'm guessing that the reason for the blacked-out instance type is
> that early 6.x kernels didn't have the patches necessary to support the 36
> vCPUs present in the
2019 Jul 31
2
Re: OVS / KVM / libvirt / MTU
In general, if no MTU is set at interface creation, the default value is 1500.
On OVS the bridge MTU is automatically set to the smallest port MTU. So
you just have to set the MTU of each port of the bridge.
Take a look at: https://bugzilla.redhat.com/show_bug.cgi?id=1160897
It is a bit of a pain to read but seems to confirm the statement about
the OVS MTU value being set by the MTU of the
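A hedged example of raising the MTU port by port so the OVS bridge follows suit (interface names are placeholders; mtu_request assumes a reasonably recent OVS):

    # set the MTU on each port attached to the bridge
    ip link set dev vnet0 mtu 9000
    ovs-vsctl set interface vnet0 mtu_request=9000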
2016 Feb 08
3
KVM
> If you run top what are you seeing on the %Cpu(s) line?
%20
On Mon, Feb 8, 2016 at 9:30 PM, Alvin Starr <alvin at netvel.net> wrote:
> Slow disks will show up as higher I/O wait times.
> If you're seeing 99% CPU usage then you're likely looking at some other
> problem.
>
> If you run top, what are you seeing on the %Cpu(s) line?
>
>
> On 02/08/2016 02:20 PM, Gokan Atmaca
2014 Oct 31
2
Re: reboot problem with libxl
I was sort of hoping that it was something simple like setting the
"do_the_right_thing" flag.
libvirtd kicks out:
2014-10-31 11:58:57.111+0000: 8741: error : virRegisterNetworkDriver:549
: driver in virRegisterNetworkDriver must not be NULL
2014-10-31 11:59:29.379+0000: 8840: error : virRegisterNetworkDriver:549
: driver in virRegisterNetworkDriver must not be NULL
2014-10-31
2017 Mar 20
2
grub-bootxen.sh
This is not a big issue, just a minor annoyance.
I use Foreman to provision my systems, and to keep control I remove all
the default *.repo files and keep away from installing more *.repo files
so I can control the content via the Foreman (Katello) provided redhat.repo.
I would argue that the *-release-*.rpm should not contain any setup
code but just the stuff in /etc/yum.repos.d.
--
Alvin
2014 Oct 31
0
Re: reboot problem with libxl
On Fri, Oct 31, 2014 at 08:34:48AM -0400, Alvin Starr wrote:
>I was sort of hoping that it was something simple like setting the
>"do_the_right_thing" flag.
>
>
>The libvirtd kicks out
>2014-10-31 11:58:57.111+0000: 8741: error : virRegisterNetworkDriver:549
>: driver in virRegisterNetworkDriver must not be NULL
>2014-10-31 11:59:29.379+0000: 8840: error :
2014 Oct 30
2
reboot problem with libxl
If I reboot a single VM through libvirt/libxl, the system reboots normally.
If I have several VMs reboot at the same time, the systems go into
a paused state and do not reboot.
I then have to kill them via xl and restart them.
--
Alvin Starr || voice: (905)513-7688
Netvel Inc. || Cell: (416)806-0133
alvin@netvel.net ||
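The manual recovery described above amounts to something like the following (domain name and config path are placeholders):

    # find the stuck domains, tear them down, and start them again
    xl list
    xl destroy guest1
    xl create /etc/xen/guest1.cfg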
2024 Nov 22
1
GlusterFS over LVM (thick not thin!)
On 2024-11-21 16:19, Gilberto Ferreira wrote:
> Hi there.
>
> Any problem with using 2 NVMe drives joined together via LVM, and then, on
> top of that, creating a Gluster volume?
>
> Cheers
>
As a general rule this is ok.
We have several gluster volumes built on top of an underlying LVM
structure.
--
Alvin Starr || land: (647)478-6285
Netvel Inc.
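For the curious, the LVM-under-gluster layout mentioned here is commonly built along these lines (device, VG, and volume names are placeholders):

    # thick LVM across the two NVMe drives, then a brick filesystem on top
    pvcreate /dev/nvme0n1 /dev/nvme1n1
    vgcreate vg_gluster /dev/nvme0n1 /dev/nvme1n1
    lvcreate -L 1T -n brick1 vg_gluster
    mkfs.xfs /dev/vg_gluster/brick1
    mkdir -p /data/brick1 && mount /dev/vg_gluster/brick1 /data/brick1
    gluster volume create gv0 replica 2 node1:/data/brick1 node2:/data/brick1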
2016 Jan 21
0
CentOS 6 Virt SIG Xen 4.6 packages available in centos-virt-xen-testing
My comment was targeted more at naming than support.
I appreciate that there are vanishingly few resources to throw at support.
I am glad to see any Xen support for C7 and am thankful to all those who
are putting in time to make things happen.
I try to help out when I can, but all too infrequently.
On 01/21/2016 10:29 AM, Johnny Hughes wrote:
> This is a community SIG .. and
2017 Mar 22
2
grub-bootxen.sh
I actually move the default *.repo files and replace them with "".
The thing is that Katello turns all the downloaded yum content into a
single redhat.repo file, so I don't have to install any *-release-*
rpms anymore.
I would argue that I should not need to install any *-release-* rpms at
all to get all the required software.
On 03/22/2017 09:34 AM, -=X.L.O.R.D=- wrote: