similar to: georeplication over ssh.

Displaying 20 results from an estimated 2000 matches similar to: "georeplication over ssh."

2018 Feb 08 | 0 | georeplication over ssh.
Hi Alvin, Yes, geo-replication sync happens via SSH. The server port 24007 belongs to glusterd; glusterd listens on this port, and all volume management communication happens via RPC. Thanks, Kotresh HR On Wed, Feb 7, 2018 at 8:29 PM, Alvin Starr <alvin at netvel.net> wrote: > I am running gluster 3.8.9 and trying to set up a geo-replicated volume > over ssh. > > It looks
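The thread never shows the actual commands, so here is a minimal sketch of creating an SSH-based geo-replication session with the stock gluster CLI; the names mastervol, slavevol, and slave.example.com are hypothetical:

    # Generate the pem keys used for the SSH-based sync (run on the master).
    gluster system:: execute gsec_create

    # Create the session; push-pem distributes the keys to the slave over SSH.
    gluster volume geo-replication mastervol slave.example.com::slavevol create push-pem

    # Start the session and confirm its status.
    gluster volume geo-replication mastervol slave.example.com::slavevol start
    gluster volume geo-replication mastervol slave.example.com::slavevol status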
2018 Feb 08 | 2 | georeplication over ssh.
That makes for an interesting problem: I cannot open port 24007 to allow RPC access. On 02/07/2018 11:29 PM, Kotresh Hiremath Ravishankar wrote: > Hi Alvin, > > Yes, geo-replication sync happens via SSH. The server port 24007 > belongs to glusterd. > glusterd listens on this port and all volume management > communication > happens via RPC. > > Thanks, >
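One generic workaround, not something proposed in the thread, would be to tunnel the glusterd port through the SSH connection that is already permitted; whether this satisfies everything geo-replication needs depends on the setup. A sketch, with slave.example.com as a hypothetical remote host:

    # Forward local port 24007 to glusterd on the remote side over SSH,
    # so traffic to localhost:24007 reaches slave.example.com:24007.
    ssh -f -N -L 24007:localhost:24007 root@slave.example.com

    # Confirm the forwarded port is listening locally.
    ss -tln | grep 24007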
2018 Feb 08 | 0 | georeplication over ssh.
CCing the glusterd team for information. On Thu, Feb 8, 2018 at 10:02 AM, Alvin Starr <alvin at netvel.net> wrote: > That makes for an interesting problem. > > I cannot open port 24007 to allow RPC access. > > On 02/07/2018 11:29 PM, Kotresh Hiremath Ravishankar wrote: > > Hi Alvin, > > Yes, geo-replication sync happens via SSH. The server port 24007 belongs to >
2016 Feb 08 | 3 | KVM
> If you run top what are you seeing on the %Cpu(s) line? 20%. On Mon, Feb 8, 2016 at 9:30 PM, Alvin Starr <alvin at netvel.net> wrote: > Slow disks will show up as higher I/O wait times. > If you're seeing 99% cpu usage then you're likely looking at some other problem. > > If you run top what are you seeing on the %Cpu(s) line? > > > On 02/08/2016 02:20 PM, Gokan Atmaca
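For reference, the I/O wait being discussed can also be watched outside of top with the standard sysstat and procps tools; a sketch, not commands quoted from the thread:

    # %iowait in the CPU summary plus high await/%util per device
    # point at slow disks rather than a CPU-bound workload.
    iostat -x 1 5

    # vmstat's "wa" column shows the same signal over time.
    vmstat 1 5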
2014 Oct 31 | 2 | Re: reboot problem with libxl
I was sort of hoping that it was something simple like setting the "do_the_right_thing" flag. libvirtd kicks out:
2014-10-31 11:58:57.111+0000: 8741: error : virRegisterNetworkDriver:549 : driver in virRegisterNetworkDriver must not be NULL
2014-10-31 11:59:29.379+0000: 8840: error : virRegisterNetworkDriver:549 : driver in virRegisterNetworkDriver must not be NULL
2014-10-31
2017 Mar 20 | 2 | grub-bootxen.sh
This is not a big issue, just a minor annoyance. I use Foreman to provision my systems, and to keep control I remove all the default *.repo files and keep away from installing more *.repo files, so I can control the content via the Foreman (Katello) provided redhat.repo. I would argue that the *-release-*.rpm packages should not contain any setup code, just the stuff in /etc/yum.repos.d. -- Alvin
2015 Sep 08 | 2 | Beta CentOS 7 Xen packages available
Firstly, CentOS is primarily a RHEL clone, which means the primary design decision is to be as RHEL-like as possible; additions and upgrades come after that. Secondly, Fedora does not actively support Xen. As a long-time Xen and RH/Fedora user I have spent lots of time building and rebuilding broken or missing packages in Fedora. Quite frankly, Xen under Fedora is somewhat broken. Libvirt
2016 Sep 07 | 2 | Fwd: Centos 6 AMI does not support c4-8xlarge
One suboptimal thing about Marketplace images is that the author can limit which instance types are allowed with the AMI, and there is no way to override that. We are using CentOS 6.8 for our deployments and need to move to the c4.8xlarge type, but that is not a permitted option for the "CentOS 6 (x86_64) - with Updates HVM" AMI. Is there any way we could get that image
2015 Sep 08 | 1 | Beta CentOS 7 Xen packages available
On 09/08/2015 10:58 AM, Konrad Rzeszutek Wilk wrote: > On Tue, Sep 08, 2015 at 10:50:57AM -0400, Alvin Starr wrote: >> Firstly, CentOS is primarily a RHEL clone. >> This means that the primary design decisions are to be as RHEL-like as >> possible. >> After that there are additions and upgrades. >> >> Secondly, Fedora does not actively support Xen. >
2016 Sep 07 | 2 | Fwd: Centos 6 AMI does not support c4-8xlarge
I have done that, but the point of the request is that we would like an official upstream AMI that we can use as the basis for our work. I'm guessing the reason for the blocked instance type is that early 6.x kernels didn't have the patches necessary to support the 36 vCPUs present in the c4.8xlarge instance. John On Wed, Sep 7, 2016 at 10:46 AM, Alvin Starr <alvin
2024 Dec 07 | 1 | GlusterFS over LVM (thick not thin!)
I am afraid of LVM in the face of disk failure: if I have 2 disks and one crashes, I think the whole LVM setup goes bad! So I have decided to use individual disks. Thanks On Fri, Nov 22, 2024 at 13:39, Gilberto Ferreira <gilberto.nunes32 at gmail.com> wrote: > Hi Allan! > > Thanks for your feedback. > > Cheers > > > > > > > > On Fri, Nov 22,
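For context, and not advice given in the thread: the failure mode described applies to plain linear LVs, which do span disks such that losing one disk loses the LV, but LVM can also mirror at the LV level. A hedged sketch, with vg0 and data as hypothetical names:

    # Create a RAID1-mirrored logical volume across two PVs,
    # so the LV survives the loss of either disk.
    lvcreate --type raid1 -m 1 -L 100G -n data vg0

    # Check sync progress of the mirror.
    lvs -a -o name,copy_percent,devices vg0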
2014 Oct 30 | 2 | reboot problem with libxl
If I reboot a single VM through libvirt/libxl the system reboots normally. If I have several VMs reboot at the same time, the systems go into a paused state and do not reboot. I then have to kill them via xl and restart them.
--
Alvin Starr || voice: (905)513-7688
Netvel Inc. || Cell: (416)806-0133
alvin@netvel.net ||
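The manual recovery described, as a minimal sketch with the standard xl tool; the domain name myvm and its config path are hypothetical:

    # List domains; stuck guests show "p" (paused) in the State column.
    xl list

    # Force-kill the stuck domain, then start it again from its config.
    xl destroy myvm
    xl create /etc/xen/myvm.cfg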
2018 Feb 26 | 2 | Gluster performance / Dell Idrac enterprise conflict
Here is info about the RAID controllers; they don't seem to be the culprit.

Slow host: PERC H710 Mini (Embedded), firmware 21.3.4-0001, cache 512 MB
Fast host: PERC H310 Mini (Embedded), firmware 20.12.1-0002, cache 0 MB
Slow host: PERC H310 Mini (Embedded), firmware 20.13.1-0002, cache 0 MB
Slow host: PERC H310 Mini
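For what it's worth, controller details like those above can be pulled with Dell's OpenManage CLI, assuming it is installed on the hosts; a sketch, not a command quoted in the thread:

    # Lists each PERC controller with its firmware level and cache memory size.
    omreport storage controller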
2017 Mar 22 | 2 | grub-bootxen.sh
I actually move the default *.repo files and replace them with "". The thing is that Katello turns all the downloaded yum content into a single redhat.repo file, so I don't have to install any more *-release-* RPMs. I would argue that I should not need to install any *-release-* RPMs at all to get all the required software. On 03/22/2017 09:34 AM, -=X.L.O.R.D=- wrote:
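A hedged illustration of that approach (not a script from the thread): emptying the stock repo definitions so only the Katello-managed redhat.repo stays effective might look like this:

    # Truncate every default repo definition so yum ignores it;
    # subscription-manager/Katello provides redhat.repo separately.
    for f in /etc/yum.repos.d/*.repo; do
        [ "$f" = /etc/yum.repos.d/redhat.repo ] && continue
        : > "$f"   # replace the file's contents with ""
    done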
2016 Feb 08 | 3 | KVM
> I'm guessing you're using standard 7,200rpm platter drives? You'll need > to share more information about your environment in order for us to > provide useful feedback. Usually though, the answer is 'caching' and/or > 'faster disks'. Yes, 7.2k rpm disks, a 2 TB mirror (software). In fact, I had such a preference for slightly more capacity. Unfortunately very
2018 Feb 26 | 2 | Gluster performance / Dell Idrac enterprise conflict
I've tested about 12 different Dell servers. Only a couple of them have Idrac Express; all the others have Idrac Enterprise. All the boxes with Enterprise perform poorly, and the couple that have Express perform well. I use the disks in RAID mode on all of them. I've tried a few non-Dell boxes and they all perform well, even though some of them are very old. I've also tried
2018 Aug 29 | 2 | TPM
On 08/29/2018 12:08 PM, Stephen John Smoogen wrote: > > > On Wed, 29 Aug 2018 at 11:58, Dag Nygren <dag at newtech.fi> wrote: > > On Wednesday 29 August 2018 at 17:39:18 EEST Stephen John Smoogen > wrote: > > On Wed, 29 Aug 2018 at 10:25, Dag Nygren <dag at newtech.fi>
2018 Feb 26 | 0 | Gluster performance / Dell Idrac enterprise conflict
I would be really surprised if the problem was related to Idrac. The Idrac processor is a standalone CPU with its own NIC, and it runs independently of the main CPU. That being said, it does have visibility into the whole system. Try using dmidecode to compare the systems, and take a close look at the RAID controllers and what size and form of cache they have. On 02/26/2018 11:34 AM, Ryan
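A sketch of the comparison being suggested, using the standard dmidecode tool (the exact invocation is not given in the thread):

    # Dump the full hardware inventory on each host, then diff the files.
    dmidecode > /tmp/$(hostname).dmi

    # BIOS, system, and baseboard records are often where
    # "identical" boxes turn out to differ.
    dmidecode -t bios -t system -t baseboard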
2018 Feb 27 | 0 | Gluster performance / Dell Idrac enterprise conflict
What is your gluster setup? Please share volume details for where the VMs are stored. It could be that the slow host is hosting an arbiter volume. Alex On Feb 26, 2018 13:46, "Ryan Wilkinson" <ryanwilk at gmail.com> wrote: > Here is info about the RAID controllers; they don't seem to be the culprit. > > Slow host: PERC H710 Mini (Embedded), firmware
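The volume details being requested come from the stock gluster CLI; for example, with a hypothetical volume name vmstore:

    # Shows brick layout, replica count, and whether an arbiter is configured.
    gluster volume info vmstore
    gluster volume status vmstore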
2018 Feb 27 | 1 | Gluster performance / Dell Idrac enterprise conflict
All volumes are configured as replica 3. I have no arbiter volumes. Storage hosts are for storage only, and virt hosts are dedicated virt hosts. I've checked throughput from the virt hosts to all 3 gluster hosts and am getting ~9Gb/s. On Tue, Feb 27, 2018 at 1:33 AM, Alex K <rightkicktech at gmail.com> wrote: > What is your gluster setup? Please share volume details for where the VMs are
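The thread doesn't say which tool produced the ~9Gb/s figure; a common way to measure it would be iperf3 between each virt host and each gluster host (gluster1.example.com is a hypothetical hostname):

    # On the gluster host:
    iperf3 -s

    # On the virt host; ~9 Gb/s on a 10GbE link largely rules out the network.
    iperf3 -c gluster1.example.com -t 10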