Displaying 20 results from an estimated 300 matches similar to: "georeplication sync deamon"
2017 Oct 24
2
active-active georeplication?
hi everybody,
Has glusterfs released a feature named active-active georeplication? If
yes, in which version was it released? If not, is this feature planned?
2017 Oct 24
2
active-active georeplication?
Halo replication [1] could be of interest here. This functionality is
available since 3.11 and the current plan is to have it fully supported in
a 4.x release.
Note that Halo replication is built on existing synchronous replication in
Gluster and differs from the current geo-replication implementation.
Kotresh's response is spot on for the current geo-replication
implementation.
Regards,
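For reference, a minimal sketch of how Halo replication is typically switched on for an existing replica volume. The option names below are the cluster.halo-* volume options that came in with the feature; treat the values as placeholders and confirm against `gluster volume set help` on your release:

    # Enable halo and cap the latency (in ms) at which a brick stays in the write set
    gluster volume set <VOLNAME> cluster.halo-enabled yes
    gluster volume set <VOLNAME> cluster.halo-max-latency 10
    # Keep at least this many replicas in the write set regardless of latency
    gluster volume set <VOLNAME> cluster.halo-min-replicas 2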
2017 Oct 24
0
active-active georeplication?
Thanks for the reply, that was very interesting to me.
How can I keep up with news about new glusterfs features?
On Tue, Oct 24, 2017 at 5:54 PM, Vijay Bellur <vbellur at redhat.com> wrote:
>
> Halo replication [1] could be of interest here. This functionality is
> available since 3.11 and the current plan is to have it fully supported in
> a 4.x release.
>
> Note that Halo
2017 Oct 24
0
active-active georeplication?
Hi,
No, gluster doesn't support active-active geo-replication. It's not planned
in the near future. We will let you know when it's planned.
Thanks,
Kotresh HR
On Tue, Oct 24, 2017 at 11:19 AM, atris adam <atris.adam at gmail.com> wrote:
> hi everybody,
>
> Has glusterfs released a feature named active-active georeplication? If
> yes, in which version was it released?
2017 Jun 23
2
seeding my georeplication
I have a ~600 TB distributed gluster volume that I want to start using
geo-replication on.
The current volume is on six 100 TB bricks on 2 servers.
My plan is (a command sketch for the final step follows below):
1) copy each of the bricks to new arrays on the servers locally
2) move the new arrays to the new servers
3) create the volume on the new servers using the arrays
4) fix the layout on the new volume
5) start georeplication (which should be
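A rough sketch of what that final step usually looks like once the seeded volume exists on the new servers (volume and host names are placeholders):

    # Create the geo-replication session; push-pem distributes the SSH keys to the slave
    gluster volume geo-replication <MASTERVOL> <slavehost>::<SLAVEVOL> create push-pem
    # Start syncing and watch the per-brick crawl status
    gluster volume geo-replication <MASTERVOL> <slavehost>::<SLAVEVOL> start
    gluster volume geo-replication <MASTERVOL> <slavehost>::<SLAVEVOL> status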
2017 Dec 21
1
seeding my georeplication
Thanks for your response (6 months ago!) but I have only just got around to
following up on this.
Unfortunately, I had already copied and shipped the data to the second
datacenter before copying the GFIDs, so I stumbled at the first hurdle!
I have been using the scripts in extras/geo-rep provided for an earlier
version upgrade. With a bit of tinkering, these have given me a file
2013 Jul 08
1
Possible to preload data on a georeplication target? First sync taking forever...
I have about 4 TB of data in a Gluster mirror configuration on top of ZFS,
mostly consisting of 20KB files.
I've added a georeplication target and the sync started OK. The target is
using an SSH destination. It ran pretty quickly for a while, but it has taken
over 2 weeks to sync just under 1 TB of data to the target server, and it
appears to be getting slower.
The two servers are connected
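For a workload dominated by small files like this, the geo-replication session config is the usual first stop. A hedged sketch (option spelling differs between releases, e.g. use_tarssh vs. use-tarssh, so check the `config` listing on your version):

    # List the current session configuration
    gluster volume geo-replication <MASTERVOL> <slavehost>::<SLAVEVOL> config
    # Tar-over-ssh sync mode tends to help with many small files
    gluster volume geo-replication <MASTERVOL> <slavehost>::<SLAVEVOL> config use_tarssh true
    # More parallel sync jobs per worker
    gluster volume geo-replication <MASTERVOL> <slavehost>::<SLAVEVOL> config sync_jobs 4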
2018 Feb 08
0
georeplication over ssh.
Hi Alvin,
Yes, geo-replication sync happens via SSH. The server port 24007 belongs to
glusterd.
glusterd listens on this port, and all volume management communication
happens via RPC.
Thanks,
Kotresh HR
On Wed, Feb 7, 2018 at 8:29 PM, Alvin Starr <alvin at netvel.net> wrote:
> I am running gluster 3.8.9 and trying to setup a geo-replicated volume
> over ssh,
>
> It looks
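In other words, two paths need to be open towards the slave: ssh for the gsyncd data channel and glusterd's 24007/tcp for the management RPC used by the create step. A minimal sketch, assuming the slave runs firewalld with default ports:

    firewall-cmd --permanent --add-service=ssh      # gsyncd data channel
    firewall-cmd --permanent --add-port=24007/tcp   # glusterd management RPC
    firewall-cmd --reload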
2023 Nov 28
0
Is there a way to short circuit the georeplication process?
We have an application that is storing an insane number of small files.
We have run some tests enabling geo-replication and letting it run,
but on our smallest data set it takes 10 days, and our largest data set
will likely take over 100 days.
Would there be any way to take a copy of the data brick and convert that
into a replicated image and then enable replication from the time of the
2018 Feb 07
2
georeplication over ssh.
I am running gluster 3.8.9 and trying to set up a geo-replicated volume
over ssh.
It looks like the volume create command is trying to directly access the
server over port 24007.
The docs imply that all communications are over ssh.
What am I missing?
--
Alvin Starr || land: (905)513-7688
Netvel Inc. || Cell: (416)806-0133
alvin at netvel.net
2018 Feb 08
0
georeplication over ssh.
CCing the glusterd team for information.
On Thu, Feb 8, 2018 at 10:02 AM, Alvin Starr <alvin at netvel.net> wrote:
> That makes for an interesting problem.
>
> I cannot open port 24007 to allow RPC access.
>
> On 02/07/2018 11:29 PM, Kotresh Hiremath Ravishankar wrote:
>
> Hi Alvin,
>
> Yes, geo-replication sync happens via SSH. The server port 24007 belongs to
>
2018 Feb 08
2
georeplication over ssh.
That makes for an interesting problem.
I cannot open port 24007 to allow RPC access.
On 02/07/2018 11:29 PM, Kotresh Hiremath Ravishankar wrote:
> Hi Alvin,
>
> Yes, geo-replication sync happens via SSH. The server port 24007 belongs
> to glusterd.
> glusterd listens on this port, and all volume management
> communication
> happens via RPC.
>
> Thanks,
>
2005 Jan 30
0
pictures printed upside down and mirrored
Hi all,
one of my applications (Breezebrowser) runs fine under wine (it's
a digital picture cataloging system), but when I want to make
prints, either thumbnails or a single picture, the pictures are
printed upside down and mirrored. So I have to turn the paper 180
degrees and look through it against the light, which is somewhat confusing.
The preview print on screen is OK! Also the print is OK if I
2018 Feb 04
2
halo not working as desired!!!
I have 2 data centers in two different regions, and each DC has 3 servers. I
have created a glusterfs volume with replica 4; this is the glusterfs volume
info output:
Volume Name: test-halo
Type: Replicate
Status: Started
Snapshot Count: 0
Number of Bricks: 1 x 4 = 4
Transport-type: tcp
Bricks:
Brick1: 10.0.0.1:/mnt/test1
Brick2: 10.0.0.3:/mnt/test2
Brick3: 10.0.0.5:/mnt/test3
Brick4: 10.0.0.6:/mnt/test4
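A quick way to see which halo settings are actually in effect on a volume laid out like this, using the volume name from the info output above (`volume get` is available on the releases that ship halo):

    gluster volume get test-halo all | grep -i halo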
2012 Jun 29
2
compile glusterfs for debian squeeze
Hello, I'm compiling glusterfs for Debian squeeze.
When I run make, I see these parameters:
GlusterFS configure summary
===========================
FUSE client: yes
Infiniband verbs: yes
epoll IO multiplex: yes
argp-standalone: no
fusermount: no
readline: no
georeplication: yes
I would like to create a package that can be used both as a client and a server.
I'm not interested
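For what it's worth, a minimal sketch of the usual source build; the summary above is printed at the end of the configure step and reflects which optional development packages were found, so installing the missing ones and re-running configure changes it. The package names here are assumptions for squeeze-era Debian:

    # Optional build deps so configure detects them (names are assumptions, check apt)
    apt-get install libfuse-dev libreadline-dev libibverbs-dev flex bison
    ./autogen.sh
    ./configure --prefix=/usr
    make && make install   # installs both the client and the server binaries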
2018 Feb 05
0
halo not working as desired!!!
I have mounted the halo glusterfs volume in debug mode, and the output is
as follows:
.
.
.
[2018-02-05 11:42:48.282473] D [rpc-clnt-ping.c:211:rpc_clnt_ping_cbk] 0-test-halo-client-1: Ping latency is 0ms
[2018-02-05 11:42:48.282502] D [MSGID: 0] [afr-common.c:5025:afr_get_halo_latency] 0-test-halo-replicate-0: Using halo latency 10
[2018-02-05 11:42:48.282525] D [MSGID: 0]
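For reference, one common way to get such a debug-level client log is to mount with the log level raised; the server address and mount point here are placeholders:

    glusterfs --volfile-server=10.0.0.1 --volfile-id=test-halo \
        --log-level=DEBUG /mnt/test-halo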
2017 Jul 12
1
Hi all
I have set up a distributed glusterfs volume with 3 servers. The network is
1GbE, and I ran a filebench test from a client.
Refer to this link:
https://s3.amazonaws.com/aws001/guided_trek/Performance_in_a_Gluster_Systemv6F.pdf
According to it, the more servers in the gluster setup, the more throughput
should be gained. I have tested the network; the bandwidth is 117 MB/s, so
with 3 servers I should gain about 300 MB/s (3*117
2017 Jun 20
1
Cloud storage with glusterfs
Hello everybody
I have 3 datacenters in different regions. Can I deploy my own cloud
storage with the help of glusterfs on the physical nodes? If I can, what are
the differences between glusterfs cloud storage and local gluster storage?
thx for your attention :)
2017 Oct 14
1
nic requirement for tiering glusterfs
Hi everybody, I have a question about the network interface used for tiering
in glusterfs. If I have a 1G nic on the glusterfs servers and clients, can I
get more performance by setting up glusterfs tiering, or should the network
interface be 10G?
2017 Oct 24
2
create volume in two different Data Centers
Hi
I have two data centers, each of them with 3 servers. These two data centers
can see each other over the internet.
I want to create a distributed glusterfs volume with these 6 servers, but I
have only one valid IP in each data center. Is it possible to create a
glusterfs volume? Can anyone guide me?
thx a lot