Displaying 20 results from an estimated 20 matches for "georeplication".
2017 Oct 24
2
active-active georeplication?
hi everybody,
Has GlusterFS released a feature named active-active georeplication? If
yes, in which version was it released? If not, is this feature planned?
2017 Sep 17
2
georeplication sync daemon
hi all,
I want to know more detail about GlusterFS georeplication, specifically the
sync daemon. If 'file A' has been mirrored to the slave volume and a change
then happens to 'file A', how does the sync daemon act?
1. Does it transfer the whole 'file A' to the slave?
2. Or does it transfer only the changes to 'file A'?
Thanks a lot
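For background: by default the geo-replication sync daemon (gsyncd) picks up
changed files from the changelog and hands them to rsync, whose delta-transfer
algorithm sends only the changed blocks of a file, so option 2 is closer to
what happens. A rough standalone sketch of that transfer, with placeholder
path and hostname (gsyncd drives rsync internally with its own flag set, so
this is only an approximation):
    # rsync sends only the blocks of 'file A' that differ on the slave;
    # --inplace updates the existing copy rather than rewriting it whole
    rsync -av --inplace /mnt/mastervol/fileA slavehost:/mnt/slavevol/fileA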
2013 Jul 08
1
Possible to preload data on a georeplication target? First sync taking forever...
I have about 4 TB of data in a Gluster mirror configuration on top of ZFS,
mostly consisting of 20KB files.
I've added a georeplication target and the sync started ok. The target is
using an SSH destination. It ran pretty quick for a while but it's taken
over 2 weeks to sync just under 1 TB of data to the target server and it
appears to be getting slower.
The two servers are connected to the same switch on a private segment...
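A commonly suggested workaround for a slow first crawl is to pre-seed the
slave with plain rsync before creating the geo-replication session; a minimal
sketch, assuming both volumes are FUSE-mounted and that the mount points and
hostname are placeholders:
    # copy through the mounted volumes (not the raw bricks), preserving
    # hard links, ACLs, and xattrs, so the initial crawl finds the data
    rsync -aHAX --numeric-ids /mnt/mastervol/ slavehost:/mnt/slavevol/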
2017 Oct 24
0
active-active georeplication?
...ctive-active geo-replication. It's not planned
in the near future. We will let you know when it's planned.
Thanks,
Kotresh HR
On Tue, Oct 24, 2017 at 11:19 AM, atris adam <atris.adam at gmail.com> wrote:
> hi everybody,
>
> Has GlusterFS released a feature named active-active georeplication? If
> yes, in which version was it released? If not, is this feature planned?
>
--
Thanks...
2017 Oct 24
0
active-active georeplication?
...now when it's planned.
>>
>> Thanks,
>> Kotresh HR
>>
>> On Tue, Oct 24, 2017 at 11:19 AM, atris adam <atris.adam at gmail.com>
>> wrote:
>>
>>> hi everybody,
>>>
>>> Has GlusterFS released a feature named active-active georeplication? If
>>> yes, in which version was it released? If not, is this feature planned?
>>>
2017 Oct 24
2
active-active georeplication?
...> planned in the near future. We will let you know when it's planned.
>
> Thanks,
> Kotresh HR
>
> On Tue, Oct 24, 2017 at 11:19 AM, atris adam <atris.adam at gmail.com> wrote:
>
>> hi everybody,
>>
>> Has GlusterFS released a feature named active-active georeplication? If
>> yes, in which version was it released? If not, is this feature planned?
>>
2023 Nov 28
0
Is there a way to short-circuit the georeplication process?
We have an application that is storing an insane number of small files.
We have run some tests enabling geo-replication and letting it run,
but on our smallest data set it takes 10 days and our largest data set
will likely take over 100 days.
Would there be any way to take a copy of the data brick, convert that
into a replicated image, and then enable replication from the time of the
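Any seeded copy only lines up with later changelog-based sync if the GFIDs on
the slave match those on the master; GlusterFS stores them as an extended
attribute on every brick entry. A quick way to inspect one, assuming root
access on a brick (the path is a placeholder):
    # each file and directory on a brick carries its volume-wide identity
    # in the trusted.gfid xattr
    getfattr -n trusted.gfid -e hex /data/brick1/path/to/file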
2018 Feb 08
0
georeplication over ssh.
Hi Alvin,
Yes, geo-replication sync happens via SSH. The server port 24007 belongs to
glusterd.
glusterd listens on this port, and all volume management communication
happens via RPC.
Thanks,
Kotresh HR
On Wed, Feb 7, 2018 at 8:29 PM, Alvin Starr <alvin at netvel.net> wrote:
> I am running gluster 3.8.9 and trying to setup a geo-replicated volume
> over ssh,
>
> It looks
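A quick reachability test for the glusterd RPC port mentioned above, assuming
nc is available; the hostname is a placeholder:
    # confirm glusterd on the remote node is reachable on 24007
    nc -zv slavehost 24007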
2017 Jun 23
2
seeding my georeplication
...g geo
replication on.
The current volume is on 6 x 100 TB bricks on 2 servers.
My plan is:
1) copy each of the bricks to new arrays on the servers locally
2) move the new arrays to the new servers
3) create the volume on the new servers using the arrays
4) fix the layout on the new volume
5) start georeplication (which should have relatively little to sync, as
most of the data should already be there?)
Is this likely to succeed?
Any advice welcomed.
--
Dr Stephen Remde
Director, Innovation and Research
T: 01535 280066
M: 07764 740920
E: stephen.remde at gaist.co.uk
W: www.gaist.co.uk
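For step 4 of the plan above, "fix the layout" maps to the rebalance
fix-layout operation, which rewrites the DHT directory layout without
migrating file data; a sketch with a placeholder volume name:
    gluster volume rebalance newvol fix-layout start
    gluster volume rebalance newvol status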
2018 Feb 07
2
georeplication over ssh.
I am running gluster 3.8.9 and trying to set up a geo-replicated volume
over ssh.
It looks like the volume create command is trying to directly access the
server over port 24007.
The docs imply that all communications are over ssh.
What am I missing?
--
Alvin Starr || land: (905)513-7688
Netvel Inc. || Cell: (416)806-0133
alvin at netvel.net
2018 Feb 08
0
georeplication over ssh.
CCing the glusterd team for information.
On Thu, Feb 8, 2018 at 10:02 AM, Alvin Starr <alvin at netvel.net> wrote:
> That makes for an interesting problem.
>
> I cannot open port 24007 to allow RPC access.
>
> On 02/07/2018 11:29 PM, Kotresh Hiremath Ravishankar wrote:
>
> Hi Alvin,
>
> Yes, geo-replication sync happens via SSH. The server port 24007 belongs to
>
2018 Feb 08
2
georeplication over ssh.
That makes for an interesting problem.
I cannot open port 24007 to allow RPC access.
On 02/07/2018 11:29 PM, Kotresh Hiremath Ravishankar wrote:
> Hi Alvin,
>
> Yes, geo-replication sync happens via SSH. The server port 24007 belongs
> to glusterd.
> glusterd listens on this port, and all volume management communication
> happens via RPC.
>
> Thanks,
>
2017 Dec 21
1
seeding my georeplication
...center before copying the GFIDs so I already stumbled before the first
hurdle!
I have been using the scripts in the extras/geo-rep provided for an earlier
version upgrade. With a bit of tinkering, these have given me a file
containing the gfid/file pairs needed to sync to the slave before enabling
georeplication.
Unfortunately, the gsync-sync-gfid program isn't working. It reports failure
for all of the files, and I see the following in the fuse log:
[2017-12-21 16:36:37.171846] D [MSGID: 0]
[dht-common.c:997:dht_revalidate_cbk] 0-video-backup-dht: revalidate
lookup of /path returned with op_ret...
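One thing worth checking, as a guess from the symptoms: gsync-sync-gfid
assigns GFIDs through the gfid-access translator, so the client mount it runs
against must be created with that translator loaded; a plain FUSE mount will
refuse its virtual setxattr calls. A sketch, with placeholder server and
mountpoint names:
    # --aux-gfid-mount loads the gfid-access translator on the client side
    glusterfs --volfile-server=masterhost --volfile-id=video-backup \
        --aux-gfid-mount /mnt/video-backup-gfid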
2012 Jun 29
2
compile glusterfs for debian squeeze
Hello, I'm compiling glusterfs for Debian Squeeze.
When I run make, I see these parameters:
GlusterFS configure summary
===========================
FUSE client: yes
Infiniband verbs: yes
epoll IO multiplex: yes
argp-standalone: no
fusermount: no
readline: no
georeplication: yes
I would like to create a package that can be used both as a client and a server.
I'm not interested in Infiniband, but I would like to know how to
handle the following parameters:
argp-standalone: no
fusermount: no
readline: no
Can they be ignored, or should I install any particular pack...
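As a rough guide for a glibc system like Debian: argp-standalone is only
needed on non-glibc platforms, so "no" is harmless; "readline: no" merely
disables the interactive gluster CLI mode; and the fusermount check depends
on the FUSE development files. Likely (unverified, Squeeze-era) package names:
    # pull in readline and FUSE development headers, then re-run configure
    apt-get install libreadline-dev libfuse-dev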
2017 Jun 26
1
"Rotating" .glusterfs/changelogs
Hello all,
I'm trying to find a way to rotate the metadata changelogs.
I've learned so far (from ndevos in #gluster) that the changelog is needed
for certain services, among them georeplication, but I'm not entirely sure
to what extent.
Is there a way to rotate these logs so that they take up less space?
This is not an entirely critical issue, but it seems kinda silly that a
3 GB volume with 300 MB of data grows its metadata by something like
500 MB a month.
We can ju...
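As far as I know there is no built-in rotation for these changelogs, but the
rollover interval is tunable, which controls how quickly new changelog files
accumulate; a sketch with a placeholder volume name (option name as in 3.x
releases):
    # show and raise the changelog rollover interval (in seconds)
    gluster volume get myvol changelog.rollover-time
    gluster volume set myvol changelog.rollover-time 60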
2012 Nov 15
0
Why does geo-replication stop when a replica member goes down
Hi,
We are testing glusterfs. We have a setup like this:
Site A: 4 nodes, 2 bricks per node, 1 volume, distributed, replicated,
replica count 2
Site B: 2 nodes, 2 bricks per node, 1 volume, distributed
georeplication setup: master: site A, node 1. slave: site B, node 1, ssh
replicasets on Site A:
node 1, brick 1 + node 3, brick 1
node 2, brick 1 + node 4, brick 1
node 2, brick 2 + node 3, brick 2
node 1, brick 2 + node 4, brick 2
I monitor geo-replication status with the command:
watch -n 1 gluster volume geo-repl...
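The full form of the status command being watched above, with placeholder
master and slave names; "status detail" adds per-brick crawl and sync columns:
    gluster volume geo-replication mastervol slavenode::slavevol status detail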
2012 Dec 03
1
configure: error: python does not have ctypes support
Hi,
I am trying to install glusterfs 3.3.1 from source code. At the time of
configuration I am getting the following error:
configure: error: python does not have ctypes support
On my system the python version is: 2.4.3
Kindly advise on fixing the error.
Thanks and Regards
Neetu Sharma
Bangalore
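ctypes only entered the Python standard library in 2.5, so a 2.4.3
interpreter fails configure's check unless the separately packaged ctypes
module is installed (or Python is upgraded). A quick test:
    # raises ImportError on a Python 2.4 install without the ctypes add-on
    python -c 'import ctypes; print ctypes.__version__'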
2018 May 08
2
Compiling 3.13.2 under FreeBSD 11.1?
...configure (taken from the glusterfs
3.11.1 port in the FreeBSD port repository):
./configure CFLAGS="-I/usr/local/include" LDFLAGS="-L/usr/local/lib -largp" \
--with-pkgconfigdir=/usr/local/libdata/pkgconfig \
--localstatedir=/var \
--disable-epoll \
--enable-glupy \
--disable-georeplication \
ac_cv_lib_aio_io_setup=no \
ac_cv_func_fallocate=no \
ac_cv_func_setfsuid=no \
ac_cv_func_fdatasync=no \
ac_cv_func_llistxattr=no \
ac_cv_func_malloc_stats=no \
--enable-debug <--- this one was added by me
# ldd gluster
gluster:
libglusterfs.so.0 => /usr/lib/libglusterfs.so.0 (0x80...
2018 May 07
0
Compiling 3.13.2 under FreeBSD 11.1?
On 05/07/2018 04:29 AM, Roman Serbski wrote:
> Hello,
>
> Has anyone managed to successfully compile the latest 3.13.2 under
> FreeBSD 11.1? ./autogen.sh and ./configure seem to work but make
> fails:
See https://review.gluster.org/19974
3.13 reached EOL with 4.0. There will be a fix posted for 4.0 soon. In
the meantime I believe your specific problem with 3.13.2 should be
2018 May 07
2
Compiling 3.13.2 under FreeBSD 11.1?
Hello,
Has anyone managed to successfully compile the latest 3.13.2 under
FreeBSD 11.1? ./autogen.sh and ./configure seem to work but make
fails:
Making all in src
CC glfs.lo
cc: warning: argument unused during compilation: '-rdynamic'
[-Wunused-command-line-argument]
cc: warning: argument unused during compilation: '-rdynamic'
[-Wunused-command-line-argument]
fatal