deepu srinivasan
2019-Jun-14 07:18 UTC
[Gluster-users] Geo Replication Stop even after migrating to 5.6
Hi Guys,
Yes, I will try the root geo-rep setup and update you back. Meanwhile, is there any procedure for the below-quoted info in the docs?

> Synchronization is not complete
>
> *Description*: GlusterFS geo-replication did not synchronize the data
> completely, but the geo-replication status displayed is OK.
>
> *Solution*: You can enforce a full sync of the data by erasing the index
> and restarting GlusterFS geo-replication. After restarting, GlusterFS
> geo-replication begins synchronizing all the data. All files are compared
> using checksum, which can be a lengthy and high-resource-utilization
> operation on large data sets.

On Fri, Jun 14, 2019 at 12:30 PM Kotresh Hiremath Ravishankar <khiremat at redhat.com> wrote:

> Could you please try the root geo-rep setup and update back?
>
> On Fri, Jun 14, 2019 at 12:28 PM deepu srinivasan <sdeepugd at gmail.com> wrote:
>
>> Hi, any updates on this?
>>
>> On Thu, Jun 13, 2019 at 5:43 PM deepu srinivasan <sdeepugd at gmail.com> wrote:
>>
>>> Hi Guys,
>>> Hope you remember the issue I reported about geo-replication hanging in the History Crawl status.
>>> You advised me to update the Gluster version; previously I was using 4.1 and I have now upgraded to 5.6. Still, after deleting the previous geo-rep session and creating a new one, the geo-rep session hangs. Is there any other way I could solve the issue?
>>> I heard that I could redo the whole geo-replication again. How could I do that?
>>> Please help.
>
> --
> Thanks and Regards,
> Kotresh H R
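For reference, a minimal sketch of how one might check whether the "OK" status actually matches the data on the slave before forcing a full re-sync. The volume and host names (mastervol, slavehost, slavevol) and the mount points /mnt/master and /mnt/slave are placeholders, not taken from this thread:

    # Per-brick geo-rep status, including the crawl phase and pending-entry
    # counters (placeholder volume/host names).
    gluster volume geo-replication mastervol slavehost::slavevol status detail

    # Rough consistency check between master and slave mounts: checksum every
    # regular file on both sides and diff the lists. Like the checksum pass
    # described in the docs, this can be slow on large data sets.
    (cd /mnt/master && find . -type f -print0 | sort -z | xargs -0 md5sum) > /tmp/master.sums
    (cd /mnt/slave  && find . -type f -print0 | sort -z | xargs -0 md5sum) > /tmp/slave.sums
    diff /tmp/master.sums /tmp/slave.sums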
Kotresh Hiremath Ravishankar
2019-Jun-14 08:43 UTC
[Gluster-users] Geo Replication Stop even after migrating to 5.6
It's about a complete re-sync. The idea is to set the stime xattr, which marks the sync time, to 0 on all the bricks. If a lot of the data is not synced to the slave, this is not very useful. You can as well delete the geo-rep session with the 'reset-sync-time' option and set it up again. I prefer the second way.

Thanks,
Kotresh HR

On Fri, Jun 14, 2019 at 12:48 PM deepu srinivasan <sdeepugd at gmail.com> wrote:

> Hi Guys,
> Yes, I will try the root geo-rep setup and update you back.
> Meanwhile, is there any procedure for the below-quoted info in the docs?
>
>> Synchronization is not complete
>>
>> *Description*: GlusterFS geo-replication did not synchronize the data
>> completely, but the geo-replication status displayed is OK.
>>
>> *Solution*: You can enforce a full sync of the data by erasing the index
>> and restarting GlusterFS geo-replication. After restarting, GlusterFS
>> geo-replication begins synchronizing all the data. All files are compared
>> using checksum, which can be a lengthy and high-resource-utilization
>> operation on large data sets.
>
> On Fri, Jun 14, 2019 at 12:30 PM Kotresh Hiremath Ravishankar <khiremat at redhat.com> wrote:
>
>> Could you please try the root geo-rep setup and update back?
>>
>> On Fri, Jun 14, 2019 at 12:28 PM deepu srinivasan <sdeepugd at gmail.com> wrote:
>>
>>> Hi, any updates on this?
>>>
>>> On Thu, Jun 13, 2019 at 5:43 PM deepu srinivasan <sdeepugd at gmail.com> wrote:
>>>
>>>> Hi Guys,
>>>> Hope you remember the issue I reported about geo-replication hanging in the History Crawl status.
>>>> You advised me to update the Gluster version; previously I was using 4.1 and I have now upgraded to 5.6. Still, after deleting the previous geo-rep session and creating a new one, the geo-rep session hangs. Is there any other way I could solve the issue?
>>>> I heard that I could redo the whole geo-replication again. How could I do that?
>>>> Please help.
>>
>> --
>> Thanks and Regards,
>> Kotresh H R

--
Thanks and Regards,
Kotresh H R
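A minimal sketch of the second option (delete with 'reset-sync-time' and re-setup), assuming a root geo-rep session and the same placeholder names mastervol, slavehost, slavevol, and /path/to/brick; adjust for a non-root (mountbroker) setup:

    # Inspect the stime xattr mentioned above; the exact name embeds the master
    # and slave volume UUIDs, so look for the *.stime key in the dump.
    getfattr -d -m . -e hex /path/to/brick

    # Preferred option: stop and delete the session; 'reset-sync-time' clears
    # the stored sync time so the new session starts from the beginning.
    gluster volume geo-replication mastervol slavehost::slavevol stop
    gluster volume geo-replication mastervol slavehost::slavevol delete reset-sync-time

    # Re-create and start the session (assumes passwordless SSH from the
    # master nodes to slavehost is already in place).
    gluster volume geo-replication mastervol slavehost::slavevol create push-pem force
    gluster volume geo-replication mastervol slavehost::slavevol start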