Displaying 20 results from an estimated 20 matches for "last_sync".
2018 Feb 21
2
Geo replication snapshot error
...create: failed: geo-replication session is running for the volume vol. Session needs to be stopped before taking a snapshot.
gluster volume geo-replication status
MASTER NODE MASTER VOL MASTER BRICK SLAVE USER SLAVE SLAVE NODE STATUS CRAWL STATUS LAST_SYNCED
-------------------------------------------------------------------------------------------------------------------------------------------------
ggluster1 vol /gluster geouser ssh://geouser at ggluster1-geo::vol N/A Paused N/A N/A...
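A minimal sketch of the usual workaround, assuming the volume and session names shown in the excerpt above (per the error, a paused session still counts as running, so it must be stopped, not merely paused, before the snapshot; the snapshot name is illustrative):
# gluster volume geo-replication vol geouser@ggluster1-geo::vol stop
# gluster snapshot create vol_snap1 vol
# gluster volume geo-replication vol geouser@ggluster1-geo::vol start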
2018 Feb 21
0
Geo replication snapshot error
...: geo-replication session is running for the volume
> vol. Session needs to be stopped before taking a snapshot.
>
> gluster volume geo-replication status
> MASTER NODE    MASTER VOL    MASTER BRICK    SLAVE USER    SLAVE                               SLAVE NODE    STATUS    CRAWL STATUS    LAST_SYNCED
> -------------------------------------------------------------------------------------------------------------------------------------------------
> ggluster1      vol           /gluster        geouser       ssh://geouser at ggluster1-geo::vol   N/A           Paused    N/A
>...
2018 Feb 07
2
add geo-replication "passive" node after node replacement
...know about the geo-replica and it is not ready to
geo-replicate in case S2 goes down.
Here was the original geo-rep status
# gluster volume geo-replication status
MASTER NODE    MASTER VOL    MASTER BRICK       SLAVE USER    SLAVE                       SLAVE NODE    STATUS     CRAWL STATUS    LAST_SYNCED
----------------------------------------------------------------------------------------------------------------------------------------------------------------
S2             sharedvol     /home/sharedvol    root          ssh://S5::sharedvolslave    S5            Passive    N/A             N/A
S1             sharedvol     /h...
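A hedged sketch of one common way to bring a replaced node back into an existing session, reusing the names from the excerpt above: re-running create with push-pem and force regenerates the session metadata so the new brick shows up in status, after which the session can be restarted.
# gluster volume geo-replication sharedvol S5::sharedvolslave create push-pem force
# gluster volume geo-replication sharedvol S5::sharedvolslave start force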
2018 Feb 06
4
geo-replication
...e thing I am wondering about is:
When I run: gluster volume geo-replication status
I see both slave nodes; one is active and the other is passive.
MASTER NODE MASTER VOL MASTER BRICK SLAVE USER SLAVE SLAVE NODE STATUS CRAWL STATUS LAST_SYNCED
---------------------------------------------------------------------------------------------------------------------------------------------------------------------------
gluster1 interbullfs /interbullfs geouser ssh://geouser at gluster-geo1::interbullfs-geo...
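For a per-brick view of why one slave node is Active and the other Passive, status detail adds columns such as FAILURES and CHECKPOINT TIME; a sketch using the names from the excerpt above:
# gluster volume geo-replication interbullfs geouser@gluster-geo1::interbullfs-geo status detail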
2018 Feb 07
0
add geo-replication "passive" node after node replacement
...ready to
> geo-replicate in case S2 goes down.
>
> Here was the original geo-rep status
>
> # gluster volume geo-replication status
>
> MASTER NODE    MASTER VOL    MASTER BRICK       SLAVE USER    SLAVE                       SLAVE NODE    STATUS     CRAWL STATUS    LAST_SYNCED
> ----------------------------------------------------------------------------------------------------------------------------------------------
> S2             sharedvol     /home/sharedvol    root          ssh://S5::sharedvolslave    S5            Passive    N/A...
2018 Feb 06
0
geo-replication
...ng about is:
> When I run: gluster volume geo-replication status
> I see both slave nodes; one is active and the other is passive.
>
> MASTER NODE MASTER VOL MASTER BRICK SLAVE USER SLAVE SLAVE NODE STATUS CRAWL STATUS LAST_SYNCED
> ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------
> gluster1 interbullfs /interbullfs geouser ssh://geouser at gluster-geo1::interb...
2018 Feb 07
0
geo-replication
...ering about is:
> When I run: gluster volume geo-replication status
> I see both slave nodes; one is active and the other is passive.
>
> MASTER NODE    MASTER VOL     MASTER BRICK    SLAVE USER    SLAVE    SLAVE NODE    STATUS    CRAWL STATUS    LAST_SYNCED
> ---------------------------------------------------------------------------------------------------------------------------------------------------
> gluster1       interbullfs    /interbullfs    geouser       ssh://geouser at gluster-geo1::interbullfs-...
2018 Feb 07
1
geo-replication
...I run: gluster volume geo-replication status
> > I see both slave nodes; one is active and the other is passive.
> >
> > MASTER NODE    MASTER VOL     MASTER BRICK    SLAVE USER    SLAVE    SLAVE NODE    STATUS    CRAWL STATUS    LAST_SYNCED
> > ---------------------------------------------------------------------------------------------------------------------------------------------------
> > gluster1       interbullfs    /interbullfs    geouser       ssh://geouser at g...
2018 Mar 02
1
geo-replication
...ve and the other is passive.
> > > > > >
> > > > > > MASTER NODE    MASTER VOL    MASTER BRICK    SLAVE USER    SLAVE    SLAVE NODE    STATUS    CRAWL STATUS    LAST_SYNCED
> > > > > > ---------------------------------------------------------------------------------------------------------------------------------------------------
> > > > > > glu...
2018 Mar 02
0
geo-replication
...>
> > > > > > > > MASTER NODE    MASTER VOL    MASTER BRICK    SLAVE USER    SLAVE    SLAVE NODE    STATUS    CRAWL STATUS    LAST_SYNCED
> > > > > > > > ---------------------------------------------------------------------------------------------------------------------------------------------------
>...
2017 Oct 05
0
Inconsistent slave status output
...atus shown by gluster volume geo-replication status on
each node.
[root at foo-gluster-srv3 ~]# gluster volume geo-replication status
MASTER NODE         MASTER VOL    MASTER BRICK               SLAVE USER    SLAVE                               SLAVE NODE    STATUS    CRAWL STATUS    LAST_SYNCED
--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
foo-gluster-srv1    gv0           /var/mnt/gluster/brick2    root          ssh://foo-gluster-srv3::slavevol    foo...
2024 Jan 22
1
Geo-replication status is getting Faulty after few seconds
...ception] <top>: Gluster Mount process exited [{error=ENOTCONN}]
# gluster volume geo-replication tier1data status
MASTER NODE MASTER VOL MASTER BRICK SLAVE USER SLAVE SLAVE NODE STATUS CRAWL STATUS LAST_SYNCED
-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
master1 tier1data /opt/tier1data2019/brick root ssh://drtier1data::drtier1data N/A...
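When the auxiliary mount dies with ENOTCONN, the gsyncd log on the master usually records the underlying failure; a sketch, assuming the default log layout (the directory name is derived from the session, so the exact path may differ):
# tail -n 50 /var/log/glusterfs/geo-replication/tier1data_drtier1data_drtier1data/gsyncd.log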
2024 Jan 24
1
Geo-replication status is getting Faulty after few seconds
...ception] <top>: Gluster Mount process exited [{error=ENOTCONN}]
# gluster volume geo-replication tier1data status
MASTER NODE MASTER VOL MASTER BRICK SLAVE USER SLAVE SLAVE NODE STATUS CRAWL STATUS LAST_SYNCED
-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
master1 tier1data /opt/tier1data2019/brick root ssh://drtier1data::drtier1data N/A...
2024 Jan 27
1
Geo-replication status is getting Faulty after few seconds
...eption] <top>: Gluster Mount process exited [{error=ENOTCONN}]
# gluster volume geo-replication tier1data status
MASTER NODE    MASTER VOL    MASTER BRICK                SLAVE USER    SLAVE                             SLAVE NODE    STATUS    CRAWL STATUS    LAST_SYNCED
-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
master1        tier1data     /opt/tier1data2019/brick    root          ssh://drtier1data::drtier1data...
2018 Mar 13
1
trashcan on dist. repl. volume with geo-replication
...ins...stop and restart of master/slave volume and geo-replication
has no effect.
root at gl-node1:~# gluster volume geo-replication mvol1 gl-node5-int::mvol1 status
MASTER NODE     MASTER VOL    MASTER BRICK     SLAVE USER    SLAVE                  SLAVE NODE    STATUS    CRAWL STATUS    LAST_SYNCED
----------------------------------------------------------------------------------------------------------------------------------------------------
gl-node1-int    mvol1         /brick1/mvol1    root          gl-node5-int::mvol1    N/A           Faulty    N/A             N/A
gl-node3-int    mvol...
2024 Jan 27
1
Geo-replication status is getting Faulty after few seconds
...eption] <top>: Gluster Mount process exited [{error=ENOTCONN}]
# gluster volume geo-replication tier1data status
MASTER NODE    MASTER VOL    MASTER BRICK                SLAVE USER    SLAVE                             SLAVE NODE    STATUS    CRAWL STATUS    LAST_SYNCED
-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
master1        tier1data     /opt/tier1data2019/brick    root          ssh://drtier1data::drtier1data...
2018 Mar 13
0
trashcan on dist. repl. volume with geo-replication
Hi Dietmar,
I am trying to understand the problem and have a few questions.
1. Is trashcan enabled only on the master volume?
2. Does the 'rm -rf' done on the master volume get synced to the slave?
3. If trashcan is disabled, does the issue go away?
The geo-rep error just says that it failed to create the directory
"Oracle_VM_VirtualBox_Extension" on slave.
Usually this would be because of gfid
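A hedged way to check for a gfid mismatch is to compare the virtual gfid xattr for the same path on master and slave fuse mounts (the mount points here are illustrative):
# getfattr -n glusterfs.gfid.string /mnt/master/Oracle_VM_VirtualBox_Extension
# getfattr -n glusterfs.gfid.string /mnt/slave/Oracle_VM_VirtualBox_Extension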
2017 Sep 29
1
Gluster geo replication volume is faulty
...s then created and started the geo replicated volume
I check the status and they switch between being active with history crawl and faulty with n/a every few seconds
MASTER NODE    MASTER VOL    MASTER BRICK        SLAVE USER      SLAVE                            SLAVE NODE    STATUS    CRAWL STATUS    LAST_SYNCED
-------------------------------------------------------------------------------------------------------------------------------------
gfs1           gfsvol        /gfs/arbiter/gv0    geo-rep-user    geo-rep-user at gfs4::gfsvol_rep   N/A           Faulty    N/A             N/A
gfs1...
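A session that flips between Active/History Crawl and Faulty often comes down to the SSH link to the slave; a sketch of verifying the geo-rep key non-interactively, assuming the default secret.pem location:
# ssh -i /var/lib/glusterd/geo-replication/secret.pem geo-rep-user@gfs4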
2018 Mar 12
2
trashcan on dist. repl. volume with geo-replication
Hello,
in regard to
https://bugzilla.redhat.com/show_bug.cgi?id=1434066
I have run into another issue when using the trashcan feature on a
dist. repl. volume running geo-replication (gfs 3.12.6 on ubuntu 16.04.4),
e.g. removing an entire directory with subfolders:
tron at gl-node1:/myvol-1/test1/b1$ rm -rf *
afterwards, listing the files in the trashcan:
tron at gl-node1:/myvol-1/test1$
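When enabled, the trashcan keeps deleted files under a hidden directory at the volume root, which geo-replication then has to sync as well; a sketch of inspecting it and, if needed, disabling the feature (volume name taken from the related status excerpt above):
# ls /myvol-1/.trashcan/
# gluster volume set mvol1 features.trash off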
2017 Oct 06
0
Gluster geo replication volume is faulty
...ster-georep-tools
>
> I check the status and they switch between being active with history crawl and faulty with n/a every few seconds
> MASTER NODE    MASTER VOL    MASTER BRICK        SLAVE USER      SLAVE                            SLAVE NODE    STATUS    CRAWL STATUS    LAST_SYNCED
> -------------------------------------------------------------------------------------------------------------------------------------
> gfs1           gfsvol        /gfs/arbiter/gv0    geo-rep-user    geo-rep-user at gfs4::gfsvol_rep   N/A           Faulty    N/A...