Hi Marcus,
There are no issues with geo-rep and disperse volumes. It works with a
disperse volume as master or slave, or both. You can run
replicated-distributed at the master and dispersed-distributed at the slave,
or dispersed-distributed at both master and slave. There was an issue with
lookup on / taking a long time because of eager locks in disperse, and that
has been fixed. Which version are you running?
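
If it helps to double-check, something like the following shows what you are
running and how the eager-lock option is set on the disperse volume (the
volume name is a placeholder, and the option name is from memory, so verify
it against 'gluster volume get <volname> all'):

    gluster --version
    gluster volume info <volname>
    gluster volume get <volname> disperse.eager-lock
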
Thanks,
Kotresh HR
On Fri, Mar 2, 2018 at 3:05 PM, Marcus Pedersén <marcus.pedersen at slu.se> wrote:
> Hi again,
> I have been testing and reading up on other solutions
> and just wanted to check if my ideas are ok.
> I have been looking at dispersed volumes and wonder if there are any
> problems running a replicated-distributed cluster on the master side and
> a dispersed-distributed cluster on the slave side of a geo-replication.
> Second thought: running dispersed on both sides, is that a problem
> (master: dispersed-distributed, slave: dispersed-distributed)?
>
> Many thanks in advance!
>
> Best regards
> Marcus
>
>
> On Thu, Feb 08, 2018 at 02:57:48PM +0530, Kotresh Hiremath Ravishankar
> wrote:
> > Answers inline
> >
> > On Thu, Feb 8, 2018 at 1:26 PM, Marcus Peders?n <marcus.pedersen at
slu.se>
> > wrote:
> >
> > > Thank you, Kotresh
> > >
> > > I talked to your storage colleagues at the Open Source Summit in Prague
> > > last year.
> > > I described my layout idea to them and they said it was a good solution.
> > > Sorry for mailing you in private, but I see this as your internal matter.
> > >
> > > The reason that I seem stressed is that I have already placed my order
> > > for new file servers for this, so I need to change it as soon as possible.
> > >
> > > So, a last double check with you:
> > > If I build the master cluster as I thought from the beginning,
> > > distributed/replicated (replica 3 arbiter 1) with in total 4 file servers
> > > and one arbiter (the same arbiter used for both "pairs"),
> > > and build the slave cluster the same way, distributed/replicated
> > > (replica 3 arbiter 1) with in total 4 file servers and one arbiter
> > > (the same arbiter used for both "pairs"),
> > > do I get a good technical solution?
> > >
> >
> > Yes, that works fine.
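> >
> > Roughly, the master volume you describe could be created like this (a
> > minimal sketch; hostnames and brick paths are placeholders, with the same
> > arbiter host carrying one arbiter brick per replica pair):
> >
> >     gluster volume create mastervol replica 3 arbiter 1 \
> >         fs1:/bricks/brick1 fs2:/bricks/brick1 arb:/bricks/arbiter1 \
> >         fs3:/bricks/brick1 fs4:/bricks/brick1 arb:/bricks/arbiter2
> >     gluster volume start mastervol
> >
> > The slave cluster would be built the same way with its own four file
> > servers and arbiter host.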
> >
> > >
> > > I liked your description of how the sync works; it made me understand
> > > much better how the system works!
> > >
> >
> > > Thank you very much for all your help!
> > >
> >
> > No problem. We are happy to help you.
> >
> > >
> > > Best regards
> > > Marcus
> > >
> > >
> > > On Wed, Feb 07, 2018 at 09:40:32PM +0530, Kotresh Hiremath Ravishankar wrote:
> > > > Answers inline
> > > >
> > > > On Wed, Feb 7, 2018 at 8:44 PM, Marcus Pedersén <marcus.pedersen at slu.se> wrote:
> > > >
> > > > > Thank you for your help!
> > > > > Just to make things clear to me (and get a better understanding of
> > > > > gluster):
> > > > > So, if I make the slave cluster just distributed and node 1 goes down,
> > > > > data (say file.txt) that belongs to node 1 will not be synced.
> > > > > When node 1 comes back up, does the master not realize that file.txt
> > > > > has not been synced, and make sure that it is synced when it has
> > > > > contact with node 1 again?
> > > > > So file.txt will not exist on node 1 at all?
> > > > >
> > > >
> > > > Geo-replication syncs changes based on the changelog journal, which
> > > > records all the file operations.
> > > > It syncs every file in two steps:
> > > > 1. File creation with the same attributes as on the master, via RPC
> > > > (CREATE is recorded in the changelog).
> > > > 2. Data sync via rsync (DATA is recorded in the changelog; any further
> > > > appends only record DATA).
> > > >
> > > > The changelog processing will not halt on encountering ENOENT (it
> > > > treats it as a safe error). It's not straightforward. When I said the
> > > > file won't be synced, I mean the file is created on node1, and when you
> > > > append data, the data would not sync because it gets ENOENT since node1
> > > > is down. But if the 'CREATE' of the file is not synced to node1, then it
> > > > is a persistent failure (ENOTCONN) and geo-rep waits till node1 comes
> > > > back.
> > > >
> > > > >
> > > > > I did a small test on my testing machines.
> > > > > Turned one of the geo machines off and created 10000 files, each
> > > > > containing one short string, on the master nodes.
> > > > > Nothing became synced to the geo slaves.
> > > > > When I turned on the geo machine again, all 10000 files were synced to
> > > > > the geo slaves.
> > > > > Of course, divided between the two machines.
> > > > > Is this the right/expected behavior of geo-replication with a
> > > > > distributed cluster?
> > > > >
> > > >
> > > > Yes, it's correct. As I said earlier, the CREATE itself would have
> > > > failed with ENOTCONN, so geo-rep waited till the slave came back.
> > > > Bring a slave node down and then append data to files which fall under
> > > > the node which is down, and you won't see the appended data.
> > > > So it's always recommended to use replica/ec/arbiter.
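> > > >
> > > > For reference, the test you describe can be repeated with something like
> > > > this (the client mount point is a placeholder; the volume and session
> > > > names are taken from the status output in your original mail below):
> > > >
> > > >     # create 10000 small files on a master client mount
> > > >     for i in $(seq 1 10000); do echo "short string" > /mnt/interbullfs/file$i; done
> > > >
> > > >     # then watch what geo-rep reports while the slave node is down/up
> > > >     gluster volume geo-replication interbullfs geouser@gluster-geo1::interbullfs-geo status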
> > > >
> > > > >
> > > > > Many thanks in advance!
> > > > >
> > > > > Regards
> > > > > Marcus
> > > > >
> > > > >
> > > > > On Wed, Feb 07, 2018 at 06:39:20PM +0530, Kotresh Hiremath Ravishankar wrote:
> > > > > > We are happy to help you out. Please find the answers inline.
> > > > > >
> > > > > > On Tue, Feb 6, 2018 at 4:39 PM, Marcus Pedersén <marcus.pedersen at slu.se> wrote:
> > > > > >
> > > > > > > Hi all,
> > > > > > >
> > > > > > > I am planning my new gluster system and have tested things out in
> > > > > > > a bunch of virtual machines.
> > > > > > > I need a bit of help to understand how geo-replication behaves.
> > > > > > >
> > > > > > > I have a master gluster cluster, replica 2
> > > > > > > (in production I will use an arbiter and replicated/distributed),
> > > > > > > and the geo cluster is distributed with 2 machines
> > > > > > > (in production I will have the geo cluster distributed).
> > > > > > >
> > > > > >
> > > > > > It's recommended that the slave also be distributed
> > > > > > replicate/arbiter/ec.
> > > > > > Choosing only distribute will cause issues when one of the slave
> > > > > > nodes is down and a file being synced belongs to that node. It would
> > > > > > not sync later.
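> > > > > >
> > > > > > As an example, a dispersed-distributed slave (two 2+1 subvolumes) and
> > > > > > the geo-rep session could be set up roughly like this (a sketch only;
> > > > > > hostnames, brick paths and volume names are placeholders, and a
> > > > > > non-root slave user such as geouser additionally needs the
> > > > > > mountbroker setup from the admin guide):
> > > > > >
> > > > > >     # on the slave side
> > > > > >     gluster volume create slavevol disperse 3 redundancy 1 \
> > > > > >         geo1:/bricks/b1 geo2:/bricks/b1 geo3:/bricks/b1 \
> > > > > >         geo1:/bricks/b2 geo2:/bricks/b2 geo3:/bricks/b2
> > > > > >     gluster volume start slavevol
> > > > > >
> > > > > >     # on the master side: create and start the session
> > > > > >     gluster system:: execute gsec_create
> > > > > >     gluster volume geo-replication mastervol geo1::slavevol create push-pem
> > > > > >     gluster volume geo-replication mastervol geo1::slavevol start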
> > > > > >
> > > > > >
> > > > > > > Everything is up and running, and creating files from a client
> > > > > > > both replicates and is distributed to the geo cluster.
> > > > > > >
> > > > > > > The thing I am wondering about is:
> > > > > > > When I run: gluster volume geo-replication status
> > > > > > > I see both slave nodes; one is active and the other is passive.
> > > > > > >
> > > > > > > MASTER NODE    MASTER VOL    MASTER BRICK    SLAVE USER    SLAVE                                           SLAVE NODE      STATUS     CRAWL STATUS       LAST_SYNCED
> > > > > > > -----------------------------------------------------------------------------------------------------------------------------------------------------------------------
> > > > > > > gluster1       interbullfs   /interbullfs    geouser       ssh://geouser at gluster-geo1::interbullfs-geo    gluster-geo2    Active     Changelog Crawl    2018-02-06 11:46:08
> > > > > > > gluster2       interbullfs   /interbullfs    geouser       ssh://geouser at gluster-geo1::interbullfs-geo    gluster-geo1    Passive    N/A                N/A
> > > > > > >
> > > > > > > If I shut down the active slave, the status changes to faulty
> > > > > > > and the other one continues to be passive.
> > > > > > >
> > > > > >
> > > > > > > MASTER NODE    MASTER VOL    MASTER BRICK    SLAVE USER    SLAVE                                           SLAVE NODE      STATUS     CRAWL STATUS    LAST_SYNCED
> > > > > > > ------------------------------------------------------------------------------------------------------------------------------------------------------------------
> > > > > > > gluster1       interbullfs   /interbullfs    geouser       ssh://geouser at gluster-geo1::interbullfs-geo    N/A             Faulty     N/A             N/A
> > > > > > > gluster2       interbullfs   /interbullfs    geouser       ssh://geouser at gluster-geo1::interbullfs-geo    gluster-geo1    Passive    N/A             N/A
> > > > > > >
> > > > > > > In my understanding I thought that if the active slave stopped
> > > > > > > working, the passive slave should become active and should
> > > > > > > continue to replicate from the master.
> > > > > > >
> > > > > > > Am I wrong? Is there just one active slave if it is set up as
> > > > > > > a distributed system?
> > > > > > >
> > > > > >
> > > > > > The Active/Passive notion is for the master nodes. If the gluster1
> > > > > > master node is down, the gluster2 master node will become Active.
> > > > > > It's not for the slave nodes.
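> > > > > >
> > > > > > So to observe the failover you can keep re-running the same status
> > > > > > command (session name taken from your output above):
> > > > > >
> > > > > >     gluster volume geo-replication interbullfs geouser@gluster-geo1::interbullfs-geo status
> > > > > >
> > > > > > With the gluster1 master node down, the gluster2 row should change
> > > > > > from Passive to Active.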
> > > > > >
> > > > > >
> > > > > >
> > > > > > >
> > > > > > > What I use:
> > > > > > > CentOS 7, gluster 3.12
> > > > > > > I have followed the geo-replication instructions:
> > > > > > > http://docs.gluster.org/en/latest/Administrator%20Guide/Geo%20Replication/
> > > > > > >
> > > > > > > Many thanks in advance!
> > > > > > >
> > > > > > > Best regards
> > > > > > > Marcus
> > > > > > >
> > > > > > > --
> > > > > > > **************************************************
> > > > > > > * Marcus Pedersén *
> > > > > > > * System administrator *
> > > > > > > **************************************************
> > > > > > > * Interbull Centre *
> > > > > > > * ================ *
> > > > > > > * Department of Animal Breeding & Genetics - SLU *
> > > > > > > * Box 7023, SE-750 07 *
> > > > > > > * Uppsala, Sweden *
> > > > > > > **************************************************
> > > > > > > * Visiting address: *
> > > > > > > * Room 55614, Ulls väg 26, Ultuna *
> > > > > > > * Uppsala *
> > > > > > > * Sweden *
> > > > > > > * *
> > > > > > > * Tel: +46-(0)18-67 1962 *
> > > > > > > * *
> > > > > > > **************************************************
> > > > > > > * ISO 9001 Bureau Veritas No SE004561-1 *
> > > > > > > **************************************************
> > > > > > > _______________________________________________
> > > > > > > Gluster-users mailing list
> > > > > > > Gluster-users at gluster.org
> > > > > > > http://lists.gluster.org/mailman/listinfo/gluster-users
> > > > > >
> > > > > >
> > > > > >
> > > > > >
> > > > > > --
> > > > > > Thanks and Regards,
> > > > > > Kotresh H R
> > > > >
> > > > > --
> > > > > **************************************************
> > > > > * Marcus Pedersén *
> > > > > * System administrator *
> > > > > **************************************************
> > > > > * Interbull Centre *
> > > > > * ================ *
> > > > > * Department of Animal Breeding & Genetics - SLU *
> > > > > * Box 7023, SE-750 07 *
> > > > > * Uppsala, Sweden *
> > > > > **************************************************
> > > > > * Visiting address: *
> > > > > * Room 55614, Ulls väg 26, Ultuna *
> > > > > * Uppsala *
> > > > > * Sweden *
> > > > > * *
> > > > > * Tel: +46-(0)18-67 1962 *
> > > > > * *
> > > > > **************************************************
> > > > > * ISO 9001 Bureau Veritas No SE004561-1 *
> > > > > **************************************************
> > > > >
> > > >
> > > >
> > > >
> > > > --
> > > > Thanks and Regards,
> > > > Kotresh H R
> > >
> > > --
> > > **************************************************
> > > * Marcus Pedersén *
> > > * System administrator *
> > > **************************************************
> > > * Interbull Centre *
> > > * ================ *
> > > * Department of Animal Breeding & Genetics - SLU *
> > > * Box 7023, SE-750 07 *
> > > * Uppsala, Sweden *
> > > **************************************************
> > > * Visiting address: *
> > > * Room 55614, Ulls väg 26, Ultuna *
> > > * Uppsala *
> > > * Sweden *
> > > * *
> > > * Tel: +46-(0)18-67 1962 *
> > > * *
> > > **************************************************
> > > * ISO 9001 Bureau Veritas No SE004561-1 *
> > > **************************************************
> > >
> >
> >
> >
> > --
> > Thanks and Regards,
> > Kotresh H R
>
> --
> **************************************************
> * Marcus Pedersén *
> * System administrator *
> **************************************************
> * Interbull Centre *
> * ================ *
> * Department of Animal Breeding & Genetics - SLU *
> * Box 7023, SE-750 07 *
> * Uppsala, Sweden *
> **************************************************
> * Visiting address: *
> * Room 55614, Ulls väg 26, Ultuna *
> * Uppsala *
> * Sweden *
> * *
> * Tel: +46-(0)18-67 1962 *
> * *
> **************************************************
> * ISO 9001 Bureau Veritas No SE004561-1 *
> **************************************************
>
--
Thanks and Regards,
Kotresh H R