Unsubscribe
On Mon, Apr 6, 2020 at 6:10 PM Oskar Pienkos <oskarp10 at hotmail.com>
wrote:
> Unsubscribe
>
> Sent from Outlook <http://aka.ms/weboutlook>
>
> ------------------------------
> *From:* gluster-users-bounces at gluster.org <gluster-users-bounces at gluster.org> on behalf of gluster-users-request at gluster.org <gluster-users-request at gluster.org>
> *Sent:* April 6, 2020 5:00 AM
> *To:* gluster-users at gluster.org <gluster-users at gluster.org>
> *Subject:* Gluster-users Digest, Vol 144, Issue 6
>
> Send Gluster-users mailing list submissions to
> gluster-users at gluster.org
>
> To subscribe or unsubscribe via the World Wide Web, visit
> https://lists.gluster.org/mailman/listinfo/gluster-users
> or, via email, send a message with subject or body 'help' to
> gluster-users-request at gluster.org
>
> You can reach the person managing the list at
> gluster-users-owner at gluster.org
>
> When replying, please edit your Subject line so it is more specific
> than "Re: Contents of Gluster-users digest..."
>
>
> Today's Topics:
>
> 1. Re: gnfs split brain when 1 server in 3x1 down (high load) -
> help request (Erik Jacobson)
> 2. Re: gnfs split brain when 1 server in 3x1 down (high load) -
> help request (Erik Jacobson)
> 3. Re: Repository down ? (Hu Bert)
> 4. One error/warning message after upgrade 5.11 -> 6.8 (Hu Bert)
> 5. Re: gnfs split brain when 1 server in 3x1 down (high load) -
> help request (Ravishankar N)
> 6. Gluster testcase Hackathon (Hari Gowtham)
> 7. gluster v6.8: systemd units disabled after install (Hu Bert)
>
>
> ----------------------------------------------------------------------
>
> Message: 1
> Date: Sun, 5 Apr 2020 18:49:56 -0500
> From: Erik Jacobson <erik.jacobson at hpe.com>
> To: Ravishankar N <ravishankar at redhat.com>
> Cc: gluster-users at gluster.org
> Subject: Re: [Gluster-users] gnfs split brain when 1 server in 3x1
> down (high load) - help request
> Message-ID: <20200405234956.GB29598 at metalio.americas.hpqcorp.net>
> Content-Type: text/plain; charset=us-ascii
>
> First, it's possible our analysis is off somewhere. I never get to your
> print message. I put a debug statement at the start of the function so I
> know we get there (just to verify my print statements were taking
> effect).
>
> I put a print statement for the if (call_count == 0) { call there, right
> after the if. I ran some tests.
>
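> Roughly, what I mean is something like this (just a sketch, not the exact
> statement I used; "err" stands in for whatever variable holds the errno at
> that point):
>
>     call_count = afr_frame_return(frame);
>     if (call_count == 0) {
>         /* dbg sketch: log once all children have replied */
>         gf_msg("erikj-dbg", GF_LOG_ERROR, err, 0,
>                "dbg: all children replied, errno=%d", err);
>         /* ... existing code that ends in afr_inode_refresh_done() ... */
>     }
>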
> I suspect that isn't a problem area. There were some interesting results
> with an NFS stale file handle error going through that path. Otherwise
> it's always errno=0 even in the heavy test case. I'm not concerned about
> a stale NFS file handle at this moment. That print was also hit heavily
> when one server was down (which surprised me but I don't know the
> internals).
>
> I'm trying to re-read and work through Scott's message to see if any
> other print statements might be helpful.
>
> Thank you for your help so far. I will reply back if I find something.
> Otherwise suggestions welcome!
>
> The MFG system I can access got smaller this weekend but is still large
> enough to reproduce the error.
>
> As you can tell, I work mostly at a level well above filesystem code so
> thank you for staying with me as I struggle through this.
>
> Erik
>
> > After we hear from all children, afr_inode_refresh_subvol_cbk() then calls
> > afr_inode_refresh_done()-->afr_txn_refresh_done()-->afr_read_txn_refresh_done().
> > But you already know this flow now.
>
> > diff --git a/xlators/cluster/afr/src/afr-common.c b/xlators/cluster/afr/src/afr-common.c
> > index 4bfaef9e8..096ce06f0 100644
> > --- a/xlators/cluster/afr/src/afr-common.c
> > +++ b/xlators/cluster/afr/src/afr-common.c
> > @@ -1318,6 +1318,12 @@ afr_inode_refresh_subvol_cbk(call_frame_t *frame, void *cookie, xlator_t *this,
> >          if (xdata)
> >              local->replies[call_child].xdata = dict_ref(xdata);
> >          }
> > +    if (op_ret == -1)
> > +        gf_msg_callingfn(
> > +            this->name, GF_LOG_ERROR, op_errno, AFR_MSG_SPLIT_BRAIN,
> > +            "Inode refresh on child:%d failed with errno:%d for %s(%s) ",
> > +            call_child, op_errno, local->loc.name,
> > +            uuid_utoa(local->loc.inode->gfid));
> >          if (xdata) {
> >              ret = dict_get_int8(xdata, "link-count", &need_heal);
> >              local->replies[call_child].need_heal = need_heal;
>
>
>
> ------------------------------
>
> Message: 2
> Date: Sun, 5 Apr 2020 20:22:21 -0500
> From: Erik Jacobson <erik.jacobson at hpe.com>
> To: Ravishankar N <ravishankar at redhat.com>
> Cc: gluster-users at gluster.org
> Subject: Re: [Gluster-users] gnfs split brain when 1 server in 3x1
> down (high load) - help request
> Message-ID: <20200406012221.GD29598 at metalio.americas.hpqcorp.net>
> Content-Type: text/plain; charset=us-ascii
>
> During the problem case, near as I can tell, in afr_final_errno(),
> in the loop where tmp_errno = local->replies[i].op_errno is set,
> the errno is always "2" when it gets to that point on server 3 (where
> the NFS load is).
>
> I never see a value other than 2.
>
> I later simply put the print at the end of the function too, to double
> verify non-zero exit codes. There are thousands of non-zero return
> codes, all 2 when not zero. Here is an example flow right before a
> split-brain. I do not wish to imply the split-brain is related, it's
> just an example log snip:
>
>
> [2020-04-06 00:54:21.125373] E [MSGID: 0]
> [afr-common.c:2546:afr_final_errno] 0-erikj-afr_final_errno: erikj dbg
> afr_final_errno() errno from loop before afr_higher_errno was: 2
> [2020-04-06 00:54:21.125374] E [MSGID: 0]
> [afr-common.c:2551:afr_final_errno] 0-erikj-afr_final_errno: erikj dbg
> returning non-zero: 2
> [2020-04-06 00:54:23.315397] E [MSGID: 0]
> [afr-read-txn.c:283:afr_read_txn_refresh_done] 0-cm_shared-replicate-0:
> erikj dbg crapola 1st if in afr_read_txn_refresh_done()
> !priv->thin_arbiter_count -- goto to readfn
> [2020-04-06 00:54:23.315432] E [MSGID: 108008]
> [afr-read-txn.c:314:afr_read_txn_refresh_done] 0-cm_shared-replicate-0:
> Failing READLINK on gfid 57f269ef-919d-40ec-b7fc-a7906fee648b: split-brain
> observed. [Input/output error]
> [2020-04-06 00:54:23.315450] W [MSGID: 112199]
> [nfs3-helpers.c:3327:nfs3_log_readlink_res] 0-nfs-nfsv3:
> /image/images_ro_nfs/rhel8.0/usr/lib64/libmlx5.so.1 => (XID: 1fdba2bc,
> READLINK: NFS: 5(I/O error), POSIX: 5(Input/output error)) target: (null)
>
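> For reference, afr_final_errno() is roughly the following, with the two
> dbg prints sketched in where I described them (paraphrased, not the exact
> statements I used; the domain string and msgid 0 are placeholders chosen
> to resemble the log lines above):
>
>     int
>     afr_final_errno(afr_local_t *local, afr_private_t *priv)
>     {
>         int i = 0;
>         int op_errno = 0;
>         int tmp_errno = 0;
>
>         for (i = 0; i < priv->child_count; i++) {
>             if (!local->replies[i].valid)
>                 continue;
>             if (local->replies[i].op_ret >= 0)
>                 continue;
>             tmp_errno = local->replies[i].op_errno;
>             /* dbg sketch: print every errno considered in the loop */
>             gf_msg("erikj-afr_final_errno", GF_LOG_ERROR, tmp_errno, 0,
>                    "erikj dbg afr_final_errno() errno from loop before "
>                    "afr_higher_errno was: %d", tmp_errno);
>             op_errno = afr_higher_errno(op_errno, tmp_errno);
>         }
>
>         /* dbg sketch: the print added at the end of the function */
>         if (op_errno)
>             gf_msg("erikj-afr_final_errno", GF_LOG_ERROR, op_errno, 0,
>                    "erikj dbg returning non-zero: %d", op_errno);
>
>         return op_errno;
>     }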
>
> I am missing something. I will see if Scott and I can work together
> tomorrow. Happy for any more ideas, Thank you!!
>
>
> On Sun, Apr 05, 2020 at 06:49:56PM -0500, Erik Jacobson wrote:
> > First, it's possible our analysis is off somewhere. I never get to your
> > print message. I put a debug statement at the start of the function so I
> > know we get there (just to verify my print statements were taking
> > effect).
> >
> > I put a print statement for the if (call_count == 0) { call there, right
> > after the if. I ran some tests.
> >
> > I suspect that isn't a problem area. There were some interesting results
> > with an NFS stale file handle error going through that path. Otherwise
> > it's always errno=0 even in the heavy test case. I'm not concerned about
> > a stale NFS file handle at this moment. That print was also hit heavily
> > when one server was down (which surprised me but I don't know the
> > internals).
> >
> > I'm trying to re-read and work through Scott's message to see if any
> > other print statements might be helpful.
> >
> > Thank you for your help so far. I will reply back if I find something.
> > Otherwise suggestions welcome!
> >
> > The MFG system I can access got smaller this weekend but is still large
> > enough to reproduce the error.
> >
> > As you can tell, I work mostly at a level well above filesystem code so
> > thank you for staying with me as I struggle through this.
> >
> > Erik
> >
> > > After we hear from all children, afr_inode_refresh_subvol_cbk() then calls
> > > afr_inode_refresh_done()-->afr_txn_refresh_done()-->afr_read_txn_refresh_done().
> > > But you already know this flow now.
> >
> > > diff --git a/xlators/cluster/afr/src/afr-common.c b/xlators/cluster/afr/src/afr-common.c
> > > index 4bfaef9e8..096ce06f0 100644
> > > --- a/xlators/cluster/afr/src/afr-common.c
> > > +++ b/xlators/cluster/afr/src/afr-common.c
> > > @@ -1318,6 +1318,12 @@ afr_inode_refresh_subvol_cbk(call_frame_t *frame, void *cookie, xlator_t *this,
> > >          if (xdata)
> > >              local->replies[call_child].xdata = dict_ref(xdata);
> > >          }
> > > +    if (op_ret == -1)
> > > +        gf_msg_callingfn(
> > > +            this->name, GF_LOG_ERROR, op_errno, AFR_MSG_SPLIT_BRAIN,
> > > +            "Inode refresh on child:%d failed with errno:%d for %s(%s) ",
> > > +            call_child, op_errno, local->loc.name,
> > > +            uuid_utoa(local->loc.inode->gfid));
> > >          if (xdata) {
> > >              ret = dict_get_int8(xdata, "link-count", &need_heal);
> > >              local->replies[call_child].need_heal = need_heal;
>
>
>
>
> ------------------------------
>
> Message: 3
> Date: Mon, 6 Apr 2020 06:03:22 +0200
> From: Hu Bert <revirii at googlemail.com>
> To: Renaud Fortier <Renaud.Fortier at fsaa.ulaval.ca>
> Cc: "gluster-users at gluster.org" <gluster-users at gluster.org>
> Subject: Re: [Gluster-users] Repository down ?
> Message-ID:
>         <CAAV-989P_=hNXj_PJdEBB8rG29bjrzwDY3X+MBHi40zddzPgg at mail.gmail.com>
> Content-Type: text/plain; charset="UTF-8"
>
> Good morning,
>
> I upgraded from 5.11 to 6.8 today; 2 servers worked smoothly, one again
> had connection problems:
>
> Err:1 https://download.gluster.org/pub/gluster/glusterfs/6/6.8/Debian/buster/amd64/apt
>   buster/main amd64 libglusterfs-dev amd64 6.8-1
>   Could not connect to download.gluster.org:443 (8.43.85.185),
>   connection timed out
> Err:2 https://download.gluster.org/pub/gluster/glusterfs/6/6.8/Debian/buster/amd64/apt
>   buster/main amd64 libgfxdr0 amd64 6.8-1
>   Unable to connect to download.gluster.org:https:
>
> As a workaround I downloaded the packages manually on one of the other
> 2 servers, copied them to server3 and installed them manually.
>
> Any idea why this happens? /etc/hosts, /etc/resolv.conf are identical.
> The servers are behind the same gateway (switch in datacenter of
> provider), the server IPs differ only in the last number.
>
>
> Best regards,
> Hubert
>
> On Fri, Apr 3, 2020 at 10:33 AM Hu Bert <revirii at googlemail.com> wrote:
> >
> > OK, half an hour later it worked. Not funny during an upgrade.
> > Strange... :-)
> >
> >
> > Regards,
> > Hubert
> >
> > On Fri, Apr 3, 2020 at 10:19 AM Hu Bert <revirii at googlemail.com> wrote:
> > >
> > > Hi,
> > >
> > > I'm currently preparing an upgrade 5.x -> 6.8; the download of the
> > > repository key works on 2 of 3 servers. Nameserver settings are
> > > identical. On the 3rd server I get this:
> > >
> > > wget -O - https://download.gluster.org/pub/gluster/glusterfs/6/rsa.pub | apt-key add -
> > > --2020-04-03 10:15:43--  https://download.gluster.org/pub/gluster/glusterfs/6/rsa.pub
> > > Resolving download.gluster.org (download.gluster.org)... 8.43.85.185
> > > Connecting to download.gluster.org (download.gluster.org)|8.43.85.185|:443... failed: Connection timed out.
> > > Retrying.
> > >
> > > and this goes on and on... Which errors do you see?
> > >
> > >
> > > Regards,
> > > Hubert
> > >
> > > On Mon, Mar 30, 2020 at 8:40 PM Renaud Fortier
> > > <Renaud.Fortier at fsaa.ulaval.ca> wrote:
> > > >
> > > > Hi,
> > > >
> > > > I'm trying to download packages from the gluster repository
> > > > https://download.gluster.org/ but it failed for every download I've tried.
> > > >
> > > >
> > > >
> > > > Is it happening only to me?
> > > >
> > > > Thank you
> > > >
> > > >
> > > >
> > > > Renaud Fortier
> > > >
> > > >
> > > >
> > > > ________
> > > >
> > > >
> > > >
> > > > Community Meeting Calendar:
> > > >
> > > > Schedule -
> > > > Every 2nd and 4th Tuesday at 14:30 IST / 09:00 UTC
> > > > Bridge: https://bluejeans.com/441850968
> > > >
> > > > Gluster-users mailing list
> > > > Gluster-users at gluster.org
> > > > https://lists.gluster.org/mailman/listinfo/gluster-users
>
>
> ------------------------------
>
> Message: 4
> Date: Mon, 6 Apr 2020 06:13:23 +0200
> From: Hu Bert <revirii at googlemail.com>
> To: gluster-users <gluster-users at gluster.org>
> Subject: [Gluster-users] One error/warning message after upgrade 5.11
> -> 6.8
> Message-ID:
>         <CAAV-988kaD0bat2p3Xfw8vHYhY-pcekeoPOyLq297XEqRosSyw at mail.gmail.com>
> Content-Type: text/plain; charset="UTF-8"
>
> Hello,
>
> I just upgraded my servers and clients from 5.11 to 6.8; besides one
> connection problem to the gluster download server everything went
> fine.
>
> On the 3 gluster servers I mount the 2 volumes as well, and only there
> (not on any of the other clients) do these messages appear in the logs
> of both mounts:
>
> [2020-04-06 04:10:53.552561] W [MSGID: 114031]
> [client-rpc-fops_v2.c:851:client4_0_setxattr_cbk]
> 0-persistent-client-2: remote operation failed [Permission denied]
> [2020-04-06 04:10:53.552635] W [MSGID: 114031]
> [client-rpc-fops_v2.c:851:client4_0_setxattr_cbk]
> 0-persistent-client-1: remote operation failed [Permission denied]
> [2020-04-06 04:10:53.552639] W [MSGID: 114031]
> [client-rpc-fops_v2.c:851:client4_0_setxattr_cbk]
> 0-persistent-client-0: remote operation failed [Permission denied]
> [2020-04-06 04:10:53.553226] E [MSGID: 148002]
> [utime.c:146:gf_utime_set_mdata_setxattr_cbk] 0-persistent-utime: dict
> set of key for set-ctime-mdata failed [Permission denied]
> The message "W [MSGID: 114031]
> [client-rpc-fops_v2.c:851:client4_0_setxattr_cbk]
> 0-persistent-client-2: remote operation failed [Permission denied]"
> repeated 4 times between [2020-04-06 04:10:53.552561] and [2020-04-06
> 04:10:53.745542]
> The message "W [MSGID: 114031]
> [client-rpc-fops_v2.c:851:client4_0_setxattr_cbk]
> 0-persistent-client-1: remote operation failed [Permission denied]"
> repeated 4 times between [2020-04-06 04:10:53.552635] and [2020-04-06
> 04:10:53.745610]
> The message "W [MSGID: 114031]
> [client-rpc-fops_v2.c:851:client4_0_setxattr_cbk]
> 0-persistent-client-0: remote operation failed [Permission denied]"
> repeated 4 times between [2020-04-06 04:10:53.552639] and [2020-04-06
> 04:10:53.745632]
> The message "E [MSGID: 148002]
> [utime.c:146:gf_utime_set_mdata_setxattr_cbk] 0-persistent-utime: dict
> set of key for set-ctime-mdata failed [Permission denied]" repeated 4
> times between [2020-04-06 04:10:53.553226] and [2020-04-06
> 04:10:53.746080]
>
> Anything to worry about?
>
>
> Regards,
> Hubert
>
>
> ------------------------------
>
> Message: 5
> Date: Mon, 6 Apr 2020 10:06:41 +0530
> From: Ravishankar N <ravishankar at redhat.com>
> To: Erik Jacobson <erik.jacobson at hpe.com>
> Cc: gluster-users at gluster.org
> Subject: Re: [Gluster-users] gnfs split brain when 1 server in 3x1
> down (high load) - help request
> Message-ID: <4f5c5a73-69b9-f575-5974-d582e2a06051 at redhat.com>
> Content-Type: text/plain; charset=windows-1252; format=flowed
>
> afr_final_errno() is called from many places other than the inode
> refresh code path, so the 2 (ENOENT) could be from one of those (mostly
> afr_lookup_done), but it is puzzling that you are not seeing EIO even
> once when it is called from the afr_inode_refresh_subvol_cbk() code path.
> Not sure what is happening here.
>
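> (A sketch of one way to narrow that down: gf_msg_callingfn(), which the
> patch below already uses, also logs the name of the calling function, so a
> temporary line like the following inside afr_final_errno() would show which
> caller produced each errno. The "afr" domain string and msgid 0 are
> placeholders.)
>
>     /* sketch: log each non-zero errno together with the caller's name */
>     if (tmp_errno)
>         gf_msg_callingfn("afr", GF_LOG_INFO, tmp_errno, 0,
>                          "errno %d seen in afr_final_errno()", tmp_errno);
>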
> On 06/04/20 6:52 am, Erik Jacobson wrote:
> > During the problem case, near as I can tell, in afr_final_errno(),
> > in the loop where tmp_errno = local->replies[i].op_errno is set,
> > the errno is always "2" when it gets to that point on server 3 (where
> > the NFS load is).
> >
> > I never see a value other than 2.
> >
> > I later simply put the print at the end of the function too, to double
> > verify non-zero exit codes. There are thousands of non-zero return
> > codes, all 2 when not zero. Here is an example flow right before a
> > split-brain. I do not wish to imply the split-brain is related, it's
> > just an example log snip:
> >
> >
> > [2020-04-06 00:54:21.125373] E [MSGID: 0]
> > [afr-common.c:2546:afr_final_errno] 0-erikj-afr_final_errno: erikj dbg
> > afr_final_errno() errno from loop before afr_higher_errno was: 2
> > [2020-04-06 00:54:21.125374] E [MSGID: 0]
> > [afr-common.c:2551:afr_final_errno] 0-erikj-afr_final_errno: erikj dbg
> > returning non-zero: 2
> > [2020-04-06 00:54:23.315397] E [MSGID: 0]
> > [afr-read-txn.c:283:afr_read_txn_refresh_done] 0-cm_shared-replicate-0:
> > erikj dbg crapola 1st if in afr_read_txn_refresh_done()
> > !priv->thin_arbiter_count -- goto to readfn
> > [2020-04-06 00:54:23.315432] E [MSGID: 108008]
> > [afr-read-txn.c:314:afr_read_txn_refresh_done] 0-cm_shared-replicate-0:
> > Failing READLINK on gfid 57f269ef-919d-40ec-b7fc-a7906fee648b: split-brain
> > observed. [Input/output error]
> > [2020-04-06 00:54:23.315450] W [MSGID: 112199]
> > [nfs3-helpers.c:3327:nfs3_log_readlink_res] 0-nfs-nfsv3:
> > /image/images_ro_nfs/rhel8.0/usr/lib64/libmlx5.so.1 => (XID: 1fdba2bc,
> > READLINK: NFS: 5(I/O error), POSIX: 5(Input/output error)) target: (null)
> >
> >
> > I am missing something. I will see if Scott and I can work together
> > tomorrow. Happy for any more ideas, Thank you!!
> >
> >
> > On Sun, Apr 05, 2020 at 06:49:56PM -0500, Erik Jacobson wrote:
> >> First, it's possible our analysis is off somewhere. I never get to your
> >> print message. I put a debug statement at the start of the function so I
> >> know we get there (just to verify my print statements were taking
> >> effect).
> >>
> >> I put a print statement for the if (call_count == 0) { call there, right
> >> after the if. I ran some tests.
> >>
> >> I suspect that isn't a problem area. There were some interesting results
> >> with an NFS stale file handle error going through that path. Otherwise
> >> it's always errno=0 even in the heavy test case. I'm not concerned about
> >> a stale NFS file handle at this moment. That print was also hit heavily
> >> when one server was down (which surprised me but I don't know the
> >> internals).
> >>
> >> I'm trying to re-read and work through Scott's message to see if any
> >> other print statements might be helpful.
> >>
> >> Thank you for your help so far. I will reply back if I find something.
> >> Otherwise suggestions welcome!
> >>
> >> The MFG system I can access got smaller this weekend but is still large
> >> enough to reproduce the error.
> >>
> >> As you can tell, I work mostly at a level well above filesystem code so
> >> thank you for staying with me as I struggle through this.
> >>
> >> Erik
> >>
> >>> After we hear from all children, afr_inode_refresh_subvol_cbk() then calls
> >>> afr_inode_refresh_done()-->afr_txn_refresh_done()-->afr_read_txn_refresh_done().
> >>> But you already know this flow now.
> >>> diff --git a/xlators/cluster/afr/src/afr-common.c b/xlators/cluster/afr/src/afr-common.c
> >>> index 4bfaef9e8..096ce06f0 100644
> >>> --- a/xlators/cluster/afr/src/afr-common.c
> >>> +++ b/xlators/cluster/afr/src/afr-common.c
> >>> @@ -1318,6 +1318,12 @@ afr_inode_refresh_subvol_cbk(call_frame_t *frame, void *cookie, xlator_t *this,
> >>>          if (xdata)
> >>>              local->replies[call_child].xdata = dict_ref(xdata);
> >>>          }
> >>> +    if (op_ret == -1)
> >>> +        gf_msg_callingfn(
> >>> +            this->name, GF_LOG_ERROR, op_errno, AFR_MSG_SPLIT_BRAIN,
> >>> +            "Inode refresh on child:%d failed with errno:%d for %s(%s) ",
> >>> +            call_child, op_errno, local->loc.name,
> >>> +            uuid_utoa(local->loc.inode->gfid));
> >>>          if (xdata) {
> >>>              ret = dict_get_int8(xdata, "link-count", &need_heal);
> >>>              local->replies[call_child].need_heal = need_heal;
> >
>
>
>
> ------------------------------
>
> Message: 6
> Date: Mon, 6 Apr 2020 10:54:10 +0530
> From: Hari Gowtham <hgowtham at redhat.com>
> To: gluster-users <gluster-users at gluster.org>, gluster-devel
> <gluster-devel at gluster.org>
> Subject: [Gluster-users] Gluster testcase Hackathon
> Message-ID:
>         <CAKh1kXshSzz1FxmoitCPLyX7_jXe+=j0Nqwt3__hfr671jN8w at mail.gmail.com>
> Content-Type: text/plain; charset="utf-8"
>
> Hi all,
>
> We have been seeing a good number of CI test cases failing. This becomes a
> problem when taking in new fixes, and having tests marked as bad reduces
> the test coverage. As a result, we are planning to have a hackathon on 9th
> April. This will be a virtual one, happening on Google Meet. Information on
> how to join can be found below[4].
>
> We will be working on the tests that are currently failing spuriously[1]
> and also on the test cases that have been marked as bad[2] in the past, and
> fix these. The consolidated list[2] has all the test cases we will look
> into during this hackathon. If any of you have come across a test which has
> failed and was missed in the list, feel free to add the test case and the
> link to its failure to the consolidated list[2].
>
> A prerequisite would be a CentOS machine where you can clone gluster and
> work on it.
> The action item would be to take up as many test cases as possible (write
> down your name against the test case to avoid rework), run them in your
> local environment, find out why they fail, send out a fix, and review
> others' patches. If you want to add more test cases as per your usage, feel
> free to do so.
>
> You can go through the link[3] to get the basic idea of how to set up and
> contribute. If there are more questions, please feel free to ask them; you
> can also discuss them with us during the hackathon and sort them out there.
>
> We will send out a Google Calendar invite soon. The tentative timing will
> be from 11am to 4:30pm IST.
>
> [1] https://fstat.gluster.org/summary
> [2]
> https://docs.google.com/spreadsheets/d/1_j_JfJw1YjEziVT1pe8I_8-kMf7LVVmlleaukEikHw0/edit?usp=sharing
> [3]
> https://docs.gluster.org/en/latest/Developer-guide/Simplified-Development-Workflow/
> [4] To join the video meeting, click this link:
> https://meet.google.com/cde-uycs-koz
> Otherwise, to join by phone, dial +1 501-939-4201 and enter this PIN: 803
> 155 592#
> To view more phone numbers, click this link:
> https://tel.meet/cde-uycs-koz?hs=5
>
> --
> Regards,
> Hari Gowtham.
>
> ------------------------------
>
> Message: 7
> Date: Mon, 6 Apr 2020 12:30:41 +0200
> From: Hu Bert <revirii at googlemail.com>
> To: gluster-users <gluster-users at gluster.org>
> Subject: [Gluster-users] gluster v6.8: systemd units disabled after
> install
> Message-ID:
>         <CAAV-98-KT5iv63N32S_ta0A9Fw1YaKZQWhNjB5xR4jU4NTbeg at mail.gmail.com>
> Content-Type: text/plain; charset="UTF-8"
>
> Hello,
>
> After a server reboot (with a fresh gluster 6.8 install) I noticed
> that the gluster services weren't running.
>
> systemctl status glusterd.service
> ● glusterd.service - GlusterFS, a clustered file-system server
> Loaded: loaded (/lib/systemd/system/glusterd.service; disabled;
> vendor preset: enabled)
> Active: inactive (dead)
> Docs: man:glusterd(8)
>
> Apr 06 11:34:18 glfsserver1 systemd[1]:
> /lib/systemd/system/glusterd.service:9: PIDFile= references path below
> legacy directory /var/run/, updating /var/run/glusterd.pid →
> /run/glusterd.pid; please update the unit file accordingly.
>
> systemctl status glustereventsd.service
> ● glustereventsd.service - Gluster Events Notifier
> Loaded: loaded (/lib/systemd/system/glustereventsd.service;
> disabled; vendor preset: enabled)
> Active: inactive (dead)
> Docs: man:glustereventsd(8)
>
> Apr 06 11:34:27 glfsserver1 systemd[1]:
> /lib/systemd/system/glustereventsd.service:11: PIDFile= references
> path below legacy directory /var/run/, updating
> /var/run/glustereventsd.pid → /run/glustereventsd.pid; please update
> the unit file accordingly.
>
> You have to enable them manually:
>
> systemctl enable glusterd.service
> Created symlink
> /etc/systemd/system/multi-user.target.wants/glusterd.service →
> /lib/systemd/system/glusterd.service.
> systemctl enable glustereventsd.service
> Created symlink
> /etc/systemd/system/multi-user.target.wants/glustereventsd.service →
> /lib/systemd/system/glustereventsd.service.
>
> Is this a bug? If so, is it already known?
>
>
> Regards,
> Hubert
>
>
> ------------------------------
>
> _______________________________________________
> Gluster-users mailing list
> Gluster-users at gluster.org
> https://lists.gluster.org/mailman/listinfo/gluster-users
>
> End of Gluster-users Digest, Vol 144, Issue 6
> *********************************************
> ________
>
>
>
> Community Meeting Calendar:
>
> Schedule -
> Every 2nd and 4th Tuesday at 14:30 IST / 09:00 UTC
> Bridge: https://bluejeans.com/441850968
>
> Gluster-users mailing list
> Gluster-users at gluster.org
> https://lists.gluster.org/mailman/listinfo/gluster-users
>
--
Suvendu Mitra
GSM - +358504821066