Hi Marco,
The checksum difference refers to a difference in the contents of the
`/var/lib/glusterd` directory. Could you compare the contents and see
whether anything differs between gluster-9 and gluster-10?
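For example, something along these lines should show whether the stored
volume definition diverged (a rough sketch; the paths assume the default
glusterd working directory, and the volume/peer names are taken from your
logs below):

    # checksum glusterd stores for this volume on the local node
    cat /var/lib/glusterd/vols/Backup_Storage/cksum

    # diff the volume definition against the copy on a gluster-9 peer
    # (ovirt-node3-storage is the peer name from your logs; adjust as needed)
    ssh ovirt-node3-storage cat /var/lib/glusterd/vols/Backup_Storage/info |
        diff /var/lib/glusterd/vols/Backup_Storage/info -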
Also, if you do find a difference, please save it, then upgrade one more
node to gluster-10 and check whether the two upgraded nodes return to the
`connected` state (considering this is your test environment).
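Once the second node is upgraded, `gluster peer status` on any node should
tell you where things stand, e.g.:

    gluster peer status
    # each peer should show:  State: Peer in Cluster (Connected)
    # a rejected peer shows:  State: Peer Rejected (Connected)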
--
Thanks and Regards,
*NiKHIL LADHA*
On Fri, Nov 19, 2021 at 4:07 AM Marco Fais <evilmf at gmail.com> wrote:
> Hi all,
>
> is the online upgrade from 9.x to 10.0 supported?
>
> I am experimenting with it in our test cluster; following the procedure,
> the upgraded node always ends up with its peers in the "peer rejected"
> status.
>
> In the upgraded node's logs I can see:
> [2021-11-18 22:21:43.752585 +0000] E [MSGID: 106010]
> [glusterd-utils.c:3827:glusterd_compare_friend_volume] 0-management:
> Version of Cksums Backup_Storage differ. local cksum = 2467304182, remote
> cksum = 998029999 on peer ovirt-node3-storage
> [2021-11-18 22:21:43.752743 +0000] I [MSGID: 106493]
> [glusterd-handler.c:3821:glusterd_xfer_friend_add_resp] 0-glusterd:
> Responded to ovirt-node3-storage (0), ret: 0, op_ret: -1
>
> And on one of the peers I see similar messages:
> [2021-11-18 22:21:43.744106 +0000] E [MSGID: 106010]
> [glusterd-utils.c:3844:glusterd_compare_friend_volume] 0-management:
> Version of Cksums Backup_Storage differ. local cksum = 998029999, remote
> cksum = 2467304182 on peer 192.168.30.1
> [2021-11-18 22:21:43.744233 +0000] I [MSGID: 106493]
> [glusterd-handler.c:3893:glusterd_xfer_friend_add_resp] 0-glusterd:
> Responded to 192.168.30.1 (0), ret: 0, op_ret: -1
> [2021-11-18 22:21:43.756298 +0000] I [MSGID: 106493]
> [glusterd-rpc-ops.c:474:__glusterd_friend_add_cbk] 0-glusterd: Received RJT
> from uuid: acb80b35-d6ac-4085-87cd-ba69ff3f81e6, host: 192.168.30.1, port: 0
>
>
> (I have tried `gluster volume heal Backup_Storage` as per the instructions.)
>
> Restarting glusterd on the upgraded node doesn't help. I have also tried
> cleaning up /var/lib/glusterd, without success.
>
> Am I missing something?
>
> Downgrading back to 9.3 works. All my volumes are distributed-replicate
> and the cluster is composed of 3 nodes.
>
> Thanks,
> Marco