similar to: Upgrade 10.4 -> 11.1 making problems

Displaying 20 results from an estimated 500 matches similar to: "Upgrade 10.4 -> 11.1 making problems"

2024 Jan 17
2
Upgrade 10.4 -> 11.1 making problems
ok, finally managed to get all servers, volumes etc. running, but it took a couple of restarts, cksum checks etc. One problem: a volume doesn't heal automatically, or doesn't heal at all. gluster volume status Status of volume: workdata Gluster process TCP Port RDMA Port Online Pid ------------------------------------------------------------------------------
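A minimal sketch of how a heal can be triggered by hand and its backlog inspected, using the volume name workdata shown in the status output above; these are standard gluster CLI commands, not something quoted from the thread:
# gluster volume heal workdata
# gluster volume heal workdata info summary
'info summary' prints, per brick, how many entries are still pending heal, which is a quick way to see whether the self-heal daemon is making any progress.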
2024 Jan 18
2
Upgrade 10.4 -> 11.1 making problems
That's the same kind of errors I keep seeing on my 2 clusters, regenerated some months ago. It seems to be a pseudo-split-brain that should be impossible on a replica 3 cluster, but it keeps happening. Sadly going to ditch Gluster ASAP. Diego On 18/01/2024 07:11, Hu Bert wrote: > Good morning, > heal still not running. Pending heals now sum up to 60K per brick. > Heal was starting
2024 Jan 18
1
Upgrade 10.4 -> 11.1 making problems
Good morning, heal still not running. Pending heals now sum up to 60K per brick. Healing started instantly (e.g. after a server reboot) with version 10.4, but doesn't with version 11. What could be wrong? I only see these errors on one of the "good" servers in glustershd.log: [2024-01-18 06:08:57.328480 +0000] W [MSGID: 114031] [client-rpc-fops_v2.c:2561:client4_0_lookup_cbk]
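Two ways to quantify the backlog described above, as a hedged sketch; the volume name workdata comes from the thread, and the log path assumes the default /var/log/glusterfs location:
# gluster volume heal workdata statistics heal-count
# grep -c 'MSGID: 114031' /var/log/glusterfs/glustershd.log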
2024 Jan 18
2
Upgrade 10.4 -> 11.1 making problems
Are you able to set the logs to debug level? It might provide a clue about what is going on. Best Regards, Strahil Nikolov On Thu, Jan 18, 2024 at 13:08, Diego Zuccato <diego.zuccato at unibo.it> wrote: That's the same kind of errors I keep seeing on my 2 clusters, regenerated some months ago. It seems to be a pseudo-split-brain that should be impossible on a replica 3 cluster, but it keeps
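What Strahil is asking for can be set per volume; a minimal sketch, assuming the volume is named workdata as in the earlier messages (DEBUG is very verbose, so it should be reverted once the logs have been captured):
# gluster volume set workdata diagnostics.client-log-level DEBUG
# gluster volume set workdata diagnostics.brick-log-level DEBUG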
2024 Jan 18
1
Upgrade 10.4 -> 11.1 making problems
Were you able to solve the problem? Can it be treated like a "normal" split brain? 'gluster peer status' and 'gluster volume status' are ok, so it kinda looks like "pseudo"... hubert On Thu, Jan 18, 2024 at 08:28, Diego Zuccato <diego.zuccato at unibo.it> wrote: > > That's the same kind of errors I keep seeing on my 2 clusters, >
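One quick way to check whether gluster itself classifies any of the pending entries as split-brain (volume name workdata assumed from the thread):
# gluster volume heal workdata info split-brain
If this lists no entries while 'heal info' still shows a large backlog, that matches the "pseudo-split-brain" picture described above.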
2024 Jan 18
1
Upgrade 10.4 -> 11.1 making problems
Since glusterd does not consider it a split brain, you can't solve it with the standard split-brain tools. I've found no way to resolve it except by manually handling one file at a time: completely unmanageable with thousands of files, and you have to juggle between the actual path on the brick and the metadata files! Previously I "fixed" it by: 1) moving all the data from the volume to a temp
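For the per-file inspection mentioned above, a minimal sketch run directly on each brick host; the brick path and file path are placeholders, not taken from the thread:
# getfattr -d -m . -e hex /data/brick/workdata/path/to/affected/file
Non-zero trusted.afr.* counters on some replicas but not on others indicate which copy the cluster considers stale.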
2024 Jan 19
1
Upgrade 10.4 -> 11.1 making problems
Hi Strahil, hm, don't get me wrong, it may sound a bit stupid, but... where do I set the log level? Using Debian... https://access.redhat.com/documentation/de-de/red_hat_gluster_storage/3/html/administration_guide/configuring_the_log_level ls /etc/glusterfs/ eventsconfig.json glusterfs-georep-logrotate gluster-rsyslog-5.8.conf group-db-workload group-gluster-block group-nl-cache
2024 Jan 19
1
Upgrade 10.4 -> 11.1 making problems
gluster volume set testvol diagnostics.brick-log-level WARNING gluster volume set testvol diagnostics.brick-sys-log-level WARNING gluster volume set testvol diagnostics.client-log-level ERROR gluster --log-level=ERROR volume status --- Gilberto Nunes Ferreira On Fri, Jan 19, 2024 at 05:49, Hu Bert <revirii at googlemail.com> wrote: > Hi Strahil, > hm, don't get me
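To confirm such an option took effect, and to revert it to the default later, something like the following should work ('testvol' as in Gilberto's example):
# gluster volume get testvol diagnostics.brick-log-level
# gluster volume reset testvol diagnostics.brick-log-level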
2024 Jan 20
1
Upgrade 10.4 -> 11.1 making problems
Good morning, thanks Gilberto, I did the first three (set to WARNING), but the last one doesn't work. Anyway, after setting these three, some new messages appear: [2024-01-20 07:23:58.561106 +0000] W [MSGID: 114061] [client-common.c:796:client_pre_lk_v2] 0-workdata-client-11: remote_fd is -1. EBADFD [{gfid=faf59566-10f5-4ddd-8b0c-a87bc6a334fb}, {errno=77}, {error=File descriptor in bad state}]
2024 Jan 24
1
Upgrade 10.4 -> 11.1 making problems
Hi, Can you find and check the files with gfids: 60465723-5dc0-4ebe-aced-9f2c12e52642 and faf59566-10f5-4ddd-8b0c-a87bc6a334fb? Use the 'getfattr -d -e hex -m .' command from https://docs.gluster.org/en/main/Troubleshooting/resolving-splitbrain/#analysis-of-the-output . Best Regards, Strahil Nikolov On Sat, Jan 20, 2024 at 9:44, Hu Bert <revirii at googlemail.com> wrote: Good morning,
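Spelled out for the two gfids above, the check would look roughly like this on each brick host; the brick path /data/brick/workdata is an assumption:
# getfattr -d -e hex -m . /data/brick/workdata/.glusterfs/60/46/60465723-5dc0-4ebe-aced-9f2c12e52642
# getfattr -d -e hex -m . /data/brick/workdata/.glusterfs/fa/f5/faf59566-10f5-4ddd-8b0c-a87bc6a334fb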
2024 Jan 25
1
Upgrade 10.4 -> 11.1 making problems
Good morning, hope I got it right... using: https://access.redhat.com/documentation/de-de/red_hat_gluster_storage/3.1/html/administration_guide/ch27s02 mount -t glusterfs -o aux-gfid-mount glusterpub1:/workdata /mnt/workdata gfid 1: getfattr -n trusted.glusterfs.pathinfo -e text /mnt/workdata/.gfid/faf59566-10f5-4ddd-8b0c-a87bc6a334fb getfattr: Removing leading '/' from absolute path
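The same lookup can be repeated for the second gfid from Strahil's mail, and the auxiliary mount removed afterwards; a sketch reusing the mount point above:
# getfattr -n trusted.glusterfs.pathinfo -e text /mnt/workdata/.gfid/60465723-5dc0-4ebe-aced-9f2c12e52642
# umount /mnt/workdata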
2024 Jan 27
1
Upgrade 10.4 -> 11.1 making problems
You don't need to mount it. Like this: # getfattr -d -e hex -m . /path/to/brick/.glusterfs/00/46/00462be8-3e61-4931-8bda-dae1645c639e # file: 00/46/00462be8-3e61-4931-8bda-dae1645c639e trusted.gfid=0x00462be83e6149318bdadae1645c639e trusted.gfid2path.05fcbdafdeea18ab=0x30326333373930632d386637622d346436652d393464362d3936393132313930643131312f66696c656c6f636b696e672e7079
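The backend path pattern is <brick>/.glusterfs/<first two hex digits>/<next two hex digits>/<full gfid>, so the check can be scripted for any gfid; a small bash sketch with the same placeholder brick path:
gfid=faf59566-10f5-4ddd-8b0c-a87bc6a334fb
getfattr -d -e hex -m . /path/to/brick/.glusterfs/${gfid:0:2}/${gfid:2:2}/$gfid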
2001 Sep 30
1
2.4.9-ac18; issues with '/'
Hello, I just went back to 2.4.9-ac18 from 2.4.10 (i386), and then for the first time migrated all my ext2 partitions to ext3. I did this over a series of reboots, confirming at each step that things continued to work. The last step I took was to convert /. I have set up /etc/fstab to mount my 'ext3' partitions with 'auto', thinking, I suppose, that this will make it easier to roll
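For reference, a hypothetical /etc/fstab line of the kind described, with 'auto' as the filesystem type so a kernel without ext3 support can still fall back to mounting the partition as ext2 (device name is made up):
/dev/hda1   /   auto   defaults   1 1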
2001 Jun 08
1
VALinux's 2.4.5 beta kernel with Ext3
Anyone try this yet? ftp://ftp.valinux.com/pub/software/kernel/beta/2.4.5-beta2va3.11/ List of SRPM contents follows. -- TheBS atomic-lookup.patch atomicalloc.patch byteprofiling.patch comtrol-1.23.patch configs-2.4.5.tar.gz copy-user-reschedule.patch dac960-enclosure-quiet.patch dma-livelock-fix.patch e100-1.5.5.tar.gz e1000-3.0.7.tar.gz eepro100-speedo-1.patch emu10k1-tone.patch
2018 Feb 07
2
Ip based peer probe volume create error
On 8/02/2018 4:45 AM, Gaurav Yadav wrote: > After seeing the command history, I could see that you have 3 nodes, and > firstly you are peer probing 51.15.90.60 and 163.172.151.120 from > 51.15.77.14 > So here itself you have a 3 node cluster; after all this you are going > on node 2 and again peer probing 51.15.77.14. > Ideally it should work with the above steps, but due to some
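For clarity, the probe sequence Gaurav describes only needs to be run from one node; a sketch using the addresses quoted above:
# from 51.15.77.14:
gluster peer probe 51.15.90.60
gluster peer probe 163.172.151.120
gluster peer status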
2018 Mar 06
4
Fixing a rejected peer
Hello, So I'm seeing a rejected peer with 3.12.6. This is with a replica 3 volume. It actually began as the same problem with a different peer. I first noticed it with (call it) gluster-2, when I couldn't create a new volume. I compared /var/lib/glusterd between them, and found that somehow the options in one of the vols differed. (I suspect this was due to attempting to create the volume via the
2017 Nov 30
2
Problems joining new gluster 3.10 nodes to existing 3.8
Hi, I have a problem joining four Gluster 3.10 nodes to an existing Gluster 3.8 cluster. My understanding is that this should work and not be too much of a problem. The peer probe is successful but the node is rejected: gluster> peer detach elkpinfglt07 peer detach: success gluster> peer probe elkpinfglt07 peer probe: success. gluster> peer status Number of Peers: 6 Hostname: elkpinfglt02
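One commonly cited recovery for a peer stuck in the Rejected state (from the Gluster troubleshooting docs) is to rebuild that node's /var/lib/glusterd from the cluster; a sketch, run on the rejected node only, with the systemd unit name and the healthy peer (elkpinfglt02 from the status output above) as assumptions:
# systemctl stop glusterd
# cd /var/lib/glusterd && find . -mindepth 1 -maxdepth 1 ! -name glusterd.info -exec rm -rf {} +
# systemctl start glusterd
# gluster peer probe elkpinfglt02
# systemctl restart glusterd
# gluster peer status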
2017 Aug 29
3
peer rejected but connected
hi fellas, same old, same old; in the log of the probing peer I see: ... [2017-08-29 13:36:16.882196] I [MSGID: 106493] [glusterd-handler.c:3020:__glusterd_handle_probe_query] 0-glusterd: Responded to priv.xx.xx.priv.xx.xx.x, op_ret: 0, op_errno: 0, ret: 0 [2017-08-29 13:36:16.904961] I [MSGID: 106490] [glusterd-handler.c:2606:__glusterd_handle_incoming_friend_req] 0-glusterd: Received probe from uuid:
2017 Dec 01
0
Problems joining new gluster 3.10 nodes to existing 3.8
On Fri, Dec 1, 2017 at 1:55 AM, Ziemowit Pierzycki <ziemowit at pierzycki.com> wrote: > Hi, > > I have a problem joining four Gluster 3.10 nodes to an existing > Gluster 3.8 cluster. My understanding is that this should work and not be > too much of a problem. > > The peer probe is successful but the node is rejected: > > gluster> peer detach elkpinfglt07 > peer
2018 Mar 06
0
Fixing a rejected peer
On Tue, Mar 6, 2018 at 6:00 AM, Jamie Lawrence <jlawrence at squaretrade.com> wrote: > Hello, > > So I'm seeing a rejected peer with 3.12.6. This is with a replica 3 volume. > > It actually began as the same problem with a different peer. I first noticed > it with (call it) gluster-2, when I couldn't create a new volume. I compared > /var/lib/glusterd between them, and