Ronny Adsetts
2020-Mar-01 00:02 UTC
[Gluster-users] Advice on moving volumes/bricks to new servers
Hi all,
I have a 4-server system running a distributed-replicate setup, 4 x (2 + 1) =
12. Bricks are staggered across the servers. Sharding is enabled. (v info shown
below)
Now, the storage is slow on these servers and not really up to the job, so we have
4 new servers with SSDs. I have to move everything over to the new servers without
taking the storage down.
The four old servers are running Gluster 6.4 and the new ones, 6.5.
So having read tons of docs, mailing lists, etc., I think I ought to be able
to use add-brick/remove-brick to get everything moved safely, like so:
# gluster volume add-brick iscsi replica 3 arbiter 1 srv{13..15}:/brick1
# gluster volume remove-brick iscsi replica 3 srv{1..3}:/brick1 start
Then once complete, do:
# gluster volume remove-brick iscsi replica 3 srv{1..3}:/brick1 commit
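In between the start and the commit, progress can be watched with the status action
of the same command, and pending heals can be checked before committing. A rough
sketch, using the same placeholder srv names as above:

# gluster volume remove-brick iscsi replica 3 srv{1..3}:/brick1 status
# gluster volume heal iscsi info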
So I created a test volume to try this out. On the third add/remove of 4, I get
a 'failed' on the remove-brick status. The rebalance log shows:
[2020-02-28 22:25:28.133902] I [dht-rebalance.c:1589:dht_migrate_file] 0-testmigrate-dht:
/linux-5.4.22/arch/arm/boot/dts/exynos4412-itop-scp-core.dtsi: attempting to move from
testmigrate-replicate-0 to testmigrate-replicate-2
[2020-02-28 22:25:28.144258] W [MSGID: 108015]
[afr-self-heal-name.c:138:__afr_selfheal_name_expunge] 0-testmigrate-replicate-0:
expunging file a75a83b7-2c34-4077-b4fc-3126a9d6058a/exynos4210-smdkv310.dts
(11a47b1f-2c24-4d4b-9402-9130125cf953) on testmigrate-client-6
[2020-02-28 22:25:28.146321] E [MSGID: 109023]
[dht-rebalance.c:1707:dht_migrate_file] 0-testmigrate-dht: Migrate file failed:
/linux-5.4.22/arch/arm/boot/dts/exynos4210-smdkv310.dts: lookup failed on
testmigrate-replicate-0 [No such file or directory]
[2020-02-28 22:25:28.149104] E [MSGID: 109023]
[dht-rebalance.c:2874:gf_defrag_migrate_single_file] 0-testmigrate-dht:
migrate-data failed for /linux-5.4.22/arch/arm/boot/dts/exynos4210-smdkv310.dts
[No such file or directory]
This is shown for 4 files.
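One way to dig into those 'lookup failed' errors is to check what the replicate-0
bricks actually hold for one of the affected files, directly on the brick backend.
A rough sketch (the brick path below is a placeholder, since the testmigrate brick
layout isn't shown here):

$ sudo stat /data/glusterfs/testmigrate/brick1/brick/linux-5.4.22/arch/arm/boot/dts/exynos4210-smdkv310.dts
$ sudo getfattr -d -m . -e hex /data/glusterfs/testmigrate/brick1/brick/linux-5.4.22/arch/arm/boot/dts/exynos4210-smdkv310.dts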
When I look at the FUSE-mounted volume, the file is there and correct, but the
permissions of this file and lots of others are screwed: lots of directories with
d--------- permissions and lots of files owned by root:root.
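A quick way to enumerate the damage on the FUSE mount is something like this
(the mount point below is a placeholder):

$ find /mnt/testmigrate -type d -perm 0000
$ find /mnt/testmigrate \( -user root -o -group root \) -ls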
So, any advice on how to proceed from here would be welcome.
I did a force on the remove-brick, as the data seemed to be in place, which is
fine. But now I can't do an add-brick as gluster seems to think a rebalance is
taking place:
---
volume add-brick: failed: Pre Validation failed on
terek-stor.amazing-internet.net. Volume name testmigrate rebalance is in
progress. Please retry after completion
---
$ sudo gluster volume rebalance testmigrate status
volume rebalance: testmigrate: failed: Rebalance not started for volume testmigrate.
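For what it's worth, the volume status output has a 'Task Status' section showing
what glusterd thinks is still attached to the volume, and a lingering remove-brick
task can in principle be stopped with the same brick list that was given to 'start'
(the brick list below is a placeholder):

$ sudo gluster volume status testmigrate
$ sudo gluster volume remove-brick testmigrate <bricks given to start> stop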
Thanks for any insight anyone can offer.
Ronny
$ sudo gluster volume info iscsi
Volume Name: iscsi
Type: Distributed-Replicate
Volume ID: 40ff42a7-5dee-4a98-991b-c4ba5bc50438
Status: Started
Snapshot Count: 0
Number of Bricks: 4 x (2 + 1) = 12
Transport-type: tcp
Bricks:
Brick1: ahren-stor.amazing-internet.net:/data/glusterfs/iscsi/brick1/brick
Brick2: mareth-stor.amazing-internet.net:/data/glusterfs/iscsi/brick1/brick
Brick3: terek-stor.amazing-internet.net:/data/glusterfs/iscsi/brick1a/brick (arbiter)
Brick4: walker-stor.amazing-internet.net:/data/glusterfs/iscsi/brick2/brick
Brick5: ahren-stor.amazing-internet.net:/data/glusterfs/iscsi/brick2/brick
Brick6: mareth-stor.amazing-internet.net:/data/glusterfs/iscsi/brick2a/brick (arbiter)
Brick7: terek-stor.amazing-internet.net:/data/glusterfs/iscsi/brick3/brick
Brick8: walker-stor.amazing-internet.net:/data/glusterfs/iscsi/brick3/brick
Brick9: ahren-stor.amazing-internet.net:/data/glusterfs/iscsi/brick3a/brick (arbiter)
Brick10: mareth-stor.amazing-internet.net:/data/glusterfs/iscsi/brick4/brick
Brick11: terek-stor.amazing-internet.net:/data/glusterfs/iscsi/brick4/brick
Brick12: walker-stor.amazing-internet.net:/data/glusterfs/iscsi/brick4a/brick (arbiter)
Options Reconfigured:
performance.client-io-threads: off
nfs.disable: on
transport.address-family: inet
performance.quick-read: off
performance.read-ahead: off
performance.io-cache: off
performance.stat-prefetch: off
performance.open-behind: off
performance.readdir-ahead: off
performance.strict-o-direct: on
network.remote-dio: disable
cluster.eager-lock: enable
cluster.quorum-type: auto
cluster.data-self-heal-algorithm: full
cluster.locking-scheme: granular
cluster.shd-max-threads: 8
cluster.shd-wait-qlength: 10000
features.shard: on
features.shard-block-size: 64MB
user.cifs: off
server.allow-insecure: on
cluster.choose-local: off
auth.allow: 127.0.0.1,172.16.36.*,172.16.40.*
ssl.cipher-list: HIGH:!SSLv2
server.ssl: on
client.ssl: on
ssl.certificate-depth: 1
performance.cache-size: 1GB
client.event-threads: 4
server.event-threads: 4
--
Ronny Adsetts
Technical Director
Amazing Internet Ltd, London
t: +44 20 8977 8943
w: www.amazinginternet.com
Registered office: 85 Waldegrave Park, Twickenham, TW1 4TJ
Registered in England. Company No. 4042957
Strahil Nikolov
2020-Mar-01 07:12 UTC
[Gluster-users] Advice on moving volumes/bricks to new servers
On March 1, 2020 2:02:33 AM GMT+02:00, Ronny Adsetts <ronny.adsetts at amazinginternet.com> wrote:
[...]
> So I created a test volume to try this out. On the third add/remove of
> 4, I get a 'failed' on the remove-brick status.
[...]
> When I look at the FUSE-mounted volume, the file is there and correct
> but the file permissions of this and lots of others are screwed. Lots
> of dirs with d--------- permissions, lots of root:root owned files.

Hi Ronny,

Have you checked the brick logs of 'testmigrate-replicate-0', which should be your
srv1? Maybe there were some pending heals at that time and the brick didn't have
the necessary data.

Another way to migrate the data is to:
1. Add the new disks on the old srv1, srv2 and srv3
2. Add the new disks to the VG
3. pvmove all LVs to the new disks (I prefer to use the '--atomic' option)
4. vgreduce with the old disks
5. pvremove the old disks
6. Then just delete the block devices from the kernel and remove them physically

Of course, this requires hotplugging and available slots on the systems.

Or you can stop 1 gluster node (no pending heals), remove the old disks and swap
in the new ones. Then power up, create the VG/LV and mount it in the same place.
Then you can just 'replace-brick' or 'reset-brick' and gluster will heal the data.
Repeat for the other 3 nodes and you will be ready.

Best Regards,
Strahil Nikolov
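To make those two options a bit more concrete, they might look roughly like this.
For the pvmove route, assuming the bricks sit on LVs in a single volume group (the
VG name and device names below are placeholders):

# pvcreate /dev/sdNEW
# vgextend gluster_vg /dev/sdNEW
# pvmove --atomic /dev/sdOLD /dev/sdNEW
# vgreduce gluster_vg /dev/sdOLD
# pvremove /dev/sdOLD
# echo 1 > /sys/block/sdOLD/device/delete
  (drops the old disk from the kernel before physically pulling it)

And for the swap-and-heal route, roughly, per node and per brick (using the first
iscsi brick as the example, and assuming the new filesystem is mounted back at the
same path):

# gluster volume reset-brick iscsi ahren-stor.amazing-internet.net:/data/glusterfs/iscsi/brick1/brick start
  (stop the node, swap the disks, recreate the VG/LV, mount it at the same path)
# gluster volume reset-brick iscsi ahren-stor.amazing-internet.net:/data/glusterfs/iscsi/brick1/brick ahren-stor.amazing-internet.net:/data/glusterfs/iscsi/brick1/brick commit force
# gluster volume heal iscsi info
  (wait for heals to finish before moving on to the next node)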
Ronny Adsetts
2020-Mar-02 14:19 UTC
[Gluster-users] Advice on moving volumes/bricks to new servers
Ronny Adsetts wrote on 01/03/2020 00:02:
[...]
> When I look at the FUSE-mounted volume, the file is there and correct
> but the file permissions of this and lots of others are screwed. Lots
> of dirs with d--------- permissions, lots of root:root owned files.

Replying to myself here...

I tried a second add-brick/remove-brick test which completed fine this time, and
all subvolumes were migrated to the new servers. I specifically checked for pending
heals prior to each remove-brick.

However, I'm seeing some anomalies following the migration. During a remove-brick,
as a test, I did a "mv linux-5.4.22 linux-5.4.22-orig" and the linux-5.4.22-orig
folder has issues:

1. We're seeing lots of directories with "d---------" permissions and owned by
root:root. Not all, but more than 0 is a worry.

2. The folder has files missing. Diff shows 5717 files. This is obviously
unexpected.

$ sudo du -s linux-5.4.22 linux-5.4.22-orig2 linux-5.4.22-orig
898898  linux-5.4.22
898898  linux-5.4.22-orig2
830588  linux-5.4.22-orig

$ ls -ald linux-5.4.22*
drwxr-xr-x 24 ronny allusers      4096 Feb 24 07:37 linux-5.4.22
d--------- 24 root  root          4096 Mar  2 12:17 linux-5.4.22-orig
drwxr-xr-x 24 ronny allusers      4096 Feb 24 07:37 linux-5.4.22-orig2
-rw-r--r--  1 ronny allusers 109491488 Feb 24 07:44 linux-5.4.22.tar.xz

$ sudo ls -al linux-5.4.22-orig
total 807
d---------  24 root  root       4096 Mar  2 12:17 .
drwxr-xr-x   9 ronny allusers   4096 Mar  2 14:08 ..
d---------  27 root  root       4096 Mar  2 12:33 arch
d---------   3 root  root       4096 Mar  2 12:34 block
d---------   2 root  root       4096 Mar  2 12:16 certs
-rw-r--r--   1 ronny allusers  15318 Feb 24 07:37 .clang-format
-rw-r--r--   1 ronny allusers     59 Feb 24 07:37 .cocciconfig
-rw-r--r--   1 ronny allusers    423 Feb 24 07:37 COPYING
-rw-r--r--   1 ronny allusers  99537 Feb 24 07:37 CREDITS
d---------   4 root  root       4096 Mar  2 12:34 crypto
drwxr-xr-x  82 ronny allusers   4096 Mar  2 12:44 Documentation
drwxr-xr-x 138 ronny allusers   4096 Mar  2 12:16 drivers
drwxr-xr-x  76 ronny allusers   4096 Mar  2 12:35 fs
-rw-r--r--   1 ronny allusers     71 Feb 24 07:37 .get_maintainer.ignore
-rw-r--r--   1 ronny allusers     30 Feb 24 07:37 .gitattributes
-rw-r--r--   1 ronny allusers   1740 Feb 24 07:37 .gitignore
drwxr-xr-x  27 ronny allusers   4096 Mar  2 12:16 include
d---------   2 root  root       4096 Mar  2 12:17 init
d---------   2 root  root       4096 Mar  2 12:35 ipc
-rw-r--r--   1 ronny allusers   1321 Feb 24 07:37 Kbuild
-rw-r--r--   1 ronny allusers    595 Feb 24 07:37 Kconfig
d---------  18 root  root       4096 Mar  2 12:17 kernel
d---------  18 root  root       4096 Mar  2 12:44 lib
d---------   6 root  root       4096 Mar  2 12:15 LICENSES
-rw-r--r--   1 ronny allusers  13825 Feb 24 07:37 .mailmap
-rw-r--r--   1 ronny allusers 529379 Feb 24 07:37 MAINTAINERS
-rw-r--r--   1 ronny allusers  60910 Feb 24 07:37 Makefile
d---------   3 root  root       4096 Mar  2 12:17 mm
drwxr-xr-x  70 ronny allusers   4096 Mar  2 12:17 net
-rw-r--r--   1 ronny allusers    727 Feb 24 07:37 README
drwxr-xr-x  29 ronny allusers   4096 Mar  2 12:17 samples
d---------  15 root  root       4096 Mar  2 12:44 scripts
drwxr-xr-x  12 ronny allusers   4096 Mar  2 12:35 security
d---------  26 root  root       4096 Mar  2 12:17 sound
drwxr-xr-x  35 ronny allusers   4096 Mar  2 12:45 tools
d---------   3 root  root       4096 Mar  2 12:35 usr
drwxr-xr-x   4 ronny allusers   4096 Mar  2 12:44 virt

Have I missed something in doing the remove-brick? Trying to get to the bottom of
this before I press go on production data.

Thanks.
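For reference, a rough way to sanity-check each remove-brick before committing,
beyond heal info, is to confirm nothing real is still sitting on the bricks being
removed (the brick list and path below are placeholders):

$ sudo gluster volume remove-brick testmigrate <bricks given to start> status
$ sudo gluster volume heal testmigrate info
$ sudo find /data/glusterfs/testmigrate/brick1/brick -path '*/.glusterfs' -prune -o -type f -print
  (run on each server whose brick is being removed; the .glusterfs housekeeping
  tree is skipped)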
Ronny
--
Ronny Adsetts
Technical Director
Amazing Internet Ltd, London
t: +44 20 8977 8943
w: www.amazinginternet.com
Registered office: 85 Waldegrave Park, Twickenham, TW1 4TJ
Registered in England. Company No. 4042957