Displaying 20 results from an estimated 56 matches for "rchecksum".
2024 Nov 08
1
Add an arbiter when you have multiple bricks on the same server.
...biter)
Brick7: gluster1:/disco1TB-1/vms
Brick8: gluster2:/disco1TB-1/vms
Brick9: arbiter:/arbiter3 (arbiter)
Options Reconfigured:
cluster.self-heal-daemon: off
cluster.entry-self-heal: off
cluster.metadata-self-heal: off
cluster.data-self-heal: off
cluster.granular-entry-heal: on
storage.fips-mode-rchecksum: on
transport.address-family: inet
performance.client-io-threads: off
pve01:~#
---
Gilberto Nunes Ferreira
On Fri, Nov 8, 2024 at 06:38, Strahil Nikolov <hunter86_bg at yahoo.com>
wrote:
> What's the volume structure right now?
>
> Best Regards,
> Strahil Ni...
2024 Mar 14
3
Adding storage capacity to a production disperse volume
.../researchdata-1
Brick2: node-2:/mnt/data/researchdata-2
Brick3: node-3:/mnt/data/researchdata-3
Brick4: node-4:/mnt/data/researchdata-4
Brick5: node-5:/mnt/data/researchdata-5
Options Reconfigured:
features.quota-deem-statfs: on
features.inode-quota: on
features.quota: on
storage.fips-mode-rchecksum: on
transport.address-family: inet
nfs.disable: on
locks.mandatory-locking: optimal
Adding the node to the cluster was no problem. But adding a brick using
'add-brick' to the volume resulted in "volume add-brick: failed: Incorrect
number of bricks supplied 1 with count 5". So...
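A hedged aside on the error above: a disperse volume grows only by whole subvolumes, so 'add-brick' must be given bricks in multiples of the disperse count (5 here), never one at a time. A minimal sketch, assuming the volume is named researchdata (the excerpt truncates the name) and using hypothetical node-6 through node-10 brick paths:

    # add one complete 5-brick disperse subvolume in a single call
    gluster volume add-brick researchdata \
        node-6:/mnt/data/researchdata-6 node-7:/mnt/data/researchdata-7 \
        node-8:/mnt/data/researchdata-8 node-9:/mnt/data/researchdata-9 \
        node-10:/mnt/data/researchdata-10

The five new bricks could equally be five paths on a single new node, at the cost of host-level fault tolerance for that subvolume.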
2024 Nov 08
1
Add an arbiter when you have multiple bricks on the same server.
...k1: gluster1:/disco2TB-0/vms
Brick2: gluster2:/disco2TB-0/vms
Brick3: gluster1:/disco1TB-0/vms
Brick4: gluster2:/disco1TB-0/vms
Brick5: gluster1:/disco1TB-1/vms
Brick6: gluster2:/disco1TB-1/vms
Options Reconfigured:
performance.client-io-threads: off
transport.address-family: inet
storage.fips-mode-rchecksum: on
cluster.granular-entry-heal: on
cluster.data-self-heal: off
cluster.metadata-self-heal: off
cluster.entry-self-heal: off
cluster.self-heal-daemon: off
What am I doing wrong?
---
Gilberto Nunes Ferreira
(47) 99676-7530 - Whatsapp / Telegram
On Wed, Nov 6, 2024 at 11:32, Str...
2023 Oct 25
1
Replace faulty host
...Status: Started
Snapshot Count: 26
Number of Bricks: 1 x (2 + 1) = 3
Transport-type: tcp
Bricks:
Brick1: urd-gds-031:/urd-gds/gds-common
Brick2: urd-gds-032:/urd-gds/gds-common
Brick3: urd-gds-030:/urd-gds/gds-common (arbiter)
Options Reconfigured:
cluster.granular-entry-heal: on
storage.fips-mode-rchecksum: on
transport.address-family: inet
performance.client-io-threads: off
features.barrier: disable
The arbiter node has a faulty root disk but it is still
up and glusterd is still running.
I have a spare server equal to the arbiter node,
so my plan is to replace the arbiter host and
then I can calml...
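One hedged sketch of such a host replacement, not quoted from the thread: if the spare keeps the same hostname and brick path, the reset-brick sequence can rebuild the arbiter in place; <VOLNAME> stands in for the volume name, which the excerpt does not show:

    gluster volume reset-brick <VOLNAME> urd-gds-030:/urd-gds/gds-common start
    # reinstall the host, recreate the brick directory, then:
    gluster volume reset-brick <VOLNAME> urd-gds-030:/urd-gds/gds-common \
        urd-gds-030:/urd-gds/gds-common commit force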
2019 Feb 01
1
Help analysing statedumps
...nt-threads: 4
cluster.lookup-optimize: on
network.inode-lru-limit: 90000
performance.md-cache-timeout: 600
performance.cache-invalidation: on
performance.cache-samba-metadata: on
performance.stat-prefetch: on
features.cache-invalidation-timeout: 600
features.cache-invalidation: on
storage.fips-mode-rchecksum: on
transport.address-family: inet
nfs.disable: on
performance.client-io-threads: on
features.utime: on
storage.ctime: on
server.event-threads: 4
performance.cache-size: 256MB
performance.read-ahead: on
cluster.readdir-optimize: on
cluster.strict-readdir: on
performance.io-thread-count: 8
server.al...
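As a hedged aside: statedumps like the ones under discussion are produced per volume with

    gluster volume statedump <VOLNAME>
    # dump files land under /var/run/gluster/ by default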
2023 Jan 19
1
really large number of skipped files after a scrub
...> Transport-type: tcp
> Bricks:
> Brick1: gluster1:/data/brick1/gv0
> Brick2: gluster2:/data/brick1/gv0
> Brick3: gluster3:/data/brick1/gv0
> Options Reconfigured:
> features.scrub-freq: daily
> auth.allow: x.y.z.q
> transport.address-family: inet
> storage.fips-mode-rchecksum: on
> nfs.disable: on
> performance.client-io-threads: off
> features.bitrot: on
> features.scrub: Active
> features.scrub-throttle: aggressive
> storage.build-pgfid: on
>
> I have two issues:
>
> 1) scrubs are configured to run daily (see above) but they don't
>...
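A hedged aside on the scrub questions, assuming the volume is named gv0 as the brick paths suggest: the scrubber's last run and its skipped-file counts show up in scrub status, and a run can be forced immediately:

    gluster volume bitrot gv0 scrub status    # last run time, scrubbed/skipped file counts
    gluster volume bitrot gv0 scrub ondemand  # start a scrub right away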
2024 Nov 06
1
Add an arbiter when you have multiple bricks on the same server.
...k1: gluster1:/disco2TB-0/vms
Brick2: gluster2:/disco2TB-0/vms
Brick3: gluster1:/disco1TB-0/vms
Brick4: gluster2:/disco1TB-0/vms
Brick5: gluster1:/disco1TB-1/vms
Brick6: gluster2:/disco1TB-1/vms
Options Reconfigured:
performance.client-io-threads: off
transport.address-family: inet
storage.fips-mode-rchecksum: on
cluster.granular-entry-heal: on
cluster.data-self-heal: off
cluster.metadata-self-heal: off
cluster.entry-self-heal: off
cluster.self-heal-daemon: off
What am I doing wrong?
---
Gilberto Nunes Ferreira
(47) 99676-7530 - Whatsapp / Telegram
On Wed, Nov 6, 2024 at 11:32, Strah...
2018 Apr 30
3
Finding performance bottlenecks
...OKUP
 0.02      2972.84 us     30.72 us    788018.47 us     496  STATFS
 0.03     10951.33 us     35.36 us    695155.13 us     166  STAT
 0.42      2574.98 us    208.73 us   1710282.73 us   11877  FXATTROP
 2.80       609.20 us    468.51 us    321422.91 us  333946  RCHECKSUM
 5.04       548.76 us     14.83 us  76288179.46 us  668188  INODELK
18.46    149940.70 us     13.59 us  79966278.04 us    8949  FINODELK
20.04    395073.91 us     84.99 us   3835355.67 us    3688  FSYNC
53.17    131171.66 us     85.76 us   3838020.34 us  29470...
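A hedged aside: per-FOP latency tables like the one above come from the volume profiler, e.g.

    gluster volume profile <VOLNAME> start
    gluster volume profile <VOLNAME> info   # %-latency, avg/min/max latency and call count per FOP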
2024 Nov 05
1
Add an arbiter when you have multiple bricks on the same server.
...t; Brick5: gluster1:/disco1TB-1/vms
> Brick6: gluster2:/disco1TB-1/vms
> Options Reconfigured:
> cluster.self-heal-daemon: off
> cluster.entry-self-heal: off
> cluster.metadata-self-heal: off
> cluster.data-self-heal: off
> cluster.granular-entry-heal: on
> storage.fips-mode-rchecksum: on
> transport.address-family: inet
> performance.client-io-threads: off
> pve01:~# gluster vol add-brick VMS replica 3 arbiter 1
> gluster1:/disco2TB-0/vms gluster2:/disco2TB-0/vms gluster1:/disco1TB-0/vms
> gluster2:/disco1TB-0/vms gluster1:/disco1TB-1/vms gluster2:
> /disco1TB...
2023 Oct 27
1
Replace faulty host
...Status: Started
Snapshot Count: 26
Number of Bricks: 1 x (2 + 1) = 3
Transport-type: tcp
Bricks:
Brick1: urd-gds-031:/urd-gds/gds-common
Brick2: urd-gds-032:/urd-gds/gds-common
Brick3: urd-gds-030:/urd-gds/gds-common (arbiter)
Options Reconfigured:
cluster.granular-entry-heal: on
storage.fips-mode-rchecksum: on
transport.address-family: inet
performance.client-io-threads: off
features.barrier: disable
The arbiter node has a faulty root disk but it is still
up and glusterd is still running.
I have a spare server equal to the arbiter node,
so my plan is to replace the arbiter host and
then I can calml...
2024 Nov 06
1
Add an arbiter when you have multiple bricks on the same server.
...ster2:/disco2TB-0/vms
> Brick3: gluster1:/disco1TB-0/vms
> Brick4: gluster2:/disco1TB-0/vms
> Brick5: gluster1:/disco1TB-1/vms
> Brick6: gluster2:/disco1TB-1/vms
> Options Reconfigured:
> performance.client-io-threads: off
> transport.address-family: inet
> storage.fips-mode-rchecksum: on
> cluster.granular-entry-heal: on
> cluster.data-self-heal: off
> cluster.metadata-self-heal: off
> cluster.entry-self-heal: off
> cluster.self-heal-daemon: off
>
> What am I doing wrong?
>
>
>
>
> ---
>
>
> Gilberto Nunes Ferreira
> (47) 99676-75...
2024 Nov 11
1
Disk size and virtual size drive me crazy!
...f
cluster.data-self-heal-algorithm: full
cluster.favorite-child-policy: mtime
network.ping-timeout: 20
cluster.quorum-count: 1
cluster.quorum-reads: false
cluster.self-heal-daemon: enable
cluster.heal-timeout: 5
user.cifs: off
features.shard: on
cluster.granular-entry-heal: enable
storage.fips-mode-rchecksum: on
transport.address-family: inet
nfs.disable: on
---
Gilberto Nunes Ferreira
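A hedged aside on the two sizes in the subject line: for sparse VM images the provisioned (virtual) size and the space actually allocated are different numbers, and qemu-img reports both; the image path below is hypothetical:

    qemu-img info /mnt/pve/vms/images/100/vm-100-disk-0.qcow2
    # output includes "virtual size" and "disk size" (actual allocation)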
2024 Nov 06
1
Add an arbiter when you have multiple bricks on the same server.
...rick3: gluster1:/disco1TB-0/vms
>> Brick4: gluster2:/disco1TB-0/vms
>> Brick5: gluster1:/disco1TB-1/vms
>> Brick6: gluster2:/disco1TB-1/vms
>> Options Reconfigured:
>> performance.client-io-threads: off
>> transport.address-family: inet
>> storage.fips-mode-rchecksum: on
>> cluster.granular-entry-heal: on
>> cluster.data-self-heal: off
>> cluster.metadata-self-heal: off
>> cluster.entry-self-heal: off
>> cluster.self-heal-daemon: off
>>
>> What am I doing wrong?
>>
>>
>>
>>
>> ---
>>...
2024 Nov 20
1
Disk size and virtual size drive me crazy!
...f
cluster.data-self-heal-algorithm: full
cluster.favorite-child-policy: mtime
network.ping-timeout: 20
cluster.quorum-count: 1
cluster.quorum-reads: false
cluster.self-heal-daemon: enable
cluster.heal-timeout: 5
user.cifs: off
features.shard: on
cluster.granular-entry-heal: enable
storage.fips-mode-rchecksum: on
transport.address-family: inet
nfs.disable: on
---
Gilberto Nunes Ferreira
2024 Nov 05
1
Add an arbiter when you have multiple bricks on the same server.
...1TB-0/vms
Brick4: gluster2:/disco1TB-0/vms
Brick5: gluster1:/disco1TB-1/vms
Brick6: gluster2:/disco1TB-1/vms
Options Reconfigured:
cluster.self-heal-daemon: off
cluster.entry-self-heal: off
cluster.metadata-self-heal: off
cluster.data-self-heal: off
cluster.granular-entry-heal: on
storage.fips-mode-rchecksum: on
transport.address-family: inet
performance.client-io-threads: off
pve01:~# gluster volume add-brick VMS replica 3 arbiter 1
gluster1:/disco2TB-0/vms gluster2:/disco2TB-0/vms arbiter:/arbiter1 force
volume add-brick: failed: Brick: gluster1:/disco2TB-0/vms not available.
Brick may be containing...
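The usual shape of the fix, as a hedged sketch rather than a quote from the thread: when converting an existing replica 2 volume, add-brick takes only the new arbiter bricks, one per replica pair; the existing data bricks are not listed again:

    gluster volume add-brick VMS replica 3 arbiter 1 \
        arbiter:/arbiter1 arbiter:/arbiter2 arbiter:/arbiter3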
2024 Jan 03
1
Files exist, but sometimes are not seen by the clients: "No such file or directory"
...-cache: on
network.inode-lru-limit: 200000
performance.md-cache-timeout: 600
performance.cache-invalidation: on
performance.cache-samba-metadata: on
features.cache-invalidation-timeout: 600
features.cache-invalidation: on
performance.write-behind: off
performance.cache-size: 128MB
storage.fips-mode-rchecksum: on
transport.address-family: inet
nfs.disable: on
features.inode-quota: on
features.quota: on
server.event-threads: 32
client.event-threads: 16
cluster.readdir-optimize: off
performance.io-thread-count: 64
performance.readdir-ahead: on
performance.client-io-threads: on
performance.parallel-readdir...
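A hedged aside: a common first test for files that exist on the bricks but return "No such file or directory" is an explicit named lookup from a client mount, which forces the file to be resolved on every subvolume; the mount path here is hypothetical:

    stat /mnt/gluster/path/to/missing-file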
2023 Jun 30
1
remove_me files building up
...ick2/brick
Brick8: uk2-prod-gfs-01:/data/glusterfs/gv1/brick2/brick
Brick9: uk3-prod-gfs-arb-01:/data/glusterfs/gv1/brick2/brick (arbiter)
Options Reconfigured:
cluster.entry-self-heal: on
cluster.metadata-self-heal: on
cluster.data-self-heal: on
performance.client-io-threads: off
storage.fips-mode-rchecksum: on
transport.address-family: inet
cluster.lookup-optimize: off
performance.readdir-ahead: off
cluster.readdir-optimize: off
cluster.self-heal-daemon: enable
features.shard: enable
features.shard-block-size: 512MB
cluster.min-free-disk: 10%
cluster.use-anonymous-inode: yes
root at uk3-prod-gfs-arb...
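For orientation, a hedged aside assuming a stock shard setup: remove_me entries accumulate in the shard translator's deletion landfill on each brick, so the backlog can be inspected directly, e.g.

    ls /data/glusterfs/gv1/brick2/brick/.shard/.remove_me | wc -l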
2024 Oct 13
1
XFS corruption reported by QEMU virtual machine with image hosted on gluster
...orum-type: server
cluster.quorum-type: auto
cluster.eager-lock: enable
performance.strict-o-direct: on
network.remote-dio: disable
performance.low-prio-threads: 32
performance.io-cache: off
performance.read-ahead: off
performance.quick-read: off
cluster.granular-entry-heal: enable
storage.fips-mode-rchecksum: on
transport.address-family: inet
nfs.disable: on
performance.client-io-threads: on
Any help or ideas would be appreciated. Let us know if we have a setting incorrect or have made an error.
Thank you all!
Erik
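One hedged suggestion for VM-hosting volumes like this one: Gluster ships a predefined "virt" option group that applies its recommended VM-image settings in one step, which makes a quick comparison against the options above possible:

    gluster volume set <VOLNAME> group virt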
2018 May 01
0
Finding performance bottlenecks
...30.72 us 788018.47 us 496  STATFS
>  0.03     10951.33 us     35.36 us    695155.13 us     166  STAT
>  0.42      2574.98 us    208.73 us   1710282.73 us   11877  FXATTROP
>  2.80       609.20 us    468.51 us    321422.91 us  333946  RCHECKSUM
>  5.04       548.76 us     14.83 us  76288179.46 us  668188  INODELK
> 18.46    149940.70 us     13.59 us  79966278.04 us    8949  FINODELK
> 20.04    395073.91 us     84.99 us   3835355.67 us    3688  FSYNC
> 53.17    131171.66 us     85.7...
2023 Sep 29
0
gluster volume status shows -> Online "N" after node reboot.
...r.server-quorum-type: server
cluster.quorum-type: auto
cluster.eager-lock: enable
performance.strict-o-direct: on
network.remote-dio: off
performance.low-prio-threads: 32
performance.io-cache: off
performance.read-ahead: off
performance.quick-read: off
auth.allow: *
user.cifs: off
storage.fips-mode-rchecksum: on
transport.address-family: inet
nfs.disable: on
performance.client-io-threads: on
Regards,
Martin
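A hedged aside: when a brick stays at Online "N" after a reboot, the usual first step is to respawn any missing brick processes without touching the healthy ones:

    gluster volume start <VOLNAME> force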