Displaying 20 results from an estimated 253 matches for "shd".
2017 Jun 02
1
Heal operation detail of EC volumes
Hi Serkan,
On Thursday, June 01, 2017 21:31 CEST, Serkan Çoban <cobanserkan at gmail.com> wrote:
>Is it possible that this matches your observations?
Yes, that matches what I see. So 19 files are being healed in parallel by 19
SHD processes. I thought only one file was being healed at a time.
Then what is the meaning of the disperse.shd-max-threads parameter? If I
set it to 2, will each SHD heal two files at the same time? Each SHD normally heals a single file at a time. However, there's an SHD on each node so all of...
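For context, disperse.shd-max-threads is a per-volume option; a hedged sketch of how it is usually queried and raised (the volume name is a placeholder, not taken from the thread):
  gluster volume get <VOLNAME> disperse.shd-max-threads    # current value (default is 1)
  gluster volume set <VOLNAME> disperse.shd-max-threads 2  # let each SHD heal 2 files concurrently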
2017 Jun 01
3
Heal operation detail of EC volumes
...ore than 10MB incoming traffic.
> Only heal operation is happening on cluster right now, no client/other
> traffic. I see constant 7-8MB write to healing brick disk. So where is
> the missing traffic?
Not sure about your configuration, but probably you are seeing the
result of having the SHD of each server doing heals. That would explain
the network traffic you have.
Suppose that all SHDs but the one on the damaged brick are working. In
this case 19 servers will read 16 fragments each. This gives 19 * 16 =
304 fragments to be requested. EC balances the reads among all available
ser...
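A back-of-the-envelope check of the arithmetic above (shell, purely illustrative):
  # 19 healthy SHDs each read 16 fragments to rebuild one file
  echo $(( 19 * 16 ))   # => 304 fragment requests in total
  echo $(( 304 / 19 ))  # => 16, roughly the reads each surviving brick serves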
2017 Jun 01
0
Heal operation detail of EC volumes
>Is it possible that this matches your observations ?
Yes, that matches what I see. So 19 files are being healed in parallel by 19
SHD processes. I thought only one file was being healed at a time.
Then what is the meaning of the disperse.shd-max-threads parameter? If I
set it to 2, will each SHD heal two files at the same time?
>How many IOPS can handle your bricks ?
Bricks are 7200RPM NL-SAS disks. 70-80 random IOPS ma...
2017 Jun 08
1
Heal operation detail of EC volumes
On Fri, Jun 2, 2017 at 1:01 AM, Serkan Çoban <cobanserkan at gmail.com> wrote:
> >Is it possible that this matches your observations ?
> Yes, that matches what I see. So 19 files are being healed in parallel by 19
> SHD processes. I thought only one file was being healed at a time.
> Then what is the meaning of the disperse.shd-max-threads parameter? If I
> set it to 2, will each SHD heal two files at the same time?
>
Yes that is the idea.
>
> >How many IOPS can handle your bricks ?
>...
2017 Jul 05
1
op-version for reset-brick (Was: Re: [ovirt-users] Upgrading HC from 4.0 to 4.1)
...nable
network.remote-dio: off
cluster.quorum-type: auto
cluster.server-quorum-type: server
storage.owner-uid: 36
storage.owner-gid: 36
features.shard: on
features.shard-block-size: 512MB
performance.low-prio-threads: 32
cluster.data-self-heal-algorithm: full
cluster.locking-scheme: granular
cluster.shd-wait-qlength: 10000
cluster.shd-max-threads: 6
network.ping-timeout: 30
user.cifs: off
nfs.disable: on
performance.strict-o-direct: on
[root at ovirt02 peers]#
And on ovirt03
[root at ovirt03 ~]# gluster volume info export
Volume Name: export
Type: Replicate
Volume ID: b00e5839-becb-47e7-844f-6c...
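Since the thread is about the op-version needed for reset-brick, a hedged way to check it on reasonably recent GlusterFS releases (the option names are standard; the volume name "export" is the one shown above):
  gluster volume get all cluster.op-version       # current cluster op-version
  gluster volume get all cluster.max-op-version   # highest op-version the installed packages support
  gluster volume get export cluster.shd-max-threads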
2017 May 29
1
Heal operation detail of EC volumes
Hi,
When a brick fails in EC, What is the healing read/write data path?
Which processes do the operations?
Assume a 2GB file is being healed in a 16+4 EC configuration. I was
thinking that the SHD daemon on the failed brick's host will read 2GB from
the network, reconstruct its 100MB chunk and write it onto the brick. Is
this right?
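A rough sketch of the numbers in that example (purely illustrative arithmetic):
  # 2GB file on a 16+4 dispersed volume: each data brick holds ~1/16 of it
  echo $(( 2048 / 16 ))   # => 128 (MB per fragment; the ~100MB above is the same ballpark)
  # so the healing SHD reads ~16 fragments (~2GB) over the network and
  # writes back one ~128MB fragment to the local brick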
2017 Jul 05
1
op-version for reset-brick (Was: Re: [ovirt-users] Upgrading HC from 4.0 to 4.1)
...type: auto
> cluster.server-quorum-type: server
> storage.owner-uid: 36
> storage.owner-gid: 36
> features.shard: on
> features.shard-block-size: 512MB
> performance.low-prio-threads: 32
> cluster.data-self-heal-algorithm: full
> cluster.locking-scheme: granular
> cluster.shd-wait-qlength: 10000
> cluster.shd-max-threads: 6
> network.ping-timeout: 30
> user.cifs: off
> nfs.disable: on
> performance.strict-o-direct: on
> [root at ovirt01 bricks]# gluster volume reset-brick export
> ovirt02.localdomain.local:/gluster/brick3/export start
> volume re...
2017 Sep 13
0
Issues with bricks and shd failing to start
If you encounter issues where bricks and/or the self-heal daemon sometimes fail
to start, please see these bugs:
https://bugzilla.redhat.com/show_bug.cgi?id=1491059
https://bugzilla.redhat.com/show_bug.cgi?id=1491060
The above bugs are filed against 3.10.4.
and this post where the OP was running 3.11.2:
http://lists.gluster.org/pipermail/gluster-users/2017-September/032433.html
Hope this helps.
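Not part of the original post, but a hedged quick check for whether the self-heal daemon actually came up (volume name is a placeholder):
  gluster volume status <VOLNAME> shd    # shd is a valid status target
  ps aux | grep '[g]lustershd'           # or look for the shd process directly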
2024 Jul 14
1
Splitting of sshd binaries in 9.8?
I realize that the splitting of the sshd binaries is a work in progress.
Nonetheless I am trying to make a diagram of the situation as of 9.8.
How close have I gotten?
Is it correct that currently for a basic session, binaries are run four
ways?
1. A privileged binary to listen for incoming connections (66717 below)
2. A privileged s...
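One hedged way to inspect the layout on a running 9.8 host, assuming pstree and pgrep are available (PIDs will of course differ from the 66717 mentioned above):
  pstree -ap $(pgrep -xo sshd)   # tree rooted at the oldest (listener) sshd, with arguments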
2017 Jun 27
2
Gluster volume not mounted
...e.read-ahead: off
performance.io-cache: off
performance.stat-prefetch: off
performance.low-prio-threads: 32
network.remote-dio: off
cluster.eager-lock: enable
cluster.quorum-type: auto
cluster.server-quorum-type: server
cluster.data-self-heal-algorithm: full
cluster.locking-scheme: granular
cluster.shd-max-threads: 8
cluster.shd-wait-qlength: 10000
features.shard: on
user.cifs: off
storage.owner-uid: 36
storage.owner-gid: 36
network.ping-timeout: 30
performance.strict-o-direct: on
cluster.granular-entry-heal: enable
Volume Name: engine
Type: Replicate
Volume ID: b160f0b2-8bd3-4ff2-a07c-134cab151...
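A hedged first troubleshooting step for the "volume not mounted" case (the volume name "engine" comes from the output above; the server and mount point are placeholders):
  gluster volume status engine
  mount -t glusterfs <server>:/engine /mnt/engine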
2017 Jul 05
1
op-version for reset-brick (Was: Re: [ovirt-users] Upgrading HC from 4.0 to 4.1)
...nable
network.remote-dio: off
cluster.quorum-type: auto
cluster.server-quorum-type: server
storage.owner-uid: 36
storage.owner-gid: 36
features.shard: on
features.shard-block-size: 512MB
performance.low-prio-threads: 32
cluster.data-self-heal-algorithm: full
cluster.locking-scheme: granular
cluster.shd-wait-qlength: 10000
cluster.shd-max-threads: 6
network.ping-timeout: 30
user.cifs: off
nfs.disable: on
performance.strict-o-direct: on
[root at ovirt01 bricks]# gluster volume reset-brick export
ovirt02.localdomain.local:/gluster/brick3/export start
volume reset-brick: success: reset-brick start op...
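For reference, the reset-brick sequence normally ends with a commit once the brick directory has been re-prepared; a hedged sketch using the same brick as above (check "gluster volume help" on your version):
  gluster volume reset-brick export ovirt02.localdomain.local:/gluster/brick3/export start
  gluster volume reset-brick export ovirt02.localdomain.local:/gluster/brick3/export \
      ovirt02.localdomain.local:/gluster/brick3/export commit force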
2017 Jul 11
1
Replica 3 with arbiter - heal error?
...uick-read: off
performance.io-cache: off
performance.stat-prefetch: off
performance.low-prio-threads: 32
network.remote-dio: enable
cluster.eager-lock: enable
cluster.quorum-type: auto
cluster.server-quorum-type: server
cluster.data-self-heal-algorithm: full
cluster.locking-scheme: granular
cluster.shd-max-threads: 8
cluster.shd-wait-qlength: 10000
user.cifs: off
-ps
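Not from the thread itself, but the usual hedged first step for a replica 3 + arbiter heal error is to list what is pending (volume name is a placeholder):
  gluster volume heal <VOLNAME> info
  gluster volume heal <VOLNAME> info split-brain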
2017 Jul 21
0
[ovirt-users] ovirt 4.1 hosted engine hyper converged on glusterfs 3.8.10 : "engine" storage domain alway complain about "unsynced" elements
...formance.io-cache: off
performance.stat-prefetch: off
performance.low-prio-threads: 32
network.remote-dio: enable
cluster.eager-lock: enable
cluster.quorum-type: auto
cluster.server-quorum-type: server
cluster.data-self-heal-algorithm: full
cluster.locking-scheme: granular
cluster.shd-max-threads: 8
cluster.shd-wait-qlength: 10000
features.shard: on
user.cifs: off
storage.owner-gid: 36
features.shard-block-size: 512MB
network.ping-timeout: 30
performance.strict-o-direct: on
cluster.granular-entry-heal: on
auth.allow: *
server.allow-insecure: on
Volume...
2017 Jun 28
0
Gluster volume not mounted
...gt; performance.stat-prefetch: off
> performance.low-prio-threads: 32
> network.remote-dio: off
> cluster.eager-lock: enable
> cluster.quorum-type: auto
> cluster.server-quorum-type: server
> cluster.data-self-heal-algorithm: full
> cluster.locking-scheme: granular
> cluster.shd-max-threads: 8
> cluster.shd-wait-qlength: 10000
> features.shard: on
> user.cifs: off
> storage.owner-uid: 36
> storage.owner-gid: 36
> network.ping-timeout: 30
> performance.strict-o-direct: on
> cluster.granular-entry-heal: enable
>
> Volume Name: engine
> Type:...
2017 Sep 04
2
Slow performance of gluster volume
...luster/vms/brick
Brick2: gluster1:/gluster/vms/brick
Brick3: gluster2:/gluster/vms/brick (arbiter)
Options Reconfigured:
cluster.granular-entry-heal: enable
performance.strict-o-direct: on
network.ping-timeout: 30
storage.owner-gid: 36
storage.owner-uid: 36
user.cifs: off
features.shard: on
cluster.shd-wait-qlength: 10000
cluster.shd-max-threads: 8
cluster.locking-scheme: granular
cluster.data-self-heal-algorithm: full
cluster.server-quorum-type: server
cluster.quorum-type: auto
cluster.eager-lock: enable
network.remote-dio: off
performance.low-prio-threads: 32
performance.stat-prefetch: off
perf...
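A hedged suggestion for the slow-volume case, not taken from the thread: gluster's built-in profiler shows per-FOP latency on the bricks (volume name is a placeholder):
  gluster volume profile <VOLNAME> start
  gluster volume profile <VOLNAME> info    # run after reproducing the slow workload
  gluster volume profile <VOLNAME> stop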
2017 Sep 13
1
[3.11.2] Bricks disconnect from gluster with 0-transport: EPOLLERR
...k fails to start.
b. The self-heal-daemon pid file leaves a stale pid and glusterd
indiscriminately kills that pid when it is started. Pid files are stored in
`/var/lib/glusterd`, which persists across reboots. When glusterd is
started (or restarted, or the host rebooted), any process whose pid
matches the pid in the shd pid file is killed.
Due to the nature of these bugs, sometimes bricks/shd will start and
sometimes they will not; restart success may be intermittent. This bug
is most likely to occur when services were running with a low pid and
the host is then rebooted, since reboots tend to densely group pids in
lower...
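A hedged way to see which pid files glusterd has left behind before restarting it (the exact layout under /var/lib/glusterd varies by version):
  find /var/lib/glusterd -name '*.pid' -exec ls -l {} +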
2017 Sep 06
2
Slow performance of gluster volume
...ds: on
>> features.shard-block-size: 512MB
>> cluster.granular-entry-heal: enable
>> performance.strict-o-direct: on
>> network.ping-timeout: 30
>> storage.owner-gid: 36
>> storage.owner-uid: 36
>> user.cifs: off
>> features.shard: on
>> cluster.shd-wait-qlength: 10000
>> cluster.shd-max-threads: 8
>> cluster.locking-scheme: granular
>> cluster.data-self-heal-algorithm: full
>> cluster.server-quorum-type: server
>> cluster.quorum-type: auto
>> cluster.eager-lock: enable
>> network.remote-dio: off
>&...
2017 Jul 05
1
op-version for reset-brick (Was: Re: [ovirt-users] Upgrading HC from 4.0 to 4.1)
...type: auto
> cluster.server-quorum-type: server
> storage.owner-uid: 36
> storage.owner-gid: 36
> features.shard: on
> features.shard-block-size: 512MB
> performance.low-prio-threads: 32
> cluster.data-self-heal-algorithm: full
> cluster.locking-scheme: granular
> cluster.shd-wait-qlength: 10000
> cluster.shd-max-threads: 6
> network.ping-timeout: 30
> user.cifs: off
> nfs.disable: on
> performance.strict-o-direct: on
> [root at ovirt02 peers]#
>
> And on ovirt03
>
> [root at ovirt03 ~]# gluster volume info export
>
> Volume Name: exp...
2017 Jul 21
2
[ovirt-users] ovirt 4.1 hosted engine hyper converged on glusterfs 3.8.10 : "engine" storage domain alway complain about "unsynced" elements
2017-07-20 14:48 GMT+02:00 Ravishankar N <ravishankar at redhat.com>:
>
> But it does say something. All these gfids of completed heals in the log
> below are for the ones that you have given the getfattr output of. So
> what is likely happening is there is an intermittent connection problem
> between your mount and the brick process, leading to pending heals again
>
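A hedged way to watch whether the pending-heal count really drains between those intermittent disconnects (volume name is a placeholder):
  watch -n 10 'gluster volume heal <VOLNAME> statistics heal-count'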
2009 Apr 20
1
AstDB & MixMonitor queries
...ally want all this to be stored in a single file which my agent would listen to and record a response to. My queries: i) Is it advisable to record the caller responses as a single prompt in append mode, or as 3 separate files and combine them using php? Since every time the IVR would confirm the input it should play only that particular prompt, for example while asking for age confirmation it should only play "you entered age as:" <caller response on age>, how will asterisk know where to play from in that single file if i choose to go with just 1 file? how can i use mixmonitor effectively
Thanks
S...