Displaying 20 results from an estimated 7000 matches similar to: "Issues with bricks and shd failing to start"
2017 Sep 13
1
[3.11.2] Bricks disconnect from gluster with 0-transport: EPOLLERR
I ran into something like this in 3.10.4 and filed two bugs for it:
https://bugzilla.redhat.com/show_bug.cgi?id=1491059
https://bugzilla.redhat.com/show_bug.cgi?id=1491060
Please see the above bugs for full detail.
In summary, my issue was related to glusterd's handling of pid files
when it starts self-heal and bricks. The issues are:
a. the brick pid file is left with a stale pid and the brick fails
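A minimal sketch of how such a stale pid file could be detected, assuming the
default pid file locations (these moved between releases, so the paths below
are an assumption) and that brick processes run as glusterfsd:

# Assumed paths; older releases keep brick pid files under
# /var/lib/glusterd/vols/<vol>/run/ rather than /var/run/gluster/vols/.
for pf in /var/run/gluster/vols/*/*.pid; do
    pid=$(cat "$pf" 2>/dev/null)
    [ -n "$pid" ] || continue
    # A pid is stale if no running gluster process owns it any more.
    if ! ps -p "$pid" -o comm= 2>/dev/null | grep -q gluster; then
        echo "stale pid file: $pf (pid $pid)"
    fi
done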
2017 Sep 28
1
one brick one volume process dies?
On 13/09/17 20:47, Ben Werthmann wrote:
> These symptoms appear to be the same as I've recorded in
> this post:
>
> http://lists.gluster.org/pipermail/gluster-users/2017-September/032435.html
>
> On Wed, Sep 13, 2017 at 7:01 AM, Atin Mukherjee
> <atin.mukherjee83 at gmail.com
> <mailto:atin.mukherjee83 at gmail.com>> wrote:
>
> Additionally the
2017 Sep 13
0
one brick one volume process dies?
These symptoms appear to be the same as I've recorded in this post:
http://lists.gluster.org/pipermail/gluster-users/2017-September/032435.html
On Wed, Sep 13, 2017 at 7:01 AM, Atin Mukherjee <atin.mukherjee83 at gmail.com>
wrote:
> Additionally, the brick log file of the same brick would be required.
> Please check whether the brick process went down or crashed. Doing a volume start
2017 Sep 13
2
one brick one volume process dies?
Additionally, the brick log file of the same brick would be required. Please
check whether the brick process went down or crashed. Doing a volume start force
should resolve the issue.
On Wed, 13 Sep 2017 at 16:28, Gaurav Yadav <gyadav at redhat.com> wrote:
> Please send me the logs as well, i.e. the glusterd logs and cmd_history.log.
>
>
> On Wed, Sep 13, 2017 at 1:45 PM, lejeczek
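For reference, the force start suggested above, followed by a status check,
would look like this (the volume name foo is a placeholder):

gluster volume start foo force
gluster volume status foo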
2017 Jul 05
1
op-version for reset-brick (Was: Re: [ovirt-users] Upgrading HC from 4.0 to 4.1)
On Wed, Jul 5, 2017 at 5:22 PM, Atin Mukherjee <amukherj at redhat.com> wrote:
> And what does glusterd log indicate for these failures?
>
See here in gzip format
https://drive.google.com/file/d/0BwoPbcrMv8mvYmlRLUgyV0pFN0k/view?usp=sharing
It seems that on each host the peer files have been updated with a new
entry "hostname2":
[root@ovirt01 ~]# cat
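The peer definitions referred to here normally live as one file per peer UUID
under /var/lib/glusterd/peers/ (assuming the default working directory), so a
quick way to spot an unexpected hostname2 entry on each host is:

grep -H . /var/lib/glusterd/peers/*
gluster peer status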
2017 Aug 06
1
[3.11.2] Bricks disconnect from gluster with 0-transport: EPOLLERR
Hi,
I have a distributed volume which runs on Fedora 26 systems with
glusterfs 3.11.2 from gluster.org repos:
----------
[root@taupo ~]# glusterd --version
glusterfs 3.11.2
gluster> volume info gv2
Volume Name: gv2
Type: Distribute
Volume ID: 6b468f43-3857-4506-917c-7eaaaef9b6ee
Status: Started
Snapshot Count: 0
Number of Bricks: 6
Transport-type: tcp
Bricks:
Brick1:
2018 Jan 08
0
different names for bricks
I just noticed that gluster volume info foo and gluster volume heal
foo statistics use different indices for brick numbers: info uses
1-based numbering, but heal statistics uses 0-based.
gluster volume info clifford
Volume Name: clifford
Type: Distributed-Replicate
Volume ID: 0e33ff98-53e8-40cf-bdb0-3e18406a945a
Status: Started
Snapshot Count: 0
Number of Bricks: 2 x 2 = 4
Transport-type: tcp
Bricks:
Brick1:
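A hedged illustration of the mismatch, reusing the volume name from the
excerpt; both commands are standard, though output labels vary by release:

gluster volume info clifford              # bricks listed as Brick1 .. Brick4
gluster volume heal clifford statistics   # bricks reported by number starting at 0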
2017 Jul 05
1
op-version for reset-brick (Was: Re: [ovirt-users] Upgrading HC from 4.0 to 4.1)
And what does glusterd log indicate for these failures?
On Wed, Jul 5, 2017 at 8:43 PM, Gianluca Cecchi <gianluca.cecchi at gmail.com>
wrote:
>
>
> On Wed, Jul 5, 2017 at 5:02 PM, Sahina Bose <sabose at redhat.com> wrote:
>
>>
>>
>> On Wed, Jul 5, 2017 at 8:16 PM, Gianluca Cecchi <
>> gianluca.cecchi at gmail.com> wrote:
>>
>>>
2017 Jun 02
1
Heal operation detail of EC volumes
Hi Serkan,
On Thursday, June 01, 2017 21:31 CEST, Serkan Çoban <cobanserkan at gmail.com> wrote:
>Is it possible that this matches your observations?
Yes, that matches what I see. So 19 files are being healed in parallel by 19
SHD processes. I thought only one file was being healed at a time.
Then what is the meaning of the disperse.shd-max-threads parameter? If I
set it to 2, then each SHD thread
2017 Jun 08
1
Heal operation detail of EC volumes
On Fri, Jun 2, 2017 at 1:01 AM, Serkan Çoban <cobanserkan at gmail.com> wrote:
> >Is it possible that this matches your observations?
> Yes, that matches what I see. So 19 files are being healed in parallel by 19
> SHD processes. I thought only one file was being healed at a time.
> Then what is the meaning of the disperse.shd-max-threads parameter? If I
> set it to 2, then each SHD
2017 Jun 01
0
Heal operation detail of EC volumes
>Is it possible that this matches your observations?
Yes, that matches what I see. So 19 files are being healed in parallel by 19
SHD processes. I thought only one file was being healed at a time.
Then what is the meaning of the disperse.shd-max-threads parameter? If I
set it to 2, then will each SHD thread heal two files at the same time?
>How many IOPS can your bricks handle?
Bricks are 7200RPM NL-SAS
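The option being discussed can be inspected and changed per volume; a minimal
example, assuming a disperse volume named gv-ec (hypothetical name):

gluster volume get gv-ec disperse.shd-max-threads
gluster volume set gv-ec disperse.shd-max-threads 2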
2017 Jul 05
1
op-version for reset-brick (Was: Re: [ovirt-users] Upgrading HC from 4.0 to 4.1)
OK, so the log just hints to the following:
[2017-07-05 15:04:07.178204] E [MSGID: 106123]
[glusterd-mgmt.c:1532:glusterd_mgmt_v3_commit] 0-management: Commit failed
for operation Reset Brick on local node
[2017-07-05 15:04:07.178214] E [MSGID: 106123]
[glusterd-replace-brick.c:649:glusterd_mgmt_v3_initiate_replace_brick_cmd_phases]
0-management: Commit Op Failed
While going through the code,
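Since the thread is about the op-version needed for reset-brick, a hedged way
to check and raise the cluster op-version is shown below; the numeric value is
an example only and must match the installed GlusterFS release:

gluster volume get all cluster.op-version        # current cluster op-version
gluster volume set all cluster.op-version 30900  # example value; use the one for your release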
2017 Jun 01
3
Heal operation detail of EC volumes
Hi Serkan,
On 30/05/17 10:22, Serkan Çoban wrote:
> Ok, I understand that the heal operation takes place on the server side. In
> this case I should see X KB of
> outbound network traffic from 16 servers and 16X KB of inbound traffic to the
> failed brick server, right? So that process will get 16 chunks,
> recalculate its own chunk and write it to disk.
That should be the normal operation for a single
2017 Sep 21
0
Backup and Restore strategy in Gluster FS 3.8.4
Not so fast, 3.8.4 is the latest if you are using official RHEL rpms
from Red Hat Gluster Storage, so support for that should go through
your Red Hat subscription. If you are using the community packages,
then yes, you want to update to a more current version.
Seems like the latest is: 3.8.4-44.el7
Diego
On Wed, Sep 20, 2017 at 1:45 PM, Ben Werthmann <ben at apcera.com> wrote:
>
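A hedged way to tell whether the installed 3.8.4 packages are Red Hat Gluster
Storage builds or community builds (package names can differ per distribution):

rpm -qi glusterfs-server | egrep '^(Version|Release|Vendor)'
yum info glusterfs-server    # the "From repo" field shows where it was installed from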
2017 Jul 05
1
op-version for reset-brick (Was: Re: [ovirt-users] Upgrading HC from 4.0 to 4.1)
On Wed, Jul 5, 2017 at 5:02 PM, Sahina Bose <sabose at redhat.com> wrote:
>
>
> On Wed, Jul 5, 2017 at 8:16 PM, Gianluca Cecchi <gianluca.cecchi at gmail.com
> > wrote:
>
>>
>>
>> On Wed, Jul 5, 2017 at 7:42 AM, Sahina Bose <sabose at redhat.com> wrote:
>>
>>>
>>>
>>>> ...
>>>>
>>>> then
2017 Sep 21
0
Backup and Restore strategy in Gluster FS 3.8.4
Good point Diego. Thanks!
On Thu, Sep 21, 2017 at 9:49 AM, Diego Remolina <dijuremo at gmail.com> wrote:
> Not so fast, 3.8.4 is the latest if you are using official RHEL rpms
> from Red Hat Gluster Storage, so support for that should go through
> your Red Hat subscription. If you are using the community packages,
> then yes, you want to update to a more current version.
>
2024 Nov 05
1
Add an arbiter when have multiple bricks at same server.
Still getting the error.
pve01:~# gluster vol info
Volume Name: VMS
Type: Distributed-Replicate
Volume ID: e1a4f787-3f62-441e-a7ce-c0ae6b111ebf
Status: Started
Snapshot Count: 0
Number of Bricks: 3 x 2 = 6
Transport-type: tcp
Bricks:
Brick1: gluster1:/disco2TB-0/vms
Brick2: gluster2:/disco2TB-0/vms
Brick3: gluster1:/disco1TB-0/vms
Brick4: gluster2:/disco1TB-0/vms
Brick5: gluster1:/disco1TB-1/vms
2017 Nov 16
0
Missing files on one of the bricks
On 15 November 2017 at 19:57, Frederic Harmignies <
frederic.harmignies at elementai.com> wrote:
> Hello, we have 2x files that are missing from one of the bricks. No idea
> how to fix this.
>
> Details:
>
> # gluster volume info
>
> Volume Name: data01
> Type: Replicate
> Volume ID: 39b4479c-31f0-4696-9435-5454e4f8d310
> Status: Started
> Snapshot Count:
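When files exist on only one replica, the usual first step is to trigger and
inspect a self-heal; a sketch using the volume name from the excerpt:

gluster volume heal data01         # kick off an index heal
gluster volume heal data01 info    # entries still pending heal
gluster volume heal data01 full    # full crawl if the index heal misses the files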
2024 Nov 06
1
Add an arbiter when have multiple bricks at same server.
Ok.
I have a 3rd host with Debian 12 installed and Gluster v11. The name of the
host is arbiter!
I already added this host to the pool:
arbiter:~# gluster pool list
UUID Hostname State
0cbbfc27-3876-400a-ac1d-2d73e72a4bfd gluster1.home.local Connected
99ed1f1e-7169-4da8-b630-a712a5b71ccd gluster2 Connected
2024 Nov 08
1
Add an arbiter when have multiple bricks at same server.
After forcing the add-brick:
gluster volume add-brick VMS replica 3 arbiter 1 arbiter:/arbiter1
arbiter:/arbiter2 arbiter:/arbiter3 force
volume add-brick: success
pve01:~# gluster volume info
Volume Name: VMS
Type: Distributed-Replicate
Volume ID: e1a4f787-3f62-441e-a7ce-c0ae6b111ebf
Status: Started
Snapshot Count: 0
Number of Bricks: 3 x (2 + 1) = 9
Transport-type: tcp
Bricks:
Brick1:
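After an add-brick like the one above succeeds, a hedged follow-up is to
confirm the new arbiter bricks are online and let self-heal populate them:

gluster volume status VMS      # all 3 x (2 + 1) = 9 bricks should be online
gluster volume heal VMS info   # entries still being synced to the new arbiter bricks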