Displaying 20 results from an estimated 3000 matches similar to: "[Gluster-devel] Removal of use-compound-fops option in afr"
2017 Oct 26
0
not healing one file
Hi Richard,
Thanks for the information. As you said, there is a gfid mismatch for the
file.
On brick-1 & brick-2 the gfids are same & on brick-3 the gfid is different.
This is not considered a split-brain because we have two good copies here.
Gluster 3.10 does not have a method to resolve this situation other than
manual intervention [1]. Basically what you need to do is remove the
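For reference, a rough sketch of the kind of manual resolution referred to above, assuming the mismatching copy is the one on brick-3; the brick path, file path and volume name below are placeholders, not values from this thread:

# Run on the node hosting the brick with the bad copy (brick-3 here).
BRICK=/bricks/brick3/data        # placeholder brick path
FILE=path/to/problem-file        # file path relative to the brick root

# 1. Note the gfid of the bad copy (hex value of the trusted.gfid xattr).
getfattr -n trusted.gfid -e hex "$BRICK/$FILE"

# 2. Remove the bad copy and its gfid hardlink under .glusterfs
#    (the hardlink sits at .glusterfs/<first 2 hex chars>/<next 2>/<full gfid>).
rm -f "$BRICK/$FILE"
rm -f "$BRICK/.glusterfs/xx/yy/<gfid>"   # substitute the gfid noted in step 1

# 3. Trigger a heal so the two good copies repopulate brick-3.
gluster volume heal <volname>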
2017 Jun 29
1
afr-self-heald.c:479:afr_shd_index_sweep
Il 29/06/2017 16:27, Pranith Kumar Karampuri ha scritto:
>
>
> On Thu, Jun 29, 2017 at 7:48 PM, Paolo Margara
> <paolo.margara at polito.it <mailto:paolo.margara at polito.it>> wrote:
>
> Hi Pranith,
>
> I'm using this guide
> https://github.com/nixpanic/glusterdocs/blob/f6d48dc17f2cb6ee4680e372520ec3358641b2bc/Upgrade-Guide/upgrade_to_3.8.md
2017 Jun 29
0
afr-self-heald.c:479:afr_shd_index_sweep
On Thu, Jun 29, 2017 at 7:48 PM, Paolo Margara <paolo.margara at polito.it>
wrote:
> Hi Pranith,
>
> I'm using this guide https://github.com/nixpanic/glusterdocs/blob/f6d48dc17f2cb6ee4680e372520ec3358641b2bc/Upgrade-Guide/upgrade_to_3.8.md
>
> Definitely my fault, but I think it is better to specify somewhere that
> restarting the service is not enough, simply
2017 Jun 29
0
afr-self-heald.c:479:afr_shd_index_sweep
Paolo,
Which document did you follow for the upgrade? We can fix the
documentation if there are any issues.
On Thu, Jun 29, 2017 at 2:07 PM, Ravishankar N <ravishankar at redhat.com>
wrote:
> On 06/29/2017 01:08 PM, Paolo Margara wrote:
>
> Hi all,
>
> for the upgrade I followed this procedure:
>
> - put node in maintenance mode (ensure no clients are active)
2017 Jun 29
2
afr-self-heald.c:479:afr_shd_index_sweep
Hi Pranith,
I'm using this guide
https://github.com/nixpanic/glusterdocs/blob/f6d48dc17f2cb6ee4680e372520ec3358641b2bc/Upgrade-Guide/upgrade_to_3.8.md
Definitely my fault, but I think it is better to specify somewhere
that restarting the service is not enough, simply because in many other
cases, with other services, it is sufficient.
Now I'm restarting every brick process (and waiting for
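(For anyone hitting the same thing: one way to restart the brick processes on a node without a full reboot is sketched below. It assumes the node is already in maintenance mode and uses the volume names mentioned in this thread; adjust as needed.)

# Stop the old brick and self-heal processes on this node
# (note: 'pkill glusterfs' also stops any FUSE mounts on the node).
pkill glusterfsd
pkill glusterfs

# Restart glusterd and ask it to respawn any bricks that are down,
# so they come back up running the upgraded binaries.
systemctl restart glusterd        # or: service glusterd restart
gluster volume start iso-images-repo force
gluster volume start vm-images-repo force

# Confirm all bricks and self-heal daemons are back online.
gluster volume status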
2017 Jun 29
2
afr-self-heald.c:479:afr_shd_index_sweep
On 06/29/2017 01:08 PM, Paolo Margara wrote:
>
> Hi all,
>
> for the upgrade I followed this procedure:
>
> * put node in maintenance mode (ensure no clients are active)
> * yum versionlock delete glusterfs*
> * service glusterd stop
> * yum update
> * systemctl daemon-reload
> * service glusterd start
> * yum versionlock add glusterfs*
> *
2017 Jun 29
0
afr-self-heald.c:479:afr_shd_index_sweep
Hi all,
for the upgrade I followed this procedure:
* put node in maintenance mode (ensure no clients are active)
* yum versionlock delete glusterfs*
* service glusterd stop
* yum update
* systemctl daemon-reload
* service glusterd start
* yum versionlock add glusterfs*
* gluster volume heal vm-images-repo full
* gluster volume heal vm-images-repo info
on each server every time
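Put together, the per-node steps above amount to roughly the following script (a sketch only: it assumes the yum versionlock plugin is installed and uses the volume name from the original mail).

#!/bin/bash
# Rolling upgrade of a single node; run on each server in turn,
# after putting the node in maintenance mode.
set -e

yum versionlock delete 'glusterfs*'   # unpin the packages so they can be upgraded
service glusterd stop
yum -y update
systemctl daemon-reload
service glusterd start
yum versionlock add 'glusterfs*'      # pin the new version again

# Trigger and monitor healing before moving on to the next node.
gluster volume heal vm-images-repo full
gluster volume heal vm-images-repo info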
2017 Jun 28
2
afr-self-heald.c:479:afr_shd_index_sweep
On Wed, Jun 28, 2017 at 9:45 PM, Ravishankar N <ravishankar at redhat.com>
wrote:
> On 06/28/2017 06:52 PM, Paolo Margara wrote:
>
>> Hi list,
>>
>> yesterday I noticed the following lines in the glustershd.log log file:
>>
>> [2017-06-28 11:53:05.000890] W [MSGID: 108034]
>> [afr-self-heald.c:479:afr_shd_index_sweep]
>>
2018 Mar 06
0
SQLite3 on 3 node cluster FS?
>
>
> If anyone would like our test scripts, I can either tar them up and
> email them or put them in github - either is fine with me. (they rely
> on current builds of docker and docker-compose)
>
>
Sure, sharing the test cases makes it very easy for us to see what the
issue would be. I would recommend a GitHub repo for the scripts.
Regards,
Amar
2017 Jun 28
0
afr-self-heald.c:479:afr_shd_index_sweep
On 06/28/2017 06:52 PM, Paolo Margara wrote:
> Hi list,
>
> yesterday I noticed the following lines in the glustershd.log log file:
>
> [2017-06-28 11:53:05.000890] W [MSGID: 108034]
> [afr-self-heald.c:479:afr_shd_index_sweep]
> 0-iso-images-repo-replicate-0: unable to get index-dir on
> iso-images-repo-client-0
> [2017-06-28 11:53:05.001146] W [MSGID: 108034]
>
2017 Jun 28
3
afr-self-heald.c:479:afr_shd_index_sweep
Hi list,
yesterday I noticed the following lines in the glustershd.log log file:
[2017-06-28 11:53:05.000890] W [MSGID: 108034]
[afr-self-heald.c:479:afr_shd_index_sweep]
0-iso-images-repo-replicate-0: unable to get index-dir on
iso-images-repo-client-0
[2017-06-28 11:53:05.001146] W [MSGID: 108034]
[afr-self-heald.c:479:afr_shd_index_sweep] 0-vm-images-repo-replicate-0:
unable to get index-dir
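(A quick way to check the index the self-heal daemon is complaining about, directly on a brick; the brick path below is a placeholder, not one from this thread.)

# The heal index lives under .glusterfs/indices on each brick.
# If xattrop/ is missing or unreadable, afr_shd_index_sweep logs the
# 'unable to get index-dir' warning seen above.
ls -l /path/to/brick/.glusterfs/indices/
ls -l /path/to/brick/.glusterfs/indices/xattrop/ | head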
2017 Sep 20
1
"Input/output error" on mkdir for PPC64 based client
I put the share into debug mode and then repeated the process from a ppc64
client and an x86 client. Weirdly the client logs were almost identical.
Here's the ppc64 gluster client log of attempting to create a folder...
-------------
[2017-09-20 13:34:23.344321] D
[rpc-clnt-ping.c:93:rpc_clnt_remove_ping_timer_locked] (-->
2017 Sep 20
0
"Input/output error" on mkdir for PPC64 based client
Looks like it is an issue with architecture compatibility in the RPC layer
(i.e., with XDRs and how they are used). Glance through the logs of the client
process where you saw the errors; they could give some hints. If you don't
understand the logs, share them and we will try to look into it.
-Amar
On Wed, Sep 20, 2017 at 2:40 AM, Walter Deignan <WDeignan at uline.com> wrote:
> I recently
2017 Aug 25
0
Gluster 4.0: Update
I did a quick Google search to see what Halo Replication was - a nice feature,
very useful.
Unfortunately I also found this:
https://www.google.com/patents/US20160028806
>Halo based file system replication
>US 20160028806 A1
Is this an issue?
On 25 August 2017 at 10:33, Amar Tumballi <atumball at redhat.com> wrote:
> Hello Everyone,
>
> 3 weeks back we (most of the
2018 Jan 31
3
df does not show full volume capacity after update to 3.12.4
Amar,
Thanks for your prompt reply. No, I do not plan to fix the code and re-compile. I was hoping it could be fixed by setting shared-brick-count or some other option. Since this is a production system, we will wait until a fix is in a release.
Thanks,
Eva (865) 574-6894
From: Amar Tumballi <atumball at redhat.com>
Date: Wednesday, January 31, 2018 at 12:15 PM
To: Eva Freer
2018 Feb 01
0
df does not show full volume capacity after update to 3.12.4
Hi,
I think we have a workaround to use until we have a fix in the code. The
following worked on my system.
Copy the attached file to /usr/lib/glusterfs/3.12.4/filter/. (You
might need to create the filter directory in this path.)
Make sure the file has execute permissions. On my system:
[root@rhgsserver1 fuse2]# cd /usr/lib/glusterfs/3.12.5/
[root@rhgsserver1 3.12.5]# l
total 4.0K
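In shell terms the workaround above looks roughly like this (a sketch: the attached file is referred to as 'filter-script' here because its real name is not shown, and the version directory must match your installed glusterfs version):

VER=3.12.4                                         # match your installed version
mkdir -p /usr/lib/glusterfs/$VER/filter            # create the filter dir if missing
cp filter-script /usr/lib/glusterfs/$VER/filter/   # the file attached to this mail
chmod +x /usr/lib/glusterfs/$VER/filter/filter-script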
2018 Apr 13
0
Proposal to make Design Spec and Document for a feature mandatory.
All,
Thanks to Nigel, this is now deployed, and any new patches referencing
GitHub (i.e., new features) need the 'DocApproved' and 'SpecApproved' labels.
Regards,
Amar
On Mon, Apr 2, 2018 at 10:40 AM, Amar Tumballi <atumball at redhat.com> wrote:
> Hi all,
>
> Better documentation about the feature, and also information about how
> to use the features, are one
2018 Jan 31
0
df does not show full volume capacity after update to 3.12.4
Hi Freer,
Our analysis is that this issue is caused by
https://review.gluster.org/17618. Specifically, in
'gd_set_shared_brick_count()' from
https://review.gluster.org/#/c/17618/9/xlators/mgmt/glusterd/src/glusterd-utils.c
.
But even if we fix it today, I don't think we have a release planned
immediately for shipping this. Are you planning to fix the code and
re-compile?
Regards,
2011 Aug 24
1
Input/output error
Hi, everyone.
It's nice meeting you.
I am not very good at English, sorry.
I am writing because I'd like to update GlusterFS to 3.2.2-1, and I want
to change from a gluster mount to an NFS mount.
I installed GlusterFS 3.2.1 one week ago, replicated across 2 servers.
OS:CentOS5.5 64bit
RPM:glusterfs-core-3.2.1-1
glusterfs-fuse-3.2.1-1
Command:
gluster volume create syncdata replica 2 transport tcp
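(For the mount change being asked about, the two client-side mounts look roughly like this; a sketch with placeholder server and mount point names, and note that Gluster's built-in NFS server speaks NFSv3.)

# Native FUSE mount of the replica 2 volume:
mount -t glusterfs server1:/syncdata /mnt/syncdata

# NFS mount of the same volume via Gluster's built-in NFS server:
mount -t nfs -o vers=3,tcp server1:/syncdata /mnt/syncdata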
2017 Aug 25
2
Gluster 4.0: Update
Hello Everyone,
3 weeks back we (most of the maintainers of the Gluster projects) had a
meeting, and we discussed the features required for Gluster 4.0 and also
the possible dates.
<https://hackmd.io/GwIwnGBmCsCM0FoDMsAcBTBAWADLLCYATLJiOmBVgMYAmAhvbLUA#summary>
Summary:
- It was agreed unanimously that Gluster 4.0 should be a feature-based
release, and not just a time-based one.
-