Thanks for your answer! I haven't found the link file. What is the link file's
name or format?
------------------
------LiLi
------------------ Original ------------------
From: "gluster-users-request" <gluster-users-request at gluster.org>
Date: Fri, Dec 23, 2016 08:00 PM
To: "gluster-users" <gluster-users at gluster.org>
Subject: Gluster-users Digest, Vol 104, Issue 22
Send Gluster-users mailing list submissions to
gluster-users at gluster.org
To subscribe or unsubscribe via the World Wide Web, visit
http://www.gluster.org/mailman/listinfo/gluster-users
or, via email, send a message with subject or body 'help' to
gluster-users-request at gluster.org
You can reach the person managing the list at
gluster-users-owner at gluster.org
When replying, please edit your Subject line so it is more specific
than "Re: Contents of Gluster-users digest..."
Today's Topics:
1. Re: Heal command stopped (Mohammed Rafi K C)
2. Re: install Gluster 3.9 on CentOS (Grant Ridder)
3. DHT DHTLINKFILE location (LiLi)
4. Re: DHT DHTLINKFILE location (Mohammed Rafi K C)
5. Re: File operation failure on simple distributed volume
(Mohammed Rafi K C)
----------------------------------------------------------------------
Message: 1
Date: Thu, 22 Dec 2016 18:26:34 +0530
From: Mohammed Rafi K C <rkavunga at redhat.com>
To: Miloš Ćućulović - MDPI <cuculovic at mdpi.com>,
"gluster-users at gluster.org" <gluster-users at gluster.org>
Subject: Re: [Gluster-users] Heal command stopped
Message-ID: <85da4ca8-82b4-52ab-091d-3951f20983e9 at redhat.com>
Content-Type: text/plain; charset=UTF-8
Hi Miloš Ćućulović,
Can you please give us the gluster volume info output and the log files for the
bricks, glusterd, and the self-heal daemon?
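A minimal sketch of commands that would gather this, and check the brick
processes mentioned in the error quoted below (the volume name 'storage' and
the default log directory /var/log/glusterfs are assumptions taken from the
report):
gluster volume info storage
gluster volume status storage      # every brick should show Online "Y" with a PID
gluster volume heal storage info   # per-brick counts of entries still pending heal
ls -l /var/log/glusterfs/          # glusterd and self-heal daemon (glustershd) logs
ls -l /var/log/glusterfs/bricks/   # per-brick logs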
Regards
Rafi KC
On 12/22/2016 03:56 PM, Miloš Ćućulović - MDPI wrote:
> I recently added a new replica server and have now:
> Number of Bricks: 1 x 2 = 2
>
> The heal was launched automatically and was working until yesterday (it had
> copied 5.5TB of files out of a total of 6.2TB). Now the copy seems to have
> stopped; I do not see any file changes on the new replica brick server.
>
> When I try to add a new file to the volume and check the physical files on
> the replica brick, the file is not there.
>
> When I try to run a full heal with the command:
> sudo gluster volume heal storage full
>
> I am getting:
>
> Launching heal operation to perform full self heal on volume storage
> has been unsuccessful on bricks that are down. Please check if all
> brick processes are running.
>
> My storage info shows both bricks there.
>
> Any idea?
>
>
------------------------------
Message: 2
Date: Thu, 22 Dec 2016 16:51:48 -0800
From: Grant Ridder <shortdudey123 at gmail.com>
To: "Kaleb S. KEITHLEY" <kkeithle at redhat.com>
Cc: gluster-users at gluster.org
Subject: Re: [Gluster-users] install Gluster 3.9 on CentOS
Message-ID:
<CAPiURgXNNasmJ3Mc2JTuCX=A74DxU5vDytM4Sr1jxwwUTCH--w at mail.gmail.com>
Content-Type: text/plain; charset="utf-8"
Thanks for the info! Generally speaking, how long has it taken in the past for
packages to be promoted to the main mirror? (I realize this might be skewed
right now due to the holiday season.)
-Grant
On Tue, Dec 20, 2016 at 10:36 AM, Kaleb S. KEITHLEY <kkeithle at
redhat.com>
wrote:
> On 12/20/2016 12:19 PM, Grant Ridder wrote:
>
>> Hi,
>>
>> I am not seeing 3.9 in the Storage SIG for CentOS 6 or 7
>> http://mirror.centos.org/centos/7.2.1511/storage/x86_64/
>> http://mirror.centos.org/centos/6.8/storage/x86_64/
>>
>> However, I do see it
>> here: http://buildlogs.centos.org/centos/7/storage/x86_64/
>>
>> Is that expected?
>>
>
> Yes.
>
>> Did the Storage SIG repo change locations?
>>
>
> No.
>
> Until someone tests and gives positive feedback they remain in buildlogs.
>
> Much the same way Fedora RPMs remain in Updates-Testing until they receive
> +3 karma (or wait for 14 days).
>
> --
>
> Kaleb
>
>
>
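In the meantime, one hedged way to pull the 3.9 packages from buildlogs for
testing (assuming the centos-release-gluster39 release package is available in
CentOS extras and, like other Storage SIG releases, ships a disabled "-test"
repo pointing at buildlogs):
yum install -y centos-release-gluster39
yum install -y --enablerepo=centos-gluster39-test glusterfs-server glusterfs-fuse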
-------------- next part --------------
An HTML attachment was scrubbed...
URL:
<http://www.gluster.org/pipermail/gluster-users/attachments/20161222/85f97d63/attachment-0001.html>
------------------------------
Message: 3
Date: Fri, 23 Dec 2016 10:56:34 +0800
From: "LiLi" <dylan-lili at foxmail.com>
To: "gluster-users" <gluster-users at gluster.org>
Subject: [Gluster-users] DHT DHTLINKFILE location
Message-ID: <tencent_2DB5F74A2E4E0B927A76FC68 at qq.com>
Content-Type: text/plain; charset="gb18030"
In GlusterFS 3.8, glusterfs creates a DHT link file on the hashed subvolume when
that subvolume has no free space or its inode count is over the limit. But I
can't find the link file that indicates the real (data) subvolume.
Thanks!
------------------
------LiLi
-------------- next part --------------
An HTML attachment was scrubbed...
URL:
<http://www.gluster.org/pipermail/gluster-users/attachments/20161223/45fd74ca/attachment-0001.html>
------------------------------
Message: 4
Date: Fri, 23 Dec 2016 11:46:56 +0530
From: Mohammed Rafi K C <rkavunga at redhat.com>
To: LiLi <dylan-lili at foxmail.com>, gluster-users
<gluster-users at gluster.org>
Subject: Re: [Gluster-users] DHT DHTLINKFILE location
Message-ID: <87940ba0-9590-46da-9d2f-d98957c5493c at redhat.com>
Content-Type: text/plain; charset="utf-8"
If you are sure that the link file has been created, then it will be on the
hashed subvolume only. Just do a find for the file on the backend and see (a
sketch is below).
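A hedged sketch of what to look for on a brick's backend, assuming a brick path
of /bricks/brick1 (adjust for your layout; the file path is illustrative). DHT
link files keep the same name as the real file, are zero bytes long with only
the sticky bit set (mode ---------T), and carry a trusted.glusterfs.dht.linkto
xattr naming the subvolume that holds the data:
find /bricks/brick1 -type f -perm 1000 -size 0             # candidate link files
getfattr -n trusted.glusterfs.dht.linkto -e text /bricks/brick1/path/to/FILE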
Regards
Rafi KC
On 12/23/2016 08:26 AM, LiLi wrote:
> In GlusterFS 3.8, glusterfs creates a DHT link file on the hashed subvolume
> when that subvolume has no free space or its inode count is over the limit.
> But I can't find the link file that indicates the real (data) subvolume.
> Thanks!
>
> ------------------
> ------LiLi
>
>
>
> _______________________________________________
> Gluster-users mailing list
> Gluster-users at gluster.org
> http://www.gluster.org/mailman/listinfo/gluster-users
-------------- next part --------------
An HTML attachment was scrubbed...
URL:
<http://www.gluster.org/pipermail/gluster-users/attachments/20161223/8e8b41bf/attachment-0001.html>
------------------------------
Message: 5
Date: Fri, 23 Dec 2016 14:33:56 +0530
From: Mohammed Rafi K C <rkavunga at redhat.com>
To: yonex <yonexyonex at icloud.com>
Cc: gluster-users at gluster.org
Subject: Re: [Gluster-users] File operation failure on simple
distributed volume
Message-ID: <8fb4baca-98bd-28eb-b96c-6787be80a829 at redhat.com>
Content-Type: text/plain; charset="utf-8"
Hi Yonex,
As we discussed on IRC in #gluster-devel, I have attached the gdb script
along with this mail.
Procedure to run the gdb script (a hedged walk-through follows below):
1) Install gdb.
2) Download and install the gluster debuginfo packages for your machine.
   Package location: https://cbs.centos.org/koji/buildinfo?buildID=12757
3) Find the process id and attach gdb to the process using the command:
   gdb attach <pid> -x <path_to_script>
4) Keep the script running until you hit the problem.
5) Stop gdb.
6) You will find a file called mylog.txt in the directory where you ran gdb.
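A minimal sketch of the steps above on a CentOS client; the debuginfo RPM file
name and the script path are illustrative assumptions, not taken from the koji
build page:
yum install -y gdb
# install the glusterfs debuginfo RPMs downloaded from the koji link above,
# e.g. (assumed file name):
rpm -ivh glusterfs-debuginfo-3.8.5-1.el6.x86_64.rpm
pidof glusterfs                        # pid of the FUSE client process
                                       # (use pidof glusterfsd for a brick)
gdb attach <pid> -x /root/gdb_script   # attach with the script from this mail
# reproduce the failure, then quit gdb; mylog.txt is written in the directory
# where gdb was started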
Please keep an eye on the attached process. If you have any doubts, please
feel free to get back to me.
Regards
Rafi KC
On 12/19/2016 05:33 PM, Mohammed Rafi K C wrote:
>
> On 12/19/2016 05:32 PM, Mohammed Rafi K C wrote:
>> Client 0-glusterfs01-client-2 has disconnected from the bricks around
>> 2016-12-15 11:21:17.854249. Can you look at and/or paste the brick logs
>> from around that time?
> You can find the brick name and hostname for 0-glusterfs01-client-2 from
> the client graph.
>
> Rafi
>
>> Are you in any of the Gluster IRC channels? If so, do you have a
>> nickname that I can search for?
>>
>> Regards
>> Rafi KC
>>
>> On 12/19/2016 04:28 PM, yonex wrote:
>>> Rafi,
>>>
>>> OK. Thanks for your guidance. I found the debug log and pasted the lines
>>> around it:
>>> http://pastebin.com/vhHR6PQN
>>>
>>> Regards
>>>
>>>
>>> 2016-12-19 14:58 GMT+09:00 Mohammed Rafi K C <rkavunga at redhat.com>:
>>>> On 12/16/2016 09:10 PM, yonex wrote:
>>>>> Rafi,
>>>>>
>>>>> Thanks, the .meta feature, which I didn't know about, is very nice. I have
>>>>> finally captured debug logs from a client and the bricks.
>>>>>
>>>>> A mount log:
>>>>> - http://pastebin.com/Tjy7wGGj
>>>>>
>>>>> FYI rickdom126 is my client's hostname.
>>>>>
>>>>> Brick logs around that time:
>>>>> - Brick1: http://pastebin.com/qzbVRSF3
>>>>> - Brick2: http://pastebin.com/j3yMNhP3
>>>>> - Brick3: http://pastebin.com/m81mVj6L
>>>>> - Brick4: http://pastebin.com/JDAbChf6
>>>>> - Brick5: http://pastebin.com/7saP6rsm
>>>>>
>>>>> However, I could not find any message like "EOF on socket". I hope
>>>>> there is some helpful information in the logs above.
>>>> Indeed. I understand that the connections are in a disconnected state. But
>>>> what I'm particularly looking for is the cause of the disconnect. Can you
>>>> paste the debug logs from when it starts disconnecting, and around that
>>>> time? You may see a debug log that says "disconnecting now".
>>>>
>>>>
>>>> Regards
>>>> Rafi KC
>>>>
>>>>
>>>>> Regards.
>>>>>
>>>>>
>>>>> 2016-12-14 15:20 GMT+09:00 Mohammed Rafi K C <rkavunga at redhat.com>:
>>>>>> On 12/13/2016 09:56 PM, yonex wrote:
>>>>>>> Hi Rafi,
>>>>>>>
>>>>>>> Thanks for your response. OK, I think it is possible to capture debug
>>>>>>> logs, since the error seems to be reproduced a few times per day. I
>>>>>>> will try that. However, since I want to avoid redundant debug output if
>>>>>>> possible, is there a way to enable debug logging only on specific client
>>>>>>> nodes?
>>>>>> If you are using a FUSE mount, there is a proc-like feature called .meta.
>>>>>> You can set the log level through it for a particular client [1]. But I
>>>>>> also want logs from the bricks, because I suspect the brick processes of
>>>>>> initiating the disconnects.
>>>>>>
>>>>>> [1] e.g.: echo 8 > /mnt/glusterfs/.meta/logging/loglevel
>>>>>>
>>>>>>> Regards
>>>>>>>
>>>>>>> Yonex
>>>>>>>
>>>>>>> 2016-12-13 23:33 GMT+09:00 Mohammed Rafi K C <rkavunga at redhat.com>:
>>>>>>>> Hi Yonex,
>>>>>>>>
>>>>>>>> Is this consistently reproducible? If so, can you enable debug logging [1]
>>>>>>>> and check for any message similar to [2]? Basically, you can even search
>>>>>>>> for "EOF on socket".
>>>>>>>>
>>>>>>>> You can set your log level back to the default (INFO) after capturing
>>>>>>>> for some time.
>>>>>>>>
>>>>>>>>
>>>>>>>> [1] : gluster volume set <volname> diagnostics.brick-log-level DEBUG and
>>>>>>>> gluster volume set <volname> diagnostics.client-log-level DEBUG
>>>>>>>>
>>>>>>>> [2] : http://pastebin.com/xn8QHXWa
>>>>>>>>
>>>>>>>>
>>>>>>>> Regards
>>>>>>>>
>>>>>>>> Rafi KC
>>>>>>>>
>>>>>>>> On 12/12/2016 09:35 PM, yonex wrote:
>>>>>>>>> Hi,
>>>>>>>>>
>>>>>>>>> When my application moves a file from its local disk to a FUSE-mounted
>>>>>>>>> GlusterFS volume, the client outputs many warnings and errors, not
>>>>>>>>> always but occasionally. The volume is a simple distributed volume.
>>>>>>>>>
>>>>>>>>> A sample of the logs is pasted here: http://pastebin.com/axkTCRJX
>>>>>>>>>
>>>>>>>>> At a glance it seems to come from something like a network disconnection
>>>>>>>>> ("Transport endpoint is not connected"), but other networking
>>>>>>>>> applications on the same machine don't observe such a thing. So I guess
>>>>>>>>> there may be a problem somewhere in the GlusterFS stack.
>>>>>>>>>
>>>>>>>>> It ended up failing to rename a file, logging PHP warnings like below:
>>>>>>>>>
>>>>>>>>> PHP Warning: rename(/glusterfs01/db1/stack/f0/13a9a2f0): failed
>>>>>>>>> to open stream: Input/output error in [snipped].php on line 278
>>>>>>>>> PHP Warning:
>>>>>>>>> rename(/var/stack/13a9a2f0,/glusterfs01/db1/stack/f0/13a9a2f0):
>>>>>>>>> Input/output error in [snipped].php on line 278
>>>>>>>>>
>>>>>>>>> Conditions:
>>>>>>>>>
>>>>>>>>> - GlusterFS 3.8.5 installed via yum (CentOS-Gluster-3.8.repo)
>>>>>>>>> - Volume info and status pasted: http://pastebin.com/JPt2KeD8
>>>>>>>>> - Client machines' OS: Scientific Linux 6 or CentOS 6.
>>>>>>>>> - Server machines' OS: CentOS 6.
>>>>>>>>> - Kernel version is 2.6.32-642.6.2.el6.x86_64 on all machines.
>>>>>>>>> - The number of connected FUSE clients is 260.
>>>>>>>>> - No firewall between connected machines.
>>>>>>>>> - Neither remounting volumes nor rebooting client machines takes effect.
>>>>>>>>> - It is triggered not only by rename() but also by copy() and
>>>>>>>>>   filesize() operations.
>>>>>>>>> - No outputs in brick logs when it happens.
>>>>>>>>>
>>>>>>>>> Any ideas? I'd appreciate any help.
>>>>>>>>>
>>>>>>>>> Regards.
>>>>>>>>>
>>>>>>>>> _______________________________________________
>>>>>>>>> Gluster-users mailing list
>>>>>>>>> Gluster-users at gluster.org
>>>>>>>>> http://www.gluster.org/mailman/listinfo/gluster-users
-------------- next part --------------
# gdb command file: each breakpoint below logs a timestamp, a few socket
# state variables and a backtrace to mylog.txt, then lets the process continue.
set pagination off
set logging file mylog.txt
set logging on
handle SIGPIPE nostop

# Breakpoint 1 (socket.c:596): dump the incoming read-ahead counters,
# the record state and the socket fd.
b socket.c:596
commands 1
shell date -u
p priv->incoming.ra_read
p priv->incoming.ra_max
p priv->incoming.ra_served
p priv->incoming.record_state
p priv->sock
bt
continue
end

# Breakpoint 2 (socket.c:2108): dump the total bytes read and the message type.
b socket.c:2108
commands 2
shell date -u
p in->total_bytes_read
p in->msg_type
bt
continue
end

# Breakpoint 3 (socket.c:2142): only on error returns (ret < 0); dump the
# fragment's bytes_read and the return value.
b socket.c:2142 if ret < 0
commands 3
shell date -u
p frag->bytes_read
p ret
bt
continue
end

# Breakpoint 4 (socket.c:1011): only for suspiciously large sizes (>= 1 GiB);
# dump the size and the iovec lengths of the RPC header, program header and
# program payload.
b socket.c:1011 if size >= 1073741824ULL
commands 4
shell date -u
p size
p iov_length (msg->rpchdr, msg->rpchdrcount)
p iov_length (msg->proghdr, msg->proghdrcount)
p iov_length (msg->progpayload, msg->progpayloadcount)
bt
continue
end

# Resume the attached process.
continue
------------------------------
_______________________________________________
Gluster-users mailing list
Gluster-users at gluster.org
http://www.gluster.org/mailman/listinfo/gluster-users
End of Gluster-users Digest, Vol 104, Issue 22
**********************************************
-------------- next part --------------
An HTML attachment was scrubbed...
URL:
<http://www.gluster.org/pipermail/gluster-users/attachments/20161224/9555a301/attachment.html>