Displaying 8 results from an estimated 8 matches for "flypen".
2017 Dec 29 | 1 | A Problem of readdir-optimize
...M, Nithya Balachandran <nbalacha at redhat.com>
wrote:
> Hi Paul,
>
> A few questions:
> What type of volume is this and what client protocol are you using?
> What version of Gluster are you using?
>
> Regards,
> Nithya
>
> On 28 December 2017 at 20:09, Paul <flypen at gmail.com> wrote:
>
>> Hi, All,
>>
>> If I set cluster.readdir-optimize to on, the performance of "ls" is
>> better, but I find one problem.
>>
>> # ls
>> # ls
>> files.1 files.2 file.3
>>
>> I run ls twice. At the firs...
2017 Dec 28 | 0 | A Problem of readdir-optimize
Hi Paul,
A few questions:
What type of volume is this and what client protocol are you using?
What version of Gluster are you using?
Regards,
Nithya
On 28 December 2017 at 20:09, Paul <flypen at gmail.com> wrote:
> Hi, All,
>
> If I set cluster.readdir-optimize to on, the performance of "ls" is
> better, but I find one problem.
>
> # ls
> # ls
> files.1 files.2 file.3
>
> I run ls twice. At the first time, ls returns nothing. At the second ti...
2017 Dec 28 | 2 | A Problem of readdir-optimize
Hi, All,
If I set cluster.readdir-optimize to on, the performance of "ls" is better,
but I find one problem.
# ls
# ls
files.1 files.2 file.3
I run ls twice. At the first time, ls returns nothing. At the second time,
ls returns all file names.
If I turn off cluster.readdir-optimize, I don't see this problem.
Is there a way to solve this problem? If ls doesn't return the
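For readers following this thread: the option under discussion is toggled with the standard Gluster CLI. A minimal sketch (the volume name myvol is a placeholder, not from the thread):

```shell
# Placeholder volume name; substitute your own.
VOL=myvol

# Enable the optimization discussed above (faster "ls" on large directories):
gluster volume set "$VOL" cluster.readdir-optimize on

# Revert to the default if directory listings come back incomplete:
gluster volume set "$VOL" cluster.readdir-optimize off

# Confirm the current value:
gluster volume get "$VOL" cluster.readdir-optimize
```

These are configuration commands and must be run on a node that is part of the trusted storage pool.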
2017 Nov 16 | 0 | xfs_rename error and brick offline
On Thu, Nov 16, 2017 at 6:23 AM, Paul <flypen at gmail.com> wrote:
> Hi,
>
> I have a 5-node GlusterFS cluster with Distributed-Replicate. There are
> 180 bricks in total. The OS is CentOS6.5, and GlusterFS is 3.11.0. I find
> many bricks are offline when we generate some empty files and rename them.
> I see xfs call tra...
2017 Nov 16 | 2 | xfs_rename error and brick offline
Hi,
I have a 5-node GlusterFS cluster with Distributed-Replicate. There are
180 bricks in total. The OS is CentOS6.5, and GlusterFS is 3.11.0. I find
many bricks are offline when we generate some empty files and rename them.
I see xfs call trace in every node.
For example,
Nov 16 11:15:12 node10 kernel: XFS (rdc00d28p2): Internal error
xfs_trans_cancel at line 1948 of file fs/xfs/xfs_trans.c.
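The workload described above (generate empty files, then rename them) can be sketched as a small reproducer loop; the scratch directory and file names are made up for illustration and would point at a GlusterFS mount in a real test:

```shell
# Scratch directory standing in for a GlusterFS mount point (assumption).
DIR=$(mktemp -d)

# Create empty files and immediately rename them, as in the report.
for i in 1 2 3 4 5; do
  touch "$DIR/empty.$i"
  mv "$DIR/empty.$i" "$DIR/renamed.$i"
done

# List what survived the rename pass.
ls "$DIR"
```

On a healthy filesystem this leaves only the renamed files behind; the report above is about this pattern taking bricks offline on XFS-backed storage.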
2017 Nov 04 | 1 | Fwd: Ignore failed connection messages during copying files with tiering
Hi,
We created a GlusterFS cluster with tiering. The hot tier is a
distributed-replicated volume on SSDs. The cold tier is an n*(6+2) disperse volume.
When copying millions of files to the cluster, we see these logs:
W [socket.c:3292:socket_connect] 0-tierd: Ignore failed connection attempt
on /var/run/gluster/39668fb028de4b1bb6f4880e7450c064.socket, (No such file
or directory)
W [socket.c:3292:socket_connect]
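The warning's "(No such file or directory)" refers to the socket path it names, which can be checked directly. A hedged diagnostic sketch (the path is copied verbatim from the log above; on any other system it will simply be absent):

```shell
# Socket path taken verbatim from the warning above.
SOCK=/var/run/gluster/39668fb028de4b1bb6f4880e7450c064.socket

# ENOENT in the warning means nothing is listening at this path;
# -S tests for an existing unix-domain socket.
if [ -S "$SOCK" ]; then
  echo "tierd socket present: $SOCK"
else
  echo "tierd socket missing: $SOCK"
fi
```

A missing socket here matches the client retrying a tier daemon that is not (or not yet) running on that node.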
2018 Jan 15 | 1 | "linkfile not having link" occurs sometimes after renaming
There are two users u1 & u2 in the cluster. Some files are created by u1,
and they are read only for u2. Of course u2 can read these files. Later
these files are renamed by u1. Then I switch to the user u2. I find that u2
can't list or access the renamed files. I see these errors in the log:
[2018-01-15 17:35:05.133711] I [MSGID: 109045]
[dht-common.c:2393:dht_lookup_cbk] 25-data-dht:
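For context on the error above: a DHT linkfile is a zero-length file on a brick whose mode is just the sticky bit (shown as ---------T), carrying a trusted.glusterfs.dht.linkto xattr that names the subvolume holding the real file. A small sketch that only simulates the permission pattern on a scratch directory (no Gluster required; the brick path is an assumption, and the xattr inspection is shown as a comment because it needs a real brick):

```shell
# Scratch directory standing in for a brick path (assumption).
BRICK=$(mktemp -d)

# Simulate what a DHT linkfile looks like: empty, mode 01000 (sticky bit only).
touch "$BRICK/renamed.file"
chmod 1000 "$BRICK/renamed.file"

# Locate linkfile candidates: zero-length files with the sticky bit set.
find "$BRICK" -type f -size 0 -perm -1000

# On a real brick you would then inspect where the linkfile points, e.g.:
#   getfattr -n trusted.glusterfs.dht.linkto --only-values <file>
```

The "linkfile not having link" message indicates DHT found such a file without a usable linkto target, which fits the rename-then-access failure described above.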
2017 Nov 03 | 1 | Ignore failed connection messages during copying files with tiering
Hi, All,
We created a GlusterFS cluster with tiering. The hot tier is a
distributed-replicated volume on SSDs. The cold tier is an n*(6+2) disperse volume.
When copying millions of files to the cluster, we see these logs:
W [socket.c:3292:socket_connect] 0-tierd: Ignore failed connection attempt
on /var/run/gluster/39668fb028de4b1bb6f4880e7450c064.socket, (No such file
or directory)
W