Displaying 10 results from an estimated 10 matches for "hatazaki".
2017 Aug 15
3
Is transport=rdma tested with "stripe"?
Looks like your RDMA is not functional. Did you test it with qperf?
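The qperf check suggested above can be run as follows. This is a hedged sketch: `server1` is a placeholder hostname, and the client-side invocation assumes a listener is already running on the server.

```shell
# On the server node, start the qperf listener (no arguments needed):
qperf

# From the client node, run the RDMA reliable-connection
# bandwidth and latency tests against the server:
qperf server1 rc_bw rc_lat

# For comparison, the plain TCP equivalents:
qperf server1 tcp_bw tcp_lat
```

If the `rc_*` tests fail while the `tcp_*` tests pass, the RDMA transport itself is broken, independent of GlusterFS.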
On Mon, Aug 14, 2017 at 7:37 PM, Hatazaki, Takao <takao.hatazaki at hpe.com>
wrote:
> Forgot to mention that I was using CentOS 7.3 and GlusterFS 3.10.3, which is
> the latest available.
>
>
>
> *From:* gluster-users-bounces at gluster.org [mailto:gluster-users-bounces@
> gluster.org] *On Behalf Of *Hatazaki, Takao...
2017 Aug 14
0
Is transport=rdma tested with "stripe"?
Forgot to mention that I was using CentOS 7.3 and GlusterFS 3.10.3, which is the latest available.
From: gluster-users-bounces at gluster.org [mailto:gluster-users-bounces at gluster.org] On Behalf Of Hatazaki, Takao
Sent: Tuesday, August 15, 2017 2:32 AM
To: gluster-users at gluster.org
Subject: [Gluster-users] Is transport=rdma tested with "stripe"?
Hi,
I have 2 servers with Mellanox InfiniBand FDR hardware/software installed. A volume with "replica 2 transport rdma" works (creat...
2017 Aug 14
2
Is transport=rdma tested with "stripe"?
Hi,
I have 2 servers with Mellanox InfiniBand FDR hardware/software installed. A volume with "replica 2 transport rdma" works (create on servers, mount and test on clients) ok. A volume with "stripe 2 transport tcp" works ok, too. A volume with "stripe 2 transport rdma" created ok, and mounted ok on a client, but writing a file caused "endpoint not
2017 Aug 15
2
Is transport=rdma tested with "stripe"?
Hi Takao.
Could you attach some logs that we can use to diagnose this?
On 2017-08-15 19:42, Hatazaki, Takao wrote:
>
> Hi,
>
>
>
> I did ib_write_lat in perftest. It worked fine. Between servers and
> between server and client, 2-byte latency was ~0.8us, 8MB bandwidth
> was ~6GB/s. Very normal with IB/FDR.
>
>
>
2017 Aug 15
0
Is transport=rdma tested with "stripe"?
Hi,
I did ib_write_lat in perftest. It worked fine. Between servers and between server and client, 2-byte latency was ~0.8us, 8MB bandwidth was ~6GB/s. Very normal with IB/FDR.
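The perftest runs behind these numbers look roughly like the following sketch (`server1` is a placeholder; each tool is started with no host argument on the server side, then pointed at the server from the client).

```shell
# 2-byte write latency (~0.8 us reported):
ib_write_lat -s 2            # on the server
ib_write_lat -s 2 server1    # on the client

# 8 MB write bandwidth (~6 GB/s reported):
ib_write_bw -s 8388608           # on the server
ib_write_bw -s 8388608 server1   # on the client
```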
2017 Aug 15
0
Is transport=rdma tested with "stripe"?
Ji-Hyeon,
You're saying that "stripe=2 transport=rdma" should work. OK, that was the first thing I wanted to know. I'll put together logs later this week. Thank you.
Takao
2017 Aug 18
1
Is transport=rdma tested with "stripe"?
On Wed, Aug 16, 2017 at 4:44 PM, Hatazaki, Takao <takao.hatazaki at hpe.com> wrote:
>> Note that "stripe" is not tested much and practically unmaintained.
>
> Ah, this was what I suspected. Understood. I'll be happy with "shard".
>
> Having said that, "stripe" works fine with tran...
2017 Aug 15
2
Is transport=rdma tested with "stripe"?
On Tue, Aug 15, 2017 at 01:04:11PM +0000, Hatazaki, Takao wrote:
> Ji-Hyeon,
>
> You're saying that "stripe=2 transport=rdma" should work. OK, that
> was the first thing I wanted to know. I'll put together logs later this week.
Note that "stripe" is not tested much and practically unmaintained. We
do not advise y...
2017 Aug 16
0
Is transport=rdma tested with "stripe"?
> Note that "stripe" is not tested much and practically unmaintained.
Ah, this was what I suspected. Understood. I'll be happy with "shard".
Having said that, "stripe" works fine with transport=tcp. The failure reproduces with just 2 RDMA servers (with InfiniBand), one of those acts also as a client.
I looked into the logs. I paste lengthy logs below with
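Since "stripe" is unmaintained, the sharding feature mentioned above is the usual replacement: it splits large files into fixed-size chunks on an ordinary replicated volume. A hedged sketch, with placeholder names and a commonly used shard size:

```shell
gluster volume create shardvol replica 2 transport rdma \
    server1:/bricks/b4 server2:/bricks/b4
gluster volume set shardvol features.shard on
gluster volume set shardvol features.shard-block-size 64MB
gluster volume start shardvol
```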
2017 Aug 14
0
Is Intel Omni-Path tested/supported?
Hi,
I am new to GlusterFS. I have 2 computers with Intel Omni-Path hardware/software installed. RDMA is running ok on those according to "ibv_devices" output. GlusterFS "server" installed and set up ok on those. Peer probes on TCP-over-OPA worked ok. I created a replicated volume with transport=tcp, then tested them by mounting the volume on one of servers. Things worked
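The device check described above can be sketched as follows; these are standard libibverbs and gluster CLI commands, not specific to Omni-Path.

```shell
ibv_devices            # list RDMA-capable devices seen by the verbs stack
ibv_devinfo            # per-device details; port state should be PORT_ACTIVE
gluster peer status    # confirm peers after probing over TCP
```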