Displaying 12 results from an estimated 12 matches for "gluesi".
2017 Aug 15
0
Is transport=rdma tested with "stripe"?
Hi,
I ran ib_write_lat from the perftest suite. It worked fine. Between servers and between server and client, 2-byte latency was ~0.8us and 8MB bandwidth was ~6GB/s. Very normal for IB/FDR.
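For anyone reproducing this, the standard perftest client/server pairs look roughly like the following (host name assumed):

    # on the server side (run one listener per test)
    ib_write_lat
    # on the client side: 2-byte RDMA write latency (perftest default size)
    ib_write_lat server1
    # large-message bandwidth, 8MB messages
    ib_write_bw -s 8388608 server1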
2017 Aug 15
3
Is transport=rdma tested with "stripe"?
Looks like your RDMA is not functional. Did you test with qperf?
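A quick qperf check would look something like this (host name assumed):

    # on one node, start the qperf daemon
    qperf
    # on the other node, run RDMA-level latency and bandwidth tests
    qperf server1 rc_lat rc_bw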
On Mon, Aug 14, 2017 at 7:37 PM, Hatazaki, Takao <takao.hatazaki at hpe.com>
wrote:
> Forgot to mention that I was using CentOS 7.3 and GlusterFS 3.10.3, which is
> the latest available.
>
>
>
> *From:* gluster-users-bounces at gluster.org [mailto:gluster-users-bounces at gluster.org] *On Behalf Of
2017 Aug 15
2
Is transport=rdma tested with "stripe"?
Hi Takao.
Could you attach some logs that we can use to diagnose this?
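The usual places to pull them from, assuming the default log directory:

    # on the servers: glusterd and per-brick logs
    tail -n 200 /var/log/glusterfs/glusterd.log
    ls /var/log/glusterfs/bricks/
    # on the client: the mount log, named after the mount point
    ls /var/log/glusterfs/*.log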
On 2017-08-15 19:42, Hatazaki, Takao wrote:
>
> Hi,
>
> I ran ib_write_lat from the perftest suite. It worked fine. Between servers and
> between server and client, 2-byte latency was ~0.8us and 8MB bandwidth
> was ~6GB/s. Very normal for IB/FDR.
>
2017 Aug 06
0
State: Peer Rejected (Connected)
On 2017-08-06 15:59, mabi wrote:
> Hi,
>
> I have a 3-node replica volume (including arbiter) with GlusterFS
> 3.8.11, and last night one of my nodes (node1) ran out of memory for
> some unknown reason, so the Linux OOM killer killed the
> glusterd and glusterfs processes. I restarted the glusterd process, but
> now that node is in "Peer Rejected"
2017 Aug 06
2
State: Peer Rejected (Connected)
Hi,
I have a 3-node replica volume (including arbiter) with GlusterFS 3.8.11, and last night one of my nodes (node1) ran out of memory for some unknown reason, so the Linux OOM killer killed the glusterd and glusterfs processes. I restarted the glusterd process, but now that node is in "Peer Rejected" state from the other nodes, and from itself it rejects the two other nodes
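For anyone in the same state, a first pass at narrowing this down (volume name assumed, default glusterd layout):

    # on each node: how do the peers see each other?
    gluster peer status
    # compare the volume checksum that glusterd exchanges between peers
    cat /var/lib/glusterd/vols/<VOLNAME>/cksum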
2017 Aug 06
1
State: Peer Rejected (Connected)
Hi Ji-Hyeon,
Thanks to your help I could find the problematic file. It turned out to be the quota file of my volume: it has a different checksum on node1, whereas node2 and the arbiter node have the same checksum. This is expected, as I had issues with my quota file and had to fix it manually with a script (more details on this mailing list in a previous post), and I only did that on node1.
So what I now
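A quick way to compare that quota file across the three nodes, assuming the default glusterd layout:

    # run on node1, node2 and the arbiter, then compare the three outputs
    md5sum /var/lib/glusterd/vols/<VOLNAME>/quota.conf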
2017 Jul 18
1
License for product
Hi all,
I've been developing a product using glusterfs within my team.
I want to know about license issues when I use glusterfs in a product.
Q1. What should I do when I use a glusterfs that I have modified myself?
Q2. I've developed management and monitoring software for gluster and related tools for the cluster system.
Should I open-source this code?
Thanks in advance.
2017 Nov 09
0
[Gluster-devel] Poor performance of block-store with RDMA
Hi Kalever!
First of all, I really appreciate your test results for block-store
(https://github.com/pkalever/iozone_results_gluster/tree/master/block-store) :-)
My teammate and I tested block-store (glfs backstore with tcmu-runner),
but we ran into a performance problem.
We tested some cases with one server that has an RDMA volume and one client
connected to the same RDMA network.
two
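For reference, an RDMA-transport volume would typically be created along these lines (volume name and brick paths assumed):

    # sketch: a two-brick volume using the RDMA transport
    gluster volume create rdmavol transport rdma \
        server1:/bricks/brick1 server2:/bricks/brick1
    gluster volume start rdmavol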
2017 Sep 21
0
Sharding option for distributed volumes
Hello Ji-Hyeon,
Thanks, is that option available in the 3.12 gluster release? We're
still on 3.8 and just playing around with the latest version in order to
migrate our solution.
Thank you!
On 9/21/17 2:26 PM, Ji-Hyeon Gim wrote:
> Hello Pavel!
>
> In my opinion, you need to check the features.shard-block-size option first.
> If a file is no bigger than this value, it would not be
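A minimal sketch of checking and setting those options (volume name assumed):

    # inspect the current shard settings
    gluster volume get <VOLNAME> features.shard
    gluster volume get <VOLNAME> features.shard-block-size
    # enable sharding and set a 64MB block size
    gluster volume set <VOLNAME> features.shard on
    gluster volume set <VOLNAME> features.shard-block-size 64MB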
2017 Jul 12
1
Hi all
I have set up a distributed glusterfs volume with 3 servers. The network is
1GbE, and I ran a filebench test from a client.
Refer to this link:
https://s3.amazonaws.com/aws001/guided_trek/Performance_in_a_Gluster_Systemv6F.pdf
According to it, the more servers gluster has, the more throughput you should gain. I have tested the
network; the bandwidth is 117 MB/s, so with 3 servers I should gain
about 300 MB/s (3*117
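Worth noting: a distribute volume places each whole file on a single brick, so a single-stream test tops out at one server's link (and a single 1GbE client is itself capped at ~117 MB/s). Aggregate throughput only shows up with several files in flight, e.g. (mount point assumed):

    # write several files in parallel so DHT spreads them across bricks
    for i in 1 2 3 4 5 6; do
      dd if=/dev/zero of=/mnt/glustervol/file.$i bs=1M count=1024 oflag=direct &
    done
    wait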
2018 May 29
1
glusterfs as vmware datastore in production
Sometimes an OS disk hang occurred and the disk was re-mounted read-only in the VM guest (CentOS 6)
when storage was busy.
After installing the VMware plugin, I increased the block response timeout to 30 sec,
but OS workload response time was still not good.
I guess it is because my system is composed of 5400 rpm disks in RAID 6;
overall storage performance is not good enough for multiple OS images.
Best regards.
After tha
On Tue, May 29, 2018 at 1:45, João
2018 May 26
2
glusterfs as vmware datastore in production
> Hi,
>
> Does anyone have glusterfs as a vmware datastore working in production in a
> real-world case? How do you serve the glusterfs cluster? As iSCSI, NFS?
>
>
Hi,
I am using glusterfs 3.10.x for a VMware ESXi 5.5 NFS DataStore.
Our environment is:
- 4 Supermicro server nodes (each 50TB, 4TB NL-SAS drives, LSI 9260-8i)
- 100TB total service volume
- 10G Storage Network and Service
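A minimal sketch of exposing such a volume over Gluster's built-in NFS server for ESXi (volume name and subnet assumed):

    # Gluster 3.x ships a built-in NFS server; enable it on the volume
    gluster volume set vmstore nfs.disable off
    # restrict NFS access to the ESXi storage subnet
    gluster volume set vmstore nfs.rpc-auth-allow 10.10.0.0/24
    # ESXi then mounts host:/vmstore as an NFS datastore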