search for: gluesys

Displaying 12 results from an estimated 12 matches for "gluesys".

2017 Aug 15
0
Is transport=rdma tested with "stripe"?
Hi, I ran ib_write_lat from perftest. It worked fine. Between servers, and between server and client, 2-byte latency was ~0.8 us and 8 MB bandwidth was ~6 GB/s. Very normal for IB/FDR.
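For reference, a minimal sketch of the perftest run described above, assuming a hypothetical server host "server1" (ib_write_lat defaults to 2-byte messages, matching the latency figure quoted):

    # server side: start one listener at a time; each waits on its default port
    ib_write_lat                     # 2-byte RDMA write latency
    ib_write_bw -s 8388608           # bandwidth with 8 MB messages

    # client side: point the same test at the server
    ib_write_lat server1
    ib_write_bw -s 8388608 server1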
2017 Aug 15
3
Is transport=rdma tested with "stripe"?
It looks like your RDMA is not functional. Did you test with qperf? On Mon, Aug 14, 2017 at 7:37 PM, Hatazaki, Takao <takao.hatazaki at hpe.com> wrote: > Forgot to mention that I was using CentOS 7.3 and GlusterFS 3.10.3, which is > the latest available.
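A hedged sketch of the suggested qperf check, with a hypothetical host name "server1" (qperf runs with no arguments on the server and acts as the listener):

    # server side
    qperf

    # client side: compare TCP against RDMA write latency/bandwidth
    qperf server1 tcp_lat tcp_bw rc_rdma_write_lat rc_rdma_write_bw

If the rc_rdma_* tests fail while the tcp_* tests pass, the RDMA stack itself is the problem rather than Gluster.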
2017 Aug 15
2
Is transport=rdma tested with "stripe"?
...bandwidth > was ~6GB/s. Very normal with IB/FDR. Best regards. -- Ji-Hyeon Gim, Research Engineer, Gluesys
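Since the subject asks about transport=rdma combined with "stripe", here is a hedged sketch of the volume layout under discussion, with hypothetical hosts and brick paths (note that the stripe translator was deprecated in later GlusterFS releases):

    gluster volume create stripevol stripe 2 transport rdma \
        server1:/export/brick1 server2:/export/brick2
    gluster volume start stripevol
    gluster volume info stripevol    # should report Transport-type: rdma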
2017 Aug 06
0
State: Peer Rejected (Connected)
...host: 172.26.178.254, port: 0. If it is, you need to sync the volfile files/directories under /var/lib/glusterd/vols/<VOLNAME> from one of the good nodes. For help resolving this in detail, please share more information, such as the glusterd log :) -- Best regards. -- Ji-Hyeon Gim, Research Engineer, Gluesys
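A minimal sketch of the volfile sync being suggested, assuming a hypothetical volume "myvol" and a known-good peer "good-node" (take a safety copy first):

    # on the rejected node
    systemctl stop glusterd
    cp -a /var/lib/glusterd/vols/myvol /root/myvol.bak               # backup
    rsync -av --delete good-node:/var/lib/glusterd/vols/myvol/ \
        /var/lib/glusterd/vols/myvol/
    systemctl start glusterd
    gluster peer status        # should return to "Peer in Cluster (Connected)"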
2017 Aug 06
2
State: Peer Rejected (Connected)
Hi, I have a 3-node replica volume (including an arbiter) on GlusterFS 3.8.11, and last night one of my nodes (node1) ran out of memory for some unknown reason, so the Linux OOM killer killed the glusterd and glusterfs processes. I restarted the glusterd process, but now that node is in "Peer Rejected" state from the other nodes, and from itself it rejects the two other nodes
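For anyone triaging the same state, a hedged sketch of the basic checks (log locations are the usual defaults and may differ by distribution):

    gluster peer status                      # shows "Peer Rejected (Connected)"
    dmesg | grep -i oom                      # confirm the OOM kill on node1
    less /var/log/glusterfs/glusterd.log     # look for volfile/cksum mismatch messages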
2017 Aug 06
1
State: Peer Rejected (Connected)
...t is, you need to sync volfile files/directories under > /var/lib/glusterd/vols/<VOLNAME> from one of GOOD nodes. > for details to resolve this problem, please show more information such > as glusterd log :) > -- > Best regards. > -- > Ji-Hyeon Gim, Research Engineer, Gluesys
2017 Jul 18
1
License for product
...rfs as a product. Q1. What should I do when I use GlusterFS that I have modified myself? Q2. I've developed management and monitoring software for Gluster and related tools for cluster systems. Should I open this source code? Thanks in advance. ----------------------------------------- Taehwa Lee Gluesys Co.,Ltd. alghost.lee at gmail.com +82-10-3420-6114, +82-70-8785-6591 -----------------------------------------
2017 Nov 09
0
[Gluster-devel] Poor performance of block-store with RDMA
...It may be caused by tcmu-runner itself or by the glfs backstore handler, so I will run similar tests with other handlers (like qcow) in order to pinpoint this. If there's anything I missed, could you give me some tips for resolving this issue? :-) Best regards. -- Ji-Hyeon Gim, Research Engineer, Gluesys
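For context, tcmu-runner's glfs handler is typically driven through gluster-block; a hedged sketch with hypothetical volume, host, and device names:

    # create a 10 GiB iSCSI-exported block device backed by volume "blockvol"
    gluster-block create blockvol/disk0 ha 1 192.168.0.11 10GiB

    # inspect what tcmu-runner is serving
    gluster-block list blockvol
    gluster-block info blockvol/disk0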
2017 Sep 21
0
Sharding option for distributed volumes
...ed >> volumes. At this moment I'm facing a problem with exporting big files >> which are not going to be distributed across bricks inside one volume. >> >> Thank you in advance. > Best regards. > -- > Ji-Hyeon Gim, Research Engineer, Gluesys
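The sharding feature under discussion splits large files into fixed-size chunks that DHT can then place on different bricks; a hedged sketch with a hypothetical volume name (sharding must be enabled before the big files are written, as existing files are not re-sharded):

    gluster volume set distvol features.shard on
    gluster volume set distvol features.shard-block-size 64MB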
2017 Jul 12
1
Hi all
I have set up a distributed GlusterFS volume with 3 servers. The network is 1GbE, and I ran a filebench test from a client. Refer to this link: https://s3.amazonaws.com/aws001/guided_trek/Performance_in_a_Gluster_Systemv6F.pdf According to it, the more servers Gluster has, the more throughput it should gain. I have tested the network; the bandwidth is 117 MB/s, so when I have 3 servers I should gain about 300 MB/s (3*117
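Worth noting before chasing those numbers: on a distributed volume each whole file lives on a single brick, and a single 1GbE client NIC is itself capped near 117 MB/s, so aggregate scaling only becomes visible with parallel streams from multiple clients (or a faster client link). A hedged sketch of a multi-stream write test over a hypothetical mount:

    mount -t glusterfs server1:/distvol /mnt/gluster
    # write three 1 GiB files in parallel; DHT should hash them to different bricks
    for i in 1 2 3; do
        dd if=/dev/zero of=/mnt/gluster/file$i bs=1M count=1024 oflag=direct &
    done
    wait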
2018 May 29
1
glustefs as vmware datastore in production
Sometimes an OS disk hang occurred and the disk was re-mounted read-only in the VM guest (CentOS 6) when the storage was busy. After installing the VMware plugin, I increased the block response timeout to 30 sec, but OS workload response time was still not good. I suspect it is because my system is composed of 5400 rpm disks in RAID 6; overall storage performance is not good for multiple OS images. Best regards. On 29 May 2018 at 13:45, João
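The "block response timeout" mentioned above most plausibly corresponds to ESXi's NFS heartbeat advanced settings; a hedged sketch (option path as in ESXi 5.x/6.x, the value is illustrative):

    esxcli system settings advanced set -o /NFS/HeartbeatTimeout -i 30
    esxcli system settings advanced list -o /NFS/HeartbeatTimeout     # verify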
2018 May 26
2
glustefs as vmware datastore in production
> Hi, > > Does anyone have GlusterFS working as a VMware datastore in production in a > real-world case? How do you serve the GlusterFS cluster: as iSCSI, or NFS? > Hi, I am using glusterfs 3.10.x for a VMware ESXi 5.5 NFS DataStore. Our environment is: - 4-node Supermicro servers (each 50TB; NL-SAS 4TB disks; LSI 9260-8i) - 100TB service volume in total - 10G storage network and service
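A hedged sketch of the NFS DataStore wiring described in that reply, with hypothetical volume and host names (GlusterFS 3.10 still shipped the built-in gNFS server, which ESXi 5.5 mounts as NFSv3):

    # gluster side: make sure the built-in NFS server exports the volume
    gluster volume set vmstore nfs.disable off

    # on each ESXi host: add the volume as an NFS datastore
    esxcli storage nfs add --host=gfs1 --share=/vmstore --volume-name=gluster-ds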