similar to: Change transport type on volume from tcp to rdma

Displaying 20 results from an estimated 7000 matches similar to: "Change transport type on volume from tcp to rdma"

2017 Sep 15
3
0-client_t: null client [Invalid argument] & high CPU usage (Gluster 3.12)
Howdy, I'm setting up several gluster 3.12 clusters running on CentOS 7 and am having issues with glusterd.log and glustershd.log both being filled with errors relating to null clients and client-callback functions. They seem to be related to high CPU usage across the nodes, although I don't have a way of confirming that (suggestions welcomed!). in
2017 Sep 18
2
0-client_t: null client [Invalid argument] & high CPU usage (Gluster 3.12)
Thanks Milind, Yes I'm hanging out for CentOS's Storage / Gluster SIG to release the packages for 3.12.1, I can see the packages were built a week ago but they're still not on the repo :( -- Sam > On 18 Sep 2017, at 9:57 pm, Milind Changire <mchangir at redhat.com> wrote: > > Sam, > You might want to give glusterfs-3.12.1 a try instead. > > > >> On Fri, Sep
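The fix discussed in this thread was picking up the 3.12.1 builds once the CentOS Storage SIG published them. A minimal sketch of that upgrade path on CentOS 7, assuming the SIG's usual release-package naming (centos-release-gluster312 is an assumption, not something quoted in the thread):

  # Enable the Storage SIG repo for the 3.12 series (package name assumed
  # from the SIG's centos-release-gluster<NNN> convention)
  $ yum install centos-release-gluster312

  # Pull the updated packages and restart the management daemon
  $ yum update glusterfs\*
  $ systemctl restart glusterd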
2017 Sep 18
0
0-client_t: null client [Invalid argument] & high CPU usage (Gluster 3.12)
Sam, You might want to give glusterfs-3.12.1 a try instead. On Fri, Sep 15, 2017 at 6:42 AM, Sam McLeod <mailinglists at smcleod.net> wrote: > Howdy, > > I'm setting up several gluster 3.12 clusters running on CentOS 7 and am > having issues with glusterd.log and glustershd.log both being filled with > errors relating to null clients and client-callback
2017 Nov 13
4
What is it with trusted.io-stats-dump?
Hi, I am trying to understand how the extended attribute trusted.io-stats-dump works. setfattr -n trusted.io-stats-dump -v /tmp/gluster_perf_stats/io-stats-pre.txt /mnt/gluster/gv0_glusterfs I can see that the io-stats-pre.txt is created. But how, and what happened in the background? And why can't I see the attribute with getfattr again? getfattr -dm- /mnt/gluster/gv0_glusterfs # file:
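For context: trusted.io-stats-dump is a virtual xattr handled by the io-stats translator; setting it acts as a command rather than stored metadata, which is why getfattr never shows it, and the side effect is a dump of the translator's counters to the path given in the value. A minimal sketch of that round trip, reusing the paths from the post (the mkdir of the target directory is an assumption):

  # Trigger a stats dump; the xattr value is the output path, not data to store
  $ mkdir -p /tmp/gluster_perf_stats
  $ setfattr -n trusted.io-stats-dump \
        -v /tmp/gluster_perf_stats/io-stats-pre.txt /mnt/gluster/gv0_glusterfs

  # The attribute is consumed by the io-stats translator, so it never appears here
  $ getfattr -d -m . /mnt/gluster/gv0_glusterfs

  # The counters end up in the file named above
  $ less /tmp/gluster_perf_stats/io-stats-pre.txt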
2011 Nov 05
1
glusterfs over rdma ... not.
OK - finished some tests over tcp and ironed out a lot of problems. rdma is next; should be a snap now.... [I must admit that this is my 1st foray into the land of IB, so some of the following may be obvious to a non-naive admin..] except that while I can create and start the volume with rdma as transport: ================================== root at pbs3:~ 622 $ gluster volume info glrdma
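A minimal sketch of the create/mount sequence for an rdma-transport volume; the volume name glrdma is from the post, while hostnames, brick paths and the mount point are placeholders:

  # Create and start a volume that uses the RDMA transport
  $ gluster volume create glrdma transport rdma pbs3:/data/brick1 pbs4:/data/brick1
  $ gluster volume start glrdma
  $ gluster volume info glrdma

  # Mount from a client, explicitly requesting rdma
  $ mount -t glusterfs -o transport=rdma pbs3:/glrdma /mnt/glrdma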
2017 Sep 25
0
0-client_t: null client [Invalid argument] & high CPU usage (Gluster 3.12)
FYI - I've been testing the Gluster 3.12.1 packages with the help of the SIG maintainer and I can confirm that the logs are no longer being filled with NFS or null client errors after the upgrade. -- Sam McLeod @s_mcleod https://smcleod.net > On 18 Sep 2017, at 10:14 pm, Sam McLeod <mailinglists at smcleod.net> wrote: > > Thanks Milind, > > Yes I'm hanging out for
2011 Nov 14
1
RDMA/Ethernet wi ROCEE - failed to modify QP to RTR
Did any RDMA/Ethernet users see this Gluster error? If so do you know what caused it and how to fix? If you haven't seen it, what RPMs and configuration do you use specific to RDMA/Ethernet? [2011-11-10 10:30:20.595801] C [rdma.c:2417:rdma_connect_qp]0-rpc-transport/rdma: Failed to modify QP to RTR [2011-11-10 10:30:20.595930] E [rdma.c:4159:rdma_handshake_pollin] 0-rpc-transport/rdma:
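A failure to move the queue pair to RTR usually points at the fabric (GIDs, MTU, lossless-Ethernet settings for RoCE) rather than at Gluster itself, so verifying RDMA outside of Gluster is a reasonable first step. A hedged sketch using the standard ibverbs/librdmacm tools; device names and addresses are placeholders:

  # Check that the RoCE device and port are up and which GIDs are exposed
  $ ibv_devinfo -v

  # librdmacm-level connectivity test (start the server side first)
  server$ rping -s -v
  client$ rping -c -v -a 192.0.2.10      # placeholder server address

  # ibverbs RC ping-pong between the two hosts
  server$ ibv_rc_pingpong -d mlx4_0 -g 0
  client$ ibv_rc_pingpong -d mlx4_0 -g 0 192.0.2.10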
2011 Oct 18
2
gluster rebalance taking three months
Hi guys, we have a rebalance running on eight bricks since July and this is what the status looks like right now: ===Tue Oct 18 13:45:01 CST 2011 ==== rebalance step 1: layout fix in progress: fixed layout 223623 There are roughly 8T photos in the storage, so how long should this rebalance take? What does the number (in this case) 223623 represent? Our gluster information: Repository
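"fixed layout N" is, roughly, the running count of directory layouts rewritten so far during the first (fix-layout) phase of rebalance, not an amount of data migrated. A hedged sketch of how the phases are usually driven and watched; the volume name is a placeholder:

  # Fix the layout only (new directories start using the new bricks)
  $ gluster volume rebalance myvol fix-layout start

  # Full rebalance: fix layout, then migrate existing data
  $ gluster volume rebalance myvol start
  $ gluster volume rebalance myvol status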
2011 Jul 25
3
gluster client performance
Hi- I'm new to Gluster, but am trying to get it set up on a new compute cluster we're building. We picked Gluster for one of our cluster file systems (we're also using Lustre for fast scratch space), but the Gluster performance has been so bad that I think maybe we have a configuration problem -- perhaps we're missing a tuning parameter that would help, but I can't find
2011 May 12
1
Slow reading speed over RDMA
Hi everyone, I hope you can help me with some performance troubles I've been having. I'm doing some tests with gluster 3.2.0, and I can't understand some of the behavior I'm getting with gluster. The test is using a volume with 12 striped bricks (each brick is an HD), with no replication, via RDMA. I'm doing random reads of 4 GByte files with the FIO tool. Gluster
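The exact fio job is not in the excerpt, so the following is only an illustrative equivalent of random reads against 4 GByte files on the mount; every parameter is an assumption:

  # Random reads from 4 GiB files on the Gluster mount (illustrative values)
  $ fio --name=randread --directory=/mnt/gluster --rw=randread \
        --bs=128k --size=4g --numjobs=4 --ioengine=libaio --direct=1 \
        --group_reporting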
2018 Apr 25
1
RDMA Client Hang Problem
Thank you for your mail. ibv_rc_pingpong seems to be working between servers and client. Also udaddy, ucmatose, rping etc. are working. root at gluster1:~# ibv_rc_pingpong -d mlx5_0 -g 0 local address: LID 0x0000, QPN 0x0001e4, PSN 0x10090e, GID fe80::ee0d:9aff:fec0:1dc8 remote address: LID 0x0000, QPN 0x00014c, PSN 0x09402b, GID fe80::ee0d:9aff:fec0:1b14 8192000 bytes in 0.01 seconds =
2017 May 29
2
Best way to know a call is being transfered
Hello, using Asterisk 1.8.32.3. What is the best way of knowing a call is being transferred (attended and unattended)? And also knowing where to (which SIP user) the call is being transferred, and who is the transferer? So I can log this information. Kind regards. J.
2018 Feb 05
2
Very slow rsync to gluster volume UNLESS `ls` or `find` scan dir on gluster volume first
Thanks for the report, Artem. Looks like the issue is about cache warming up. Specifically, I suspect rsync is doing a 'readdir(), stat(), file operations' loop, whereas when a find or ls is issued, we get a 'readdirp()' request, which contains the stat information along with the entries and also makes sure the cache is up-to-date (at the md-cache layer). Note that this is just an off-the-memory
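The distinction being drawn: readdirp() returns stat data together with the directory entries and warms md-cache, while rsync's readdir()-plus-stat() pattern does not. A hedged illustration of the warm-up workaround and the md-cache options that usually matter for this pattern; the volume name and values are placeholders, not the maintainer's recommendation:

  # Warm the metadata cache on the client before the rsync run
  $ find /mnt/gluster/target > /dev/null

  # md-cache / invalidation options commonly tuned for this access pattern
  $ gluster volume set myvol performance.md-cache-timeout 600
  $ gluster volume set myvol features.cache-invalidation on
  $ gluster volume set myvol performance.cache-invalidation on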
2018 Apr 12
2
Turn off replication
On Wed, Apr 11, 2018 at 7:38 PM, Jose Sanchez <josesanc at carc.unm.edu> wrote: > Hi Karthik > > Looking at the information you have provided me, I would like to make sure > that I'm running the right commands. > > 1. gluster volume heal scratch info > If the count is non zero, trigger the heal and wait for heal info count to become zero. > 2. gluster volume
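A compact sketch of the check-then-heal sequence being confirmed above; the volume name scratch is from the thread, the rest is a generic outline:

  # 1. Look for entries that still need healing
  $ gluster volume heal scratch info

  # 2. If the count is non-zero, trigger a heal and re-check until it reaches zero
  $ gluster volume heal scratch
  $ gluster volume heal scratch info

  # 3. Only then move on to the replica-reduction step discussed in the thread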
2018 Apr 25
2
Turn off replication
Looking at the logs, it seems that it is trying to add the brick using the same port that was assigned for gluster01ib. Any ideas?? Jose [2018-04-25 22:08:55.169302] I [MSGID: 106482] [glusterd-brick-ops.c:447:__glusterd_handle_add_brick] 0-management: Received add brick req [2018-04-25 22:08:55.186037] I [run.c:191:runner_log] (-->/usr/lib64/glusterfs/3.8.15/xlator/mgmt/glusterd.so(+0x33045)
2010 Mar 02
1
sem package and growth curves
I have been working through the book "Applied longitudinal data analysis: modeling change and event occurrence" by Judith D. Singer and John B. Willett. I have been working examples using SAS and also using it as an opportunity for learning to use R for statistical analysis. I ran into some difficulties in chapter 8 which deals with using structural equation modeling. I have tried to
2011 Dec 08
1
Can't create striped replicated volume
Hi, I'm trying to create a striped replicated volume but am getting this error: gluster volume create cloud stripe 4 replica 2 transport tcp nebula1:/dataPool nebula2:/dataPool nebula3:/dataPool nebula4:/dataPool wrong brick type: replica, use <HOSTNAME>:<export-dir-abs-path> Usage: volume create <NEW-VOLNAME> [stripe <COUNT>] [replica <COUNT>]
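Two separate things trip this command up: releases before 3.3 did not accept stripe and replica together, so the CLI parses the word replica as a brick name and prints exactly this message, and even on 3.3+ the stripe count times the replica count must equal the number of bricks. A hedged corrected form for 3.3 or later, reusing the hostnames from the post:

  # On GlusterFS 3.3+, stripe x replica must equal the brick count (2 x 2 = 4 here)
  $ gluster volume create cloud stripe 2 replica 2 transport tcp \
        nebula1:/dataPool nebula2:/dataPool nebula3:/dataPool nebula4:/dataPool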
2018 Apr 30
2
Turn off replication
Hi All, We were able to get all 4 bricks distributed and we can see the right amount of space, but we have been rebalancing 16TB for 4 days now and are still only at 8TB. Is there a way to speed it up? There is also data we could remove to speed things up, but what is the best procedure for removing data: from the Gluster main export point, or by going onto each brick and removing it there? We would like
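On releases that have the rebalance throttle (roughly 3.7 onward, so the 3.8.15 install seen earlier in this thread should qualify), raising it is the usual way to trade cluster load for rebalance speed; the volume name below is a placeholder:

  # Let rebalance use more resources ('lazy' < 'normal' < 'aggressive')
  $ gluster volume set myvol cluster.rebal-throttle aggressive
  $ gluster volume rebalance myvol status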
2007 Jan 04
1
asterisk sip peer/user matching methodsforauthentication backwards?
I have considered opening a bug report on this, but wanted to get some feedback and make sure I am not missing something in the way of a simple work around. What is the scenario in which this impacts your implementation? Ours is the desire to use the same realtime SIP database for many asterisk servers, and route the call based on a "home server" value in the realtime database. The
2011 Apr 04
1
rdma or tcp?
Is there a document with some guidelines for setting up bricks with tcp or rdma transport? I'm looking at a new deployment where the storage cluster hosts connect via 10GigE, but clients are on 1GigE. Over time, there will be 10GigE clients, but the majority will remain on 1GigE. In this setup, should the storage bricks use tcp or rdma? If tcp is the better choice, and at some point in the
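A common way to handle a mixed 1GigE/10GigE client population, and also how the "change transport type from tcp to rdma" question at the top of this page is generally approached, is to give the volume both transports so each client picks one at mount time. A hedged sketch; the volume name, hostnames and the assumption of RoCE-capable NICs on the rdma side are all illustrative:

  # Create the volume with both transports
  $ gluster volume create bigvol transport tcp,rdma \
        store1:/bricks/b1 store2:/bricks/b1

  # Or switch an existing tcp-only volume (the volume must be stopped first)
  $ gluster volume stop bigvol
  $ gluster volume set bigvol config.transport tcp,rdma
  $ gluster volume start bigvol

  # Clients then choose the transport when mounting
  $ mount -t glusterfs -o transport=tcp  store1:/bigvol /mnt/bigvol   # 1GigE clients
  $ mount -t glusterfs -o transport=rdma store1:/bigvol /mnt/bigvol   # RDMA/RoCE clients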