search for: julianfamili

Displaying 20 results from an estimated 44 matches for "julianfamili".

2012 Dec 17
2
Transport endpoint
Hi, I've got a Gluster error: Transport endpoint not connected. It came up twice after trying to rsync a 2 TB filesystem over; it reached about 1.8 TB and got the error. Logs on the server side (in reverse time order): [2012-12-15 00:53:24.747934] I [server-helpers.c:629:server_connection_destroy] 0-RedhawkShared-server: destroyed connection of
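The usual first response to "Transport endpoint is not connected" is to confirm the bricks are still up and remount the client. A minimal sketch, assuming a FUSE client mount; the mount point and server name below are illustrative, only the volume name RedhawkShared comes from the log above:

gluster volume status RedhawkShared         # are all brick processes online?
tail /var/log/glusterfs/mnt-gluster.log     # client log, named after the mount point
umount -l /mnt/gluster                      # lazy-unmount the dead mount
mount -t glusterfs server1:/RedhawkShared /mnt/gluster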
2012 Dec 18
1
Infiniband performance issues answered?
In IRC today, someone who was hitting that same IB performance ceiling that occasionally gets reported had this to say: [11:50] <nissim> first, I ran fedora which is not supported by Mellanox OFED distro [11:50] <nissim> so I moved to CentOS 6.3 [11:51] <nissim> next I removed all distribution related infiniband rpms and build the latest OFED package [11:52] <nissim>
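nissim's sequence maps roughly onto the following. This is a sketch only: the exact package set and the Mellanox OFED bundle name vary by release, so all names below are illustrative rather than taken from the thread.

# remove the distribution's InfiniBand stack
yum remove rdma libibverbs librdmacm libmlx4 infiniband-diags
# unpack and run the Mellanox OFED installer built for the matching CentOS release
tar xf MLNX_OFED_LINUX-*-rhel6.3-x86_64.tgz
cd MLNX_OFED_LINUX-*-rhel6.3-x86_64 && ./mlnxofedinstall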
2017 Jun 01
0
Who's using OpenStack Cinder & Gluster? [ Was Re: [Gluster-devel] Fwd: Re: GlusterFS removal from Openstack Cinder]
Joe, Agree with you on turning this around into something more positive. One aspect that would really help us decide on our next steps here is the actual number of deployments that will be affected by the removal of the gluster driver in Cinder. If you are running or aware of a deployment of OpenStack Cinder & Gluster, can you please respond on this thread or to me & Niels in private
2017 Nov 21
1
Ganesha or Storhaug
Yeah I saw that, which is what brought me to the mailing list. Ta From: Joe Julian <joe at julianfamily.org> To: Jonathan Archer <jf_archer at yahoo.com> Sent: Tuesday, 21 November 2017, 14:04 Subject: Re: [Gluster-users] Ganesha or Storhaug Not according to storhaug's GitHub page: > Currently this is a WIP content dump. If you want to get this up and running,
2017 Dec 29
0
Exact purpose of network.ping-timeout
Restarts will go through a shutdown process. As long as the network isn't actively unconfigured before the final kill, the TCP connection will be shut down and there will be no wait. On 12/28/17 20:19, Sam McLeod wrote: > Sure, if you never restart / autoscale anything and if your use case > isn't bothered with up to 42 seconds of downtime, for us - 42 seconds > is a really
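For reference, network.ping-timeout is a per-volume option and the 42 seconds debated throughout this thread is simply its default; it can be inspected and tuned from the CLI. A sketch, with an illustrative volume name:

gluster volume get myvol network.ping-timeout      # show the current value (default 42)
gluster volume set myvol network.ping-timeout 10   # trade reconnect overhead for faster failover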
2017 Dec 29
1
Exact purpose of network.ping-timeout
Hi, I know that "glusterbot" text about ping-timeout almost by heart by now ;-) I have searched the complete IRC logs and mailing list archives from the last 4 or 5 years for anything related to ping-timeout. The problem with "can be a very expensive operation" is that this is extremely vague. It would be helpful to put some numbers behind it. Of course I also understand that any
2017 Dec 18
0
Gluster consulting
Thanks for the replies Joe. Yes, it does seem that Gluster is a very in-demand expertise. And it's hard to justify the cost of Red Hat's commercial offering without first putting a POC in place to confirm viability. Thanks again, HB On Mon, Dec 18, 2017 at 12:08 PM, Joe Julian <joe at julianfamily.org> wrote: > Yeah, unfortunately that's all that have come forward as
2013 Mar 02
0
Gluster-users Digest, Vol 59, Issue 15 - GlusterFS performance
----- Original Message ----- > From: gluster-users-request at gluster.org > To: gluster-users at gluster.org > Sent: Friday, March 1, 2013 4:03:13 PM > Subject: Gluster-users Digest, Vol 59, Issue 15 > > ------------------------------ > > Message: 2 > Date: Fri, 01 Mar 2013 10:22:21 -0800 > From: Joe Julian <joe at julianfamily.org> > To: gluster-users at
2017 Dec 29
3
Exact purpose of network.ping-timeout
Sure, if you never restart / autoscale anything and if your use case isn't bothered with up to 42 seconds of downtime - for us, 42 seconds is a really long time for something like a patient management system to reject file attachment uploads etc... We apply a strict patching policy for security and kernel updates, we often also load balance between underlying physical hosts and
2017 Dec 18
2
Gluster consulting
Yeah, unfortunately that's all that have come forward as available. I think the demand for gluster expertise is just so high and the pool of experts so low that there's nobody left to do consulting work. On 12/18/2017 12:04 PM, Herb Burnswell wrote: > Hi, > > Sorry, I just saw the post > <http://lists.gluster.org/pipermail/gluster-users/2017-December/033053.html>
2012 Sep 18
4
cannot create a new volume with a brick that used to be part of a deleted volume?
Greetings, I'm running v3.3.0 on Fedora16-x86_64. I used to have a replicated volume on two bricks. This morning I deleted it successfully:
########
[root at farm-ljf0 ~]# gluster volume stop gv0
Stopping volume will make its data inaccessible. Do you want to continue? (y/n) y
Stopping volume gv0 has been successful
[root at farm-ljf0 ~]# gluster volume delete gv0
Deleting volume will erase
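The usual reason a brick from a deleted volume cannot be reused is that Gluster leaves its identifying extended attributes and the .glusterfs directory behind on the brick. A commonly cited cleanup, sketched here with a placeholder brick path; it preserves the regular files on the brick but discards Gluster's metadata:

setfattr -x trusted.glusterfs.volume-id /path/to/brick   # drop the old volume's ID
setfattr -x trusted.gfid /path/to/brick                  # drop the brick root's gfid
rm -rf /path/to/brick/.glusterfs                         # remove the old gfid hardlink tree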
2012 Aug 03
1
Gluster-users Digest, Vol 51, Issue 49
> Message: 4 > Date: Fri, 27 Jul 2012 15:29:41 -0700 > From: Harry Mangalam <hjmangalam at gmail.com> > Subject: [Gluster-users] Change NFS parameters post-start > To: gluster-users <gluster-users at gluster.org> > Message-ID: > <CAEib2OnKfENr8NhVwkvpsw21C5QJmzu_=C9j144p2Gkn7KP=LQ at mail.gmail.com> > Content-Type: text/plain; charset=ISO-8859-1 >
2017 Jul 11
1
Gluster native mount is really slow compared to nfs
Hello Vijay, What do you mean exactly? What info is missing? PS: I already found out that for this particular test all the difference is made by: negative-timeout=600 - when removing it, it's much much slower again. Regards Jo -----Original message----- From: Vijay Bellur <vbellur at redhat.com> Sent: Tue 11-07-2017 18:16 Subject: Re: [Gluster-users] Gluster native mount is
2017 Jul 11
2
Gluster native mount is really slow compared to nfs
Hello Joe, I just did a mount like this (added the bold): mount -t glusterfs -o attribute-timeout=600,entry-timeout=600,negative-timeout=600,fopen-keep-cache,use-readdirp=no,log-level=WARNING,log-file=/var/log/glusterxxx.log 192.168.140.41:/www /var/www Results: root at app1:~/smallfile-master# ./smallfile_cli.py --top /var/www/test --host-set 192.168.140.41 --threads 8 --files 5000
2012 Sep 24
1
using KVM on glusterfs
hi, all: I'm now working on constructing KVM servers based on glusterfs. After many times searching the web, I found just a little information for this task, so I had to write this email seeking systematic instructions on how to make glusterfs and KVM work together perfectly. And here are some questions I can not find any clear answers to: 1. what are the
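One approach worth knowing about, which landed in QEMU (version 1.3) around the time of this thread: QEMU can address a Gluster volume directly over libgfapi with a gluster:// URL, bypassing the FUSE mount entirely. A sketch with illustrative host, volume, and image names:

qemu-img create -f qcow2 gluster://gluster1/kvmvol/vm1.qcow2 20G
qemu-system-x86_64 -m 2048 -drive file=gluster://gluster1/kvmvol/vm1.qcow2,format=qcow2,if=virtio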
2017 Jun 19
1
different brick using the same port?
Isn't this just brick multiplexing? On June 19, 2017 5:55:54 AM PDT, Atin Mukherjee <amukherj at redhat.com> wrote: > On Sun, Jun 18, 2017 at 1:40 PM, Yong Zhang <hiscal at outlook.com> wrote: > >> Hi, all >> >> I found two of my bricks from different volumes are using the same >> port 49154 on the same glusterfs server node, is
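Brick multiplexing does put multiple bricks behind a single port and process, but it only arrived in Gluster 3.10 and is off by default, so whether it explains the report depends on the cluster's settings. It can be checked or toggled cluster-wide; a sketch:

gluster volume get all cluster.brick-multiplex      # is multiplexing enabled?
gluster volume set all cluster.brick-multiplex on   # opt in cluster-wide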
2017 Jul 11
0
Gluster native mount is really slow compared to nfs
On 07/11/2017 08:14 AM, Jo Goossens wrote: > RE: [Gluster-users] Gluster native mount is really slow compared to nfs > > Hello Joe, > > I really appreciate your feedback, but I already tried the opcache > stuff (to not validate at all). It improves of course then, but not > completely somehow. Still quite slow. > > I did not try the mount options yet, but I will now!
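The "opcache stuff" in this exchange is PHP's OPcache: with timestamp validation disabled, PHP stops stat()ing every script on the Gluster mount on each request, which is exactly the metadata traffic that makes small-file PHP sites slow over FUSE. A sketch of that configuration; the ini file path varies by distribution, and a cache reset (here an FPM reload) is then needed on every deploy:

cat >> /etc/php.d/10-opcache.ini <<'EOF'
opcache.enable=1
opcache.validate_timestamps=0
EOF
systemctl reload php-fpm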
2018 May 22
2
split brain? but where?
Hi, Which version of gluster are you using? You can find which file that is using the following command: find <brickpath> -samefile <brickpath>/.glusterfs/<first two bits of gfid>/<next 2 bits of gfid>/<full gfid> Please provide the getfattr output of the file which is in split brain. The steps to recover from split-brain can be found here,
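For completeness, here is the getfattr invocation usually requested for split-brain diagnosis, plus the CLI-driven resolution that avoids hand-editing bricks. A sketch reusing the gv0 volume and brick path that appear later in this thread, with an illustrative file name:

getfattr -d -m . -e hex /bricks/brick1/gv0/some/file        # run on each brick's copy; compare the trusted.afr.* values
gluster volume heal gv0 info split-brain                    # list entries the self-heal daemon flags
gluster volume heal gv0 split-brain latest-mtime /some/file # pick the newest copy as the heal source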
2017 Jul 11
2
Gluster native mount is really slow compared to nfs
Hello Joe, I really appreciate your feedback, but I already tried the opcache stuff (to not validate at all). It improves of course then, but not completely somehow. Still quite slow. I did not try the mount options yet, but I will now! With nfs (doesn't matter much, built-in version 3 or ganesha version 4) I can even host the site perfectly fast without these extreme opcache settings.
2018 May 22
0
split brain? but where?
I tried this already.
8><---
[root at glusterp2 fb]# find /bricks/brick1/gv0 -samefile /bricks/brick1/gv0/.glusterfs/ea/fb/eafb8799-4e7a-4264-9213-26997c5a4693
/bricks/brick1/gv0/.glusterfs/ea/fb/eafb8799-4e7a-4264-9213-26997c5a4693
[root at glusterp2 fb]#
8><---
gluster 4
Centos 7.4
8><---
df -h
[root at glusterp2 fb]# df -h
Filesystem