Displaying 20 results from an estimated 800 matches similar to: "Using the host name of the volume, its related commands can become very slow"

2018 Jan 16
0
Using the host name of the volume, its related commands can become very slow
On Mon, Jan 15, 2018 at 6:30 PM, ?? <chenxi at shudun.com> wrote: > Using the host name of the volume, its related gluster commands can become > very slow. For example: create, start, stop volume, and nfs-related commands. > And in some cases, the command will return Error: Request timed > out, but if using an IP address to create the volume, the volume all gluster >
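For reference, a minimal sketch of the two create commands being compared, with hypothetical volume and brick names; slow hostname handling usually points at name resolution, so verifying it on every peer is a sensible first step:

    # Volume created with peer hostnames (the slow case reported above)
    gluster volume create testvol replica 3 node1.example.com:/bricks/b1 \
        node2.example.com:/bricks/b1 node3.example.com:/bricks/b1

    # The same volume created with IP addresses (reported to behave normally)
    gluster volume create testvol replica 3 192.168.1.11:/bricks/b1 \
        192.168.1.12:/bricks/b1 192.168.1.13:/bricks/b1

    # Verify forward resolution of each peer name on each node
    getent hosts node1.example.com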
2017 Dec 15
3
Production Volume will not start
Hi all, I have an issue where our volume will not start from any node. When attempting to start the volume it will eventually return: Error: Request timed out. For some time after that, the volume is locked and we either have to wait or restart Gluster services. The glusterd.log shows the following: [2017-12-15 18:00:12.423478] I [glusterd-utils.c:5926:glusterd_brick_start]
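A hedged sketch of the usual first diagnostics for a volume that times out on start; the volume name is hypothetical and the log path is the common default:

    # Confirm all peers are still connected before blaming the volume
    gluster peer status

    # Per-brick process and port state for the volume
    gluster volume status myvol

    # Watch the management log while retrying the start
    tail -f /var/log/glusterfs/glusterd.log &
    gluster volume start myvol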
2017 Dec 18
0
Production Volume will not start
On Sat, Dec 16, 2017 at 12:45 AM, Matt Waymack <mwaymack at nsgdv.com> wrote: > Hi all, > > > > I have an issue where our volume will not start from any node. When > attempting to start the volume it will eventually return: > > Error: Request timed out > > > > For some time after that, the volume is locked and we either have to wait > or restart
2018 Mar 21
2
Brick process not starting after reinstall
Hi all, our systems have suffered a host failure in a replica three setup. The host needed a complete reinstall. I followed the RH guide to 'replace a host with the same hostname' (https://access.redhat.com/documentation/en-us/red_hat_gluster_storage/3/html/administration_guide/sect-replacing_hosts). The machine has the same OS (CentOS 7). The new machine got a minor version number newer
2018 Apr 25
2
Turn off replication
Looking at the logs, it seems that it is trying to add the brick using the same port that was assigned for gluster01ib. Any ideas? Jose [2018-04-25 22:08:55.169302] I [MSGID: 106482] [glusterd-brick-ops.c:447:__glusterd_handle_add_brick] 0-management: Received add brick req [2018-04-25 22:08:55.186037] I [run.c:191:runner_log] (-->/usr/lib64/glusterfs/3.8.15/xlator/mgmt/glusterd.so(+0x33045)
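Given that the log points at a port collision, a quick way to see which ports glusterd has already handed out; the volume name follows the thread, the port number is only an example:

    # 'volume status' lists the TCP port of every running brick
    gluster volume status scratch

    # Cross-check at the OS level what is actually listening there
    # (bricks normally use ports 49152 and up)
    ss -tlnp | grep 49152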
2018 Mar 21
0
Brick process not starting after reinstall
Could you share the following information: 1. gluster --version 2. output of gluster volume status 3. glusterd log and all brick log files from the node where bricks didn't come up. On Wed, Mar 21, 2018 at 12:35 PM, Richard Neuboeck <hawk at tbi.univie.ac.at> wrote: > Hi all, > > our systems have suffered a host failure in a replica three setup. > The host needed a
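A sketch of gathering the three requested items in one pass; the log locations are the usual defaults and may differ per distribution:

    gluster --version
    gluster volume status
    # Management log plus every brick log on the affected node
    tar czf gluster-logs.tar.gz /var/log/glusterfs/glusterd.log /var/log/glusterfs/bricks/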
2018 Apr 27
0
Turn off replication
Hi Jose, why are all the bricks visible in volume info if the pre-validation for add-brick failed? I suspect that the remove-brick wasn't done properly. You can provide the cmd_history.log to verify this. Better to get the other log messages too. Also, I need to know which bricks were actually removed, the command used, and its output. On Thu, Apr 26, 2018 at 3:47 AM, Jose Sanchez
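The command history mentioned here is kept per node in the glusterfs log directory by default; a hedged way to pull out the relevant entries:

    # Every CLI command issued on this node is recorded with a timestamp
    grep -E 'add-brick|remove-brick' /var/log/glusterfs/cmd_history.log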
2018 Apr 30
2
Turn off replication
Hi All, we were able to get all 4 bricks distributed, and we can see the right amount of space. But we have been rebalancing for 4 days now for 16TB and it is still only at 8TB. Is there a way to speed it up? There is also data we could remove to speed it up, but what is the best procedure for removing data: from the Gluster main export point, or going onto each brick and removing it there? We would like
2018 May 02
0
Turn off replication
Hi, removing data to speed up a rebalance is not recommended. A rebalance can be stopped, but if it is started again it will start from the beginning (it will have to check and skip the files already moved). The rebalance will take a while; better to let it run. It doesn't have any downside. Unless you touch the backend, the data on the gluster volume will be available for usage in spite of
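For reference, the rebalance operations under discussion, with the volume name taken from the thread; note the caveat above that a stopped rebalance starts over when resumed:

    # Progress per node: files scanned, moved, failed
    gluster volume rebalance scratch status

    # Stop only if unavoidable; a later start rescans from the beginning
    gluster volume rebalance scratch stop
    gluster volume rebalance scratch start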
2018 Apr 25
0
Turn off replication
Hello Karthik, I'm having trouble bringing the two bricks back online. Any help is appreciated, thanks. When I try the add-brick command, this is what I get: [root at gluster01 ~]# gluster volume add-brick scratch gluster02ib:/gdata/brick2/scratch/ volume add-brick: failed: Pre Validation failed on gluster02ib. Brick: gluster02ib:/gdata/brick2/scratch not available. Brick may be containing or be
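The truncated error is the pre-validation check that refuses a path which still carries gluster metadata from an earlier volume. The commonly documented remedy, assuming the old brick contents are genuinely disposable, is to strip that metadata from the brick path; this is destructive, so treat it strictly as a sketch:

    # DESTRUCTIVE: run on gluster02ib only if the old brick data is expendable
    setfattr -x trusted.glusterfs.volume-id /gdata/brick2/scratch
    setfattr -x trusted.gfid /gdata/brick2/scratch
    rm -rf /gdata/brick2/scratch/.glusterfs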
2018 Jan 12
0
Creating cluster replica on 2 nodes 2 bricks each.
---------- Forwarded message ---------- From: Jose Sanchez <josesanc at carc.unm.edu> Date: 11 January 2018 at 22:05 Subject: Re: [Gluster-users] Creating cluster replica on 2 nodes 2 bricks each. To: Nithya Balachandran <nbalacha at redhat.com> Cc: gluster-users <gluster-users at gluster.org> Hi Nithya, thanks for helping me with this. I understand now, but I have a few
2018 Jan 11
3
Creating cluster replica on 2 nodes 2 bricks each.
Hi Nithya, thanks for helping me with this. I understand now, but I have a few questions. When I had it set up as replica (just 2 nodes with 2 bricks) and tried to add bricks, it failed. > [root at gluster01 ~]# gluster volume add-brick scratch replica 2 gluster01ib:/gdata/brick2/scratch gluster02ib:/gdata/brick2/scratch > volume add-brick: failed: /gdata/brick2/scratch is already part of a
2018 Jan 14
0
Volume cannot write data when a quota limits its capacity and the volume is mounted on itself, on arm64 (aarch64) architecture
Thanks for reading this email. I found a problem while using GlusterFS. First, I created a Distributed Dispersed volume on three nodes and limited the volume capacity with the quota command; this volume is auto-mounted on /run/gluster/VOLUME_NAME. The volume can be read and written normally. Afterwards, I manually mounted the volume on another path to provide data storage for SAMBA and iSCSI services, after
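For context, a minimal sketch of the quota setup described, with a hypothetical volume name and limit; a limit on / caps the whole volume:

    gluster volume quota myvol enable
    gluster volume quota myvol limit-usage / 10GB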
2018 Mar 06
4
Fixing a rejected peer
Hello, so I'm seeing a rejected peer with 3.12.6. This is with a replica 3 volume. It actually began as the same problem with a different peer. I noticed it on (call it) gluster-2 when I couldn't create a new volume. I compared /var/lib/glusterd between them, and found that somehow the options in one of the vols differed. (I suspect this was due to attempting to create the volume via the
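The commonly documented recovery for a rejected peer, heavily hedged because it wipes the node's local volume configuration and re-syncs it from the healthy peers; glusterd.info holds the node's UUID and must survive:

    # On the rejected peer only
    systemctl stop glusterd
    cd /var/lib/glusterd
    find . -mindepth 1 -maxdepth 1 ! -name glusterd.info -exec rm -rf {} +
    systemctl start glusterd
    gluster peer probe <healthy-peer>   # then restart glusterd once more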
2018 Jan 15
1
Creating cluster replica on 2 nodes 2 bricks each.
On Fri, 12 Jan 2018 at 21:16, Nithya Balachandran <nbalacha at redhat.com> wrote: > ---------- Forwarded message ---------- > From: Jose Sanchez <josesanc at carc.unm.edu> > Date: 11 January 2018 at 22:05 > Subject: Re: [Gluster-users] Creating cluster replica on 2 nodes 2 bricks > each. > To: Nithya Balachandran <nbalacha at redhat.com> > Cc: gluster-users
2018 Apr 12
2
Turn off replication
On Wed, Apr 11, 2018 at 7:38 PM, Jose Sanchez <josesanc at carc.unm.edu> wrote: > Hi Karthik > > Looking at the information you have provided me, I would like to make sure > that I'm running the right commands. > > 1. gluster volume heal scratch info > If the count is non zero, trigger the heal and wait for heal info count to become zero. > 2. gluster volume
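The heal commands referenced in the quoted steps, for the volume named in the thread:

    # Count of entries still pending heal, per brick
    gluster volume heal scratch info

    # Trigger a heal of pending entries; 'full' crawls everything
    gluster volume heal scratch
    gluster volume heal scratch full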
2017 Sep 13
1
[3.11.2] Bricks disconnect from gluster with 0-transport: EPOLLERR
I ran into something like this in 3.10.4 and filed two bugs for it: https://bugzilla.redhat.com/show_bug.cgi?id=1491059 https://bugzilla.redhat.com/show_bug.cgi?id=1491060 Please see the above bugs for full detail. In summary, my issue was related to glusterd's handling of pid files when it starts self-heal and bricks. The issues are: a. brick pid file leaves stale pid and brick fails
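A hedged illustration of the stale-pid symptom described in those bugs: the pid file outlives the process, so a later start can misjudge the brick's state. The path is hypothetical, following the usual 3.x layout under /var/lib/glusterd:

    # Compare the recorded pid with reality for one brick's pid file
    PIDFILE=/var/lib/glusterd/vols/myvol/run/node1-bricks-b1.pid
    kill -0 "$(cat "$PIDFILE")" 2>/dev/null && echo "brick alive" || echo "stale pid file"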
2018 Mar 06
0
Fixing a rejected peer
On Tue, Mar 6, 2018 at 6:00 AM, Jamie Lawrence <jlawrence at squaretrade.com> wrote: > Hello, > > So I'm seeing a rejected peer with 3.12.6. This is with a replica 3 volume. > > It actually began as the same problem with a different peer. I noticed > with (call it) gluster-2, when I couldn't make a new volume. I compared > /var/lib/glusterd between them, and
2018 Mar 20
0
brick processes not starting
Hi all, our systems have suffered a node failure in a replica three setup. The node needed a complete reinstall. I followed the RH guide to replace a host with the same hostname (https://access.redhat.com/documentation/en-us/red_hat_gluster_storage/3/html/administration_guide/sect-replacing_hosts). The machine has the same OS (CentOS 7). The new machine got a minor version number newer gluster
2017 Aug 21
1
Glusterd not working with systemd in redhat 7
Hi! Please see below. Note that web1.dasilva.network is the address of the local machine where one of the bricks is installed and that tries to mount. [2017-08-20 20:30:40.359236] I [MSGID: 100030] [glusterfsd.c:2476:main] 0-/usr/sbin/glusterd: Started running /usr/sbin/glusterd version 3.11.2 (args: /usr/sbin/glusterd -p /var/run/glusterd.pid) [2017-08-20 20:30:40.973249] I [MSGID: 106478]
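When a local gluster mount races glusterd at boot on systemd machines, the commonly suggested fix is to defer the mount; a hedged fstab sketch using the hostname from this thread with a hypothetical volume and mountpoint:

    # _netdev waits for the network; x-systemd.automount mounts on first access
    web1.dasilva.network:/myvol  /mnt/myvol  glusterfs  defaults,_netdev,x-systemd.automount  0 0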