Displaying 20 results from an estimated 400 matches similar to: "Upgrade from 3.8.15 to 3.12.5"
2018 Feb 19
0
Upgrade from 3.8.15 to 3.12.5
I believe the peer-rejected issue is something we recently identified; it
has been fixed through https://bugzilla.redhat.com/show_bug.cgi?id=1544637
and the fix is available in 3.12.6. I'd request you to upgrade to the latest
version in the 3.12 series.
On Mon, Feb 19, 2018 at 12:27 PM, <rwecker at ssd.org> wrote:
> Hi,
>
> I have a 3-node cluster (Found1, Found2, Found2) which I wanted
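A rough sketch of how the upgrade advice above might be checked and carried out (hedged: the package command assumes an RPM-based node with the 3.12 repository already configured; follow the official upgrade guide for ordering across the nodes):

# Confirm the installed version and cluster state before touching anything
gluster --version
gluster peer status
gluster volume status

# Upgrade the packages on one node at a time (sketch only)
yum update glusterfs-server

# Restart the management daemon and re-check the peers afterwards
systemctl restart glusterd
gluster peer status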
2018 Feb 20
0
Stale File handle
Hello,
I have a file in my gluster volume that gives the following when I try to access it (ls):
ls: cannot access 37019600-c34e-4d10-8829-ac08cb141f19.meta: Stale file handle
37019600-c34e-4d10-8829-ac08cb141f19 37019600-c34e-4d10-8829-ac08cb141f19.lease 37019600-c34e-4d10-8829-ac08cb141f19.meta
When I look at "gluster volume heal VMData info" I get the following:
Brick
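A minimal sketch of the checks that usually follow a stale-handle report like this, using the volume name VMData from the post; the client mount point /mnt/VMData is an assumption:

# List pending heals and any entries flagged as split-brain
gluster volume heal VMData info
gluster volume heal VMData info split-brain

# Kick off an index heal and watch the counts drain
gluster volume heal VMData

# From a client mount, a fresh lookup of the affected file often clears a
# stale handle once the heal has completed (mount path is hypothetical)
stat /mnt/VMData/37019600-c34e-4d10-8829-ac08cb141f19.meta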
2018 Mar 06
0
Fixing a rejected peer
On Tue, Mar 6, 2018 at 6:00 AM, Jamie Lawrence <jlawrence at squaretrade.com>
wrote:
> Hello,
>
> So I'm seeing a rejected peer with 3.12.6. This is with a replica 3 volume.
>
> It actually began as the same problem with a different peer. I noticed it
> with (call it) gluster-2 when I couldn't create a new volume. I compared
> /var/lib/glusterd between them, and
2018 Mar 06
4
Fixing a rejected peer
Hello,
So I'm seeing a rejected peer with 3.12.6. This is with a replica 3 volume.
It actually began as the same problem with a different peer. I noticed it with (call it) gluster-2 when I couldn't create a new volume. I compared /var/lib/glusterd between them, and found that somehow the options in one of the vols differed. (I suspect this was due to attempting to create the volume via the
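For reference, a sketch of the commonly documented recovery for a peer stuck in the Rejected state. It is run on the rejected node (gluster-2 in this thread); <healthy-peer> is a placeholder, and the /var/lib/glusterd contents should be backed up before deleting anything:

# Stop the management daemon on the rejected peer
systemctl stop glusterd

# Keep the node identity, wipe the rest of the config store
cp /var/lib/glusterd/glusterd.info /root/glusterd.info.bak
mv /var/lib/glusterd/glusterd.info /tmp/
rm -rf /var/lib/glusterd/*
mv /tmp/glusterd.info /var/lib/glusterd/

# Start glusterd, probe a healthy peer so the volume definitions sync back,
# then restart once more and verify
systemctl start glusterd
gluster peer probe <healthy-peer>
systemctl restart glusterd
gluster peer status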
2018 Feb 20
0
Split brain
Hi,
I am having a problem with a split-brain issue that does not seem to match the documentation on how to solve it.
gluster volume heal VMData2 info gives:
Brick found2.ssd.org:/data/brick6/data
/08aa5fc4-c9ba-4fcf-af57-72450b875d1a/images/d1164a9b-ba63-46c4-a9ec-76ea4a7a2c45/82a7027b-321c-4bd9-8afc-2a12cfa23bfc
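When heal info reports a file in split-brain, the CLI can resolve it by policy. A hedged sketch using the volume name VMData2 and the image path from the post; which policy is appropriate depends on which copy of the file you trust:

# Show only the entries that are genuinely in split-brain
gluster volume heal VMData2 info split-brain

# Keep the copy with the newest modification time ...
gluster volume heal VMData2 split-brain latest-mtime /08aa5fc4-c9ba-4fcf-af57-72450b875d1a/images/d1164a9b-ba63-46c4-a9ec-76ea4a7a2c45/82a7027b-321c-4bd9-8afc-2a12cfa23bfc

# ... or keep the bigger file, or name the brick whose copy is the source:
# gluster volume heal VMData2 split-brain bigger-file <path>
# gluster volume heal VMData2 split-brain source-brick found2.ssd.org:/data/brick6/data <path>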
2003 Aug 12
4
print points from a huge matrix
Hi All,
I have an 8000*8000 matrix and I want to print out a file with the row name,
column name, and value for those points whose values satisfy a condition.
I tried using a for loop; however, it took forever to get the result. Is
there a fast way to do this? Thanks!
Bing
---------------------------------
1060 Commerce Park
Oak Ridge National Laboratory
P.O. Box 2008, MS 6480
Oak
2010 Aug 17
1
MySQL Connect problem...
Right, I'm baffled.
I have:
exten => s,1,MYSQL(Connect DB1 127.0.0.1 geraint xxx amis2)
exten => s,n,MYSQL(Query NORESULT ${DB1} INSERT\ INTO\ recordings\
(caller_number\,called_number\,date_created\,date_started\,in_use\,server_id)\
VALUES\ (\'${CALLERID(number)}\'\,\'${ARG1}\'\,NOW()\,NOW()\,\'Yes\'\,12))
exten => s,n,MYSQL(Query RESULT1 ${DB1} SELECT\
2017 Dec 15
3
Production Volume will not start
Hi all,
I have an issue where our volume will not start from any node. When attempting to start the volume it will eventually return:
Error: Request timed out
For some time after that, the volume is locked and we either have to wait or restart the Gluster services. In the glusterd.log, it shows the following:
[2017-12-15 18:00:12.423478] I [glusterd-utils.c:5926:glusterd_brick_start]
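A sketch of the usual first checks when a volume start request times out like this; the log locations are the stock ones and <volname> is a placeholder:

# Look for errors and stale-lock messages around the failed start
less /var/log/glusterfs/glusterd.log
less /var/log/glusterfs/cmd_history.log

# Make sure glusterd is healthy and the peers are connected on every node
systemctl status glusterd
gluster peer status

# If a stuck transaction is suspected, restart glusterd node by node
# (brick processes and client I/O are not affected), then retry the start
systemctl restart glusterd
gluster volume start <volname> force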
2018 Jan 11
3
Creating cluster replica on 2 nodes 2 bricks each.
Hi Nithya
Thanks for helping me with this. I understand now, but I have a few questions.
When I had it set up as a replica (just 2 nodes with 2 bricks) and tried to add to it, it failed.
> [root at gluster01 ~]# gluster volume add-brick scratch replica 2 gluster01ib:/gdata/brick2/scratch gluster02ib:/gdata/brick2/scratch
> volume add-brick: failed: /gdata/brick2/scratch is already part of a
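For context, the "already part of a volume" error usually means the brick directory still carries GlusterFS metadata from an earlier attempt. A hedged sketch of how such a path is typically cleaned before re-adding it; this is destructive and only appropriate for a brick that is genuinely not in use:

# On each node, inspect the leftover volume markers on the brick path
getfattr -d -m . -e hex /gdata/brick2/scratch

# Remove the volume-id and gfid xattrs plus the internal .glusterfs tree
setfattr -x trusted.glusterfs.volume-id /gdata/brick2/scratch
setfattr -x trusted.gfid /gdata/brick2/scratch
rm -rf /gdata/brick2/scratch/.glusterfs

# Then retry the add-brick from the thread
gluster volume add-brick scratch replica 2 gluster01ib:/gdata/brick2/scratch gluster02ib:/gdata/brick2/scratch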
2018 Jan 12
0
Creating cluster replica on 2 nodes 2 bricks each.
---------- Forwarded message ----------
From: Jose Sanchez <josesanc at carc.unm.edu>
Date: 11 January 2018 at 22:05
Subject: Re: [Gluster-users] Creating cluster replica on 2 nodes 2 bricks
each.
To: Nithya Balachandran <nbalacha at redhat.com>
Cc: gluster-users <gluster-users at gluster.org>
Hi Nithya
Thanks for helping me with this. I understand now, but I have a few
2017 Jul 20
2
glusterd-locks.c:572:glusterd_mgmt_v3_lock
Hi list,
Recently I've noted a strange behaviour of my gluster storage: sometimes,
while executing a simple command like "gluster volume status
vm-images-repo", as a response I get "Another transaction is in progress
for vm-images-repo. Please try again after sometime.". This situation does
not resolve itself simply by waiting; I have to restart glusterd on
the node that
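A minimal sketch of how a stale management lock of this kind is usually confirmed and cleared; the log paths are the defaults, and only the node that the log shows holding the lock needs the restart:

# See which commands ran concurrently on each node
grep 'volume status' /var/log/glusterfs/cmd_history.log | tail

# Identify the node holding the lock from the glusterd log
grep glusterd_mgmt_v3_lock /var/log/glusterfs/glusterd.log | tail

# Restarting glusterd on that node releases the lock; brick processes and
# client I/O keep running through a glusterd restart
systemctl restart glusterd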
2017 Jul 20
2
glusterd-locks.c:572:glusterd_mgmt_v3_lock
Attached are the requested logs for all three nodes.
thanks,
Paolo
On 20/07/2017 11:38, Atin Mukherjee wrote:
> Please share the cmd_history.log file from all the storage nodes.
>
> On Thu, Jul 20, 2017 at 2:34 PM, Paolo Margara
> <paolo.margara at polito.it> wrote:
>
> Hi list,
>
> recently I've
2018 Jan 11
0
Creating cluster replica on 2 nodes 2 bricks each.
Hi Jose,
Gluster is working as expected. The Distributed-Replicate type just means
that there are now 2 replica sets and files will be distributed across
them.
A volume of type Replicate (1xn where n is the number of bricks in the
replica set) indicates there is no distribution (all files on the
volume will be present on all the bricks in the volume).
A volume of type Distributed-Replicate
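As an illustration of the distinction described above, the volume header reported by the CLI changes with the layout; the numbers below are for a hypothetical 2-brick volume grown to 4 bricks:

gluster volume info <volname>
# Pure replica (no distribution):
#   Type: Replicate
#   Number of Bricks: 1 x 2 = 2
# After adding a second replica pair:
#   Type: Distributed-Replicate
#   Number of Bricks: 2 x 2 = 4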
2017 Jul 20
2
glusterd-locks.c:572:glusterd_mgmt_v3_lock
OK, on my Nagios instance I've disabled the gluster status check on all
nodes except one; I'll check whether this is enough.
Thanks,
Paolo
On 20/07/2017 13:50, Atin Mukherjee wrote:
> So from the cmd_history.logs across all the nodes it's evident that
> multiple commands on the same volume are run simultaneously, which can
> result in transaction collisions, and you can
2018 Jan 15
1
Creating cluster replica on 2 nodes 2 bricks each.
On Fri, 12 Jan 2018 at 21:16, Nithya Balachandran <nbalacha at redhat.com>
wrote:
> ---------- Forwarded message ----------
> From: Jose Sanchez <josesanc at carc.unm.edu>
> Date: 11 January 2018 at 22:05
> Subject: Re: [Gluster-users] Creating cluster replica on 2 nodes 2 bricks
> each.
> To: Nithya Balachandran <nbalacha at redhat.com>
> Cc: gluster-users
2017 Jul 26
2
glusterd-locks.c:572:glusterd_mgmt_v3_lock
Technically, if only one node is pumping out all these status commands, you
shouldn't get into this situation. Can you please share the latest
cmd_history and glusterd log files from all the nodes?
On Wed, Jul 26, 2017 at 1:41 PM, Paolo Margara <paolo.margara at polito.it>
wrote:
> Hi Atin,
>
> I've initially disabled gluster status check on all nodes except on one on
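For anyone following along, the files being requested live in the default log directory; a small sketch of collecting them on each node (the archive name is arbitrary):

tar czf /tmp/gluster-logs-$(hostname).tar.gz \
    /var/log/glusterfs/cmd_history.log \
    /var/log/glusterfs/glusterd.log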
2017 Dec 18
0
Production Volume will not start
On Sat, Dec 16, 2017 at 12:45 AM, Matt Waymack <mwaymack at nsgdv.com> wrote:
> Hi all,
>
>
>
> I have an issue where our volume will not start from any node. When
> attempting to start the volume it will eventually return:
>
> Error: Request timed out
>
>
>
> For some time after that, the volume is locked and we either have to wait
> or restart
2018 Jan 10
2
Creating cluster replica on 2 nodes 2 bricks each.
Hi Nithya
This is what I have so far: I have peered both cluster nodes together as a replica, from nodes 1A and 1B. Now when I try to add to it, I get the error that it is already part of a volume. When I run gluster volume info, I see that it has switched to Distributed-Replicate.
Thanks
Jose
[root at gluster01 ~]# gluster volume status
Status of volume: scratch
Gluster process
2017 Jul 20
0
glusterd-locks.c:572:glusterd_mgmt_v3_lock
Please share the cmd_history.log file from all the storage nodes.
On Thu, Jul 20, 2017 at 2:34 PM, Paolo Margara <paolo.margara at polito.it>
wrote:
> Hi list,
>
> Recently I've noted a strange behaviour of my gluster storage: sometimes
> while executing a simple command like "gluster volume status
> vm-images-repo", as a response I got "Another transaction
2017 Jul 20
0
glusterd-locks.c:572:glusterd_mgmt_v3_lock
So from the cmd_history.logs across all the nodes, it's evident that
multiple commands on the same volume are run simultaneously, which can
result in transaction collisions; you can end up with one command
succeeding and the others failing. Ideally, if you are running the volume
status command for monitoring, it's suggested to run it from only one node.
On Thu, Jul 20, 2017 at 3:54 PM, Paolo
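A small sketch of the suggestion above: run the monitoring query from a single designated node and serialise it, so two status commands can never hit the same volume at once. The lock file path is arbitrary and the volume name is taken from the thread:

#!/bin/bash
# Hypothetical monitoring wrapper; flock skips the run if another copy
# of this script is already querying the volume.
exec 9>/var/lock/gluster-monitor.lock
flock -n 9 || exit 0
gluster volume status vm-images-repo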