Displaying 13 results from an estimated 13 matches similar to: "gluster peer probe"
2011 Sep 07
2
Gluster-users Digest, Vol 41, Issue 16
Hi Phil,
we had the same problem; try to compile with debug options.
Yes, this sounds strange, but it helps when you are using SLES: the
glusterd works fine and you can start to work with it.
Just put
export CFLAGS='-g3 -O0'
between %build and %configure in the glusterfs spec file.
But be warned: don't use it with important data, especially when you are
planning to use the replication feature,
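As a rough sketch of the edit described above (layout based on a typical
autotools-style spec; the exact glusterfs.spec may differ), the %build
section would end up looking something like:

  %build
  # debug build: no optimisation, full debug symbols; for troubleshooting only
  export CFLAGS='-g3 -O0'
  %configure
  make %{?_smp_mflags}

The package is then rebuilt as usual, e.g. with rpmbuild -ba glusterfs.spec.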
2011 Apr 20
1
add brick unsuccessful
Hi again,
I'm having trouble testing the add-brick feature. I'm using a
replica 2 setup with currently 2 nodes and am
trying to add 2 more. The command blocks for a bit until I get an "Add
Brick unsuccessful" message.
Looking at etc-glusterfs-glusterd.vol.log below, I can't find anything
useful:
[2011-04-20 12:55:06.944593] I
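For reference, a sketch of adding two more bricks to a replica 2 volume
(volume name, host names and brick paths below are placeholders, not taken
from the post; bricks have to be added in multiples of the replica count):

  gluster volume add-brick myvol server3:/export/brick1 server4:/export/brick1
  gluster volume rebalance myvol start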
2018 Jan 16
0
Using the host name of the volume, its related commands can become very slow
On Mon, Jan 15, 2018 at 6:30 PM, ?? <chenxi at shudun.com> wrote:
> When the volume is created using a host name, the related gluster commands
> can become very slow, for example create, start and stop volume and the
> NFS related commands. In some cases the command will return "Error :
> Request timed out",
> but if the volume is created using the IP address, all gluster
>
2018 Jan 15
2
Using the host name of the volume, its related commands can become very slow
When the volume is created using a host name, the related gluster commands can become very slow, for example create, start and stop volume and the NFS related commands. In some cases the command will return "Error : Request timed out",
but if the volume is created using the IP address, all gluster commands are normal.
I have configured /etc/hosts correctly, because SSH can normally use the
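To illustrate the two cases being compared (volume name, hosts and brick
paths are placeholders), the difference is only in how the bricks are
addressed at create time:

  # reported as slow: bricks addressed by host name
  gluster volume create gv0 node1.example.com:/data/brick node2.example.com:/data/brick
  # reported as normal: the same volume created with IP addresses
  gluster volume create gv0 192.0.2.11:/data/brick 192.0.2.12:/data/brick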
2012 Apr 23
5
'filesystem resize max' tries to use devid 1
Back story:
I started my pool with a 200 GB partition at the end of my drive (sdc5),
until I was able to clear out the data at the beginning of the drive.
When I was ready, I ran `btrfs dev add /dev/sdc4 /` then `btrfs dev
del /dev/sdc5 /`,
$ sudo btrfs fi resize max /
Resize '/' of 'max'
ERROR: unable to resize '/' - Invalid argument
in
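For context, btrfs also accepts an explicit device id in the resize
argument, which is the usual way around this error when the remaining
device is not devid 1; the id shown here is only an example and should
first be read from the filesystem show output:

  $ sudo btrfs filesystem show /
  $ sudo btrfs filesystem resize 2:max /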
2018 Mar 06
0
Fixing a rejected peer
On Tue, Mar 6, 2018 at 6:00 AM, Jamie Lawrence <jlawrence at squaretrade.com>
wrote:
> Hello,
>
> So I'm seeing a rejected peer with 3.12.6. This is with a replica 3 volume.
>
> It actually began as the same problem with a different peer (call it
> gluster-2); I noticed when I couldn't make a new volume. I compared
> /var/lib/glusterd between them, and
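A sketch of that kind of comparison (the rsync staging path is arbitrary;
gluster-2 is the peer named in the post):

  rsync -a gluster-2:/var/lib/glusterd/vols/ /tmp/gluster-2-vols/
  diff -r /var/lib/glusterd/vols/ /tmp/gluster-2-vols/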
2018 Feb 06
0
strange hostname issue on volume create command with famous Peer in Cluster state error message
I'm guessing there's something wrong w.r.t. address resolution on node 1.
From the logs it's quite clear to me that node 1 is unable to resolve the
address configured in /etc/hosts, whereas the other nodes do. Could you
paste the gluster peer status output from all the nodes?
Also, can you please check if you're able to ping "pri.ostechnix.lan" from
node1 only? Does
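The checks being asked for would look roughly like this when run on node1
(the host name is the one from the thread):

  gluster peer status
  ping -c 3 pri.ostechnix.lan
  getent hosts pri.ostechnix.lan    # shows which address the name actually resolves to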
2018 Mar 06
4
Fixing a rejected peer
Hello,
So I'm seeing a rejected peer with 3.12.6. This is with a replica 3 volume.
It actually began as the same problem with a different peer (call it gluster-2); I noticed when I couldn't make a new volume. I compared /var/lib/glusterd between them and found that somehow the options in one of the vols differed. (I suspect this was due to attempting to create the volume via the
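For readers landing here with the same state: the commonly documented
recovery for a rejected peer is sketched below. It wipes the rejected
peer's local glusterd state except its UUID, so verify it against the
Gluster documentation for your version before running it, and run it only
on the rejected peer:

  systemctl stop glusterd
  # keep glusterd.info (the peer's UUID), remove everything else
  find /var/lib/glusterd -mindepth 1 -maxdepth 1 ! -name glusterd.info -exec rm -rf {} +
  systemctl start glusterd
  gluster peer probe <healthy-peer>
  systemctl restart glusterd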
2011 Jul 11
0
Instability when using RDMA transport
I've run into a problem with Gluster stability with the RDMA transport. Below is a description of the environment, a simple script that can replicate the problem, and log files from my test system.
I can work around the problem by using the TCP transport over IPoIB, but would like some input on what may be making the RDMA transport fail in this case.
=====
Symptoms
=====
- Error from test
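A sketch of the workaround mentioned (host and volume names are
placeholders): the transport is chosen when the volume is created, so
falling back to TCP over IPoIB simply means creating the volume with the
tcp transport while the bricks are reached via the IPoIB addresses:

  # RDMA transport, the case that fails here
  gluster volume create rdmavol transport rdma ib-node1:/export/brick ib-node2:/export/brick
  # workaround: plain TCP over the IPoIB interfaces
  gluster volume create tcpvol transport tcp ib-node1:/export/brick ib-node2:/export/brick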
2018 Feb 06
5
strange hostname issue on volume create command with famous Peer in Cluster state error message
Hello,
I installed glusterfs version 3.11.3 on 3 nodes running Ubuntu 16.04. All machines have the same /etc/hosts.
node1 hostname: pri.ostechnix.lan
node2 hostname: sec.ostechnix.lan
node3 hostname: third.ostechnix.lan
51.15.77.14 pri.ostechnix.lan pri
51.15.90.60 sec.ostechnix.lan sec
163.172.151.120 third.ostechnix.lan third
volume create command is
root at
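The actual command is cut off above, but with the hosts listed a typical
sequence run from node1 would look roughly like this (volume name and
brick paths are placeholders):

  gluster peer probe sec.ostechnix.lan
  gluster peer probe third.ostechnix.lan
  gluster volume create gv0 replica 3 pri.ostechnix.lan:/gluster/brick \
      sec.ostechnix.lan:/gluster/brick third.ostechnix.lan:/gluster/brick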
2017 Oct 26
0
not healing one file
Hey Richard,
Could you share the following information, please?
1. gluster volume info <volname>
2. getfattr output of that file from all the bricks
getfattr -d -e hex -m . <brickpath/filepath>
3. glustershd & glfsheal logs
Regards,
Karthik
On Thu, Oct 26, 2017 at 10:21 AM, Amar Tumballi <atumball at redhat.com> wrote:
> On a side note, try recently released health
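Collecting the three items requested above would look roughly like this
(brick and file paths are placeholders; the log locations are the usual
defaults under /var/log/glusterfs and may differ on your installation):

  gluster volume info <volname>
  # run on every brick host, against that brick's copy of the file
  getfattr -d -e hex -m . <brickpath>/<filepath>
  ls -l /var/log/glusterfs/glustershd.log /var/log/glusterfs/glfsheal-<volname>.log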
2017 Oct 26
3
not healing one file
On a side note, try the recently released health report tool and see if it
diagnoses any issues in the setup. Currently you may have to run it on all
three machines.
On 26-Oct-2017 6:50 AM, "Amar Tumballi" <atumball at redhat.com> wrote:
> Thanks for this report. This week many of the developers are at Gluster
> Summit in Prague, will be checking this and respond next
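Assuming this refers to the gluster-health-report tool announced around
that time, installing and running it is roughly as follows (the pip package
name is an assumption; check the release announcement):

  sudo pip install gluster-health-report
  gluster-health-report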
2017 Oct 26
2
not healing one file
Hi Karthik,
thanks for taking a look at this. I haven't been working with gluster long
enough to make heads or tails of the logs. The logs are attached to
this mail and here is the other information:
# gluster volume info home
Volume Name: home
Type: Replicate
Volume ID: fe6218ae-f46b-42b3-a467-5fc6a36ad48a
Status: Started
Snapshot Count: 1
Number of Bricks: 1 x 3 = 3
Transport-type: tcp
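For the "one file not healing" symptom itself, the entries still pending
heal on this volume can be listed with the standard heal commands ("home"
is the volume shown above):

  gluster volume heal home info
  gluster volume heal home info split-brain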