Displaying 20 results from an estimated 2699 matches for "msgids".
2017 Oct 26
0
not healing one file
Hey Richard,
Could you share the following information, please?
1. gluster volume info <volname>
2. getfattr output of that file from all the bricks (a sample is shown below):
getfattr -d -e hex -m . <brickpath/filepath>
3. glustershd & glfsheal logs
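For reference, on a replica volume the xattr dump requested in step 2 usually looks something like the following (file name and values here are illustrative, not from this thread); a non-zero trusted.afr value marks pending changes blamed on the corresponding brick:
# getfattr -d -e hex -m . /bricks/brick1/some/file
# file: bricks/brick1/some/file
trusted.afr.home-client-0=0x000000000000000000000000
trusted.afr.home-client-2=0x000000020000000100000000
trusted.gfid=0x1a2b3c4d5e6f47a8b9c0d1e2f3a4b5c6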
Regards,
Karthik
On Thu, Oct 26, 2017 at 10:21 AM, Amar Tumballi <atumball at redhat.com> wrote:
> On a side note, try recently released health
2017 Oct 26
3
not healing one file
On a side note, try the recently released health report tool and see if it
diagnoses any issues in your setup. Currently you may have to run it on all
three machines.
On 26-Oct-2017 6:50 AM, "Amar Tumballi" <atumball at redhat.com> wrote:
> Thanks for this report. This week many of the developers are at Gluster
> Summit in Prague; we will be checking this and respond next
2017 Oct 26
2
not healing one file
Hi Karthik,
thanks for taking a look at this. I haven't been working with gluster long
enough to make heads or tails of the logs. The logs are attached to
this mail and here is the other information:
# gluster volume info home
Volume Name: home
Type: Replicate
Volume ID: fe6218ae-f46b-42b3-a467-5fc6a36ad48a
Status: Started
Snapshot Count: 1
Number of Bricks: 1 x 3 = 3
Transport-type: tcp
2018 Jan 14
0
Volume cannot write data if quota limits the volume capacity and the volume is mounted on the node itself, on the arm64 (aarch64) architecture
Thanks for reading this email. I found a problem while using Glusterfs.
First, I created a Distributed Dispersed volume on three nodes and limited
the volume capacity with the quota command; this volume is auto-mounted on
/run/gluster/VOLUME_NAME. The volume can be read and written normally.
Afterwards, I manually mounted the volume at another path to provide data
storage for the SAMBA and iSCSI services, after
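For context, the quota and mount setup described above is normally done with commands along these lines (volume name, limit and paths are placeholders):
# gluster volume quota VOLUME_NAME enable
# gluster volume quota VOLUME_NAME limit-usage / 100GB
# mount -t glusterfs localhost:/VOLUME_NAME /mnt/export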
2017 Oct 19
3
gluster tiering errors
All,
I am new to gluster and have some questions/concerns about some tiering
errors that I see in the log files.
OS: CentOS 7.3.1611
Gluster version: 3.10.5
Samba version: 4.6.2
I see the following (scrubbed):
Node 1 /var/log/glusterfs/tier/<vol>/tierd.log:
[2017-10-19 17:52:07.519614] I [MSGID: 109038]
[tier.c:1169:tier_migrate_using_query_file]
0-<vol>-tier-dht: Promotion failed
2017 Nov 30
2
Problems joining new gluster 3.10 nodes to existing 3.8
Hi,
I have a problem joining four Gluster 3.10 nodes to an existing
Gluster 3.8 cluster. My understanding is that this should work and not be
too much of a problem.
Peer probe is successful but the node is rejected:
gluster> peer detach elkpinfglt07
peer detach: success
gluster> peer probe elkpinfglt07
peer probe: success.
gluster> peer status
Number of Peers: 6
Hostname: elkpinfglt02
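A common first check when a probed peer ends up rejected in a mixed 3.8/3.10 cluster is the cluster op-version on each node (these commands are a suggestion, not from this thread):
# gluster volume get all cluster.op-version
# grep operating-version /var/lib/glusterd/glusterd.info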
2020 Apr 10
0
doveadm backup from gmail with imapc
On Thu, 2020-04-09 at 13:48 +0300, Sami Ketola wrote:
>
> >
> > On 31 Mar 2020, at 23.18, Ben Mulvihill <ben.mulvihill at gmail.com>
> > wrote:
> >
> > Hello again,
> >
> > I am still stuck I'm afraid.
> >
> > I now have doveadm backup working perfectly from
> > a small gmail mailbox (a few hundred messages), but
>
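For readers following along, a gmail-to-local migration with imapc is typically driven like this (hostname, account names and password are placeholders; the full set of imapc settings is in the Dovecot migration documentation):
doveadm -o imapc_host=imap.gmail.com -o imapc_user=user@gmail.com \
        -o imapc_password=APP_PASSWORD -o imapc_ssl=imaps \
        backup -R -u localuser imapc: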
2018 Jan 15
2
Using the host name of the volume, its related commands can become very slow
When the volume is created using its host name, the related gluster commands
(for example create, start and stop volume, and NFS-related commands) can
become very slow, and in some cases a command will return Error : Request
timed out.
But if the volume is created using IP addresses, all gluster commands behave
normally.
I have configured /etc/hosts correctly, because SSH can normally use the
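A quick way to rule name resolution in or out as the bottleneck is to time a lookup on each node (hostname is hypothetical):
# time getent hosts node1.example.com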
2017 Oct 22
0
gluster tiering errors
Herb,
What are the high and low watermarks for the tier set at ?
# gluster volume get <vol> cluster.watermark-hi
# gluster volume get <vol> cluster.watermark-low
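If the watermarks turn out to be set unsuitably, they can be changed with volume set; the values below are only an example:
# gluster volume set <vol> cluster.watermark-hi 90
# gluster volume set <vol> cluster.watermark-low 75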
What is the size of the file that failed to migrate as per the following
tierd log:
[2017-10-19 17:52:07.519614] I [MSGID: 109038]
[tier.c:1169:tier_migrate_using_query_file] 0-<vol>-tier-dht: Promotion
failed for
2018 Feb 07
2
Ip based peer probe volume create error
On 8/02/2018 4:45 AM, Gaurav Yadav wrote:
> After seeing the command history, I could see that you have 3 nodes, and
> firstly you are peer probing 51.15.90.60 and 163.172.151.120 from
> 51.15.77.14.
> So here itself you have a 3 node cluster; after all this you are going
> on node 2 and again peer probing 51.15.77.14.
> Ideally it should work with the above steps, but due to some
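The probe sequence described above corresponds roughly to this (IPs taken from the thread):
# on 51.15.77.14:
gluster peer probe 51.15.90.60
gluster peer probe 163.172.151.120
# on node 2 (51.15.90.60):
gluster peer probe 51.15.77.14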
2018 Feb 06
5
strange hostname issue on volume create command with famous Peer in Cluster state error message
Hello,
I installed glusterfs 3.11.3 on 3 nodes, each an Ubuntu 16.04 machine. All machines have the same /etc/hosts.
node1 hostname
pri.ostechnix.lan
node2 hostname
sec.ostechnix.lan
node3 hostname
third.ostechnix.lan
51.15.77.14 pri.ostechnix.lan pri
51.15.90.60 sec.ostechnix.lan sec
163.172.151.120 third.ostechnix.lan third
The volume create command is:
root at
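The command itself is cut off above; with these hostnames a replica 3 create would typically look like this (brick paths are hypothetical):
# gluster volume create testvol replica 3 \
    pri.ostechnix.lan:/bricks/brick1 \
    sec.ostechnix.lan:/bricks/brick1 \
    third.ostechnix.lan:/bricks/brick1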
2020 Apr 09
2
doveadm backup from gmail with imapc
> On 31 Mar 2020, at 23.18, Ben Mulvihill <ben.mulvihill at gmail.com> wrote:
>
> Hello again,
>
> I am still stuck I'm afraid.
>
> I now have doveadm backup working perfectly from
> a small gmail mailbox (a few hundred messages), but
> when I try the same configuration (apart from usernames
> and passwords obviously) with a large gmail mailbox
>
2018 Feb 26
1
Problems with write-behind with large files on Gluster 3.8.4
Hello,
I'm having problems when write-behind is enabled on Gluster 3.8.4.
I have 2 Gluster servers each with a single brick that is mirrored between
them. The code causing these issues reads two data files each approx.
128G in size. It opens a third file, mmap()'s that file, and
subsequently reads and writes to it. The third file, on successful runs
(without write-behind enabled)
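As a diagnostic step, write-behind can be toggled per volume, which is one way to confirm it is the trigger (volume name is a placeholder):
# gluster volume set <vol> performance.write-behind off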
2017 Oct 22
1
gluster tiering errors
There are several "no space left on device" messages. I would first check
that free disk space is available for the volume.
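Free space on the bricks can be checked with, for example:
# df -h /path/to/brick
# gluster volume status <vol> detail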
On Oct 22, 2017 18:42, "Milind Changire" <mchangir at redhat.com> wrote:
> Herb,
> What are the high and low watermarks for the tier set at ?
>
> # gluster volume get <vol> cluster.watermark-hi
>
> # gluster volume get
2009 Jul 15
0
[PATCH] Make Perl strings translatable
This patch makes Perl strings translatable. The Perl strings end up
in the PO files as usual. It does not touch the embedded POD.
Internationalizing the Perl strings was pleasantly simple. Just add:
use Locale::TextDomain 'libguestfs';
at the top of any *.pl or *.pm file. Then for each string in the file
that you want to be translatable you place TWO underscores before it:
-
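As a minimal illustration of the pattern being described (the strings here are hypothetical, not from the patch):
use Locale::TextDomain 'libguestfs';

print __"unknown option\n";                     # simple translatable string
print __x("cannot open {file}\n", file => $f);  # translatable with interpolation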
2018 Mar 21
2
Brick process not starting after reinstall
Hi all,
our systems have suffered a host failure in a replica three setup.
The host needed a complete reinstall. I followed the RH guide to
'replace a host with the same hostname'
(https://access.redhat.com/documentation/en-us/red_hat_gluster_storage/3/html/administration_guide/sect-replacing_hosts).
The machine has the same OS (CentOS 7). The new machine got a minor
version number newer
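For anyone following the same guide, the key step after the reinstall is restoring the failed host's old UUID before glusterd rejoins the pool, roughly (UUID and peer name are placeholders):
systemctl stop glusterd
vi /var/lib/glusterd/glusterd.info    # set UUID=<old uuid of the failed host>
systemctl start glusterd
gluster volume sync <healthy-peer> all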
2018 Apr 09
2
Gluster cluster on two networks
Hi all!
I have set up a replicated/distributed gluster cluster, 2 x (2 + 1),
with CentOS 7 and gluster version 3.12.6 on the servers.
All machines have two network interfaces and connected to two different networks,
10.10.0.0/16 (with hostnames in /etc/hosts, gluster version 3.12.6)
192.168.67.0/24 (with ldap, gluster version 3.13.1)
Gluster cluster was created on the 10.10.0.0/16 net, gluster peer
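One detail worth noting in such a setup: clients can be pointed at fallback servers on either network at mount time, e.g. (hostnames hypothetical):
mount -t glusterfs -o backup-volfile-servers=gl02:gl03 gl01:/volname /mnt/gluster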
2018 Jan 16
0
Using the host name of the volume, its related commands can become very slow
On Mon, Jan 15, 2018 at 6:30 PM, ?? <chenxi at shudun.com> wrote:
> When the volume is created using host names, its related gluster commands
> can become very slow, for example create, start and stop volume, and
> NFS-related commands, and in some cases the command will return Error :
> Request timed out.
> But if the volume is created using IP addresses, all gluster
>
2017 Dec 01
0
Problems joining new gluster 3.10 nodes to existing 3.8
On Fri, Dec 1, 2017 at 1:55 AM, Ziemowit Pierzycki <ziemowit at pierzycki.com>
wrote:
> Hi,
>
> I have a problem joining four Gluster 3.10 nodes to an existing
> Gluster 3.8 cluster. My understanding is that this should work and not be
> too much of a problem.
>
> Peer probe is successful but the node is rejected:
>
> gluster> peer detach elkpinfglt07
> peer
2018 Feb 06
0
strange hostname issue on volume create command with famous Peer in Cluster state error message
I'm guessing there's something wrong w.r.t address resolution on node 1.
From the logs it's quite clear to me that node 1 is unable to resolve the
address configured in /etc/hosts whereas the other nodes do. Could you
paste the gluster peer status output from all the nodes?
Also can you please check if you're able to ping "pri.ostechnix.lan" from
node1 only? Does
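The checks suggested above boil down to something like this on node1:
# ping -c 1 pri.ostechnix.lan
# getent hosts pri.ostechnix.lan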