similar to: Read-only export

Displaying 20 results from an estimated 20000 matches similar to: "Read-only export"

2017 Aug 18
1
Is transport=rdma tested with "stripe"?
On Wed, Aug 16, 2017 at 4:44 PM, Hatazaki, Takao <takao.hatazaki at hpe.com> wrote: >> Note that "stripe" is not tested much and practically unmaintained. > > Ah, this was what I suspected. Understood. I'll be happy with "shard". > > Having said that, "stripe" works fine with transport=tcp. The failure reproduces with just 2 RDMA servers
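Since "stripe" is unmaintained, the shard translator is the usual replacement for splitting large files across bricks. A minimal sketch, with hypothetical hosts server1/server2 and brick paths (64MB is the default shard block size; adjust as needed):

```shell
# Create a plain replicated volume, then enable sharding on it.
gluster volume create bigvol replica 2 \
    server1:/bricks/bigvol server2:/bricks/bigvol
gluster volume set bigvol features.shard on
gluster volume set bigvol features.shard-block-size 64MB
gluster volume start bigvol
```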
2013 Jul 07
1
Getting ERROR: parsing the volfile failed (No such file or directory) when starting glusterd on Fedora 19
I don't get this. I am using a freshly installed copy of Fedora 19 and starting up glusterd for the first time. The goal is to have a replicated directory on two systems. But for right now, I can't even start up the glusterd daemon right out of the box. Trying to follow the Quick Start directions at http://gluster.org/community/documentation/index.php/QuickStart is, well, challenging.
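For reference, the Quick Start's two-node replicated setup boils down to a handful of commands once glusterd is running on both machines. A sketch with assumed hostnames node1/node2 and brick path /data/brick1/gv0 (run on node1):

```shell
# Join the second node to the trusted pool.
gluster peer probe node2

# Create and start a 2-way replicated volume across both bricks.
gluster volume create gv0 replica 2 \
    node1:/data/brick1/gv0 node2:/data/brick1/gv0
gluster volume start gv0

# Mount it from any client via the native FUSE client.
mount -t glusterfs node1:/gv0 /mnt
```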
2017 Dec 13
1
'ERROR: parsing the volfile failed' on fresh install
Hi all, I'm trying out gluster by following the Quick Start guide on two fresh installs of Ubuntu 16.04. On one node I was able to install and start gluster just fine. On the other node I am running into the following: $ sudo service glusterd start Job for glusterd.service failed because the control process exited with error code. See "systemctl status glusterd.service" and
2017 Dec 15
0
'ERROR: parsing the volfile failed' on fresh install
The glusterd.vol file is installed by default with the glusterfs package, so I'm not sure how you ended up in a state where this file was missing (as per my initial investigation looking at the logs). What you could do is check whether this file is really missing by running "find / -iname glusterd.vol"; if it's indeed missing, copy it from one of the peer nodes and restart
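The suggested check and recovery, sketched as commands (peer1 is a placeholder for a healthy peer; the volfile path may vary by distro, /etc/glusterfs/glusterd.vol is common):

```shell
# Check whether the default management volfile is present anywhere.
find / -iname glusterd.vol

# If it is missing, copy it from a healthy peer and restart glusterd.
scp peer1:/etc/glusterfs/glusterd.vol /etc/glusterfs/glusterd.vol
systemctl restart glusterd
```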
2017 Aug 21
1
Glusterd not working with systemd in redhat 7
Hi! Please see below. Note that web1.dasilva.network is the address of the local machine where one of the bricks is installed and that tries to mount. [2017-08-20 20:30:40.359236] I [MSGID: 100030] [glusterfsd.c:2476:main] 0-/usr/sbin/glusterd: Started running /usr/sbin/glusterd version 3.11.2 (args: /usr/sbin/glusterd -p /var/run/glusterd.pid) [2017-08-20 20:30:40.973249] I [MSGID: 106478]
2017 Aug 16
0
Is transport=rdma tested with "stripe"?
> Note that "stripe" is not tested much and practically unmaintained. Ah, this was what I suspected. Understood. I'll be happy with "shard". Having said that, "stripe" works fine with transport=tcp. The failure reproduces with just 2 RDMA servers (with InfiniBand), one of those acts also as a client. I looked into logs. I paste lengthy logs below with
2012 Oct 05
0
No subject
# gluster --version glusterfs 3.3.1 built on Oct 11 2012 22:01:05 # gluster volume info Volume Name: gdata Type: Distribute Volume ID: eccc3a90-212d-4563-ae8d-10a77758738d Status: Started Number of Bricks: 3 Transport-type: tcp Bricks: Brick1: gluster-0-0:/mseas-data-0-0 Brick2: gluster-0-1:/mseas-data-0-1 Brick3: gluster-data:/data [root at mseas-data ~]# ps -ef | grep gluster root 2783
2018 May 15
0
glusterfs 3.13 repo unavailable and downgrade to 3.12.9 fails
You can still get them from https://buildlogs.centos.org/centos/7/storage/x86_64/gluster-3.13/ (I don't know how much longer they'll be there. I suggest you copy them if you think you're going to need them in the future.) On 05/15/2018 04:58 AM, Davide Obbi wrote: > hi, > > i noticed that this repo for glusterfs 3.13 does not exist anymore at: > >
2024 Jan 26
1
Gluster communication via TLS client problem
Hi Stefan, Does the combined glusterfs.ca include the client nodes' pem? This file needs to be placed on the client nodes as well. -- Aravinda Kadalu Technologies ---- On Fri, 26 Jan 2024 15:14:39 +0530 Stefan Kania <stefan at kania-online.de> wrote --- Hi to all, The system is running Debian 12 with Gluster 10. All systems are using the same versions. I try to encrypt the
2017 Sep 22
0
BUG: After stop and start wrong port is advertised
Hi Darrell, Thanks, for us it's really easy to reproduce atm. Each restart or stop/start is causing the issue atm over here. Atin will look into it on Monday fortunately :) Regards Jo -----Original message----- From: Darrell Budic <budic at onholyground.com> Sent: Fri 22-09-2017 17:24 Subject: Re: [Gluster-users] BUG: After stop and start wrong port is advertised To: Atin
2018 May 15
1
[External] Re: glusterfs 3.13 repo unavailable and downgrade to 3.12.9 fails
Thanks Kaleb, any chance I can get the node working after the downgrade? Thanks On Tue, May 15, 2018 at 2:02 PM, Kaleb S. KEITHLEY <kkeithle at redhat.com> wrote: > > You can still get them from > https://buildlogs.centos.org/centos/7/storage/x86_64/gluster-3.13/ > > (I don't know how much longer they'll be there. I suggest you copy them > if you think
2017 Jun 29
1
afr-self-heald.c:479:afr_shd_index_sweep
On 29/06/2017 16:27, Pranith Kumar Karampuri wrote: > > > On Thu, Jun 29, 2017 at 7:48 PM, Paolo Margara > <paolo.margara at polito.it <mailto:paolo.margara at polito.it>> wrote: > > Hi Pranith, > > I'm using this guide > https://github.com/nixpanic/glusterdocs/blob/f6d48dc17f2cb6ee4680e372520ec3358641b2bc/Upgrade-Guide/upgrade_to_3.8.md
2017 Jun 29
0
afr-self-heald.c:479:afr_shd_index_sweep
On Thu, Jun 29, 2017 at 7:48 PM, Paolo Margara <paolo.margara at polito.it> wrote: > Hi Pranith, > > I'm using this guide https://github.com/nixpanic/glusterdocs/blob/ > f6d48dc17f2cb6ee4680e372520ec3358641b2bc/Upgrade-Guide/upgrade_to_3.8.md > > Definitely my fault, but I think that it is better to specify somewhere that > restarting the service is not enough simply
2017 May 09
1
Empty info file preventing glusterd from starting
Hi Atin/Team, We are using gluster-3.7.6 with a setup of two bricks, and during a system restart I have seen that the glusterd daemon fails to start. While analyzing the logs from the etc-glusterfs.......log file I found the following: [2017-05-06 03:33:39.798087] I [MSGID: 100030] [glusterfsd.c:2348:main] 0-/usr/sbin/glusterd: Started running /usr/sbin/glusterd version
2018 May 15
1
[External] Re: glusterfs 3.13 repo unavailable and downgrade to 3.12.9 fails
On 05/15/2018 08:08 AM, Davide Obbi wrote: > Thanks Kaleb, > > any chance I can get the node working after the downgrade? > thanks Without knowing what doesn't work, I'll go out on a limb and guess that it's an op-version problem. Shut down your 3.13 nodes, change their op-version to one of the valid 3.12 op-versions (e.g. 31203) and restart. Then the 3.12 nodes should
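The op-version Kaleb refers to is stored per node in glusterd.info. A sketch of the suggested fix (31203 = 3.12.3, from the thread; the file path is the usual default and an assumption here):

```shell
# On each downgraded node, with glusterd stopped, lower the stored
# operating-version to a valid 3.12 value, then restart the daemon.
systemctl stop glusterd
sed -i 's/^operating-version=.*/operating-version=31203/' \
    /var/lib/glusterd/glusterd.info
systemctl start glusterd
```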
2017 Oct 05
0
Glusterd not working with systemd in redhat 7
So I have the root cause. Basically, as part of the patch we write the brickinfo->uuid into the brickinfo file only when there is a change in the volume. As per the brickinfo files you shared, the uuid was not saved because there was no new change in the volume, and hence the uuid was always NULL in the resolve brick, because of which glusterd went for local address resolution. Having this done with a
2024 Jan 26
1
Gluster communication via TLS client problem
Hi to all, The system is running Debian 12 with Gluster 10. All systems are using the same versions. I try to encrypt the communication between the peers and the clients via TLS. The encryption between the peers works, but when I try to mount the volume on the client I always get an error. What have I done? 1. all hosts and clients can resolve the name of all systems involved. 2. the
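Aravinda's question in the reply above points at the usual TLS setup: every machine (peers and clients) needs its own key/cert, and glusterfs.ca must be the concatenation of all certificates, distributed to every node including the clients. A sketch under those assumptions (hostnames and file list are illustrative):

```shell
# On each peer and each client: generate a key and self-signed cert.
openssl genrsa -out /etc/ssl/glusterfs.key 2048
openssl req -new -x509 -key /etc/ssl/glusterfs.key \
    -subj "/CN=$(hostname)" -days 365 -out /etc/ssl/glusterfs.pem

# Collect every machine's glusterfs.pem, then install the combined CA
# file on all of them (servers AND clients).
cat server1.pem server2.pem client1.pem > /etc/ssl/glusterfs.ca

# On the servers: enable TLS on the management path too.
touch /var/lib/glusterd/secure-access
```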
2017 Jun 29
0
afr-self-heald.c:479:afr_shd_index_sweep
Paolo, Which document did you follow for the upgrade? We can fix the documentation if there are any issues. On Thu, Jun 29, 2017 at 2:07 PM, Ravishankar N <ravishankar at redhat.com> wrote: > On 06/29/2017 01:08 PM, Paolo Margara wrote: > > Hi all, > > for the upgrade I followed this procedure: > > - put node in maintenance mode (ensure no client are active)
2017 Oct 04
2
Glusterd not working with systemd in redhat 7
Hello, it seems the problem still persists on 3.10.6. I have a 1 x (2 + 1) = 3 configuration. I upgraded the first server and then launched a reboot. Gluster is not starting. It seems that gluster starts before the network layer. Some logs here: Thanks [2017-10-04 15:33:00.506396] I [MSGID: 106143] [glusterd-pmap.c:277:pmap_registry_bind] 0-pmap: adding brick /opt/glusterfs/advdemo on port
2017 Jun 29
0
afr-self-heald.c:479:afr_shd_index_sweep
Hi all, for the upgrade I followed this procedure on each server, one at a time:
* put node in maintenance mode (ensure no clients are active)
* yum versionlock delete glusterfs*
* service glusterd stop
* yum update
* systemctl daemon-reload
* service glusterd start
* yum versionlock add glusterfs*
* gluster volume heal vm-images-repo full
* gluster volume heal vm-images-repo info
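The per-node upgrade steps quoted above, condensed as a script sketch (volume name vm-images-repo is from the thread; the pkill lines are an assumption, added because the rest of the thread notes that restarting glusterd alone does not restart the brick and self-heal processes):

```shell
yum versionlock delete 'glusterfs*'
systemctl stop glusterd
pkill glusterfs; pkill glusterfsd   # also stop brick/self-heal daemons
yum update
systemctl daemon-reload
systemctl start glusterd
yum versionlock add 'glusterfs*'
gluster volume heal vm-images-repo full
gluster volume heal vm-images-repo info
```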