similar to: Three issues 3.2.4

Displaying 20 results from an estimated 11000 matches similar to: "Three issues 3.2.4"

2015 Aug 22
1
Configuration file not found when using non-standard installation path
Installing with: syslinux --directory otherdir -i my_unmounted_device will install the bootloader in the desired directory ("otherdir") under the root directory of the desired unmounted device ("my_unmounted_device"). All the corresponding syslinux-related files are located in the same installation directory. When booting this device, SYSLINUX fails to find a
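A minimal sketch of the layout being described (the device node and mount point are assumptions; the post only calls them "my_unmounted_device" and "otherdir"):

    # install the bootloader into a subdirectory of the unmounted device
    syslinux --directory otherdir --install /dev/sdb1

    # then keep the configuration and modules in that same directory
    mount /dev/sdb1 /mnt
    cp syslinux.cfg *.c32 /mnt/otherdir/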
2013 Dec 10
1
Error after crash of Virtual Machine during migration
Greetings, Legend: storage-gfs-3-prd - the first gluster. storage-1-saas - new gluster where "the first gluster" had to be migrated. storage-gfs-4-prd - the second gluster (which had to be migrated later). I've started command replace-brick: 'gluster volume replace-brick sa_bookshelf storage-gfs-3-prd:/ydp/shared storage-1-saas:/ydp/shared start' During that Virtual
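For context, the replace-brick workflow in the gluster 3.x series of that time went start, then status, then commit; a sketch using the names from the post (treat this as a general outline, not the poster's verified steps):

    gluster volume replace-brick sa_bookshelf storage-gfs-3-prd:/ydp/shared storage-1-saas:/ydp/shared start
    gluster volume replace-brick sa_bookshelf storage-gfs-3-prd:/ydp/shared storage-1-saas:/ydp/shared status
    gluster volume replace-brick sa_bookshelf storage-gfs-3-prd:/ydp/shared storage-1-saas:/ydp/shared commit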
2012 Jun 20
3
multiple ".." support
Hello HPA, I have a request. Since v4.04, SYSLINUX supports one ".." in relative paths. It also supports multiple "../", but only if the relative path ends with a specific directory, as in "../../otherdirectory/". Is it possible to extend this support to multiple "../" in SYSLINUX? BTW, ISOLINUX already supports this type of relative path. The rest of
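As an illustration of the kind of path the request covers, a config entry might look like this (label and file names are assumptions):

    LABEL rescue
        KERNEL ../../otherdirectory/vmlinuz
        APPEND initrd=../../otherdirectory/initrd.img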
2004 Dec 07
0
GFS 6.0.2-12
GFS 6.0.2-12 is now available for CentOS-3. This version has some small fixes but is mostly just to support the new CentOS-3 kernel. The files are available from my site here: http://bender.it.swin.edu.au/centos-3/RHGFS/ Tips for updating: Because the dependencies are a bit screwed (thanks to RH), it is probably easiest to do rpm -U
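The truncated tip presumably ends with upgrading all the GFS packages in one rpm transaction; a sketch with assumed package file names (not taken from the announcement):

    rpm -U GFS-6.0.2-12.*.rpm GFS-modules-6.0.2-12.*.rpm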
2017 Oct 06
0
Gluster geo replication volume is faulty
On 09/29/2017 09:30 PM, rick sanchez wrote: > I am trying to set up geo replication between two gluster volumes > > I have set up two replica 2 arbiter 1 volumes with 9 bricks > > [root at gfs1 ~]# gluster volume info > Volume Name: gfsvol > Type: Distributed-Replicate > Volume ID: c2fb4365-480b-4d37-8c7d-c3046bca7306 > Status: Started > Snapshot Count: 0 > Number
2018 Mar 13
0
Can't heal a volume: "Please check if all brick processes are running."
Hi, Maybe someone can point me to documentation or explain this? I can't find it myself. Do we have any other useful resources besides doc.gluster.org? As far as I can see, many gluster options are not described there, or there is no explanation of what they do... On 2018-03-12 15:58, Anatoliy Dmytriyev wrote: > Hello, > > We have a very fresh gluster 3.10.10 installation. > Our volume
2018 Mar 12
2
Can't heal a volume: "Please check if all brick processes are running."
Hello, We have a very fresh gluster 3.10.10 installation. Our volume is created as a distributed volume, 9 bricks, 96TB in total (87TB after 10% of gluster disk space reservation). For some reason I can't "heal" the volume: # gluster volume heal gv0 Launching heal operation to perform index self heal on volume gv0 has been unsuccessful on bricks that are down. Please check if all brick processes
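A minimal sketch of the checks usually suggested in reply to that message (volume name taken from the post); note that heal info itself only applies once there are replica or arbiter bricks to compare:

    # confirm every brick process is online and has a PID
    gluster volume status gv0

    # once bricks are back, list entries still pending heal (replicated volumes only)
    gluster volume heal gv0 info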
2013 Jan 02
0
Bug or strange behaviour of --output-prefix
Seems like what you really want is an --input-prefix parameter. You might also like a --create-output-directories option. In all cases except absolute paths, the input prefix must be assumed to be the current working directory. Therefore, any relative paths in input file names must be preserved on output to avoid collapsing multiple source directories into a single output directory, with
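A hypothetical invocation (file names assumed) showing why the poster argues relative paths have to be preserved on output:

    flac --output-prefix=encoded/ disc1/track01.wav disc2/track01.wav

If the directory part of each input were dropped, both files would map to encoded/track01.flac and collide; preserving it yields encoded/disc1/track01.flac and encoded/disc2/track01.flac.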
2018 Mar 13
0
Can't heal a volume: "Please check if all brick processes are running."
Can we add a smarter error message for this situation by checking volume type first? Cheers, Laura B On Wednesday, March 14, 2018, Karthik Subrahmanya <ksubrahm at redhat.com> wrote: > Hi Anatoliy, > > The heal command is basically used to heal any mismatching contents > between replica copies of the files. > For the command "gluster volume heal <volname>"
2018 Mar 14
0
Can't heal a volume: "Please check if all brick processes are running."
Hi Karthik, Thanks a lot for the explanation. Does it mean that distributed volume health can be checked only by the "gluster volume status" command? And one more question: cluster.min-free-disk is 10% by default. What kind of "side effects" can we face if this option is reduced to, for example, 5%? Could you point to any best practice document(s)? Regards, Anatoliy
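For reference, the option being asked about is set per volume; a hedged sketch (volume name taken from the thread, 5% is simply the value under discussion):

    # show the current value (defaults to 10%)
    gluster volume get gv0 cluster.min-free-disk

    # lower the reserve to 5%
    gluster volume set gv0 cluster.min-free-disk 5%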
2018 May 04
0
Crashing applications, RDMA_ERROR in logs
Hello gluster users and professionals, We are running gluster 3.10.10 distributed volume (9 nodes) using RDMA transport. From time to time applications crash with I/O errors (can't access file) and in the client logs we can see messages like: [2018-05-04 10:00:43.467490] W [MSGID: 114031] [client-rpc-fops.c:2640:client3_3_readdirp_cbk] 0-gv0-client-2: remote operation failed [Transport
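A couple of hedged checks that are often suggested when the RDMA transport misbehaves (the server name is hypothetical, and the TCP remount only applies if the volume was created with both transports):

    # confirm the transport type and per-brick RDMA ports
    gluster volume info gv0 | grep -i transport
    gluster volume status gv0

    # as a test, remount one client over TCP to see whether the errors follow the transport
    mount -t glusterfs -o transport=tcp server1:/gv0 /mnt/gv0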
2018 Mar 14
0
Can't heal a volume: "Please check if all brick processes are running."
On Wed, Mar 14, 2018 at 5:42 PM, Karthik Subrahmanya <ksubrahm at redhat.com> wrote: > > > On Wed, Mar 14, 2018 at 3:36 PM, Anatoliy Dmytriyev <tolid at tolid.eu.org> > wrote: > >> Hi Karthik, >> >> >> Thanks a lot for the explanation. >> >> Does it mean a distributed volume health can be checked only by "gluster >> volume
2017 Sep 29
1
Gluster geo replication volume is faulty
I am trying to set up geo replication between two gluster volumes. I have set up two replica 2 arbiter 1 volumes with 9 bricks [root at gfs1 ~]# gluster volume info Volume Name: gfsvol Type: Distributed-Replicate Volume ID: c2fb4365-480b-4d37-8c7d-c3046bca7306 Status: Started Snapshot Count: 0 Number of Bricks: 3 x (2 + 1) = 9 Transport-type: tcp Bricks: Brick1: gfs2:/gfs/brick1/gv0 Brick2:
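For context, a geo-replication session between a master and a slave volume is typically created and started roughly like this (the slave host and volume names are assumptions; the post only shows the master side):

    gluster volume geo-replication gfsvol slavehost::gfsvol_slave create push-pem
    gluster volume geo-replication gfsvol slavehost::gfsvol_slave start
    gluster volume geo-replication gfsvol slavehost::gfsvol_slave status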
2013 Dec 12
2
Size detection/repair does not work with zlib
Hi! Usually dovecot auto-detects or repairs the size of a maildir message, so I can place a message named "foo" in the cur directory and dovecot uses it. Now I tried the same with a zlib-compressed message, but here dovecot doesn't recognize/repair the size of the message. When I access this folder via IMAP the connection is disconnected and in the dovecot logs I see the following
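For context, delivery-time compression with dovecot's zlib plugin is usually enabled along these lines in dovecot 2.x (the values shown are assumptions, not taken from the report):

    # conf.d/10-mail.conf
    mail_plugins = $mail_plugins zlib

    # conf.d/90-plugin.conf
    plugin {
      zlib_save = gz
      zlib_save_level = 6
    }

The point of the report is that the automatic size detection/repair that works for a plain "foo" dropped into cur/ does not kick in for a compressed one.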
2023 Jul 04
1
remove_me files building up
Hi Strahil, We're using gluster to act as a share for an application to temporarily process and store files, before they're then archived off over night. The issue we're seeing isn't with the inodes running out of space, but with the actual disk space on the arb server running low. This is the df -h output for the bricks on the arb server: /dev/sdd1 15G 12G 3.3G 79%
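One hedged way to see how much of the arbiter's bricks the deferred shard deletions occupy (the brick path and the .shard/.remove_me location are assumptions about the layout, not quoted from the thread):

    # usage of the shard removal queue on one brick
    du -sh /path/to/brick/.shard/.remove_me

    # compare with overall brick filesystem usage
    df -h /path/to/brick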
2017 Apr 26
2
tempdir() may be deleted during long-running R session
On Tue, Apr 25, 2017 at 02:41:58PM +0000, Cook, Malcolm wrote: > Might this combination serve the purpose: > * R session keeps an open handle on the tempdir it creates, > * whatever tempdir harvesting cron job the user has be made sensitive enough not to delete open files (including open directories) Good suggestion but doesn't work with the (increasingly popular)
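For what the cron-job half of that suggestion might look like, a hedged sketch (the glob, the age threshold and the reliance on fuser are all assumptions):

    # reap week-old R session temp directories, skipping any a process still holds open
    for d in /tmp/Rtmp*; do
        [ -d "$d" ] || continue
        fuser -s "$d" 2>/dev/null && continue    # still in use: skip
        find "$d" -maxdepth 0 -mtime +7 -exec rm -rf {} +
    done

This only helps if the session really does keep its tempdir open, which is the first half of the suggestion.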
2017 Apr 26
0
tempdir() may be deleted during long-running R session
>>>>> <frederik at ofb.net> >>>>> on Tue, 25 Apr 2017 21:13:59 -0700 writes: > On Tue, Apr 25, 2017 at 02:41:58PM +0000, Cook, Malcolm wrote: >> Might this combination serve the purpose: >> * R session keeps an open handle on the tempdir it creates, >> * whatever tempdir harvesting cron job the user has be made
2018 Mar 14
2
Can't heal a volume: "Please check if all brick processes are running."
On Wed, Mar 14, 2018 at 3:36 PM, Anatoliy Dmytriyev <tolid at tolid.eu.org> wrote: > Hi Karthik, > > > Thanks a lot for the explanation. > > Does it mean a distributed volume health can be checked only by "gluster > volume status " command? > Yes. I am not aware of any other command which can give the status of plain distribute volume which is similar to
2018 Mar 05
0
tiering
Hi, There isn't a way to replace the failing tier brick through a single command, as we don't have support for replace, remove or add brick with tier. Once you bring the brick online (volume start force), the data in the brick will be rebuilt by the self-heal daemon (done because it's a replicated tier). But adding a brick will still not work. Else if you use the force option, it will work as
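The "volume start force" step mentioned above, plus a way to watch the self-heal daemon catching up afterwards (the volume name is hypothetical):

    gluster volume start tiervol force
    gluster volume heal tiervol info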
2023 Jun 30
1
remove_me files building up
Hi, We're running a cluster with two data nodes and one arbiter, and have sharding enabled. We had an issue a while back where one of the servers crashed; we got the server back up and running and ensured that all healing entries cleared, and also increased the server spec (CPU/Mem) as this seemed to be the potential cause. Since then, however, we've seen some strange behaviour,