similar to: GFID is null after adding large amounts of data

Displaying 20 results from an estimated 400 matches similar to: "GFID is null after adding large amounts of data"

2017 Aug 29
0
GFID attr is missing after adding large amounts of data
This is strange; a couple of questions: 1. What volume type is this? What tuning have you done? gluster v info output would be helpful here. 2. How big are your bricks? 3. Can you write me a quick reproducer so I can try this in the lab? Is it just a single multi-TB file you are untarring, or many? If you give me the steps to repro, and I hit it, we can get a bug open. 4. Other than
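The diagnostics asked for above can be collected with standard commands. A minimal sketch, assuming a volume named myvol and a brick filesystem mounted at /bricks/brick1 (both placeholder names):

    # Volume type, replica/distribute layout and any tuned options
    gluster volume info myvol

    # Size and free space of the brick filesystem
    df -h /bricks/brick1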
2017 Aug 28
2
GFID attr is missing after adding large amounts of data
Hi Cluster Community, we are seeing some problems when adding multiple terabytes of data to a 2-node replicated GlusterFS installation. The version is 3.8.11 on CentOS 7. The machines are connected via 10Gbit LAN and are running 24/7. The OS is virtualized on VMware. After a restart of node-1 we see that the log files are growing to multiple gigabytes a day. Also, there seem to be problems
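Whether the GFID xattr is actually missing can be checked directly on a brick. A rough sketch, with the brick and file path as placeholders; trusted.* xattrs are generally only visible on the brick itself, so this is run as root on the brick server rather than on the FUSE mount:

    # Dump all xattrs of the file as stored on the brick; a healthy file
    # should show a 16-byte trusted.gfid value among them.
    getfattr -d -m . -e hex /bricks/brick1/path/to/file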
2017 Sep 01
1
GFID attr is missing after adding large amounts of data
I re-added gluster-users to get some more eyes on this. ----- Original Message ----- > From: "Christoph Schäbel" <christoph.schaebel at dc-square.de> > To: "Ben Turner" <bturner at redhat.com> > Sent: Wednesday, August 30, 2017 8:18:31 AM > Subject: Re: [Gluster-users] GFID attr is missing after adding large amounts of data > > Hello Ben, >
2017 Oct 04
2
Reading parquet files from R
Hi Carlos. spark_read_parquet is from sparklyr and needs an initialized spark context to read the parquet file. On Wed., Oct 4, 2017, 22:11, Carlos Ortega <cof at qualityexcellence.es> wrote: > Hi José Luis, > > Have you tried directly with "dplyr"?... > > spark_read_parquet >
2017 Oct 04
2
Reading parquet files from R
Hello everyone. I know that with sparkR or sparklyr I can easily read files in parquet format, but is there any way to read them without having to start spark? My situation is that I have some parquet files in s3 and I want to read them from a tiny amazon EC2 instance that I want to keep without installing spark on it. I am poking around the library https://github.com/cloudyr/aws.s3 and it
2018 May 02
0
Healing : No space left on device
Oh, and *there is* space on the device where the brick's data is located: /dev/mapper/fedora-home 942G 868G 74G 93% /export On 02/05/2018 at 11:49, Hoggins! wrote: > Hello list, > > I have an issue on my Gluster cluster. It is composed of two data nodes > and an arbiter for all my volumes. > > After having upgraded my bricks to gluster 3.12.9 (Fedora 27), this
2018 May 02
3
Healing : No space left on device
Hello list, I have an issue on my Gluster cluster. It is composed of two data nodes and an arbiter for all my volumes. After having upgraded my bricks to gluster 3.12.9 (Fedora 27), this is what I get: - on node 1, volumes won't start, and glusterd.log shows a lot of: [2018-05-02 09:46:06.267817] W [glusterd-locks.c:843:glusterd_mgmt_v3_unlock]
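One thing worth ruling out when ENOSPC appears while the filesystem still has free blocks is inode exhaustion; this is only a guess, not a confirmed cause here. A quick check on the brick filesystem shown in the df output above:

    # Compare block usage and inode usage; ENOSPC can be returned
    # when either one runs out.
    df -h /export
    df -i /export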
2018 Apr 09
0
New 3.12.7 possible split-brain on replica 3
Here are also the corresponding log entries from a gluster node's brick log file: [2018-04-09 06:58:47.363536] W [MSGID: 113093] [posix-gfid-path.c:84:posix_remove_gfid2path_xattr] 0-myvol-private-posix: removing gfid2path xattr failed on /data/myvol-private/brick/.glusterfs/12/67/126759f6-8364-453c-9a9c-d9ed39198b7a: key = trusted.gfid2path.2529bb66b56be110 [No data available] [2018-04-09
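The xattr named in the warning can be inspected directly on the brick. A small sketch using the .glusterfs path from the log line above, run as root on that brick server:

    # List the gfid2path xattrs currently present on the affected entry
    getfattr -d -m trusted.gfid2path -e text \
        /data/myvol-private/brick/.glusterfs/12/67/126759f6-8364-453c-9a9c-d9ed39198b7a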
2018 Apr 09
2
New 3.12.7 possible split-brain on replica 3
Hello, Last Friday I upgraded my GlusterFS 3.10.7 3-way replica (with arbiter) cluster to 3.12.7 and this morning I got a warning that 9 files on one of my volumes are not synced. Indeed, checking that volume with a "volume heal info" shows that the third node (the arbiter node) has 9 files to be healed, but they are not being healed automatically. All nodes were always online and there
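The heal status being described can be listed per brick with the heal subcommands. A short sketch, assuming the volume name myvol-private that appears in the brick log above:

    # Entries still pending heal, listed per brick
    gluster volume heal myvol-private info

    # Only the entries that are in genuine split-brain
    gluster volume heal myvol-private info split-brain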
2018 Apr 09
0
New 3.12.7 possible split-brain on replica 3
On 04/09/2018 05:40 PM, mabi wrote: > Again, thanks, that worked and I now have no more unsynced files. > > You mentioned that this bug has been fixed in 3.13; would it be possible to backport it to 3.12? I am asking because 3.13 is not a long-term release and as such I would not like to have to upgrade to 3.13. I don't think there will be another 3.12 release. Adding Karthik to see
2013 Feb 26
0
Replicated Volume Crashed
Hi, I have a gluster volume that consists of 22 bricks and includes a single folder with 3.6 million files. Yesterday the volume crashed and turned out to be completely unresponsive, and I was forced to perform a hard reboot on all gluster servers because they were so heavily overloaded that they could not execute a reboot command issued from the shell. Each gluster server has 12 CPU cores
2018 Apr 09
0
New 3.12.7 possible split-brain on replica 3
On 04/09/2018 04:36 PM, mabi wrote: > As was suggested to me in the past on this mailing list, I now ran a stat and getfattr on one of the problematic files on all nodes and, at the end, a stat on the fuse mount directly. The output is below: > > NODE1: > > STAT: > File:
2018 Apr 09
0
New 3.12.7 possible split-brain on replica 3
On 04/09/2018 05:09 PM, mabi wrote: > Thanks Ravi for your answer. > > Stupid question, but how do I delete the trusted.afr xattrs on this brick? > > And when you say "this brick", do you mean the brick on the arbiter node (node 3 in my case)? Sorry, I should have been clearer. Yes, the brick on the 3rd node. `setfattr -x trusted.afr.myvol-private-client-0
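The command above is truncated, so the file argument below is a placeholder, not the real path, and the exact trusted.afr.* key to remove should be taken from the getfattr output of that brick. A rough sketch of the step being described:

    # On the arbiter brick (node 3), remove the stale AFR changelog xattr
    # for the affected file, then ask the self-heal daemon to re-examine it.
    setfattr -x trusted.afr.myvol-private-client-0 /path/on/arbiter/brick/problematicfile
    gluster volume heal myvol-private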
2018 Apr 09
2
New 3.12.7 possible split-brain on replica 3
As was suggested to me in the past on this mailing list, I now ran a stat and getfattr on one of the problematic files on all nodes and, at the end, a stat on the fuse mount directly. The output is below: NODE1: STAT: File: '/data/myvol-private/brick/dir1/dir2/dir3/dir4/dir5/dir6/dir7/dir8/dir9/dir10/dir11/dir12_Archiv/azipfiledir.zip/OC_DEFAULT_MODULE/problematicfile' Size: 0 Blocks: 38
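For reference, the comparison being described can be scripted roughly as below; the relative file path and the FUSE mount point are placeholders, since the full path in the output above is site-specific:

    # Relative path of one problematic file under the brick root (placeholder)
    REL_PATH="dir1/dir2/problematicfile"

    # On each of the three nodes, against the local brick:
    stat "/data/myvol-private/brick/$REL_PATH"
    getfattr -d -m . -e hex "/data/myvol-private/brick/$REL_PATH"

    # And once through the FUSE mount, which also triggers a fresh lookup:
    stat "/mnt/myvol-private/$REL_PATH"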
2018 May 23
0
New 3.12.7 possible split-brain on replica 3
Hello, I just wanted to ask if you had time to look into this bug I am encountering and if there is anything else I can do. For now, in order to get rid of these 3 unsynced files, shall I use the same method that was suggested to me in this thread? Thanks, Mabi ------- Original Message ------- On May 17, 2018 11:07 PM, mabi <mabi at protonmail.ch> wrote: > > Hi Ravi,
2018 Apr 09
2
New 3.12.7 possible split-brain on replica 3
Again, thanks, that worked and I now have no more unsynced files. You mentioned that this bug has been fixed in 3.13; would it be possible to backport it to 3.12? I am asking because 3.13 is not a long-term release and as such I would not like to have to upgrade to 3.13. ------- Original Message ------- On April 9, 2018 1:46 PM, Ravishankar N <ravishankar at redhat.com> wrote: >
2018 May 23
1
New 3.12.7 possible split-brain on replica 3
On 05/23/2018 12:47 PM, mabi wrote: > Hello, > > I just wanted to ask if you had time to look into this bug I am encountering and if there is anything else I can do. > > For now, in order to get rid of these 3 unsynced files, shall I use the same method that was suggested to me in this thread? Sorry Mabi, I haven't had a chance to dig deeper into this. The workaround of
2018 Apr 09
2
New 3.12.7 possible split-brain on replica 3
Thanks Ravi for your answer. Stupid question, but how do I delete the trusted.afr xattrs on this brick? And when you say "this brick", do you mean the brick on the arbiter node (node 3 in my case)? ------- Original Message ------- On April 9, 2018 1:24 PM, Ravishankar N <ravishankar at redhat.com> wrote: > > On 04/09/2018 04:36 PM, mabi wrote: > > >
2018 May 17
2
New 3.12.7 possible split-brain on replica 3
Hi Ravi, Please find below the answers to your questions: 1) I have never touched the cluster.quorum-type option. Currently it is set as follows for this volume: Option Value ------ ----- cluster.quorum-type none 2) The .shareKey
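The current value of that option can be read back with 'gluster volume get'; whether it should be changed is a separate question that depends on the volume layout. A small sketch, assuming the myvol-private volume from this thread:

    # Show the currently effective quorum type for the volume
    gluster volume get myvol-private cluster.quorum-type

    # For replica 3 / arbiter setups the commonly recommended value is 'auto'
    gluster volume set myvol-private cluster.quorum-type auto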
2004 Aug 06
0
Best Wishes for 2002: a year of remembrance, mobilization, action, justice and serenity - Appeal for moral and financial support
Best Wishes for 2002: a year of remembrance, mobilization, action, justice and serenity - Appeal for moral and financial support ======================== Mr. Habib HAIBI, 7, Aguesseau St. 69007 LYON - France Tel. 00 33 4 72 73 19 08 - Fax 00 33 4 78 61 39 27 Email: haibi@free.fr http://haibi.free.fr I am qualified to express my New Year wishes to all the survivors