
Displaying 20 results from an estimated 300 matches similar to: '"file changed as we read it" message during tar file creation on GlusterFS'

2017 Dec 29
2
"file changed as we read it" message during tar file creation on GlusterFS
Hi Mauro, What version of Gluster are you running and what is your volume configuration? IIRC, this was seen because of mismatches in the ctime returned to the client. I don't think there were issues with the files but I will leave it to Ravi and Raghavendra to comment. Regards, Nithya On 29 December 2017 at 04:10, Mauro Tridici <mauro.tridici at cmcc.it> wrote: > > Hi All,
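As context for the ctime explanation above, here is a minimal sketch of the check GNU tar performs (the file path is hypothetical): tar stats a file before archiving it and again after reading it, and warns if the status change time differs. On GlusterFS, the two stat calls may be answered by different bricks whose ctimes differ slightly, so the warning can fire even though the file content never changed.

    f=/tier2/some/file                   # hypothetical path on the gluster mount
    before=$(stat -c %Z "$f")            # ctime before the read
    cat "$f" > /dev/null                 # read the file, as tar would
    after=$(stat -c %Z "$f")             # ctime after the read
    [ "$before" != "$after" ] && echo "tar: $f: file changed as we read it"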
2017 Dec 28
0
"file changed as we read it" message during tar file creation on GlusterFS
Hi All, has anyone had the same experience? Could you provide me some information about this error? It happens only on the GlusterFS file system. Thank you, Mauro > On 20 Dec 2017, at 16:57, Mauro Tridici <mauro.tridici at cmcc.it> wrote: > > > Dear Users, > > I'm experiencing a random problem (a "file changed as we read it" error) during tar files
2018 Jan 02
2
"file changed as we read it" message during tar file creation on GlusterFS
Hi All, any news about this issue? Can I ignore this kind of error message, or do I have to do something to correct it? Thank you in advance, and sorry for my insistence. Regards, Mauro > On 29 Dec 2017, at 11:45, Mauro Tridici <mauro.tridici at cmcc.it> wrote: > > > Hi Nithya, > > thank you very much for your support, and sorry for the late reply. > Below
2017 Dec 29
0
"file changed as we read it" message during tar file creation on GlusterFS
Hi Nithya, thank you very much for your support, and sorry for the late reply. Below you can find the output of the "gluster volume info tier2" command and the gluster software stack version:

gluster volume info

Volume Name: tier2
Type: Distributed-Disperse
Volume ID: a28d88c5-3295-4e35-98d4-210b3af9358c
Status: Started
Snapshot Count: 0
Number of Bricks: 6 x (4 + 2) = 36
Transport-type: tcp
Bricks:
2018 Jan 02
0
"file changed as we read it" message during tar file creation on GlusterFS
I think it is safe to ignore it. The problem exists due to the minor difference in file time stamps on the backend bricks of the same subvolume (for a given file); during the course of the tar, the timestamp can be served from different bricks, causing it to complain. The ctime xlator[1] feature, once ready, should fix this issue by storing time stamps as xattrs on the bricks, i.e. all bricks
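To see the mismatch described here, one can stat the backing copy of the same file on every local brick and compare the status change times. A toy sketch, to be run on each storage server; the brick roots and the relative file path are assumptions based on the volume info above:

    rel=some/file                            # path relative to the brick root (hypothetical)
    for b in /gluster/mnt{1..6}/brick; do
        stat -c '%n %Z' "$b/$rel" 2>/dev/null    # ctime of this brick's copy
    done
    # Differing %Z values across the copies are enough to trigger tar's warning.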
2018 Jan 02
1
"file changed as we read it" message during tar file creation on GlusterFS
Hi Ravi, thank you very much for your support and explanation. If I understand correctly, the ctime xlator feature is not present in the current gluster package but will be in a future release, right? Thank you again, Mauro > On 02 Jan 2018, at 12:53, Ravishankar N <ravishankar at redhat.com> wrote: > > I think it is safe to ignore it. The problem exists due to the
2007 Dec 07
1
Adding a subset to a glm messes up factors?
Hi everyone, I have a problem with running a glm using a subset of my data. Whenever I choose a subset, the factors aren't shown in the summary (as if the variable were continuous). If I don't use subsets, then all the factors are shown. I have copied the output from summary for both cases. Thanks for the help, Muri > model <- glm(log(cpue) ~ year, family = gaussian) Call: glm(formula =
2012 Sep 15
4
how to view only readings of a selected data from a column while the other columns remain
Hi Friends, I am new here and have a problem:

  Year Market Winner    BID
1 1990    ABC  Apple 0.1260
2 1990    ABC  Apple 0.1395
3 1990    EFG   Pear 0.1350
4 1991    EFG  Apple 0.1113
5 1991    EFG Orange 0.1094

For each year and separately for the two
2009 May 04
1
how to change nlme() contrast parametrization?
How can I set the nlme() function to return the answer without the intercept parametrization?

library(nlme)
Soybean[1:3, ]
(fm1Soy.lis <- nlsList(weight ~ SSlogis(Time, Asym, xmid, scal), data = Soybean))
(fm1Soy.nlme <- nlme(fm1Soy.lis))
fm2Soy.nlme <- update(fm1Soy.nlme,
2017 Sep 18
6
how many hosts could be down in a 12x(4+2) distributed dispersed volume?
Dear All, I just implemented a (6x(4+2)) DISTRIBUTED DISPERSED gluster (v.3.10) volume based on the following hardware: - 3 gluster servers (each server with 2 CPUs of 10 cores, 64GB RAM, 12 SAS 12Gb/s hard disks, 10GbE storage network). Now, we need to add 3 new servers with the same hardware configuration while respecting the current volume topology. If I'm right, we will obtain a DISTRIBUTED
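For reference, a hypothetical create command for the 6 x (4+2) layout described above, with two bricks of every subvolume per server. The volume name, hostnames, and brick paths are assumptions, and gluster asks for "force" when bricks of the same subvolume share a server:

    gluster volume create tier2 disperse-data 4 redundancy 2 transport tcp \
        s0{1..3}:/gluster/mnt{1,2}/brick \
        s0{1..3}:/gluster/mnt{3,4}/brick \
        s0{1..3}:/gluster/mnt{5,6}/brick \
        s0{1..3}:/gluster/mnt{7,8}/brick \
        s0{1..3}:/gluster/mnt{9,10}/brick \
        s0{1..3}:/gluster/mnt{11,12}/brick force

Each line's brace expansion yields the six bricks of one (4+2) subvolume, so every subvolume ends up with two bricks on each of the three servers.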
2017 Sep 18
2
how to calculate the ideal value for client.event-threads, server.event-threads and performance.io-thread-count?
Dear All, I just implemented a (6x(4+2)) DISTRIBUTED DISPERSED gluster (v.3.10) volume based on the following hardware: - 3 gluster servers (each server with 2 CPUs of 10 cores, 64GB RAM, 12 SAS 12Gb/s hard disks, 10GbE storage network). Is there a way to determine the ideal value for client.event-threads, server.event-threads and performance.io-thread-count? Thank you in advance, Mauro Tridici
2017 Sep 20
0
how many hosts could be down in a 12x(4+2) distributed dispersed volume?
Hi Mauro Tridici, From the information provided, it appears that you have placed 2 bricks of a subvolume on one host. Please confirm. The number of hosts that could go down without losing access to data can be derived from the brick configuration/distribution. Please let us know the brick distribution plan. Regards, Sunil kumar Acharya Senior Software Engineer Red Hat
2017 Sep 20
0
how many hosts could be down in a 12x(4+2) distributed dispersed volume?
After adding 3 more nodes, you will have 6 nodes and 2 HDs on each node. It depends on the way you are going to add the new bricks to the existing volume 'vol'. I think you should remember that in a given EC subvolume of 4+2, at any point in time 2 bricks can be down. When you grow 6 x (4+2) to 12 x (4+2), you have to provide the paths of the bricks you want to add, as in the sketch below. Suppose you want to add 6
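Continuing that thought, a hypothetical add-brick invocation that extends 'vol' by one (4+2) subvolume laid across the three new servers. Server names and brick paths are assumptions; disperse volumes must grow in multiples of the subvolume size (here 6 bricks), and gluster may require appending "force" since two bricks of the subvolume share a server:

    gluster volume add-brick vol \
        s4:/gluster/mnt1/brick s4:/gluster/mnt2/brick \
        s5:/gluster/mnt1/brick s5:/gluster/mnt2/brick \
        s6:/gluster/mnt1/brick s6:/gluster/mnt2/brick

The order of the brick arguments determines subvolume membership: these six bricks form one new (4+2) subvolume with two bricks on each new server.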
2017 Sep 20
0
how to calculate the ideal value for client.event-threads, server.event-threads and performance.io-thread-count?
The defaults should be fine at your size. In big clusters I usually set event-threads to 4. On Mon, Sep 18, 2017 at 10:39 PM, Mauro Tridici <mauro.tridici at cmcc.it> wrote: > > Dear All, > > I just implemented a (6x(4+2)) DISTRIBUTED DISPERSED gluster (v.3.10) volume > based on the following hardware: > > - 3 gluster servers (each server with 2 CPUs of 10 cores, 64GB RAM, 12
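For the record, the tuning mentioned above would be applied with commands like these; the volume name is an assumption, while client.event-threads and server.event-threads are the option names discussed in this thread:

    gluster volume set tier2 client.event-threads 4
    gluster volume set tier2 server.event-threads 4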
2017 Sep 20
0
how many hosts could be down in a 12x(4+2) distributed dispersed volume?
If you add bricks to the existing volume, one host could be down in each three-host group. If you recreate the volume with one brick on each host, then two random hosts can be tolerated. Assume s1, s2, s3 are the current servers and you add s4, s5, s6 and extend the volume. If any two servers in the same group go down, you lose data. If you choose two random hosts, the probability you lose data will be 20% in this
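A toy check of that 20% figure, under the assumption (the sentence above is truncated) that it means two random failures both landing in one given three-host group, say s1-s3, which would take four bricks of that group's subvolumes offline, exceeding the 2-brick redundancy:

    bad=0; total=0
    for ((i=1; i<=6; i++)); do
        for ((j=i+1; j<=6; j++)); do         # every pair of the 6 hosts
            total=$((total+1))
            [ "$j" -le 3 ] && bad=$((bad+1)) # both failed hosts among s1..s3
        done
    done
    echo "$bad of $total host pairs hit that group"   # prints 3 of 15, i.e. 20%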
2017 Sep 26
2
df command shows transport endpoint mount error on gluster client v.3.10.5 + core dump
Hi Ashish, attached you can find the gdb output (with the bt and thread outputs) and the complete log file up to the moment the crash happened. Thank you for your support. Mauro Tridici > On 26 Sep 2017, at 10:11, Ashish Pandey <aspandey at redhat.com> wrote: > > Hi, > > The following are the commands to get the debug info for gluster - > gdb
2017 Sep 27
0
df command shows transport endpoint mount error on gluster client v.3.10.5 + core dump
Hi Ashish, I'm sorry to disturb you again, but I would like to know if you received the log files correctly. Thank you, Mauro Tridici > On 26 Sep 2017, at 10:19, Mauro Tridici <mauro.tridici at cmcc.it> wrote: > > > Hi Ashish, > > attached you can find the gdb output (with the bt and thread outputs) and the complete log file up to the moment the crash happened.
2017 Sep 26
2
df command shows transport endpoint mount error on gluster client v.3.10.5 + core dump
Hi Ashish, thank you for your answer. Do you need the complete client log file only, or something else in particular? Unfortunately, I have never used the "bt" command. Could you please provide me with a usage example? I will provide all the logs you need. Thank you again, Mauro > On 26 Sep 2017, at 09:30, Ashish Pandey <aspandey at redhat.com> wrote: > > Hi Mauro, > >
2017 Sep 26
0
df command shows transport endpoint mount error on gluster client v.3.10.5 + core dump
Hi, the following are the commands to get the debug info for gluster:

gdb /usr/local/sbin/glusterfs <core file path>

Then, at the gdb prompt, type bt or backtrace:

(gdb) bt

You can also provide the output of "thread apply all bt":

(gdb) thread apply all bt

The above commands should be executed on the client node on which you mounted the gluster volume and on which it crashed.
2017 Sep 25
2
df command shows transport endpoint mount error on gluster client v.3.10.5 + core dump
Dear Gluster Users, I implemented a distributed disperse 6x(4+2) gluster (v.3.10.5) volume with the following options:

[root@s01 tier2]# gluster volume info

Volume Name: tier2
Type: Distributed-Disperse
Volume ID: a28d88c5-3295-4e35-98d4-210b3af9358c
Status: Started
Snapshot Count: 0
Number of Bricks: 6 x (4 + 2) = 36
Transport-type: tcp
Bricks:
Brick1: s01-stg:/gluster/mnt1/brick
Brick2: