
Displaying 20 results from an estimated 8000 matches similar to: "how many hosts could be down in a 12x(4+2) distributed dispersed volume?"

2017 Sep 20
0
how many hosts could be down in a 12x(4+2) distributed dispersed volume?
Hi Mauro Tridici, From the information provided it appears that you have placed 2 bricks of a subvolume on one host. Please confirm. The number of hosts that could go down without losing access to data can be derived based on the brick configuration/distribution. Please let us know the brick distribution plan. Regards, Sunil kumar Acharya Senior Software Engineer Red Hat
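A quick way to check the brick-to-host mapping is to list the bricks from volume info; a minimal sketch, assuming the volume name tier2 used later in these threads (for a disperse volume, each consecutive group of 6 bricks forms one 4+2 subvolume):

    gluster volume info tier2 | grep '^Brick[0-9]'

Counting how many of the 6 bricks of each subvolume live on the same host tells you how many host failures that subvolume can survive, given its redundancy of 2 bricks.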
2017 Sep 20
0
how many hosts could be down in a 12x(4+2) distributed dispersed volume?
If you add bricks to the existing volume, one host could be down in each three-host group. If you recreate the volume with one brick on each host, then two random hosts can be tolerated. Assume s1,s2,s3 are the current servers and you add s4,s5,s6 and extend the volume. If any two servers in the same group go down you lose data. If you choose two random hosts, the probability you lose data will be 20% in this
2017 Sep 20
0
how many hosts could be down in a 12x(4+2) distributed dispersed volume?
After adding 3 more nodes you will have 6 nodes and 2 HDs on each node. It depends on the way you are going to add new bricks to the existing volume 'vol'. I think you should remember that in a given EC subvolume of 4+2, at any point in time 2 bricks could be down. When you grow from 6 x (4+2) to 12 x (4+2) you have to provide the paths of the bricks you want to add. Suppose you want to add 6
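For reference, growing the volume by one more 4+2 subvolume means passing 6 new brick paths to add-brick; a minimal sketch with hypothetical hosts s04-s06 and mount points (bricks must be added in multiples of 6 for this layout, and placing 2 bricks per host keeps the one-host-per-group tolerance discussed above):

    gluster volume add-brick vol \
        s04-stg:/gluster/mnt1/brick s04-stg:/gluster/mnt2/brick \
        s05-stg:/gluster/mnt1/brick s05-stg:/gluster/mnt2/brick \
        s06-stg:/gluster/mnt1/brick s06-stg:/gluster/mnt2/brick
    # optionally spread existing data onto the new subvolume
    gluster volume rebalance vol start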
2017 Sep 18
2
how to calculate the ideal value for client.event-threads, server.event-threads and performance.io-thread-count?
Dear All, I just implemented a (6x(4+2)) DISTRIBUTED DISPERSED gluster (v.3.10) volume based on the following hardware: - 3 gluster servers (each server with 2 CPUs of 10 cores, 64GB RAM, 12 SAS 12Gb/s hard disks, 10GbE storage network) Is there a way to determine the ideal value for client.event-threads, server.event-threads and performance.io-thread-count? Thank you in advance, Mauro Tridici
2017 Sep 20
0
how to calculate the ideal value for client.event-threads, server.event-threads and performance.io-thread-count?
Defaults should be fine at your size. In big clusters I usually set event-threads to 4. On Mon, Sep 18, 2017 at 10:39 PM, Mauro Tridici <mauro.tridici at cmcc.it> wrote: > > Dear All, > > I just implemented a (6x(4+2)) DISTRIBUTED DISPERSED gluster (v.3.10) volume > based on the following hardware: > > - 3 gluster servers (each server with 2 CPU 10 cores, 64GB RAM, 12
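If you do decide to raise them, these are ordinary volume options set per volume; a minimal sketch, using the volume name tier2 from this thread and illustrative values only:

    gluster volume set tier2 client.event-threads 4
    gluster volume set tier2 server.event-threads 4
    gluster volume set tier2 performance.io-thread-count 32
    # verify the values currently in effect
    gluster volume get tier2 all | grep -E 'event-threads|io-thread-count'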
2018 Jan 02
2
"file changed as we read it" message during tar file creation on GlusterFS
Hi All, any news about this issue? Can I ignore this kind of error message or do I have to do something to correct it? Thank you in advance and sorry for my insistence. Regards, Mauro > On 29 Dec 2017, at 11:45, Mauro Tridici <mauro.tridici at cmcc.it> wrote: > > > Hi Nithya, > > thank you very much for your support and sorry for the delay. > Below
2017 Dec 29
2
"file changed as we read it" message during tar file creation on GlusterFS
Hi Mauro, What version of Gluster are you running and what is your volume configuration? IIRC, this was seen because of mismatches in the ctime returned to the client. I don't think there were issues with the files but I will leave it to Ravi and Raghavendra to comment. Regards, Nithya On 29 December 2017 at 04:10, Mauro Tridici <mauro.tridici at cmcc.it> wrote: > > Hi All,
2018 Jan 02
1
"file changed as we read it" message during tar file creation on GlusterFS
Hi Ravi, thank you very much for your support and explanation. If I understand correctly, the ctime xlator feature is not present in the current gluster package but it will be in a future release, right? Thank you again, Mauro > On 2 Jan 2018, at 12:53, Ravishankar N <ravishankar at redhat.com> wrote: > > I think it is safe to ignore it. The problem exists due to the
2018 Jan 02
0
"file changed as we read it" message during tar file creation on GlusterFS
I think it is safe to ignore it. The problem exists due to the minor difference in file timestamps on the backend bricks of the same subvolume (for a given file); during the course of the tar, the timestamp can be served from different bricks, causing it to complain. The ctime xlator[1] feature, once ready, should fix this issue by storing timestamps as xattrs on the bricks, i.e. all bricks
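To confirm that this is what is happening, one can compare the timestamps of the same file on the backend bricks of one subvolume; a hypothetical sketch, with brick paths and file name borrowed from the examples elsewhere in this thread:

    # run on each server holding a brick of the subvolume that stores the file
    stat /gluster/mnt1/brick/year1990/lffd1990050706p.nc.gz
    stat /gluster/mnt2/brick/year1990/lffd1990050706p.nc.gz
    # even a small difference in the Modify/Change times between bricks is
    # enough for tar to report "file changed as we read it"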
2017 Dec 29
0
"file changed as we read it" message during tar file creation on GlusterFS
Hi Nithya, thank you very much for your support and sorry for the delay. Below you can find the output of the "gluster volume info tier2" command and the gluster software stack version: gluster volume info Volume Name: tier2 Type: Distributed-Disperse Volume ID: a28d88c5-3295-4e35-98d4-210b3af9358c Status: Started Snapshot Count: 0 Number of Bricks: 6 x (4 + 2) = 36 Transport-type: tcp Bricks:
2017 Dec 20
2
"file changed as we read it" message during tar file creation on GlusterFS
Dear Users, I'm experiencing a random problem (a "file changed as we read it" error) during tar file creation on a distributed dispersed Gluster file system. The tar files seem to be created correctly, but I can see a lot of messages similar to the following ones: tar: ./year1990/lffd1990050706p.nc.gz: file changed as we read it tar: ./year1990/lffd1990052106p.nc.gz: file changed as we read
2017 Dec 28
0
"file changed as we read it" message during tar file creation on GlusterFS
Hi All, has anyone had the same experience? Could you provide me some information about this error? It happens only on the GlusterFS file system. Thank you, Mauro > On 20 Dec 2017, at 16:57, Mauro Tridici <mauro.tridici at cmcc.it> wrote: > > > Dear Users, > > I'm experiencing a random problem (a "file changed as we read it" error) during tar files
2017 Sep 26
2
df command shows transport endpoint mount error on gluster client v.3.10.5 + core dump
Hi Ashish, attached you can find the gdb output (with bt and thread outputs) and the complete log file up to the crash. Thank you for your support. Mauro Tridici > On 26 Sep 2017, at 10:11, Ashish Pandey <aspandey at redhat.com> wrote: > > Hi, > > Following are the commands to get the debug info for gluster - > gdb
2017 Sep 27
0
df command shows transport endpoint mount error on gluster client v.3.10.5 + core dump
Hi Ashish, I'm sorry to disturb you again, but I would like to know if you received the log files correctly. Thank you, Mauro Tridici > On 26 Sep 2017, at 10:19, Mauro Tridici <mauro.tridici at cmcc.it> wrote: > > > Hi Ashish, > > attached you can find the gdb output (with bt and thread outputs) and the complete log file up to the crash.
2017 Sep 26
2
df command shows transport endpoint mount error on gluster client v.3.10.5 + core dump
Hi Ashish, thank you for your answer. Do you need the complete client log file only or something else in particular? Unfortunately, I have never used the "bt" command. Could you please provide me with a usage example? I will provide all the logs you need. Thank you again, Mauro > On 26 Sep 2017, at 09:30, Ashish Pandey <aspandey at redhat.com> wrote: > > Hi Mauro, > >
2017 Sep 26
0
df command shows transport endpoint mount error on gluster client v.3.10.5 + core dump
Hi, Following are the commands to get the debug info for gluster - gdb /usr/local/sbin/glusterfs <core file path> Then at the gdb prompt you need to type bt or backtrace (gdb) bt You can also provide the output of "thread apply all bt". (gdb) thread apply all bt The above commands should be executed on the client node on which you have mounted the gluster volume and on which it crashed.
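The same information can also be captured non-interactively and attached to a mail; a minimal sketch, with the core file path and output file name as placeholders:

    gdb -batch -ex "bt" -ex "thread apply all bt" \
        /usr/local/sbin/glusterfs /path/to/core > gluster-backtrace.txt 2>&1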
2017 Sep 25
2
df command shows transport endpoint mount error on gluster client v.3.10.5 + core dump
Dear Gluster Users, I implemented a distributed disperse 6x(4+2) gluster (v.3.10.5) volume with the following options: [root at s01 tier2]# gluster volume info Volume Name: tier2 Type: Distributed-Disperse Volume ID: a28d88c5-3295-4e35-98d4-210b3af9358c Status: Started Snapshot Count: 0 Number of Bricks: 6 x (4 + 2) = 36 Transport-type: tcp Bricks: Brick1: s01-stg:/gluster/mnt1/brick Brick2:
2017 Sep 26
0
df command shows transport endpoint mount error on gluster client v.3.10.5 + core dump
Hi Mauro, We would require the complete log file to debug this issue. Also, could you please provide some more information about the core after attaching to gdb and using the command "bt". --- Ashish ----- Original Message ----- From: "Mauro Tridici" <mauro.tridici at cmcc.it> To: "Gluster Users" <gluster-users at gluster.org> Sent: Monday, September 25,
2018 Apr 27
2
How to set up a 4 way gluster file system
Hi, I have 4 nodes, so a quorum would be 3 of 4. The question, I suppose, is why does the documentation give this command as an example without qualifying it? So am I running the wrong command? I want a "raid10". On 27 April 2018 at 18:05, Karthik Subrahmanya <ksubrahm at redhat.com> wrote: > Hi, > > With replica 2 volumes one can easily end up in split-brains if there are
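For comparison, the usual "raid10"-style layout on 4 nodes is a 2 x 2 distributed-replicated volume; a minimal sketch with hypothetical host and brick names (gluster will still warn that replica 2 is prone to split-brain, which is exactly the concern raised in the quoted reply):

    gluster volume create gv0 replica 2 \
        node1:/bricks/brick1/gv0 node2:/bricks/brick1/gv0 \
        node3:/bricks/brick1/gv0 node4:/bricks/brick1/gv0
    gluster volume start gv0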
2018 Jan 12
1
Reading over than the file size on dispersed volume
Hi All, I'm using gluster as a dispersed volume and I want to ask about a very serious issue. I have 3 servers and there are 9 bricks. My volume is like below. ------------------------------------------------------ Volume Name: TEST_VOL Type: Disperse Volume ID: be52b68d-ae83-46e3-9527-0e536b867bcc Status: Started Snapshot Count: 0 Number of Bricks: 1 x (6 + 3) = 9 Transport-type: tcp Bricks: