similar to: how to calculate the ideal value for client.event-threads, server.event-threads and performance.io-thread-count?

Displaying 20 results from an estimated 5000 matches similar to: "how to calculate the ideal value for client.event-threads, server.event-threads and performance.io-thread-count?"

2017 Sep 20
0
how to calculate the ideal value for client.event-threads, server.event-threads and performance.io-thread-count?
Defaults should be fine at your scale. In big clusters I usually set event-threads to 4. On Mon, Sep 18, 2017 at 10:39 PM, Mauro Tridici <mauro.tridici at cmcc.it> wrote: > > Dear All, > > I just implemented a (6x(4+2)) DISTRIBUTED DISPERSED gluster (v.3.10) volume > based on the following hardware: > > - 3 gluster servers (each server with 2 CPUs, 10 cores each, 64GB RAM, 12
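These are ordinary volume options, so they can be inspected and changed with the gluster CLI at any time. A minimal sketch, assuming the volume name tier2 used elsewhere in these threads and the illustrative value of 4 quoted above (not a recommendation for a 3-server setup):

  gluster volume get tier2 client.event-threads         # show the current value
  gluster volume set tier2 client.event-threads 4
  gluster volume set tier2 server.event-threads 4
  gluster volume get tier2 performance.io-thread-count  # io-threads is a separate tunable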
2017 Sep 18
6
how many hosts could be down in a 12x(4+2) distributed dispersed volume?
Dear All, I just implemented a (6x(4+2)) DISTRIBUTED DISPERSED gluster (v.3.10) volume based on the following hardware: - 3 gluster servers (each server with 2 CPUs, 10 cores each, 64GB RAM, 12 SAS 12Gb/s hard disks, 10GbE storage network) Now, we need to add 3 new servers with the same hardware configuration, respecting the current volume topology. If I'm right, we will obtain a DISTRIBUTED
2017 Sep 20
0
how many hosts could be down in a 12x(4+2) distributed dispersed volume?
Hi Mauro Tridici, From the information provided it appears that you have placed 2 bricks of a subvolume on one host. Please confirm. The number of hosts that could go down without losing access to data can be derived from the brick configuration/distribution. Please let us know the brick distribution plan. Regards, Sunil kumar Acharya Senior Software Engineer Red Hat
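For anyone who wants to confirm the placement being asked about here, the brick-to-host mapping can be read straight from the volume definition. A small sketch, assuming the volume name tier2 used in the related threads:

  gluster volume info tier2 | grep '^Brick[0-9]'
  # each consecutive group of 6 bricks (4+2) forms one disperse subvolume;
  # count how many of those 6 bricks sit on the same host to judge fault tolerance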
2017 Sep 20
0
how many hosts could be down in a 12x(4+2) distributed dispersed volume?
After adding 3 more nodes you will have 6 nodes and 2 HD on each node. It depends on the way you are going to add new bricks to the existing volume "vol". Keep in mind that in a given EC subvolume of 4+2, at any point in time 2 bricks can be down. When you grow 6 * (4+2) to 12 * (4+2) you have to provide the paths of the bricks you want to add. Suppose you want to add 6
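As an illustration of the choice being described, a sketch of the add-brick form follows; the command itself is standard gluster CLI, but the server names s4-s6 and the brick paths are only placeholders:

  # one new 4+2 subvolume built entirely from the three new servers,
  # two bricks per server (mirroring the existing layout):
  gluster volume add-brick vol \
      s4:/gluster/mnt1/brick s4:/gluster/mnt2/brick \
      s5:/gluster/mnt1/brick s5:/gluster/mnt2/brick \
      s6:/gluster/mnt1/brick s6:/gluster/mnt2/brick
  # for a 4+2 disperse volume, bricks must be added in multiples of 6 and
  # each consecutive group of 6 on the command line becomes one subvolume

With two bricks of the new subvolume on each new server, a single server failure still stays within the redundancy of 2, which matches the behaviour described above.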
2017 Sep 20
0
how many hosts could be down in a 12x(4+2) distributed dispersed volume?
If you add bricks to the existing volume, one host can be down in each three-host group. If you recreate the volume with one brick on each host, then two random hosts can be tolerated. Assume s1,s2,s3 are the current servers and you add s4,s5,s6 and extend the volume. If any two servers in the same group go down you lose data. If you choose two random hosts, the probability you lose data will be 20% in this
2018 Jan 02
1
"file changed as we read it" message during tar file creation on GlusterFS
Hi Ravi, thank you very much for your support and explanation. If I understand correctly, the ctime xlator feature is not present in the current gluster package but it will be in a future release, right? Thank you again, Mauro > On 02 Jan 2018, at 12:53, Ravishankar N <ravishankar at redhat.com> wrote: > > I think it is safe to ignore it. The problem exists due to the
2018 Jan 02
2
"file changed as we read it" message during tar file creation on GlusterFS
Hi All, any news about this issue? Can I ignore this kind of error message, or do I have to do something to correct it? Thank you in advance and sorry for my insistence. Regards, Mauro > On 29 Dec 2017, at 11:45, Mauro Tridici <mauro.tridici at cmcc.it> wrote: > > > Hi Nithya, > > thank you very much for your support and sorry for the delay. > Below
2017 Dec 29
2
"file changed as we read it" message during tar file creation on GlusterFS
Hi Mauro, What version of Gluster are you running and what is your volume configuration? IIRC, this was seen because of mismatches in the ctime returned to the client. I don't think there were issues with the files but I will leave it to Ravi and Raghavendra to comment. Regards, Nithya On 29 December 2017 at 04:10, Mauro Tridici <mauro.tridici at cmcc.it> wrote: > > Hi All,
2018 Jan 02
0
"file changed as we read it" message during tar file creation on GlusterFS
I think it is safe to ignore it. The problem exists due to a minor difference in the file timestamps on the backend bricks of the same subvolume (for a given file); during the course of the tar run, the timestamp can be served from different bricks, causing it to complain. The ctime xlator[1] feature, once ready, should fix this issue by storing timestamps as xattrs on the bricks, i.e. all bricks
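One way to see the mismatch described above is to compare the backend timestamps of a single file on every brick of its subvolume. A rough sketch, assuming the brick layout shown later in these threads (bricks under /gluster/mnt*/brick on s01-stg, s02-stg and s03-stg) and one of the files from the original report; adapt the paths to your own volume:

  for h in s01-stg s02-stg s03-stg; do
    ssh "$h" "stat -c '%n  mtime=%y  ctime=%z' /gluster/mnt*/brick/year1990/lffd1990050706p.nc.gz" 2>/dev/null
  done

If the mtime differs between bricks of the same subvolume, that is exactly the condition the ctime xlator is meant to remove by serving times from a stored xattr instead of from the individual bricks.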
2017 Dec 29
0
"file changed as we read it" message during tar file creation on GlusterFS
Hi Nithya, thank you very much for your support and sorry for the delay. Below you can find the output of the "gluster volume info tier2" command and the gluster software stack version:
gluster volume info
Volume Name: tier2
Type: Distributed-Disperse
Volume ID: a28d88c5-3295-4e35-98d4-210b3af9358c
Status: Started
Snapshot Count: 0
Number of Bricks: 6 x (4 + 2) = 36
Transport-type: tcp
Bricks:
2017 Dec 28
0
"file changed as we read it" message during tar file creation on GlusterFS
Hi All, has anyone had the same experience? Could you provide me some information about this error? It happens only on the GlusterFS file system. Thank you, Mauro > On 20 Dec 2017, at 16:57, Mauro Tridici <mauro.tridici at cmcc.it> wrote: > > > Dear Users, > > I'm experiencing a random problem (a "file changed as we read it" error) during tar file
2017 Sep 27
0
df command shows transport endpoint mount error on gluster client v.3.10.5 + core dump
Hi Ashish, I'm sorry to disturb you again, but I would like to know if you received the log files correctly. Thank you, Mauro Tridici > On 26 Sep 2017, at 10:19, Mauro Tridici <mauro.tridici at cmcc.it> wrote: > > > Hi Ashish, > > attached you can find the gdb output (with bt and thread outputs) and the complete log file up to when the crash happened.
2017 Sep 26
2
df command shows transport endpoint mount error on gluster client v.3.10.5 + core dump
Hi Ashish, attached you can find the gdb output (with bt and thread outputs) and the complete log file up to when the crash happened. Thank you for your support. Mauro Tridici > On 26 Sep 2017, at 10:11, Ashish Pandey <aspandey at redhat.com> wrote: > > Hi, > > Following are the commands to get the debug info for gluster - > gdb
2017 Dec 20
2
"file changed as we read it" message during tar file creation on GlusterFS
Dear Users, I'm experiencing a random problem (a "file changed as we read it" error) during tar file creation on a distributed dispersed Gluster file system. The tar files seem to be created correctly, but I can see a lot of messages similar to the following ones:
tar: ./year1990/lffd1990050706p.nc.gz: file changed as we read it
tar: ./year1990/lffd1990052106p.nc.gz: file changed as we read
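Not discussed in the thread itself, but if the archives turn out to be intact, GNU tar can be told not to print this particular warning; a hedged sketch (the --warning keyword is standard GNU tar, the file and directory names are simply the ones from the report):

  tar --warning=no-file-changed -czf year1990.tar.gz ./year1990
  # GNU tar may still exit with status 1 when a file changed while being read,
  # so any calling script should treat exit code 1 accordingly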
2017 Sep 26
0
df command shows transport endpoint mount error on gluster client v.3.10.5 + core dump
Hi, following are the commands to get the debug info for gluster:
gdb /usr/local/sbin/glusterfs <core file path>
Then at the gdb prompt you need to type bt or backtrace:
(gdb) bt
You can also provide the output of "thread apply all bt":
(gdb) thread apply all bt
The above commands should be executed on the client node on which you have mounted the gluster volume and it crashed.
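The same information can also be captured non-interactively, which makes it easier to attach to a mail; a small sketch using standard gdb batch options, where core.12345 stands in for the actual core file name:

  gdb -batch -ex 'bt' -ex 'thread apply all bt' \
      /usr/local/sbin/glusterfs core.12345 > gdb-backtrace.txt 2>&1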
2017 Sep 26
2
df command shows transport endpoint mount error on gluster client v.3.10.5 + core dump
Hi Ashish, thank you for your answer. Do you need the complete client log file only, or something else in particular? Unfortunately, I have never used the "bt" command. Could you please provide me with a usage example? I will provide all the logs you need. Thank you again, Mauro > On 26 Sep 2017, at 09:30, Ashish Pandey <aspandey at redhat.com> wrote: > > Hi Mauro, > >
2017 Sep 26
0
df command shows transport endpoint mount error on gluster client v.3.10.5 + core dump
Hi Mauro, We would require the complete log file to debug this issue. Also, could you please provide some more information about the core after attaching to it with gdb and using the "bt" command? --- Ashish ----- Original Message ----- From: "Mauro Tridici" <mauro.tridici at cmcc.it> To: "Gluster Users" <gluster-users at gluster.org> Sent: Monday, September 25,
2017 Sep 25
2
df command shows transport endpoint mount error on gluster client v.3.10.5 + core dump
Dear Gluster Users, I implemented a distributed disperse 6x(4+2) gluster (v.3.10.5) volume with the following options:
[root@s01 tier2]# gluster volume info
Volume Name: tier2
Type: Distributed-Disperse
Volume ID: a28d88c5-3295-4e35-98d4-210b3af9358c
Status: Started
Snapshot Count: 0
Number of Bricks: 6 x (4 + 2) = 36
Transport-type: tcp
Bricks:
Brick1: s01-stg:/gluster/mnt1/brick
Brick2:
2017 Sep 18
0
wrong DF command output after deleting all files in gluster volume
Dear All, I just implemented a (6x(4+2)) DISTRIBUTED DISPERSED gluster (v.3.10) volume based on the following hardware: - 3 gluster servers (each server with 2 CPUs, 10 cores each, 64GB RAM, 12 SAS 12Gb/s hard disks, 10GbE storage network) Everything seems to be OK: I created a first subdirectory in my volume, "/volume/test_folder", and populated it with a lot of data. But, after deleting the
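A first check for this kind of report (not taken from the thread, just a generic sketch reusing the server and brick names that appear in the other messages; /tier2 stands in for wherever the volume is mounted) is to compare what the client mount reports with what the brick filesystems report:

  df -h /tier2                        # on the client, at the FUSE mount point
  for h in s01-stg s02-stg s03-stg; do
    ssh "$h" 'df -h /gluster/mnt*'    # on each server, the brick filesystems
  done

If the bricks already show the space as freed while the mount does not, the stale figure is most likely being produced on the gluster side rather than by the backing disks.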
2023 Jul 18
2
Installation of R-4.3.1 with intel 2022
Note that 'intel 2022' is a bit vague. The current version is 2023.1.0, and that has both the 'classic' (icc/icpc/ifort, which it seems you used) and new (icx/icpx/ifx) compilers; the former are slated to be discontinued later this year. R did not know about ifx, so it did not build with the new set. The parts of the manual Tomas referred to were about the old