similar to: Multi petabyte gluster

Displaying 20 results from an estimated 1100 matches similar to: "Multi petabyte gluster"

2017 Jun 30
2
Multi petabyte gluster
We are using 3.10 and have a 7 PB cluster. We decided against 16+3 as the rebuild times are bottlenecked by matrix operations, which scale as the square of the number of data stripes. There are some savings because of larger data chunks, but we ended up using 8+3 and heal times are about half compared to 16+3. -Alastair On 30 June 2017 at 02:22, Serkan Çoban <cobanserkan at gmail.com>
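The scaling claim above can be sketched with a toy cost model (illustrative only; `relative_rebuild_work` and its assumptions are mine, not Gluster internals): in a k+m disperse volume, each rebuilt byte is a Galois-field dot product over k surviving fragments, and k fragments' worth of data must be read, so rebuild work grows roughly as k².

```python
# Toy model of EC rebuild cost: reads scale with k, and per-byte decode
# work (GF(2^8) multiply-accumulates) also scales with k, so total work
# grows roughly as k^2. Numbers are relative, not measured.

def relative_rebuild_work(k: int) -> int:
    """Relative matrix work to rebuild one failed brick in a k+m volume."""
    reads = k          # surviving fragments read per stripe
    ops_per_byte = k   # GF multiply-accumulates per reconstructed byte
    return reads * ops_per_byte

w8 = relative_rebuild_work(8)    # 8+3 geometry
w16 = relative_rebuild_work(16)  # 16+3 geometry
print(f"16+3 does {w16 / w8:.0f}x the matrix work of 8+3")
```

The model predicts 4x the work for 16+3 versus 8+3; the poster observed only about 2x the heal time, consistent with his note that larger data chunks claw some of that back.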
2017 Jun 30
0
Multi petabyte gluster
>Thanks for the reply. We will mainly use this for archival - near-cold storage. Archival usage is good for EC >Anything, from your experience, to keep in mind while planning large installations? I am using 3.7.11 and the only problem is slow rebuild time when a disk fails. It takes 8 days to heal an 8TB disk. (This might be related to my EC configuration, 16+4.) 3.9+ versions have some
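The figures quoted above imply a very low effective heal throughput; a quick back-of-the-envelope check (decimal terabytes assumed):

```python
# Effective heal throughput implied by "8 days to heal an 8TB disk".
disk_bytes = 8e12              # 8 TB, decimal
heal_seconds = 8 * 24 * 3600   # 8 days
mb_per_s = disk_bytes / heal_seconds / 1e6
print(f"~{mb_per_s:.1f} MB/s effective heal rate")
```

Roughly 11-12 MB/s is far below what a single spinning disk can stream sequentially, which is why the replies in this thread focus on the EC geometry and on self-heal tuning rather than raw disk speed.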
2017 Jun 30
0
Multi petabyte gluster
Did you test healing by increasing disperse.shd-max-threads? What are your heal times per brick now? On Fri, Jun 30, 2017 at 8:01 PM, Alastair Neil <ajneil.tech at gmail.com> wrote: > We are using 3.10 and have a 7 PB cluster. We decided against 16+3 as the > rebuild times are bottlenecked by matrix operations which scale as the square > of the number of data stripes. There are
2017 Jun 28
1
Multi petabyte gluster
Has anyone scaled to a multi petabyte gluster setup? How well does erasure code do with such a large setup? Thanks
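As a starting point for that planning question, the capacity/fault-tolerance trade-off of the erasure-code geometries discussed in this thread can be tabulated (a sketch; the configurations listed are just the ones the posters mention):

```python
# Usable-capacity fraction k/(k+m) and failures tolerated (m) for the
# disperse geometries discussed in this thread.
configs = [(8, 3), (16, 3), (16, 4)]
efficiency = {}
for k, m in configs:
    efficiency[(k, m)] = k / (k + m)   # usable fraction of raw capacity
    print(f"{k}+{m}: {efficiency[(k, m)]:.1%} usable, "
          f"tolerates {m} failed bricks per subvolume")
```

Wider stripes buy capacity (16+3 is ~84% usable versus ~73% for 8+3) at the cost of the slower heals discussed in the replies above.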
2017 Sep 07
2
Can I use 3.7.11 server with 3.10.5 client?
Hi, Is it safe to use a 3.10.5 client with a 3.7.11 server for a read-only data move operation? The client will have 3.10.5 glusterfs-client packages. It will mount one volume from the 3.7.11 cluster and one from the 3.10.5 cluster. I will read from 3.7.11 and write to 3.10.5.
2017 Aug 29
2
Glusterd process hangs on reboot
Here are the logs after stopping all three volumes and restarting glusterd on all nodes. I waited 70 minutes after the glusterd restart, but it is still consuming 100% CPU. https://www.dropbox.com/s/pzl0f198v03twx3/80servers_after_glusterd_restart.zip?dl=0 On Tue, Aug 29, 2017 at 12:37 PM, Gaurav Yadav <gyadav at redhat.com> wrote: > > I believe the logs you have shared consist of
2017 Sep 04
2
Glusterd process hangs on reboot
On Mon, Sep 4, 2017 at 5:28 PM, Serkan Çoban <cobanserkan at gmail.com> wrote: > >1. On the 80-node cluster, did you reboot only one node or multiple ones? > Tried both, result is the same, but the logs/stacks are from stopping and > starting glusterd only on one server while others are running. > > >2. Are you sure that pstack output was always constantly pointing on >
2017 Aug 29
2
Glusterd process hangs on reboot
Here are the requested logs: https://www.dropbox.com/s/vt187h0gtu5doip/gluster_logs_20_40_80_servers.zip?dl=0 On Tue, Aug 29, 2017 at 7:48 AM, Gaurav Yadav <gyadav at redhat.com> wrote: > So far I haven't found anything significant. > > Can you send me gluster logs along with command-history logs for these > scenarios: > Scenario 1: 20 servers > Scenario 2: 40
2017 Aug 29
0
Glusterd process hangs on reboot
glusterd returned to normal; here are the logs: https://www.dropbox.com/s/41jx2zn3uizvr53/80servers_glusterd_normal_status.zip?dl=0 On Tue, Aug 29, 2017 at 1:47 PM, Serkan Çoban <cobanserkan at gmail.com> wrote: > Here are the logs after stopping all three volumes and restarting > glusterd on all nodes. I waited 70 minutes after the glusterd restart, but > it is still consuming 100% CPU.
2017 Aug 28
2
Glusterd process hangs on reboot
Hi Gaurav, Any progress on the problem? On Thursday, August 24, 2017, Serkan Çoban <cobanserkan at gmail.com> wrote: > Thank you Gaurav, > Here are more findings: > The problem does not happen using only 20 servers, each with 68 bricks > (peer probe only 20 servers). > If we use 40 servers with a single volume, the glusterd 100% CPU state > continues for 5 minutes and it goes to
2017 Aug 29
0
Glusterd process hangs on reboot
I believe the logs you have shared consist of volume creation followed by starting the volume. However, you have mentioned that when a node from the 80-server cluster gets rebooted, the glusterd process hangs. Could you please provide the logs which led glusterd to hang for all the cases, along with glusterd process utilization. Thanks Gaurav On Tue, Aug 29, 2017 at 2:44 PM, Serkan Çoban
2017 Sep 03
2
Glusterd process hangs on reboot
----- Original Message ----- > From: "Ben Turner" <bturner at redhat.com> > To: "Serkan Çoban" <cobanserkan at gmail.com> > Cc: "Gluster Users" <gluster-users at gluster.org> > Sent: Sunday, September 3, 2017 2:30:31 PM > Subject: Re: [Gluster-users] Glusterd process hangs on reboot > > ----- Original Message ----- > >
2017 Sep 08
0
Can I use 3.7.11 server with 3.10.5 client?
Any suggestions? On Thu, Sep 7, 2017 at 4:35 PM, Serkan Çoban <cobanserkan at gmail.com> wrote: > Hi, > > Is it safe to use a 3.10.5 client with a 3.7.11 server for a read-only data > move operation? > The client will have 3.10.5 glusterfs-client packages. It will mount one > volume from the 3.7.11 cluster and one from the 3.10.5 cluster. I will read > from 3.7.11 and write to 3.10.5.
2017 Sep 04
0
Glusterd process hangs on reboot
I have been using a 60-server, 1560-brick 3.7.11 cluster without problems for 1 year. I did not see this problem with it. Note that this problem does not happen when I install packages, start glusterd, peer probe, and create the volumes; it appears only after a glusterd restart. Also note that this still happens without any volumes, so I think it is not related to brick count... On Mon, Sep 4, 2017
2017 Aug 29
0
Glusterd process hangs on reboot
So far I haven't found anything significant. Can you send me gluster logs along with command-history logs for these scenarios: Scenario 1: 20 servers Scenario 2: 40 servers Scenario 3: 80 servers Thanks Gaurav On Mon, Aug 28, 2017 at 11:22 AM, Serkan Çoban <cobanserkan at gmail.com> wrote: > Hi Gaurav, > Any progress on the problem? > > On Thursday, August 24,
2017 Aug 24
2
Glusterd process hangs on reboot
I am working on it and will share my findings as soon as possible. Thanks Gaurav On Thu, Aug 24, 2017 at 3:58 PM, Serkan Çoban <cobanserkan at gmail.com> wrote: > Restarting glusterd causes the same thing. I tried with 3.12.rc0, > 3.10.5, 3.8.15, and 3.7.20, all with the same behavior. > My OS is CentOS 6.9; I tried with CentOS 6.8 and the problem remains... > The only way to a healthy state is
2017 Sep 03
2
Glusterd process hangs on reboot
No worries Serkan, you can continue to use your 40-node clusters. The backtrace has resolved the function names and it *should* be sufficient to debug the issue. Thanks for letting us know. We'll post on this thread again to notify you about the findings. On Sat, Sep 2, 2017 at 2:42 PM, Serkan Çoban <cobanserkan at gmail.com> wrote: > Hi Milind, > > Anything new about the
2017 Aug 24
6
Glusterd process hangs on reboot
Here you can find 10 stack trace samples from glusterd; I waited 10 seconds between each trace. https://www.dropbox.com/s/9f36goq5xn3p1yt/glusterd_pstack.zip?dl=0 The content of the first stack trace is here: Thread 8 (Thread 0x7f7a8cd4e700 (LWP 43069)): #0 0x0000003aa5c0f00d in nanosleep () from /lib64/libpthread.so.0 #1 0x000000303f837d57 in ?? () from /usr/lib64/libglusterfs.so.0 #2
2017 Aug 21
2
Brick count limit in a volume
Hi, The Gluster version is 3.10.5. I am trying to create a 5500-brick volume, but I am getting an error stating that 4444 bricks is the limit. Is this a known limit? Can I change it with an option? Thanks, Serkan
2017 Sep 03
0
Glusterd process hangs on reboot
I usually change event threads to 4, but those logs are from a default installation. On Sun, Sep 3, 2017 at 9:52 PM, Ben Turner <bturner at redhat.com> wrote: > ----- Original Message ----- >> From: "Ben Turner" <bturner at redhat.com> >> To: "Serkan Çoban" <cobanserkan at gmail.com> >> Cc: "Gluster Users" <gluster-users at