Displaying 20 results from an estimated 3000 matches similar to: "3.10.5 vs 3.12.0 huge performance loss"
2017 Sep 07
2
3.10.5 vs 3.12.0 huge performance loss
It is sequential write with file size 2GB. Same behavior observed with
3.11.3 too.
On Thu, Sep 7, 2017 at 12:43 AM, Shyam Ranganathan <srangana at redhat.com> wrote:
> On 09/06/2017 05:48 AM, Serkan Çoban wrote:
>>
>> Hi,
>>
>> Just did some ingestion tests on a 40-node 16+4 EC, 19PB single volume.
>> 100 clients are writing, each with 5 threads, 500 threads in total.
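A minimal sketch of the kind of sequential-write ingest load described above (2GB files, 5 writer threads per client), assuming fio is used and the volume is mounted at /mnt/glustervol; the paths and job parameters are assumptions, not the poster's exact tooling. Something like this would be run on each of the 100 clients:
  # 5 sequential-write jobs per client, 2GB each, 1MB block size
  mkdir -p /mnt/glustervol/ingest
  fio --name=ingest --directory=/mnt/glustervol/ingest \
      --rw=write --bs=1M --size=2G --numjobs=5 --group_reporting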
2017 Sep 06
0
3.10.5 vs 3.12.0 huge performance loss
On 09/06/2017 05:48 AM, Serkan Çoban wrote:
> Hi,
>
> Just did some ingestion tests on a 40-node 16+4 EC, 19PB single volume.
> 100 clients are writing, each with 5 threads, 500 threads in total.
> With 3.10.5 each server has 800MB/s network traffic, cluster total is 32GB/s
> With 3.12.0 each server has 200MB/s network traffic, cluster total is 8GB/s
> I did not change any volume
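One hedged way to reproduce the per-server throughput comparison above (about 800MB/s under 3.10.5 versus 200MB/s under 3.12.0) is to sample NIC counters on each server while the ingest runs; sar from sysstat is one option, and the interval and sample count below are arbitrary:
  # rxkB/s and txkB/s per interface, 12 samples at 5-second intervals
  sar -n DEV 5 12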
2017 Sep 11
0
3.10.5 vs 3.12.0 huge performance loss
Here are my results:
Summary: I am not able to reproduce the problem, IOW I get relatively
equivalent numbers for sequential IO when going against 3.10.5 or 3.12.0
Next steps:
- Could you pass along your volfiles: both the client vol file (from
/var/lib/glusterd/vols/<yourvolname>/patchy.tcp-fuse.vol) and a brick vol
file from the same place?
- I want to check
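A minimal sketch of collecting the requested volfiles, assuming the volume is named "myvol" (substitute your own volume name); the generated .vol files live under /var/lib/glusterd/vols/<volname>/ on the servers:
  VOL=myvol
  tar czf ${VOL}-volfiles.tar.gz /var/lib/glusterd/vols/${VOL}/*.vol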
2017 Sep 12
2
3.10.5 vs 3.12.0 huge performance loss
Hi,
Servers are in production with 3.10.5, so I cannot provide 3.12-related
information anymore.
Thanks for the help, sorry for the inconvenience.
2017 Sep 12
0
3.10.5 vs 3.12.0 huge performance loss
Serkan,
Will it be possible to provide gluster volume profile <volname>
info output with 3.10.5 vs 3.12.0? That should give us clues about what
could be happening.
On Tue, Sep 12, 2017 at 1:51 PM, Serkan Çoban <cobanserkan at gmail.com> wrote:
> Hi,
> Servers are in production with 3.10.5, so I cannot provide 3.12-related
> information anymore.
> Thanks for the help,
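A hedged sketch of capturing the requested profile data; "myvol" is a placeholder volume name, and the idea is to run the same ingest workload once per release and compare the two outputs:
  gluster volume profile myvol start
  # ... run the ingest workload for a few minutes ...
  gluster volume profile myvol info > profile-output.txt
  gluster volume profile myvol stop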
2017 Sep 06
1
Announcing GlusterFS release 3.12.0 (Long Term Maintenance)
On 09/05/2017 02:07 PM, Serkan Çoban wrote:
> For RPM packages you can use [1]; they installed without any problems.
> It is taking time for packages to land in the CentOS Storage SIG repo...
Thank you for reporting this. The SIG does take a while to get updated
with the latest bits. We are looking at ways to improve that in the future.
>
> [1]
2017 Sep 07
2
Can I use 3.7.11 server with 3.10.5 client?
Hi,
Is it safe to use a 3.10.5 client with a 3.7.11 server for a read-only
data move operation?
The client will have the 3.10.5 glusterfs-client packages. It will mount one
volume from the 3.7.11 cluster and one from the 3.10.5 cluster. I will read
from 3.7.11 and write to 3.10.5.
2017 Sep 08
0
Can I use 3.7.11 server with 3.10.5 client?
Any suggestions?
On Thu, Sep 7, 2017 at 4:35 PM, Serkan Çoban <cobanserkan at gmail.com> wrote:
> Hi,
>
> Is it safe to use a 3.10.5 client with a 3.7.11 server for a read-only
> data move operation?
> The client will have the 3.10.5 glusterfs-client packages. It will mount one
> volume from the 3.7.11 cluster and one from the 3.10.5 cluster. I will read
> from 3.7.11 and write to 3.10.5.
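A minimal sketch of the described setup, with hypothetical host and volume names: the 3.10.5 client mounts the 3.7.11 volume read-only as the source and the 3.10.5 volume read-write as the destination, then copies the data across:
  mount -t glusterfs -o ro old-node:/oldvol /mnt/src
  mount -t glusterfs new-node:/newvol /mnt/dst
  cp -a /mnt/src/. /mnt/dst/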
2017 Aug 29
2
Glusterd process hangs on reboot
Here are the logs after stopping all three volumes and restarting
glusterd on all nodes. I waited 70 minutes after the glusterd restart but
it is still consuming 100% CPU.
https://www.dropbox.com/s/pzl0f198v03twx3/80servers_after_glusterd_restart.zip?dl=0
On Tue, Aug 29, 2017 at 12:37 PM, Gaurav Yadav <gyadav at redhat.com> wrote:
>
> I believe the logs you have shared consist of
2017 Aug 29
2
Glusterd process hangs on reboot
Here are the requested logs:
https://www.dropbox.com/s/vt187h0gtu5doip/gluster_logs_20_40_80_servers.zip?dl=0
On Tue, Aug 29, 2017 at 7:48 AM, Gaurav Yadav <gyadav at redhat.com> wrote:
> So far I haven't found anything significant.
>
> Can you send me gluster logs along with command-history-logs for these
> scenarios:
> Scenario1 : 20 servers
> Scenario2 : 40
2017 Aug 24
6
Glusterd process hangs on reboot
Here you can find 10 stack trace samples from glusterd. I waited 10
seconds between each trace.
https://www.dropbox.com/s/9f36goq5xn3p1yt/glusterd_pstack.zip?dl=0
Content of the first stack trace is here:
Thread 8 (Thread 0x7f7a8cd4e700 (LWP 43069)):
#0 0x0000003aa5c0f00d in nanosleep () from /lib64/libpthread.so.0
#1 0x000000303f837d57 in ?? () from /usr/lib64/libglusterfs.so.0
#2
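A hedged sketch of how such samples could be collected (10 traces, 10 seconds apart); the output file names below are placeholders:
  for i in $(seq 1 10); do
      pstack $(pidof glusterd) > glusterd_pstack_${i}.txt
      sleep 10
  done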
2017 Sep 03
2
Glusterd process hangs on reboot
----- Original Message -----
> From: "Ben Turner" <bturner at redhat.com>
> To: "Serkan ?oban" <cobanserkan at gmail.com>
> Cc: "Gluster Users" <gluster-users at gluster.org>
> Sent: Sunday, September 3, 2017 2:30:31 PM
> Subject: Re: [Gluster-users] Glusterd process hangs on reboot
>
> ----- Original Message -----
> >
2017 Aug 28
2
Glusterd process hangs on reboot
Hi Gaurav,
Any progress about the problem?
On Thursday, August 24, 2017, Serkan Çoban <cobanserkan at gmail.com> wrote:
> Thank you Gaurav,
> Here are more findings:
> The problem does not happen using only 20 servers, each with 68 bricks.
> (peer probe only 20 servers)
> If we use 40 servers with a single volume, the glusterd 100% CPU state
> continues for 5 minutes and it goes to
2017 Sep 03
2
Glusterd process hangs on reboot
No worries Serkan,
You can continue to use your 40 node clusters.
The backtrace has resolved the function names and it *should* be sufficient
to debug the issue.
Thanks for letting us know.
We'll post on this thread again to notify you about the findings.
On Sat, Sep 2, 2017 at 2:42 PM, Serkan Çoban <cobanserkan at gmail.com> wrote:
> Hi Milind,
>
> Anything new about the
2017 Aug 21
2
Brick count limit in a volume
Hi,
Gluster version is 3.10.5. I am trying to create a 5500-brick volume,
but I am getting an error stating that 4444 bricks is the limit. Is this a
known limit? Can I change it with an option?
Thanks,
Serkan
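At this brick count a gluster volume create command would typically be scripted; below is a hypothetical sketch with assumed server names, brick paths, and a 16+4 disperse layout, not the poster's actual configuration:
  BRICKS=""
  for srv in $(seq -f "server%02g" 1 80); do
      for b in $(seq 1 68); do
          BRICKS="$BRICKS ${srv}:/bricks/brick${b}/data"
      done
  done
  # 16+4 erasure coding: disperse count 20 with redundancy 4
  gluster volume create bigvol disperse 20 redundancy 4 $BRICKS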
2017 Aug 29
0
Glusterd process hangs on reboot
glusterd returned to normal; here are the logs:
https://www.dropbox.com/s/41jx2zn3uizvr53/80servers_glusterd_normal_status.zip?dl=0
On Tue, Aug 29, 2017 at 1:47 PM, Serkan Çoban <cobanserkan at gmail.com> wrote:
> Here are the logs after stopping all three volumes and restarting
> glusterd on all nodes. I waited 70 minutes after the glusterd restart but
> it is still consuming 100% CPU.
2017 Aug 24
2
Glusterd process hangs on reboot
I am working on it and will share my findings as soon as possible.
Thanks
Gaurav
On Thu, Aug 24, 2017 at 3:58 PM, Serkan Çoban <cobanserkan at gmail.com> wrote:
> Restarting glusterd causes the same thing. I tried with 3.12.rc0,
> 3.10.5, 3.8.15, 3.7.20; all show the same behavior.
> My OS is CentOS 6.9; I tried with CentOS 6.8 and the problem remains...
> The only way to a healthy state is
2017 Sep 01
2
Glusterd process hangs on reboot
Hi,
You can find pstack samples here:
https://www.dropbox.com/s/6gw8b6tng8puiox/pstack_with_debuginfo.zip?dl=0
Here is the first one:
Thread 8 (Thread 0x7f92879ae700 (LWP 78909)):
#0 0x0000003d99c0f00d in nanosleep () from /lib64/libpthread.so.0
#1 0x000000310fe37d57 in gf_timer_proc () from /usr/lib64/libglusterfs.so.0
#2 0x0000003d99c07aa1 in start_thread () from /lib64/libpthread.so.0
#3
2017 Sep 04
2
Glusterd process hangs on reboot
On Mon, Sep 4, 2017 at 5:28 PM, Serkan Çoban <cobanserkan at gmail.com> wrote:
> >1. On an 80-node cluster, did you reboot only one node or multiple ones?
> Tried both, the result is the same, but the logs/stacks are from stopping and
> starting glusterd only on one server while the others are running.
>
> >2. Are you sure that pstack output was always constantly pointing on
>
2017 Sep 04
2
Glusterd process hangs on reboot
On Fri, Sep 1, 2017 at 8:47 AM, Milind Changire <mchangir at redhat.com> wrote:
> Serkan,
> I have gone through the other mails in the thread as well, but I am responding
> to this one specifically.
>
> Is this a source install or an RPM install ?
> If this is an RPM install, could you please install the
> glusterfs-debuginfo RPM and retry to capture the gdb backtrace.
>
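A hedged sketch of capturing such a symbol-resolved backtrace on the CentOS systems mentioned earlier; it assumes the debuginfo package is available from the configured repositories and that glusterd is running:
  # install debug symbols, then dump all thread backtraces from the live process
  yum install -y glusterfs-debuginfo
  gdb -p $(pidof glusterd) -batch -ex "thread apply all bt" > glusterd_backtrace.txt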