Displaying 11 results from an estimated 11 matches for "jotta".
2018 Feb 26
2
new Gluster cluster: 3.10 vs 3.12
After discussing with Xavi in #gluster-dev we found out that we could
eliminate the slow lstats by disabling disperse.eager-lock.
There is an open issue here :
https://bugzilla.redhat.com/show_bug.cgi?id=1546732
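For anyone hitting the same thing, the workaround is a single volume-set
call; a minimal sketch, assuming a volume named testvol:

  # turn off eager locking in the disperse translator
  gluster volume set testvol disperse.eager-lock off
  # verify the new value
  gluster volume get testvol disperse.eager-lock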
2018 Apr 18
1
Very slow rsync to gluster volume UNLESS `ls` or `find` scan dir on gluster volume first
...Police <http://www.androidpolice.com>, APK Mirror
<http://www.apkmirror.com/>, Illogical Robot LLC
beerpla.net | +ArtemRussakovskii
<https://plus.google.com/+ArtemRussakovskii> | @ArtemR
<http://twitter.com/ArtemR>
On Tue, Feb 27, 2018 at 6:15 AM, Ingard Mevåg <ingard at jotta.no> wrote:
> We got extremely slow stat calls on our disperse cluster running latest
> 3.12 with clients also running 3.12.
> When we downgraded clients to 3.10 the slow stat problem went away.
>
> We later found out that by disabling disperse.eager-lock we could run the
> 3.1...
2017 Nov 09
2
GlusterFS healing questions
...can play with disperse.self-heal-window-size to read more
bytes at one time, but I did not test it.
On Thu, Nov 9, 2017 at 4:47 PM, Xavi Hernandez <jahernan at redhat.com> wrote:
> Hi Rolf,
>
> answers follow inline...
>
> On Thu, Nov 9, 2017 at 3:20 PM, Rolf Larsen <rolf at jotta.no> wrote:
>>
>> Hi,
>>
>> We ran a test on GlusterFS 3.12.1 with erasure-coded volumes 8+2 with 10
>> bricks (default config, tested with 100 GB, 200 GB, 400 GB brick sizes,
>> 10 Gbit NICs)
>>
>> 1.
>> Tests show that healing takes about double the...
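The disperse.self-heal-window-size option mentioned at the top of this
excerpt is also a per-volume setting; a sketch, assuming a volume named
testvol and an arbitrary, untested value of 32:

  # heal more blocks per file in parallel (32 is just an example value)
  gluster volume set testvol disperse.self-heal-window-size 32
  # check what is currently configured
  gluster volume get testvol disperse.self-heal-window-size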
2017 Nov 09
0
GlusterFS healing questions
...to read more
> bytes at one time, but I did not test it.
>
>> On Thu, Nov 9, 2017 at 4:47 PM, Xavi Hernandez <jahernan at redhat.com> wrote:
>> Hi Rolf,
>>
>> answers follow inline...
>>
>>> On Thu, Nov 9, 2017 at 3:20 PM, Rolf Larsen <rolf at jotta.no> wrote:
>>>
>>> Hi,
>>>
>>> We ran a test on GlusterFS 3.12.1 with erasure-coded volumes 8+2 with 10
>>> bricks (default config, tested with 100 GB, 200 GB, 400 GB brick sizes,
>>> 10 Gbit NICs)
>>>
>>> 1.
>>> Tests s...
2017 Nov 09
0
GlusterFS healing questions
Hi Rolf,
answers follow inline...
On Thu, Nov 9, 2017 at 3:20 PM, Rolf Larsen <rolf at jotta.no> wrote:
> Hi,
>
> We ran a test on GlusterFS 3.12.1 with erasure-coded volumes 8+2 with 10
> bricks (default config, tested with 100 GB, 200 GB, 400 GB brick sizes,
> 10 Gbit NICs)
>
> 1.
> Tests show that healing takes about double the time on healing 200gb vs
> 100, and...
2017 Nov 09
2
GlusterFS healing questions
...:/mnt/test-ec-400/brick
Brick7: dn-310:/mnt/test-ec-400/brick
Brick8: dn-311:/mnt/test-ec-400_2/brick
Brick9: dn-312:/mnt/test-ec-400/brick
Brick10: dn-313:/mnt/test-ec-400/brick
Options Reconfigured:
nfs.disable: on
transport.address-family: inet
--
Regards
Rolf Arne Larsen
Ops Engineer
rolf at jottacloud.com <rolf at startsiden.no>
<http://www.jottacloud.com>
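Heal progress for runs like the ones above can be watched from the CLI;
a short sketch, assuming the volume is named test-ec-400, as the brick
paths suggest:

  # entries that still need healing, listed per brick
  gluster volume heal test-ec-400 info
  # cumulative count of entries pending heal
  gluster volume heal test-ec-400 statistics heal-count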
2018 Feb 27
2
Very slow rsync to gluster volume UNLESS `ls` or `find` scan dir on gluster volume first
Any updates on this one?
On Mon, Feb 5, 2018 at 8:18 AM, Tom Fite <tomfite at gmail.com> wrote:
> Hi all,
>
> I have seen this issue as well, on Gluster 3.12.1. (3 bricks per box, 2
> boxes, distributed-replicate) My testing shows the same thing -- running a
> find on a directory dramatically increases lstat performance. To add
> another clue, the performance degrades
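The workaround in the subject line is easy to script; a rough sketch,
assuming a hypothetical target directory of /mnt/gluster/data:

  # walk the tree once so the client caches the metadata...
  find /mnt/gluster/data > /dev/null
  # ...then run the transfer that was previously slow
  rsync -av /source/dir/ /mnt/gluster/data/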
2009 Jun 30
0
help friend :java nio problem
I don't think your example would copy the file correctly, but I'm getting
the same error when I run a similar test on our file system. It works on the
local file system.
Trygve
On Mon, Jun 29, 2009 at 4:59 PM, eagleeyes <eagleeyes at 126.com> wrote:
> Thanks, the attachment is a Java NIO test with mmap; you could use it for
> testing.
> You should mount GFS at a directory
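For reproducing this, the volume just needs to be mounted over FUSE first;
a minimal sketch, assuming a hypothetical server gfs1 and volume testvol:

  # mount the GlusterFS volume, then point the NIO test at /mnt/gfs
  mkdir -p /mnt/gfs
  mount -t glusterfs gfs1:/testvol /mnt/gfs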
2017 Sep 01
0
high number of pending calls
Hi
We're seeing a high number of pending calls on two of our glusterfs 3.10
clusters.
We have not tried to tune anything except changing server.event-threads: 2
gluster volume status callpool | grep Pending results in various numbers
but more often than not a fair few of the bricks have 200-400 pending calls.
Is there a way I can debug this further?
The servers are 8x dual 8 core with 64G
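Beyond the callpool counts, two built-in tools usually help; a sketch,
assuming a hypothetical volume named vol0:

  # dump brick state, including the frames behind the pending calls
  # (files land under /var/run/gluster/ by default)
  gluster volume statedump vol0
  # per-FOP latency and call counts
  gluster volume profile vol0 start
  gluster volume profile vol0 info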
2018 Feb 16
0
slow lstat on 3.12 disperse volume
Hi
We've recently done some testing with a 3.12 disperse cluster. The
performance of filesystem stat calls was terrible, taking multiple seconds.
We dumped client-side stats to see what was going on and noticed gluster
STAT was the culprit. tcpdump shows a STAT call being sent and replied to
quite fast, but still the client hangs for multiple seconds before
returning.
After downgrading
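For reference, the client-side stats dump can be triggered straight from a
FUSE mount through the io-stats translator; a sketch, assuming a
hypothetical mount at /mnt/gv0 (the exact dump location varies by version):

  # ask the client to write its io-stats counters out
  setfattr -n trusted.io-stats-dump -v /tmp/client-stats /mnt/gv0
  # capture the wire traffic at the same time, as described above
  # (<brick-server> is a placeholder for one of the brick hosts)
  tcpdump -i any -w /tmp/stat.pcap host <brick-server>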
2017 Sep 04
2
high (sys) cpu util and high load
Hi
I'm seeing quite high sys CPU utilisation and an increased system load over
the past few days on my servers. It appears it doesn't start at exactly the
same time on the different servers, but I've not (yet) been able to pin
the CPU usage to a specific task or to entries in the logs.
The cluster is running distribute with 8 nodes and 4 bricks each, version 3.10
The nodes have 32 (HT) cores
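When sys time climbs without an obvious log entry, per-thread and
kernel-level views are the usual next step; a generic sketch, nothing
Gluster-specific assumed:

  # per-thread CPU for the brick daemons
  top -H -p $(pgrep -d, glusterfsd)
  # kernel functions consuming the time
  perf top
  # user vs sys breakdown per process, sampled every 5 seconds
  pidstat -u 5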