2018 Jan 11
0
Integration of GPU with glusterfs
Sounds like a good option to look into, but I wouldn't want it to take time & resources away from other, non-GPU-based methods of improving this. Mainly because I don't have discrete GPUs in most of my systems. While I could add them to my main server cluster pretty easily, many of my clients are 1U or blade systems and have no real possibility of having a GPU added.
It would also add
2018 Jan 15
2
[Gluster-devel] Integration of GPU with glusterfs
It is disappointing to see the limitation being put by Nvidia on low-cost GPU usage in data centers.
https://www.theregister.co.uk/2018/01/03/nvidia_server_gpus/
We thought of providing an option in glusterfs by which we can control whether we want to use the GPU or not.
So the concern of gluster eating up GPUs that could be used by others can be addressed.
---
Ashish
----- Original
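Purely as an illustration of what such a switch could look like if it were exposed as a normal per-volume option (the option name below is hypothetical and does not exist in glusterfs):

# hypothetical option name, shown only to illustrate an on/off switch per volume
gluster volume set <volname> disperse.gpu-offload off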
2018 Jan 15
0
[Gluster-devel] Integration of GPU with glusterfs
On Mon, Jan 15, 2018 at 12:06 AM, Ashish Pandey <aspandey at redhat.com> wrote:
>
> It is disappointing to see the limitation being put by Nvidia on low-cost
> GPU usage in data centers.
> https://www.theregister.co.uk/2018/01/03/nvidia_server_gpus/
>
> We thought of providing an option in glusterfs by which we can control
> whether we want to use the GPU or not.
> So, the
2017 Jun 01
3
Heal operation detail of EC volumes
Hi Serkan,
On 30/05/17 10:22, Serkan Çoban wrote:
> Ok, I understand that the heal operation takes place on the server side. In
> this case I should see X KB of outgoing network traffic from each of the 16 servers
> and 16X KB of incoming traffic to the failed brick server, right? So that process
> will get 16 chunks, recalculate the missing chunk and write it to disk.
That should be the normal operation for a single
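As a rough worked example of that traffic estimate (assuming a 16+4 disperse volume and a fragment size of 1 MB, ignoring protocol overhead):

each healthy brick sends:       1 MB   (16 bricks x 1 MB = 16 MB on the wire)
failed brick server receives:   16 x 1 MB = 16 MB
failed brick server writes:     1 MB   (the reconstructed fragment)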
2017 Nov 14
2
Error logged in fuse-mount log file
I remember we have fixed 2 issues where this kind of error message was coming and we were also seeing issues on mount.
In one of the cases the problem was in dht. Unfortunately, I don't remember the BZs for those issues.
As glusterfs 3.10.1 is an old version, I would request you to please upgrade to the latest one. I am sure it
would have the fix.
----
Ashish
----- Original
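If you want to confirm which release is actually installed on the clients and servers before planning the upgrade, a quick check (assuming a standard installation) is:

glusterfs --version    # on the fuse clients
gluster --version      # on the servers (CLI package)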
2017 Jun 01
0
Heal operation detail of EC volumes
>Is it possible that this matches your observations?
Yes, that matches what I see. So 19 files are being healed in parallel by 19
SHD processes. I thought only one file was being healed at a time.
Then what is the meaning of the disperse.shd-max-threads parameter? If I
set it to 2, will each SHD thread heal two files at the same time?
>How many IOPS can your bricks handle?
Bricks are 7200RPM NL-SAS
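For reference, the parameter can be inspected and changed per volume; a minimal example (replace <volname> with your volume name):

gluster volume get <volname> disperse.shd-max-threads     # show the current value
gluster volume set <volname> disperse.shd-max-threads 2   # let each SHD heal up to 2 files in parallel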
2017 Aug 31
1
error msg in the glustershd.log
Based on this BZ https://bugzilla.redhat.com/show_bug.cgi?id=1414287
it has been fixed in glusterfs-3.11.0
---
Ashish
----- Original Message -----
From: "Amudhan P" <amudhan83 at gmail.com>
To: "Ashish Pandey" <aspandey at redhat.com>
Cc: "Gluster Users" <gluster-users at gluster.org>
Sent: Thursday, August 31, 2017 1:07:16 PM
Subject:
2017 Nov 14
0
Error logged in fuse-mount log file
On 14 November 2017 at 08:36, Ashish Pandey <aspandey at redhat.com> wrote:
>
> I remember we have fixed 2 issues where this kind of error message was
> coming and we were also seeing issues on mount.
> In one of the cases the problem was in dht. Unfortunately, I don't
> remember the BZs for those issues.
>
I think the DHT BZ you are referring to is 1438423
2017 Sep 26
2
df command shows transport endpoint mount error on gluster client v.3.10.5 + core dump
Hi Ashish,
in the attachment you can find the gdb output (with bt and thread outputs) and the complete log file up to the crash.
Thank you for your support.
Mauro Tridici
> On 26 Sep 2017, at 10:11, Ashish Pandey <aspandey at redhat.com> wrote:
>
> Hi,
>
> Following are the commands to get the debug info for gluster -
> gdb
2017 Aug 29
2
error msg in the glustershd.log
I am using 3.10.1. From which version is this update available?
On Tue, Aug 29, 2017 at 5:03 PM, Ashish Pandey <aspandey at redhat.com> wrote:
>
> Whenever we do some fop on a file on an EC volume, we also check the xattr to
> see if the file is healthy or not. If not, we trigger heal.
> lookup is the fop for which we don't take the inodelk lock, so it is possible
> that the
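If you want to look at that health metadata yourself, the EC xattrs can be dumped directly from a brick; a minimal sketch (the brick path below is only an example):

# run on the server, against the file's path inside the brick, not the fuse mount
getfattr -d -m . -e hex /data/brick1/gv0/path/to/file
# compare trusted.ec.version, trusted.ec.dirty and trusted.ec.size across bricks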
2017 Jun 08
1
Heal operation detail of EC volumes
On Fri, Jun 2, 2017 at 1:01 AM, Serkan Çoban <cobanserkan at gmail.com> wrote:
> >Is it possible that this matches your observations?
> Yes, that matches what I see. So 19 files are being healed in parallel by 19
> SHD processes. I thought only one file was being healed at a time.
> Then what is the meaning of the disperse.shd-max-threads parameter? If I
> set it to 2 then each SHD
2018 Jan 12
3
Integration of GPU with glusterfs
On 12/01/2018 3:14 AM, Darrell Budic wrote:
> It would also add physical resource requirements to future client
> deploys, requiring more than 1U for the server (most likely), and I'm
> not likely to want to do this if I'm trying to optimize for client
> density, especially with the cost of GPUs today.
Nvidia has banned their GPUs from being used in data centers now too, I
imagine
2017 Sep 27
0
df command shows transport endpoint mount error on gluster client v.3.10.5 + core dump
Hi Ashish,
I'm sorry to disturb you again, but I would like to know if you received the log files correctly.
Thank you,
Mauro Tridici
> On 26 Sep 2017, at 10:19, Mauro Tridici <mauro.tridici at cmcc.it> wrote:
>
>
> Hi Ashish,
>
> in the attachment you can find the gdb output (with bt and thread outputs) and the complete log file up to the crash.
2017 Jun 02
1
Heal operation detail of EC volumes
Hi Serkan,
On Thursday, June 01, 2017 21:31 CEST, Serkan Çoban <cobanserkan at gmail.com> wrote:
>Is it possible that this matches your observations?
Yes, that matches what I see. So 19 files are being healed in parallel by 19
SHD processes. I thought only one file was being healed at a time.
Then what is the meaning of the disperse.shd-max-threads parameter? If I
set it to 2 then each SHD thread
2017 Aug 31
0
error msg in the glustershd.log
Ashish, in which version has this issue been fixed?
On Tue, Aug 29, 2017 at 6:38 PM, Amudhan P <amudhan83 at gmail.com> wrote:
> I am using 3.10.1. From which version is this update available?
>
>
> On Tue, Aug 29, 2017 at 5:03 PM, Ashish Pandey <aspandey at redhat.com>
> wrote:
>
>>
>> Whenever we do some fop on EC volume on a file, we check the xattr also
2018 Jan 12
0
Integration of GPU with glusterfs
On January 11, 2018 10:58:28 PM EST, Lindsay Mathieson <lindsay.mathieson at gmail.com> wrote:
>On 12/01/2018 3:14 AM, Darrell Budic wrote:
>> It would also add physical resource requirements to future client
>> deploys, requiring more than 1U for the server (most likely), and I'm
>
>> not likely to want to do this if I'm trying to optimize for client
>>
2017 Dec 21
3
Wrong volume size with df
Sure!
> 1 - output of gluster volume heal <volname> info
Brick pod-sjc1-gluster1:/data/brick1/gv0
Status: Connected
Number of entries: 0
Brick pod-sjc1-gluster2:/data/brick1/gv0
Status: Connected
Number of entries: 0
Brick pod-sjc1-gluster1:/data/brick2/gv0
Status: Connected
Number of entries: 0
Brick pod-sjc1-gluster2:/data/brick2/gv0
Status: Connected
Number of entries: 0
Brick
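To cross-check what the bricks report against what df shows on the client, something along these lines can help (brick paths as in the heal info output above):

df -h /data/brick1/gv0 /data/brick2/gv0     # on each server, per-brick filesystem usage
gluster volume status <volname> detail      # per-brick total and free disk space as gluster sees it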
2018 Jan 02
0
Wrong volume size with df
For what it's worth here, after I added a hot tier to the pool, the brick
sizes are now reporting the correct size of all bricks combined instead of
just one brick.
Not sure if that gives you any clues for this... maybe adding another brick
to the pool would have a similar effect?
On Thu, Dec 21, 2017 at 11:44 AM, Tom Fite <tomfite at gmail.com> wrote:
> Sure!
>
> > 1 -
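If anyone wants to test the "add another brick" idea, the usual sequence is roughly as follows (the server and brick path are examples; adjust replica/disperse counts to match the volume layout):

gluster volume add-brick <volname> server3:/data/brick1/gv0
gluster volume rebalance <volname> start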
2017 Sep 26
2
df command shows transport endpoint mount error on gluster client v.3.10.5 + core dump
Hi Ashish,
thank you for your answer.
Do you need only the complete client log file, or something else in particular?
Unfortunately, I have never used the 'bt' command. Could you please provide me with a usage example?
I will provide all logs you need.
Thank you again,
Mauro
> On 26 Sep 2017, at 09:30, Ashish Pandey <aspandey at redhat.com> wrote:
>
> Hi Mauro,
>
>
2017 Sep 26
0
df command shows transport endpoint mount error on gluster client v.3.10.5 + core dump
Hi,
Following are the commands to get the debug info for gluster -
gdb /usr/local/sbin/glusterfs <core file path>
Then on gdb prompt you need to type bt or backtrace
(gdb) bt
You can also provide the output of "thread apply all bt".
(gdb) thread apply all bt
The above commands should be executed on the client node on which you have mounted the gluster volume and where the crash happened.
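To capture the same information non-interactively (for attaching to a mail), gdb's batch mode can be used; a minimal sketch (paths are examples):

gdb -batch -ex "bt" -ex "thread apply all bt" /usr/local/sbin/glusterfs /path/to/core > backtrace.txt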