Hi,

We have been thinking of exploiting GPU capabilities to enhance the performance of glusterfs, and we would like to know others' thoughts on this.

In EC, we do CPU-intensive computations to encode and decode data before writing and reading. This requires a lot of CPU cycles, and we have been observing 100% CPU usage on the client side. Data healing has the same impact, since it also needs to go through a read-decode-encode-write cycle.

As most modern servers come with GPUs, making glusterfs GPU-ready might give us performance improvements. This is not specific to EC volumes; there are other features which require a lot of computation and could use this capability, for example:

1 - Encryption/decryption
2 - Compression and de-duplication
3 - Hashing
4 - Any other? [Please add if you have something in mind]

Before proceeding further we would like to have your input on this.
Do you have any other use case (existing or future) which could perform better on a GPU?
Do you think it is worth integrating GPUs with glusterfs, or could this performance gain be achieved in some other, better way?
Any input on how we should implement it is welcome.

There is a GitHub issue open for this. Please comment there or reply to this mail.

A - https://github.com/gluster/glusterfs/issues/388

---
Ashish
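To make the encode hot spot concrete: in a disperse volume, each parity byte is a small GF(2^8) dot product over the corresponding data-fragment bytes, and every byte offset is independent of every other, which is exactly the shape of work a GPU handles well. A minimal CUDA sketch of that mapping, assuming a 4+2 layout, a precomputed 256x256 multiplication table, and invented names throughout (this is an illustration, not glusterfs's actual EC code):

#include <stdint.h>
#include <stddef.h>

#define EC_K 4  /* data fragments   (assumed 4+2 layout) */
#define EC_M 2  /* parity fragments */

/* One thread computes one byte offset across all parity fragments.
 * Launch as, e.g.: ec_encode<<<(frag_len + 255) / 256, 256>>>(...); */
__global__ void ec_encode(const uint8_t *data,   /* EC_K fragments, frag_len bytes each */
                          uint8_t *parity,       /* EC_M fragments, frag_len bytes each */
                          const uint8_t *gf_mul, /* 256*256 GF(2^8) product table */
                          const uint8_t *matrix, /* EC_M x EC_K coding coefficients */
                          size_t frag_len)
{
    size_t byte = blockIdx.x * (size_t)blockDim.x + threadIdx.x;
    if (byte >= frag_len)
        return;
    for (int p = 0; p < EC_M; p++) {
        uint8_t acc = 0;
        for (int d = 0; d < EC_K; d++)
            acc ^= gf_mul[matrix[p * EC_K + d] * 256 +
                          data[d * frag_len + byte]];
        parity[p * frag_len + byte] = acc;
    }
}

The open question such a sketch does not answer is whether the PCIe cost of shipping fragments to and from the device leaves any net gain at typical I/O sizes; that would need measuring.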
Milind Changire
2018-Jan-11 06:47 UTC
[Gluster-users] [Gluster-devel] Integration of GPU with glusterfs
Bit-rot detection is another feature that consumes a lot of CPU to calculate file content hashes.

On Thu, Jan 11, 2018 at 11:42 AM, Ashish Pandey <aspandey at redhat.com> wrote:

> [...]

--
Milind
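Chunk-level hashing parallelizes in a similar way, since each chunk's digest is independent. A toy CUDA sketch, using 64-bit FNV-1a purely because it fits in a few lines (bit-rot's real hash is a cryptographic digest, and every name here is an assumption, not glusterfs code):

#include <stdint.h>
#include <stddef.h>

/* One thread hashes one fixed-size chunk, so a large file's chunk
 * digests can be computed concurrently. FNV-1a is a stand-in here,
 * not the hash bit-rot actually uses. */
__global__ void chunk_fnv1a(const uint8_t *buf, size_t chunk_sz,
                            size_t nchunks, uint64_t *digests)
{
    size_t c = blockIdx.x * (size_t)blockDim.x + threadIdx.x;
    if (c >= nchunks)
        return;
    const uint8_t *p = buf + c * chunk_sz;
    uint64_t h = 1469598103934665603ULL;   /* FNV-1a offset basis */
    for (size_t i = 0; i < chunk_sz; i++) {
        h ^= p[i];
        h *= 1099511628211ULL;             /* FNV-1a prime */
    }
    digests[c] = h;
}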
I like the idea immensely, as long as the GPU usage can be specified as server-only, client and server, or client and server with a client limit of X; I don't want to take GPU cycles away from machine learning for file I/O. It must also support multiple GPUs and GPU pinning (see the device-selection sketch after this message). It would be really useful for encryption/decryption on the client side.

On January 11, 2018 1:12:43 AM EST, Ashish Pandey <aspandey at redhat.com> wrote:

> [...]

--
Sent from my Android device with K-9 Mail. All tyopes are thumb related and reflect authenticity.
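On the pinning point, a minimal sketch of per-process device selection, assuming a hypothetical volume option such as gpu.device-id (the option name and the function are invented for illustration; only the CUDA runtime calls are real):

#include <cuda_runtime.h>

/* Pin the calling host thread to one GPU, or signal that the CPU
 * path should be used. "requested" would come from a hypothetical
 * gpu.device-id option; that is not an existing glusterfs option. */
static int pick_gpu(int requested)
{
    int count = 0;
    if (cudaGetDeviceCount(&count) != cudaSuccess || count == 0)
        return -1;   /* no usable GPU: fall back to the CPU code path */
    int dev = (requested >= 0 && requested < count) ? requested : 0;
    if (cudaSetDevice(dev) != cudaSuccess)
        return -1;
    return dev;      /* later CUDA calls in this thread use this device */
}

A client-side usage limit of the kind requested above would then be a matter of capping how much work is queued to the chosen device, rather than anything CUDA enforces by itself.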
Sounds like a good option to look into, but I wouldn't want it to take time & resources away from other, non-GPU-based methods of improving this, mainly because I don't have discrete GPUs in most of my systems. While I could add them to my main server cluster pretty easily, many of my clients are 1U or blade systems and have no real possibility of having a GPU added.

It would also add physical resource requirements to future client deploys, requiring more than 1U for the server (most likely), and I'm not likely to want to do this if I'm trying to optimize for client density, especially with the cost of GPUs today.

> From: Ashish Pandey <aspandey at redhat.com>
> Subject: [Gluster-users] Integration of GPU with glusterfs
> Date: January 11, 2018 at 12:12:43 AM CST
> To: Gluster Users
> Cc: Gluster Devel
>
> [...]
Pranith Kumar Karampuri
2018-Jan-12 03:55 UTC
[Gluster-users] Integration of GPU with glusterfs
On Thu, Jan 11, 2018 at 10:44 PM, Darrell Budic <budic at onholyground.com> wrote:

> Sounds like a good option to look into, but I wouldn't want it to take
> time & resources away from other, non-GPU-based methods of improving this,
> mainly because I don't have discrete GPUs in most of my systems. While I
> could add them to my main server cluster pretty easily, many of my clients
> are 1U or blade systems and have no real possibility of having a GPU added.

In 3.9.0 Xavi added support for using hardware extensions on the CPU when available; it uses special instructions if any of "x64", "sse", "avx", etc. are present. There is also a plan to do systematic encoding in the future, which will be different from the current way of encoding. We need to check how performance changes with that as well.

> It would also add physical resource requirements to future client deploys,
> requiring more than 1U for the server (most likely), and I'm not likely to
> want to do this if I'm trying to optimize for client density, especially
> with the cost of GPUs today.
>
> [...]

--
Pranith
On 12/01/2018 3:14 AM, Darrell Budic wrote:

> It would also add physical resource requirements to future client
> deploys, requiring more than 1U for the server (most likely), and I'm
> not likely to want to do this if I'm trying to optimize for client
> density, especially with the cost of GPUs today.

Nvidia has now banned their GPUs from being used in data centers too; I imagine they are planning to add a licensing fee.

--
Lindsay Mathieson