Xiubo Li
2019-Mar-21 03:29 UTC
[Gluster-users] Network Block device (NBD) on top of glusterfs
All,

I am one of the contributors to the gluster-block <https://github.com/gluster/gluster-block>[1] project, and I also contribute to the Linux kernel and the open-iscsi <https://github.com/open-iscsi> project.[2]

NBD has been around for some time, but recently the Linux kernel's Network Block Device (NBD) was enhanced to work with more devices, and the option to integrate with netlink was added. So I recently set out to provide a glusterfs-client-based NBD driver. Please refer to github issue #633 <https://github.com/gluster/glusterfs/issues/633>[3]; the good news is that I have working code, with most of the basics, at the nbd-runner project <https://github.com/gluster/nbd-runner>[4].

While this email is about announcing the project and asking for more collaboration, I would also like to discuss the placement of the project itself. Currently the nbd-runner project is expected to be shared by our friends at the Ceph project too, to provide an NBD driver for Ceph. I have personally worked closely with some of them while contributing to the open-iSCSI project, and we would like to take this project to great success.

Now a few questions:

1. Can I continue to use http://github.com/gluster/nbd-runner as home
   for this project, even if it is shared by other filesystem projects?

   * I personally am fine with this.

2. Should there be a separate organization for this repo?

   * While it may make sense in future, for now I am not planning to
     start anything new.

It would be great if we have some consensus on this soon, as nbd-runner is a new repository. If there are no concerns, I will continue to contribute to the existing repository.

Regards,
Xiubo Li (@lxbsz)

[1] - https://github.com/gluster/gluster-block
[2] - https://github.com/open-iscsi
[3] - https://github.com/gluster/glusterfs/issues/633
[4] - https://github.com/gluster/nbd-runner
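For readers new to libgfapi: a glusterfs-client-based NBD backend ultimately turns each NBD request into gfapi calls against a file on the volume. Below is a minimal sketch of that data path; the volume name "testvol", server "gfs-server", and backing file "block0" are all invented for illustration, and this is not nbd-runner's actual code.

/* Minimal sketch: open a file on a gluster volume via libgfapi and
 * service one read, the way an NBD backend might. Volume, host and
 * file names are assumptions. Build roughly as: gcc sketch.c -lgfapi */
#include <stdio.h>
#include <stdlib.h>
#include <fcntl.h>
#include <glusterfs/api/glfs.h>

int main(void)
{
    char buf[4096];

    glfs_t *fs = glfs_new("testvol");           /* volume name (assumed) */
    if (!fs)
        return EXIT_FAILURE;

    /* point the client at any server that can hand out the volfile */
    glfs_set_volfile_server(fs, "tcp", "gfs-server", 24007);
    if (glfs_init(fs) != 0) {
        fprintf(stderr, "glfs_init failed\n");
        glfs_fini(fs);
        return EXIT_FAILURE;
    }

    /* the file backing the exported block device (assumed name) */
    glfs_fd_t *fd = glfs_open(fs, "block0", O_RDWR);
    if (!fd) {
        glfs_fini(fs);
        return EXIT_FAILURE;
    }

    /* read 4K at offset 0 -- an NBD_CMD_READ would map onto this */
    glfs_lseek(fd, 0, SEEK_SET);
    ssize_t n = glfs_read(fd, buf, sizeof(buf), 0);
    printf("read %zd bytes\n", n);

    glfs_close(fd);
    glfs_fini(fs);
    return EXIT_SUCCESS;
}

In a real driver, each NBD_CMD_READ/NBD_CMD_WRITE arriving on the kernel socket would be translated into the corresponding glfs_read/glfs_write at the request's offset.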
Prasanna Kalever
2019-Mar-21 10:09 UTC
[Gluster-users] [Gluster-devel] Network Block device (NBD) on top of glusterfs
On Thu, Mar 21, 2019 at 9:00 AM Xiubo Li <xiubli at redhat.com> wrote:
> All,
>
> I am one of the contributors to the gluster-block
> <https://github.com/gluster/gluster-block>[1] project, and I also
> contribute to the Linux kernel and the open-iscsi
> <https://github.com/open-iscsi> project.[2]
>
> NBD has been around for some time, but recently the Linux kernel's
> Network Block Device (NBD) was enhanced to work with more devices,
> and the option to integrate with netlink was added. So I recently set
> out to provide a glusterfs-client-based NBD driver. Please refer to
> github issue #633 <https://github.com/gluster/glusterfs/issues/633>[3];
> the good news is that I have working code, with most of the basics,
> at the nbd-runner project <https://github.com/gluster/nbd-runner>[4].
>
> While this email is about announcing the project and asking for more
> collaboration, I would also like to discuss the placement of the
> project itself. Currently the nbd-runner project is expected to be
> shared by our friends at the Ceph project too, to provide an NBD
> driver for Ceph. I have personally worked closely with some of them
> while contributing to the open-iSCSI project, and we would like to
> take this project to great success.
>
> Now a few questions:
>
> 1. Can I continue to use http://github.com/gluster/nbd-runner as home
>    for this project, even if it is shared by other filesystem projects?
>
>    * I personally am fine with this.
>
> 2. Should there be a separate organization for this repo?
>
>    * While it may make sense in future, for now I am not planning to
>      start anything new.
>
> It would be great if we have some consensus on this soon, as
> nbd-runner is a new repository. If there are no concerns, I will
> continue to contribute to the existing repository.

Thanks Xiubo Li, for finally sending this email out.

Since this email is out on the gluster mailing list, I would like to take a stand from the gluster community point of view *only* and share my views.

My honest answer is: if we want to maintain this within the gluster org, then 80% of the effort is common to, or duplicates, what we have done all these days with gluster-block, such as:

* rpc/socket code
* cli/daemon parser/helper logic
* gfapi util functions
* logger framework
* inotify & dyn-config threads
* configure/Makefile/spec files
* docs about Gluster, etc.

The gluster-block repository is actually a home for all the block-related stuff within gluster, and it is designed to accommodate similar functionality. If I were you, I would have simply copied nbd-runner.c into https://github.com/gluster/gluster-block/tree/master/daemon/, just like Ceph does it here: https://github.com/ceph/ceph/blob/master/src/tools/rbd_nbd/rbd-nbd.cc, and be done.

Advantages of keeping the nbd client within gluster-block:

-> No worries about the code-maintenance burden
-> No worries about monitoring a new component
-> Shipping packages to fedora/centos/rhel is already handled
-> It helps improve and stabilize the current gluster-block framework
-> We can build a common CI
-> We can reuse a common test framework, etc.

If you are under the impression that gluster-block is just for management, then I would really like to correct you at this point.
Some of my near-future plans for gluster-block:

* Allow exporting blocks with FUSE access via a fileIO backstore to
  improve large-file workloads; draft:
  https://github.com/gluster/gluster-block/pull/58
* Accommodate kernel loopback handling for local-only applications
  (see the short sketch after this message)
* Accommodate the nbd app/client the same way; IMHO this effort
  shouldn't take more than a day or two to get merged within
  gluster-block and be ready for a release.

Hope that clarifies it.

Best Regards,
--
Prasanna

> Regards,
> Xiubo Li (@lxbsz)
>
> [1] - https://github.com/gluster/gluster-block
> [2] - https://github.com/open-iscsi
> [3] - https://github.com/gluster/glusterfs/issues/633
> [4] - https://github.com/gluster/nbd-runner
> _______________________________________________
> Gluster-devel mailing list
> Gluster-devel at gluster.org
> https://lists.gluster.org/mailman/listinfo/gluster-devel
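For context on the "kernel loopback handling" item above: the kernel's loop driver exposes a control node that hands out free devices, to which a daemon can then bind a backing file. A minimal sketch, assuming a made-up backing file path and root privileges; this is illustrative only, not gluster-block code.

/* Sketch: attach a backing file to a kernel loop device via
 * /dev/loop-control. The backing file path is invented; run as root. */
#include <stdio.h>
#include <fcntl.h>
#include <unistd.h>
#include <sys/ioctl.h>
#include <linux/loop.h>

int main(void)
{
    int ctl = open("/dev/loop-control", O_RDWR);
    if (ctl < 0) { perror("open /dev/loop-control"); return 1; }

    /* ask the loop driver for a free device number */
    int devnr = ioctl(ctl, LOOP_CTL_GET_FREE);
    if (devnr < 0) { perror("LOOP_CTL_GET_FREE"); return 1; }

    char path[32];
    snprintf(path, sizeof(path), "/dev/loop%d", devnr);

    int loopfd  = open(path, O_RDWR);
    int backing = open("/var/tmp/block0.img", O_RDWR);  /* assumed file */
    if (loopfd < 0 || backing < 0) { perror("open"); return 1; }

    /* bind the backing file to the loop device */
    if (ioctl(loopfd, LOOP_SET_FD, backing) < 0) {
        perror("LOOP_SET_FD");
        return 1;
    }
    printf("%s is now backed by /var/tmp/block0.img\n", path);
    return 0;
}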
Xiubo Li
2019-Mar-23 00:47 UTC
[Gluster-users] Network Block device (NBD) on top of glusterfs
On 2019/3/21 11:29, Xiubo Li wrote:
> All,
>
> I am one of the contributors to the gluster-block
> <https://github.com/gluster/gluster-block>[1] project, and I also
> contribute to the Linux kernel and the open-iscsi
> <https://github.com/open-iscsi> project.[2]
>
> NBD has been around for some time, but recently the Linux kernel's
> Network Block Device (NBD) was enhanced to work with more devices,
> and the option to integrate with netlink was added. So I recently set
> out to provide a glusterfs-client-based NBD driver. Please refer to
> github issue #633 <https://github.com/gluster/glusterfs/issues/633>[3];
> the good news is that I have working code, with most of the basics,
> at the nbd-runner project <https://github.com/gluster/nbd-runner>[4].

As mentioned, nbd-runner (NBD protocol) will work at the same layer as tcmu-runner (iSCSI protocol); it is not trying to replace the great gluster-block/ceph-iscsi-gateway projects. It just provides the common library that does the low-level stuff, such as the sysfs/netlink operations and the IOs from the NBD kernel socket, just as the great tcmu-runner project does the sysfs/uio operations and the IOs from the kernel SCSI/iSCSI side. The nbd-cli tool will work like iscsi-initiator-utils, and the nbd-runner daemon will work like the tcmu-runner daemon; that's all.

In tcmu-runner the different backend storages have separate handlers: a glfs.c handler for Gluster, an rbd.c handler for Ceph, and so on. What the handlers do is the actual IO with the backend storage service, once the IO-path setup has been done by ceph-iscsi-gateway/gluster-block.

Then we can support all kinds of backend storage, such as Gluster/Ceph/Azure, each as one separate handler in nbd-runner, and the handlers need not care about updates and changes in the low-level NBD stuff.

Thanks.

> While this email is about announcing the project and asking for more
> collaboration, I would also like to discuss the placement of the
> project itself. Currently the nbd-runner project is expected to be
> shared by our friends at the Ceph project too, to provide an NBD
> driver for Ceph. I have personally worked closely with some of them
> while contributing to the open-iSCSI project, and we would like to
> take this project to great success.
>
> Now a few questions:
>
> 1. Can I continue to use http://github.com/gluster/nbd-runner as home
>    for this project, even if it is shared by other filesystem projects?
>
>    * I personally am fine with this.
>
> 2. Should there be a separate organization for this repo?
>
>    * While it may make sense in future, for now I am not planning to
>      start anything new.
>
> It would be great if we have some consensus on this soon, as
> nbd-runner is a new repository. If there are no concerns, I will
> continue to contribute to the existing repository.
>
> Regards,
> Xiubo Li (@lxbsz)
>
> [1] - https://github.com/gluster/gluster-block
> [2] - https://github.com/open-iscsi
> [3] - https://github.com/gluster/glusterfs/issues/633
> [4] - https://github.com/gluster/nbd-runner
>
> _______________________________________________
> Gluster-users mailing list
> Gluster-users at gluster.org
> https://lists.gluster.org/mailman/listinfo/gluster-users
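To make the per-backend handler idea above concrete, here is a sketch of what such an operations table could look like. Every name in it is hypothetical, invented for illustration; it is not nbd-runner's or tcmu-runner's actual interface.

/* Hypothetical sketch of the per-backend "handler" design described
 * above (one handler per storage backend, in the spirit of
 * tcmu-runner's glfs.c/rbd.c). All names are invented. */
#include <stddef.h>
#include <sys/types.h>

struct nbd_dev;   /* opaque per-export state owned by the handler */

struct nbd_handler {
    const char *name;                        /* "gluster", "ceph", ... */
    int  (*open)(struct nbd_dev *dev, const char *cfgstring);
    void (*close)(struct nbd_dev *dev);
    ssize_t (*read)(struct nbd_dev *dev, void *buf,
                    size_t count, off_t offset);
    ssize_t (*write)(struct nbd_dev *dev, const void *buf,
                     size_t count, off_t offset);
    int  (*flush)(struct nbd_dev *dev);
};

/* The daemon core services NBD requests from the kernel socket and
 * dispatches them through the table, never touching gfapi/librbd
 * directly: */
ssize_t handle_nbd_read(struct nbd_handler *h, struct nbd_dev *d,
                        void *buf, size_t count, off_t offset)
{
    return h->read(d, buf, count, offset);
}

With a table like this, supporting Azure or any other backend is just one more struct nbd_handler instance, while the daemon core that speaks to the kernel NBD socket stays unchanged, which is exactly the split described in the message above.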