Hey All,

Now that 3.7 is out, here are some thoughts on how we can shape up 3.8. I am thinking of releasing Gluster 3.8 towards the end of this year. Here is a tentative list of things that we are contemplating for 3.8:

1. Improvements for "Storage as a Service"

"Storage as a Service" broadly refers to the model where storage can be provisioned or decommissioned on demand, caters to single- or multi-tenant workloads, and can be provisioned in a completely automated fashion. Storage as a Service is what public/private clouds use as a building block today. By selecting enhancements and improvements that fit this paradigm, we can make Gluster easier to adopt in modern datacenters. The following are sample use cases/workloads that can benefit from Gluster improvements:

- Manila, the File Share as a Service project in OpenStack
- Shared storage for containers
- Any deployment where shares are created as a service

Enhancements that can be accomplished in this release include:

a. Intelligent volume provisioning through Heketi [1] (a rough sketch of what this could look like follows at the end of this mail)
b. Kerberos support for the GlusterFS protocol
c. Better network management support [2]

2. Regression test & quality improvements

We have zeroed in on distaf [3] as the framework of choice for adding multi-node regression tests. This will augment the single-node pre-commit regression tests that we already run today with Jenkins. I expect passing distaf tests to become a gating factor for GA of all releases from 3.8 onwards. Here is what we would like to do in this release cycle:

a. All Gluster components to have tests populated in distaf
b. CI using Jenkins for running distaf tests on nightlies/release candidates

3. Storage for containers

There has been significant attention on storage for containers recently. We can cater to this interest by picking specific improvements for container storage, such as:

a. Shared storage for applications in containers (already possible with NFS today); explore how we can do this with the native client etc.
b. Shared storage for docker/container repositories
c. Hyperconvergence of containers & storage

4. Hyperconvergence with oVirt

There is an ongoing effort to achieve hyperconvergence of Gluster with oVirt for storing virtual machine images in a single cluster [4]. Improvements like the following can help make Gluster a better fit for hyperconvergence:

a. Throttling for maintenance operations in Gluster (self-healing/rebalance etc.)
b. Ensuring data locality for virtual machine images
c. Integration of sharding for hyperconvergence (expected to land sooner than 3.8)

5. Performance improvements

a. Continue the ongoing small-file performance improvements [5]
b. Multi-threaded self-heal daemon for improving performance of self-healing

6. Other improvements, such as full-fledged IPv6 support, delegations/lease-lock improvements, more policies for tiering, support for systematic erasure codes, support for a native object service etc., are also planned.

There are other improvements which are being planned and have not found a mention here. If you are aware of such improvements, please reply to this thread. I will be collating this information and publishing a release planning page for 3.8 on gluster.org.

If you have read all the way to here, we would be interested in knowing the following:

(i) What are your thoughts on the plan?
(ii) What other improvements would you be interested in seeing?

Thoughts and feedback would be very welcome!
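To make 1(a) a little more concrete, here is a minimal sketch of what on-demand volume provisioning through Heketi could look like from a consumer's point of view. Heketi exposes a REST interface; the endpoint address, port and payload shape below are assumptions based on the project's documentation [1] and may well change as the project evolves:

```python
import requests  # third-party: pip install requests

# Assumed Heketi service endpoint -- placeholder address and port.
HEKETI_URL = "http://heketi.example.com:8080"

def provision_volume(size_gb, replica=3):
    """Ask Heketi to provision a replicated volume of `size_gb` GB.

    The request shape is an assumption based on the Heketi docs;
    the real API may differ (e.g. creation may be asynchronous).
    """
    payload = {
        "size": size_gb,
        "durability": {
            "type": "replicate",
            "replicate": {"replica": replica},
        },
    }
    resp = requests.post(HEKETI_URL + "/volumes", json=payload)
    resp.raise_for_status()
    return resp.json()  # expected to include the volume name and mount info

if __name__ == "__main__":
    vol = provision_volume(100)
    print("Provisioned:", vol)
```

The point is less the exact API than the model: a tenant (or Manila, or a container orchestrator) asks for N gigabytes at a given durability level and gets back a mountable share, with brick placement decided by Heketi rather than by hand.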
Thanks,
Vijay

[1] https://github.com/heketi/heketi/
[2] http://www.gluster.org/community/documentation/index.php/Features/SplitNetwork
[3] https://github.com/gluster/distaf
[4] http://www.ovirt.org/Features/GlusterFS-Hyperconvergence
[5] http://www.gluster.org/community/documentation/index.php/Features/Feature_Smallfile_Perf
Vijay,

Instead of thinking about the next version, shouldn't you be trying to get at least one truly stable version out first? 3.5 still has problems, and the only things I've been reading about 3.6 and 3.7 are problems, so they are far from stable. Releasing a new version instead of fixing the current one is merely driving people away from Gluster.

regards,
John

On 09/07/15 16:57, Vijay Bellur wrote:
> [Vijay's 3.8 planning mail quoted in full; trimmed]
Good to see what's coming up.

Another area where we would like to see improvements is alerts/notifications. Most of the time, the user is not notified instantly when something bad happens under the hood; the issue only comes to the user's attention when it starts affecting the data flow. Some of the important events are (but not limited to):

- brick process crash
- split-brain (requires manual fixing)
- geo-rep session going faulty
- quota limit reached

We need a way to convey these events to an external system/module which further notifies the user through different means like SNMP, email etc.
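For illustration, here is a minimal sketch of the kind of external watcher this could start from: it polls the gluster CLI and mails an alert when it sees trouble. The volume name, mail addresses and SMTP setup are placeholders, the output parsing is simplistic (the CLI output wording varies between releases), and a real implementation would hopefully get events pushed from gluster itself rather than polling:

```python
import smtplib
import subprocess
import time
from email.mime.text import MIMEText

VOLUME = "myvol"                    # placeholder volume name
ALERT_FROM = "gluster@example.com"  # placeholder addresses
ALERT_TO = "admin@example.com"

def run(cmd):
    """Run a CLI command, returning its stdout ('' on failure)."""
    try:
        return subprocess.check_output(cmd, universal_newlines=True)
    except (subprocess.CalledProcessError, OSError):
        return ""

def split_brain_entries():
    """Count files reported in split-brain by the heal info command.

    `gluster volume heal <vol> info split-brain` prints, per brick,
    a line like "Number of entries: N"; the exact wording differs a
    bit across releases, so treat this parsing as illustrative.
    """
    out = run(["gluster", "volume", "heal", VOLUME, "info", "split-brain"])
    total = 0
    for line in out.splitlines():
        if line.strip().startswith("Number of entries"):
            total += int(line.rsplit(":", 1)[1])
    return total

def send_alert(subject, body):
    msg = MIMEText(body)
    msg["Subject"] = subject
    msg["From"] = ALERT_FROM
    msg["To"] = ALERT_TO
    s = smtplib.SMTP("localhost")  # assumes a local MTA is running
    try:
        s.sendmail(ALERT_FROM, [ALERT_TO], msg.as_string())
    finally:
        s.quit()

if __name__ == "__main__":
    while True:
        n = split_brain_entries()
        if n:
            send_alert("gluster alert: split-brain on %s" % VOLUME,
                       "%d entries in split-brain need manual fixing." % n)
        time.sleep(60)  # poll once a minute
```

The same loop could grow checks for brick process status (`gluster volume status`), geo-replication health and quota usage, and the send_alert hook could just as well feed an SNMP trap sender instead of SMTP.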
Thanks,
Kanagaraj

On 07/09/2015 12:27 PM, Vijay Bellur wrote:
> [Vijay's 3.8 planning mail quoted in full; trimmed]