Niels de Vos
2014-Oct-05 12:44 UTC
[Gluster-users] glusterfs-3.5.3beta1 has been released for testing
GlusterFS 3.5.3 (beta1) has been released and is now available for testing. Get the tarball from here:
- http://bits.gluster.org/pub/gluster/glusterfs/src/glusterfs-3.5.3beta1.tar.gz

Packages for different distributions will land on the download server over the next few days. When packages become available, the package maintainers will send a notification to this list.

With this beta release, we make it possible for bug reporters and testers to check whether issues have indeed been fixed. All community members are invited to test and/or comment on this release.

This release for the 3.5 stable series includes the following bug fixes:
- 1081016: glusterd needs xfsprogs and e2fsprogs packages
- 1129527: DHT :- data loss - file is missing on renaming same file from multiple client at same time
- 1129541: [DHT:REBALANCE]: Rebalance failures are seen with error message "remote operation failed: File exists"
- 1132391: NFS interoperability problem: stripe-xlator removes EOF at end of READDIR
- 1133949: Minor typo in afr logging
- 1136221: The memories are exhausted quickly when handle the message which has multi fragments in a single record
- 1136835: crash on fsync
- 1138922: DHT + rebalance : rebalance process crashed + data loss + few Directories are present on sub-volumes but not visible on mount point + lookup is not healing directories
- 1139103: DHT + Snapshot :- If snapshot is taken when Directory is created only on hashed sub-vol; On restoring that snapshot Directory is not listed on mount point and lookup on parent is not healing
- 1139170: DHT :- rm -rf is not removing stale link file and because of that unable to create file having same name as stale link file
- 1139245: vdsm invoked oom-killer during rebalance and Killed process 4305, UID 0, (glusterfs nfs process)
- 1140338: rebalance is not resulting in the hash layout changes being available to nfs client
- 1140348: Renaming file while rebalance is in progress causes data loss
- 1140549: DHT: Rebalance process crash after add-brick and `rebalance start' operation
- 1140556: Core: client crash while doing rename operations on the mount
- 1141558: AFR : "gluster volume heal <volume_name> info" prints some random characters
- 1141733: data loss when rebalance + renames are in progress and bricks from replica pairs goes down and comes back
- 1142052: Very high memory usage during rebalance
- 1142614: files with open fd's getting into split-brain when bricks goes offline and comes back online
- 1144315: core: all brick processes crash when quota is enabled
- 1145000: Spec %post server does not wait for the old glusterd to exit
- 1147243: nfs: volume set help says the rmtab file is in "/var/lib/glusterd/rmtab"

To get more information about the above bugs, go to https://bugzilla.redhat.com, enter the bug number in the search box and press enter.

If a bug from this list has not been sufficiently fixed, please open the bug report, leave a comment with details of the testing and change the status of the bug to ASSIGNED. In case someone has successfully verified a fix for a bug, please change the status of the bug to VERIFIED.

The release notes have been posted for review, and a blog post contains a more readable version:
- http://review.gluster.org/8903
- http://blog.nixpanic.net/2014/10/glusterfs-353beta1-has-been-released.html

Comments in bug reports, over email or on IRC (#gluster on Freenode) are much appreciated.

Thanks for testing,
Niels
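When reporting test results back to the list, it helps to state the exact build under test. A minimal shell sketch for that, assuming a typical Linux node where glusterfs may or may not be installed:

```shell
# Print the running glusterfs version so test feedback and bug comments
# can reference the exact build; falls back to a note if it is not installed.
if command -v glusterfs >/dev/null 2>&1; then
    glusterfs --version | head -n1
else
    echo "glusterfs not installed on this node"
fi
```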
David F. Robinson
2014-Oct-06 14:30 UTC
[Gluster-users] glusterfs-3.5.3beta1 has been released for testing
When I installed the 3.5.3 beta on my HPC cluster, I get the following warning during the mounts:

    WARNING: getfattr not found, certain checks will be skipped..

I do not have attr installed on my compute nodes. Is this something that I need in order for gluster to work properly, or can it safely be ignored?

David
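The warning above refers to the getfattr utility, which on most distributions ships in the attr package. A minimal sketch of how one might check for it on a compute node (the install hint assumes an RPM-based system):

```shell
# Check whether getfattr (the extended-attribute tool from the "attr"
# package) is on the PATH; without it, the mount warns that some checks
# will be skipped.
if command -v getfattr >/dev/null 2>&1; then
    echo "getfattr found: $(command -v getfattr)"
else
    echo "getfattr missing; on RPM-based systems: yum install attr"
fi
```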
Humble Devassy Chirammal
2014-Oct-07 12:09 UTC
[Gluster-users] [Gluster-devel] glusterfs-3.5.3beta1 has been released for testing
Hi All,

JFYI, GlusterFS 3.5.3 beta1 RPMs for el5-7 (RHEL, CentOS, etc.) and Fedora (19, 20, 21, 22) are available at download.gluster.org [1].

[1] http://download.gluster.org/pub/gluster/glusterfs/qa-releases/3.5.3beta1/

--Humble