Hi All,
I have some updates on GlusterFS-3.7.3.
## Issue with upgrades
We've come to know that a new feature in 3.7.3 is causing trouble
during upgrades from previous versions of GlusterFS to 3.7.3. I
apologize for not making note of this earlier. The details of the
feature, the issue, and the workaround are below.
Feature:
In GlusterFS-3.7.3, insecure ports have been enabled by default. This
means that, by default, servers accept connections from insecure ports
and clients use insecure ports to connect to servers. This change
particularly benefits users of libgfapi, for example when it is used
in qemu run by a normal (non-root) user.
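As an illustration of that use case, a qemu invocation backed by
libgfapi could look like the following sketch (the host, volume, and
image names are hypothetical):

    # attach a volume image over libgfapi as an unprivileged user (illustrative names)
    qemu-system-x86_64 -drive file=gluster://server1/myvol/vm1.img,if=virtio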
Issue:
This has caused trouble when performing rolling upgrades from previous
versions to 3.7.3, and when attempting to use 3.7.3 clients with older
servers. The 3.7.3 clients establish connections using insecure ports
by default, but older servers still expect connections to come from
secure ports (if this setting has not been changed). This causes the
servers to reject connections from 3.7.3 clients, and leads to broken
clusters during upgrades and to rejected clients.
Workaround:
There are two possible workarounds. Before upgrading, do one of the
following (example commands are given after the list):
1. Set 'client.bind-insecure off' on all volumes. This forces 3.7.3
clients to use secure ports to connect to the servers. This does not
affect older clients as this setting is the default for them.
2. Set 'server.allow-insecure on' on all volumes. This enables servers
to accept connections from insecure ports as well and allows the new
clients to successfully connect to the servers.
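Both options are applied with the usual volume-set command. A minimal
sketch, assuming a volume named 'myvol' (repeat for each of your
volumes):

    # Workaround 1: force new clients to keep using secure ports
    gluster volume set myvol client.bind-insecure off

    # Workaround 2: let servers also accept connections from insecure ports
    gluster volume set myvol server.allow-insecure on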
If anyone faces any problems with these workarounds, please let us know.
## Other updates
Binary packages have been built. RPMs and debs for Debian are
available at http://download.gluster.org/pub/gluster/glusterfs/3.7/3.7.3/ ,
thanks to Kaleb and Humble. Ubuntu packages are available from
https://launchpad.net/~gluster/+archive/ubuntu/glusterfs-3.7 .
Emmanuel has also updated pkgsrc for NetBSD.
Regards,
Kaushal
On Wed, Jul 29, 2015 at 11:41 AM, Kaushal M <kshlmster at gmail.com>
wrote:
> Hi All,
>
> I'm pleased to announce the release of glusterfs-3.7.3. This release
> includes a lot of bug fixes and stabilizes the 3.7 branch further. The
> summary of the bugs fixed is available at the end of this mail.
>
> The source and RPMs are available at
> http://download.gluster.org/pub/gluster/glusterfs/3.7/3.7.3/ . I'll
> notify the list as other packages become available.
>
> Thanks to all who submitted fixes for this release.
>
> Regards,
> Kaushal
>
> ## Bugs fixed in this release
>
> 1212842: tar on a glusterfs mount displays "file changed as we read
> it" even though the file was not changed
> 1214169: glusterfsd crashed while rebalance and self-heal were in progress
> 1217722: Tracker bug for Logging framework expansion.
> 1219358: Disperse volume: client crashed while running iozone
> 1223318: brick-op failure for glusterd command should log error
> message in cmd_history.log
> 1226666: BitRot :- Handle brick re-connection sanely in bitd/scrub process
> 1226830: Scrubber crash upon pause
> 1227572: Sharding - Fix posix compliance test failures.
> 1227808: Issues reported by Cppcheck static analysis tool
> 1228535: Memory leak in marker xlator
> 1228640: afr: unrecognized option in re-balance volfile
> 1229282: Disperse volume: Huge memory leak of glusterfsd process
> 1229563: Disperse volume: Failed to update version and size (error 2)
> seen during delete operations
> 1230327: context of access control translator should be updated
> properly for GF_POSIX_ACL_*_KEY xattrs
> 1230399: [Snapshot] Scheduled job is not processed when one of the
> node of shared storage volume is down
> 1230523: glusterd: glusterd crashing if you run re-balance and vol
> status command parallely.
> 1230857: Files migrated should stay on a tier for a full cycle
> 1231024: scrub frequency and throttle change information need to be
> present in Scrubber log
> 1231608: Add regression test for cluster lock in a heterogeneous cluster
> 1231767: tiering:compiler warning with gcc v5.1.1
> 1232173: Incomplete self-heal and split-brain on directories found
> when self-healing files/dirs on a replaced disk
> 1232185: cli correction: if tried to create multiple bricks on same
> server shows replicate volume instead of disperse volume
> 1232199: Skip zero byte files when triggering signing
> 1232333: Ganesha-ha.sh cluster setup not working with RHEL7 and derivatives
> 1232335: nfs-ganesha: volume is not in list of exports in case of
> volume stop followed by volume start
> 1232602: bug-857330/xml.t fails spuriously
> 1232612: Disperse volume: misleading unsuccessful message with heal
> and heal full
> 1232660: Change default values of allow-insecure and bind-insecure
> 1232883: Snapshot daemon failed to run on newly created dist-rep
> volume with uss enabled
> 1232885: [SNAPSHOT]: "man gluster" needs modification for few
> snapshot commands
> 1232886: [SNAPSHOT]: Output message when a snapshot create is issued
> when multiple bricks are down needs to be improved
> 1232887: [SNAPSHOT] : Snapshot delete fails with error - Snap might
> not be in an usable state
> 1232889: Snapshot: When Cluster.enable-shared-storage is enable,
> shared storage should get mount after Node reboot
> 1233041: glusterd crashed when testing heal full on replaced disks
> 1233158: Null pointer dereference in dht_migrate_complete_check_task
> 1233518: [Backup]: Glusterfind session(s) created before starting the
> volume results in 'changelog not available' error, eventually
> 1233555: gluster v set help needs to be updated for
> cluster.enable-shared-storage option
> 1233559: libglusterfs: avoid crash due to ctx being NULL
> 1233611: Incomplete conservative merge for split-brained directories
> 1233632: Disperse volume: client crashed while running iozone
> 1233651: pthread cond and mutex variables of fs struct has to be
> destroyed conditionally.
> 1234216: nfs-ganesha: add node fails to add a new node to the cluster
> 1234225: Data Tiering: add tiering set options to volume set help
> (cluster.tier-demote-frequency and cluster.tier-promote-frequency)
> 1234297: Quota: Porting logging messages to new logging framework
> 1234408: STACK_RESET may crash with concurrent statedump requests to a
> glusterfs process
> 1234584: nfs-ganesha:delete node throws error and pcs status also
> notifies about failures, in fact I/O also doesn't resume post grace
> period
> 1234679: Disperse volume : 'ls -ltrh' doesn't list correct size of
> the files every time
> 1234695: [geo-rep]: Setting meta volume config to false when meta
> volume is stopped/deleted leads geo-rep to faulty
> 1234843: GlusterD does not store updated peerinfo objects.
> 1234898: [geo-rep]: Feature fan-out fails with the use of meta
> volume config
> 1235203: tiering: tier status shows as " progressing " but there is
> no rebalance daemon running
> 1235208: glusterd: glusterd crashes while importing a USS enabled
> volume which is already started
> 1235242: changelog: directory renames not getting recorded
> 1235258: nfs-ganesha: ganesha-ha.sh --refresh-config not working
> 1235297: [geo-rep]: set_geo_rep_pem_keys.sh needs modification in
> gluster path to support mount broker functionality
> 1235360: [geo-rep]: Mountbroker setup goes to Faulty with ssh
> 'Permission Denied' Errors
> 1235428: Mount broker user add command removes existing volume for a
> mountbroker user when second volume is attached to same user
> 1235512: quorum calculation might go for toss for a concurrent peer
> probe command
> 1235629: Missing trusted.ec.config xattr for files after heal process
> 1235904: fgetxattr() crashes when key name is NULL
> 1235923: POSIX: brick logs filled with _gf_log_callingfn due to
> this==NULL in dict_get
> 1235928: memory corruption in the way we maintain migration
> information in inodes.
> 1235934: Allow only lookup and delete operation on file that is in
> split-brain
> 1235939: Provide and use a common way to do reference counting of
> (internal) structures
> 1235966: [RHEV-RHGS] After self-heal operation, VM Image file loses
> the sparseness property
> 1235990: quota: marker accounting miscalculated when renaming a file
> on with write is in progress
> 1236019: peer probe results in Peer Rejected(Connected)
> 1236093: [geo-rep]: worker died with "ESTALE" when performed rm -rf
> on a directory from mount of master volume
> 1236260: [Quota] The root of the volume on which the quota is set
> shows the volume size more than actual volume size, when checked with
> "df" command.
> 1236269: FSAL_GLUSTER : symlinks are not working properly if acl is enabled
> 1236271: Introduce an ATOMIC_WRITE flag in posix writev
> 1236274: Upcall: Directory or file creation should send cache
> invalidation requests to parent directories
> 1236282: [Backup]: File movement across directories does not get
> captured in the output file in a X3 volume
> 1236288: Data Tiering: Files not getting promoted once demoted
> 1236933: Ganesha volume export failed
> 1238052: Quota list is not working on tiered volume.
> 1238057: Incorrect state created in '/var/lib/nfs/statd'
> 1238073: protocol/server doesn't reconfigure auth.ssl-allow options
> 1238476: Throttle background heals in disperse volumes
> 1238752: Consecutive volume start/stop operations when ganesha.enable
> is on, leads to errors
> 1239270: [Scheduler]: Unable to create Snapshots on RHEL-7.1 using
> Scheduler
> 1240183: Renamed Files are missing after self-heal
> 1240190: do an explicit lookup on the inodes linked in readdirp
> 1240603: glusterfsd crashed after volume start force
> 1240607: [geo-rep]: UnboundLocalError: local variable 'fd'
> referenced before assignment
> 1240616: Unable to pause georep session if one of the nodes in cluster
> is not part of master volume.
> 1240906: quota+afr: quotad crash "afr_local_init (local=0x0,
> priv=0x7fddd0372220, op_errno=0x7fddce1434dc) at afr-common.c:4112"
> 1240955: [USS]: snapd process is not killed once the glusterd comes back
> 1241134: nfs-ganesha: execution of script ganesha-ha.sh throws a error
> for a file
> 1241487: quota/marker: lk_owner is null while acquiring inodelk in
> rename operation
> 1241529: BitRot :- Files marked as 'Bad' should not be accessible
> from mount
> 1241666: glfs_loc_link: Update loc.inode with the existing inode in
> case it already exists
> 1241776: [Data Tiering]: HOT Files get demoted from hot tier
> 1241784: Gluster commands timeout on SSL enabled system, after adding
> new node to trusted storage pool
> 1241831: quota: marker accounting can get miscalculated after
> upgrade to 3.7
> 1241841: gf_msg_callingfn does not log the callers of the function in
> which it is called
> 1241885: ganesha volume export fails in rhel7.1
> 1241963: Peer not recognized after IP address change
> 1242031: nfs-ganesha: bricks crash while executing acl related
> operation for named group/user
> 1242044: nfs-ganesha : Multiple setting of nfs4_acl on a same file
> will cause brick crash
> 1242192: nfs-ganesha: add-node logic does not copy the
> "/etc/ganesha/exports" directory to the correct path on the newly
> added node
> 1242274: Migration does not work when EC is used as a tiered volume.
> 1242329: [Quota] : Inode quota spurious failure
> 1242515: racy condition in nfs/auth-cache feature
> 1242718: [RFE] Improve I/O latency during signing
> 1242728: replacing an offline brick fails with "replace-brick"
> command
> 1242734: GlusterD crashes when management encryption is enabled
> 1242882: Quota: Quota Daemon doesn't start after node reboot
> 1242898: Crash in Quota enforcer
> 1243408: syncop:Include iatt to 'syncop_link' args
> 1243642: GF_CONTENT_KEY should not be handled unless we are sure no
> other operations are in progress
> 1243644: Metadata self-heal is not handling failures while heal properly
> 1243647: Disperse volume : data corruption with appending writes in
> 8+4 config
> 1243648: Disperse volume: NFS crashed
> 1243654: fops fail with EIO on nfs mount after add-brick and rebalance
> 1243655: Sharding - Use (f)xattrop (as opposed to (f)setxattr) to
> update shard size and block count
> 1243898: huge mem leak in posix xattrop
> 1244100: using fop's dict for resolving causes problems
> 1244103: Gluster cli logs invalid argument error on every gluster
> command execution
> 1244114: unix domain sockets on Gluster/NFS are created as fifo/pipe
> 1244116: quota: brick crashes when create and remove performed in parallel
> 1245908: snap-view:mount crash if debug mode is enabled
> 1245934: [RHEV-RHGS] App VMs paused due to IO error caused by
> split-brain, after initiating remove-brick operation
> 1246121: Disperse volume : client glusterfs crashed while running IO
> 1246481: rpc: fix binding brick issue while bind-insecure is enabled
> 1246728: client3_3_removexattr_cbk floods the logs with "No data
> available" messages
> 1246809: glusterd crashed when a client which doesn't support SSL
> tries to mount a SSL enabled gluster volume
> 1246987: Deceiving log messages like "Failing STAT on gfid :
> split-brain observed. [Input/output error]" reported
> 1246988: sharding - Populate the aggregated ia_size and ia_blocks
> before unwinding (f)setattr to upper layers
> 1247012: Initialize daemons on demand