similar to: Upgrading from 3.6.3 to 3.10/3.11

Displaying 20 results from an estimated 20000 matches similar to: "Upgrading from 3.6.3 to 3.10/3.11"

2017 Aug 08
0
Upgrading from 3.6.3 to 3.10/3.11
I had a mixed experience going from 3.6.6 to 3.10.2 on a two-server setup. I have since upgraded to 3.10.3, but I still have a bad problem with specific files (see CONS below). PROS - Back on a "supported" version. - Windows roaming profiles (small-file performance) improved significantly via Samba. This may be due to new tuning options that were added (see my tuning options for the volume below):
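The option list above is truncated. As a rough sketch, small-file tuning on the 3.10 series usually centers on the md-cache invalidation options; the volume name and values here are illustrative, not the poster's actual settings:

    # Illustrative small-file tuning on a 3.10 volume (example values only)
    gluster volume set myvol features.cache-invalidation on
    gluster volume set myvol features.cache-invalidation-timeout 600
    gluster volume set myvol performance.cache-invalidation on
    gluster volume set myvol performance.md-cache-timeout 600
    gluster volume set myvol network.inode-lru-limit 50000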
2017 Aug 08
1
Upgrading from 3.6.3 to 3.10/3.11
Thanks Diego. This is invaluable information, appreciate it immensely. I had heard previously that you can always roll back to the previous Gluster binaries, but without understanding the data structures behind Gluster, I had no idea how safe that was. Backing up the lib folder makes perfect sense. The performance issues we're specifically keen to address are the small-file performance improvements
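"The lib folder" here is glusterd's state directory, /var/lib/glusterd. A minimal sketch of that backup step (paths assumed; quotas or geo-replication would need more care):

    # Stop the management daemon, then archive its state directory
    systemctl stop glusterd
    tar czf /root/glusterd-backup-$(date +%F).tar.gz /var/lib/glusterd
    systemctl start glusterd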
2017 Nov 14
1
SMB copies failing with GlusterFS 3.10
Hi all We've got a brand new 6-node GlusterFS 3.10 deployment (previous 20 nodes were GlusterFS 3.6). Running on CentOS 7 using legit repos, so glusterfs-3.10.7-1.el7.x86_64 is the base. Our issue is that when we create a file with a Gluster client, e.g. a Mac or Windows machine, it works fine. However if we copy a file from a Mac or Windows machine to the Samba share, it fails with a
2017 Jun 12
1
URGENT: Update issues from 3.6.6 to 3.10.2 Accessing files via samba come up with permission denied
Did the logs provide any hints as to what the issue may be? Diego On Sat, Jun 3, 2017 at 12:16 PM, Diego Remolina <dijuremo at gmail.com> wrote: > Thanks for taking the time to look into this. Since we needed downtime > due to the gluster update, we also updated the OS, including samba. We > went from 4.2.x to 4.4.4 and many other packages for CentOS were > updated as well. OS
2017 Jun 03
2
URGENT: Update issues from 3.6.6 to 3.10.2 Accessing files via samba come up with permission denied
Thanks for taking the time to look into this. Since we needed downtime due to the gluster update, we also updated the OS, including Samba. We went from 4.2.x to 4.4.4, and many other CentOS packages were updated as well. The OS and Samba updates were installed, then the server was rebooted, then Gluster was updated. Created a new test Samba share to minimize logs, etc.: [VfsGluster] path = /vfsgluster
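The share definition above is cut off. A minimal vfs_glusterfs test share of this kind might look like the following in smb.conf; the volume name "export" and the exact options are assumptions, not the poster's actual config:

    [VfsGluster]
        path = /vfsgluster
        vfs objects = glusterfs
        glusterfs:volume = export
        kernel share modes = no
        read only = no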
2017 Jun 02
2
URGENT: Update issues from 3.6.6 to 3.10.2 Accessing files via samba come up with permission denied
Hi everyone, Is there anything else we could do to check on this problem and try to fix it? The issue is definitely related to either the Samba vfs gluster plugin or Gluster itself. I am not sure how to pin it down further. I went ahead and created a new share on the Samba server which is on a local filesystem where the OS is installed, not part of Gluster: # mount | grep home # ls -ld /home
2017 Jun 03
0
URGENT: Update issues from 3.6.6 to 3.10.2 Accessing files via samba come up with permission denied
On 03-Jun-2017 3:27 AM, "Diego Remolina" <dijuremo at gmail.com> wrote: Hi everyone, Is there anything else we could do to check on this problem and try to fix it? The issue is definitely related to either the Samba vfs gluster plugin or Gluster itself. I am not sure how to pin it down further. I don't think it is the vfs plugin, because you haven't updated Samba packages
2017 Aug 25
2
Rolling upgrade from 3.6.3 to 3.10.5
Hi Diego, Just to clarify, so did you do an offline upgrade with an existing cluster (3.6.x => 3.10.x)? Thanks. On Fri, Aug 25, 2017 at 8:42 PM, Diego Remolina <dijuremo at gmail.com> wrote: > I was never able to go from 3.6.x to 3.7.x without downtime. Then > 3.7.x did not work well for me, so I stuck with 3.6.x until recently. > I went from 3.6.x to 3.10.x but downtime was
2017 Aug 25
2
Rolling upgrade from 3.6.3 to 3.10.5
Hi all, I'm currently in the process of upgrading a replicated cluster (1 x 4) from 3.6.3 to 3.10.5. The nodes run CentOS 6. However, after upgrading the first node, that node fails to connect to the other peers (as seen via 'gluster peer status'), but somehow the other non-upgraded peers can still see the upgraded peer as connected. Writes to the Gluster volume via local mounts of
2017 Aug 25
0
Rolling upgrade from 3.6.3 to 3.10.5
Yes, I did an offline upgrade.
1. Stopped all clients using the gluster servers.
2. Stopped glusterfsd and glusterd on both servers.
3. Backed up /var/lib/gluster* on all servers, just to be safe.
4. Upgraded all servers from 3.6.x to 3.10.x (I did not have quotas or anything that required special steps).
5. Started the gluster daemons again and confirmed everything was fine prior to letting clients connect.
6.
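A hedged shell sketch of those steps as run on each server (service and package names vary by distribution and release; quotas or geo-replication would need extra steps):

    # Steps 2-5 above; run after all clients have unmounted (step 1)
    systemctl stop glusterd
    pkill glusterfsd; pkill glusterfs      # bricks and self-heal daemons
    tar czf /root/var-lib-gluster.tar.gz /var/lib/gluster*
    yum -y update glusterfs-server
    systemctl start glusterd
    gluster peer status                    # verify before clients reconnect
    gluster volume status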
2017 Aug 25
0
Rolling upgrade from 3.6.3 to 3.10.5
You cannot do a rolling upgrade from 3.6.x to 3.10.x; you will need downtime. Even 3.6 to 3.7 was not possible... see some references to it below: https://marc.info/?l=gluster-users&m=145136214452772&w=2 https://gluster.readthedocs.io/en/latest/release-notes/3.7.1/ # gluster volume set <volname> server.allow-insecure on Edit /etc/glusterfs/glusterd.vol to contain this line: option
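The quoted instructions are cut off at "option". Per the 3.7.1 release notes linked above, the missing line is believed to be rpc-auth-allow-insecure:

    # On the volume, allow connections from unprivileged client ports
    gluster volume set <volname> server.allow-insecure on
    # In /etc/glusterfs/glusterd.vol on every server, add:
    option rpc-auth-allow-insecure on
    # then restart glusterd on the servers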
2023 Mar 26
1
hardware issues and new server advice
Hi, sorry if I hijack this, but maybe it's helpful for other Gluster users... > a pure NVMe-based volume will be a waste of money. Gluster excels when you have more servers and clients to consume that data. > I would choose LVM cache (NVMEs) + HW RAID10 of SAS 15K disks to cope with the load. At least if you decide to go with more disks for the raids, use several (not the built-in ones)
2023 Mar 24
2
hardware issues and new server advice
Actually, a pure NVMe-based volume will be a waste of money. Gluster excels when you have more servers and clients to consume that data. I would choose LVM cache (NVMEs) + HW RAID10 of SAS 15K disks to cope with the load. At least if you decide to go with more disks for the raids, use several (not the built-in ones) controllers. @Martin, in order to get a more reliable setup, you will have to
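A minimal sketch of that layout, with the NVMe device as an LVM cache pool in front of a hardware-RAID10 data LV; device names and sizes are hypothetical:

    # /dev/sdb is the HW RAID10 LUN, /dev/nvme0n1 the cache device
    vgcreate vg_brick /dev/sdb /dev/nvme0n1
    lvcreate -n brick -l 100%PVS vg_brick /dev/sdb
    lvcreate -n cache -L 700G vg_brick /dev/nvme0n1
    lvcreate -n cache_meta -L 8G vg_brick /dev/nvme0n1
    lvconvert --type cache-pool --poolmetadata vg_brick/cache_meta vg_brick/cache
    lvconvert --type cache --cachepool vg_brick/cache vg_brick/brick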
2017 Oct 26
0
not healing one file
Hi Richard, Thanks for the information. As you said, there is a gfid mismatch for the file. On brick-1 & brick-2 the gfids are the same, & on brick-3 the gfid is different. This is not considered split-brain because we have two good copies here. Gluster 3.10 does not have a method to resolve this situation other than manual intervention [1]. Basically, what you need to do is remove the
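The instructions are truncated. The usual manual fix [1] for a gfid mismatch, sketched here with hypothetical paths, is to delete the bad copy and its .glusterfs hardlink on the offending brick and let self-heal recreate it:

    # On the brick with the wrong gfid (brick-3 here); paths are examples
    getfattr -d -m . -e hex /bricks/brick3/path/to/file   # note trusted.gfid
    rm /bricks/brick3/path/to/file
    # the gfid hardlink lives under .glusterfs/<first byte>/<second byte>/
    rm /bricks/brick3/.glusterfs/aa/bb/aabbccdd-...
    gluster volume heal <volname>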
2016 Feb 16
2
Mapping UIDs on Linux to same UID as AD-bound Mac is mapping to
Rowland writes: > > So, since the Linux Samba is the one using sequential UIDs where it > > generates a new UID each time a new user is identified, and the Mac is > > using somewhat AD-generated UIDs, my preference is to somehow make > > Linux Samba work the same way that Apple generates UIDs. > > Whilst something like this may happen sometime in the future, at the
2017 Sep 28
1
upgrade to 3.12.1 from 3.10: df returns wrong numbers
Hi, When I upgraded my cluster, df started returning some odd numbers for my legacy volumes. For volumes newly created after the upgrade, df works just fine. I have been researching since Monday and have not found any reference to this symptom. "vm-images" is the old legacy volume, "test" is the new one. [root at st-srv-03 ~]# (df -h|grep bricks;ssh st-srv-02 'df -h|grep
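A commonly reported cause of this symptom on 3.12.x was an incorrect shared-brick-count in the regenerated brick volfiles; whether that applies here is an assumption, since the message is truncated. A quick check, and the workaround that was usually suggested:

    # Inspect the suspect value in the legacy volume's volfiles
    grep -r shared-brick-count /var/lib/glusterd/vols/vm-images/
    # Setting any volume option forces volfile regeneration
    gluster volume set vm-images cluster.min-free-disk 10%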
2017 Jun 16
2
About the maintenance time
I am currently running a replica configuration on 3.10.2. The brick process sometimes fails to start when the storage server is restarted. Also, when using gNFS, I/O may hang and become unusable. After checking the release notes for 3.11.0, the following IDs seem to be applicable, so please backport them to the 3.10 series if possible. Also, for items with high urgency: since 3.11.0 has just been
2016 Feb 16
4
Mapping UIDs on Linux to same UID as AD-bound Mac is mapping to
Hi all, I have a Linux machine bound to AD, and a Mac bound to AD. Both have me log in with different UIDs for the same AD user. This makes sense, as AD doesn't have a UNIX-compliant uid/gid attribute. One thing I have found that interests me is this: https://books.google.com.au/books?id=yNILCwAAQBAJ
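One common way to get deterministic, machine-independent UIDs on the Linux side is an algorithmic winbind backend such as idmap_rid, which computes UID = range base + RID. This is a general approach, not a reproduction of Apple's mapping; the domain name and ranges in this smb.conf sketch are hypothetical:

    idmap config * : backend = tdb
    idmap config * : range = 3000-7999
    idmap config EXAMPLE : backend = rid
    idmap config EXAMPLE : range = 10000-999999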
2017 Jun 15
2
About the maintenance time
What is the current stable version of GlusterFS? Also, the current latest versions are 3.8.9, 3.10.3, and 3.11.0, respectively, but why are the bug fixes in 3.11.0 not applied to the 3.8 and 3.10 series? Will they be applied with the next minor version upgrade?
2017 Jun 16
0
About the maintenance time
On 06/16/2017 09:07 AM, te-yamauchi at usen.co.jp wrote: > I am currently running a replica configuration on 3.10.2. > The brick process sometimes fails to start when the storage server is restarted. Also, when using gNFS, I/O may hang and become unusable. > After checking the release notes for 3.11.0, the following IDs seem to be applicable, so please backport them if they can be applied to