Displaying 20 results from an estimated 400 matches similar to: "Upgrading from Gluster 3.8 to 3.12"
2017 Dec 19
0
Upgrading from Gluster 3.8 to 3.12
On Tue, Dec 19, 2017 at 1:10 AM, Ziemowit Pierzycki <ziemowit at pierzycki.com>
wrote:
> Hi,
>
> I have a cluster of 10 servers all running Fedora 24 along with
> Gluster 3.8. I'm planning on doing rolling upgrades to Fedora 27 with
> Gluster 3.12. I saw the documentation and did some testing but I
> would like to run my plan through some (more?) educated minds.
>
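A minimal sketch of the per-node sequence for such a rolling upgrade, assuming replicated volumes so that one server can be taken offline at a time (the volume name and the OS upgrade step are placeholders, not the poster's exact plan):

# one node at a time; let self-heal finish before moving to the next node
systemctl stop glusterd
killall glusterfs glusterfsd          # stop remaining brick and client processes
# ... upgrade the OS and gluster packages here (e.g. dnf system-upgrade) ...
systemctl start glusterd
gluster volume heal <volname> info    # proceed only when no entries remain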
2017 Dec 19
2
Upgrading from Gluster 3.8 to 3.12
I have not done the upgrade yet. Since this is a production cluster I
need to make sure it stays up, or schedule some downtime if it
doesn't. Thanks.
On Tue, Dec 19, 2017 at 10:11 AM, Atin Mukherjee <amukherj at redhat.com> wrote:
>
>
> On Tue, Dec 19, 2017 at 1:10 AM, Ziemowit Pierzycki <ziemowit at pierzycki.com>
> wrote:
>>
>> Hi,
>>
2017 Dec 20
2
Upgrading from Gluster 3.8 to 3.12
Looks like a bug as I see tier-enabled = 0 is an additional entry in the
info file in shchhv01. As per the code, this field should be written into
the glusterd store if the op-version is >= 30706. What I am guessing is
that, since we didn't have the commit 33f8703a1 "glusterd: regenerate volfiles on
op-version bump up" in 3.8.4, while bumping up the op-version the info and
volfiles were
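For reference, the cluster op-version mentioned here can be checked and, once every peer runs 3.12, bumped from the CLI; the exact target value depends on the installed 3.12.x release:

gluster volume get all cluster.op-version        # current cluster op-version
gluster volume set all cluster.op-version 31202  # bump only after all peers are upgraded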
2017 Dec 20
0
Upgrading from Gluster 3.8 to 3.12
I attempted the same on a local sandbox and ran into the same problem.
Current: 3.8.4
Volume Name: shchst01
Type: Distributed-Replicate
Volume ID: bcd53e52-cde6-4e58-85f9-71d230b7b0d3
Status: Started
Snapshot Count: 0
Number of Bricks: 4 x 3 = 12
Transport-type: tcp
Bricks:
Brick1: shchhv01-sto:/data/brick3/shchst01
Brick2: shchhv02-sto:/data/brick3/shchst01
Brick3:
2017 Dec 20
0
Upgrading from Gluster 3.8 to 3.12
Yes, Atin. I'll take a look.
On Wed, Dec 20, 2017 at 11:28 AM, Atin Mukherjee <amukherj at redhat.com> wrote:
> Looks like a bug as I see tier-enabled = 0 is an additional entry in the
> info file in shchhv01. As per the code, this field should be written into
> the glusterd store if the op-version is >= 30706. What I am guessing is
> that, since we didn't have the commit
2013 Mar 20
2
Writing to the data brick path instead of fuse mount?
So I noticed that if I create files in the data brick path, the files travel to
the other hosts too. Can I write to the data brick path directly instead of
using a fuse mount? I'm running two machines with two replicas. What happens
if I do stripes? Some machines are clients as well as servers. Thanks!
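For the record, client writes are expected to go through a glusterfs mount rather than the brick directory, since replication is done by the client-side translators; a minimal fuse-mount sketch (server and volume names are placeholders):

mount -t glusterfs server1:/myvolume /mnt/myvolume
echo test > /mnt/myvolume/file.txt    # gets replicated to every brick in the replica set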
2017 Nov 30
2
Problems joining new gluster 3.10 nodes to existing 3.8
Hi,
I have a problem joining four Gluster 3.10 nodes to an existing
cluster of Gluster 3.8 nodes. My understanding is that this should work
and not be too much of a problem.
Peer probe is successful but the node is rejected:
gluster> peer detach elkpinfglt07
peer detach: success
gluster> peer probe elkpinfglt07
peer probe: success.
gluster> peer status
Number of Peers: 6
Hostname: elkpinfglt02
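The usual recovery for a peer that ends up in the Rejected state is to resync its configuration from a healthy node; a sketch of that procedure, run on the rejected node and assuming it holds no configuration worth keeping (hostname is illustrative):

systemctl stop glusterd
cd /var/lib/glusterd && find . -mindepth 1 ! -name glusterd.info -delete   # keep only the node UUID
systemctl start glusterd
gluster peer probe elkpinfglt02       # probe any healthy peer to pull the volume configuration
systemctl restart glusterd
gluster peer status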
2017 Dec 01
0
Problems joining new gluster 3.10 nodes to existing 3.8
On Fri, Dec 1, 2017 at 1:55 AM, Ziemowit Pierzycki <ziemowit at pierzycki.com>
wrote:
> Hi,
>
> I have a problem joining four Gluster 3.10 nodes to an existing
> cluster of Gluster 3.8 nodes. My understanding is that this should work
> and not be too much of a problem.
>
> Peer probe is successful but the node is rejected:
>
> gluster> peer detach elkpinfglt07
> peer
2009 Jan 28
1
Mount fails with error status -22?
Hi,
I am a little puzzled. I looked through the mailing list archive and some
other sources, but this doesn't seem like anything anyone else has encountered.
I have two systems with an attached HP SAN. I'm using SLES 10.1 with
multipath-tools. When trying to mount an OCFS2 device I get this:
SERVER:/ # mount.ocfs2 /dev/mapper/mpath0 /mnt/temp/
mount.ocfs2: Invalid argument while mounting
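A mount error of -22 (EINVAL) on OCFS2 usually points at the cluster stack not being up, or at a mismatch between the on-disk cluster name and the local configuration; a short checklist, assuming the o2cb stack that ships with ocfs2-tools:

/etc/init.d/o2cb status      # is the cluster stack online?
mounted.ocfs2 -d             # does the device carry the expected UUID/label?
dmesg | tail                 # the kernel log normally states the real reason for the -22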
2003 Aug 09
18
[releng_4 tinderbox] failure on i386/i386
TB --- 2003-08-09 16:00:11 - starting RELENG_4 tinderbox run for i386/i386
TB --- 2003-08-09 16:00:11 - checking out the source tree
TB --- cd /home/des/tinderbox/RELENG_4/i386/i386
TB --- /usr/bin/cvs -f -R -q -d/home/ncvs update -Pd -rRELENG_4 src
TB --- 2003-08-09 16:00:11 - /usr/bin/cvs returned exit code 1
TB --- 2003-08-09 16:00:11 - ERROR: unable to check out the source tree
TB ---
2003 Aug 09
13
[releng_4 tinderbox] failure on i386/pc98
TB --- 2003-08-09 16:00:12 - starting RELENG_4 tinderbox run for i386/pc98
TB --- 2003-08-09 16:00:12 - checking out the source tree
TB --- cd /home/des/tinderbox/RELENG_4/i386/pc98
TB --- /usr/bin/cvs -f -R -q -d/home/ncvs update -Pd -rRELENG_4 src
TB --- 2003-08-09 16:00:12 - /usr/bin/cvs returned exit code 1
TB --- 2003-08-09 16:00:12 - ERROR: unable to check out the source tree
TB ---
2003 Aug 09
28
[releng_4 tinderbox] failure on alpha/alpha
TB --- 2003-08-09 16:00:11 - starting RELENG_4 tinderbox run for alpha/alpha
TB --- 2003-08-09 16:00:11 - checking out the source tree
TB --- cd /home/des/tinderbox/RELENG_4/alpha/alpha
TB --- /usr/bin/cvs -f -R -q -d/home/ncvs update -Pd -rRELENG_4 src
TB --- 2003-08-09 16:00:11 - /usr/bin/cvs returned exit code 1
TB --- 2003-08-09 16:00:11 - ERROR: unable to check out the source tree
TB ---
2009 Jan 29
1
7.1, mpt and slow writes
Hello,
I think this needs a few more eyes:
http://lists.freebsd.org/pipermail/freebsd-scsi/2009-January/003782.html
In short, writes are slow, likely due to the write cache being enabled on
the controller. The sysctls used in 6.x to turn the cache off don't seem
to exist in 7.x.
Thanks,
Charles
___
Charles Sprickman
NetEng/SysAdmin
Bway.net - New York's Best Internet - www.bway.net
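One way to inspect the drive-level write cache from userland is the SCSI caching mode page via camcontrol; whether the mpt firmware honours it for its RAID volumes is a separate question, so treat this as a sketch (da0 is a placeholder device):

camcontrol modepage da0 -m 8       # 'WCE: 1' means the write cache is enabled
camcontrol modepage da0 -m 8 -e    # edit the page to clear WCE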
2013 Jul 09
2
Re: Libvirt and Glusterfs
> Hi,
>
> I'm trying to use qemu native glusterfs integration with libvirt. It's
> all working well from the qemu side, but libvirt fails to start a domain
> with a gluster drive or attach a drive.
> I have exactly the same error as this person:
> https://www.redhat.com/archives/libvirt-users/2013-April/msg00204.html
>
> I use qemu 1.5.1 with glusterfs 3.4 beta 4
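For context, the qemu/gluster integration being discussed is driven from the domain XML with a network disk of protocol 'gluster'; a minimal sketch of attaching one via virsh (domain, volume, image and host names are placeholders):

cat > gluster-disk.xml <<'EOF'
<disk type='network' device='disk'>
  <driver name='qemu' type='raw'/>
  <source protocol='gluster' name='myvolume/guest.img'>
    <host name='gluster1.example.com' port='24007' transport='tcp'/>
  </source>
  <target dev='vdb' bus='virtio'/>
</disk>
EOF
virsh attach-device mydomain gluster-disk.xml --config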
2012 Apr 12
1
CentOS 6.2 anaconda bug?
I have a kickstart file with the following partitioning directives:
part /boot --fstype ext3 --onpart=sda1
part pv.100000 --onpart=sda2 --noformat
volgroup vol0 pv.100000 --noformat
logvol / --vgname=vol0 --name=lvol1 --useexisting --fstype=ext4
logvol /tmp --vgname=vol0 --name=lvol2 --useexisting --fstype=ext4
logvol swap --vgname=vol0 --name=lvol3 --useexisting
logvol /data --vgname=vol0
2018 Mar 01
2
df reports wrong full capacity for distributed volumes (Glusterfs 3.12.6-1)
Hi Nithya,
Below the output of both volumes:
[root@stor1t ~]# gluster volume rebalance volumedisk1 status
     Node   Rebalanced-files   size   scanned   failures   skipped   status   run time in h:m:s
---------   ----------------   ----   -------   --------   -------   ------   -----------------
2018 Feb 28
2
df reports wrong full capacity for distributed volumes (Glusterfs 3.12.6-1)
Hi Nithya,
I applied the workaround for this bug and now df shows the right size:
[root@stor1 ~]# df -h
Filesystem              Size  Used  Avail  Use%  Mounted on
/dev/sdb1                26T  1,1T    25T    4%  /mnt/glusterfs/vol0
/dev/sdc1                50T   16T    34T   33%  /mnt/glusterfs/vol1
stor1data:/volumedisk0  101T  3,3T    97T    4%  /volumedisk0
stor1data:/volumedisk1
2018 Feb 28
2
df reports wrong full capacity for distributed volumes (Glusterfs 3.12.6-1)
Hi Nithya,
My initial setup was composed of 2 similar nodes: stor1data and stor2data.
A month ago I expanded both volumes with a new node: stor3data (2 bricks
per volume).
Of course, after adding the new peer with its bricks, I ran the 'rebalance
force' operation. This task finished successfully (you can see the info below)
and the number of files on the 3 nodes was very similar.
For volumedisk1 I
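For reference, the expansion plus rebalance sequence described here maps onto the following CLI calls (brick paths are illustrative):

gluster volume add-brick volumedisk1 stor3data:/mnt/brick1/vol1 stor3data:/mnt/brick2/vol1
gluster volume rebalance volumedisk1 start force
gluster volume rebalance volumedisk1 status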
2013 Jul 10
1
Re: Libvirt and Glusterfs
On 07/09/2013 08:18 PM, Olivier Mauras wrote:
> On 2013-07-09 09:40, Vijay Bellur wrote:
>
>>> Hi, I'm trying to use qemu native glusterfs integration with libvirt.
>>> It's all working well from the qemu side, but libvirt fails to start
>>> a domain with a gluster drive or attach a drive. I have exactly the
>>> same error as this person:
2018 Feb 27
2
df reports wrong full capacity for distributed volumes (Glusterfs 3.12.6-1)
Hi,
A few days ago my glusterfs configuration was working fine. Today I
realized that the total size reported by the df command had changed and is
smaller than the aggregated capacity of all the bricks in the volume.
I checked that the status of all volumes is fine, all the glusterd daemons
are running, and there are no errors in the logs; however, df shows a wrong total size.
My configuration for one volume: