Displaying 20 results from an estimated 3000 matches similar to: "Upgrading (online) GlusterFS-3.7.11 to 3.10 with Distributed-Disperse volume"
2017 Aug 25
0
3.8 Upgrade to 3.10
On 08/25/2017 09:17 AM, Lindsay Mathieson wrote:
> Currently running 3.8.12, planning a rolling upgrade to 3.8.15 this
> weekend.
>
> * debian 8
> * 3 nodes
> * Replica 3
> * Sharded
> * VM Hosting only
>
> The release notes strongly recommend upgrading to 3.10
>
> * Is there any downside to staying on 3.8.15 for a while longer?
3.8 will
2018 Mar 20
0
Disperse volume recovery and healing
On Tue, Mar 20, 2018 at 5:26 AM, Victor T <hero_of_nothing_1 at hotmail.com>
wrote:
> That makes sense. In the case of "file damage," it would show up as files
> that could not be healed in logfiles or gluster volume heal [volume] info?
>
If the damage affects more bricks than the volume redundancy, then probably
yes. These files or directories will appear in
2024 Mar 14
1
Adding storage capacity to a production disperse volume
Hi,
On 14.03.2024 01:39, Theodore Buchwald wrote:
>
> ... So my question is. What would be the correct amount of bricks
> needed to expand the storage on the current configuration of 'Number
> of Bricks: 1 x (4 + 1) = 5'? ...
>
I tried something similar and ended up with a similar error. As far as I
understand the documentation, the answer in your case is "5".
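A quick numeric sketch of why "5" is the answer for the quoted 'Number of Bricks: 1 x (4 + 1) = 5' geometry (the function name is illustrative, not part of Gluster):

```python
# Sketch: how adding bricks changes a disperse volume's layout string.
# A disperse volume with geometry n+k (here 4+1) can only grow by whole
# subvolumes, i.e. in multiples of n+k bricks.

def layout_after_add(data, redundancy, current_bricks, added_bricks):
    """Return the 'Number of Bricks' string after an add-brick, or raise."""
    subvol = data + redundancy
    if added_bricks % subvol != 0:
        raise ValueError(f"must add a multiple of {subvol} bricks")
    total = current_bricks + added_bricks
    return f"{total // subvol} x ({data} + {redundancy}) = {total}"

# Adding 5 bricks to the 1 x (4 + 1) = 5 volume yields a
# distributed-disperse volume with two subvolumes:
print(layout_after_add(4, 1, 5, 5))   # 2 x (4 + 1) = 10
```

Note that after the expansion the volume type changes from Disperse to Distributed-Disperse, since there is now more than one disperse subvolume.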
2017 Aug 25
2
3.8 Upgrade to 3.10
Currently running 3.8.12, planning a rolling upgrade to 3.8.15 this
weekend.
* debian 8
* 3 nodes
* Replica 3
* Sharded
* VM Hosting only
The release notes strongly recommend upgrading to 3.10
* Is there any downside to staying on 3.8.15 for a while longer?
* I didn't see anything I had to have in 3.10, but ongoing updates are
always good :(
This mildly concerned me:
2018 Apr 10
0
glusterfs disperse volume input output error
Hi,
Could you help me?
I have a problem with a file on a disperse volume. When I try to read it from the mount point, I receive an error:
# md5sum /mnt/glfs/vmfs/slake-test-bck-m1-d1.qcow2
md5sum: /mnt/glfs/vmfs/slake-test-bck-m1-d1.qcow2: Input/output error
Configuration and status of volume is:
# gluster volume info vol1
Volume Name: vol1
Type: Disperse
Volume ID:
2018 Mar 13
2
Disperse volume recovery and healing
I have a question about how disperse volumes handle brick failure. I'm running version 3.10.10 on all systems. If I have a disperse volume in a 4+2 configuration with 6 servers each serving 1 brick, and maintenance needs to be performed on all systems, are there any general steps that need to be taken to ensure data is not lost or service interrupted? For example, can I just reboot each system
2017 Aug 20
0
Add brick to a disperse volume
Hi,
Adding bricks to a disperse volume is very easy, and works the same way as for a replicated volume.
You just need to add bricks in a multiple of the number of bricks you already have.
So if you have a disperse volume with an n+k configuration, you need to add n+k more bricks.
Example :
If your disperse volume is 4+2, where 2 is the redundancy count, you need to provide 6 (or a multiple of 6) bricks (4 + 2 = 6)
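The multiple-of-(n+k) rule above can be sketched as a small check (an illustrative helper, not part of the Gluster CLI):

```python
# Sketch: bricks can only be added to a disperse volume in whole
# subvolumes, i.e. the count must be a positive multiple of n + k.

def valid_add_brick_count(n, k, new_bricks):
    """True if `new_bricks` bricks can be added to an n+k disperse volume."""
    return new_bricks > 0 and new_bricks % (n + k) == 0

assert valid_add_brick_count(4, 2, 6)       # one new 4+2 subvolume
assert valid_add_brick_count(4, 2, 12)      # two new subvolumes
assert not valid_add_brick_count(4, 2, 4)   # rejected: not a multiple of 6
```

On the command line this corresponds to passing all six bricks to a single `gluster volume add-brick` invocation; Gluster itself rejects counts that are not a multiple of the subvolume size.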
2018 Mar 18
1
Disperse volume recovery and healing
No. After bringing up one brick and before stopping the next one, you need to be sure that there are no damaged files. You shouldn't reboot a node if "gluster volume heal <volname> info" shows damaged files.
What happens in this case then? I'm thinking about a situation where the servers are kept in an environment that we don't control - i.e. the cloud. If the VMs are
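The check described above (wait for a clean heal-info report before touching the next node) could be scripted roughly like this. It is a minimal sketch: the parsing of the "Number of entries:" lines is an assumption about the heal-info output format, and the sleep interval is arbitrary.

```shell
# Sketch: block until "gluster volume heal <vol> info" reports zero
# pending entries; only then is it safe to take down the next node.
wait_for_heal() {
    vol="$1"
    while :; do
        # Sum the per-brick "Number of entries: N" counters.
        pending=$(gluster volume heal "$vol" info \
            | awk '/Number of entries:/ {sum += $NF} END {print sum + 0}')
        [ "${pending:-1}" -eq 0 ] && return 0
        echo "still $pending entries to heal on $vol, waiting..."
        sleep 10
    done
}
```

Usage between maintenance steps would be along the lines of `wait_for_heal myvol && reboot`, run before moving on to the next server.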
2018 Mar 16
0
Disperse volume recovery and healing
On Fri, Mar 16, 2018 at 4:57 AM, Victor T <hero_of_nothing_1 at hotmail.com>
wrote:
> Xavi, does that mean that even if every node was rebooted one at a time
> even without issuing a heal that the volume would have no issues after
> running gluster volume heal [volname] when all bricks are back online?
>
No. After bringing up one brick and before stopping the next one, you need
2018 Mar 15
0
Disperse volume recovery and healing
Hi Victor,
On Wed, Mar 14, 2018 at 12:30 AM, Victor T <hero_of_nothing_1 at hotmail.com>
wrote:
> I have a question about how disperse volumes handle brick failure. I'm
> running version 3.10.10 on all systems. If I have a disperse volume in a
> 4+2 configuration with 6 servers each serving 1 brick, and maintenance
> needs to be performed on all systems, are there any
2018 Mar 16
2
Disperse volume recovery and healing
Xavi, does that mean that even if every node was rebooted one at a time, without issuing a heal, the volume would have no issues after running gluster volume heal [volname] once all bricks are back online?
________________________________
From: Xavi Hernandez <jahernan at redhat.com>
Sent: Thursday, March 15, 2018 12:09:05 AM
To: Victor T
Cc: gluster-users at gluster.org
Subject:
2018 Jan 12
1
Reading more than the file size on a dispersed volume
Hi All,
I'm using Gluster with a dispersed volume, and I'm writing to ask about a very
serious issue.
I have 3 servers and there are 9 bricks.
My volume is like below.
------------------------------------------------------
Volume Name: TEST_VOL
Type: Disperse
Volume ID: be52b68d-ae83-46e3-9527-0e536b867bcc
Status: Started
Snapshot Count: 0
Number of Bricks: 1 x (6 + 3) = 9
Transport-type: tcp
Bricks:
2017 Oct 23
2
[Gluster-devel] Announcing Glusterfs release 3.12.2 (Long Term Maintenance)
Any idea when these packages will be in the CentOS mirrors? There is no
sign of them on download.gluster.org.
On 13 October 2017 at 08:45, Jiffin Tony Thottan <jthottan at redhat.com>
wrote:
> The Gluster community is pleased to announce the release of Gluster 3.12.2
> (packages available at [1,2,3]).
>
> Release notes for the release can be found at [4].
>
> We still
2017 Dec 11
1
[Gluster-devel] Announcing Glusterfs release 3.12.2 (Long Term Maintenance)
Neil, I don't know if this is adequate, but I did run a simple smoke test
today on the 3.12.3-1 bits. I installed the 3.12.3-1 bits on 3 fresh-install
CentOS 7 VMs,
created a 2G image file and wrote an XFS file system on it on each
system,
mounted each under /export/brick1, and created /export/brick1/test on each
node,
probed the two other systems from one node (a), and created a replica 3
2024 Mar 14
3
Adding storage capacity to a production disperse volume
Hi,
This is the first time I have tried to expand the storage of a live gluster
volume. I was able to get another supermicro storage unit for a gluster
cluster that I built. The current clustered storage configuration contains
five supermicro units. And the cluster volume is setup with the following
configuration:
node-6[/var/log/glusterfs]# gluster volume info
Volume Name: researchdata
2017 Aug 19
2
Add brick to a disperse volume
Hello,
I've been using Gluster for 2 years, but only with distributed volumes.
I'm now trying to set up dispersed volumes to have some redundancy.
I had no problem creating a functional test volume with 4 bricks and 1 redundancy (Number of Bricks: 1 x (3 + 1) = 4).
I also had no problem replacing a supposedly faulty brick with another one.
My problem is that I can not add a brick to
2017 Aug 25
0
GlusterFS as virtual machine storage
On 8/25/2017 2:21 PM, lemonnierk at ulrar.net wrote:
>> This concern me, and it is the reason I would like to avoid sharding.
>> How can I recover from such a situation? How can I "decide" which
>> (reconstructed) file is the one to keep rather than to delete?
>>
> No need, on a replica 3 that just doesn't happen. That's the main
> advantage of it,
2018 Apr 19
1
Announcing Glusterfs release 3.12.8 (Long Term Maintenance)
The Gluster community is pleased to announce the release of Gluster
3.12.8 (packages available at [1,2,3]).
Release notes for the release can be found at [4].
Thanks,
Gluster community
[1] https://download.gluster.org/pub/gluster/glusterfs/3.12/3.12.8/
[2] https://launchpad.net/~gluster/+archive/ubuntu/glusterfs-3.12
[3] https://build.opensuse.org/project/subprojects/home:glusterfs
[4]
2018 Feb 20
1
Announcing Glusterfs release 3.12.6 (Long Term Maintenance)
The Gluster community is pleased to announce the release of Gluster
3.12.6 (packages available at [1,2,3]).
Release notes for the release can be found at [4].
We still carry the following major issue, as reported in the
release notes:
1.) - Expanding a gluster volume that is sharded may cause file corruption
    Sharded volumes are typically used for VM images; if such volumes
2017 Oct 13
1
Announcing Glusterfs release 3.12.2 (Long Term Maintenance)
The Gluster community is pleased to announce the release of Gluster
3.12.2 (packages available at [1,2,3]).
Release notes for the release can be found at [4].
We still carry the following major issues, as reported in the
release notes:
1.) - Expanding a gluster volume that is sharded may cause file corruption
Sharded volumes are typically used for VM images, if such volumes