Displaying 20 results from an estimated 10000 matches similar to: "Sharding option for distributed volumes"
2017 Sep 21
0
Sharding option for distributed volumes
Hello Ji-Hyeon,
Thanks, is that option available in the 3.12 Gluster release? We're
still on 3.8 and just playing around with the latest version in order to get
our solution migrated.
Thank you!
9/21/17 2:26 PM, Ji-Hyeon Gim wrote:
> Hello Pavel!
>
> In my opinion, you need to check the features.shard-block-size option first.
> If a file is no bigger than this value, it would not be
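For reference, a minimal sketch of inspecting and changing that option, assuming a hypothetical volume named "myvol" (the default shard block size is 64MB):

  # show the current value (falls back to the default if never set)
  gluster volume get myvol features.shard-block-size

  # change it; only files created afterwards are affected
  gluster volume set myvol features.shard-block-size 64MB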
2018 Apr 05
1
Enable sharding on active volume
Hello,
I wanted to post this as a question to the group before we launch it in a test environment. Will Gluster handle enabling sharding on an existing distributed-replicated environment, and is it safe to do?
The environment in question is a VM image storage cluster with some disk files starting to grow beyond the size of some of the smaller bricks.
-- Ian
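For context, enabling sharding on a live volume is a single option flip; a minimal sketch, assuming a hypothetical volume named "vmstore". Note that sharding only applies to files created after the option is enabled; existing files stay whole:

  gluster volume set vmstore features.shard on
  gluster volume set vmstore features.shard-block-size 64MB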
2017 Dec 18
0
Testing sharding on tiered volume
----- Original Message -----
> From: "Viktor Nosov" <vnosov at stonefly.com>
> To: gluster-users at gluster.org
> Cc: vnosov at stonefly.com
> Sent: Friday, December 8, 2017 5:45:25 PM
> Subject: [Gluster-users] Testing sharding on tiered volume
>
> Hi,
>
> I'm looking to use sharding on a tiered volume. This is a very attractive
> feature that could
2017 Dec 08
2
Testing sharding on tiered volume
Hi,
I'm looking to use sharding on a tiered volume. This is a very attractive
feature that could benefit tiered volumes by letting them handle larger files
without hitting the "out of (hot) space" problem.
I decided to set up a test configuration on GlusterFS 3.12.3 where the tiered
volume has a 2TB cold and a 1GB hot segment. The shard size is set to 16MB.
For testing, 100GB files are used. It seems writes
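A sketch of the shard-related part of such a test setup, assuming a hypothetical volume named "tiervol" (the hot tier itself would be attached separately with the volume tier commands):

  gluster volume set tiervol features.shard on
  gluster volume set tiervol features.shard-block-size 16MB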
2018 Feb 27
1
On sharded tiered volume, only first shard of new file goes on hot tier.
Does anyone have any ideas about how to fix, or work around, the
following issue?
Thanks!
Bug 1549714 - On sharded tiered volume, only first shard of new file
goes on hot tier.
https://bugzilla.redhat.com/show_bug.cgi?id=1549714
On sharded tiered volume, only first shard of new file goes on hot tier.
On a sharded tiered volume, only the first shard of a new file
goes on the hot tier, the rest
2017 Sep 08
0
GlusterFS as virtual machine storage
Seems to be so, but if we look back at the described setup and procedure -
what is the reason for IOPS to stop/fail? Rebooting a node is somewhat
similar to updating Gluster, replacing cabling, etc. IMO this should not
always end up with the arbiter blaming the other node, and even though I did
not investigate this issue deeply, I do not believe the blame is the reason
for IOPS to drop.
On Sep 7, 2017
2018 Jan 18
1
[Possibile SPAM] Re: Problem with Gluster 3.12.4, VM and sharding
Another update.
I've set up a replica 3 volume without sharding and tried to install a VM
on a qcow2 volume on that device; however, the result is the same and the
VM image has been corrupted, at exactly the same point.
Here's the volume info of the created volume:
Volume Name: gvtest
Type: Replicate
Volume ID: e2ddf694-ba46-4bc7-bc9c-e30803374e9d
Status: Started
Snapshot Count: 0
Number
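For reference, the listing above is the beginning of plain volume-info output:

  gluster volume info gvtest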
2017 Sep 06
0
GlusterFS as virtual machine storage
Hm, I never had to do that and I never had that problem. Is that an
arbiter-specific thing? With replica 3 it just works.
On Wed, Sep 06, 2017 at 03:59:14PM -0400, Alastair Neil wrote:
> you need to set
>
> cluster.server-quorum-ratio 51%
>
> On 6 September 2017 at 10:12, Pavel Szalbot <pavel.szalbot at gmail.com> wrote:
>
> > Hi all,
> >
>
2018 Apr 22
0
Reconstructing files from shards
So a stock oVirt with Gluster install that uses sharding:
A. Can't safely have sharding turned off once files are in use
B. Can't be expanded with additional bricks
Ouch.
On April 22, 2018 5:39:20 AM EDT, Gandalf Corvotempesta <gandalf.corvotempesta at gmail.com> wrote:
>On Sun, 22 Apr 2018 at 10:46, Alessandro Briosi <ab1 at metalit.com>
>wrote:
>
>> Imho
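For context, the reconstruction under discussion works roughly as sketched below: after the first shard-block-size bytes, a file's pieces live under the hidden .shard directory on the bricks, named <gfid>.N. The paths and the <gfid> placeholder are hypothetical, and this assumes all shards sit on one replicated brick (on a distributed volume they are spread across subvolumes), so treat it as a sketch only:

  # find the base file's gfid on a brick
  getfattr -n trusted.gfid -e hex /data/brick1/vol/images/vm1.img

  # concatenate the base file and its shards in numeric order
  cat /data/brick1/vol/images/vm1.img \
      /data/brick1/vol/.shard/<gfid>.1 \
      /data/brick1/vol/.shard/<gfid>.2 > /tmp/vm1-restored.img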
2017 Sep 07
0
GlusterFS as virtual machine storage
Hi Neil, docs mention two live nodes of replica 3 blaming each other and
refusing to do IO.
https://gluster.readthedocs.io/en/latest/Administrator%20Guide/Split%20brain%20and%20ways%20to%20deal%20with%20it/#1-replica-3-volume
On Sep 7, 2017 17:52, "Alastair Neil" <ajneil.tech at gmail.com> wrote:
> *shrug* I don't use arbiter for VM workloads, just straight replica 3.
2017 Sep 07
2
GlusterFS as virtual machine storage
True, but working your way into that problem with replica 3 is a lot harder
than with just replica 2 + arbiter.
On 7 September 2017 at 14:06, Pavel Szalbot <pavel.szalbot at gmail.com> wrote:
> Hi Neil, docs mention two live nodes of replica 3 blaming each other and
> refusing to do IO.
>
> https://gluster.readthedocs.io/en/latest/Administrator%
>
2018 Jan 18
1
[Possibile SPAM] Re: Problem with Gluster 3.12.4, VM and sharding
Thanks for that input. Adding Niels since the issue is reproducible only
with libgfapi.
-Krutika
On Thu, Jan 18, 2018 at 1:39 PM, Ing. Luca Lazzeroni - Trend Servizi Srl <
luca at gvnet.it> wrote:
> Another update.
>
> I've set up a replica 3 volume without sharding and tried to install a VM
> on a qcow2 volume on that device; however, the result is the same and the VM
>
2018 Jan 16
1
[Possibile SPAM] Re: Problem with Gluster 3.12.4, VM and sharding
I've just done all the steps to reproduce the problem.
The VM volume has been created via "qemu-img create -f qcow2
Test-vda2.qcow2 20G" on the gluster volume mounted via FUSE. I've also tried
to create the volume with preallocated metadata, which pushes the
problem a bit further away (in time). The volume is a replica 3 arbiter 1
volume hosted on XFS bricks.
Here are the
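For reference, a sketch of the two image-creation variants being compared here (the second is the "preallocated metadata" case mentioned above; file name as in the original command):

  # plain qcow2, as in the original report
  qemu-img create -f qcow2 Test-vda2.qcow2 20G

  # qcow2 with preallocated metadata, which reportedly delays the corruption
  qemu-img create -f qcow2 -o preallocation=metadata Test-vda2.qcow2 20G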
2018 Mar 26
0
Sharding problem - multiple shard copies with mismatching gfids
The gfid mismatch here is between the shard and its "link-to" file, the
creation of which happens at a layer below that of the shard translator on
the stack.
Adding DHT devs to take a look.
-Krutika
On Mon, Mar 26, 2018 at 1:09 AM, Ian Halliday <ihalliday at ndevix.com> wrote:
> Hello all,
>
> We are having a rather interesting problem with one of our VM storage
>
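A sketch of how such a mismatch can be inspected directly on the bricks (brick paths and shard name hypothetical); the trusted.gfid xattr of each copy of a shard, including its DHT link-to file, should normally agree:

  # dump the trusted xattrs of the same shard on two bricks and compare
  getfattr -d -m . -e hex /data/brick1/vol/.shard/<gfid>.1
  getfattr -d -m . -e hex /data/brick2/vol/.shard/<gfid>.1

  # a link-to file is recognizable by its trusted.glusterfs.dht.linkto xattr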
2018 Jan 16
1
[Possibile SPAM] Re: Problem with Gluster 3.12.4, VM and sharding
An update:
I've tried, for my tests, to create the VM volume as
qemu-img create -f qcow2 -o preallocation=full
gluster://gluster1/Test/Test-vda.img 20G
et voilà!
No errors at all, neither in the bricks' log files (the "link failed" message
disappeared) nor in the VM (no corruption, and it installed successfully).
I'll do another test with a fully preallocated raw image.
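A sketch of that follow-up test, reusing the path from the command above with the raw format (raw also accepts preallocation=full):

  qemu-img create -f raw -o preallocation=full gluster://gluster1/Test/Test-vda.img 20G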
On
2018 Jan 19
1
[Possibile SPAM] Re: Problem with Gluster 3.12.4, VM and sharding
After another test (I'm trying to convince myself about gluster reliability
:-) I've found that with
performance.write-behind off
the VM works without problems. Now I'll try with write-behind on and
flush-behind on too.
On 18/01/2018 13:30, Krutika Dhananjay wrote:
> Thanks for that input. Adding Niels since the issue is reproducible
> only with libgfapi.
>
>
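For reference, a sketch of toggling the two options being tested, assuming the "gvtest" volume named earlier in the thread:

  # the combination that reportedly avoids the corruption
  gluster volume set gvtest performance.write-behind off

  # the next experiment: write-behind back on, plus flush-behind
  gluster volume set gvtest performance.write-behind on
  gluster volume set gvtest performance.flush-behind on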
2018 Jan 16
1
[Possibile SPAM] Re: Problem with Gluster 3.12.4, VM and sharding
I've run the test with the raw image format (preallocated too) and the
corruption problem is still there (but without errors in the bricks' log files).
What does the "link" error in the bricks' log files mean?
I've looked through the source code for the lines where it happens, and it
seems to be a warning (it doesn't imply a failure).
On 16/01/2018 17:39, Ing. Luca Lazzeroni - Trend
2017 Sep 06
2
GlusterFS as virtual machine storage
you need to set
cluster.server-quorum-ratio 51%
On 6 September 2017 at 10:12, Pavel Szalbot <pavel.szalbot at gmail.com> wrote:
> Hi all,
>
> I promised to do some testing and I finally found some time and
> infrastructure.
>
> So I have 3 servers with Gluster 3.10.5 on CentOS 7. I created
> a replicated volume with arbiter (2+1) and a VM on KVM (via
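For reference, a sketch of applying the quorum settings mentioned above; cluster.server-quorum-ratio is cluster-wide, so it is set on "all" rather than on a single volume (volume name hypothetical):

  gluster volume set gv0 cluster.server-quorum-type server
  gluster volume set all cluster.server-quorum-ratio 51%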
2018 Jan 17
1
[Possibile SPAM] Re: Problem with Gluster 3.12.4, VM and sharding
On Tue, Jan 16, 2018 at 10:47 PM, Ing. Luca Lazzeroni - Trend Servizi Srl <
luca at gvnet.it> wrote:
> I've run the test with the raw image format (preallocated too) and the
> corruption problem is still there (but without errors in the bricks' log files).
>
> What does the "link" error in the bricks' log files mean?
>
> I've seen the source code looking for the
2018 Jan 17
1
[Possibile SPAM] Re: Problem with Gluster 3.12.4, VM and sharding
Of course. Here's the full log. Please note that in FUSE mode
everything apparently works without problems. I've installed 4 VMs and
updated them without issues.
On 17/01/2018 11:00, Krutika Dhananjay wrote:
>
>
> On Tue, Jan 16, 2018 at 10:47 PM, Ing. Luca Lazzeroni - Trend Servizi
> Srl <luca at gvnet.it> wrote:
>
>