Displaying 20 results from an estimated 4000 matches similar to: "[ovirt-users] GlusterFS performance with only one drive per host?"
2018 Mar 24
0
[ovirt-users] GlusterFS performance with only one drive per host?
I would go with at least 4 HDDs per host in RAID 10. Then focus on network
performance, which is usually the bottleneck for gluster.
On Sat, Mar 24, 2018, 00:44 Jayme <jaymef at gmail.com> wrote:
> Do you feel that SSDs are worth the extra cost or am I better off using
> regular HDDs? I'm looking for the best performance I can get with glusterFS
>
> On Fri, Mar 23, 2018 at
2018 Mar 24
0
[ovirt-users] GlusterFS performance with only one drive per host?
My take is that unless you have loads of data and are trying to optimize
for cost/TB, HDDs are probably not the right choice. This is particularly
true for random I/O workloads for which HDDs are really quite bad.
I'd recommend a recent gluster release, and some tuning because the default
settings are not optimized for performance. Some options to consider:
client.event-threads
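(Editorial sketch; the original list is truncated above. Options like these are applied per volume with gluster volume set, and the values below are placeholders, not recommendations from this message:)
# gluster volume set <VOL> client.event-threads 4
# gluster volume set <VOL> server.event-threads 4
# gluster volume set <VOL> performance.stat-prefetch on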
2017 Jun 20
2
[ovirt-users] Very poor GlusterFS performance
[Adding gluster-users]
On Mon, Jun 19, 2017 at 8:16 PM, Chris Boot <bootc at bootc.net> wrote:
> Hi folks,
>
> I have 3x servers in a "hyper-converged" oVirt 4.1.2 + GlusterFS 3.10
> configuration. My VMs run off a replica 3 arbiter 1 volume comprised of
> 6 bricks, which themselves live on two SSDs in each of the servers (one
> brick per SSD). The bricks are
2017 Jun 20
0
[ovirt-users] Very poor GlusterFS performance
Have you tried with:
performance.strict-o-direct : off
performance.strict-write-ordering : off
They can be changed dynamically.
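(For reference, a sketch that is not part of the original mail, using the same per-volume syntax seen elsewhere in this thread:)
# gluster volume set <VOL> performance.strict-o-direct off
# gluster volume set <VOL> performance.strict-write-ordering off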
On 20 June 2017 at 17:21, Sahina Bose <sabose at redhat.com> wrote:
> [Adding gluster-users]
>
> On Mon, Jun 19, 2017 at 8:16 PM, Chris Boot <bootc at bootc.net> wrote:
>
>> Hi folks,
>>
>> I have 3x servers in a
2017 Jun 20
5
[ovirt-users] Very poor GlusterFS performance
Couple of things:
1. Like Darrell suggested, you should enable stat-prefetch and increase
client and server event threads to 4.
# gluster volume set <VOL> performance.stat-prefetch on
# gluster volume set <VOL> client.event-threads 4
# gluster volume set <VOL> server.event-threads 4
2. Also, glusterfs-3.10.1 and above have a shard performance bug fix -
2017 Jun 20
0
[ovirt-users] Very poor GlusterFS performance
Dear Krutika,
Sorry for the naive question, but can you tell me what you base the recommendation on that the client and server event-threads parameters for a volume should be set to 4?
Is this based, for example, on the number of cores a GlusterFS server has?
I am asking because I saw my GlusterFS volumes are set to 2, and I would like to set these parameters to something meaningful for performance
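(Editorial aside, not from the thread: the current per-volume values can be inspected before changing anything.)
# gluster volume get <VOL> client.event-threads
# gluster volume get <VOL> server.event-threads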
2017 Jul 05
2
op-version for reset-brick (Was: Re: [ovirt-users] Upgrading HC from 4.0 to 4.1)
On Wed, Jul 5, 2017 at 8:16 PM, Gianluca Cecchi <gianluca.cecchi at gmail.com>
wrote:
>
>
> On Wed, Jul 5, 2017 at 7:42 AM, Sahina Bose <sabose at redhat.com> wrote:
>
>>
>>
>>> ...
>>>
>>> then the commands I need to run would be:
>>>
>>> gluster volume reset-brick export
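(The excerpt is cut off above. For context, an editorial sketch of the general reset-brick syntax, with placeholders rather than the original volume and brick names:)
# gluster volume reset-brick <VOL> <HOST>:<BRICKPATH> start
# gluster volume reset-brick <VOL> <HOST>:<BRICKPATH> <HOST>:<BRICKPATH> commit force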
2017 Jul 05
1
op-version for reset-brick (Was: Re: [ovirt-users] Upgrading HC from 4.0 to 4.1)
On Wed, Jul 5, 2017 at 5:02 PM, Sahina Bose <sabose at redhat.com> wrote:
>
>
> On Wed, Jul 5, 2017 at 8:16 PM, Gianluca Cecchi <gianluca.cecchi at gmail.com
> > wrote:
>
>>
>>
>> On Wed, Jul 5, 2017 at 7:42 AM, Sahina Bose <sabose at redhat.com> wrote:
>>
>>>
>>>
>>>> ...
>>>>
>>>> then
2017 Jul 25
0
[ovirt-users] ovirt 4.1 hosted engine hyper-converged on glusterfs 3.8.10: "engine" storage domain always complains about "unsynced" elements
On Tue, Jul 25, 2017 at 1:45 PM, yayo (j) <jaganz at gmail.com> wrote:
> 2017-07-25 7:42 GMT+02:00 Kasturi Narra <knarra at redhat.com>:
>
>> These errors are because glusternw is not assigned to the correct
>> interface. Once you attach that, these errors should go away. This has
>> nothing to do with the problem you are seeing.
>>
>
> Hi,
>
2018 Apr 27
3
How to set up a 4 way gluster file system
Hi,
I have 4 servers, each with 1TB of storage set as /dev/sdb1. I would like to
set these up in a RAID 10, which will give me 2TB usable. So mirrored and
concatenated?
The command I am running is as per documents but I get a warning error,
how do I get this to proceed please as the documents do not say.
gluster volume create gv0 replica 2 glusterp1:/bricks/brick1/gv0
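(Editorial illustration; the original command is truncated above, and the additional hostnames below are placeholders that merely follow its naming pattern. A 4-node "RAID 10"-like layout in gluster is a distributed-replicated volume:)
# gluster volume create gv0 replica 2 glusterp1:/bricks/brick1/gv0 glusterp2:/bricks/brick1/gv0 glusterp3:/bricks/brick1/gv0 glusterp4:/bricks/brick1/gv0
With four bricks and replica 2, gluster builds a 2x2 distributed-replicate volume; the warning the poster hits is gluster cautioning about split-brain risk on replica 2, which a later entry in this list explains.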
2017 Jul 25
2
[ovirt-users] ovirt 4.1 hosted engine hyper-converged on glusterfs 3.8.10: "engine" storage domain always complains about "unsynced" elements
2017-07-25 7:42 GMT+02:00 Kasturi Narra <knarra at redhat.com>:
> These errors are because glusternw is not assigned to the correct
> interface. Once you attach that, these errors should go away. This has
> nothing to do with the problem you are seeing.
>
Hi,
You talking about errors like these?
2017-07-24 15:54:02,209+02 WARN [org.ovirt.engine.core.vdsbro
2017 Jul 19
0
[ovirt-users] ovirt 4.1 hosted engine hyper-converged on glusterfs 3.8.10: "engine" storage domain always complains about "unsynced" elements
On 07/19/2017 08:02 PM, Sahina Bose wrote:
> [Adding gluster-users]
>
> On Wed, Jul 19, 2017 at 2:52 PM, yayo (j) <jaganz at gmail.com> wrote:
>
> Hi all,
>
> We have an ovirt cluster hyperconverged with hosted engine on 3
> fully replicated nodes. This cluster has 2 gluster volumes:
>
> - data: volume for
2017 Jul 19
2
[ovirt-users] ovirt 4.1 hosted engine hyper-converged on glusterfs 3.8.10: "engine" storage domain always complains about "unsynced" elements
[Adding gluster-users]
On Wed, Jul 19, 2017 at 2:52 PM, yayo (j) <jaganz at gmail.com> wrote:
> Hi all,
>
> We have an ovirt cluster hyperconverged with hosted engine on 3 fully
> replicated nodes. This cluster has 2 gluster volumes:
>
> - data: volume for the Data (Master) Domain (for VMs)
> - engine: volume for the hosted_storage Domain (for the hosted engine)
>
>
2017 Jul 03
1
[ovirt-users] Gluster issue with /var/lib/glusterd/peers/<ip> file
On Sun, Jul 2, 2017 at 5:38 AM, Mike DePaulo <mikedep333 at gmail.com> wrote:
> Hi everyone,
>
> I have ovirt 4.1.1/4.1.2 running on 3 hosts with a gluster hosted engine.
>
> I was working on setting up a network for gluster storage and
> migration. The addresses for it will be 10.0.20.x, rather than
> 192.168.1.x for the management network. However, I switched gluster
2017 Jul 06
2
op-version for reset-brick (Was: Re: [ovirt-users] Upgrading HC from 4.0 to 4.1)
On Thu, Jul 6, 2017 at 6:55 AM, Atin Mukherjee <amukherj at redhat.com> wrote:
>
>
>>
> You can switch back to info mode the moment this is hit one more time with
> the debug log enabled. What I'd need here is the glusterd log (with debug
> mode) to figure out the exact cause of the failure.
>
>
>>
>> Let me know,
>> thanks
>>
>>
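(Editorial note, not from the thread: glusterd's log level can be raised for troubleshooting by restarting the daemon with a higher log level, e.g. via its --log-level option; whether this matches the exact procedure discussed here is not shown in the excerpt.)
# glusterd --log-level DEBUG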
2018 Apr 27
2
How to set up a 4 way gluster file system
Hi,
I have 4 nodes, so a quorum would be 3 of 4. The question, I suppose, is why
the documentation gives this command as an example without qualifying it.
So am I running the wrong command? I want a "raid10"
On 27 April 2018 at 18:05, Karthik Subrahmanya <ksubrahm at redhat.com> wrote:
> Hi,
>
> With replica 2 volumes one can easily end up in split-brains if there are
2023 Jun 19
9
[PATCH v2 0/5] clean up block_commit_write
Changelog:
v1--v2:
1. Re-order patches to avoid breaking compilation.
Bean Huo (5):
fs/buffer: clean up block_commit_write
ext4: No need to check return value of block_commit_write()
fs/ocfs2: No need to check return value of block_commit_write()
udf: No need to check return value of block_commit_write()
fs/buffer.c: convert block_commit_write to return void
fs/buffer.c
2018 Apr 27
0
How to set up a 4 way gluster file system
Hi,
With replica 2 volumes one can easily end up in split-brain if there are
frequent disconnects and high I/O going on.
If you use replica 3 or arbiter volumes, the quorum mechanism guards you,
giving you both consistency and availability.
But in replica 2 volumes, quorum does not make sense, since it needs both
nodes up to guarantee consistency, which costs availability.
If
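(Editorial illustration, with placeholder hostnames and paths that are not taken from the thread: an arbiter volume is created by marking the third brick as the arbiter, which stores only metadata.)
# gluster volume create gv0 replica 3 arbiter 1 host1:/bricks/brick1/gv0 host2:/bricks/brick1/gv0 host3:/bricks/arbiter/gv0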
2017 Jul 20
2
[ovirt-users] ovirt 4.1 hosted engine hyper-converged on glusterfs 3.8.10: "engine" storage domain always complains about "unsynced" elements
Hi,
Thank you for the answer and sorry for delay:
2017-07-19 16:55 GMT+02:00 Ravishankar N <ravishankar at redhat.com>:
> 1. What does the glustershd.log say on all 3 nodes when you run the
> command? Does it complain about these files at all?
>
No, glustershd.log is clean; there is no extra logging after the command on all 3 nodes
> 2. Are these 12 files also present in the 3rd data brick?
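(Editorial aside: pending-heal entries of the kind discussed here are typically listed with the command below; the excerpt does not show which exact command the thread used.)
# gluster volume heal <VOL> info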
2003 May 29
5
Comparison Operator
Does R have a comparison operator similar to the Like function, for example:
a<-"Is a Fish"
b<-"Fish"
if(b in a){c<-TRUE}
Michael R Howard
Micron Technology Inc. Boise ID.
Fab C Engineering Software (FCES)
Software Engineer