similar to: BoF - Gluster for VM store use case

Displaying 20 results from an estimated 4000 matches similar to: "BoF - Gluster for VM store use case"

2017 Nov 01
0
BoF - Gluster for VM store use case
----- Original Message ----- > From: "Sahina Bose" <sabose at redhat.com> > To: gluster-users at gluster.org > Cc: "Gluster Devel" <gluster-devel at gluster.org> > Sent: Tuesday, October 31, 2017 11:46:57 AM > Subject: [Gluster-users] BoF - Gluster for VM store use case > > During Gluster Summit, we discussed gluster volumes as storage for VM
2017 Nov 01
1
[Gluster-devel] BoF - Gluster for VM store use case
On 10/31/2017 08:36 PM, Ben Turner wrote: >> * Erasure coded volumes with sharding - seen as a good fit for VM disk >> storage > I am working on this with a customer; we have been able to do 400-500 MB/sec writes! Normally things max out at ~150-250. The trick is to use multiple files, create the LVM stack and use native LVM striping. We have found that 4-6 files seems to give
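A rough sketch of the LVM striping approach described there, done inside the guest; the device names, stripe count, and stripe size below are illustrative assumptions, not values from the thread:

    # Assume the VM was given 4 virtual disks, each backed by its own image
    # file on the Gluster volume (device names are hypothetical).
    pvcreate /dev/vdb /dev/vdc /dev/vdd /dev/vde
    vgcreate vg_data /dev/vdb /dev/vdc /dev/vdd /dev/vde
    # Stripe the logical volume across all 4 PVs (-i 4); the 128 KiB stripe
    # size (-I 128) is only an example value.
    lvcreate -n lv_data -l 100%FREE -i 4 -I 128 vg_data
    mkfs.xfs /dev/vg_data/lv_data
    mount /dev/vg_data/lv_data /data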
2017 Jul 05
2
op-version for reset-brick (Was: Re: [ovirt-users] Upgrading HC from 4.0 to 4.1)
On Wed, Jul 5, 2017 at 8:16 PM, Gianluca Cecchi <gianluca.cecchi at gmail.com> wrote: > > > On Wed, Jul 5, 2017 at 7:42 AM, Sahina Bose <sabose at redhat.com> wrote: > >> >> >>> ... >>> >>> then the commands I need to run would be: >>> >>> gluster volume reset-brick export
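For context, the reset-brick sequence being discussed looks roughly like this; the volume name "export" appears in the quoted command, while the host and brick path are placeholders:

    # Take the brick offline so it can be reconfigured
    gluster volume reset-brick export host1:/gluster/brick/export start
    # ... change the brick's hostname/IP or re-create its filesystem here ...
    # Bring the same (or replacement) brick back into the volume
    gluster volume reset-brick export host1:/gluster/brick/export \
        host1:/gluster/brick/export commit force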
2017 Jun 20
2
[ovirt-users] Very poor GlusterFS performance
[Adding gluster-users] On Mon, Jun 19, 2017 at 8:16 PM, Chris Boot <bootc at bootc.net> wrote: > Hi folks, > > I have 3x servers in a "hyper-converged" oVirt 4.1.2 + GlusterFS 3.10 > configuration. My VMs run off a replica 3 arbiter 1 volume comprised of > 6 bricks, which themselves live on two SSDs in each of the servers (one > brick per SSD). The bricks are
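A hypothetical layout matching that description (3 servers, 2 SSD-backed bricks each, replica 3 with arbiter); hostnames and brick paths are invented for illustration:

    # With "replica 3 arbiter 1", every third brick in each replica set holds
    # only metadata (the arbiter).
    gluster volume create vmstore replica 3 arbiter 1 \
      srv1:/bricks/ssd1/vm srv2:/bricks/ssd1/vm srv3:/bricks/ssd1/vm \
      srv2:/bricks/ssd2/vm srv3:/bricks/ssd2/vm srv1:/bricks/ssd2/vm
    gluster volume start vmstore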
2017 Jul 24
3
[ovirt-users] ovirt 4.1 hosted engine hyper converged on glusterfs 3.8.10 : "engine" storage domain always complains about "unsynced" elements
Hi, UI refreshed but the problem still remains ... No specific error; I only have these errors, but I've read that there is no problem if I have this kind of error: 2017-07-24 15:53:59,823+02 INFO [org.ovirt.engine.core.vdsbroker.gluster.GlusterServersListVDSCommand] (DefaultQuartzScheduler2) [b7590c4] START, GlusterServersListVDSCommand(HostName = node01.localdomain.local,
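When an oVirt storage domain keeps reporting unsynced entries, the usual first check on the Gluster side is heal info for the backing volume (assumed here to be named "engine", as in the subject):

    # List entries still pending self-heal, per brick
    gluster volume heal engine info
    # Check whether any of them are actually in split-brain
    gluster volume heal engine info split-brain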
2017 Jul 05
1
op-version for reset-brick (Was: Re: [ovirt-users] Upgrading HC from 4.0 to 4.1)
On Wed, Jul 5, 2017 at 5:02 PM, Sahina Bose <sabose at redhat.com> wrote: > > > On Wed, Jul 5, 2017 at 8:16 PM, Gianluca Cecchi <gianluca.cecchi at gmail.com > > wrote: > >> >> >> On Wed, Jul 5, 2017 at 7:42 AM, Sahina Bose <sabose at redhat.com> wrote: >> >>> >>> >>>> ... >>>> >>>> then
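Since this sub-thread is about the cluster op-version required for reset-brick, checking and raising it looks roughly like the following; the target number is only an example and should match the installed release:

    # Show the current cluster operating version
    gluster volume get all cluster.op-version
    # Raise it to the value supported by the running Gluster packages
    gluster volume set all cluster.op-version 30712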
2017 Jun 20
5
[ovirt-users] Very poor GlusterFS performance
Couple of things: 1. Like Darrell suggested, you should enable stat-prefetch and increase client and server event threads to 4. # gluster volume set <VOL> performance.stat-prefetch on # gluster volume set <VOL> client.event-threads 4 # gluster volume set <VOL> server.event-threads 4 2. Also glusterfs-3.10.1 and above has a shard performance bug fix -
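The tuning commands from that snippet, restated in runnable form (replace <VOL> with the volume name):

    gluster volume set <VOL> performance.stat-prefetch on
    gluster volume set <VOL> client.event-threads 4
    gluster volume set <VOL> server.event-threads 4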
2017 Jun 30
2
Very slow performance on Sharded GlusterFS
Hi, I have 2 nodes with 20 bricks in total (10+10). First test: 2 Nodes with Distributed - Striped - Replicated (2 x 2) 10GbE Speed between nodes "dd" performance: 400mb/s and higher Downloading a large file from the internet directly to the gluster: 250-300mb/s Now the same test without Stripe but with sharding. These results are the same when I set shard size 4MB or
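For reference, a sequential-write test of the kind described could look like the following; the mount point and sizes are assumptions, not the poster's exact command:

    # Write 10 GiB through the FUSE mount, flushing before the rate is reported
    dd if=/dev/zero of=/mnt/testvol/ddtest.img bs=1M count=10240 conv=fdatasync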
2017 Jun 20
0
[ovirt-users] Very poor GlusterFS performance
Have you tried with: performance.strict-o-direct : off performance.strict-write-ordering : off They can be changed dynamically. On 20 June 2017 at 17:21, Sahina Bose <sabose at redhat.com> wrote: > [Adding gluster-users] > > On Mon, Jun 19, 2017 at 8:16 PM, Chris Boot <bootc at bootc.net> wrote: > >> Hi folks, >> >> I have 3x servers in a
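Those two options as volume-set commands (the volume name is a placeholder):

    gluster volume set <VOL> performance.strict-o-direct off
    gluster volume set <VOL> performance.strict-write-ordering off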
2017 Jun 30
1
Very slow performance on Sharded GlusterFS
Hi, I have 2 nodes with 20 bricks in total (10+10). First test: 2 Nodes with Distributed - Striped - Replicated (2 x 2) 10GbE Speed between nodes "dd" performance: 400mb/s and higher Downloading a large file from the internet directly to the gluster: 250-300mb/s Now the same test without Stripe but with sharding. These results are the same when I set shard size 4MB or
2017 Jun 30
3
Very slow performance on Sharded GlusterFS
Hi Krutika, Sure, here is volume info:

root at sr-09-loc-50-14-18:/# gluster volume info testvol

Volume Name: testvol
Type: Distributed-Replicate
Volume ID: 30426017-59d5-4091-b6bc-279a905b704a
Status: Started
Snapshot Count: 0
Number of Bricks: 10 x 2 = 20
Transport-type: tcp
Bricks:
Brick1: sr-09-loc-50-14-18:/bricks/brick1
Brick2: sr-09-loc-50-14-18:/bricks/brick2
Brick3:
2017 Jul 25
0
[ovirt-users] ovirt 4.1 hosted engine hyper converged on glusterfs 3.8.10 : "engine" storage domain always complains about "unsynced" elements
These errors are because glusternw is not assigned to the correct interface. Once you attach that, these errors should go away. This has nothing to do with the problem you are seeing. Sahina, any idea about the engine not showing the correct volume info? On Mon, Jul 24, 2017 at 7:30 PM, yayo (j) <jaganz at gmail.com> wrote: > Hi, > > UI refreshed but the problem still remains ... >
2017 Jun 30
0
Very slow performance on Sharded GlusterFS
Could you please provide the volume-info output? -Krutika On Fri, Jun 30, 2017 at 4:23 PM, <gencer at gencgiyen.com> wrote: > Hi, > > I have 2 nodes with 20 bricks in total (10+10). > > > > First test: > > > > 2 Nodes with Distributed - Striped - Replicated (2 x 2) > > 10GbE Speed between nodes > > > > "dd" performance:
2011 Oct 23
4
summarizing a data frame i.e. count -> group by
Hello, This is one problem at a time :) I have a data frame df that looks like this:

    time partitioning_mode workload runtime
1      1          sharding    query     607
2      1          sharding    query      85
3      1          sharding    query      52
4      1          sharding    query      79
5      1          sharding    query      77
6      1          sharding    query      67
7      1
2011 Oct 23
1
unfold list (variable number of columns) into a data frame
Hello, I used R a lot one year ago and now I am a bit rusty :) I have my raw data, which corresponds to the list of runtimes per minute (minute "1" "2" "3" in two database modes "sharding" and "query" and two workload types "query" and "refresh") and as a list of char arrays that looks like this: > str(data) List of 122 $ :
2017 Jun 20
0
[ovirt-users] Very poor GlusterFS performance
Dear Krutika, Sorry for asking so naively but can you tell me on what factor do you base that the client and server event-threads parameters for a volume should be set to 4? Is this metric for example based on the number of cores a GlusterFS server has? I am asking because I saw my GlusterFS volumes are set to 2 and would like to set these parameters to something meaningful for performance
2009 Feb 18
2
[LLVMdev] Parametric polymorphism
> Why do you say that people who compile, e.g., functional languages > would benefit from type variables in LLVM? > I like the level the LLVM is at, and would prefer to deal with > instantiating parametric polymorphism at a higher level. I'm surprised you're happy with a non-polymorphic llvm. Does Cayenne target llvm? Dependent types take polymorphism to new heights -- but
2017 Jun 12
0
Gluster daemon fails to start
On Mon, Jun 12, 2017 at 7:30 PM, Langley, Robert <Robert.Langley at ventura.org > wrote: > As far as the peer status (and I now remember seeing this earlier) the > issue appears to be that the host name for gsaov07 is attempting to resolve > over the wrong network for gluster "ent...." and not "stor.local". > So, it may be as simple as removing gsaov07 as a
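The fix suggested in this exchange, detaching the peer and re-probing it by a name that resolves over the storage network, would look roughly like this; the exact hostname is a guess based on the "stor.local" network mentioned above:

    # Remove the peer that was probed over the wrong network
    gluster peer detach gsaov07
    # Probe it again using its address on the storage network
    gluster peer probe gsaov07.stor.local
    gluster peer status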
2017 Jun 12
2
Gluster daemon fails to start
As far as the peer status (and I now remember seeing this earlier) the issue appears to be that the host name for gsaov07 is attempting to resolve over the wrong network for gluster "ent...." and not "stor.local". So, it may be as simple as removing gsaov07 as a peer, then probing over the correct network. I'll follow up with the Engine log. Sent using OWA for iPhone
2018 Apr 22
4
Reconstructing files from shards
On Sun, 22 Apr 2018, 10:46 Alessandro Briosi <ab1 at metalit.com> wrote: > Imho the easiest path would be to turn off sharding on the volume and > simply do a copy of the files (to a different directory, or rename and > then copy i.e.) > > This should simply store the files without sharding. > If you turn off sharding on a sharded volume with data in it, all sharded
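The warning above is that disabling sharding on a volume that already holds sharded data is unsafe. One manual approach often described for shard reconstruction (not quoted in this snippet) is to concatenate the base file with its numbered shards from the bricks' hidden .shard directory, in order. A very rough sketch, assuming all pieces are readable from one replica brick and using made-up paths and a made-up GFID:

    # Read the file's GFID from the brick path (not the FUSE mount)
    getfattr -n trusted.gfid -e hex /bricks/brick1/images/vm01.img
    # Shards are named <gfid>.1, <gfid>.2, ... under the brick's .shard dir;
    # concatenate the base file and the shards in numeric order elsewhere.
    cat /bricks/brick1/images/vm01.img \
        /bricks/brick1/.shard/6a52xxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx.1 \
        /bricks/brick1/.shard/6a52xxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx.2 \
        > /restore/vm01.img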