similar to: [ovirt-users] GlusterFS performance with only one drive per host?

Displaying 20 results from an estimated 3000 matches similar to: "[ovirt-users] GlusterFS performance with only one drive per host?"

2018 Mar 24
0
[ovirt-users] GlusterFS performance with only one drive per host?
My take is that unless you have loads of data and are trying to optimize for cost/TB, HDDs are probably not the right choice. This is particularly true for random I/O workloads for which HDDs are really quite bad. I'd recommend a recent gluster release, and some tuning because the default settings are not optimized for performance. Some options to consider: client.event-threads
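A minimal sketch of that tuning, assuming a volume named VOL (a placeholder) and the standard gluster volume-set interface; the values are starting points for experimentation, not a definitive recommendation:

    # Raise event threads and enable stat-prefetch (values are workload-dependent)
    gluster volume set VOL client.event-threads 4
    gluster volume set VOL server.event-threads 4
    gluster volume set VOL performance.stat-prefetch on
    # Read back what is currently in effect
    gluster volume get VOL all | grep -E 'event-threads|stat-prefetch'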
2018 Mar 22
2
[ovirt-users] GlusterFS performance with only one drive per host?
On Mon, Mar 19, 2018 at 5:57 PM, Jayme <jaymef at gmail.com> wrote: > I'm spec'ing a new oVirt build using three Dell R720's w/ 256GB. I'm > considering storage options. I don't have a requirement for high amounts > of storage, I have a little over 1TB to store but want some overhead so I'm > thinking 2TB of usable space would be sufficient. > >
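For rough capacity math, assuming a replica-3 layout like the ones discussed elsewhere in these threads (the quoted message does not commit to one), usable space is raw space divided by the number of full data copies:

    # Sizing sketch; replica counts assumed for illustration
    # replica 3:           2 TB usable => ~2 TB of brick per host, 6 TB raw total
    # replica 3 arbiter 1: 2 TB usable => ~2 TB per data host; the arbiter brick
    #                      stores only metadata and can be far smaller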
2017 Jun 20
0
[ovirt-users] Very poor GlusterFS performance
Have you tried with: performance.strict-o-direct : off performance.strict-write-ordering : off They can be changed dynamically. On 20 June 2017 at 17:21, Sahina Bose <sabose at redhat.com> wrote: > [Adding gluster-users] > > On Mon, Jun 19, 2017 at 8:16 PM, Chris Boot <bootc at bootc.net> wrote: > >> Hi folks, >> >> I have 3x servers in a
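A sketch of the corresponding commands, with VOL as a placeholder volume name; both options apply to a live volume without a remount:

    gluster volume set VOL performance.strict-o-direct off
    gluster volume set VOL performance.strict-write-ordering off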
2017 Jun 20
0
[ovirt-users] Very poor GlusterFS performance
Dear Krutika, Sorry for the naive question, but on what basis do you recommend setting the client and server event-threads parameters for a volume to 4? Is that figure based, for example, on the number of cores a GlusterFS server has? I am asking because I saw my GlusterFS volumes are set to 2 and would like to set these parameters to something meaningful for performance
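One plausible reading, offered here only as an assumption, is to size event-threads to the host's core count and cap it at the value suggested in this thread; a shell sketch with VOL as a placeholder:

    CORES=$(nproc)
    THREADS=$(( CORES < 4 ? CORES : 4 ))   # cap at 4, per the advice above
    gluster volume set VOL client.event-threads $THREADS
    gluster volume set VOL server.event-threads $THREADS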
2017 Jun 20
2
[ovirt-users] Very poor GlusterFS performance
[Adding gluster-users] On Mon, Jun 19, 2017 at 8:16 PM, Chris Boot <bootc at bootc.net> wrote: > Hi folks, > > I have 3x servers in a "hyper-converged" oVirt 4.1.2 + GlusterFS 3.10 > configuration. My VMs run off a replica 3 arbiter 1 volume comprised of > 6 bricks, which themselves live on two SSDs in each of the servers (one > brick per SSD). The bricks are
2007 Mar 06
0
virus found in sent message "Re: Valeu!!"
Attention: openssh-unix-dev at mindrot.org A virus was found in an email message that you just sent. This email scanner intercepted it and prevented the message from reaching its destination. The virus was reported as: Worm.Somefool.AR Please update your antivirus software or contact your technical support as soon as possible, because you have a virus on your computer.
2002 Apr 02
2
Trouble with R and cronjobs
I am having problems trying to run R from a crontab job. I have a c-shell file that calls the R script. I get an error concerning the X11 display (see below). I have included the c-shell file and the output from the crontab job. It appears that my DISPLAY environment variable is not set. Is that necessary, even when the output of the plot command is to a png file? Can someone tell me how to
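One common workaround (not from this thread) is to give the cron job a virtual X display, since R builds of that era route png() through X11; where R has cairo support, options(bitmapType="cairo") avoids X11 entirely. A crontab sketch with hypothetical paths:

    # Run the plot script daily at 06:00 under a virtual display
    0 6 * * * xvfb-run -a /usr/bin/Rscript /home/user/plot.R > /tmp/plot.log 2>&1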
2017 Jun 20
5
[ovirt-users] Very poor GlusterFS performance
Couple of things: 1. Like Darrell suggested, you should enable stat-prefetch and increase client and server event threads to 4. # gluster volume set <VOL> performance.stat-prefetch on # gluster volume set <VOL> client.event-threads 4 # gluster volume set <VOL> server.event-threads 4 2. Also glusterfs-3.10.1 and above has a shard performance bug fix -
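To confirm those settings took effect, the options can be read back individually (VOL again a placeholder):

    gluster volume get VOL performance.stat-prefetch
    gluster volume get VOL client.event-threads
    gluster volume get VOL server.event-threads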
2014 Jun 05
0
Using BTRFS on SSD now ?
Hi, I just received a new laptop with a Micron 256GB SSD, and I plan to install Fedora 20 onto it. I'm considering either BTRFS or ext4 (over LUKS-encrypted LVM) for this machine, but I'm afraid BTRFS might generate too many writes and shorten the SSD's lifespan... Or am I mistaken? Are there any pros/cons currently, on a 3.14 kernel, to using BTRFS with an SSD? Is there
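A hedged pre-flight sketch for the SSD concern, using standard util-linux tools rather than anything Fedora- or BTRFS-specific: confirm the drive advertises TRIM, then trim on a schedule instead of mounting with continuous discard (the device name is an assumption):

    lsblk --discard /dev/sda   # non-zero DISC-GRAN/DISC-MAX means TRIM is supported
    sudo fstrim -v /           # one-off trim of the mounted filesystem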
2006 Feb 28
1
Problem with incoming call, Please help
Hi All, I was able to install Asterisk and make outgoing calls. Recently I purchased two DIDs, and I am facing a problem configuring them in my Asterisk; I hope that with the help I get from this list I will be able to configure them successfully. My errors are: Feb 28 08:31:58 NOTICE[19133]: pbx.c:1331 pbx_extension_helper: Cannot find extension context 'context_mantra2' Feb 28 08:31:58
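That NOTICE usually means the dialplan never defines the context the incoming channel is being sent to; a sketch for checking, assuming the stock config path (both commands are standard, but the context name is taken from the error above):

    # Does the dialplan define the context at all?
    grep -n '\[context_mantra2\]' /etc/asterisk/extensions.conf
    # Inspect it from the CLI ("show dialplan" on Asterisk 1.x, "dialplan show" later)
    asterisk -rx 'show dialplan context_mantra2'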
2001 Jan 29
1
Problem with OpenSSH 2.3.0p1 and Linux kernel 2.4.1pre8 (Disconnecting: fork failed: Resource temporarily unavailable)
I'm having a problem with OpenSSH 2.3.0p1 and Linux kernel 2.4.1pre8. After a client connects and is authenticated, sshd fails to fork, disconnects and dies. Normally I run sshd out of inetd with the -i flag. Thinking this might be the problem, I ran it in daemon mode from the command line. The results were the same. Running with maximum debugging on, I captured the following: <snip>
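fork() failing with "Resource temporarily unavailable" is typically a per-user process or memory limit rather than an sshd bug; a hedged checklist sketch:

    ulimit -u           # max user processes for the current shell
    ulimit -v           # max virtual memory, in kB
    ps -u root | wc -l  # processes already running as the sshd user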
2023 Jul 02
0
+ fs-buffer-clean-up-block_commit_write.patch added to mm-unstable branch
The patch titled Subject: fs/buffer: clean up block_commit_write has been added to the -mm mm-unstable branch. Its filename is fs-buffer-clean-up-block_commit_write.patch This patch will shortly appear at https://git.kernel.org/pub/scm/linux/kernel/git/akpm/25-new.git/tree/patches/fs-buffer-clean-up-block_commit_write.patch This patch will later appear in the mm-unstable branch
2001 Dec 01
1
rsync-2.5.0 patch for "make check" bug
Attached is a patch for rsync 2.5.0 to fix the "make check" option. The find command was not being passed the current directory in the hands.test and longdir.test testsuites, which caused them to fail on SunOS 4.X and Solaris 2.X systems. Tom -- Tom L. Schmidt, Manager/SysAdmin Characterization Equipment Micron Technology, Inc. 8000 S. Federal Way P.O. Box 6 Mail Stop 376 Boise,
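The portability point, sketched: GNU find implies '.' when no path is given, while SunOS 4.X and Solaris 2.X find require an explicit path, so the unpatched invocation fails only on those systems:

    find -name '*.o'     # GNU find: implies '.'; SunOS/Solaris find: usage error
    find . -name '*.o'   # portable form supplied by the patch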
2023 Jul 02
0
+ fs-convert-block_commit_write-to-return-void.patch added to mm-unstable branch
The patch titled Subject: fs: convert block_commit_write to return void has been added to the -mm mm-unstable branch. Its filename is fs-convert-block_commit_write-to-return-void.patch This patch will shortly appear at https://git.kernel.org/pub/scm/linux/kernel/git/akpm/25-new.git/tree/patches/fs-convert-block_commit_write-to-return-void.patch This patch will later appear in the
2009 Jul 09
1
GlusterFS patches for ovirt-server, ovirt-node, ovirt-node-image
Patches coming through. A brief overview of the same is below. Patches are rebased against the current "master" branch for each project. -------------------------------------------------------------------------- Packages with versions. GlusterFS - 2.0.2 (in house repository) Fuse - 2.8.0 (in house repository) libvirt - 0.6.4 (in house repository) ovirt-server - 0.99 (in house repository)
2009 Jul 09
1
[PATCH ovirt-node] add glusterfs-client dependency for ovirt-node
--- ovirt-node.spec.in | 3 ++- 1 files changed, 2 insertions(+), 1 deletions(-) diff --git a/ovirt-node.spec.in b/ovirt-node.spec.in index 746cf3d..2fdf4f5 100644 --- a/ovirt-node.spec.in +++ b/ovirt-node.spec.in @@ -21,7 +21,7 @@ Requires(post): /sbin/chkconfig Requires(preun): /sbin/chkconfig BuildRequires: libvirt-devel >= 0.5.1 BuildRequires: dbus-devel hal-devel -Requires:
2017 Jul 25
0
[ovirt-users] ovirt 4.1 hosted engine hyper-converged on glusterfs 3.8.10: "engine" storage domain always complains about "unsynced" elements
On Tue, Jul 25, 2017 at 1:45 PM, yayo (j) <jaganz at gmail.com> wrote: > 2017-07-25 7:42 GMT+02:00 Kasturi Narra <knarra at redhat.com>: > >> These errors are because glusternw is not assigned to the correct >> interface. Once you attach that, these errors should go away. This has >> nothing to do with the problem you are seeing. >> > > Hi, >
2017 Jul 21
0
[ovirt-users] ovirt 4.1 hosted engine hyper-converged on glusterfs 3.8.10: "engine" storage domain always complains about "unsynced" elements
On 07/21/2017 02:55 PM, yayo (j) wrote: > 2017-07-20 14:48 GMT+02:00 Ravishankar N <ravishankar at redhat.com>: > > > But it does say something. All these gfids of completed heals in > the log below are for the ones that you have given the > getfattr output of. So what is likely happening is there is an
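The usual way to cross-check this is to compare the heal queue with the AFR changelog xattrs on a brick copy; a sketch where the volume name 'engine' comes from the subject line and the brick path is hypothetical:

    gluster volume heal engine info   # gfids/paths still pending heal
    # Inspect the AFR changelog xattrs for one file on a brick
    getfattr -d -m . -e hex /gluster/brick1/engine/path/to/file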
2023 Jun 19
0
[PATCH v1 1/5] fs/buffer: clean up block_commit_write
On Sun 18-06-23 23:32:46, Bean Huo wrote: > From: Bean Huo <beanhuo at micron.com> > > Originally the inode was used to get blksize; after commit 45bce8f3e343 > ("fs/buffer.c: make block-size be per-page and protected by the page lock"), > __block_commit_write no longer uses the inode parameter, so this patch > removes it and cleans up block_commit_write. >
2023 Jun 19
0
[PATCH v1 3/5] ext4: No need to check return value of block_commit_write()
On Sun 18-06-23 23:32:48, Bean Huo wrote: > From: Bean Huo <beanhuo at micron.com> > > Remove unnecessary check on the return value of block_commit_write(). > > Signed-off-by: Bean Huo <beanhuo at micron.com> Looks good to me. Feel free to add: Reviewed-by: Jan Kara <jack at suse.cz> Honza > --- > fs/ext4/move_extent.c | 7 ++----- > 1 file