similar to: [ovirt-users] Re: Gluster problems, cluster performance issues

Displaying 20 results from an estimated 2200 matches similar to: "[ovirt-users] Re: Gluster problems, cluster performance issues"

2018 May 30
1
[ovirt-users] Re: Gluster problems, cluster performance issues
Adding Ravi to look into the heal issue. As for the fsync hang and subsequent IO errors, it seems a lot like https://bugzilla.redhat.com/show_bug.cgi?id=1497156 and Paolo Bonzini from qemu had pointed out that this would be fixed by the following commit: commit e72c9a2a67a6400c8ef3d01d4c461dbbbfa0e1f0 Author: Paolo Bonzini <pbonzini at redhat.com> Date: Wed Jun 21 16:35:46 2017
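If you want to check whether the qemu you are running already carries that fix, git can answer it from a qemu source checkout. A minimal sketch, assuming a local clone of the qemu repository; the tag v2.10.0 is only an illustrative example, substitute the release you actually run:

    # Exit status 0 means the commit is an ancestor of (i.e. included in) the tag.
    git -C qemu merge-base --is-ancestor \
        e72c9a2a67a6400c8ef3d01d4c461dbbbfa0e1f0 v2.10.0 \
        && echo "fix present" || echo "fix missing"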
2018 May 30
0
[ovirt-users] Re: Gluster problems, cluster performance issues
Hi all again: I'm now subscribed to gluster-users as well, so I should get any replies from that side too. At this point, I am seeing acceptable (although slower than I expect) performance much of the time, with periodic massive spikes in latency (occasionally so bad as to cause ovirt to report a bad engine health status). Often, if I check the logs just then, I'll see those call traces
2018 May 30
1
[ovirt-users] Re: Gluster problems, cluster performance issues
I've been back at it, and still am unable to get more than one of my physical nodes to come online in ovirt, nor am I able to get more than two of the gluster volumes (storage domains) to show online within ovirt. In Storage -> Volumes, they all show offline (many with one brick down, which is correct: I have one server off). However, in Storage -> Domains, they all show down (although
2018 Jun 01
0
[ovirt-users] Re: Gluster problems, cluster performance issues
On Thu, May 31, 2018 at 3:16 AM, Jim Kusznir <jim at palousetech.com> wrote: > I've been back at it, and still am unable to get more than one of my > physical nodes to come online in ovirt, nor am I able to get more than the > two gluster volumes (storage domains) to show online within ovirt. > > In Storage -> Volumes, they all show offline (many with one brick down,
2018 May 30
0
[ovirt-users] Re: Gluster problems, cluster performance issues
[Adding gluster-users back] Nothing amiss with volume info and status. Can you check the agent.log and broker.log - will be under /var/log/ovirt-hosted-engine-ha/ Also the gluster client logs - under /var/log/glusterfs/rhev-data-center-mnt-glusterSD<volume>.log On Wed, May 30, 2018 at 12:08 PM, Jim Kusznir <jim at palousetech.com> wrote: > I believe the gluster data store for
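For anyone reproducing these checks, the log files referred to above can be followed in one go. A minimal sketch, assuming the default log locations; <volume> is the same placeholder used above and must be replaced with the actual volume name:

    # Hosted-engine HA agent/broker logs plus the gluster fuse client log
    tail -f /var/log/ovirt-hosted-engine-ha/agent.log \
            /var/log/ovirt-hosted-engine-ha/broker.log \
            /var/log/glusterfs/rhev-data-center-mnt-glusterSD<volume>.log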
2018 May 29
0
[ovirt-users] Gluster problems, cluster performance issues
[Adding gluster-users to look at the heal issue] On Tue, May 29, 2018 at 9:17 AM, Jim Kusznir <jim at palousetech.com> wrote: > Hello: > > I've been having some cluster and gluster performance issues lately. I > also found that my cluster was out of date, and was trying to apply updates > (hoping to fix some of these), and discovered the ovirt 4.1 repos were > taken
2018 May 29
0
[ovirt-users] Gluster problems, cluster performance issues
Do you see errors reported in the mount logs for the volume? If so, could you attach the logs? Any issues with your underlying disks? Can you also attach the output of volume profiling? On Wed, May 30, 2018 at 12:13 AM, Jim Kusznir <jim at palousetech.com> wrote: > Ok, things have gotten MUCH worse this morning. I'm getting random errors > from VMs, right now, about a third of my
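If you want to pre-filter those mount logs before attaching them, grepping for warning/error severity lines is usually enough. A minimal sketch, assuming the standard gluster fuse client log location; the <volume> part of the file name is a placeholder:

    # Gluster log lines carry a severity letter (E = error, W = warning).
    grep -E ' [EW] ' /var/log/glusterfs/rhev-data-center-mnt-glusterSD<volume>.log | tail -n 100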
2018 May 29
0
[ovirt-users] Re: Gluster problems, cluster performance issues
I would check disk status and the accessibility of the mount points where your gluster volumes reside. On Tue, May 29, 2018, 22:28 Jim Kusznir <jim at palousetech.com> wrote: > On one ovirt server, I'm now seeing these messages: > [56474.239725] blk_update_request: 63 callbacks suppressed > [56474.239732] blk_update_request: I/O error, dev dm-2, sector 0 > [56474.240602]
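A quick, hedged checklist for that suggestion follows; the device name passed to smartctl is an example, not something taken from this thread:

    dmesg | grep -iE 'blk_update_request|I/O error'   # recent block-layer errors
    lsblk -f                                          # map dm-2 back to its disk/LV
    mount | grep -i gluster                           # confirm brick mounts are present
    smartctl -H /dev/sda                              # SMART health of an underlying disk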
2017 Sep 28
2
mounting an nfs4 file system as v4.0 in CentOS 7.4?
CentOS 7.4 client mounting a CentOS 7.4 server filesystem over nfs4. nfs seems to be much slower since the upgrade to 7.4, so I thought it might be nice to mount the directory as v4.0 rather than the new default of v4.1 to see if it makes a difference. The release notes state, without an example: "You can retain the original behavior by specifying 0 as the minor version" nfs(5)
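For the record, the minor version goes straight into the mount options; a minimal sketch, with the server name and paths invented for illustration (see nfs(5) for the option spellings):

    # Force NFSv4.0 instead of the 7.4 default of v4.1
    mount -t nfs4 -o vers=4.0 server.example.com:/export /mnt/export
    # equivalent spelling:
    mount -t nfs -o nfsvers=4,minorversion=0 server.example.com:/export /mnt/export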
2018 Oct 19
2
systemd automount of cifs share hangs
> > But if I start the automount unit and ls the mount point, the shell hangs > and eventually, a long time later (I haven't timed it, maybe an hour), I > eventually get a prompt again. Control-C won't interrupt it. I can still > ssh in and get another session so it's just the process that's accessing > the mount point that hangs. > I don't have a
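One way to narrow this down is to take systemd out of the picture and try the cifs mount by hand; if it hangs the same way, the problem is in the mount itself rather than the automount unit. A minimal sketch; the unit name, share path, and credentials file are illustrative:

    systemctl stop mnt-share.automount
    mount -t cifs //fileserver.example.com/share /mnt/share \
          -o credentials=/etc/samba/creds,vers=3.0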
2014 Oct 30
3
kernel BUG at drivers/block/virtio_blk.c:172
Hi, I've just hit this BUG at drivers/block/virtio_blk.c when updated to the kernel from the top of the Linus git tree. commit a7ca10f263d7e673c74d8e0946d6b9993405cc9c This is my virtual machine running on RHEL7 guest qemu-kvm-1.5.3-60.el7.x86_64 The last upstream kernel (3.17.0-rc4) worked well. I'll try to bisect, but meanwhile this is a backtrace I got very early in the boot. The
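Since the message already names the last good kernel and the bad tip commit, the bisect it mentions can be set up directly in a Linux kernel git tree; a minimal sketch:

    git bisect start
    git bisect bad  a7ca10f263d7e673c74d8e0946d6b9993405cc9c
    git bisect good v3.17-rc4
    # build, boot the VM, test, then mark each step with
    #   git bisect good    (or: git bisect bad)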
2015 Oct 01
2
req->nr_phys_segments > queue_max_segments (was Re: kernel BUG at drivers/block/virtio_blk.c:172!)
On 10/01/2015 11:00 AM, Michael S. Tsirkin wrote: > On Thu, Oct 01, 2015 at 03:10:14AM +0200, Thomas D. wrote: >> Hi, >> >> I have a virtual machine which fails to boot linux-4.1.8 while mounting >> file systems: >> >>> * Mounting local filesystem ... >>> ------------[ cut here ]------------ >>> kernel BUG at
2017 Sep 05
0
Slow performance of gluster volume
OK my understanding is that with preallocated disks the performance with and without shard will be the same. In any case, please attach the volume profile[1], so we can see what else is slowing things down. -Krutika [1] - https://gluster.readthedocs.io/en/latest/Administrator%20Guide/Monitoring%20Workload/#running-glusterfs-volume-profile-command On Tue, Sep 5, 2017 at 2:32 PM, Abi Askushi
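For reference, the profiling described in [1] comes down to a few CLI calls; a minimal sketch, with the volume name left as a placeholder:

    gluster volume profile <volname> start
    # ...let the workload run for a while...
    gluster volume profile <volname> info    # per-brick latency and FOP statistics
    gluster volume profile <volname> stop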
2013 Feb 12
1
Replication Ok, or not?
Setup a DC using 4.0.3 - all appears to go fine... Setup a second DC and everything works fine to here... but I'm not sure if replication is actually working or not. Here's what I get from ./samba-tool drs showrepl. I've also done [./samba-tool drs kcc -Uadministrator dc2.samba.somedom.local] in an attempt to fix the replication problem (or what I think is a problem). [The outbound
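For readers with the same question, the whole check can be driven from samba-tool; a minimal sketch, where dc1 and the partition DN are invented for illustration (only dc2.samba.somedom.local appears in the original message):

    samba-tool drs showrepl
    samba-tool drs kcc -Uadministrator dc2.samba.somedom.local
    # force one replication pass of a partition from dc1 to dc2
    samba-tool drs replicate dc2.samba.somedom.local dc1.samba.somedom.local \
        dc=samba,dc=somedom,dc=local -Uadministrator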
2017 Oct 22
0
Areca RAID controller on latest CentOS 7 (1708 i.e. RHEL 7.4) kernel 3.10.0-693.2.2.el7.x86_64
Is anyone running any Areca RAID controllers with the latest CentOS 7 kernel, 3.10.0-693.2.2.el7.x86_64? We recently updated (from 3.10.0-514.26.2.el7.x86_64), and we've started having lots of problems. To add to the confusion, there's also a hardware problem (either with the controller or the backplane most likely) that we're in the process of analyzing. Regardless, we have an ARC1883i, and
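One quick, hedged data point worth collecting while comparing the two kernels is the arcmsr (Areca) driver version each of them bundles; a minimal sketch using the kernel versions quoted above:

    modinfo -k 3.10.0-514.26.2.el7.x86_64 arcmsr | grep -iE '^(version|vermagic)'
    modinfo -k 3.10.0-693.2.2.el7.x86_64  arcmsr | grep -iE '^(version|vermagic)'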
2011 Apr 07
5
R licence
Hi, is it possible to use some statistical computing with R in proprietary software? Our software is written in C#, and we intend to use http://rdotnet.codeplex.com/ to get R to work there. In particular, we want to use the loess function. Thanks, Best regards, Stanislav [[alternative HTML version deleted]]