
Displaying 20 results from an estimated 5000 matches similar to: "Ubuntu Xenial 3.12.2 Gluster DEB packages are missing VIRT group settings"

2017 Oct 20
0
Ubuntu Xenial 3.12.2 Gluster DEB packages are missing VIRT group settings
On 10/20/2017 01:06 PM, WK wrote: > > 2. Can someone get the correct group settings over to the maintainer. The best way to do that (as documented on the PPA page) is to open an issue at https://github.com/gluster/glusterfs-debian/issues Luckily email traffic is light today and I noticed this; otherwise it would probably have been lost. -- Kaleb
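
For anyone hitting the same packaging gap, a minimal check-and-repair sketch (not from the thread; the volume name 'myvol' is a placeholder, and the exact option list in the group file varies by release):

    # did the package install the virt group file?
    ls -l /var/lib/glusterd/groups/virt

    # if it is missing, the canonical copy is extras/group-virt.example
    # in the glusterfs source tree; place it on every node, then apply:
    gluster volume set myvol group virt
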
2017 Aug 24
1
GlusterFS as virtual machine storage
On 8/23/2017 10:44 PM, Pavel Szalbot wrote: > Hi, > > On Thu, Aug 24, 2017 at 2:13 AM, WK <wkmail at bneit.com> wrote: >> The default timeout for most OS versions is 30 seconds and the Gluster >> timeout is 42, so yes you can trigger an RO event. > I get a read-only mount within approximately 2 seconds after a failed IO. Hmm, we don't see that, even on busy VMs. We
2017 Aug 24
0
GlusterFS as virtual machine storage
Hi, On Thu, Aug 24, 2017 at 2:13 AM, WK <wkmail at bneit.com> wrote: > The default timeout for most OS versions is 30 seconds and the Gluster > timeout is 42, so yes you can trigger an RO event. I get a read-only mount within approximately 2 seconds after a failed IO. > Though it is easy enough to raise as Pavel mentioned > > # echo 90 > /sys/block/sda/device/timeout AFAIK
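
For context, the command quoted above takes effect immediately but resets on reboot; a udev rule is the usual way to make it stick (a sketch, not from the thread; the device match and the 90-second value are illustrative):

    # one-shot, as in the thread:
    echo 90 > /sys/block/sda/device/timeout

    # persistent variant, e.g. /etc/udev/rules.d/99-disk-timeout.rules:
    # ACTION=="add", SUBSYSTEM=="block", KERNEL=="sd[a-z]", ATTR{device/timeout}="90"
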
2017 Aug 24
2
GlusterFS as virtual machine storage
That really isn't an arbiter issue or, for that matter, a Gluster issue. We have seen that with vanilla NAS servers that had some issue or another. Arbiter simply makes it less likely to be an issue than replica 2, but in turn arbiter is less 'safe' than replica 3. However, in regards to Gluster and RO behaviour: the default timeout for most OS versions is 30 seconds and the Gluster
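
The 42 seconds mentioned here is Gluster's network.ping-timeout default, which can be inspected or changed per volume (a sketch; 'myvol' is a placeholder, and values far below the default are generally discouraged):

    gluster volume get myvol network.ping-timeout    # defaults to 42
    gluster volume set myvol network.ping-timeout 30
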
2023 Jul 25
1
log file spewing on one node, but not the others
What is the uptime of the affected node? There is a similar error reported in https://access.redhat.com/solutions/5518661 which could indicate a possible problem in a memory area named 'lru'. Have you noticed any ECC errors in dmesg/IPMI of the system? At least I would reboot the node and run hardware diagnostics to check that everything is fine. Best Regards, Strahil Nikolov. Sent from Yahoo
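
A quick sketch of the checks suggested above (not from the thread itself; assumes IPMI tooling is installed):

    uptime                               # how long since the last reboot
    dmesg | grep -iE 'ecc|mce|memory'    # kernel-logged memory errors
    ipmitool sel list | tail             # hardware event log via IPMI
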
2018 Apr 23
0
Reconstructing files from shards
From some old May 2017 email. I asked the following: "From the docs, I see you can identify the shards by the GFID # getfattr -d -m . -e hex path_to_file # ls /bricks/*/.shard -lh | grep GFID Is there a gluster tool/script that will recreate the file? Or can you just sort them properly and then simply cat/copy them back together? cat shardGFID.1 .. shardGFID.X > thefile
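
There is no official tool for this, but a rough sketch of the cat-them-together approach (assumes one brick holds every shard, none are missing, and the file is not sparse; a missing shard or hole silently corrupts the result; paths are illustrative):

    # read the GFID xattr and rewrite it in the dashed UUID form
    raw=$(getfattr -n trusted.gfid -e hex --only-values /bricks/b1/vol/thefile)
    h=${raw#0x}
    GFID="${h:0:8}-${h:8:4}-${h:12:4}-${h:16:4}-${h:20:12}"

    # shard 0 is the file's normal brick path; the rest are .shard/$GFID.N
    cp /bricks/b1/vol/thefile /tmp/restored
    for s in $(ls /bricks/b1/.shard | grep "^$GFID\." | sort -t. -k2 -n); do
        cat "/bricks/b1/.shard/$s" >> /tmp/restored
    done
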
2018 Apr 23
1
Reconstructing files from shards
> On Apr 23, 2018, at 10:49 AM, WK <wkmail at bneit.com> wrote: > > From some old May 2017 email. I asked the following: > "From the docs, I see you can identify the shards by the GFID > # getfattr -d -m . -e hex path_to_file > # ls /bricks/*/.shard -lh | grep GFID > > Is there a gluster tool/script that will recreate the file? > > or can you just sort
2017 Jul 05
1
op-version for reset-brick (Was: Re: [ovirt-users] Upgrading HC from 4.0 to 4.1)
On Wed, Jul 5, 2017 at 5:22 PM, Atin Mukherjee <amukherj at redhat.com> wrote: > And what does glusterd log indicate for these failures? > See here in gzip format https://drive.google.com/file/d/0BwoPbcrMv8mvYmlRLUgyV0pFN0k/view?usp=sharing It seems that on each host the peer files have been updated with a new entry "hostname2": [root at ovirt01 ~]# cat
2017 Jun 17
3
Teaming vs Bond?
I'm looking at tuning up a new site and the bonding issue came up. A Google search reveals that the gluster docs (and Lindsay) recommend balance-alb bonding. However, "team"ing came up, which I wasn't familiar with. It's already in RHEL 6/7 and Ubuntu, and their GitHub page implies it's stable. The libteam.org people seem to feel their solution is more lightweight and it seems easy
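
For readers weighing the two, both setups condensed to a sketch (uses NetworkManager's nmcli; interface names are illustrative, and balance-alb vs. the loadbalance runner are the roughly equivalent modes):

    # classic bonding, balance-alb as the gluster docs recommend
    nmcli con add type bond ifname bond0 bond.options "mode=balance-alb,miimon=100"
    nmcli con add type ethernet ifname eth0 master bond0
    nmcli con add type ethernet ifname eth1 master bond0

    # the teamd equivalent, loadbalance runner
    nmcli con add type team ifname team0 team.config '{"runner": {"name": "loadbalance"}}'
    nmcli con add type ethernet ifname eth2 master team0
    nmcli con add type ethernet ifname eth3 master team0
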
2017 Jul 05
1
op-version for reset-brick (Was: Re: [ovirt-users] Upgrading HC from 4.0 to 4.1)
OK, so the log just hints to the following: [2017-07-05 15:04:07.178204] E [MSGID: 106123] [glusterd-mgmt.c:1532:glusterd_mgmt_v3_commit] 0-management: Commit failed for operation Reset Brick on local node [2017-07-05 15:04:07.178214] E [MSGID: 106123] [glusterd-replace-brick.c:649:glusterd_mgmt_v3_initiate_replace_brick_cmd_phases] 0-management: Commit Op Failed While going through the code,
2017 Jun 19
1
Teaming vs Bond?
OK, at least it's not an *issue* with Gluster. I didn't expect any, but you never know. I have been amused at the 'lack' of discussion on Teaming performance found in Google searches. There are lots of 'here it is and here is how to set it up' articles/posts, but no 'ooh-wee-wow it is awesome' comments. It seems that for most people Bonding has worked its kinks out
2017 Sep 09
3
GlusterFS as virtual machine storage
Pavel. Is there a difference between native client (fuse) and libgfapi in regards to the crashing/read-only behaviour? We use Rep2 + Arb and can shut down a node cleanly, without issue on our VMs. We do it all the time for upgrades and maintenance. However, we are still on the native client as we haven't had time to work on libgfapi yet. Maybe that is more tolerant. We have Linux VMs mostly
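
For readers comparing the two access paths, the practical difference in a sketch (host and volume names are placeholders; qemu must be built with gfapi support):

    # native client (FUSE): kernel mount, the VM disk is a plain file path
    mount -t glusterfs node1:/gv0 /mnt/gv0

    # libgfapi: qemu talks to gluster directly, no mount involved
    qemu-img info gluster://node1/gv0/vm1.qcow2
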
2017 Jul 05
1
op-version for reset-brick (Was: Re: [ovirt-users] Upgrading HC from 4.0 to 4.1)
And what does glusterd log indicate for these failures? On Wed, Jul 5, 2017 at 8:43 PM, Gianluca Cecchi <gianluca.cecchi at gmail.com> wrote: > > > On Wed, Jul 5, 2017 at 5:02 PM, Sahina Bose <sabose at redhat.com> wrote: > >> >> >> On Wed, Jul 5, 2017 at 8:16 PM, Gianluca Cecchi < >> gianluca.cecchi at gmail.com> wrote: >> >>>
2017 Sep 09
0
GlusterFS as virtual machine storage
Hi, On Sat, Sep 9, 2017 at 2:35 AM, WK <wkmail at bneit.com> wrote: > Pavel. > > Is there a difference between native client (fuse) and libgfapi in regards > to the crashing/read-only behaviour? I switched to FUSE now and the VM crashed (read-only remount) immediately after one node started rebooting. I tried to mount.glusterfs the same volume on a different server (not a VM), running
2017 Oct 05
2
data corruption - any update?
On 4 October 2017 at 23:34, WK <wkmail at bneit.com> wrote: > Just so I know. > > Is it correct to assume that this corruption issue is ONLY involved if you > are doing rebalancing with sharding enabled? > > So if I am not doing rebalancing I should be fine? > That is correct. > -bill > > > > On 10/3/2017 10:30 PM, Krutika Dhananjay wrote: > >
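
To check whether a volume is exposed to that combination at all (a sketch; 'myvol' is a placeholder):

    gluster volume get myvol features.shard      # is sharding enabled?
    gluster volume rebalance myvol status        # is a rebalance running?
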
2017 Sep 06
0
GlusterFS as virtual machine storage
Hi all, I have promised to do some testing and I finally found some time and infrastructure. So I have 3 servers with Gluster 3.10.5 on CentOS 7. I created a replicated volume with arbiter (2+1) and a VM on KVM (via OpenStack) with its disk accessible through gfapi. The volume group is set to virt (gluster volume set gv_openstack_1 group virt). The VM runs current (all packages updated) Ubuntu Xenial. I set up
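
The command referenced is the group shorthand: it applies the whole set of virt-profile options in one step, which you can verify afterwards (a sketch; the exact options applied vary by release):

    gluster volume set gv_openstack_1 group virt
    gluster volume info gv_openstack_1   # options show under 'Options Reconfigured'
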
2017 Jul 05
1
op-version for reset-brick (Was: Re: [ovirt-users] Upgrading HC from 4.0 to 4.1)
On Wed, Jul 5, 2017 at 5:02 PM, Sahina Bose <sabose at redhat.com> wrote: > > > On Wed, Jul 5, 2017 at 8:16 PM, Gianluca Cecchi <gianluca.cecchi at gmail.com > > wrote: > >> >> >> On Wed, Jul 5, 2017 at 7:42 AM, Sahina Bose <sabose at redhat.com> wrote: >> >>> >>> >>>> ... >>>> >>>> then
2017 Sep 06
0
GlusterFS as virtual machine storage
Mh, I never had to do that and I never had that problem. Is that an arbiter-specific thing? With replica 3 it just works. On Wed, Sep 06, 2017 at 03:59:14PM -0400, Alastair Neil wrote: > you need to set > > cluster.server-quorum-ratio 51% > > On 6 September 2017 at 10:12, Pavel Szalbot <pavel.szalbot at gmail.com> wrote: > > > Hi all, > > >
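
Note that cluster.server-quorum-ratio is cluster-wide, so it is set on 'all' rather than on a single volume (a sketch; 51% is the value from the thread):

    gluster volume set all cluster.server-quorum-ratio 51%
    gluster volume get all cluster.server-quorum-ratio   # verify, on recent releases
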
2017 Jul 21
0
[ovirt-users] ovirt 4.1 hosted engine hyper converged on glusterfs 3.8.10 : "engine" storage domain alway complain about "unsynced" elements
Hi, Sorry for following up again, but, checking the oVirt interface, I've found that oVirt reports the "engine" volume as an "arbiter" configuration and the "data" volume as a fully replicated volume. Check these screenshots: https://drive.google.com/drive/folders/0ByUV7xQtP1gCTE8tUTFfVmR5aDQ?usp=sharing But the "gluster volume info" command reports that all 2
2017 Jun 19
0
Teaming vs Bond?
I haven't done any testing of performance differences, but on my oVirt/RHEV I use standard bonding as that's what it supports. On the standalone gluster nodes I use teaming for bonding. Teaming may be slightly easier to manage, but not by much if you are already used to bond setups. I haven't noticed any bugs or issues using teaming. *David Gossage* *Carousel Checks Inc. | System