2017 Oct 20
0
Ubuntu Xenial 3.12.2 Gluster DEB packages are missing VIRT group settings
On 10/20/2017 01:06 PM, WK wrote:
>
> 2. Can someone get the correct group settings over to the maintainer.
>
the best way to do that (as documented on the PPA page) is to open an
issue at https://github.com/gluster/glusterfs-debian/issues
Luckily email traffic is light today and I noticed this, otherwise it
would probably have been lost.
--
Kaleb
2017 Aug 24
1
GlusterFS as virtual machine storage
On 8/23/2017 10:44 PM, Pavel Szalbot wrote:
> Hi,
>
> On Thu, Aug 24, 2017 at 2:13 AM, WK <wkmail at bneit.com> wrote:
>> The default timeout for most OS versions is 30 seconds and the Gluster
>> timeout is 42, so yes you can trigger an RO event.
> I get read-only mount within approximately 2 seconds after failed IO.
Hmm, we don't see that, even on busy VMs.
We
2017 Aug 24
0
GlusterFS as virtual machine storage
Hi,
On Thu, Aug 24, 2017 at 2:13 AM, WK <wkmail at bneit.com> wrote:
> The default timeout for most OS versions is 30 seconds and the Gluster
> timeout is 42, so yes you can trigger an RO event.
I get read-only mount within approximately 2 seconds after failed IO.
> Though it is easy enough to raise as Pavel mentioned
>
> # echo 90 > /sys/block/sda/device/timeout
AFAIK
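For reference, both timeouts involved here can be checked and tuned directly; sda and VOLNAME below are placeholders, and network.ping-timeout defaults to 42 seconds:

# guest-side SCSI timeout for a given disk (sda is an example device)
cat /sys/block/sda/device/timeout
echo 90 > /sys/block/sda/device/timeout

# Gluster-side ping timeout for a volume (VOLNAME is a placeholder)
gluster volume get VOLNAME network.ping-timeout
gluster volume set VOLNAME network.ping-timeout 42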
2017 Aug 24
2
GlusterFS as virtual machine storage
That really isn't an arbiter issue or, for that matter, a Gluster issue. We
have seen that with vanilla NAS servers that had some issue or another.
Arbiter simply makes it less likely to be an issue than replica 2 but in
turn arbiter is less 'safe' than replica 3.
However, in regards to Gluster and RO behaviour
The default timeout for most OS versions is 30 seconds and the Gluster
2023 Jul 25
1
log file spewing on one node, but not the others
What is the uptime of the affected node ?
There is a similar error reported in https://access.redhat.com/solutions/5518661 which could indicate a possible problem in a memory area named 'lru'. Have you noticed any ECC errors in dmesg/IPMI of the system?
At least I would reboot the node and run hardware diagnostics to check that everything is fine.
Best Regards,
Strahil Nikolov
Sent from Yahoo
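A rough way to check for the ECC errors mentioned above (package names vary by distribution; edac-utils is one option):

# look for EDAC/ECC messages in the kernel log
dmesg | grep -iE 'edac|ecc'
journalctl -k | grep -iE 'edac|ecc'

# if edac-utils is installed, print corrected/uncorrected error counts
edac-util -v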
2018 Apr 23
0
Reconstructing files from shards
From some old May 2017 email, I asked the following:
"From the docs, I see you can identify the shards by the GFID
# getfattr -d -m. -e hex path_to_file
# ls /bricks/*/.shard -lh | grep GFID
Is there a gluster tool/script that will recreate the file?
or can you just sort them properly and then simply cat/copy
them back together?
cat shardGFID.1 .. shardGFID.X > thefile
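A rough sketch of that cat approach, assuming a replica volume where a single brick holds the base file plus every shard and no shard is missing or sparse; the brick path, file path and GFID below are placeholders:

BRICK=/bricks/b1/vol                           # one replica brick
BASE="$BRICK/images/vm1.qcow2"                 # the original file acts as shard 0
GFID="aaaaaaaa-bbbb-cccc-dddd-eeeeeeeeeeee"    # from the getfattr output, hyphenated

N=$(ls "$BRICK/.shard/" | grep -c "^$GFID\.")  # number of shard pieces on this brick
cp "$BASE" /tmp/restored
for i in $(seq 1 "$N"); do
    cat "$BRICK/.shard/$GFID.$i" >> /tmp/restored   # append shards in numeric order
done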
2018 Apr 23
1
Reconstructing files from shards
> On Apr 23, 2018, at 10:49 AM, WK <wkmail at bneit.com> wrote:
>
> From some old May 2017 email. I asked the following:
> "From the docs, I see you can identify the shards by the GFID
> # getfattr -d -m. -e hex path_to_file
> # ls /bricks/*/.shard -lh | grep GFID
>
> Is there a gluster tool/script that will recreate the file?
>
> or can you just sort
2017 Jun 17
3
Teaming vs Bond?
I'm looking at tuning up a new site and the bonding issue came up.
A Google search reveals that the gluster docs (and Lindsay) recommend
balance-alb bonding.
However, "team"ing came up which I wasn't familiar with. Its already in
RH6/7 and Ubuntu and their Github page implies its stable.
The libteam.org people seem to feel their solution is more lightweight
and it seems easy
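For comparison, a balance-alb bond can be brought up by hand with nothing but iproute2; eth0/eth1 and the address below are placeholders, and on Ubuntu this would normally live in the persistent network config instead:

# create the bond and enslave two NICs (links must be down before enslaving)
ip link add bond0 type bond mode balance-alb miimon 100
ip link set eth0 down && ip link set eth0 master bond0
ip link set eth1 down && ip link set eth1 master bond0
ip link set bond0 up
ip addr add 10.0.0.10/24 dev bond0   # example address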
2017 Jun 19
1
Teaming vs Bond?
OK, at least it's not an *issue* with Gluster. I didn't expect any but
you never know.
I have been amused at the 'lack' of discussion on Teaming performance
found on Google searches.
There are lots of 'here it is and here is how to set it up'
articles/posts, but no 'ooh-wee-wow it is awesome' comments.
It seems that for most people Bonding has worked its kinks out
2017 Sep 09
3
GlusterFS as virtual machine storage
Pavel.
Is there a difference between native client (fuse) and libgfapi in
regards to the crashing/read-only behaviour?
We use Rep2 + Arb and can shut down a node cleanly, without issue on our
VMs. We do it all the time for upgrades and maintenance.
However we are still on native client as we haven't had time to work on
libgfapi yet. Maybe that is more tolerant.
We have linux VMs mostly
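As a quick way to confirm that libgfapi access works at all, qemu-img can read an image over gfapi without any FUSE mount, provided qemu was built with GlusterFS support; the host, volume and image names below are examples:

# read image metadata directly over libgfapi
qemu-img info gluster://gluster-node1/myvol/images/vm1.qcow2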
2017 Sep 09
0
GlusterFS as virtual machine storage
Hi,
On Sat, Sep 9, 2017 at 2:35 AM, WK <wkmail at bneit.com> wrote:
> Pavel.
>
> Is there a difference between native client (fuse) and libgfapi in regards
> to the crashing/read-only behaviour?
I switched to FUSE now and the VM crashed (read-only remount)
immediately after one node started rebooting.
I tried to mount.glusterfs the same volume on a different server (not a VM),
running
2017 Oct 05
2
data corruption - any update?
On 4 October 2017 at 23:34, WK <wkmail at bneit.com> wrote:
> Just so I know.
>
> Is it correct to assume that this corruption issue is ONLY involved if you
> are doing rebalancing with sharding enabled.
>
> So if I am not doing rebalancing I should be fine?
>
That is correct.
> -bill
>
>
>
> On 10/3/2017 10:30 PM, Krutika Dhananjay wrote:
2017 Jun 19
0
Teaming vs Bond?
I haven't done any testing of performance differences, but on my oVirt/RHEV
hosts I use standard bonding as that's what it supports. On the standalone
Gluster nodes I use teaming for bonding.
Teaming may be slightly easier to manage, but not by much if you are
already used to bond setups. I haven't noticed any bugs or issues using
teaming.
*David Gossage*
*Carousel Checks Inc. | System
2023 Jul 21
2
log file spewing on one node, but not the others
We have an older 2+1 arbiter Gluster cluster running 6.10 on Ubuntu 18 LTS.
It has run beautifully for years, only occasionally needing attention
as drives have died, etc.
Each peer has two volumes. G1 and G2 with a shared 'gluster' network.
Since July 1st, one of the peers for one volume has been spewing the logfile
/var-lib-G1.log with the following errors.
The volume (G2) is not showing
2017 Sep 06
0
Slow performance of gluster volume
Do you see any improvement with 3.11.1, as that has a patch that improves
perf for this kind of workload?
Also, could you disable eager-lock and check if that helps? I see that the max
time is being spent in acquiring locks.
-Krutika
On Wed, Sep 6, 2017 at 1:38 PM, Abi Askushi <rightkicktech at gmail.com> wrote:
> Hi Krutika,
>
> Is it anything in the profile indicating what is
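The eager-lock toggle referred to here is an ordinary volume option, so it can be flipped for a test and reverted afterwards; 'vms' is the volume name used in this thread:

gluster volume get vms cluster.eager-lock       # show the current value
gluster volume set vms cluster.eager-lock off   # disable for the test
gluster volume set vms cluster.eager-lock on    # re-enable afterwards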
2017 Jul 05
1
op-version for reset-brick (Was: Re: [ovirt-users] Upgrading HC from 4.0 to 4.1)
On Wed, Jul 5, 2017 at 5:22 PM, Atin Mukherjee <amukherj at redhat.com> wrote:
> And what does glusterd log indicate for these failures?
>
See here in gzip format
https://drive.google.com/file/d/0BwoPbcrMv8mvYmlRLUgyV0pFN0k/view?usp=sharing
It seems that on each host the peer files have been updated with a new
entry "hostname2":
[root at ovirt01 ~]# cat
2017 Sep 08
0
Slow performance of gluster volume
The following changes resolved the perf issue:
Added the option below to /etc/glusterfs/glusterd.vol:
option rpc-auth-allow-insecure on
then restarted glusterd.
Then set the volume option:
gluster volume set vms server.allow-insecure on
I am now reaching the max network bandwidth and the performance of the VMs is
quite good.
Did not upgrade the glusterd.
As a next try I am thinking to upgrade gluster to 3.12 + test
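Put together, the fix described above is one line in glusterd.vol on every node plus one volume option ('vms' is the volume name from this thread):

# add inside the "volume management" block of /etc/glusterfs/glusterd.vol on each node:
#     option rpc-auth-allow-insecure on

systemctl restart glusterd
gluster volume set vms server.allow-insecure on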
2017 Sep 05
0
Slow performance of gluster volume
I'm assuming you are using this volume to store vm images, because I see
shard in the options list.
Speaking from shard translator's POV, one thing you can do to improve
performance is to use preallocated images.
This will at least eliminate the need for shard to perform multiple steps
as part of the writes - such as creating the shard and then writing to it
and then updating the
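A preallocated image of the kind suggested here can be created with qemu-img; the file name and size are placeholders, and falloc reserves the space without writing zeroes:

# fully preallocated qcow2 image (metadata and data)
qemu-img create -f qcow2 -o preallocation=falloc vm1.qcow2 50G

# or a preallocated raw image
qemu-img create -f raw -o preallocation=falloc vm1.img 50G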
2017 Sep 06
2
Slow performance of gluster volume
Hi Krutika,
Is there anything in the profile indicating what is causing this bottleneck?
In case I can collect any other info, let me know.
Thanx
On Sep 5, 2017 13:27, "Abi Askushi" <rightkicktech at gmail.com> wrote:
Hi Krutika,
Attached the profile stats. I enabled profiling then ran some dd tests.
Also 3 Windows VMs are running on top this volume but did not do any stress
2017 Sep 11
0
Slow performance of gluster volume
I did not upgrade Gluster yet; I am still using 3.8.12. Only the mentioned
changes provided the performance boost.
From which version to which version did you see such a performance boost? I
will try to upgrade and check the difference as well.
On Sep 11, 2017 2:45 AM, "Ben Turner" <bturner at redhat.com> wrote:
Great to hear!
----- Original Message -----
> From: "Abi