Displaying 20 results from an estimated 21 matches for "ajneil".
2017 Oct 18
0
warning spam in the logs after tiering experiment
forgot to mention Gluster version 3.10.6
On 18 October 2017 at 13:26, Alastair Neil <ajneil.tech at gmail.com> wrote:
> a short while ago I experimented with tiering on one of my volumes. I
> decided it was not working out so I removed the tier. I now have spam in
> the glusterd.log every 7 seconds:
>
> [2017-10-18 17:17:29.578327] W [socket.c:3207:socket_connect] 0-t...
2017 Oct 18
2
warning spam in the logs after tiering experiment
a short while ago I experimented with tiering on one of my volumes. I
decided it was not working out so I removed the tier. I now have spam in
the glusterd.log every 7 seconds:
[2017-10-18 17:17:29.578327] W [socket.c:3207:socket_connect] 0-tierd:
Ignore failed connection attempt on
/var/run/gluster/2e3df1c501d0a19e5076304179d1e43e.socket, (No such file or
directory)
[2017-10-18
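For anyone hitting the same log spam, a minimal first check is whether glusterd still believes a tier is attached. A short sketch below, assuming a volume named myvol (not the poster's actual volume name):

gluster volume info myvol      # a cleanly detached volume should list no hot/cold tier bricks
gluster volume status myvol    # the tier daemon should no longer appear here after a completed detach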
2017 Jun 30
0
Multi petabyte gluster
Did you test healing by increasing disperse.shd-max-threads?
What are your heal times per brick now?
On Fri, Jun 30, 2017 at 8:01 PM, Alastair Neil <ajneil.tech at gmail.com> wrote:
> We are using 3.10 and have a 7 PB cluster. We decided against 16+3 as the
> rebuild times are bottlenecked by matrix operations which scale as the square
> of the number of data stripes. There are some savings because of larger
> data chunks but we ended...
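For reference, a hedged sketch of the disperse.shd-max-threads tuning asked about above; the volume name and the value 4 are purely illustrative:

gluster volume get myvol disperse.shd-max-threads    # default is 1
gluster volume set myvol disperse.shd-max-threads 4  # more heal threads, at the cost of extra CPU during heals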
2017 Sep 07
2
GlusterFS as virtual machine storage
...live nodes of replica 3 blaming each other and
> refusing to do IO.
>
> https://gluster.readthedocs.io/en/latest/Administrator%20Guide/Split%20brain%20and%20ways%20to%20deal%20with%20it/#1-replica-3-volume
>
>
>
> On Sep 7, 2017 17:52, "Alastair Neil" <ajneil.tech at gmail.com> wrote:
>
>> *shrug* I don't use arbiter for VM workloads, just straight replica 3.
>> There are some gotchas with using an arbiter for VM workloads. If
>> quorum-type is auto and a brick that is not the arbiter drops out, then if
>> the up brick i...
2017 Jun 30
2
Multi petabyte gluster
We are using 3.10 and have a 7 PB cluster. We decided against 16+3 as the
rebuild times are bottlenecked by matrix operations which scale as the
square of the number of data stripes. There are some savings because of
larger data chunks but we ended up using 8+3 and heal times are about half
compared to 16+3.
-Alastair
On 30 June 2017 at 02:22, Serkan Çoban <cobanserkan at gmail.com>
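For context, a minimal sketch of how an 8+3 dispersed volume of the kind discussed above is created; host names, brick paths and the volume name are assumptions, not taken from the thread:

# 8 data + 3 redundancy bricks = 11 bricks per disperse subvolume
gluster volume create bigvol disperse-data 8 redundancy 3 \
    server1:/bricks/b1 server2:/bricks/b1 server3:/bricks/b1 server4:/bricks/b1 \
    server5:/bricks/b1 server6:/bricks/b1 server7:/bricks/b1 server8:/bricks/b1 \
    server9:/bricks/b1 server10:/bricks/b1 server11:/bricks/b1
gluster volume start bigvol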
2017 Sep 08
0
GlusterFS as virtual machine storage
...somewhat
similar to updating gluster, replacing cabling, etc. IMO this should not
always end up with the arbiter blaming the other node, and even though I did not
investigate this issue deeply, I do not believe the blame is the reason for
IOPS to drop.
On Sep 7, 2017 21:25, "Alastair Neil" <ajneil.tech at gmail.com> wrote:
> True, but working your way into that problem with replica 3 is a lot harder
> to achieve than with just replica 2 + arbiter.
>
> On 7 September 2017 at 14:06, Pavel Szalbot <pavel.szalbot at gmail.com>
> wrote:
>
>> Hi Neil, docs mention...
2017 Sep 07
0
GlusterFS as virtual machine storage
Hi Neil, docs mention two live nodes of replica 3 blaming each other and
refusing to do IO.
https://gluster.readthedocs.io/en/latest/Administrator%20Guide/Split%20brain%20and%20ways%20to%20deal%20with%20it/#1-replica-3-volume
On Sep 7, 2017 17:52, "Alastair Neil" <ajneil.tech at gmail.com> wrote:
> *shrug* I don't use arbiter for VM workloads, just straight replica 3.
> There are some gotchas with using an arbiter for VM workloads. If
> quorum-type is auto and a brick that is not the arbiter drops out, then if
> the up brick is dirty as far as th...
2009 Feb 28
4
possibly a stupid question, why can I not set sharenfs="sec=krb5, rw"?
x86 snv 108
I have a pool called home with around 5300 file systems. I can do:
zfs set sharenfs=on home
however
zfs set sharenfs="sec=krb5,rw" home
complains:
cannot set property for 'home': 'sharenfs' cannot be set to invalid options
I feel I must be overlooking something elementary.
Thanks, Alastair
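For reference, the syntax in question with no whitespace in the option string; whether sec=krb5 is actually accepted depends on the build, so this is only a sketch of how to check what, if anything, took effect:

zfs get sharenfs home              # show the value currently recorded for the property
zfs set sharenfs=sec=krb5,rw home  # no space after the comma; quoting is only needed if the string contains spaces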
2017 Sep 07
3
GlusterFS as virtual machine storage
*shrug* I don't use arbiter for VM workloads, just straight replica 3.
There are some gotchas with using an arbiter for VM workloads. If
quorum-type is auto and a brick that is not the arbiter drops out, then if
the up brick is dirty as far as the arbiter is concerned (i.e. the only good
copy is on the down brick) you will get ENOTCONN and your VMs will halt on
IO.
On 6 September 2017 at 16:06,
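A hedged sketch of inspecting the quorum behaviour described above; the volume name vmstore is hypothetical:

gluster volume get vmstore cluster.quorum-type   # possible values: none, auto, fixed
gluster volume heal vmstore info                 # shows which bricks hold pending (dirty) entries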
2017 Nov 06
0
[Gluster-devel] Request for Comments: Upgrades from 3.x to 4.0+
On Fri, Nov 3, 2017 at 8:50 PM, Alastair Neil <ajneil.tech at gmail.com> wrote:
> Just so I am clear the upgrade process will be as follows:
>
> upgrade all clients to 4.0
>
> rolling upgrade all servers to 4.0 (with GD1)
>
> kill all GD1 daemons on all servers and run upgrade script (new clients
> unable to connect at this...
2017 Oct 10
0
small files performance
I just tried setting:
performance.parallel-readdir on
features.cache-invalidation on
features.cache-invalidation-timeout 600
performance.stat-prefetch
performance.cache-invalidation
performance.md-cache-timeout 600
network.inode-lru-limit 50000
performance.cache-invalidation on
and clients could not see their files with ls when accessing via a fuse
mount. The files and directories were there,
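For reference, a sketch of how options like those listed above are applied, checked and backed out per volume; the volume name is hypothetical:

gluster volume set shared performance.parallel-readdir on
gluster volume get shared performance.parallel-readdir    # confirm what glusterd recorded
gluster volume reset shared performance.parallel-readdir  # back a single option out if clients misbehave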
2017 Oct 24
0
brick is down but gluster volume status says it's fine
On Tue, Oct 24, 2017 at 11:13 PM, Alastair Neil <ajneil.tech at gmail.com>
wrote:
> gluster version 3.10.6, replica 3 volume, daemon is present but does not
> appear to be functioning
>
> peculiar behaviour. If I kill the glusterfs brick daemon and restart
> glusterd then the brick becomes available - but one of my other volumes
>...
2017 Nov 03
2
[Gluster-devel] Request for Comments: Upgrades from 3.x to 4.0+
Just so I am clear the upgrade process will be as follows:
upgrade all clients to 4.0
rolling upgrade all servers to 4.0 (with GD1)
kill all GD1 daemons on all servers and run upgrade script (new clients
unable to connect at this point)
start GD2 (necessary, or does the upgrade script do this?)
I assume that once the cluster has been migrated to GD2 the glusterd
startup script will be smart
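A rough per-server sketch of the "rolling upgrade all servers" step on RPM-based systems; package, service and volume names are assumptions, and the official upgrade guide should take precedence:

# one server at a time
systemctl stop glusterd
killall glusterfsd glusterfs       # brick and self-heal processes are not stopped by glusterd itself
yum -y update glusterfs\*
systemctl start glusterd
gluster volume heal myvol info     # let pending heals drain before touching the next server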
2017 Oct 10
2
small files performance
2017-10-10 8:25 GMT+02:00 Karan Sandha <ksandha at redhat.com>:
> Hi Gandalf,
>
> We have multiple tunings to do for small files which decrease the time for
> negative lookups, meta-data caching, parallel readdir. Bumping the server
> and client event threads will help you out in increasing small-file
> performance.
>
> gluster v set <vol-name> group
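A minimal sketch of the event-thread bump mentioned above; the volume name and the value 4 are illustrative only:

gluster volume set shared server.event-threads 4
gluster volume set shared client.event-threads 4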
2017 Oct 24
2
brick is down but gluster volume status says it's fine
gluster version 3.10.6, replica 3 volume, daemon is present but does not
appear to be functioning
Peculiar behaviour: if I kill the glusterfs brick daemon and restart
glusterd then the brick becomes available - but one of my other volumes'
bricks on the same server goes down in the same way; it's like whack-a-mole.
Any ideas?
[root at gluster-2 bricks]# glv status digitalcorpora
> Status
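One low-impact thing to try in this situation (a sketch, using the digitalcorpora volume from the excerpt):

gluster volume status digitalcorpora        # note which brick shows N in the Online column
gluster volume start digitalcorpora force   # respawns bricks that are down without touching running ones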
2009 Feb 02
0
issue with sharesmb and sharenfs properties enabled on the same pool
My system is OS 8.11, updated to dev build 105. I have two pools
constructed from iscsi targets with around 5600 file-systems in each.
I was able to enable NFS sharing and CIFS/SMB sharing on both pools,
however, after a reboot the SMB shares come up but the NFS server
service does not, and eventually times out after about 3 hours of
trying.
Are there any known issues with a large number of NFS
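Assuming standard OpenSolaris SMF service names, a sketch of digging into why the NFS service never comes up:

svcs -xv nfs/server                       # explains why the service is offline or in maintenance
svcadm clear svc:/network/nfs/server      # only if it has dropped into maintenance
svcadm enable -r svc:/network/nfs/server  # enable it together with its dependencies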
2018 Apr 13
0
how to get the true used capacity of the volume
You will get weird results like these if you put two bricks on a single
filesystem. In use case one (presumably replica 2) the data gets written
to both bricks, which means there are two copies on the disk and so twice
the disk space consumed. In the second case there is some overhead
involved in creating a volume that will consume some disk space even before
any user data is added; how much will
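A quick way to see the double counting described above (paths and volume name are hypothetical):

gluster volume status myvol detail   # per-brick disk totals as gluster reports them
df -h /data/brick1 /data/brick2      # both bricks on one filesystem, so the same space is counted twice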
2017 Dec 11
1
[Gluster-devel] Announcing Glusterfs release 3.12.2 (Long Term Maintenance)
Neil, I don't know if this is adequate but I did run a simple smoke test
today on the 3.12.3-1 bits. I installed the 3.12.3-1 bits on 3 fresh-install
CentOS 7 VMs,
created a 2G image file and wrote an xfs file system on it on each
system,
mounted each under /export/brick1, and created /export/brick1/test on each
node,
probed the two other systems from one node (a), and created a replica 3
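A hedged reconstruction of that smoke test as shell commands; the image path, host names and volume name are guesses, not taken from the message:

# on each of the three CentOS 7 VMs
truncate -s 2G /var/lib/brick1.img
mkfs.xfs /var/lib/brick1.img
mkdir -p /export/brick1
mount -o loop /var/lib/brick1.img /export/brick1
mkdir -p /export/brick1/test

# from node a
gluster peer probe node-b
gluster peer probe node-c
gluster volume create smoke replica 3 \
    node-a:/export/brick1/test node-b:/export/brick1/test node-c:/export/brick1/test
gluster volume start smoke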
2017 Oct 23
2
[Gluster-devel] Announcing Glusterfs release 3.12.2 (Long Term Maintenance)
Any idea when these packages will be in the CentOS mirrors? There is no
sign of them on download.gluster.org.
On 13 October 2017 at 08:45, Jiffin Tony Thottan <jthottan at redhat.com>
wrote:
> The Gluster community is pleased to announce the release of Gluster 3.12.2
> (packages available at [1,2,3]).
>
> Release notes for the release can be found at [4].
>
> We still
2017 Jun 29
1
issue with trash feature and arbiter volumes
Gluster 3.10.2
I have a replica 3 (2+1) volume and I have just seen both data bricks go
down (arbiter stayed up). I had to disable the trash feature to get the bricks
to start. I had a quick look on bugzilla but did not see anything that
looked similar. I just wanted to check that I was not hitting some known
issue and/or doing something stupid, before I open a bug. This is from the
brick log:
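For reference, the trash translator is a per-volume option, so disabling it as described would look roughly like this (volume name hypothetical):

gluster volume set myvol features.trash off
gluster volume start myvol force    # respawn any bricks that failed to start while the option was on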