Displaying 20 results from an estimated 33 matches for "nufa".
2009 May 19
1
nufa and missing files
We're using gluster-2.0.1 and a nufa volume composed of thirteen
subvolumes across thirteen hosts.
Today we found that some files in the local filesystem backing the
subvolume on one of the hosts are not visible in the nufa volume from
any gluster client.
I don't know how or when this happened,...
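A rough way to check this kind of mismatch, with purely illustrative paths rather than anything taken from the thread:
  # compare the backend of the affected subvolume with a client's view
  ls /export/brick/some/dir | sort > /tmp/on_brick
  ls /mnt/nufa/some/dir     | sort > /tmp/on_client
  diff /tmp/on_brick /tmp/on_client
  # if a name exists only on the brick, stat it by full path through the
  # mount: a named lookup searches all subvolumes and should re-link it
  stat /mnt/nufa/some/dir/missing-file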
2009 Mar 16
0
cluster/nufa not consistent across nodes running 2.0.0rc1
Running cluster/nufa on a 5-node cluster with version 2.0.0rc1, we
noticed that not all files are visible on two of the nodes.
So I switched from rc1 to rc4 and from cluster/nufa to
cluster/distribute (changing two things at once keeps it challenging to
debug :).
After the restart all of the files appear to be avai...
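For reference, a minimal 2.0-era client volfile sketch (subvolume names are illustrative); structurally the two translators differ only in the type line and nufa's local-volume-name option:
  volume nufa0
    type cluster/nufa
    option local-volume-name node1   # the subvolume that is local to this host
    subvolumes node1 node2 node3 node4 node5
  end-volume
  # the cluster/distribute equivalent simply drops the local preference:
  # volume dht0
  #   type cluster/distribute
  #   subvolumes node1 node2 node3 node4 node5
  # end-volume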
2012 Nov 14
3
Using local writes with gluster for temporary storage
...could also use gluster to
unite the disks on the compute nodes into a single "disk"
to which files would be written locally. Then we could
move the files in a more sequential manner after the runs
complete (thus avoiding overloading the network).
What was originally suggested (the NUFA policy) has since
been deprecated. What would be the recommended method
of accomplishing our goal in the latest version of Gluster?
And where can we find documentation on it?
We have seen the following links, but would be interested
in any more pointers you may have. Thanks.
http://thr3ads.net/...
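As a baseline, roughly how the compute nodes' disks would be pooled into one distribute volume (hostnames and brick paths are made up; this is a sketch, not the list's recommendation):
  gluster volume create scratch \
      node01:/scratch/brick node02:/scratch/brick \
      node03:/scratch/brick node04:/scratch/brick
  gluster volume start scratch
  mount -t glusterfs node01:/scratch /mnt/scratch
  # in 3.x the old NUFA behaviour is exposed as a volume option; whether it
  # is still advisable is exactly what the question above asks:
  #   gluster volume set scratch cluster.nufa on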
2017 Jul 18
1
Sporadic Bus error on mmap() on FUSE mount
...the
> problem is observed.
I've disabled performance.write-behind, unmounted, stopped and
started the volume, then mounted again, but to no effect. After that I
successively disabled and re-enabled options and xlators, and found
that the problem is related to the cluster.nufa option. When the NUFA
translator is disabled, rrdtool works fine on all mounts; when it is
re-enabled, the problem returns.
>
> https://bugzilla.redhat.com/enter_bug.cgi?product=GlusterFS&component=write-behind
>
> HTH,
> Niels
>
>
>> version: glusterfs 3.10.3
>...
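The isolation cycle described above looks roughly like this (volume name "flow" is from the thread, the mount point is made up):
  gluster volume set flow performance.write-behind off
  umount /mnt/flow
  gluster volume stop flow
  gluster volume start flow
  mount -t glusterfs dc1.liberouter.org:/flow /mnt/flow
  # ...repeat per option/xlator until the trigger is found; here:
  gluster volume set flow cluster.nufa disable   # rrdtool mmap() works
  gluster volume set flow cluster.nufa enable    # SIGBUS comes back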
2008 Oct 15
1
Glusterfs performance with large directories
...and each directory will have a small number of potentially large data files.
A similar setup on local disks (without gluster) has proven its capabilities
over the years.
We use a distributed computing model: each node in the archive runs one
or more processes to update the archive. We use the nufa scheduler to favor
local files, and we use a distributed hashing algorithm to prevent data from
moving around between nodes (unless the configuration changes, of course).
I've included the GlusterFS configuration at the bottom of this e-mail.
Data access and throughput are pretty good (good enough), b...
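In the DHT-based translators, the per-directory hash layout that pins files to subvolumes can be read straight off the brick directories (path is illustrative; whether this applies to the scheduler generation described here depends on the release):
  getfattr -n trusted.glusterfs.dht -e hex /export/brick/some/dir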
2008 Oct 17
6
GlusterFS compared to KosmosFS (now called cloudstore)?
Hi.
I'm evaluating GlusterFS for our DFS implementation and wondered how it
compares to KFS/CloudStore.
The features listed here look especially nice
(http://kosmosfs.sourceforge.net/features.html). Any idea which of them
exist in GlusterFS as well?
Regards.
2017 Sep 03
3
Poor performance with shard
...uster version 3.8.13 --------
Volume name: data
Number of bricks : 4 * 3 = 12
Bricks:
Brick1: server1:/brick/data1
Brick2: server1:/brick/data2
Brick3: server1:/brick/data3
Brick4: server1:/brick/data4
Brick5: server2:/brick/data1
.
.
.
Options Reconfigured:
performance.strict-o-direct: off
cluster.nufa: off
features.shard-block-size: 512MB
features.shard: on
cluster.server-quorum-type: server
cluster.quorum-type: auto
cluster.eager-lock: enable
network.remote-dio: on
performance.readdir-ahead: on
Any idea on how to improve my performance?
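One commonly cited starting point for VM-image style workloads on sharded volumes is the predefined "virt" option group, which applies the shard- and O_DIRECT-related settings together (a sketch only, not the answer given in this thread):
  gluster volume set data group virt
  gluster volume info data   # review what the group changed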
2017 Jul 18
2
Sporadic Bus error on mmap() on FUSE mount
..._dir
Brick4: dc3.liberouter.org:/data/glusterfs/flow/brick2/safety_dir
Brick5: dc3.liberouter.org:/data/glusterfs/flow/brick1/safety_dir
Brick6: dc1.liberouter.org:/data/glusterfs/flow/brick2/safety_dir
Options Reconfigured:
performance.parallel-readdir: on
performance.client-io-threads: on
cluster.nufa: enable
network.ping-timeout: 10
transport.address-family: inet
nfs.disable: true
[root@dc1]# gluster volume status flow
Status of volume: flow
Gluster process TCP Port RDMA Port Online Pid
---------------------------------------------------------------------------...
2017 Jul 18
0
Sporadic Bus error on mmap() on FUSE mount
...g:/data/glusterfs/flow/brick2/safety_dir
> Brick5: dc3.liberouter.org:/data/glusterfs/flow/brick1/safety_dir
> Brick6: dc1.liberouter.org:/data/glusterfs/flow/brick2/safety_dir
> Options Reconfigured:
> performance.parallel-readdir: on
> performance.client-io-threads: on
> cluster.nufa: enable
> network.ping-timeout: 10
> transport.address-family: inet
> nfs.disable: true
>
> [root@dc1]# gluster volume status flow
> Status of volume: flow
> Gluster process TCP Port RDMA Port Online Pid
> -----------------------------------...
2017 Sep 04
0
Poor performance with shard
...uster version 3.8.13 --------
Volume name: data
Number of bricks : 4 * 3 = 12
Bricks:
Brick1: server1:/brick/data1
Brick2: server1:/brick/data2
Brick3: server1:/brick/data3
Brick4: server1:/brick/data4
Brick5: server2:/brick/data1
.
.
.
Options Reconfigured:
performance.strict-o-direct: off
cluster.nufa: off
features.shard-block-size: 512MB
features.shard: on
cluster.server-quorum-type: server
cluster.quorum-type: auto
cluster.eager-lock: enable
network.remote-dio: on
performance.readdir-ahead: on
Any idea on how to improve my performance?
2018 Mar 12
2
Can't heal a volume: "Please check if all brick processes are running."
...cn05-ib:/gfs/gv0/brick1/brick
Brick6: cn06-ib:/gfs/gv0/brick1/brick
Brick7: cn07-ib:/gfs/gv0/brick1/brick
Brick8: cn08-ib:/gfs/gv0/brick1/brick
Brick9: cn09-ib:/gfs/gv0/brick1/brick
Options Reconfigured:
client.event-threads: 8
performance.parallel-readdir: on
performance.readdir-ahead: on
cluster.nufa: on
nfs.disable: on
--
Best regards,
Anatoliy
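A rough checklist for that error message (volume name gv0 is from the thread):
  gluster volume status gv0        # every brick should be Online "Y" with a PID
  gluster volume status gv0 shd    # is the self-heal daemon running on each node?
  gluster volume heal gv0 info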
2018 Apr 18
1
Replicated volume read requests are served by remote brick
I have created a 2-brick replicated volume.
gluster> volume status
Status of volume: storage
Gluster process TCP Port RDMA Port Online Pid
------------------------------------------------------------------------------
Brick master:/glusterfs/bricks/storage/mountpoint
49153 0 Y 5301
Brick worker1:/glusterfs/bricks/storage/mountpoint
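For anyone digging into read locality, these are the AFR options that usually decide which replica serves reads (volume name "storage" is from the thread; whether they explain this case is exactly what the thread is about):
  gluster volume get storage cluster.choose-local
  gluster volume get storage cluster.read-hash-mode
  # e.g. prefer the local brick when the client sits on a server node:
  gluster volume set storage cluster.choose-local on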
2018 Mar 13
4
Can't heal a volume: "Please check if all brick processes are running."
...>> Brick7: cn07-ib:/gfs/gv0/brick1/brick
>> Brick8: cn08-ib:/gfs/gv0/brick1/brick
>> Brick9: cn09-ib:/gfs/gv0/brick1/brick
>> Options Reconfigured:
>> client.event-threads: 8
>> performance.parallel-readdir: on
>> performance.readdir-ahead: on
>> cluster.nufa: on
>> nfs.disable: on
>>
>
> --
> Best regards,
> Anatoliy
2019 Feb 01
1
Help analyse statedumps
...-size: 2
cluster.background-self-heal-count: 20
network.ping-timeout: 5
disperse.eager-lock: off
performance.parallel-readdir: on
performance.readdir-ahead: on
performance.rda-cache-limit: 128MB
performance.cache-refresh-timeout: 10
performance.nl-cache-timeout: 600
performance.nl-cache: on
cluster.nufa: on
performance.enable-least-priority: off
server.outstanding-rpc-limit: 128
performance.strict-o-direct: on
cluster.shd-max-threads: 12
client.event-threads: 4
cluster.lookup-optimize: on
network.inode-lru-limit: 90000
performance.md-cache-timeout: 600
performance.cache-invalidation: on
performanc...
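For reference, statedumps are normally produced like this (volume name is illustrative; output lands under /var/run/gluster by default):
  gluster volume statedump myvol
  ls /var/run/gluster/*.dump.*
  # a FUSE client can be asked for a dump with SIGUSR1:
  kill -USR1 <pid-of-glusterfs-client-process>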
2018 Mar 13
0
Can't heal a volume: "Please check if all brick processes are running."
...n06-ib:/gfs/gv0/brick1/brick
> Brick7: cn07-ib:/gfs/gv0/brick1/brick
> Brick8: cn08-ib:/gfs/gv0/brick1/brick
> Brick9: cn09-ib:/gfs/gv0/brick1/brick
> Options Reconfigured:
> client.event-threads: 8
> performance.parallel-readdir: on
> performance.readdir-ahead: on
> cluster.nufa: on
> nfs.disable: on
--
Best regards,
Anatoliy
2018 Mar 14
2
Can't heal a volume: "Please check if all brick processes are running."
...v0/brick1/brick
>>> Brick8: cn08-ib:/gfs/gv0/brick1/brick
>>> Brick9: cn09-ib:/gfs/gv0/brick1/brick
>>> Options Reconfigured:
>>> client.event-threads: 8
>>> performance.parallel-readdir: on
>>> performance.readdir-ahead: on
>>> cluster.nufa: on
>>> nfs.disable: on
>>
>>
>> --
>> Best regards,
>> Anatoliy
2018 Mar 13
0
Can't heal a volume: "Please check if all brick processes are running."
...v0/brick1/brick
>>> Brick8: cn08-ib:/gfs/gv0/brick1/brick
>>> Brick9: cn09-ib:/gfs/gv0/brick1/brick
>>> Options Reconfigured:
>>> client.event-threads: 8
>>> performance.parallel-readdir: on
>>> performance.readdir-ahead: on
>>> cluster.nufa: on
>>> nfs.disable: on
>>>
>>
>> --
>> Best regards,
>> Anatoliy
2018 Mar 14
0
Can't heal a volume: "Please check if all brick processes are running."
...n06-ib:/gfs/gv0/brick1/brick
> Brick7: cn07-ib:/gfs/gv0/brick1/brick
> Brick8: cn08-ib:/gfs/gv0/brick1/brick
> Brick9: cn09-ib:/gfs/gv0/brick1/brick
> Options Reconfigured:
> client.event-threads: 8
> performance.parallel-readdir: on
> performance.readdir-ahead: on
> cluster.nufa: on
> nfs.disable: on
> --
> Best regards,
> Anatoliy
--
Best regards,
Anatoliy
2018 Jan 31
1
Tiered volume performance degrades badly after a volume stop/start or system restart.
....event-threads: 8
features.cache-invalidation: on
features.cache-invalidation-timeout: 600
performance.stat-prefetch: on
performance.cache-invalidation: on
network.inode-lru-limit: 90000
performance.cache-refresh-timeout: 10
performance.enable-least-priority: off
performance.cache-size: 2GB
cluster.nufa: on
cluster.choose-local: on
server.outstanding-rpc-limit: 128
fuse mount options: defaults,_netdev,negative-timeout=10,attribute-timeout=30,fopen-keep-cache,direct-io-mode=enable,fetch-attempts=5
On Tue, Jan 30, 2018 at 6:29 PM, Jeff Byers <jbyers.sfly at gmail.com> wrote:
> I am fighting thi...
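A quick way to check whether the tier daemon came back and is still promoting/demoting after the restart, assuming the 3.x tier CLI (volume name is illustrative):
  gluster volume tier tiervol status
  gluster volume status tiervol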
2018 Mar 14
0
Can't heal a volume: "Please check if all brick processes are running."
...> Brick8: cn08-ib:/gfs/gv0/brick1/brick
>>>> Brick9: cn09-ib:/gfs/gv0/brick1/brick
>>>> Options Reconfigured:
>>>> client.event-threads: 8
>>>> performance.parallel-readdir: on
>>>> performance.readdir-ahead: on
>>>> cluster.nufa: on
>>>> nfs.disable: on
>>>
>>>
>>> --
>>> Best regards,
>>> Anatoliy