Displaying 20 results from an estimated 6000 matches similar to: "Feedback and Questions on afr+unify"
2008 Oct 17
6
GlusterFS compared to KosmosFS (now called CloudStore)?
Hi.
I'm evaluating GlusterFS for our DFS implementation, and wondered how it
compares to KFS/CloudStore?
These features here look especially nice (
http://kosmosfs.sourceforge.net/features.html). Any idea which of them exist
in GlusterFS as well?
Regards.
2009 Jun 11
2
Issue with files on glusterfs becoming unreadable.
elbert at host1:~$ dpkg -l | grep glusterfs
ii  glusterfs-client  1.3.8-0pre2  GlusterFS fuse client
ii  glusterfs-server  1.3.8-0pre2  GlusterFS fuse server
ii  libglusterfs0     1.3.8-0pre2  GlusterFS libraries and translator modules
I have 2 hosts set up to use AFR with
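For context, a minimal client-side AFR volfile of the 1.3.x era looks roughly
like the sketch below; the host and brick names here are illustrative, not
taken from this post:

  volume remote1
    type protocol/client
    option transport-type tcp/client
    option remote-host host1
    option remote-subvolume brick
  end-volume

  volume remote2
    type protocol/client
    option transport-type tcp/client
    option remote-host host2
    option remote-subvolume brick
  end-volume

  volume afr0
    type cluster/afr            # replicate files across both remotes
    subvolumes remote1 remote2
  end-volume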
2012 Mar 10
1
High CPU Usage After Glusterfs install
Hi Guys,
I have 2 servers with a fresh install of glusterfs and I am seeing a very high CPU load. I am trying to just do a very basic config to get this started and, for the life of me, I don't know what could be causing it. The CPU goes up to 100% across all 4 CPUs on each gluster node and I am seeing timeouts coming from the VMs that I am testing with. I simply copied the
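On the 3.x CLI, a very basic two-server replicated config like the one
described can be brought up as follows; a sketch, with hostnames and brick
paths assumed for illustration:

  # gluster peer probe server2
  # gluster volume create testvol replica 2 server1:/export/brick server2:/export/brick
  # gluster volume start testvol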
2010 Jan 03
2
Where is log file of GlusterFS 3.0?
I cannot find the log file of Gluster 3.0!
In the past I installed GlusterFS 2.0.6 without trouble, and the log files of
the server and client were placed in /var/log/glusterfs/...
But after installing GlusterFS 3.0 (on CentOS 5.4 64-bit, 4 servers + 1
client), I started the GlusterFS servers and client, and typing *df -H* at the
client gives: "Transport endpoint is not connected"
*I want to track down the bug, but I cannot find
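For reference, 3.0 still logs under /var/log/glusterfs/ by default, and the
location can also be set explicitly; a sketch, with the volfile and log paths
assumed for illustration:

  glusterfsd -f /etc/glusterfs/glusterfsd.vol -l /var/log/glusterfs/server.log
  glusterfs -f /etc/glusterfs/glusterfs.vol -l /var/log/glusterfs/client.log /mnt/glusterfs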
2008 Dec 14
1
Is that iozone result normal?
A 5-node server cluster and a single client node are connected by gigabit Ethernet.
#] iozone -r 32k -r 512k -s 8G
     KB  reclen   write  rewrite    read   reread
8388608      32   10559     9792   62435    62260
8388608     512   63012    63409   63409    63138
It seems the 32k write/rewrite performance is very
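To isolate the slow case, iozone can be rerun with only the sequential write
(-i 0) and read (-i 1) tests against a file on the mount; a sketch, with the
mount path assumed:

  iozone -i 0 -i 1 -r 32k -s 8G -f /mnt/glusterfs/iozone.tmp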
2017 Oct 19
3
gluster tiering errors
All,
I am new to gluster and have some questions/concerns about some tiering
errors that I see in the log files.
OS: CentOS 7.3.1611
Gluster version: 3.10.5
Samba version: 4.6.2
I see the following (scrubbed):
Node 1 /var/log/glusterfs/tier/<vol>/tierd.log:
[2017-10-19 17:52:07.519614] I [MSGID: 109038]
[tier.c:1169:tier_migrate_using_query_file]
0-<vol>-tier-dht: Promotion failed
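When chasing promotion/demotion failures like this, the tier daemon's own
view of the volume is a useful first check; a sketch using the 3.10-era CLI:

  # gluster volume tier <vol> status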
2017 Oct 22
0
gluster tiering errors
Herb,
What are the high and low watermarks for the tier set at?
# gluster volume get <vol> cluster.watermark-hi
# gluster volume get <vol> cluster.watermark-low
What is the size of the file that failed to migrate as per the following
tierd log:
[2017-10-19 17:52:07.519614] I [MSGID: 109038]
[tier.c:1169:tier_migrate_using_query_file] 0-<vol>-tier-dht: Promotion
failed for
2008 Aug 15
6
Add/remove new server volumes on the fly?
Hi list,
1. While learning GlusterFS, I was wondering whether it's possible to add
server volumes to increase the space capacity of my cluster "on the FLY"?
I mean, a hot upgrade.
2. Second, when using a "file replicating strategy" (scheduler), is it
possible to remove a server node without stopping the whole cluster
(e.g. for hardware maintenance reasons, adding more disk/RAM to the
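For readers landing on this thread later: in releases with the gluster CLI,
hot capacity changes of this kind are done with add-brick and remove-brick; a
sketch, with the volume and brick names assumed:

  # gluster volume add-brick myvol server3:/export/brick
  # gluster volume remove-brick myvol server3:/export/brick start
  # gluster volume remove-brick myvol server3:/export/brick status
  # gluster volume remove-brick myvol server3:/export/brick commit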
2017 Sep 20
0
"Input/output error" on mkdir for PPC64 based client
Looks like it is an issue with architecture compatibility in the RPC layer
(i.e., with XDR and how it is used). Take a glance at the logs of the client
process where you saw the errors; they could give some hints. If you don't
understand the logs, share them and we will try to look into it.
-Amar
On Wed, Sep 20, 2017 at 2:40 AM, Walter Deignan <WDeignan at uline.com> wrote:
> I recently
2012 Jan 13
1
Quota problems with Gluster3.3b2
Hi everyone,
I'm playing with Gluster 3.3b2, and everything is working fine when
uploading stuff through Swift. However, when I enable quotas on Gluster,
I randomly get permission errors. Sometimes I can upload files; most
times I can't.
I'm mounting the partitions with the acl flag, and I've tried wiping out
everything and starting from scratch, with the same result. As soon as I
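For anyone reproducing this, quota in 3.3 is toggled per volume and limited
per directory; a sketch, with the volume name and path assumed:

  # gluster volume quota myvol enable
  # gluster volume quota myvol limit-usage /uploads 10GB
  # gluster volume quota myvol list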
2017 Oct 22
1
gluster tiering errors
There are several "no space left on device" messages. I would first check
that free disk space is available for the volume.
On Oct 22, 2017 18:42, "Milind Changire" <mchangir at redhat.com> wrote:
> Herb,
> What are the high and low watermarks for the tier set at?
>
> # gluster volume get <vol> cluster.watermark-hi
>
> # gluster volume get
2017 Oct 24
2
gluster tiering errors
Milind - Thank you for the response..
>> What are the high and low watermarks for the tier set at?
# gluster volume get <vol> cluster.watermark-hi
Option                   Value
------                   -----
cluster.watermark-hi     90
# gluster volume get <vol> cluster.watermark-low
Option
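The watermarks are ordinary volume options, so they can be adjusted the same
way they are queried; a sketch (the value 80 is illustrative, not from this
thread):

  # gluster volume set <vol> cluster.watermark-hi 80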
2017 Sep 20
1
"Input/output error" on mkdir for PPC64 based client
I put the share into debug mode and then repeated the process from a ppc64
client and an x86 client. Weirdly the client logs were almost identical.
Here's the ppc64 gluster client log of attempting to create a folder...
-------------
[2017-09-20 13:34:23.344321] D
[rpc-clnt-ping.c:93:rpc_clnt_remove_ping_timer_locked] (-->
2017 Sep 19
3
"Input/output error" on mkdir for PPC64 based client
I recently compiled the 3.10-5 client from source on a few PPC64 systems
running RHEL 7.3. They are mounting a Gluster volume which is hosted on
more traditional x86 servers.
Everything seems to be working properly except for creating new
directories from the PPC64 clients. The mkdir command gives an
"Input/output error", and for the first few minutes the new directory is
2018 Apr 11
3
volume start: gv01: failed: Quorum not met. Volume operation not allowed.
On 4/9/2018 2:45 AM, Alex K wrote:
Hey Alex,
With two nodes, the setup works but both sides go down when one node is
missing. Still, I set the two params below to none and that solved my issue:
cluster.quorum-type: none
cluster.server-quorum-type: none
Thank you for that.
Cheers,
Tom
> Hi,
>
> You need 3 nodes at least to have quorum enabled. In 2 node setup you
> need to
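For reference, the two parameters mentioned above are set per volume; a
sketch, with the volume name taken from the subject line:

  # gluster volume set gv01 cluster.quorum-type none
  # gluster volume set gv01 cluster.server-quorum-type none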
2018 Apr 11
0
volume start: gv01: failed: Quorum not met. Volume operation not allowed.
On Wed, Apr 11, 2018 at 4:35 AM, TomK <tomkcpr at mdevsys.com> wrote:
> On 4/9/2018 2:45 AM, Alex K wrote:
> Hey Alex,
>
> With two nodes, the setup works but both sides go down when one node is
> missing. Still, I set the two params below to none and that solved my issue:
>
> cluster.quorum-type: none
> cluster.server-quorum-type: none
>
> yes this disables
2017 Oct 27
0
gluster tiering errors
Herb,
I'm trying to weed out issues here.
So, I can see quota turned *on* and would like you to check the quota
settings and test to see system behavior *if quota is turned off*.
Although the file that failed migration was only 29K, I'm being a bit
paranoid while weeding out issues.
Are you still facing tiering errors?
I can see your response to Alex with the disk space consumption and
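The suggested test is simply toggling quota off at the volume level and
retrying the migration; a sketch, with the volume name assumed:

  # gluster volume quota <vol> disable
  # gluster volume quota <vol> enable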
2018 Jan 25
2
parallel-readdir is not recognized in GlusterFS 3.12.4
By the way, on a slightly related note, I'm pretty sure either
parallel-readdir or readdir-ahead has a regression in GlusterFS 3.12.x. We
are running CentOS 7 with kernel-3.10.0-693.11.6.el7.x86_64.
I updated my servers and clients to 3.12.4 and enabled these two options
after reading about them in the 3.10.0 and 3.11.0 release notes. In the
days after enabling these two options all of my
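Both are ordinary volume options, so the suspected regression can be isolated
by turning them off one at a time; a sketch:

  # gluster volume set <vol> performance.parallel-readdir off
  # gluster volume set <vol> performance.readdir-ahead off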
2008 Jun 11
1
software raid performance
Are there known performance issues with using glusterfs on software RAID? I've
been playing with a variety of configs (AFR, AFR with Unify) on a two-server
setup. Everything seems to work well, but performance (creating files,
reading files, appending to files) is very slow. Using the same configs on
two non-software-RAID machines shows significant performance increases.
Before I go a
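For context, an "AFR with Unify" client config of that era stacks a unify
translator on top of the replicated pairs; a heavily abridged sketch with
illustrative names (unify also needs a dedicated namespace volume):

  volume unify0
    type cluster/unify
    option scheduler rr          # round-robin file placement
    option namespace ns0
    subvolumes afr0 afr1
  end-volume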
2018 Apr 09
2
volume start: gv01: failed: Quorum not met. Volume operation not allowed.
Hey All,
In a two-node glusterfs setup with one node down, I can't use the second
node to mount the volume. I understand this is expected behaviour?
Is there any way to allow the secondary node to function and then replicate
what changed to the first (primary) node when it's back online? Or should I
just go for a third node to allow for this?
Also, how safe is it to set the following to none?
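On the third-node question: later releases let the third node hold only
metadata as an arbiter, which gives quorum without a third full copy of the
data; a sketch, with brick paths assumed:

  # gluster volume create gv01 replica 3 arbiter 1 \
      node1:/bricks/gv01 node2:/bricks/gv01 node3:/bricks/gv01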