Displaying 20 results from an estimated 40000 matches similar to: "Sharing Sub-Volumes"
2011 Jun 22
1
glusterfs 3.2.1 processes in an endless loop?
Hello,
I found a new issue with glusterfs 3.2.1 - I'm getting a glusterfs process for each mountpoint, and
they are consuming all of the CPU time.
strace won't show a thing - so no system calls are made
Mounting the same volumes on another server works fine.
Has anyone seen such a thing? Or any idea what causes this and how to fix it?
The logfiles don't show any information about
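A minimal diagnostic sketch for a glusterfs client process that spins without making syscalls; the PID below is a placeholder and the statedump location varies by release:
    # find the busiest threads of the spinning mount process (PID is hypothetical)
    top -H -p 1234
    # confirm that no system calls are being made
    strace -f -p 1234
    # grab user-space stack traces to see where it is looping
    gdb -p 1234 -batch -ex 'thread apply all bt'
    # ask the glusterfs process for a statedump (written to /tmp or the statedump dir)
    kill -USR1 1234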
2013 Sep 23
1
Mounting a sub directory of a glusterfs volume
I am not sure whether posting with the subject copied from the mailing-list
archive page of an existing thread will thread my response under it.
Apologies if it doesn't.
I am trying to figure a way to mount a directory within a gluster volume to
a web server. This directory has a quota enabled to limit a user's usage.
gluster config:
Volume Name: test-volume
features.limit-usage:
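A minimal sketch of the usual approach, with hypothetical paths and limits; note that native sub-directory FUSE mounts only arrived in later GlusterFS releases, older ones need the NFS route or a dedicated volume:
    # enable quota on the volume and cap a user's directory (path and size are examples)
    gluster volume quota test-volume enable
    gluster volume quota test-volume limit-usage /users/alice 10GB
    # newer releases: mount just the sub-directory over FUSE
    mount -t glusterfs server1:/test-volume/users/alice /var/www/alice
    # older releases: export the sub-directory over Gluster NFS instead
    # (may also need nfs.export-dir set on the volume)
    mount -t nfs -o vers=3 server1:/test-volume/users/alice /var/www/alice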
2011 May 05
1
CIFS Documentation
Hello,
It would be a good idea to update the Documentation about CIFS:
http://gluster.com/community/documentation/index.php/Gluster_3.2:_Exporting_Gluster_Volumes_Through_Samba
The simple truth is that there is no CIFS support in gluster itself, so it should not be in the
docs.
As I found out, this was also suggested earlier:
http://www.mail-archive.com/gluster-users at
2018 Feb 10
0
Tier Volumes
Hello everyone.
I have a new GlusterFS setup with 3 servers and 2 volumes. The "HotTier"
volume uses NVMe and the "ColdTier" volume uses HDDs. How do I specify the
tiers for each volume?
I will be adding 2 more HDDs to each server. I would then like to change
from a Replicate to Distributed-Replicated. Not sure if that makes a
difference in the tiering setup.
[root at
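For what it's worth, in the 3.x/4.0 tiering feature the hot tier is not a separate volume: the fast bricks are attached to the existing volume. A hedged sketch, with hypothetical brick paths and assuming the HDD-backed volume is the one named "ColdTier":
    # attach the NVMe bricks as a replicated hot tier on top of the HDD volume
    gluster volume tier ColdTier attach replica 3 \
        server1:/nvme/hot-brick server2:/nvme/hot-brick server3:/nvme/hot-brick
    # watch promotion/demotion activity
    gluster volume tier ColdTier status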
2011 Mar 31
1
Error rpmbuild Glusterfs 3.1.3
Hi,
I am having a lot of trouble when I try to build RPMs out of the glusterfs
3.1.3 tarball on my SLES servers (SLES 10.1 & SLES 11.1).
Everything runs fine, I guess, until it tries to build the RPMs.
Then I always run into this error:
RPM build errors:
File not found:
/var/tmp/glusterfs-3.1.3-1-root/opt/glusterfs/3.1.3/local/libexec/gsyncd
File not found by glob:
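A small sketch of how this is usually narrowed down; the file names come from the error above, everything else is an assumption:
    # rebuild straight from the release tarball so its own spec file is used
    rpmbuild -ta glusterfs-3.1.3.tar.gz
    # check whether gsyncd (geo-replication) was actually built and staged
    find /var/tmp/glusterfs-3.1.3-1-root -name 'gsyncd*'
    # if it is missing, the spec's %files list and the configure result
    # (geo-replication needs Python) are out of sync on this SLES host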
2017 Jun 29
1
issue with trash feature and arbiter volumes
Gluster 3.10.2
I have a replica 3 (2+1) volume and I have just seen both data bricks go
down (arbiter stayed up). I had to disable trash feature to get the bricks
to start. I had a quick look on bugzilla but did not see anything that
looked similar. I just wanted to check that I was not hitting some known
issue and/or doing something stupid, before I open a bug. This is from the
brick log:
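For reference, the trash translator is controlled per volume, so disabling it and force-starting the bricks looks roughly like this (the volume name is hypothetical):
    # turn the trash feature off for the affected volume
    gluster volume set myvol features.trash off
    # restart any bricks that failed to come up
    gluster volume start myvol force
    # verify the current setting
    gluster volume get myvol features.trash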
2012 Oct 23
1
Problems with striped-replicated volumes on 3.3.1
Good afternoon,
I am playing around with GlusterFS 3.1 in CentOS 6 virtual machines to see if I can get a proof of concept for a bigger project. In my setup, I have 4 GlusterFS servers with two 10GB XFS bricks each (per your quick-start guide). So I have a total of 8 bricks. When bu
I have no problem with distributed-replicated volumes. However, when I set up a striped replicated
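A sketch of the 3.3-era create syntax for 4 servers with 2 bricks each; brick paths are hypothetical, brick ordering determines which bricks pair up, and the stripe translator was later deprecated in favour of sharding:
    # 8 bricks -> stripe 2 x replica 2 x distribute 2
    gluster volume create striped-repl stripe 2 replica 2 transport tcp \
        server1:/data/brick1 server2:/data/brick1 \
        server3:/data/brick1 server4:/data/brick1 \
        server1:/data/brick2 server2:/data/brick2 \
        server3:/data/brick2 server4:/data/brick2
    gluster volume start striped-repl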
2017 Aug 22
2
Mapping subfolder of a samba share in Windows fails with access denied
Hi,
I am trying to map a network drive on a Windows 7 client. It is possible
to map the shared folder, but as soon as I try to map a subfolder,
Windows shows an access denied message and prompts for another username
and password. The user has full control over the subfolder (configured
via the Windows security tab). The samba.log shows:
Aug 22 10:25:19 FILESERVER smbd[5409]: Could not close
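One way to narrow down whether Samba or the Windows client is denying the sub-folder, sketched with placeholder server, share, and user names:
    # test the sub-folder directly through Samba, bypassing the Windows client
    smbclient //FILESERVER/share -U someuser -c 'cd subfolder; ls'
    # map the sub-folder from the Windows side with explicit credentials
    # (run in cmd.exe on the client):
    #   net use Z: \\FILESERVER\share\subfolder /user:DOMAIN\someuser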
2017 Nov 12
1
Gluster clients can't see directories that exist or are created within a mounted volume, but can enter them.
> Clarification for below logs:
>
> - 'dev_static' is the gluster volume.
> - 'int-kube-01' is the gluster client.
> - '10.51.70.151' is the first node in a three node (2 replica, 1 arbiter) gluster cluster.
> - '/var/lib/kubelet/...../iss3dev-static' is a directory on the client that should be mounting
2017 Nov 08
0
Gluster clients can't see directories that exist or are created within a mounted volume, but can enter them.
> On 8 Nov 2017, at 9:03 pm, Nithya Balachandran <nbalacha at redhat.com> wrote:
>
>
> That is not the log for the mount. Please check /var/log/glusterfs/var-lib-mountedgluster.log on the system on which you are running the mount process.
>
> Please provide the volume config details as well (gluster volume info) from one of the server nodes.
>
Oh I'm sorry, I
2018 Feb 28
0
df reports wrong full capacity for distributed volumes (Glusterfs 3.12.6-1)
Hi Jose,
There is a known issue with gluster 3.12.x builds (see [1]) so you may be
running into this.
The "shared-brick-count" values seem fine on stor1. Please send us "grep -n
"share" /var/lib/glusterd/vols/volumedisk1/*" results for the other nodes
so we can check if they are the cause.
Regards,
Nithya
[1] https://bugzilla.redhat.com/show_bug.cgi?id=1517260
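The value being checked lives in the generated brick volfiles; a sketch of what the grep looks like on a healthy node where every brick sits on its own filesystem (file names and output are illustrative):
    grep -n "shared-brick-count" /var/lib/glusterd/vols/volumedisk1/*
    # expected when each brick has its own filesystem:
    #   volumedisk1.stor1.mnt-glusterfs-vol1-brick1.vol:3:    option shared-brick-count 1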
2017 Dec 20
4
Syntax for creating arbiter volumes in gluster 4.0
Hi,
The existing syntax in the gluster CLI for creating arbiter volumes is
`gluster volume create <volname> replica 3 arbiter 1 <list of bricks>` .
It means (or at least intended to mean) that out of the 3 bricks, 1
brick is the arbiter.
There has been some feedback while implementing arbiter support in
glusterd2 for glusterfs-4.0 that we should change this to `replica 2
arbiter
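With the existing syntax, every third brick in the list becomes the arbiter, for example (host and brick names are made up):
    # one replica set: two data bricks plus one metadata-only arbiter brick
    gluster volume create arbvol replica 3 arbiter 1 \
        server1:/bricks/data1 server2:/bricks/data1 server3:/bricks/arbiter1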
2017 Oct 18
0
Mounting of Gluster volumes in Kubernetes
Hi all,
Wondered if there are others in the community using GlusterFS on Google
Compute Engine and Kubernetes via Google Container Engine together.
We're running glusterfs 3.7.6 on Ubuntu Xenial across 3 GCE nodes. We have
a single replicated volume of ~800GB that our pods running in Kubernetes
are mounting.
We've observed a pattern of soft lockups on our Kubernetes nodes that mount
our
2011 Jun 20
0
Directory structure replication on distributed volumes
Most common disk file systems have a maximum of 2^32 inodes, whereas
GlusterFS can have 2^64 as far as I know. GlusterFS seems to replicate the
directory structure on distributed volumes on all bricks, unlike files
which it puts into only one brick.
Does this mean that if there are lots of directories, for example one per file at the
leaves of a directory tree, the inode count limit of the underlying
disk
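Whether this matters in practice can be checked per brick filesystem; a small sketch with a hypothetical brick path (XFS allocates inodes dynamically, while ext4 fixes the count at mkfs time):
    # inode capacity and usage of the brick filesystem
    df -i /data/brick1
    # for ext4 bricks the inode count is set at creation time,
    # e.g. one inode per 16 KiB of space:
    #   mkfs.ext4 -i 16384 /dev/sdb1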
2018 Feb 27
2
df reports wrong full capacity for distributed volumes (Glusterfs 3.12.6-1)
Hi,
Some days ago all my glusterfs configuration was working fine. Today I
realized that the total size reported by the df command had changed and is
smaller than the aggregated capacity of all the bricks in the volume.
I checked that the status of all the volumes is fine, all the glusterd daemons
are running, there is no error in logs, however df shows a bad total size.
My configuration for one volume:
2018 Apr 12
0
Unreasonably poor performance of replicated volumes
I guess you already went through the user lists and tried something like this:
http://lists.gluster.org/pipermail/gluster-users/2018-April/033811.html
I have the exact same setup, and below is as far as it got after months of
trial and error.
We all have roughly the same setup and the same issue - you can find
posts like yours on a daily basis.
On Wed, Apr 11, 2018 at 3:03 PM, Anastasia Belyaeva
2018 Mar 01
0
df reports wrong full capacity for distributed volumes (Glusterfs 3.12.6-1)
I'm sorry for my last incomplete message.
Below the output of both volumes:
[root at stor1t ~]# gluster volume rebalance volumedisk1 status
Node   Rebalanced-files   size   scanned   failures   skipped   status   run time in h:m:s
------   ----------------   ------   -------   --------   -------   ------   -----------------
2018 Mar 01
0
df reports wrong full capacity for distributed volumes (Glusterfs 3.12.6-1)
Hi Jose,
On 28 February 2018 at 22:31, Jose V. Carrión <jocarbur at gmail.com> wrote:
> Hi Nithya,
>
> My initial setup was composed of 2 similar nodes: stor1data and stor2data.
> A month ago I expanded both volumes with a new node: stor3data (2 bricks
> per volume).
> Of course, then to add the new peer with the bricks I did the 'balance
> force' operation.
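For context, expanding a distributed volume with a new peer (the 'balance force' above presumably being a rebalance) usually looks like this; peer and brick names below are placeholders, not the poster's exact commands:
    gluster peer probe stor3data
    gluster volume add-brick volumedisk1 stor3data:/mnt/disk1/brick stor3data:/mnt/disk2/brick
    # spread existing data onto the new bricks
    gluster volume rebalance volumedisk1 start force
    gluster volume rebalance volumedisk1 status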
2018 Feb 28
0
df reports wrong full capacity for distributed volumes (Glusterfs 3.12.6-1)
Hi Jose,
On 28 February 2018 at 18:28, Jose V. Carrión <jocarbur at gmail.com> wrote:
> Hi Nithya,
>
> I applied the workaround for this bug and now df shows the right size:
>
> That is good to hear.
> [root at stor1 ~]# df -h
> Filesystem Size Used Avail Use% Mounted on
> /dev/sdb1 26T 1,1T 25T 4% /mnt/glusterfs/vol0
> /dev/sdc1
2018 Apr 13
1
Unreasonably poor performance of replicated volumes
Thanks a lot for your reply!
You guessed it right though - mailing lists, various blogs, documentation,
videos and even source code at this point. Changing some of the options
does make performance slightly better, but nothing particularly
groundbreaking.
So, if I understand you correctly, no one has yet managed to get acceptable
performance (relative to underlying hardware capabilities) with
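For completeness, the knobs people usually try first are volume options like the ones below; this is only a hedged sketch with arbitrary example values, since the point of the thread is that none of them gave a breakthrough:
    gluster volume set myvol performance.cache-size 1GB
    gluster volume set myvol performance.io-thread-count 32
    gluster volume set myvol performance.write-behind-window-size 4MB
    gluster volume set myvol client.event-threads 4
    gluster volume set myvol server.event-threads 4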