Displaying 20 results from an estimated 900 matches similar to: "fuse mount disconnecting..."
2013 Mar 21
1
sshfs -o rellinks (module option) rejected by fuse
New to sshfs and new to this mailing list, so please guide me if required.
Is this a bug? When sshfs is given the option -o rellinks, it responds with
fuse: unknown option `rellinks'
According to my understanding of the sshfs man page and --help output
this option a) is valid and b) should be passed to the module, not to fuse.
Versions:
SSHFS version 2.4
FUSE library version: 2.8.5
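For anyone hitting the same thing, a quick hedged check (only options the installed build itself reports are trustworthy):
# sshfs -V                            # prints the SSHFS/FUSE versions quoted above
# sshfs --help 2>&1 | grep -i link    # list the symlink-related options this build accepts
# sshfs -o transform_symlinks user@host:/remote /mnt/point    # transform_symlinks is a long-standing related option
If rellinks is absent from the installed build's --help output, it was likely added in a release newer than sshfs 2.4.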
2013 Sep 10
4
compiling samba vfs module
hi All,
The system is Ubuntu 12.04
I downloaded and extracted the source packages of samba and glusterfs, and I built glusterfs, so I have the
necessary structure:
glusterfs version is 3.4 and it's from ppa.
# ls /data/gluster/glusterfs-3.4.0final/debian/tmp/usr/include/glusterfs/api/glfs.h
/data/gluster/glusterfs-3.4.0final/debian/tmp/usr/include/glusterfs/api/glfs.h
Unfortunately I'm
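(Sketch of the usual approach, with paths taken from the ls output above and otherwise illustrative: samba's vfs_glusterfs module is built when configure can locate libgfapi, normally via the glusterfs-api pkg-config file.)
# export PKG_CONFIG_PATH=/data/gluster/glusterfs-3.4.0final/debian/tmp/usr/lib/pkgconfig:$PKG_CONFIG_PATH
# pkg-config --cflags --libs glusterfs-api    # should print include/lib flags for the tree built above
# ./configure && make                         # re-run samba's configure so it now detects gfapi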
2010 Apr 09
1
[Gluster-devel] Gluster health/status
Gluster devs,
I found the message below in the archives. glfs-health.sh is not
included in the v3.0.3 sources - is there any plan to add this to the
"extras" directory? What's its status?
Ian
== snip ==
Raghavendra G
Mon, 22 Feb 2010 20:20:33 -0800
Hi all,
Here is some work related to Health monitoring. glfs-health.sh is a shell
script to check the health of glusterfs.
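(Pending its inclusion in extras, a rough health probe can be approximated with stock commands; a sketch only, not a substitute for glfs-health.sh:)
# gluster volume info                 # volume exists and is Started
# gluster volume status VOLNAME      # brick processes online, on releases that have this subcommand
# mount -t glusterfs server:/VOLNAME /mnt/probe && touch /mnt/probe/.alive && umount /mnt/probe    # end-to-end I/O check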
2023 Jul 05
1
remove_me files building up
Hi Strahil,
This is the output from the commands:
root at uk3-prod-gfs-arb-01:~# du -h -x -d 1 /data/glusterfs/gv1/brick1/brick
2.2G /data/glusterfs/gv1/brick1/brick/.glusterfs
24M /data/glusterfs/gv1/brick1/brick/scalelite-recordings
16K /data/glusterfs/gv1/brick1/brick/mytute
18M /data/glusterfs/gv1/brick1/brick/.shard
0
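(Context for the numbers above: on sharded volumes, deletion markers accumulate under .shard/.remove_me on each brick. A hedged way to check whether those markers are what is growing, using the same brick path:)
# find /data/glusterfs/gv1/brick1/brick/.shard/.remove_me -type f | wc -l
# du -sh /data/glusterfs/gv1/brick1/brick/.shard/.remove_me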
2023 Jul 04
1
remove_me files building up
Thanks for the clarification.
That behaviour is quite weird, as arbiter bricks should hold only metadata.
What does the following show on host uk3-prod-gfs-arb-01:
du -h -x -d 1 /data/glusterfs/gv1/brick1/brick
du -h -x -d 1 /data/glusterfs/gv1/brick3/brick
du -h -x -d 1 /data/glusterfs/gv1/brick2/brick
If indeed the shards are taking space - that is a really strange situation. From which version
2023 Jun 30
1
remove_me files building up
Hi,
We're running a cluster with two data nodes and one arbiter, and have sharding enabled.
We had an issue a while back where one of the servers crashed; we got the server back up and running, ensured that all healing entries cleared, and also increased the server spec (CPU/Mem) as this seemed to be the potential cause.
Since then, however, we've seen some strange behaviour,
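(For reference, heal state is usually confirmed like this; the volume name gv1 is assumed from the brick paths elsewhere in the thread:)
# gluster volume heal gv1 info            # per-brick list of entries still pending heal
# gluster volume heal gv1 info summary    # counts only, on newer releases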
2023 Jul 03
1
remove_me files building up
Hi,
you mentioned that the arbiter bricks run out of inodes. Are you using XFS? Can you provide the xfs_info of each brick?
Best Regards,
Strahil Nikolov
On Sat, Jul 1, 2023 at 19:41, Liam Smith <liam.smith at ek.co> wrote: Hi,
We're running a cluster with two data nodes and one arbiter, and have sharding enabled.
We had an issue a while back where one of the servers
2023 Jul 04
1
remove_me files building up
Hi,
Thanks for your response, please find the xfs_info for each brick on the arbiter below:
root at uk3-prod-gfs-arb-01:~# xfs_info /data/glusterfs/gv1/brick1
meta-data=/dev/sdc1 isize=512 agcount=31, agsize=131007 blks
= sectsz=512 attr=2, projid32bit=1
= crc=1 finobt=1, sparse=1, rmapbt=0
=
2023 Jul 04
1
remove_me files building up
Hi Strahil,
We're using gluster to act as a share for an application to temporarily process and store files, before they're archived off overnight.
The issue we're seeing isn't the bricks running out of inodes, but the actual disk space on the arb server running low.
This is the df -h output for the bricks on the arb server:
/dev/sdd1 15G 12G 3.3G 79%
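(A quick way to tell the two failure modes apart on the same brick:)
# df -h /data/glusterfs/gv1/brick1    # block-space usage - the problem reported here
# df -i /data/glusterfs/gv1/brick1    # inode usage - what an arbiter brick normally consumes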
2023 Jul 04
1
remove_me files building up
Hi Liam,
I saw that your XFS uses "imaxpct=25", which for an arbiter brick is a little bit low.
If you have free space on the bricks, increase the maxpct to a bigger value, like:
xfs_growfs -m 80 /path/to/brick
That will set 80% of the filesystem for inodes, which you can verify with df -i /brick/path (compare before and after). This way you won't run out of inodes in the future.
Of course, always
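(A worked before/after of the suggested change, with the brick path from earlier in the thread; output illustrative:)
# df -i /data/glusterfs/gv1/brick1                 # note Inodes/IFree before
# xfs_growfs -m 80 /data/glusterfs/gv1/brick1      # raise imaxpct from 25 to 80
# df -i /data/glusterfs/gv1/brick1                 # the inode ceiling should now be far higher
Note that imaxpct is only a ceiling on the share of the filesystem inodes may occupy; raising it allocates nothing until inodes are actually created.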
2017 Jan 03
2
shadow_copy and glusterfs not working
Hello,
we are trying to configure a CTDB cluster with Glusterfs. We are using
Samba 4.5 together with gluster 3.9. We set up an lvm2 thin-provisioned
volume to use gluster-snapshots.
Then we configured the first share without using shadow_copy2 and
everything was working fine.
Then we added the shadow_copy2 parameters, when we did a "smbclient" we
got the following message:
root at
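(For comparison, a minimal shadow_copy2 share of the kind usually paired with scheduled snapshots; an illustrative sketch only - the volume name and the snapdir/format values are hypothetical and depend on how snapshots are exposed, and module order in "vfs objects" is a frequent culprit when snapshots fail to appear:)
[share]
    path = /
    glusterfs:volume = gv0
    vfs objects = glusterfs shadow_copy2
    shadow:snapdir = .snaps
    shadow:format = GMT-%Y.%m.%d-%H.%M.%S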
2018 May 07
2
Compiling 3.13.2 under FreeBSD 11.1?
Hello,
Has anyone managed to successfully compile the latest 3.13.2 under
FreeBSD 11.1? ./autogen.sh and ./configure seem to work but make
fails:
Making all in src
CC glfs.lo
cc: warning: argument unused during compilation: '-rdynamic'
[-Wunused-command-line-argument]
cc: warning: argument unused during compilation: '-rdynamic'
[-Wunused-command-line-argument]
fatal
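(The '-rdynamic' lines are only clang warnings and are normally harmless; the build dies at whatever follows the truncated "fatal". A hedged FreeBSD-flavoured invocation - package names may vary by ports tree:)
# pkg install gmake bison flex python    # build prerequisites from packages
# ./autogen.sh && ./configure
# gmake V=1                              # verbose make, so the full fatal error and failing command are shown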
2017 Jan 04
0
shadow_copy and glusterfs not working
On Tue, 2017-01-03 at 15:16 +0100, Stefan Kania via samba wrote:
> Hello,
>
> we are trying to configure a CTDB cluster with Glusterfs. We are using
> Samba 4.5 together with gluster 3.9. We set up an lvm2 thin-provisioned
> volume to use gluster-snapshots.
> Then we configured the first share without using shadow_copy2 and
> everything was working fine.
>
> Then we
2018 Feb 07
0
Fwd: Troubleshooting glusterfs
Hello Nithya! Thank you for your help on figuring this out!
We changed our configuration, and after a successful test yesterday
we ran into a new issue today.
The test, involving moderate read/write (~20-30 Mb/s) and scaling of the
storage, ran for about 3 hours, and at some point the system got stuck:
At the user level there are errors like the following when trying to work with the filesystem:
OSError:
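(When a fuse mount wedges like this, the client-side log is the usual first stop; a sketch assuming default log locations - the log file is named after the mount point with slashes turned into dashes, so the name below is hypothetical:)
# tail -f /var/log/glusterfs/mnt-mountpoint.log    # fuse client log for a mount at /mnt/mountpoint
# gluster volume status VOLNAME clients            # confirm the client still reaches all bricks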
2017 Sep 13
3
glusterfs expose iSCSI
Hi all
I want to configure glusterfs to expose an iSCSI target. I followed this
article
https://pkalever.wordpress.com/2016/06/23/gluster-solution-for-non-shared-persistent-storage-in-docker-container/
but when I installed tcmu-runner, it didn't work.
I set this up on CentOS7 and installed tcmu-runner by rpm. When I run targetcli,
it does not show user:glfs and user:gcow
/> ls
o- /
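(When user:glfs is missing from targetcli, the usual cause is tcmu-runner not running or its glfs handler not being found; a hedged checklist - the handler path varies by build:)
# systemctl status tcmu-runner        # the daemon must be running for user: backstores to appear
# ls /usr/lib64/tcmu-runner/          # look for handler_glfs.so
# journalctl -u tcmu-runner | tail    # handler load errors show up here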
2018 Feb 05
2
Fwd: Troubleshooting glusterfs
Hello Nithya!
Thank you so much, I think we are close to building a stable storage solution
according to your recommendations. Here's our rebalance log - please don't
pay attention to error messages after 9AM - this is when we manually
destroyed the volume to recreate it for further testing. Also, all remove-brick
operations you see in the log were executed manually when recreating the
volume.
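(For readers following along, rebalance progress and per-node failure counts come from the status subcommand:)
# gluster volume rebalance VOLNAME status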
2018 Feb 08
5
self-heal trouble after changing arbiter brick
Hi folks,
I'm having trouble moving an arbiter brick to another server because of I/O load issues. My setup is as follows:
# gluster volume info
Volume Name: myvol
Type: Distributed-Replicate
Volume ID: 43ba517a-ac09-461e-99da-a197759a7dc8
Status: Started
Snapshot Count: 0
Number of Bricks: 3 x (2 + 1) = 9
Transport-type: tcp
Bricks:
Brick1: gv0:/data/glusterfs
Brick2: gv1:/data/glusterfs
Brick3:
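(For a 3 x (2 + 1) layout like this, the arbiter is normally moved with replace-brick, which immediately starts a heal onto the new brick - exactly where the I/O load bites; hosts and paths below are placeholders:)
# gluster volume replace-brick myvol old-arb:/data/glusterfs new-arb:/data/glusterfs commit force
# gluster volume heal myvol info    # watch the resulting heal queue drain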
2009 Mar 11
1
Enterprise Application with O_DIRECT access
Hello everyone,
I am learning about and evaluating glusterfs for film/video editing facilities.
Some major realtime film/video editing applications use
O_DIRECT file access for video/audio data files.
The GLFS client, via the fuse mechanism, disallows opening files with the
O_DIRECT flag.
I wrote a little sample program that reads a file with the O_DIRECT flag, and
tried opening files on GLFS volumes.
It
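(The same behaviour can be reproduced without custom code; a sketch against a fuse-mounted volume, plus the glusterfs fuse mount option that changes direct-I/O handling - paths are placeholders:)
# dd if=/mnt/glfs/somefile of=/dev/null bs=1M iflag=direct    # fails when the O_DIRECT open is refused
# mount -t glusterfs -o direct-io-mode=enable server:/volume /mnt/glfs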
2007 Jan 11
1
yum does not download i386 packages on x86_64 Hardware from the centos-4.4 extras repo
Hi,
I installed centos-4.4 on x86_64, and I wanted to install nx and freenx from the centos extras.
But
yum list available '*nx*'
gives me only
yum list available '*nx*'
Setting up repositories
Reading repository metadata in from local files
Available Packages
lynx.x86_64 2.8.5-18.2 base
Question: How, by using yum, can I install
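(On CentOS 4's yum, 32-bit packages on x86_64 are addressed with an explicit arch suffix; a hedged example:)
# yum list available '*nx*.i386'
# yum install freenx.i386 nx.i386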
2005 Oct 17
1
CESA-2005:803 Critical CentOS 4 ia64 lynx - security update
CentOS Errata and Security Advisory 2005:803
https://rhn.redhat.com/errata/RHSA-2005-803.html
The following updated files have been uploaded and are currently
syncing to the mirrors:
files:
updates/ia64/RPMS/lynx-2.8.5-18.1.ia64.rpm
--
Pasi Pirhonen - upi at iki.fi - http://iki.fi/upi/