Displaying 20 results from an estimated 20000 matches similar to: "ext4 issues"
2012 Aug 23
1
Stale NFS file handle
Hi, I'm a bit curious about error messages of the type "remote operation
failed: Stale NFS file handle". All clients using the file system use the
Gluster Native Client, so why would a stale NFS file handle be reported?
Regards,
/jon
2013 Jul 15
4
GlusterFS 3.4.0 and 3.3.2 released!
Hi All,
3.4.0 and 3.3.2 releases of GlusterFS are now available. GlusterFS 3.4.0
can be downloaded from [1]
and release notes are available at [2]. Upgrade instructions can be
found at [3].
If you would like to propose bug fix candidates or minor features for
inclusion in 3.4.1, please add them at [4].
3.3.2 packages can be downloaded from [5].
A big note of thanks to everyone who helped in
2013 Feb 27
1
Slow read performance
Help please-
I am running 3.3.1 on CentOS using a 10Gb network. I get reasonable write speeds, although I think they could be faster. But my read speeds are REALLY slow.
Executive summary:
On gluster client-
Writes average about 700-800MB/s
Reads average about 70-80MB/s
On server-
Writes average about 1-1.5GB/s
Reads average about 2-3GB/s
Any thoughts?
Here are some additional details:
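A quick note for anyone reproducing the read numbers above: a minimal sketch of a sequential read test (the file path is a placeholder; dropping the page cache first avoids measuring RAM instead of the network and disks):
echo 3 > /proc/sys/vm/drop_caches        # as root, on the client and the servers
dd if=/mnt/gluster/bigfile of=/dev/null bs=1M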
2013 Jun 11
1
cluster.min-free-disk working?
Hi,
I have a system consisting of four bricks, using 3.3.2qa3. I used the
command
gluster volume set glusterKumiko cluster.min-free-disk 20%
Two of the bricks were empty, and two were full to just under 80% when
building the volume.
Now, when syncing data (from a primary system) with min-free-disk set to
20%, I thought new data would go to the two empty bricks, but gluster
does not seem
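For anyone checking the same behaviour, a minimal sketch of setting and verifying the option (the volume name is from the message; the brick mount paths are placeholders). As far as I understand, min-free-disk mainly affects where new files are created once a brick crosses the threshold; it does not move existing data:
gluster volume set glusterKumiko cluster.min-free-disk 20%
gluster volume info glusterKumiko        # the option should appear under "Options Reconfigured"
df -h /bricks/brick1 /bricks/brick2      # per-brick usage, checked on each server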
2012 Jun 11
1
"mismatching layouts" flooding in the logs
I have the following appended to gluster logs at around 100kB of logs per second, on all 10 gluster servers:
[2012-06-11 15:08:15.729429] I [dht-layout.c:682:dht_layout_dir_mismatch] 0-sites-dht: subvol: sites-client-41; inode layout - 966367638 - 1002159031; disk layout - 930576244 - 966367637
[2012-06-11 15:08:15.729465] I [dht-common.c:525:dht_revalidate_cbk] 0-sites-dht: mismatching layouts
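"Mismatching layouts" means the in-memory directory layout no longer matches what is on disk. One commonly suggested step for persistent mismatches is a fix-layout rebalance, which recomputes directory layouts without moving data. A sketch only, assuming the volume is named sites as the 0-sites-dht log prefix suggests:
gluster volume rebalance sites fix-layout start
gluster volume rebalance sites status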
2013 Jan 06
1
Link files showing on mount point, 3.3.1
Can anyone tell me how to stop link files from showing up on the client mount point. This started after an upgrade to 3.3.1.
File names and user/group info have been changed, but this is the basic problem. There are about 5 files in just this directory, and I am sure there are more directories with this issue.
-rw-r--r-- 1 user group 29120 Aug 17 2010 file1
---------T 1 root root 0
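Link files like the mode ---------T entry above are DHT pointer files: zero length, sticky bit set, carrying a trusted.glusterfs.dht.linkto xattr. A diagnostic sketch only (not the fix) for locating them on a brick, with a hypothetical brick path:
find /export/brick1 -type f -perm -1000 -size 0c -ls
getfattr -n trusted.glusterfs.dht.linkto /export/brick1/path/to/file1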
2010 Sep 30
1
ldiskfs-ext4 interoperability question
Our current Lustre servers run version 1.8.1.1 with the regular ldiskfs.
We are looking to expand our Lustre file system with new servers/storage and upgrade all the Lustre servers to 1.8.4 at the same time. We
would like to make use of ldiskfs-ext4 on the new servers to allow larger OSTs.
I just want to confirm the following facts:
1. Is it possible to run different versions
2013 Dec 06
2
How reliable is XFS under Gluster?
Hello,
I am at the point of picking an FS for new brick nodes. I liked and used
ext4 until now, but I recently read about an issue introduced by a
patch in ext4 that breaks the distributed translator. At the same time, it
looks like the recommended FS for a brick is no longer ext4 but XFS, which
apparently will also be the default FS in the upcoming Red Hat 7. On the
other hand, XFS is being
2018 Apr 17
5
Getting glusterfs to expand volume size to brick size
pylon:/var/lib/glusterd/vols/dev_apkmirror_data # ack shared-brick-count
dev_apkmirror_data.pylon.mnt-pylon_block3-dev_apkmirror_data.vol
3: option shared-brick-count 3
dev_apkmirror_data.pylon.mnt-pylon_block2-dev_apkmirror_data.vol
3: option shared-brick-count 3
dev_apkmirror_data.pylon.mnt-pylon_block1-dev_apkmirror_data.vol
3: option shared-brick-count 3
Sincerely,
Artem
--
2018 Apr 16
2
Getting glusterfs to expand volume size to brick size
Hi Nithya,
I'm on Gluster 4.0.1.
I don't think the bricks were smaller before - if they were, they would
have been maybe 20GB because Linode's minimum is 20GB. I then extended them
to 25GB, resized with resize2fs as instructed, and have rebooted many times
since. Yet, gluster refuses to see the full disk size.
Here's the status detail output:
gluster volume status dev_apkmirror_data detail
Status
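For reference, a minimal sketch of the grow-and-verify sequence being described in this thread (the device and mount paths below are placeholders loosely based on the volfile names, not taken verbatim from the thread):
resize2fs /dev/sdb                       # after the underlying disk has been enlarged
df -h /mnt/pylon_block1                  # the brick filesystem itself should now show 25GB
gluster volume status dev_apkmirror_data detail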
2012 Aug 14
1
question about list directory missing files or hang
Hi Gluster experts,
I'm new to GlusterFS and I have encountered a problem with directory
listing on GlusterFS 3.3.
I have a volume configuration of 3 (distribute) x 2 (replica). When writing
files to the glusterfs client mount directory, some of the files can't be
listed with the ls command even though they exist. Sometimes the ls command
hangs.
Does anyone know what the problem is?
Thank you
2017 Apr 07
1
Slow write times to gluster disk
Hi,
We noticed dramatic slowness when writing to a gluster disk compared
to writing to an NFS disk. Specifically, when using dd (data
duplicator) to write a 4.3 GB file of zeros:
* on NFS disk (/home): 9.5 Gb/s
* on gluster disk (/gdata): 508 Mb/s
The gluster disk is 2 bricks joined together, no replication or anything
else. The hardware is (literally) the same:
* one server with
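A minimal sketch of the kind of write test described above (the /gdata mount point is from the message; the file name and size are placeholders). conv=fdatasync makes dd include the final flush in the timing, which matters for network filesystems:
dd if=/dev/zero of=/gdata/ddtest bs=1M count=4300 conv=fdatasync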
2012 Jan 22
2
Best practices?
Suppose I start building nodes with (say) 24 drives each.
Would the standard/recommended approach be to make each drive its own
filesystem, and export 24 separate bricks, server1:/data1 ..
server1:/data24 ? Making a distributed replicated volume between this and
another server would then have to list all 48 drives individually.
At the other extreme, I could put all 24 drives into some
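For illustration, a sketch of the per-drive-brick approach described above (server and path names follow the question; the volume name is a placeholder). With replica 2, consecutive bricks in the create command form the mirror pairs:
gluster volume create vol0 replica 2 \
    server1:/data1 server2:/data1 \
    server1:/data2 server2:/data2
# ... one server1/server2 pair per drive, continuing through /data24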
2018 Apr 17
0
Getting glusterfs to expand volume size to brick size
Ok, it looks like the same problem.
@Amar, this fix is supposed to be in 4.0.1. Is it possible to regenerate
the volfiles to fix this?
Regards,
Nithya
On 17 April 2018 at 09:57, Artem Russakovskii <archon810 at gmail.com> wrote:
> pylon:/var/lib/glusterd/vols/dev_apkmirror_data # ack shared-brick-count
> dev_apkmirror_data.pylon.mnt-pylon_block3-dev_apkmirror_data.vol
> 3:
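For context, shared-brick-count tells the posix translator how many bricks share one underlying filesystem; with a value of 3, each brick reports roughly a third of the real disk size. A sketch of checking it, plus one commonly used way to make glusterd rewrite the volfiles (a volume set rewrites them); the option and value below are only an example, not the specific fix from this thread:
grep -r shared-brick-count /var/lib/glusterd/vols/dev_apkmirror_data/
gluster volume set dev_apkmirror_data cluster.min-free-disk 10%   # any volume set regenerates the volfiles
gluster volume status dev_apkmirror_data detail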
2013 Mar 01
1
Gluster quotas, NFS quotas, brick quotas, quota-tools
Hi,
I'd like to try to migrate our NFS-based network to gluster, for replication.
We currently use an ext4 filesystem with quota enabled. We have a
couple thousand users (over LDAP), and their quotas vary wildly. We
manage them with edquota(8) on the NFS server. There are many
services which mount the NFS filesystem, and parse the output of
/usr/bin/quota (a webmail, two mail servers,
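As a point of comparison, Gluster ships its own directory-level quota. It limits paths rather than users, so it is not a drop-in replacement for per-user edquota limits; a minimal sketch with a hypothetical volume name and directory:
gluster volume quota homes enable
gluster volume quota homes limit-usage /alice 10GB
gluster volume quota homes list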
2011 Jun 28
2
Issue with Gluster Quota
2013 May 03
1
GlusterFS VS OpenVZ kernel
Hi,
We have a problem with GlusterFS. We are using it on a CentOS 5 machine
with an OpenVZ kernel. The gluster daemon and gluster clients run on the host,
not in the container. Recently we noticed a problem when we upgraded the
OpenVZ kernel. After the upgrade there are strange errors in case of
accessing the gluster volume. There are a few folders that can't be seen
with 'ls' but if you
2018 Apr 16
0
Getting glusterfs to expand volume size to brick size
What version of Gluster are you running? Were the bricks smaller earlier?
Regards,
Nithya
On 15 April 2018 at 00:09, Artem Russakovskii <archon810 at gmail.com> wrote:
> Hi,
>
> I have a 3-brick replicate volume, but for some reason I can't get it to
> expand to the size of the bricks. The bricks are 25GB, but even after
> multiple gluster restarts and remounts, the
2017 Sep 08
2
GlusterFS as virtual machine storage
FYI, I set up replica 3 (no arbiter this time) and did the same thing -
rebooted one node during lots of file IO on the VM, and IO stopped.
As I mentioned either here or in another thread, this behavior is
caused by the high default of network.ping-timeout. My main problem used
to be that setting it to low values like 3s or even 2s did not prevent
the FS from being mounted read-only in the past (at least with
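A minimal sketch of the tuning being discussed (the volume name is a placeholder; very low values trade faster failover for more spurious disconnects):
gluster volume set myvol network.ping-timeout 5
gluster volume info myvol | grep ping-timeout    # confirm the reconfigured value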
2012 Nov 02
8
Very slow directory listing and high CPU usage on replicated volume
Hi all,
I am having problems with painfully slow directory listings on a freshly
created replicated volume. The configuration is as follows: 2 nodes with
3 replicated drives each. The total volume capacity is 5.6T. We would
like to expand the storage capacity much more, but first we need to figure
this problem out.
Soon after loading up about 100 MB of small files (about 300kb each), the