Displaying 20 results from an estimated 500 matches similar to: "kernel parameters for improving gluster writes on millions of small writes (long)"
2012 Jun 19
1
"Too many levels of symbolic links" with glusterfs automounting
I set up a 3.3 gluster volume for another sysadmin and he has added it
to his cluster via automount. It seemed to work initially, but after some
time (days) he now regularly sees this warning:
"Too many levels of symbolic links"
$ df: `/share/gl': Too many levels of symbolic links
when he tries to traverse the mounted filesystems.
I've been using gluster with static mounts
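(For readers setting up something similar: an autofs map for a GlusterFS volume
typically looks roughly like the sketch below. The map file, server name, and
volume name are assumptions for illustration, not details from this report.)

# /etc/auto.master (assumed layout)
/share  /etc/auto.gl

# /etc/auto.gl - mount gluster volume 'gl' from server gl-server at /share/gl
gl  -fstype=glusterfs  gl-server:/gl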
2012 Aug 23
1
Stale NFS file handle
Hi, I'm a bit curious about error messages of the type "remote operation
failed: Stale NFS file handle". All clients using the file system use the
Gluster Native Client, so why should a stale NFS file handle be reported?
Regards,
/jon
2013 Dec 10
4
Structure needs cleaning on some files
Hi All,
When reading some files we get this error:
md5sum: /path/to/file.xml: Structure needs cleaning
in /var/log/glusterfs/mnt-sharedfs.log we see these errors:
[2013-12-10 08:07:32.256910] W [client-rpc-fops.c:526:client3_3_stat_cbk] 1-testvolume-client-0: remote operation failed: No such file or directory
[2013-12-10 08:07:32.257436] W [client-rpc-fops.c:526:client3_3_stat_cbk]
2012 Jun 20
2
How Fatal? "Server and Client lk-version numbers are not same, reopening the fds"
Despite Joe Landman's sage advice to the contrary, I'm trying to
convince an IPoIB volume to service requests from a GbE client via
some /etc/hosts manipulation. (This may or may not be related to the
automount problems we're having as well.)
This has worked (and continues to work) well on another cluster with a
slightly older version of gluster - the 3.3.0qa42 version on both server
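(For context, the /etc/hosts manipulation being described usually amounts to
making the server hostnames resolve, on the GbE client, to the servers'
Ethernet addresses rather than their IPoIB addresses. The names and addresses
below are purely illustrative.)

# /etc/hosts on the GbE-only client (illustrative addresses)
10.1.1.11   gl-server1   # Ethernet address instead of the IPoIB one
10.1.1.12   gl-server2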
2011 Nov 05
1
glusterfs over rdma ... not.
OK - finished some tests over tcp and ironed out a lot of problems.
rdma is next; should be a snap now....
[I must admit that this is my 1st foray into the land of IB, so some
of the following may be obvious to a non-naive admin..]
except that while I can create and start the volume with rdma as
transport:
==================================
root@pbs3:~
622 $ gluster volume info glrdma
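(A volume like 'glrdma' would typically have been created with the rdma
transport along these lines; the brick paths and the second server name are
assumptions for illustration.)

$ gluster volume create glrdma transport rdma pbs3:/bricks/glrdma pbs4:/bricks/glrdma
$ gluster volume start glrdma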
2011 Oct 26
2
Some questions about theoretical gluster failures.
We're considering implementing gluster for a genomics cluster, and it
seems to have some theoretical advantages that so far seem to have
been borne out in some limited testing, mod some odd problems with an
inability to delete dir trees. I'm about to test with the latest beta
that was promised to clear up these bugs, but as I'm doing that,
answers to these Qs would be
2012 Aug 03
1
Gluster-users Digest, Vol 51, Issue 49
> Message: 4
> Date: Fri, 27 Jul 2012 15:29:41 -0700
> From: Harry Mangalam <hjmangalam at gmail.com>
> Subject: [Gluster-users] Change NFS parameters post-start
> To: gluster-users <gluster-users at gluster.org>
> Message-ID:
> <CAEib2OnKfENr8NhVwkvpsw21C5QJmzu_=C9j144p2Gkn7KP=LQ at mail.gmail.com>
> Content-Type: text/plain; charset=ISO-8859-1
>
2012 Jul 24
1
temp fix: Simultaneous reads and writes from specific apps to IPoIB volume seem to conflict and kill performance.
The problem described in the subject appears NOT to be the case. It's
not that simultaneous reads and writes dramatically decrease perf, but
that the type of /writes/ being done by this app (bedtools) kills
performance. If this was a self-writ app or an infrequently used one,
I wouldn't bother writing this up, but bedtools is a fairly popular
genomics app and since many installations use
2013 Feb 27
4
GlusterFS performance
Hello!
I have GlusterFS installation with parameters:
- 4 servers, connected by 1Gbit/s network (760-800 Mbit/s by iperf)
- Distributed-replicated volume with 4 bricks and 2x4 redundancy formula.
- Replicated volume with 2 bricks and 2x2 formula.
I have run into a problem: if I try to copy a huge number of files (94000
files, 3Gb in size), the process takes a terribly long time (from 20 to 40 minutes). I
2009 Mar 26
2
rsync questions
Hi all:
I got basic rsync working (not server mode). Basically it goes to another server via ssh, backs up subdirectories and stores them on the local server. But I am trying to use the "exclude" feature and could not get it working right.
I am trying to back up /export/home/* (all of users) on another machine but exclude a certain types of files. here is one of my tested exclude
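(For reference, exclude patterns are usually given either inline or via a file,
roughly as below; the patterns and paths are illustrative, not the poster's
actual ones.)

$ rsync -a --exclude='*.tmp' otherhost:/export/home/ /backup/home/
# or keep several patterns in a file:
$ rsync -a --exclude-from=/etc/rsync-excludes otherhost:/export/home/ /backup/home/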
2009 Feb 13
4
uid/gid settings in rsyncd.conf not respected?
Hi All,
I must not understand the uid/gid line in rsyncd.conf. If someone
could briefly point out where I've gone wrong, I'd appreciate it.
I've created a special user to back up a server which has some users
who don't want all their files backed up, so I'm trying to address
their concerns by using the uid= and gid= lines in rsyncd.conf to
have rsyncd run with
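(For context, the uid= and gid= lines live inside a module section of
rsyncd.conf, roughly like the sketch below; the module name, path, and user are
assumptions for illustration. They also only take effect when the daemon itself
is started as root.)

# /etc/rsyncd.conf (illustrative)
[homes]
    path = /export/home
    uid = backupuser
    gid = backupuser
    read only = yes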
2012 Jun 08
1
too many redirects at the gluster download page
It may be just me/chrome, but trying to dl the latest gluster release by
clicking on the Download button next to the Ant leads not to a download
page but to the info page. It invites you to go back to the gluster.org
page from which you just came.
And when you click on the alternative 'Download' links (the button on
the upper right or the larger "Download GlusterFS" icon
2018 Mar 16
3
Discrepancy: R sum() VS C or Fortran sum
Hi all,
I found a discrepancy between sum() in R and a sum done in either C
or Fortran for a vector of just 5 elements. The difference is very small,
but this is a small part of a much larger numerical problem in
which first and second derivatives are computed numerically. This is
part of a numerical methods course I am teaching in which I want to
compare the speed of R versus Fortran (We
2018 Mar 16
1
Discrepancy: R sum() VS C or Fortran sum
My simple functions were meant to compare the result with the gfortran
compiler's sum() function. I thought that the Fortran sum could not be
less precise than R's. I was wrong. I am impressed. The R sum does in fact
match the result when we use the Kahan algorithm.
P.
I am glad to see that R sum() is more accurate than the gfortran
compiler sum.
On 16/03/18 11:37 AM, luke-tierney at uiowa.edu wrote:
2012 Sep 18
4
cannot create a new volume with a brick that used to be part of a deleted volume?
Greetings,
I'm running v3.3.0 on Fedora16-x86_64. I used to have a replicated
volume on two bricks. This morning I deleted it successfully:
########
[root@farm-ljf0 ~]# gluster volume stop gv0
Stopping volume will make its data inaccessible. Do you want to
continue? (y/n) y
Stopping volume gv0 has been successful
[root@farm-ljf0 ~]# gluster volume delete gv0
Deleting volume will erase
2005 Dec 13
1
nsswitch/winbindd_user.c:winbindd_getpwnam(161)
Dear List-
This error is appearing in my log.winbindd log file. The full error is
as follows:
nsswitch/winbindd_user.c:winbindd_getpwnam(161)
user 'RAID' does not exist
nsswitch/winbindd_user.c:winbindd_getpwnam(161)
user 'BAMBOO_LEAVES_SINGLE.TIF' does not exist
To clarify, there is no user by those names (i.e. raid,
BAMBOO_LEAVES_SINGLE.TIF)
on our DC, but there are share
2011 Nov 29
4
[RFC] virtio: use mandatory barriers for remote processor vdevs
Virtio is using memory barriers to control the ordering of
references to the vrings on SMP systems. When the guest is compiled
with SMP support, virtio is only using SMP barriers in order to
avoid incurring the overhead involved with mandatory barriers.
Lately, though, virtio is being increasingly used with inter-processor
communication scenarios too, which involve running two (separate)
2011 Oct 14
2
rsync compares all files again and again
Hi,
we do a 1:1 backup from our main raid to a backup raid every night with
rsync -a --delete /mnt/raid1/ /mnt/raid2
rsync is 3.0.9, filesystems are ext3, OS is SLES 11 SP1.
The rsync process takes several hours, even if no file has changed at all.
Using -vv I see that rsync compares all files every time, and that takes
a long time for some hundreds of millions of small files.
Can I tell rsync it
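(A generic first diagnostic here is a dry run with itemized output, which shows
per file which attribute - size, timestamp, permissions - is triggering the
transfer; the command below is illustrative, not from the original thread.)

$ rsync -a --delete -n -i /mnt/raid1/ /mnt/raid2 | head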
2012 Nov 02
8
Very slow directory listing and high CPU usage on replicated volume
Hi all,
I am having problems with painfully slow directory listings on a freshly
created replicated volume. The configuration is as follows: 2 nodes with
3 replicated drives each. The total volume capacity is 5.6T. We would
like to expand the storage capacity much more, but first we need to figure
this problem out.
Soon after loading up about 100 MB of small files (about 300kb each), the