similar to: "Structure needs cleaning" error

Displaying 20 results from an estimated 1000 matches similar to: ""Structure needs cleaning" error"

2011 May 06
2
Best practice to stop the Gluster CLIENT process?
Hi all! What's the best way to stop the CLIENT process for Gluster? We have dual systems, where the Gluster servers also act as clients, so both glusterd and glusterfsd are running on the system. Stopping the server works via "/etc/init.d/glusterd stop", but how is the client stopped? I need to unmount the filesystem from the server in order to do an fsck on the ext4 volume;
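A minimal sketch of the usual stop sequence, assuming the volume is mounted at /mnt/gluster (the mount point here is hypothetical):

  # Unmount the Gluster volume before touching the backing ext4 filesystem
  umount /mnt/gluster
  # The native client runs as a per-mount glusterfs (FUSE) process; stop any leftovers
  killall glusterfs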
2011 Oct 20
1
tons and tons of clients, oh my!
Hello gluster-verse! I'm about to see if GlusterFS can handle a large number of clients; this was not at all in the plans when we initially selected and set up our current configuration. What sort of experience do you (the collective "you", as in y'all) have with a large client to storage brick server ratio? (~1330:1) Where do you see things going awry? Most of this will be reads
2011 Jun 27
2
Using TSM to back-up glusterfs
Hi, We have been trying to back up a glusterfs (v3.1.4) area using the Tivoli TSM software to an off-site area. The back-up keeps failing with the following typical error messages: 06/14/2011 22:22:58 ANS1587W I/O error reading file attributes for: /gdata/projects/philex/OAG/2011/May16/mdor3km10/coast_den2.in. errno = 22, Invalid argument 06/14/2011 22:22:59 ANS4007E Error processing
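One way to narrow this down is to reproduce the attribute read outside TSM; a minimal check, using the path from the error message:

  # If stat also returns EINVAL here, the problem is in the glusterfs mount, not in TSM
  stat /gdata/projects/philex/OAG/2011/May16/mdor3km10/coast_den2.in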
2010 Nov 13
3
Gluster At SC10 ?
Howdy, are any of the Gluster folks going to SC10 next week? Mike
2011 Jan 19
2
tuning gluster performance vs. nfs
I have been tweaking and researching for a while now and can't seem to get "good" performance out of Gluster. I'm using Gluster to replace an NFS server (c1.xlarge) that serves files to an array of web servers, all in EC2. In my tests Gluster is significantly slower than NFS on average. I'm using a distributed replicated volume on two (m1.large) bricks: Volume Name: ebs
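A few translator options that are commonly tuned in this situation; a sketch only, where the values are assumptions to benchmark rather than recommendations ("ebs" is the volume name from the post):

  # Enlarge the client-side read cache (io-cache translator)
  gluster volume set ebs performance.cache-size 256MB
  # Aggregate more writes before sending them to the bricks
  gluster volume set ebs performance.write-behind-window-size 1MB
  # More server-side I/O threads for many concurrent web-server clients
  gluster volume set ebs performance.io-thread-count 16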
2011 Nov 05
1
glusterfs over rdma ... not.
OK - finished some tests over tcp and ironed out a lot of problems. rdma is next; should be a snap now.... [I must admit that this is my 1st foray into the land of IB, so some of the following may be obvious to a non-naive admin..] except that while I can create and start the volume with rdma as transport: ================================== root at pbs3:~ 622 $ gluster volume info glrdma
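For reference, the usual rdma workflow looks roughly like this (brick path and mount point are hypothetical; "pbs3" and "glrdma" are from the post):

  # Create and start a volume with the rdma transport
  gluster volume create glrdma transport rdma pbs3:/data/glrdma
  gluster volume start glrdma
  # Clients request rdma explicitly at mount time
  mount -t glusterfs -o transport=rdma pbs3:/glrdma /mnt/glrdma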
2011 Oct 17
1
Need help with optimizing GlusterFS for Apache
Our webserver is configured as such: the actual website files (php, html, css and so on) are on a dedicated non-glusterfs ext4 partition. However, the website accesses videos and especially image files on a gluster mounted directory. The write performance for our backend gluster storage is not that important, since it only comes into play when someone uploads a video or image. However, the files
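For a read-mostly media workload like this, the options below are common starting points; a sketch under assumed names ("webvol" is hypothetical), with values to validate against the actual image/video mix:

  # Cache frequently re-read images on the client side
  gluster volume set webvol performance.cache-size 512MB
  # Revalidate cached data less aggressively (seconds)
  gluster volume set webvol performance.cache-refresh-timeout 10
  # quick-read short-circuits open/read for small files such as thumbnails
  gluster volume set webvol performance.quick-read on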
2012 Nov 02
8
Very slow directory listing and high CPU usage on replicated volume
Hi all, I am having problems with painfully slow directory listings on a freshly created replicated volume. The configuration is as follows: 2 nodes with 3 replicated drives each. The total volume capacity is 5.6T. We would like to expand the storage capacity much more, but first we need to figure this problem out. Soon after loading up about 100 MB of small files (about 300kb each), the
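A quick way to quantify the problem, plus one commonly suggested mitigation (mount point and volume name hypothetical; whether stat-prefetch helps depends on the workload):

  # Measure the listing itself, not terminal rendering
  time ls -l /mnt/gluster/smallfiles > /dev/null
  # stat-prefetch caches metadata gathered during readdir and often speeds up ls -l
  gluster volume set myvol performance.stat-prefetch on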
2011 Oct 26
2
Some questions about theoretical gluster failures.
We're considering implementing gluster for a genomics cluster, and it seems to have some theoretical advantages that so far seem to have been borne out in some limited testing, mod some odd problems with an inability to delete dir trees. I'm about to test with the latest beta that was promised to clear up these bugs, but as I'm doing that, answers to these Qs would be
2012 Sep 25
2
GlusterFS performance
GlusterFS newbie (less than a week) here. Running GlusterFS 3.2.6 servers on Dell PE2900 systems with four 3.16 GHz Xeon cores and 16 GB memory under CentOS 5.8. For this test, I have a distributed volume of one brick only, so no replication. I have made performance measurements with both dd and Bonnie++, and they confirm each other; here I report only the dd numbers (using bs=1024k). File
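For comparison, a dd invocation along the lines described (test path hypothetical; conv=fdatasync makes dd wait for the data to reach disk, so the result is not just page-cache speed):

  # Sequential write, 1 MB blocks, flushed to disk before dd exits
  dd if=/dev/zero of=/mnt/gluster/ddtest bs=1024k count=1024 conv=fdatasync
  # Sequential read of the same file
  dd if=/mnt/gluster/ddtest of=/dev/null bs=1024k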
2012 Jan 22
2
Best practices?
Suppose I start building nodes with (say) 24 drives each in them. Would the standard/recommended approach be to make each drive its own filesystem, and export 24 separate bricks, server1:/data1 .. server1:/data24 ? Making a distributed replicated volume between this and another server would then have to list all 48 drives individually. At the other extreme, I could put all 24 drives into some
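Under the one-brick-per-drive approach, the create command would look roughly like this (names hypothetical); with replica 2, consecutive bricks form the replica pairs, so the interleaved order below keeps each pair on different servers:

  gluster volume create bigvol replica 2 \
      server1:/data1 server2:/data1 \
      server1:/data2 server2:/data2
  # ...and so on, pairing through server1:/data24 server2:/data24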
2012 Feb 23
1
default cluster.stripe-block-size for striped volumes on 3.0.x vs 3.3 beta (128kb), performance change if i reduce to a smaller block size?
Hi, I've been migrating data from an old striped 3.0.x gluster install to a 3.3 beta install. I copied all the data to a regular XFS partition (4K blocksize) from the old gluster striped volume and it totaled 9.2TB. With the old setup I used the following option in a "volume stripe" block in the configuration file in a client : volume stripe type cluster/stripe option
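In 3.x the block size is set per volume with the CLI rather than in a hand-written volfile; a sketch (volume name hypothetical, and any change is worth benchmarking before migrating data):

  # Example: reduce the stripe block size below the 3.3 default of 128KB
  gluster volume set myvol cluster.stripe-block-size 64KB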
2005 Oct 10
2
Compile error
A bit off topic, but having trouble again with the 64-bit libs when trying to compile this short fortran code to build a shared object. Can anyone offer an explanation of what is meant by the relocation bit, and how to fix it? I've tried -fPIC, but apparently I either didn't have it in the right place or it does not work. I'd be glad to take this offline if someone could help me.
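For a shared object, position-independent code has to be requested at the compile step, not just the link step; a minimal sketch (file names hypothetical):

  # -fPIC must be present when the .o files are produced
  gfortran -fPIC -c mycode.f90
  # then link the objects into the shared library
  gfortran -shared -o libmycode.so mycode.o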
2005 Nov 09
2
Filers, filesystems, etc.
On Tue, Nov 08, 2005 at 08:04:54AM -0800, Bryan J. Smith wrote: > > > NAS offers safe concurrent access (generally, there might be > > some NAS devices out there that do not). NAS device will > > manage file system internally, and export it over NFS or > > SMB protocols to the clients. > > Such NAS' are a combined host+storage aka
2017 Jun 02
2
Slow write times to gluster disk
Are you sure using conv=sync is what you want? I normally use conv=fdatasync, I'll look up the difference between the two and see if it affects your test. -b ----- Original Message ----- > From: "Pat Haley" <phaley at mit.edu> > To: "Pranith Kumar Karampuri" <pkarampu at redhat.com> > Cc: "Ravishankar N" <ravishankar at redhat.com>,
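The two flags measure different things, so the distinction matters for a benchmark (file name hypothetical):

  # conv=sync pads every input block with zeros up to the block size,
  # which inflates the amount written and distorts throughput numbers
  dd if=/dev/zero of=/mnt/gluster/test bs=1M count=100 conv=sync
  # conv=fdatasync writes normally, then calls fdatasync() once at the end,
  # so the timing includes flushing the data to disk
  dd if=/dev/zero of=/mnt/gluster/test bs=1M count=100 conv=fdatasync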
2017 Jun 23
2
Slow write times to gluster disk
On Fri, Jun 23, 2017 at 2:23 AM, Pat Haley <phaley at mit.edu> wrote: > > Hi, > > Today we experimented with some of the FUSE options that we found in the > list. > > Changing these options had no effect: > > gluster volume set test-volume performance.cache-max-file-size 2MB > gluster volume set test-volume performance.cache-refresh-timeout 4 > gluster
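To confirm which options actually took effect, or to back one out, the usual commands are (a sketch, with the volume name taken from the post):

  # Reconfigured options are listed at the end of the info output
  gluster volume info test-volume
  # Return a single option to its default
  gluster volume reset test-volume performance.cache-refresh-timeout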
2017 Jun 20
2
Slow write times to gluster disk
Hi Ben, Sorry this took so long, but we had a real-time forecasting exercise last week and I could only get to this now. Backend Hardware/OS: * Much of the information on our back end system is included at the top of http://lists.gluster.org/pipermail/gluster-users/2017-April/030529.html * The specific model of the hard disks is Seagate ENTERPRISE CAPACITY V.4 6TB
2017 Jun 26
3
Slow write times to gluster disk
Hi All, Decided to try another test of gluster mounted via FUSE vs gluster mounted via NFS, this time using the software we run in production (i.e. our ocean model writing a netCDF file). Gluster mounted via NFS: the run took 2.3 hr. Gluster mounted via FUSE: the run took 44.2 hr. The only problem with using gluster mounted via NFS is that it does not respect the group write permissions which
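For anyone reproducing the comparison, the two mounts differ only in the client stack (server name and mount points hypothetical; Gluster's built-in NFS server speaks NFSv3):

  # NFS client path
  mount -t nfs -o vers=3,tcp gluster-server:/test-volume /mnt/nfs
  # Native FUSE client path
  mount -t glusterfs gluster-server:/test-volume /mnt/fuse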
2017 Jun 12
0
Slow write times to gluster disk
Hi Guys, I was wondering what our next steps should be to solve the slow write times. Recently I was debugging a large code and writing a lot of output at every time step. When I tried writing to our gluster disks, it was taking over a day to do a single time step whereas if I had the same program (same hardware, network) write to our nfs disk the time per time-step was about 45 minutes.
2017 Jun 24
0
Slow write times to gluster disk
On Fri, Jun 23, 2017 at 9:10 AM, Pranith Kumar Karampuri < pkarampu at redhat.com> wrote: > > > On Fri, Jun 23, 2017 at 2:23 AM, Pat Haley <phaley at mit.edu> wrote: > >> >> Hi, >> >> Today we experimented with some of the FUSE options that we found in the >> list. >> >> Changing these options had no effect: >> >>