search for: landman

Displaying 20 results from an estimated 26 matches for "landman".

2012 Feb 26
1
"Structure needs cleaning" error
Hi, We have recently upgraded our gluster to 3.2.5 and have encountered the following error. Gluster seems somehow confused about one of the files it should be serving up, specifically /projects/philex/PE/2010/Oct18/arch07/BalbacFull_250_200_03Mar_3.png If I go to that directory and simply do an ls *.png I get ls: BalbacFull_250_200_03Mar_3.png: Structure needs cleaning (along with a listing
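"Structure needs cleaning" is the kernel's EUCLEAN error and usually points at corruption in the backend filesystem of whichever brick stores that file, rather than at Gluster itself. A minimal diagnostic sketch, assuming a hypothetical brick at /export/brick1 with an XFS backend (volume name, brick path and device are placeholders, not from the thread):
  gluster volume info philex          # placeholder volume name; lists the bricks
  # on the brick host holding the file, stat it directly on the backend:
  stat /export/brick1/projects/philex/PE/2010/Oct18/arch07/BalbacFull_250_200_03Mar_3.png
  # if the backend is XFS, repair offline (stop glusterfsd / unmount first):
  umount /export/brick1
  xfs_repair /dev/sdX1                # replace with the brick's real device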
2011 May 06
2
Best practice to stop the Gluster CLIENT process?
Hi all! What's the best way to stop the CLIENT process for Gluster? We have dual systems, where the Gluster servers also act as clients, so both, glusterd and glusterfsd are running on the system. Stopping the server app. works via "/etc/init.d/glusterd stop" but the client is stopped how? I need to unmount the filesystem from the server in order to do a fsck on the ext4 volume;
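The client side is just a FUSE mount plus a glusterfs process, so there is no separate client init script to stop. A minimal sketch of one common ordering, assuming a client mount at /mnt/gluster and an ext4 brick on /dev/sdb1 mounted at /export/brick (all placeholder names):
  umount /mnt/gluster             # drop the FUSE client mount first
  /etc/init.d/glusterd stop       # stop the management daemon, as in the post
  killall glusterfsd glusterfs    # stop any remaining brick/client processes
  umount /export/brick            # the ext4 volume is now free...
  fsck.ext4 -f /dev/sdb1          # ...and can be checked offline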
2007 Nov 08
6
question about backup regimens
...ffsite storage. I was thinking of system backups weekly and differential backups nightly but don't know what software to recommend for the differential b/u's. For full backups I can just schedule a tar and compress using cron. Any recommendations would be appreciated. Marty -- Marty Landman, Face 2 Interface Inc. 845-679-9387 Drupal Development Blog: http://drupal.face2interface.com/ Free Database Search App: http://face2interface.com/Products/FormATable.shtml
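For the differential/incremental side, GNU tar's --listed-incremental option needs no extra software. An illustrative cron sketch with made-up paths and times; note that letting every nightly run update the same snapshot file produces incrementals, while copying the full backup's snapshot file back before each nightly run produces true differentials:
  # weekly full (Sunday 02:00): start a fresh snapshot chain
  0 2 * * 0  rm -f /backup/data.snar && tar -czf /backup/full-$(date +\%F).tar.gz --listed-incremental=/backup/data.snar /data
  # nightly (Mon-Sat 02:00): back up changes recorded against that snapshot
  0 2 * * 1-6  tar -czf /backup/inc-$(date +\%F).tar.gz --listed-incremental=/backup/data.snar /data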
2011 Nov 05
1
glusterfs over rdma ... not.
OK - finished some tests over tcp and ironed out a lot of problems. rdma is next; should be a snap now.... [I must admit that this is my 1st foray into the land of IB, so some of the following may be obvious to a non-naive admin..] except that while I can create and start the volume with rdma as transport: ================================== root@pbs3:~ 622 $ gluster volume info glrdma
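For reference, a minimal sketch of an RDMA-transport volume and mount on a 3.x-era CLI; only pbs3 and the volume name glrdma come from the post, the second host and brick paths are placeholders:
  gluster volume create glrdma transport rdma pbs3:/data/brick1 pbs4:/data/brick1
  gluster volume start glrdma
  gluster volume info glrdma                      # should report Transport-type: rdma
  # on a client, mount explicitly over rdma:
  mount -t glusterfs -o transport=rdma pbs3:/glrdma /mnt/glrdma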
2005 Oct 02
1
question on firewire support and centosplus
...the package support, or major kernel bugfixes. The reason I am curious about firewire is that I would like to use the firewire interface rather than the USB interface for our backup drives. It's not much faster, just fewer context switches (e.g. lower server load). Thanks. Joe -- Joe Landman landman@scalableinformatics.com http://www.scalableinformatics.com
2011 Jul 25
1
3.0.5 RDMA clients seem broken
...all the files. More disconcerting was that several clients saw radically different numbers of files. Just a note ... if you use 3.0.5, it looks like rdma is broken. Caching is (badly) broken there as well, but that's a whole other story. n.b. we are updating them to 3.2.2 tomorrow. -- Joseph Landman, Ph.D Founder and CEO Scalable Informatics, Inc. email: landman@scalableinformatics.com web : http://scalableinformatics.com http://scalableinformatics.com/sicluster phone: +1 734 786 8423 x121 fax : +1 866 888 3112 cell : +1 734 612 4615
2011 Oct 20
1
tons and tons of clients, oh my!
Hello gluster-verse! I'm about to see if GlusterFS can handle a large number of clients; this was not at all in the plans when we initially selected and set up our current configuration. What sort of experience do you (the collective "you" as in y'all) have with a large client to storage brick server ratio? (~1330:1) Where do you see things going awry? Most of this will be reads
2012 Nov 02
8
Very slow directory listing and high CPU usage on replicated volume
Hi all, I am having problems with painfully slow directory listings on a freshly created replicated volume. The configuration is as follows: 2 nodes with 3 replicated drives each. The total volume capacity is 5.6T. We would like to expand the storage capacity much more, but first we need to figure this problem out. Soon after loading up about 100 MB of small files (about 300kb each), the
2005 Oct 10
2
Compile error
A bit off topic, but having trouble again with the 64-bit libs when trying to compile this short fortran code to build a shared object. Can anyone offer an explanation of what is meant by the relocation bit, and how to fix it? I've tried the -fPIC, but apparently I either didn't have it in the right place or it does not work. Be glad to take this offline if someone could help me.
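The relocation message generally means objects going into the shared library were not built as position-independent code; -fPIC belongs on the compile step of every object (and of any static library pulled in), not just on the link. A hedged sketch with gfortran and placeholder file names:
  gfortran -fPIC -c mysub.f90 -o mysub.o     # compile as position-independent code
  gfortran -shared -o libmysub.so mysub.o    # then link the PIC objects into the .so
  # linking in a static .a that was built without -fPIC brings the error back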
2010 Nov 13
3
Gluster At SC10 ?
Howdy, are any of the Gluster folks going to SC10 next week? Mike
2012 Sep 25
2
GlusterFS performance
GlusterFS newbie (less than a week) here. Running GlusterFS 3.2.6 servers on Dell PE2900 systems with four 3.16 GHz Xeon cores and 16 GB memory under CentOS 5.8. For this test, I have a distributed volume of one brick only, so no replication. I have made performance measurements with both dd and Bonnie++, and they confirm each other; here I report only the dd numbers (using bs=1024k). File
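A sketch of the kind of dd run described, with a made-up mount point and file size; conv=fdatasync keeps the write number honest, and dropping caches (as root) keeps the read pass from being served out of page cache:
  dd if=/dev/zero of=/mnt/gluster/ddtest bs=1024k count=8192 conv=fdatasync   # ~8 GB write
  echo 3 > /proc/sys/vm/drop_caches                                           # as root, before reading back
  dd if=/mnt/gluster/ddtest of=/dev/null bs=1024k                             # read test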
2011 Jan 19
2
tuning gluster performance vs. nfs
I have been tweaking and researching for a while now and can't seem to get "good" performance out of Gluster. I'm using Gluster to replace an NFS server (c1.xlarge) that serves files to an array of web servers, all in EC2. In my tests Gluster is significantly slower than NFS on average. I'm using a distributed replicated volume on two (m1.large) bricks: Volume Name: ebs
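One low-effort comparison, since 3.x volumes are also reachable over Gluster's built-in NFS server, is to mount the same volume both ways and run the web workload against each; the hostname, volume name and mount points below are placeholders:
  mount -t glusterfs gfs1:/webvol /mnt/native            # native FUSE client
  mount -t nfs -o vers=3,tcp gfs1:/webvol /mnt/nfs       # same volume via the built-in NFS server
  # small-file and metadata-heavy reads are usually where the FUSE client lags NFS most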
2011 Oct 17
1
Need help with optimizing GlusterFS for Apache
Our webserver is configured as such: the actual website files (php, html, css and so on) are on a dedicated non-glusterfs ext4 partition. However, the website accesses videos and especially image files on a gluster-mounted directory. The write performance for our backend gluster storage is not that important, since it only comes into play when someone uploads a video or image. However, the files
2005 Nov 09
2
Filers, filesystems, etc.
On Tue, Nov 08, 2005 at 08:04:54AM -0800, Bryan J. Smith wrote: > > > NAS offers safe concurrent access (generally, there might be > > some NAS devices out there that do not). NAS device will > > manage file system internally, and export it over NFS or > > SMB protocols to the clients. > > Such NAS' are a combined host+storage aka
2005 Oct 31
4
Best mkfs.ext2 performance options on RAID5 in CentOS 4.2
I can't seem to get the read and write performance better than approximately 40MB/s on an ext2 file system. IMO, this is horrible performance for a 6-drive, hardware RAID 5 array. Please have a look at what I'm doing and let me know if anybody has any suggestions on how to improve the performance... System specs: ----------------- 2 x 2.8GHz Xeons 6GB RAM 1 3ware 9500S-12 2 x 6-drive,
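One thing worth ruling out is alignment between the filesystem and the array; ext2 can be told the RAID geometry at mkfs time. A hedged sketch assuming a 64 KB controller chunk, 4 KB blocks and 5 data disks in the 6-drive RAID5 (the actual 3ware chunk size isn't given in the post, and stripe-width requires a newer e2fsprogs than CentOS 4.2 shipped):
  # stride = chunk/block = 64K/4K = 16; stripe-width = stride * 5 data disks = 80
  mkfs.ext2 -b 4096 -E stride=16 /dev/sda1        # device is a placeholder; older mke2fs uses -R stride=16
  # with a newer e2fsprogs: mkfs.ext2 -b 4096 -E stride=16,stripe-width=80 /dev/sda1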
2011 Oct 26
2
Some questions about theoretical gluster failures.
We're considering implementing gluster for a genomics cluster, and it seems to have some theoretical advantages that so far seem to have been borne out in some limited testing, mod some odd problems with an inability to delete dir trees. I'm about to test with the latest beta that was promised to clear up these bugs, but as I'm doing that, answers to these Qs would be
2005 Dec 04
5
CentOS and Dell Support
I'm a fairly experienced RH and Fedora user and admin looking to try CentOS for the first time. I have lots of experience with Dell servers and I'd like to stick with them. Although I'm sure it's not always strictly enforced, Dell claims that it won't provide warranty hardware support on servers installed with an un-Dell-supported OS (basically, anything other than Windows, RH, and
2012 Jan 22
2
Best practices?
Suppose I start building nodes with (say) 24 drives each in them. Would the standard/recommended approach be to make each drive its own filesystem, and export 24 separate bricks, server1:/data1 .. server1:/data24 ? Making a distributed replicated volume between this and another server would then have to list all 48 drives individually. At the other extreme, I could put all 24 drives into some
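For concreteness, a sketch of the two extremes being weighed; the volume name and brick paths are placeholders, and with "replica 2" the bricks pair up in the order listed:
  # one brick per drive (replica pairs across the two servers); only the first
  # two of the 24 pairs are shown here:
  gluster volume create bigvol replica 2 \
      server1:/data1 server2:/data1 \
      server1:/data2 server2:/data2
  # the other extreme: aggregate the 24 drives locally (md/LVM) into one
  # filesystem per server and export a single brick from each:
  gluster volume create bigvol replica 2 server1:/bigbrick server2:/bigbrick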
2012 Jun 20
2
How Fatal? "Server and Client lk-version numbers are not same, reopening the fds"
Despite Joe Landman's sage advice to the contrary, I'm trying to convince an IPoIB volume to service requests from a GbE client via some /etc/hosts manipulation. (This may or may not be related to the automount problems we're having as well.) This has worked (and continues to work) well on another cluster...
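The /etc/hosts manipulation in question is just overriding name resolution on the GbE-only client so the storage hostnames resolve to their GbE addresses rather than the IPoIB ones; every name and address below is made up:
  # /etc/hosts on the GbE client -- cluster DNS would normally hand back 192.168.1.x (IPoIB)
  10.0.1.11   storage01
  10.0.1.12   storage02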
2012 Feb 23
1
default cluster.stripe-block-size for striped volumes on 3.0.x vs 3.3 beta (128kb), performance change if i reduce to a smaller block size?
Hi, I've been migrating data from an old striped 3.0.x gluster install to a 3.3 beta install. I copied all the data to a regular XFS partition (4K blocksize) from the old gluster striped volume and it totaled 9.2TB. With the old setup I used the following option in a "volume stripe" block in the configuration file in a client : volume stripe type cluster/stripe option
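On the 3.3-style CLI the stripe block size referred to in the subject is set as a volume option rather than in a hand-edited client volfile; a minimal sketch with a placeholder volume name:
  gluster volume set myvol cluster.stripe-block-size 128KB
  gluster volume info myvol          # shows it under Options Reconfigured once set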