search for: mozone

Displaying 6 results from an estimated 6 matches for "mozone".

2010 Apr 23
1
client mount fails on boot under debian lenny...
Hi. Is there a clean way to ensure that a glusterfs mount point specified in /etc/fstab is mounted automatically on boot under Debian lenny when referencing a remote node for the volfile? In my test case, every time I reboot, the system tries to mount the filesystem before the backend 10GbE interface comes up, so it gets a "No route to host" and immediately aborts. I know I can dump a mount
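A rough sketch of the kind of fstab entry being discussed, assuming the volfile server is 10.10.10.11 and the mount point is /storage (both hypothetical here, and the exact device syntax depends on the glusterfs release). The _netdev option asks the init scripts to defer the mount until networking is reported up; on lenny, a late if-up.d hook that re-runs "mount -a -t glusterfs" is sometimes still needed when the 10GbE interface is slow to appear.

    # /etc/fstab -- illustrative entry, not taken from the thread
    10.10.10.11:/storage  /storage  glusterfs  defaults,_netdev  0  0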
2009 Jun 24
0
Gluster-users Digest, Vol 14, Issue 34
...luster.org You can reach the person managing the list at gluster-users-owner at gluster.org When replying, please edit your Subject line so it is more specific than "Re: Contents of Gluster-users digest..." Today's Topics: 1. Re: bailout after period of inactivity (mki-glusterfs at mozone.net) 2. AFR problem (maurizio oggiano) ---------------------------------------------------------------------- Message: 1 Date: Tue, 23 Jun 2009 05:48:11 -0700 From: mki-glusterfs at mozone.net Subject: Re: [Gluster-users] bailout after period of inactivity To: Vikas Gorur <vikas at gluster.co...
2009 Jun 24
2
Limit of Glusterfs help
...luster.org You can reach the person managing the list at gluster-users-owner at gluster.org When replying, please edit your Subject line so it is more specific than "Re: Contents of Gluster-users digest..." Today's Topics: 1. Re: bailout after period of inactivity (mki-glusterfs at mozone.net) 2. AFR problem (maurizio oggiano) ---------------------------------------------------------------------- Message: 1 Date: Tue, 23 Jun 2009 05:48:11 -0700 From: mki-glusterfs at mozone.net Subject: Re: [Gluster-users] bailout after period of inactivity To: Vikas Gorur <vikas at gluster.co...
2011 Jun 03
2
adding new bricks for volume expansion with 3.0.x?
Hi. How does one go about expanding a volume that consists of a distribute-replicate set of machines in 3.0.6? The setup consists of 4 pairs of machines, with 3 bricks per machine. I need to add an additional 5 pairs of machines (15 bricks) to the volume, but I don't understand what's required per se. There are currently 4 client machines mounting the volume using the
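A rough sketch of the 3.0-era workflow, under the assumption that the volume was originally built with glusterfs-volgen (host names, export paths, and the exact flags below are illustrative, not taken from the thread): regenerate the volfiles with the old and the new bricks listed together, copy them to every server and client, restart the server-side glusterfsd processes, and remount the clients.

    # mirrored pairs are formed in the order the bricks are listed;
    # distribute is then layered across the resulting pairs
    glusterfs-volgen --name storage --raid 1 \
        oldnode1:/export/brick1 oldnode2:/export/brick1 \
        newnode1:/export/brick1 newnode2:/export/brick1

Existing files are not redistributed onto the new bricks automatically in 3.0.x; new files simply start landing there, and any rebalancing has to be done with whatever defrag scripts ship with that release (or by moving to 3.1+, where add-brick and rebalance are CLI operations).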
2009 Oct 11
1
change fuse max_read= mount option from 128k to something bigger?
Hi. Are there any caveats to changing the max_read fuse mount option that's hardcoded into xlators/mount/fuse/src/fuse-bridge.c to something other than 128k? On my client test system, cat /proc/mounts shows: fusectl /sys/fs/fuse/connections fusectl rw,relatime 0 0 10.10.10.11 /storage fuse.glusterfs rw,relatime,user_id=0,group_id=0,default_permissions,allow_other,max_read=131072 0 0 This
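For reference, 131072 bytes is 128 KiB. A simple sanity check after rebuilding the client with a different value and remounting is to look at the mount options again; the /storage mount point below is just the one quoted above.

    # confirm what max_read the remounted client actually ended up with
    grep /storage /proc/mounts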
2011 Jun 14
0
read-ahead performance translator tweaking with 3.2.1?
Hi. Is there a way to tweak the read-ahead settings via the gluster command line? For example: gluster volume set somevolumename performance.read-ahead 2 Or is this no longer feasible? With read-ahead set to the default of 8, as was the case with the standard volgen-generated configs, the number of useless reads happening to the bricks is way too high, and on 1 GbE interconnects causes saturation
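A hedged sketch of the CLI route being asked about, assuming a release whose volume-set table exposes these options (the volume name is hypothetical, and whether the page-count knob is settable in 3.2.1 specifically would need checking against that release):

    # turn the read-ahead translator off entirely for the volume
    gluster volume set somevolume performance.read-ahead off

    # or, if this release exposes it, lower the read-ahead window instead
    gluster volume set somevolume performance.read-ahead-page-count 2

    # review the options now set on the volume
    gluster volume info somevolume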