similar to: Per-directory brick preference?

Displaying 20 results from an estimated 1000 matches similar to: "Per-directory brick preference?"

2005 Jun 24
4
File System Size Limits?
Is there some limit on the size of a file system which can be shared via samba? I'm trying to set up a file server with a 100GB shared partition and it doesn't want to work. I'm running Fedora Core 4, and Samba Version 3.0.14a-2. The output from testparm looks like this: [root@stitch samba]# testparm Load smb config files from
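Samba 3.x itself has no limit anywhere near 100GB, so the problem is almost certainly in the share definition or the mount rather than the size. As a first check, a sketch along these lines (share and user names are hypothetical) verifies that the config parses and that a client sees the expected free space:

  # dump the parsed configuration without waiting for a keypress
  testparm -s
  # connect to the share and report disk usage as a client sees it
  smbclient //stitch/bigshare -U someuser -c 'du'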
2007 Mar 05
1
Missing blocks
Hopefully this is a simple issue or just my ignorance about the results returned by "df -k", but can anyone explain why the available blocks are 0 if total 1K-blocks minus Used is greater than 0? #df -k /ems/bigdisk/ Filesystem 1K-blocks Used Available Use% Mounted on /dev/mapper/vg0-bigdisk 397367512 383562960 0 100% /<name> Filesystem volume
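The likely answer is the ext2/ext3 reserved-blocks percentage: by default 5% of the filesystem is reserved for root, and df subtracts it from Available. Here 397367512 - 383562960 leaves about 13.8GB free, which is less than the default 5% reservation (roughly 19.8GB), so Available floors at 0. A sketch for confirming and, if appropriate, shrinking the reservation:

  # show the reserved (root-only) block count that df excludes from Available
  tune2fs -l /dev/mapper/vg0-bigdisk | grep -i 'reserved block count'
  # on a data-only filesystem, a 1% reservation may be plenty
  tune2fs -m 1 /dev/mapper/vg0-bigdisk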
2007 Apr 03
6
How do I use "mount"?
Ok, so I'm obviously doing something wrong here. This is running puppet 0.22.2 on a CentOS 4 update 4 box. When I try running this test - mount { bigdisk: ensure => mounted, device => 'bigserver:/bigdisk', fstype => nfs, name => '/bigdisk', dump => "0", pass => "0", options =>
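For reference, a complete version of that resource as it would look in Puppet (the truncated options value is unknown; 'defaults' below is only a placeholder). Using the mount point as the resource title makes the separate name parameter unnecessary:

  mount { '/bigdisk':
    ensure  => mounted,
    device  => 'bigserver:/bigdisk',
    fstype  => 'nfs',
    options => 'defaults',  # placeholder; the original value is cut off
    dump    => '0',
    pass    => '0',
  }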
2009 Mar 29
2
ssh & rsync equivalence?
Hello Folks Can anyone help with why I can use ssh like so: [root@morgansoldmachine ~]# ssh -t rsync@192.168.1.40 sudo ssh -i /home/rsync/.ssh/id_dsa root@192.168.1.100 Last login: Tue Mar 24 21:32:51 2009 from morgansmachine.lan [root@morgansoldmachine ~]# logout Connection to 192.168.1.100 closed. Connection to 192.168.1.40 closed. [root@morgansoldmachine ~]# But, when I use the same
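The interactive chain works because ssh -t allocates a pseudo-tty, but that same tty is what usually breaks the rsync case: rsync's binary protocol does not survive tty line handling. A sketch of the equivalent rsync invocation (paths are hypothetical), assuming sudo on the hop host does not insist on a tty:

  # no -t: rsync appends the final hostname and its own command to -e
  rsync -av -e 'ssh rsync@192.168.1.40 sudo ssh -i /home/rsync/.ssh/id_dsa' \
      root@192.168.1.100:/some/src/ /some/dest/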
2002 Feb 07
3
Promise TX2 and ATARAID... kjournald and kupdated seem to fight it out for resources
What is really puzzling me is that the ataraid device (/dev/ataraid/d0p1, mounted as /bigdisk, 2x 123GB IBM Deskstars) is formatted as ext2... top and ps etc. show a fight between kjournald and kupdated, and all searches for kjournald or kupdated fights suggest ext3 issues... :-) It's a RedHat 7.2, 2.4.7-10 custom kernel (I added in the Promise FastTrak support and HIGH mem support to
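kjournald only services journaled (ext3) filesystems, so if /bigdisk really is ext2 the journal traffic must belong to another filesystem, most likely the root fs. A quick check, assuming the device name from the post:

  # ext3 lists "has_journal" among its features; plain ext2 does not
  tune2fs -l /dev/ataraid/d0p1 | grep -i features
  # see which mounted filesystems are ext2 vs ext3
  cat /proc/mounts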
2009 Mar 21
1
Trouble with -e ...
Hello folks Can anyone help with why: [root@morgansoldmachine ~]# ssh -t rsync@morgansmachine sudo ssh -i /home/rsync/.ssh/id_dsa root@morgansoldmachine Last login: Sun Mar 22 10:55:41 2009 from morgansmachine.lan [root@morgansoldmachine ~]# logout Connection to morgansoldmachine closed. Connection to morgansmachine closed. [root@morgansoldmachine ~]# works, but yet: [root@morgansoldmachine ~]#
2004 Jun 01
1
Unexplained error (code 24)
Hi all, While trying to mirror a filesystem from one machine to another (for backup purposes) I get the following error: [root@samantha root]# /usr/bin/rsync -qavxzC --delete chandler:/var/ /bigdisk/backup/chandler/dev-md5-var/ root@chandler's password: rsync error: unexplained error (code 24) at main.c(1045) Does this mean anything to anybody? Regards, Graham --
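Code 24 is in fact defined: "partial transfer due to vanished source files". rsync releases of that era simply lacked the message string for it, hence "unexplained error". When mirroring a live /var it is usually harmless; a wrapper sketch that treats it as a warning:

  rsync -qavxzC --delete chandler:/var/ /bigdisk/backup/chandler/dev-md5-var/
  rc=$?
  # 24 = some source files disappeared mid-transfer (normal on a live /var)
  [ "$rc" -eq 24 ] && rc=0
  exit $rc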
2002 Jul 30
2
Rsync recursion
Hello, I'm trying to break up my rsync process by separating a directory tree into multiple rsync processes because I'm witnessing some errors trying to rsync large directory trees on Windows machines. After breaking up the tree I tried to rsync each individual directory starting from the bottom directory on up using the command: foreach ($array as $directory){ /* $array = list of
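The snippet is PHP, but the per-directory idea itself is simple. A shell sketch of the same approach (source tree and destination host are hypothetical):

  # one rsync per top-level directory, so an error in one subtree
  # does not abort the whole transfer
  for d in /srv/data/*/; do
      rsync -a "$d" "backuphost:/backup/$(basename "$d")/"
  done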
2011 Mar 21
0
permissions changed by rsync over nfs?
Hello, wonderful rsync! I have a little problem... I sync a file system with this command: rsync -avzAXH --filter="-r *.jpg *.opml *.opml.backup *.m3u" --delete-after --exclude=MP3s /home/Music/ /misc/bigdisk.mythtv.lan/Music /misc/bigdisk.mythtv.lan is an nfs mount mounted by autofs. In /etc/auto.misc I have for /misc/bigdisk.mythtv.lan: bigdisk.mythtv.lan -fstype=nfs4
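Two things stand out. First, -A (ACLs) and -X (xattrs) are frequent sources of permission surprises on NFS targets, where nfs4 id-mapping and missing xattr support get in the way; trying without them is a cheap test. Second, a filter rule takes exactly one pattern, so the four patterns above collapse into a single space-containing pattern that matches nothing. A sketch with both fixed:

  rsync -avzH --delete-after --exclude=MP3s \
      --filter='-r *.jpg' --filter='-r *.opml' \
      --filter='-r *.opml.backup' --filter='-r *.m3u' \
      /home/Music/ /misc/bigdisk.mythtv.lan/Music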
2001 Mar 07
1
RH 6.2 + VA Linux Enhancements (includes ext3 0.5b) Problem
Hi, [Background] I've been trying (unsuccessfully) to get a reliable RH linux distribution installed on my Intel machine with journaling on my large disks (not interested in journaling the root fs). I've tried using ReiserFS and eventually had some success, but it would seem that they are more in bed with Suse. It would appear that RedHat has chosen ext3 as its current journaling
2011 Apr 22
1
rebalancing after remove-brick
Hello, I'm having trouble migrating data from one removed replica set to another active one in a distributed-replicated volume. My test scenario is the following:
- create set (A)
- create a bunch of files on it
- add another set (B)
- rebalance (works fine)
- remove-brick A
- rebalance (doesn't rebalance; ran on one brick in each set)
The doc seems to imply that it is possible to remove
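For what it's worth, later Gluster releases fold the data migration into remove-brick itself rather than relying on a separate rebalance; a sketch with hypothetical volume and brick names:

  # start draining data off the departing replica set
  gluster volume remove-brick VOLNAME serverA1:/brick serverA2:/brick start
  # watch progress, then finalize once status shows completed
  gluster volume remove-brick VOLNAME serverA1:/brick serverA2:/brick status
  gluster volume remove-brick VOLNAME serverA1:/brick serverA2:/brick commit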
2014 Feb 08
1
Samba 3 to 4 AD migration - extensive permissions problems
Finally biting the bullet and upgrading home machines to Windows 7 but experiencing many problems. Server is a Debian Lenny, old Samba 3.2.5, new Samba 4.1.4 built from source. My setup has been doing roaming profiles for XP since 2003 or so with almost no changes. I want to keep roaming profiles going plus do some folder redirection (Desktop (my wife doesn't believe in file shares for
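For the migration step itself, the supported path from an NT4-style Samba 3 domain to Samba 4 AD is samba-tool's classicupgrade; a sketch with hypothetical paths and realm:

  samba-tool domain classicupgrade --dbdir=/var/lib/samba3/private \
      --use-xattrs=yes --realm=EXAMPLE.LAN /etc/samba/smb.conf.samba3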
2019 Jun 12
1
Proper command for replace-brick on distribute-replicate?
On 12/06/19 1:38 PM, Alan Orth wrote: > Dear Ravi, > > Thanks for the confirmation. I replaced a brick in a volume last night > and by the morning I see that Gluster has replicated data there, > though I don't have any indication of its progress. The `gluster v > heal volume info` and `gluster v heal volume info split-brain` are all > looking good so I guess that's
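For reference, the usual command shape on a replicated volume, where the new brick is then populated by self-heal (names hypothetical):

  gluster volume replace-brick VOLNAME \
      oldserver:/bricks/b1 newserver:/bricks/b1 commit force
  # watch healing catch up on the new brick
  gluster volume heal VOLNAME info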
2018 Feb 02
1
How to trigger a resync of a newly replaced empty brick in replicate config ?
Hi, I simplified the config in my first email, but I actually have 2x4 servers in replicate-distribute, with 4 bricks each for 6 of them and 2 bricks for the remaining 2. Full healing will just take ages... for just a single brick to resync! > gluster v status home volume status home Status of volume: home Gluster process TCP Port RDMA Port Online Pid
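A sketch of the lighter-weight alternative: an index heal only visits entries flagged as pending, so it should not crawl the whole 2x4 volume the way a full heal does:

  # pending-heal counts per brick; the replaced brick should show entries
  gluster volume heal home info
  # trigger an index heal (flagged entries only, not a full crawl)
  gluster volume heal home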
2011 Aug 07
1
Using volumes during fix-layout after add/remove-brick
Hello All- I regularly increase the size of volumes using "add-brick" followed by "rebalance VOLNAME fix-layout". I usually allow normal use of an expanded volume (i.e. reading and writing files) to continue while "fix-layout" is taking place, and I have not experienced any apparent problems as a result. The documentation does not say that volumes cannot be used
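For reference, the sequence being described (volume and brick names hypothetical). fix-layout only rewrites directory layout ranges so new files can hash onto the new bricks; it moves no existing data, which is consistent with concurrent use being unproblematic:

  gluster volume add-brick VOLNAME newserver:/bricks/b1
  gluster volume rebalance VOLNAME fix-layout start
  gluster volume rebalance VOLNAME status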
2018 Apr 04
2
Expand distributed replicated volume with new set of smaller bricks
We currently have a 3-node gluster setup, each with a 100TB brick (total 300TB, usable 100TB due to replica factor 3). We would like to expand the existing volume by adding another 3 nodes, but each will only have a 50TB brick. I think this is possible, but will it affect gluster performance, and if so, by how much? Assuming we run a rebalance with the force option, will this distribute the existing data
2017 Jun 06
1
Files Missing on Client Side; Still available on bricks
Hello, I am still working on recovering from a few failed OS hard drives on my gluster storage and have been removing and re-adding bricks quite a bit. I noticed yesterday night that some of the directories are not visible when I access them through the client, but are still on the brick. For example: Client: # ls /scratch/dw Ethiopian_imputation HGDP Rolwaling Tibetan_Alignment Brick: #
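A sketch of the usual first steps, assuming a hypothetical volume name: a fresh lookup from the client can re-trigger DHT's directory self-heal, and after heavy brick churn a fix-layout brings directory layouts back in line with the current brick set:

  # from the client mount: force a fresh lookup of the affected directory
  stat /scratch/dw && ls -l /scratch/dw
  # repair directory layouts across the current brick set
  gluster volume rebalance VOLNAME fix-layout start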
2017 Nov 21
1
Brick and Subvolume Info
Hello I have a Distributed-Replicate volume and I would like to know if it is possible to see what sub-volume a brick belongs to, e.g.: A Distributed-Replicate volume containing: Number of Bricks: 2 x 2 = 4 Brick1: node1.localdomain:/mnt/data1/brick1 Brick2: node2.localdomain:/mnt/data1/brick1 Brick3: node1.localdomain:/mnt/data2/brick2 Brick4: node2.localdomain:/mnt/data2/brick2 Is it possible
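Yes: bricks form subvolumes in listing order. With "Number of Bricks: 2 x 2 = 4", Brick1+Brick2 make up one replica pair and Brick3+Brick4 the other. The grouping is also spelled out in the generated client volfile; a sketch (the volfile path varies slightly by version):

  gluster volume info VOLNAME
  # each 'cluster/replicate' stanza lists the bricks of one subvolume
  grep -A4 'type cluster/replicate' \
      /var/lib/glusterd/vols/VOLNAME/trusted-VOLNAME.tcp-fuse.vol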
2011 Aug 01
1
[Gluster 3.2.1] Replication issues on a two-brick volume
Hello, I installed GlusterFS one month ago, and replication has many issues. First of all, our infrastructure: 2 storage arrays of 8TB in replication mode... We have our backup files on these arrays, so 6TB of data. I want to replicate the data to the second storage array, so I use this command: # gluster volume rebalance REP_SVG migrate-data start And Gluster started to replicate; in 2 weeks
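One issue up front: "rebalance ... migrate-data" is a distribute-volume operation. On a pure two-brick replica, the second copy is produced by AFR self-heal, which Gluster 3.2 triggers from a client-side crawl (mount point hypothetical):

  # stat every file through a client mount to kick off self-heal
  find /mnt/REP_SVG -noleaf -print0 | xargs --null stat > /dev/null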
2018 Apr 04
0
Expand distributed replicated volume with new set of smaller bricks
Hi, Yes this is possible. Make sure you have cluster.weighted-rebalance enabled for the volume and run rebalance with the start force option. Which version of gluster are you running (we fixed a bug around this a while ago)? Regards, Nithya On 4 April 2018 at 11:36, Anh Vo <vtqanh at gmail.com> wrote: > We currently have a 3 node gluster setup each has a 100TB brick (total > 300TB,
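As commands, the advice above looks roughly like this (volume name hypothetical):

  gluster volume set VOLNAME cluster.weighted-rebalance on
  gluster volume rebalance VOLNAME start force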