similar to: zfs mirroring question

Displaying 20 results from an estimated 1000 matches similar to: "zfs mirroring question"

2018 Jan 09 · 2 · Bricks to sub-volume mapping
Hi Team, please let me know how I can tell which bricks are part of which sub-volumes in the case of a disperse volume; for example, the volume below has two sub-volumes: Type: Distributed-Disperse Volume ID: 6dc8ced8-27aa-4481-bfe8-057133c31d0b Status: Started Snapshot Count: 0 Number of Bricks: 2 x (4 + 2) = 12 Transport-type: tcp Bricks: Brick1: pdchyperscale1sds:/ws/disk1/ws_brick Brick2:

2018 Jan 09 · 0 · Bricks to sub-volume mapping
The first 6 bricks belong to the first sub-volume and the next 6 bricks belong to the second. On Tuesday 09 January 2018 12:11 PM, Anand Malagi wrote: > > Hi Team, > > Please let me know how I can know which bricks are part of which > sub-volumes in case of disperse volume, for example in below volume > has two sub-volumes : > > Type: Distributed-Disperse > > Volume ID:

2018 Jan 09 · 2 · Bricks to sub-volume mapping
But do we store this information somewhere as part of gluster metadata or something... Thanks and Regards, --Anand Extn : 6974 Mobile : 91 9552527199, 91 9850160173 From: Aravinda [mailto:avishwan at redhat.com] Sent: 09 January 2018 12:31 To: Anand Malagi <amalagi at commvault.com>; gluster-users at gluster.org Subject: Re: [Gluster-users] Bricks to sub-volume mapping First 6 bricks

2017 Jul 07 · 0 · [Gluster-devel] gfid and volume-id extended attributes lost
We lost the attributes on all the bricks on servers glusterfs2 and glusterfs3 again. [root at glusterfs2 Log_Files]# gluster volume info Volume Name: StoragePool Type: Distributed-Disperse Volume ID: 149e976f-4e21-451c-bf0f-f5691208531f Status: Started Number of Bricks: 20 x (2 + 1) = 60 Transport-type: tcp Bricks: Brick1: glusterfs1sds:/ws/disk1/ws_brick Brick2: glusterfs2sds:/ws/disk1/ws_brick
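One quick way to confirm the loss on a given brick (the brick path here is taken from the volume info above; run as root on the brick server) is to dump the trusted xattrs directly:

  # trusted.gfid and trusted.glusterfs.volume-id should both appear
  # in this output on a healthy brick root.
  getfattr -d -m . -e hex /ws/disk1/ws_brick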

2018 Jan 09 · 0 · Bricks to sub-volume mapping
No, we don't store the information separately, but it is easily predictable from the volume info. For example, the volume info below shows "Number of Bricks" in the following format: Number of Subvols x (Number of Data bricks + Number of Redundancy bricks) = Total Bricks. Note: sub-volumes are predictable without storing this as separate info since we do not have
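As a minimal illustration of that mapping (a sketch only; the volume name disperse-vol is an assumption), the bricks can be grouped in the order reported by gluster volume info into sets of data + redundancy bricks, which for the 2 x (4 + 2) volume above is exactly the first-6/next-6 split mentioned earlier:

  # Group bricks, in reported order, into sub-volumes of 6 (4 data + 2 redundancy).
  gluster volume info disperse-vol \
    | awk '/^Brick[0-9]+:/ { print $2 }' \
    | awk '{ printf "subvol-%d  %s\n", int((NR - 1) / 6) + 1, $0 }'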

2017 Jul 08 · 2 · [Gluster-devel] gfid and volume-id extended attributes lost
Ram, As per the code, self-heal was the only candidate which *can* do it. Could you check logs of self-heal daemon and the mount to check if there are any metadata heals on root? +Sanoj Sanoj, Is there any systemtap script we can use to detect which process is removing these xattrs? On Sat, Jul 8, 2017 at 2:58 AM, Ankireddypalle Reddy <areddy at commvault.com> wrote: >

2008 Feb 08 · 4 · Subsetting a data.frame degenerates at one column?
Greetings. At the moment, I'm applying R to some AIX 'nmon' output, trying to get a handle on some disk performance metrics. In case anyone's interested: http://docs.osg.ufl.edu/tsm/pdf/ some of them are more edifying than others. (ahem) I'm trying to develop a somewhat general framework for plotting these measures, in the hopes that it's of some use to people other

2017 Jul 10 · 0 · [Gluster-devel] gfid and volume-id extended attributes lost
@pranith, yes. We can get the pid on every removexattr call and also print the backtrace of the glusterfsd process when the xattr removal is triggered. I will write the script and reply back. On Sat, Jul 8, 2017 at 7:06 AM, Pranith Kumar Karampuri <pkarampu at redhat.com> wrote: > Ram, > As per the code, self-heal was the only candidate which *can* do > it. Could you check

2017 Jul 10 · 2 · [Gluster-devel] gfid and volume-id extended attributes lost
Please use the systemtap script (https://paste.fedoraproject.org/paste/EGDa0ErwX0LV3y-gBYpfNA) to check which process is invoking removexattr calls. It prints the pid, tid and arguments of all removexattr calls. I have checked for these fops at the protocol/client and posix translators. To run the script: 1) install systemtap and its dependencies, 2) install glusterfs-debuginfo, 3) change the path
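In case the paste expires, here is a rough stand-alone sketch along the same lines (not the script from the link): probe the removexattr family of system calls and print who is calling them.

  # Logs process name, pid and arguments for every removexattr-type syscall.
  stap -e 'probe syscall.removexattr, syscall.fremovexattr, syscall.lremovexattr {
      printf("%s(%d): %s %s\n", execname(), pid(), name, argstr)
  }'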

2017 Jul 10 · 0 · [Gluster-devel] gfid and volume-id extended attributes lost
Ram, If you see it again, you can use this. I am going to send out a patch for the code path which can lead to removal of gfid/volume-id tomorrow. On Mon, Jul 10, 2017 at 5:19 PM, Sanoj Unnikrishnan <sunnikri at redhat.com> wrote: > Please use the systemtap script(https://paste.fedoraproject.org/paste/ > EGDa0ErwX0LV3y-gBYpfNA) to check which process is invoking remove xattr

2017 Jul 10 · 2 · [Gluster-devel] gfid and volume-id extended attributes lost
Thanks for the swift turn around. Will try this out and let you know. Thanks and Regards, Ram From: Pranith Kumar Karampuri [mailto:pkarampu at redhat.com] Sent: Monday, July 10, 2017 8:31 AM To: Sanoj Unnikrishnan Cc: Ankireddypalle Reddy; Gluster Devel (gluster-devel at gluster.org); gluster-users at gluster.org Subject: Re: [Gluster-devel] gfid and volume-id extended attributes lost Ram,

2017 Jul 13 · 0 · [Gluster-devel] gfid and volume-id extended attributes lost
Ram, I sent https://review.gluster.org/17765 to fix the possibility in bulk removexattr. But I am not sure if this is indeed the reason for this issue. On Mon, Jul 10, 2017 at 6:30 PM, Ankireddypalle Reddy <areddy at commvault.com> wrote: > Thanks for the swift turn around. Will try this out and let you know. > > > > Thanks and Regards, > > Ram > > *From:*

2007 Jun 26 · 2 · NFS, nested ZFS filesystems and ownership
Hello, I'm sure there is a simple solution, but I am unable to figure this one out. Assuming I have tank/fs, tank/fs/fs1, tank/fs/fs2, and I set sharenfs=on for tank/fs (child filesystems are inheriting it as well), and I chown user:group /tank/fs, /tank/fs/fs1 and /tank/fs/fs2, I see: ls -la /tank/fs user:group . user:group fs1 user:group fs2 user:group some_other_file If I mount
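The described layout as a runnable sketch (pool, dataset and user/group names are the placeholders used in the mail):

  zfs create tank/fs
  zfs create tank/fs/fs1
  zfs create tank/fs/fs2
  zfs set sharenfs=on tank/fs                  # fs1 and fs2 inherit sharenfs
  chown user:group /tank/fs /tank/fs/fs1 /tank/fs/fs2
  ls -la /tank/fs                              # ownership looks correct locally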

2012 Jun 16 · 4 · Failing to start or create VM, cannot connect to hypervisor host
Greetings - I shut down one of my CentOS 6.2 VMs for some offline maintenance and am now unable to get it to restart. I am also unable to create and start a new VM. The host system is CentOS 6.2, fully up to date. I have been searching Google for two days and have not been successful in getting a VM to start. I have restarted libvirtd, but did not want to shut down my other two running VMs and

2013 Apr 29 · 2 · Samba 3 dynamically enable or disable share
Hello, I wonder if it is possible to dynamically enable/disable samba 3 shares. Here is my problem. On a remote server I have 4 removable hard drives, large capacity. I am not using any RAID/JBOD, so each drive is mounted individually (like /mnt/DISK1, /mnt/DISK2 etc) and each drive is individually shared, something like: [STORAGE01] path = /mnt/DISK1 Guest OK = false ... [STORAGE02]
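One possible approach, offered only as a sketch and not taken from the thread: keep all of the [STORAGEnn] stanzas in smb.conf, set "available = no" on a share whose disk has been removed, and have the running smbd pick up the change without a restart:

  # After editing smb.conf (e.g. adding "available = no" under [STORAGE01]):
  testparm -s                      # sanity-check the edited configuration
  smbcontrol smbd reload-config    # running smbd re-reads smb.conf, no restart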

2007 Sep 13 · 4 · How to delegate filesystems from different pools to non-global zone
I'm trying to add filesystems from two different pools to a zone but can't seem to find any mention of how to do this in the docs. I tried this but the second set overwrites the first one. add dataset set name=pool1/fs1 set name=pool2/fs2 end Is this possible or do I need to use different syntax? -Robert This message posted from opensolaris.org
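A sketch of the syntax that usually works for this (the zone name myzone is an assumption): give each dataset its own add/end block so the second set name no longer overwrites the first.

  # Delegate one dataset from each pool; every dataset gets its own block.
  zonecfg -z myzone "add dataset; set name=pool1/fs1; end; add dataset; set name=pool2/fs2; end"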

2018 Oct 10 · 6 · same netbios aliases on multiple servers
Hi, can I set the same netbios name on multiple servers? More precisely, why does it not work? server A [global] netbios name = FS1 netbios aliases = fs fs.example.ru server B [global] netbios name = FS2 netbios aliases = fs fs.example.ru

2019 Aug 01 · 1 · guestmount mounts gets corrupted somehow? [iscsi lvm guestmount windows filesystem rsync]
Hello everybody, I have been trying to debug a problem for a month now and could use some insights and advice. This is the setup: I have two Linux HA storage nodes providing an iSCSI disk; the disk is mounted on two Linux KVM hosts and one backup server. The iSCSI disk has LVM on it, and the logical volume groups are visible on all servers. On the backup server I have the following running: # guestmount

2018 Oct 10 · 2 · same netbios aliases on multiple servers
We have many branches with different link quality. Each branch has a Samba DC with a file server, and in order not to create separate labels for each branch, we set up a geo round robin on the DNS server. Branch A: net 192.168.1.0/24, DC and FS 192.168.1.1, name fs1. Branch B: net 192.168.2.0/24, DC and FS 192.168.2.1, name fs2. If a user opens fs.example in branch A, they reach fs1; if they open fs.example in

2008 May 04 · 3 · Some bugs/inconsistencies.
Hi. I'm working on getting the most recent ZFS into FreeBSD's CVS. Because of the huge amount of changes, I decided to work on ZFS regression tests, so I'm more or less sure nothing broke in the meantime. (Yes, I know about the ZFS test suite, but unfortunately I wasn't able to port it to FreeBSD; it was just too much work. I'm afraid it is too