Displaying 20 results from an estimated 2000 matches similar to: "Samba 3 dynamically enable or disable share"
2018 Jan 09
2
Bricks to sub-volume mapping
Hi Team,
Please let me know how I can tell which bricks are part of which sub-volumes in the case of a disperse volume. For example, the volume below has two sub-volumes:
Type: Distributed-Disperse
Volume ID: 6dc8ced8-27aa-4481-bfe8-057133c31d0b
Status: Started
Snapshot Count: 0
Number of Bricks: 2 x (4 + 2) = 12
Transport-type: tcp
Bricks:
Brick1: pdchyperscale1sds:/ws/disk1/ws_brick
Brick2:
2008 Feb 08
4
Subsetting a data.frame degenerates at one column?
Greetings.
At the moment, I'm applying R to some AIX 'nmon' output, trying to get
a handle on some disk performance metrics. In case anyone's
interested:
http://docs.osg.ufl.edu/tsm/pdf/
some of them are more edifying than others. (ahem)
I'm trying to develop a somewhat general framework for plotting these
measures, in the hopes that it's of some use to people other
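The subject line presumably refers to R's default drop = TRUE behaviour, where subsetting a data.frame down to a single column returns a bare vector instead of a data.frame; a minimal illustration (the column names here are made up):

df <- data.frame(reads = c(10, 20), writes = c(5, 15))
class(df[, c("reads", "writes")])   # "data.frame": two columns survive
class(df[, "reads"])                # "numeric": degenerates to a vector
class(df[, "reads", drop = FALSE])  # "data.frame": drop = FALSE preserves it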
2018 Jan 09
0
Bricks to sub-volume mapping
The first 6 bricks belong to the first sub-volume and the next 6 bricks belong to the
second.
On Tuesday 09 January 2018 12:11 PM, Anand Malagi wrote:
>
> Hi Team,
>
> Please let me know how I can tell which bricks are part of which
> sub-volumes in the case of a disperse volume. For example, the volume
> below has two sub-volumes:
>
> Type: Distributed-Disperse
>
> Volume ID:
2018 Jan 09
2
Bricks to sub-volume mapping
But do we store this information somewhere as part of Gluster metadata, or something similar?
Thanks and Regards,
--Anand
Extn : 6974
Mobile : 91 9552527199, 91 9850160173
From: Aravinda [mailto:avishwan at redhat.com]
Sent: 09 January 2018 12:31
To: Anand Malagi <amalagi at commvault.com>; gluster-users at gluster.org
Subject: Re: [Gluster-users] Bricks to sub-volume mapping
First 6 bricks
2008 May 04
3
Some bugs/inconsistencies.
Hi.
I'm working on getting the most recent ZFS into FreeBSD's CVS. Because
of the huge amount of changes, I decided to work on ZFS regression
tests, so I'm more or less sure nothing broke in the meantime.
(Yes, I know about the ZFS test suite, but unfortunately I wasn't able to
port it to FreeBSD, it was just too much work. I'm afraid it is too
2017 Jul 07
0
[Gluster-devel] gfid and volume-id extended attributes lost
We lost the attributes on all the bricks on servers glusterfs2 and glusterfs3 again.
[root at glusterfs2 Log_Files]# gluster volume info
Volume Name: StoragePool
Type: Distributed-Disperse
Volume ID: 149e976f-4e21-451c-bf0f-f5691208531f
Status: Started
Number of Bricks: 20 x (2 + 1) = 60
Transport-type: tcp
Bricks:
Brick1: glusterfs1sds:/ws/disk1/ws_brick
Brick2: glusterfs2sds:/ws/disk1/ws_brick
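A quick way to check whether those attributes are still present on a brick root is getfattr (a sketch; run it on the brick server, with the brick path taken from the listing above):

# dump all extended attributes on the brick root, hex-encoded
getfattr -d -m . -e hex /ws/disk1/ws_brick
# a healthy brick lists trusted.gfid and trusted.glusterfs.volume-id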
2017 Jul 08
2
[Gluster-devel] gfid and volume-id extended attributes lost
Ram,
As per the code, self-heal was the only candidate which *can* do it.
Could you check the logs of the self-heal daemon and of the mount to see
if there are any metadata heals on root?
+Sanoj
Sanoj,
Is there any systemtap script we can use to detect which process is
removing these xattrs?
On Sat, Jul 8, 2017 at 2:58 AM, Ankireddypalle Reddy <areddy at commvault.com>
wrote:
>
2018 Jan 09
0
Bricks to sub-volume mapping
No, we don't store the information separately. But it can easily be
predicted from the Volume Info.
For example, the Volume Info below shows "Number of Bricks" in
the following format:
Number of Subvols x (Number of Data bricks + Number of Redundancy
bricks) = Total Bricks
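Applied to the volume from the original question, that works out as follows (a sketch, assuming bricks are grouped into sub-volumes in the order "gluster volume info" lists them):

Number of Bricks: 2 x (4 + 2) = 12
  sub-volume 0 (disperse-0): Brick1 to Brick6    (4 data + 2 redundancy)
  sub-volume 1 (disperse-1): Brick7 to Brick12   (4 data + 2 redundancy)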
Note: Sub-volumes are predictable without storing this as separate info
since we do not have
2017 Jul 10
0
[Gluster-devel] gfid and volume-id extended attributes lost
@Pranith, yes. We can get the PID of all removexattr calls and also print
the backtrace of the glusterfsd process when the xattr removal is triggered.
I will write the script and reply back.
On Sat, Jul 8, 2017 at 7:06 AM, Pranith Kumar Karampuri <pkarampu at redhat.com
> wrote:
> Ram,
> As per the code, self-heal was the only candidate which *can* do
> it. Could you check
2017 Jul 10
2
[Gluster-devel] gfid and volume-id extended attributes lost
Please use the systemtap script
(https://paste.fedoraproject.org/paste/EGDa0ErwX0LV3y-gBYpfNA) to check
which process is invoking removexattr calls.
It prints the pid, tid and arguments of all removexattr calls.
I have checked for these fops at the protocol/client and posix translators.
To run the script:
1) install systemtap and dependencies
2) install glusterfs-debuginfo
3) change the path
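For reference, a minimal sketch of such a probe, assuming glusterfs-debuginfo is installed; the binary path and the probed function name are assumptions and may differ between versions:

# removexattr_trace.stp: print pid, tid and arguments of removexattr calls
probe process("/usr/sbin/glusterfsd").function("posix_removexattr") {
    printf("pid=%d tid=%d %s\n", pid(), tid(), $$parms)
}

# run it with: stap removexattr_trace.stp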
2017 Jul 10
0
[Gluster-devel] gfid and volume-id extended attributes lost
Ram,
If you see it again, you can use this. I am going to send out a patch
for the code path which can lead to removal of gfid/volume-id tomorrow.
On Mon, Jul 10, 2017 at 5:19 PM, Sanoj Unnikrishnan <sunnikri at redhat.com>
wrote:
> Please use the systemtap script(https://paste.fedoraproject.org/paste/
> EGDa0ErwX0LV3y-gBYpfNA) to check which process is invoking remove xattr
2017 Jul 10
2
[Gluster-devel] gfid and volume-id extended attributes lost
Thanks for the swift turnaround. Will try this out and let you know.
Thanks and Regards,
Ram
From: Pranith Kumar Karampuri [mailto:pkarampu at redhat.com]
Sent: Monday, July 10, 2017 8:31 AM
To: Sanoj Unnikrishnan
Cc: Ankireddypalle Reddy; Gluster Devel (gluster-devel at gluster.org); gluster-users at gluster.org
Subject: Re: [Gluster-devel] gfid and volume-id extended attributes lost
Ram,
2017 Jul 13
0
[Gluster-devel] gfid and volume-id extended attributes lost
Ram,
I sent https://review.gluster.org/17765 to fix the possibility in
bulk removexattr. But I am not sure if this is indeed the reason for this
issue.
On Mon, Jul 10, 2017 at 6:30 PM, Ankireddypalle Reddy <areddy at commvault.com>
wrote:
> Thanks for the swift turnaround. Will try this out and let you know.
>
> Thanks and Regards,
>
> Ram
>
> From:
2012 Jun 16
4
Failing to start or create VM, cannot connect to hypervisor host
Greetings -
I shut down one of my CentOS 6.2 VMs for some offline maintenance and am
now unable to get it to restart. I am also unable to create and start a
new VM. The host system is CentOS 6.2, fully up to date. I have been
searching Google for two days and have not been successful in getting a VM
to start. I have restarted libvirtd, but did not want to shut down my
other two running VMs and
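"Cannot connect to hypervisor host" usually points at the libvirt connection itself rather than at the guest, so a few first checks are worth running (a sketch; the commands are stock virsh/SysV tools, but the connection URI and the placeholder VM name are assumptions):

# verify the daemon is up and that virsh can reach it
service libvirtd status
virsh -c qemu:///system list --all
# then try starting the guest by name and read the error verbatim
virsh -c qemu:///system start <vm-name>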
2007 Aug 07
5
Extending RAIDZ.
Yeah :)
I'd like to work on this. Here are my first observations:
- We need to call the vdev_op_asize method with an additional 'offset' argument,
- We need to move data to the new disk starting from the very beginning, so
we can't reuse the scrub/resilver code, which does a tree-walk through the
data.
Below you can see how I imagine extending RAIDZ. Here is the legend:
2017 Jul 07
3
[Gluster-devel] gfid and volume-id extended attributes lost
On Fri, Jul 7, 2017 at 9:25 PM, Ankireddypalle Reddy <areddy at commvault.com>
wrote:
> 3.7.19
>
These are the only callers of removexattr, and only _posix_remove_xattr has
the potential to do a removexattr, as posix_removexattr already makes sure
that it is not gfid/volume-id. And, surprise surprise, _posix_remove_xattr
happens only from the healing code of afr/ec. And this can only happen
2007 Dec 05
2
zfs mirroring question
I create two ZFS filesystems on one pool of four disks with two mirrors, such as...
zpool create tank mirror disk1 disk2 mirror disk3 disk4
zfs create tank/fs1
zfs create tank/fs2
Are fs1 and fs2 striped across all four disks?
If the two disks that make up one of the 2-way mirrors both fail, do I lose data?
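A quick way to inspect how a pool is laid out across its vdevs, assuming the pool name from above:

zpool status tank       # shows both mirror vdevs and their member disks
zpool iostat -v tank    # per-vdev usage, which makes the striping across the two mirrors visible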
Brian.
2012 Jun 05
2
best practises for mail systems
Hello!
Can someone point me to some best practices for building a highly available, scalable mail system, or share your own success stories?
I've read the LJ article "Building a Scalable High-Availability E-Mail System with Active Directory and More",
but it seemed to be outdated and it has a single point of failure (the master node).
What I want to achieve:
highly available,
2008 Jan 15
4
Moving zfs to an iSCSI Equallogic LUN
We have a mirror setup in ZFS that's 73 GB (two internal disks on a Sun Fire V440). We are going to attach this system to an Equallogic box, and will attach an iSCSI LUN of about 200 GB from the Equallogic box to the V440. The Equallogic box is configured as hardware RAID 50 (two hot spares for redundancy).
My question is: what's the best approach to moving the ZFS
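The question is cut off here, but for context, two approaches commonly used for this kind of move are sketched below; the device and dataset names are hypothetical and this is not presented as the thread's answer:

# Option 1: replicate the data onto a new pool created on the LUN
zpool create newtank <iscsi-lun-device>
zfs snapshot tank/fs@move
zfs send tank/fs@move | zfs recv newtank/fs

# Option 2: attach the LUN as an extra side of the existing mirror,
# wait for the resilver to finish, then detach the internal disks
zpool attach tank <internal-disk> <iscsi-lun-device>
zpool detach tank <internal-disk>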
2009 May 31
1
ZFS rollback, ORA-00322: log 1 of thread 1 is not current copy (???)
Hi.
Using ZFS-FUSE.
$SUBJECT happened 3 out of 5 times while testing; I just want to know if
someone has seen such a scenario before.
Steps:
------------------------------------------------------------
root@localhost:/# uname -a
Linux localhost 2.6.24-24-generic #1 SMP Wed Apr 15 15:54:25 UTC 2009
i686 GNU/Linux
root@localhost:/# zpool upgrade -v
This system is currently running ZFS pool