Displaying 20 results from an estimated 200 matches similar to: "Commvault Engineering expanding"
2017 Jul 10
0
[Gluster-devel] gfid and volume-id extended attributes lost
@pranith, yes. We can get the pid on every removexattr call and also print
the backtrace of the glusterfsd process when the xattr removal is triggered.
I will write the script and reply back.
On Sat, Jul 8, 2017 at 7:06 AM, Pranith Kumar Karampuri <pkarampu at redhat.com
> wrote:
> Ram,
> As per the code, self-heal was the only candidate which *can* do
> it. Could you check
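For illustration, capturing the glusterfsd backtrace mentioned above could be
done with plain gdb; this is only a hedged sketch (the output paths are
arbitrary), not the script the poster promised to write.

    # Dump a backtrace of every thread of each running glusterfsd process.
    # Assumes gdb and the matching glusterfs debuginfo packages are installed.
    for pid in $(pidof glusterfsd); do
        gdb -p "$pid" -batch -ex "thread apply all bt" > "/tmp/glusterfsd-$pid.bt"
    done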
2017 Jul 10
0
[Gluster-devel] gfid and volume-id extended attributes lost
Ram,
If you see it again, you can use this. I am going to send out a patch
for the code path which can lead to removal of gfid/volume-id tomorrow.
On Mon, Jul 10, 2017 at 5:19 PM, Sanoj Unnikrishnan <sunnikri at redhat.com>
wrote:
> Please use the systemtap script (https://paste.fedoraproject.org/paste/EGDa0ErwX0LV3y-gBYpfNA)
> to check which process is invoking removexattr
2017 Jul 07
0
[Gluster-devel] gfid and volume-id extended attributes lost
We lost the attributes on all the bricks on servers glusterfs2 and glusterfs3 again.
[root at glusterfs2 Log_Files]# gluster volume info
Volume Name: StoragePool
Type: Distributed-Disperse
Volume ID: 149e976f-4e21-451c-bf0f-f5691208531f
Status: Started
Number of Bricks: 20 x (2 + 1) = 60
Transport-type: tcp
Bricks:
Brick1: glusterfs1sds:/ws/disk1/ws_brick
Brick2: glusterfs2sds:/ws/disk1/ws_brick
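As a quick way to confirm which xattrs are actually missing on a brick, one
could dump them with getfattr; this is a hedged illustration using a brick
path from the volume info above and the standard GlusterFS attribute names,
not a command from the original report.

    # Run as root on the brick server; a healthy brick root should show both
    # trusted.gfid and trusted.glusterfs.volume-id.
    getfattr -d -m . -e hex /ws/disk1/ws_brick
    # Or query the two attributes directly:
    getfattr -n trusted.glusterfs.volume-id -e hex /ws/disk1/ws_brick
    getfattr -n trusted.gfid -e hex /ws/disk1/ws_brick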
2017 Jul 13
0
[Gluster-devel] gfid and volume-id extended attributes lost
Ram,
I sent https://review.gluster.org/17765 to fix the possibility in
bulk removexattr. But I am not sure if this is indeed the reason for this
issue.
On Mon, Jul 10, 2017 at 6:30 PM, Ankireddypalle Reddy <areddy at commvault.com>
wrote:
> Thanks for the swift turn around. Will try this out and let you know.
>
>
>
> Thanks and Regards,
>
> Ram
>
> *From:*
2017 Jul 08
2
[Gluster-devel] gfid and volume-id extended attributes lost
Ram,
As per the code, self-heal was the only candidate which *can* do it.
Could you check the logs of the self-heal daemon and the mount to see if
there are any metadata heals on root?
+Sanoj
Sanoj,
Is there any systemtap script we can use to detect which process is
removing these xattrs?
On Sat, Jul 8, 2017 at 2:58 AM, Ankireddypalle Reddy <areddy at commvault.com>
wrote:
>
2017 Jul 10
2
[Gluster-devel] gfid and volume-id extended attributes lost
Thanks for the swift turn around. Will try this out and let you know.
Thanks and Regards,
Ram
From: Pranith Kumar Karampuri [mailto:pkarampu at redhat.com]
Sent: Monday, July 10, 2017 8:31 AM
To: Sanoj Unnikrishnan
Cc: Ankireddypalle Reddy; Gluster Devel (gluster-devel at gluster.org); gluster-users at gluster.org
Subject: Re: [Gluster-devel] gfid and volume-id extended attributes lost
Ram,
2017 Jul 10
2
[Gluster-devel] gfid and volume-id extended attributes lost
Please use the systemtap script (https://paste.fedoraproject.org/paste/EGDa0ErwX0LV3y-gBYpfNA)
to check which process is invoking removexattr calls.
It prints the pid, tid and arguments of all removexattr calls.
I have checked for these fops at the protocol/client and posix translators.
To run the script:
1) install systemtap and its dependencies
2) install glusterfs-debuginfo
3) change the path
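For reference, a probe in the same spirit might look like the sketch below;
this is not the script from the paste link, and the translator .so path (with
the 3.7.19 version mentioned elsewhere in the thread) is an assumption that
would need adjusting per step 3.

    # Print pid/tid and the xattr name for every posix_removexattr() call.
    # Requires systemtap and glusterfs-debuginfo; adjust the .so path.
    stap -e '
    probe process("/usr/lib64/glusterfs/3.7.19/xlator/storage/posix.so").function("posix_removexattr") {
        printf("%d %d removexattr name=%s\n", pid(), tid(), user_string($name))
    }'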
2017 Jul 07
0
[Gluster-devel] gfid and volume-id extended attributes lost
3.7.19
Thanks and Regards,
Ram
From: Pranith Kumar Karampuri [mailto:pkarampu at redhat.com]
Sent: Friday, July 07, 2017 11:54 AM
To: Ankireddypalle Reddy
Cc: Gluster Devel (gluster-devel at gluster.org); gluster-users at gluster.org
Subject: Re: [Gluster-devel] gfid and volume-id extended attributes lost
On Fri, Jul 7, 2017 at 9:20 PM, Ankireddypalle Reddy <areddy at
2008 Oct 08
0
Samba 3.x reports "not implemented" when Server 2008 SMB client requests FSCTL_GET_OBJECT_ID
Hi Samba list,
I ran across this really bizarre issue and was hoping somebody would
be able to shed some further light on it.
If this is better directed to the samba technical list, please let me
know and I will post there instead.
Background
=========
I'm using CommVault Galaxy 7.0 SP4 for backup and decided to share
its "IndexCache", which is a collection of files
2017 Jul 07
0
[Gluster-devel] gfid and volume-id extended attributes lost
Pranith,
Thanks for looking into the issue. The bricks were mounted after the reboot. One more thing I noticed: when the attributes were set manually while glusterd was up, they were lost again on starting the volume. I had to stop glusterd, set the attributes, and then start glusterd; after that the volume start succeeded.
Thanks and Regards,
Ram
From: Pranith
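The manual workaround described above would look roughly as follows; this is
a hedged sketch, not commands from the thread. The hex value is simply the
volume ID from the earlier volume info with the dashes stripped, the brick
path is one of the bricks listed there, and the root gfid is assumed to be
the conventional all-zeros-plus-one UUID.

    # Stop glusterd first (the poster notes that setting xattrs while glusterd
    # was running did not stick), restore the xattrs on each affected brick
    # root, then start glusterd again.
    systemctl stop glusterd        # or: service glusterd stop
    setfattr -n trusted.glusterfs.volume-id \
             -v 0x149e976f4e21451cbf0ff5691208531f /ws/disk1/ws_brick
    setfattr -n trusted.gfid \
             -v 0x00000000000000000000000000000001 /ws/disk1/ws_brick
    systemctl start glusterd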
2020 Feb 27
1
Question about latest CentOS 7 AWS AMI
Hi,
I'm seeing some strange behavior when trying to use the latest CentOS 7 AMI
from the AWS marketplace.
The AMI that we've been using previously is "ami-02eac2c0129f6376b"
released January 30, 2019 at 6:40:58 PM
Today I saw a new AMI with ID "ami-0c3b960f8440c7d71" that was released
February 21, 2020 at 3:50:07
Both these AMIs are owned by AWS account
2017 Jul 07
3
[Gluster-devel] gfid and volume-id extended attributes lost
On Fri, Jul 7, 2017 at 9:25 PM, Ankireddypalle Reddy <areddy at commvault.com>
wrote:
> 3.7.19
>
These are the only callers of removexattr, and only _posix_remove_xattr has
the potential to remove these keys, since posix_removexattr already makes
sure that it is not gfid/volume-id. And, surprise surprise, _posix_remove_xattr
happens only from the healing code of afr/ec. And this can only happen
2017 Jul 07
2
[Gluster-devel] gfid and volume-id extended attributes lost
On Fri, Jul 7, 2017 at 9:20 PM, Ankireddypalle Reddy <areddy at commvault.com>
wrote:
> Pranith,
>
> Thanks for looking into the issue. The bricks were
> mounted after the reboot. One more thing I noticed: when the
> attributes were set manually while glusterd was up, they were lost again
> on starting the volume. I had to stop glusterd
2017 Nov 02
0
Gluster Scale Limitations
On Tue, 31 Oct 2017 at 03:32, Mayur Dewaikar <mdewaikar at commvault.com>
wrote:
> Hi all,
>
> Are there any scale limitations in terms of how many nodes can be in a
> single Gluster Cluster or how much storage capacity can be managed in a
> single cluster? What are some of the large deployments out there that you
> know of?
>
>
The current design of GlusterD is not
2018 Jan 09
0
Bricks to sub-volume mapping
No, we don't store the information separately, but it can easily be derived
from the volume info.
For example, the volume info below shows "Number of Bricks" in the
following format (a worked example is sketched after this entry):
    Number of Subvols x (Number of Data bricks + Number of Redundancy
    bricks) = Total Bricks
Note: sub-volumes are predictable without storing this as separate info
since we do not have
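As an illustration (not from the original mail): the StoragePool volume shown
earlier reports "Number of Bricks: 20 x (2 + 1) = 60", i.e. 20 disperse
sub-volumes of 3 bricks each, taken in the order the bricks are listed, so
bricks 1-3 form subvol 0, bricks 4-6 form subvol 1, and so on. A hedged
one-liner to print that grouping:

    # Group the ordered brick list into sub-volumes of (2 data + 1 redundancy)
    # = 3 bricks each.
    gluster volume info StoragePool |
        awk '/^Brick[0-9]+:/ { printf "subvol-%d  %s\n", int(n++/3), $2 }'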
2017 Jul 07
3
[Gluster-devel] gfid and volume-id extended attributes lost
Did anything special happen on these two bricks? It can't happen in the I/O
path:
posix_removexattr() has:
        if (!strcmp (GFID_XATTR_KEY, name)) {
                gf_msg (this->name, GF_LOG_WARNING, 0,
                        P_MSG_XATTR_NOT_REMOVED,
                        "Remove xattr called on gfid for file %s",
                        real_path);
                op_ret = -1;
                goto
2018 Jan 09
2
Bricks to sub-volume mapping
But do we store this information somewhere as part of gluster metadata or something...
Thanks and Regards,
--Anand
Extn : 6974
Mobile : 91 9552527199, 91 9850160173
From: Aravinda [mailto:avishwan at redhat.com]
Sent: 09 January 2018 12:31
To: Anand Malagi <amalagi at commvault.com>; gluster-users at gluster.org
Subject: Re: [Gluster-users] Bricks to sub-volume mapping
First 6 bricks
2014 Nov 25
0
speex wideband and ogg question
Hi
My name is Jakob Aagesen.
I have a problem understanding the granulepos in the Ogg header. I found something that Ralph Giles wrote to someone:
What speexenc does (Speex itself does not know about Ogg) is that it gives
packet N the granulepos "N*frame_size - lookahead". In the case of narrowband,
the first frame would have granulepos "1*160 - 80", so 80.
What is the
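To make the quoted formula concrete (these numbers follow directly from the
narrowband values cited above, frame_size 160 and lookahead 80; they are not
from the original mail, and wideband would use its own frame_size/lookahead
values in the same formula):

    packet 1: 1*160 - 80 =  80
    packet 2: 2*160 - 80 = 240
    packet 3: 3*160 - 80 = 400

So after the initial lookahead offset, the granulepos advances by frame_size
per packet.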
2017 Oct 30
3
Gluster Scale Limitations
Hi all,
Are there any scale limitations in terms of how many nodes can be in a single Gluster Cluster or how much storage capacity can be managed in a single cluster? What are some of the large deployments out there that you know of?
Thanks,
Mayur
2012 Nov 28
1
Issues with USB connectivity
Howdy nut-users :)
Got a problem with nut detecting my UPS.
Here's my scenario.
Base platform is a HP ML350 G6.
UPS is an APC Smart-UPS 1000.
Base OS is VMware ESXi 5.0 with all current patches.
I've created a virtual machine using the VMware-supplied VMA.
USB passthrough is configured to pass the UPS through to the VMA.
So I essentially have a SLES 11 (x86_64) server with a USB
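A minimal NUT setup for an APC Smart-UPS reachable over USB usually looks
like the sketch below; the "apc" section name is arbitrary and the usbhid-ups
driver choice is an assumption based on the UPS model, not something stated
in the (truncated) post above.

    # Hypothetical minimal /etc/ups/ups.conf entry for the passed-through UPS:
    #   [apc]
    #       driver = usbhid-ups
    #       port = auto
    upsdrvctl start     # start the driver
    upsc apc            # once upsd is running, query the UPS to confirm detection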