Displaying 20 results from an estimated 6000 matches similar to: "Some bugs/inconsistencies."
2007 Aug 07
5
Extending RAIDZ.
Yeah:)
I'd like to work on this. Here are my first observations:
- We need to call the vdev_op_asize method with an additional 'offset' argument,
- We need to move data to the new disk starting from the very beginning, so
we can't reuse the scrub/resilver code, which does a tree-walk through the
data.
Below you can see how I imagine to extend RAIDZ. Here is the legend:
2012 Jun 16
4
Failing to start or create VM, cannot connect to hypervisor host
Greetings -
I shut down one of my CentOS 6.2 VMs for some offline maintenance and am
now unable to get it to restart. I am also unable to create and start a
new VM. The host system is CentOS 6.2, fully up to date. I have been
searching Google for two days and have not been successful in getting a VM
to start. I have restarted libvirtd, but did not want to shut down my
other two running VMs and
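A minimal first-pass diagnostic sketch for this kind of failure (the guest name "vm1" is a placeholder, not a detail from the report above):
virsh -c qemu:///system list --all
virsh -c qemu:///system start vm1
service libvirtd status
# the per-guest QEMU log usually carries the real failure reason
tail -n 50 /var/log/libvirt/qemu/vm1.log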
2013 Dec 18
1
Re: How to attach USB disk to specified USB controller in domian?
Hi all,
Following Eric's approach, I dumped the domain's XML, but cannot find the address mentioned in the earlier mail (or is it a bug?). Could you show me a detailed example? Thanks.
# virsh dumpxml rhel
<domain type='kvm' id='7'>
<name>rhel</name>
<uuid>205c40e0-e917-47fe-9c4a-1f35748ffd21</uuid>
<memory
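For reference, a hedged sketch of a USB disk definition with an explicit USB address, attached to the "rhel" domain above via virsh; the image path, target device, and port number are illustrative placeholders:
cat > usb-disk.xml <<'EOF'
<disk type='file' device='disk'>
  <driver name='qemu' type='raw'/>
  <source file='/var/lib/libvirt/images/usbdisk.img'/>
  <target dev='sdb' bus='usb'/>
  <address type='usb' bus='0' port='1'/>
</disk>
EOF
virsh attach-device rhel usb-disk.xml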
2018 Jan 09
2
Bricks to sub-volume mapping
Hi Team,
Please let me know how I can tell which bricks are part of which sub-volumes in the case of a disperse volume. For example, the volume below has two sub-volumes:
Type: Distributed-Disperse
Volume ID: 6dc8ced8-27aa-4481-bfe8-057133c31d0b
Status: Started
Snapshot Count: 0
Number of Bricks: 2 x (4 + 2) = 12
Transport-type: tcp
Bricks:
Brick1: pdchyperscale1sds:/ws/disk1/ws_brick
Brick2:
2018 Jan 09
0
Bricks to sub-volume mapping
The first 6 bricks belong to the first sub-volume and the next 6 bricks
belong to the second.
On Tuesday 09 January 2018 12:11 PM, Anand Malagi wrote:
>
> Hi Team,
>
> Please let me know how I can tell which bricks are part of which
> sub-volumes in the case of a disperse volume. For example, the volume
> below has two sub-volumes:
>
> Type: Distributed-Disperse
>
> Volume ID:
2013 Apr 29
2
Samba 3 dynamically enable or disable share
Hello,
I wonder if it is possible to dynamically enable/disable Samba 3 shares.
Here is my problem.
On a remote server I have 4 removable hard drives, large capacity. I am not using any RAID/JBOD, so each drive is mounted individually (like /mnt/DISK1, /mnt/DISK2 etc) and each drive is individually shared, something like:
[STORAGE01]
path = /mnt/DISK1
Guest OK = false
...
[STORAGE02]
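One commonly suggested approach, sketched here under the assumption that editing smb.conf is acceptable (by default smbd also notices a changed smb.conf on its own for new connections): comment the share stanza in or out, validate, then force a reload:
testparm -s                      # sanity-check smb.conf after editing the [STORAGE01] stanza
smbcontrol smbd reload-config    # ask the running smbd processes to re-read the config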
2009 May 31
1
ZFS rollback, ORA-00322: log 1 of thread 1 is not current copy (???)
Hi.
Using ZFS-FUSE.
$SUBJECT happened 3 out of 5 times while testing; I just want to know if
someone has seen such a scenario before.
Steps:
------------------------------------------------------------
root@localhost:/# uname -a
Linux localhost 2.6.24-24-generic #1 SMP Wed Apr 15 15:54:25 UTC 2009
i686 GNU/Linux
root@localhost:/# zpool upgrade -v
This system is currently running ZFS pool
2018 Jan 09
2
Bricks to sub-volume mapping
But do we store this information somewhere as part of Gluster metadata or something similar?
Thanks and Regards,
--Anand
Extn : 6974
Mobile : 91 9552527199, 91 9850160173
From: Aravinda [mailto:avishwan at redhat.com]
Sent: 09 January 2018 12:31
To: Anand Malagi <amalagi at commvault.com>; gluster-users at gluster.org
Subject: Re: [Gluster-users] Bricks to sub-volume mapping
First 6 bricks
2018 Jan 09
0
Bricks to sub-volume mapping
No, we don't store the information separately, but it can easily be
predicted from the Volume Info.
For example, the Volume Info below shows "Number of Bricks" in
the following format:
    Number of Subvols x (Number of Data bricks + Number of Redundancy
    bricks) = Total Bricks
Note: sub-volumes are predictable without storing this as separate info,
since we do not have
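Applied to a listing like the one above, bricks group in listing order, so a one-liner sketch can print the mapping ("myvol" is a placeholder volume name; the divisor 6 is the per-subvolume brick count, 4 + 2):
gluster volume info myvol | awk '/^Brick[0-9]+:/ { printf "subvol-%d %s\n", int(n++/6), $2 }'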
2008 Feb 08
4
Subsetting a data.frame degenerates at one column?
Greetings.
At the moment, I'm applying R to some AIX 'nmon' output, trying to get
a handle on some disk performance metrics. In case anyone's
interested:
http://docs.osg.ufl.edu/tsm/pdf/
some of them are more edifying than others. (ahem)
I'm trying to develop a somewhat general framework for plotting these
measures, in the hopes that it's of some use to people other
2017 Jul 10
0
[Gluster-devel] gfid and volume-id extended attributes lost
@Pranith, yes. We can get the pid on all removexattr calls and also print
the backtrace of the glusterfsd process when the removexattr is triggered.
I will write the script and reply back.
On Sat, Jul 8, 2017 at 7:06 AM, Pranith Kumar Karampuri <pkarampu at redhat.com> wrote:
> Ram,
> As per the code, self-heal was the only candidate which *can* do
> it. Could you check
2017 Jul 07
0
[Gluster-devel] gfid and volume-id extended attributes lost
We lost the attributes on all the bricks on servers glusterfs2 and glusterfs3 again.
[root at glusterfs2 Log_Files]# gluster volume info
Volume Name: StoragePool
Type: Distributed-Disperse
Volume ID: 149e976f-4e21-451c-bf0f-f5691208531f
Status: Started
Number of Bricks: 20 x (2 + 1) = 60
Transport-type: tcp
Bricks:
Brick1: glusterfs1sds:/ws/disk1/ws_brick
Brick2: glusterfs2sds:/ws/disk1/ws_brick
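A quick way to confirm the loss on a given brick root is the standard getfattr check, run on the brick server:
getfattr -d -m . -e hex /ws/disk1/ws_brick
# a healthy brick root lists trusted.gfid and trusted.glusterfs.volume-id among the xattrs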
2017 Jul 08
2
[Gluster-devel] gfid and volume-id extended attributes lost
Ram,
As per the code, self-heal was the only candidate which *can* do it.
Could you check logs of self-heal daemon and the mount to check if there
are any metadata heals on root?
+Sanoj
Sanoj,
Is there any systemtap script we can use to detect which process is
removing these xattrs?
On Sat, Jul 8, 2017 at 2:58 AM, Ankireddypalle Reddy <areddy at commvault.com>
wrote:
>
2017 Jul 10
0
[Gluster-devel] gfid and volume-id extended attributes lost
Ram,
If you see it again, you can use this. I am going to send out a patch
for the code path which can lead to removal of gfid/volume-id tomorrow.
On Mon, Jul 10, 2017 at 5:19 PM, Sanoj Unnikrishnan <sunnikri at redhat.com>
wrote:
> Please use the systemtap script (https://paste.fedoraproject.org/paste/EGDa0ErwX0LV3y-gBYpfNA)
> to check which process is invoking remove xattr
2008 Nov 06
3
Help recovering zfs filesystem
Let me preface this by admitting that I'm a bonehead.
I had a mirrored zfs filesystem. I needed to use one of the mirrors temporarily, so I did a zpool detach to remove the member (call it disk1), leaving disk0 in the pool. However, after the detach I mistakenly wiped disk0.
So here is the question. I haven't touched disk1 yet, so the data is hopefully still there. Is there
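Before anything else it is worth checking, read-only, whether ZFS labels are still intact on disk1; zdb can dump them (the device name below is a placeholder):
zdb -l /dev/dsk/c1t1d0s0    # prints the four vdev labels, if any survive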
2017 Jul 13
0
[Gluster-devel] gfid and volume-id extended attributes lost
Ram,
I sent https://review.gluster.org/17765 to fix this possibility in
bulk removexattr, but I am not sure whether this is indeed the cause of
this issue.
On Mon, Jul 10, 2017 at 6:30 PM, Ankireddypalle Reddy <areddy at commvault.com>
wrote:
> Thanks for the swift turn around. Will try this out and let you know.
>
>
>
> Thanks and Regards,
>
> Ram
>
> *From:*
2017 Jul 10
2
[Gluster-devel] gfid and volume-id extended attributes lost
Please use the systemtap script
(https://paste.fedoraproject.org/paste/EGDa0ErwX0LV3y-gBYpfNA) to check
which process is invoking removexattr calls.
It prints the pid, tid, and arguments of all removexattr calls.
I have checked for these fops at the protocol/client and posix translators.
To run the script:
1) install systemtap and dependencies.
2) install glusterfs-debuginfo
3) change the path
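The linked script itself is not reproduced here, but a minimal stand-in with the same observable behavior (printing pid, tid and arguments of every removexattr-family call, system-wide) can be run straight from the shell once systemtap is installed:
stap -e 'probe syscall.removexattr, syscall.lremovexattr, syscall.fremovexattr {
    printf("%s pid=%d tid=%d %s\n", execname(), pid(), tid(), argstr)
}'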
2017 Jul 10
2
[Gluster-devel] gfid and volume-id extended attributes lost
Thanks for the swift turn around. Will try this out and let you know.
Thanks and Regards,
Ram
From: Pranith Kumar Karampuri [mailto:pkarampu at redhat.com]
Sent: Monday, July 10, 2017 8:31 AM
To: Sanoj Unnikrishnan
Cc: Ankireddypalle Reddy; Gluster Devel (gluster-devel at gluster.org); gluster-users at gluster.org
Subject: Re: [Gluster-devel] gfid and volume-id extended attributes lost
Ram,
2007 Dec 05
2
zfs mirroring question
I create two zfs filesystems on one pool of four disks with two mirrors, such as:
zpool create tank mirror disk1 disk2 mirror disk3 disk4
zfs create tank/fs1
zfs create tank/fs2
Are fs1 and fs2 striped across all four disks?
If two disks fail that represent a 2-way mirror, do I lose data?
Brian.
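For what it's worth: with this layout ZFS stripes both filesystems across the two top-level mirror vdevs, and losing both disks of the same mirror loses the pool. The grouping is visible in the status output:
zpool status tank
# expected vdev tree: two top-level mirrors, all datasets striped across both
#   tank
#     mirror  disk1 disk2
#     mirror  disk3 disk4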
2007 Sep 19
2
import zpool error if use loop device as vdev
Hey, guys,
I just did a test using loop devices as vdevs for a zpool.
Procedure as follows:
1) mkfile -v 100m disk1
mkfile -v 100m disk2
2) lofiadm -a disk1 /dev/lofi
lofiadm -a disk2 /dev/lofi
3) zpool create pool_1and2 /dev/lofi/1 /dev/lofi/2
4) zpool export pool_1and2
5) zpool import pool_1and2
error info here:
bash-3.00# zpool import pool1_1and2
cannot import
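Two details worth checking when reproducing this: the failing import above names "pool1_1and2" although the pool was created as "pool_1and2", and pools built on lofi devices generally need the device directory spelled out at import time:
zpool import                    # with no pool argument, lists every importable pool found
zpool import -d /dev/lofi pool_1and2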