similar to: build7 isos dont fit on a normal 650M CD-R

Displaying 20 results from an estimated 1000 matches similar to: "build7 isos dont fit on a normal 650M CD-R"

2008 May 04
3
Some bugs/inconsistencies.
Hi. I'm working on getting the most recent ZFS into FreeBSD's CVS. Because of the huge amount of changes, I decided to work on ZFS regression tests, so I'm more or less sure nothing broke in the meantime. (Yes, I know about the ZFS test suite, but unfortunately I wasn't able to port it to FreeBSD, it was just too much work. I'm afraid it is too
2008 Feb 08
4
Subsetting a data.frame degenerates at one column?
Greetings. At the moment, I'm applying R to some AIX 'nmon' output, trying to get a handle on some disk performance metrics. In case anyone's interested: http://docs.osg.ufl.edu/tsm/pdf/ some of them are more edifying than others. (ahem) I'm trying to develop a somewhat general framework for plotting these measures, in the hopes that it's of some use to people other
2018 Jan 09
0
Bricks to sub-volume mapping
The first 6 bricks belong to the first sub-volume and the next 6 bricks belong to the second. On Tuesday 09 January 2018 12:11 PM, Anand Malagi wrote: > > Hi Team, > > Please let me know how I can know which bricks are part of which > sub-volumes in case of disperse volume, for example in below volume > has two sub-volumes : > > Type: Distributed-Disperse > > Volume ID:
2008 Oct 28
4
blktap, vmdk, vdi, and disk management support
Just a quick FYI... We've recently added support for blktap along with support for managing virtual disks (disk file images). There are some differences from a Linux dom0. This is available in b101 @ http://www.opensolaris.org/os/downloads/sol_ex_dvd_1/ This allows you to create and manage vmdk and vdi (VirtualBox) disk files. By default, virt-install will now use a vmdk vdisk when
2018 Jan 09
2
Bricks to sub-volume mapping
Hi Team, Please let me know how I can know which bricks are part of which sub-volumes in the case of a disperse volume. For example, the volume below has two sub-volumes: Type: Distributed-Disperse Volume ID: 6dc8ced8-27aa-4481-bfe8-057133c31d0b Status: Started Snapshot Count: 0 Number of Bricks: 2 x (4 + 2) = 12 Transport-type: tcp Bricks: Brick1: pdchyperscale1sds:/ws/disk1/ws_brick Brick2:
2009 May 31
1
ZFS rollback, ORA-00322: log 1 of thread 1 is not current copy (???)
Hi. Using ZFS-FUSE. $SUBJECT happened 3 out of 5 times while testing; I just want to know if someone has seen such a scenario before. Steps: ------------------------------------------------------------ root@localhost:/# uname -a Linux localhost 2.6.24-24-generic #1 SMP Wed Apr 15 15:54:25 UTC 2009 i686 GNU/Linux root@localhost:/# zpool upgrade -v This system is currently running ZFS pool
2006 Nov 29
1
pxelinux localboot enhancement
pxelinux, when given a localboot=... option, will try to boot the next BIOS device itself. The problem is that it picks the next BIOS device by the BIOS order of enumeration, which has no connection with the boot order defined in the BIOS. Since a user cannot control BIOS enumeration but can control the boot order, there is no way to force a desired next device for local boot with pxelinux. In general, you
2018 Jan 09
0
Bricks to sub-volume mapping
No, we don't store the information separately, but it can easily be derived from the volume info. For example, in the volume info below, "Number of Bricks" is shown in the following format: Number of Subvols x (Number of Data bricks + Number of Redundancy bricks) = Total Bricks. Note: sub-volumes are predictable without storing this as separate info since we do not have
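
To make the rule above concrete, here is a minimal Python sketch (not a gluster tool; the hostnames and brick paths are made-up examples) that groups the ordered brick list from gluster volume info into sub-volumes using the "N x (D + R) = T" line:

import re

# Minimal sketch, not a gluster tool: group the ordered brick list from
# "gluster volume info" output into sub-volumes using the rule quoted above,
# i.e. "Number of Bricks: <subvols> x (<data> + <redundancy>) = <total>".
# Hostnames and brick paths below are made-up examples.
def subvolumes(volume_info_text):
    m = re.search(r"Number of Bricks:\s*(\d+)\s*x\s*\((\d+)\s*\+\s*(\d+)\)\s*=\s*(\d+)",
                  volume_info_text)
    if not m:
        raise ValueError("no 'N x (D + R) = T' brick count found")
    subvols, data, redundancy, total = map(int, m.groups())
    width = data + redundancy  # bricks per sub-volume
    bricks = re.findall(r"Brick\d+:\s*(\S+)", volume_info_text)
    assert len(bricks) == total == subvols * width
    # Bricks are listed in order, so consecutive slices of 'width' bricks form one sub-volume.
    return [bricks[i:i + width] for i in range(0, total, width)]

example = "Number of Bricks: 2 x (4 + 2) = 12\n" + "".join(
    f"Brick{i}: host{(i - 1) % 3 + 1}:/ws/disk{(i - 1) // 3 + 1}/ws_brick\n"
    for i in range(1, 13))
for n, sv in enumerate(subvolumes(example), 1):
    print(f"sub-volume {n}: {sv}")

With the example volume above this prints two groups of six bricks, matching the "first 6 bricks / next 6 bricks" answer given earlier in this thread.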
2017 Jul 10
0
[Gluster-devel] gfid and volume-id extended attributes lost
@pranith, yes. We can get the pid on all removexattr calls and also print the backtrace of the glusterfsd process when the xattr removal is triggered. I will write the script and reply back. On Sat, Jul 8, 2017 at 7:06 AM, Pranith Kumar Karampuri <pkarampu at redhat.com> wrote: > Ram, > As per the code, self-heal was the only candidate which *can* do > it. Could you check
2018 Jan 09
2
Bricks to sub-volume mapping
But do we store this information somewhere as part of gluster metadata or something... Thanks and Regards, --Anand Extn : 6974 Mobile : 91 9552527199, 91 9850160173 From: Aravinda [mailto:avishwan at redhat.com] Sent: 09 January 2018 12:31 To: Anand Malagi <amalagi at commvault.com>; gluster-users at gluster.org Subject: Re: [Gluster-users] Bricks to sub-volume mapping First 6 bricks
2013 Apr 29
2
Samba 3 dynamically enable or disable share
Hello, I wonder if it is possible to dynamically enable/disable samba 3 shares. Here is my problem. On a remote server I have 4 removable hard drives, large capacity. I am not using any RAID/JBOD, so each drive is mounted individually (like /mnt/DISK1, /mnt/DISK2 etc) and each drive is individually shared, something like: [STORAGE01] path = /mnt/DISK1 Guest OK = false ... [STORAGE02]
2002 Dec 19
0
Failed to delete entry for share
Hi All, I'm having some minor trouble with an obscure samba feature. I'm using the remote administration "Server Manager" features of smb.conf, specifically the "add share command", "change share command" and "delete share command". I've written a small C program to do the text-processing portion of the smb.conf file needed for each operation. The C
2017 Jul 10
0
[Gluster-devel] gfid and volume-id extended attributes lost
Ram, If you see it again, you can use this. I am going to send out a patch tomorrow for the code path which can lead to removal of gfid/volume-id. On Mon, Jul 10, 2017 at 5:19 PM, Sanoj Unnikrishnan <sunnikri at redhat.com> wrote: > Please use the systemtap script (https://paste.fedoraproject.org/paste/EGDa0ErwX0LV3y-gBYpfNA) to check which process is invoking remove xattr
2017 Jul 07
0
[Gluster-devel] gfid and volume-id extended attributes lost
We lost the attributes on all the bricks on servers glusterfs2 and glusterfs3 again. [root@glusterfs2 Log_Files]# gluster volume info Volume Name: StoragePool Type: Distributed-Disperse Volume ID: 149e976f-4e21-451c-bf0f-f5691208531f Status: Started Number of Bricks: 20 x (2 + 1) = 60 Transport-type: tcp Bricks: Brick1: glusterfs1sds:/ws/disk1/ws_brick Brick2: glusterfs2sds:/ws/disk1/ws_brick
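
A quick way to see whether those attributes are actually gone from a brick root is to query them directly. The following is a minimal Python 3 sketch on Linux (not from this thread; the brick paths are made-up examples, and reading the trusted.* namespace requires root):

import os

# Sketch only: check whether the gluster xattrs this thread reports as lost
# (trusted.gfid and trusted.glusterfs.volume-id) are still present on each
# brick root. Brick paths are hypothetical; adjust them to your volume.
BRICKS = ["/ws/disk1/ws_brick", "/ws/disk2/ws_brick"]
XATTRS = ["trusted.gfid", "trusted.glusterfs.volume-id"]

for brick in BRICKS:
    for name in XATTRS:
        try:
            value = os.getxattr(brick, name)  # raises OSError if the xattr is absent
            print(f"{brick}: {name} = {value.hex()}")
        except OSError as err:
            print(f"{brick}: {name} MISSING ({err})")

Run as root on each affected server, it shows per brick which of the two attributes has disappeared.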
2017 Jul 13
0
[Gluster-devel] gfid and volume-id extended attributes lost
Ram, I sent https://review.gluster.org/17765 to fix the possibility of this happening in bulk removexattr. But I am not sure if this is indeed the reason for this issue. On Mon, Jul 10, 2017 at 6:30 PM, Ankireddypalle Reddy <areddy at commvault.com> wrote: > Thanks for the swift turn around. Will try this out and let you know. > > > > Thanks and Regards, > > Ram > > *From:*
2017 Jul 08
2
[Gluster-devel] gfid and volume-id extended attributes lost
Ram, As per the code, self-heal was the only candidate which *can* do it. Could you check the logs of the self-heal daemon and the mount to see if there are any metadata heals on root? +Sanoj Sanoj, Is there any systemtap script we can use to detect which process is removing these xattrs? On Sat, Jul 8, 2017 at 2:58 AM, Ankireddypalle Reddy <areddy at commvault.com> wrote: >
2017 Jul 10
2
[Gluster-devel] gfid and volume-id extended attributes lost
Please use the systemtap script (https://paste.fedoraproject.org/paste/EGDa0ErwX0LV3y-gBYpfNA) to check which process is invoking remove xattr calls. It prints the pid, tid and arguments of all removexattr calls. I have checked for these fops at the protocol/client and posix translators. To run the script: 1) install systemtap and dependencies, 2) install glusterfs-debuginfo, 3) change the path
2007 Aug 07
5
Extending RAIDZ.
Yeah :) I'd like to work on this. Here are my first observations: - We need to call the vdev_op_asize method with an additional 'offset' argument, - We need to move data to the new disk starting from the very beginning, so we can't reuse the scrub/resilver code which does a tree-walk through the data. Below you can see how I imagine extending RAIDZ. Here is the legend:
2017 Jul 10
2
[Gluster-devel] gfid and volume-id extended attributes lost
Thanks for the swift turn around. Will try this out and let you know. Thanks and Regards, Ram From: Pranith Kumar Karampuri [mailto:pkarampu at redhat.com] Sent: Monday, July 10, 2017 8:31 AM To: Sanoj Unnikrishnan Cc: Ankireddypalle Reddy; Gluster Devel (gluster-devel at gluster.org); gluster-users at gluster.org Subject: Re: [Gluster-devel] gfid and volume-id extended attributes lost Ram,
2016 Jan 10
2
[cfe-dev] Is it a va_arg bug in clang?
Hi Richard, I tried the latest 3.7.1 release; clang has the same build failure and doesn't know __builtin_ms_va_list at all. I compared llvm trunk with 3.7.1 and found that trunk has a VA commit from Davis which is not included in the 3.7.1 release. So, I guess I need to build the latest trunk directly instead of the 3.7.1 release. (Why doesn't the 3.7.1 release include this patch?) commit