search for: diskgroups

Displaying 20 results from an estimated 33 matches for "diskgroups".

2012 Dec 07
2
[PATCH] Add support for Windows dynamic disks (libldm / ldmtool).
This is just an initial version of the patch, not to be applied. It implements just the diskgroup functions, i.e. corresponding to these ldmtool commands: * ldmtool scan * ldmtool show diskgroup <guid> I have chosen yajl as the JSON parsing library (don't worry, this is optional). You will also, of course, need ldmtool, which is not packaged in anything except Fedora. Rich.
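
ldmtool prints its results as JSON, which is why the patch needs a JSON parsing library in the daemon. Below is a minimal sketch of pulling one field out of such output with yajl's tree API; the hard-coded input and the "name" field are illustrative assumptions, not the actual ldmtool output format.

    #include <stdio.h>
    #include <yajl/yajl_tree.h>

    int
    main (void)
    {
      /* Stand-in for the output of `ldmtool show diskgroup <guid>`;
       * the real JSON shape is an assumption here. */
      const char *json = "{ \"name\": \"WIN-EXAMPLE-Dg0\" }";
      char errbuf[1024];

      yajl_val root = yajl_tree_parse (json, errbuf, sizeof errbuf);
      if (root == NULL) {
        fprintf (stderr, "parse error: %s\n", errbuf);
        return 1;
      }

      const char *path[] = { "name", NULL };
      yajl_val name = yajl_tree_get (root, path, yajl_t_string);
      if (name)
        printf ("diskgroup name: %s\n", YAJL_GET_STRING (name));

      yajl_tree_free (root);
      return 0;
    }
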
2017 Nov 03
2
[PATCH] daemon: ldm: avoid manual free()
When the LDM code was converted to the CLEANUP_* macros, a free() invocation for a CLEANUP_FREE variable was left in the ldmtool_diskgroup_volumes implementation, causing double-free on success. Updates commit 950951c67de61da27dceca8ffb2079031c13e43b. --- daemon/ldm.c | 1 - 1 file changed, 1 deletion(-) diff --git a/daemon/ldm.c b/daemon/ldm.c index 1bab28989..2f4d2aef3 100644 ---
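
CLEANUP_FREE marks a local variable to be freed automatically when it goes out of scope (GCC's cleanup attribute), so a leftover manual free() releases the same pointer a second time. A minimal sketch of the pattern, using a simplified stand-in for the macro rather than the daemon's real definition:

    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>

    /* Simplified stand-in for the daemon's CLEANUP_FREE macro: free the
     * variable automatically when it leaves scope. */
    static void free_ptr (void *p) { free (*(void **) p); }
    #define CLEANUP_FREE __attribute__ ((cleanup (free_ptr)))

    int
    main (void)
    {
      CLEANUP_FREE char *s = strdup ("volume list");   /* freed on scope exit */
      printf ("%s\n", s);
      /* free (s);  <- the leftover call the patch removes: with the cleanup
       * attribute still active this would be a double free. */
      return 0;
    }
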
2004 Jun 01
5
OCFS 1.0.9-6 performance with EVA 6000 Storage
Dear All... I need some information regarding OCFS performance in my Linux box; herewith are my environment details: 1. We are using RHAS 2.1 with kernel 2.4.9-e.27 Enterprise 2. OCFS version: 2.4.9-e-enterprise-1.0.9-6 3. Oracle RDBMS: 9.2.0.4 RAC with 5 Nodes 4. Storage = EVA 6000 with 8 TB size 5. We have 1 DiskGroup and 51 LUNs configured in EVA6000. My question is: 1. It takes around 15
2004 Jun 01
5
OCFS 1.0.9-6 performance with EVA 6000 Storage
Dear All... I need some information regarding OCFS performance in my Linux box; herewith are my environment details: 1. We are using RHAS 2.1 with kernel 2.4.9-e.27 Enterprise 2. OCFS version: 2.4.9-e-enterprise-1.0.9-6 3. Oracle RDBMS: 9.2.0.4 RAC with 5 Nodes 4. Storage = EVA 6000 with 8 TB size 5. We have 1 DiskGroup and 51 LUNs configured in EVA6000. My question is: 1. It takes around 15
2017 Jul 27
0
[PATCH v2] daemon: Remove GUESTFSD_EXT_CMD.
GUESTFSD_EXT_CMD was used by OpenSUSE to track which external commands are run by the daemon and package those commands into the appliance. It is no longer used by recent SUSE builds, so remove it. Thanks: Pino Toscano, Olaf Hering. --- daemon/9p.c | 3 +- daemon/available.c | 7 +-- daemon/base64.c | 6 +-- daemon/blkid.c | 10 ++---
2017 Jul 24
0
[PATCH 2/2] daemon: Replace GUESTFSD_EXT_CMD with --print-external-commands.
GUESTFSD_EXT_CMD is used by OpenSUSE to track which external commands are run by the daemon and package those commands into the appliance. However, because this uses linker trickery, it won't work from OCaml code. Replace it with a [nearly] standard C mechanism. Files still have to declare the external commands they will use, e.g.: DECLARE_EXTERNAL_COMMANDS ("btrfs",
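
One plausible "nearly standard C" way to collect such per-file declarations is constructor-based registration, which --print-external-commands would then dump; the sketch below is only an assumption about the approach, not the actual libguestfs implementation.

    #include <stdio.h>

    #define MAX_EXT_CMDS 64
    static const char *ext_cmds[MAX_EXT_CMDS];
    static size_t nr_ext_cmds;

    /* Each source file lists the external commands it runs; a constructor
     * adds them to a global table before main() starts. */
    #define DECLARE_EXTERNAL_COMMANDS(...)                          \
      static void __attribute__ ((constructor))                     \
      register_ext_cmds (void)                                      \
      {                                                             \
        static const char *cmds[] = { __VA_ARGS__ };                \
        for (size_t i = 0; i < sizeof cmds / sizeof cmds[0]; ++i)   \
          ext_cmds[nr_ext_cmds++] = cmds[i];                        \
      }

    DECLARE_EXTERNAL_COMMANDS ("btrfs", "ldmtool")

    /* Roughly what a --print-external-commands option would do. */
    int
    main (void)
    {
      for (size_t i = 0; i < nr_ext_cmds; ++i)
        puts (ext_cmds[i]);
      return 0;
    }
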
2017 Nov 03
0
Re: [PATCH] daemon: ldm: avoid manual free()
On Fri, Nov 03, 2017 at 05:31:19PM +0100, Pino Toscano wrote: > When the LDM code was converted to the CLEANUP_* macros, a free() > invocation for a CLEANUP_FREE variable was left in the > ldmtool_diskgroup_volumes implementation, causing double-free on > success. > > Updates commit 950951c67de61da27dceca8ffb2079031c13e43b. > --- > daemon/ldm.c | 1 - > 1 file changed,
2007 Jul 02
3
ZFS and VXVM/VXFS
We are looking at alternatives to VXVM/VXFS. One of the features we liked in Veritas, apart from the obvious ones, is the ability to call the disks by name and group them into a disk group. Especially in a SAN-based environment where the disks may be shared by multiple machines, it is very easy to manage them by disk group names rather than cxtxdx numbers. Does zfs offer such
2017 May 04
4
[PATCH 0/3] generator: Allow returned strings to be annotated as devices.
If we want to permit more than 255 drives to be added, then we will have to add the disks to the same virtio-scsi target using different unit (LUN) numbers. Unfortunately, SCSI LUN enumeration in Linux is not deterministic (e.g. two disks with target=0, lun=[0,1] can be enumerated as /dev/sda or /dev/sdb randomly). Dealing with that will require some very complex device name translation on the
2017 Jul 27
3
[PATCH v2] daemon: Remove GUESTFSD_EXT_CMD.
This is a simpler patch that removes GUESTFSD_EXT_CMD completely.
2017 Jul 24
6
[PATCH 0/2] daemon: Replace GUESTFSD_EXT_CMD with --print-external-commands.
Replace GUESTFSD_EXT_CMD with a command line option ‘./guestfsd --print-external-commands’
2004 Jun 02
3
AW: OCFS 1.0.9-6 performance with EVA 6000 Storage
Hi all, Sorry to break in, but I find this thread a bit interesting. Jeram: I'm not very familiar with HP storage and cannot find too much info on the EVA 6000 array. Is it related to the EVA 5000 somehow, or is it a NAS array? In any case, how is the array configured? If the algorithm for heartbeat is as described earlier (36 sector reads and one write per second (per host???)) then you
2012 Dec 13
0
ANNOUNCE: libguestfs 1.20 - tools for accessing and modifying virtual machine disk images
I'm very pleased to announce the release of libguestfs 1.20. Libguestfs is a library and a comprehensive set of tools for accessing and modifying virtual machine (VM) disk images. For more information see http://libguestfs.org Libguestfs 1.20 represents 7 months of upstream work, dozens of major new features and bug fixes. For full details read the release notes below. You can download
2007 Jan 10
0
ZFS and HDS ShadowImage
...h devices are discovered, you may end up with one pool or > the other, or some combination of both. ah .. there we go - so we have an interaction between an uberblock date and prioritization on the import .. very keen. The non-deterministic case is well known in other self-describing pools or diskgroups (e.g. vxdg) and where the 6385531 RFE/bug came from on Leadville to provide more options for sites that lack flexibility on the SAN and presentation ports to mask out replicated disks. I guess there's a couple of corner cases that you may have already considered that would be good to expla...
2007 Sep 25
23
device alias
Hi. I'd like to request a feature be added to zfs. Currently, on SAN attached disk, zpool shows up with a big WWN for the disk. If ZFS (or the zpool command, in particular) had a text field for arbitrary information, it would be possible to add something that would indicate what LUN on what array the disk in question might be. This would make troubleshooting and general
2018 May 14
3
[PATCH libldm v4 0/3] Make libldm to parse and return volume GUID.
v2: wrap commit message, "PATCH libldm" prefix. v3: correctly initialize and free GLib resources. v4: gtk-doc is updated to reflect presence of new volume GUID field. The result of this patch might be used by libguestfs to return drive mappings for LDM volumes. Note that the "show volume" ldmtool command already returns a hint, which is the drive letter assigned by Windows to
2012 Dec 14
1
[PATCH] Add support for getting and setting GPT partition type GUIDs
New APIs: part_set_gpt_type part_get_gpt_type --- appliance/packagelist.in | 1 + daemon/parted.c | 129 +++++++++++++++++++++++++++++++++++++++++++++++ generator/actions.ml | 30 +++++++++++ generator/tests_c_api.ml | 7 +++ generator/types.ml | 5 ++ src/MAX_PROC_NR | 2 +- 6 files changed, 173 insertions(+), 1 deletion(-) diff --git
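
For a sense of how the new calls are used, here is a rough sketch against the libguestfs C API, assuming the usual guestfs_ naming for the bindings; the disk image path is a placeholder and the GUID shown is the standard "Linux filesystem data" partition type.

    #include <stdio.h>
    #include <stdlib.h>
    #include <guestfs.h>

    int
    main (void)
    {
      guestfs_h *g = guestfs_create ();
      if (!g) exit (EXIT_FAILURE);

      /* "/tmp/test.img" is a placeholder image with a GPT partition table. */
      if (guestfs_add_drive (g, "/tmp/test.img") == -1) exit (EXIT_FAILURE);
      if (guestfs_launch (g) == -1) exit (EXIT_FAILURE);

      /* Tag partition 1 as "Linux filesystem data", then read it back. */
      if (guestfs_part_set_gpt_type (g, "/dev/sda", 1,
                                     "0FC63DAF-8483-4772-8E79-3D69D8477DE4") == -1)
        exit (EXIT_FAILURE);

      char *type = guestfs_part_get_gpt_type (g, "/dev/sda", 1);
      if (type) {
        printf ("partition 1 type GUID: %s\n", type);
        free (type);
      }

      guestfs_close (g);
      return 0;
    }
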
2016 Feb 16
3
slightly off-topic, RAID program for on-board SAS 2308-4i ?
Does anyone know what program can be used to query the RAID status from the OS for an on-board LSI SAS 2308-4i? On this page: http://docs.avagotech.com/docs/12351997 there is a curious note on the left that reads: "Integrated MegaRAID support available upon request" After one mostly fruitless round of chatting with LSI/Avago/Broadcom and one completely fruitless round of chatting
2015 Jan 30
5
Very slow disk I/O
On 1/30/2015 1:53 AM, Gordon Messmer wrote: > On 01/29/2015 05:07 AM, Jatin Davey wrote: >> Yes , it is a SATA disk. I am not sure of the speed. Can you tell me >> how to find out this information ? Additionally we are using RAID 10 >> configuration with 4 disks. > > What RAID controller are you using? > > # lspci | grep RAID [Jatin] [root at localhost ~]# lspci |
2015 Jan 30
0
Very slow disk I/O
On 1/29/2015 7:21 PM, Jatin Davey wrote: > [root at localhost ~]# lspci | grep RAID > 05:00.0 RAID bus controller: LSI Logic / Symbios Logic MegaRAID SAS-3 > 3108 [Invader] (rev 02) to get info out of those, you need to install MegaCli64 from LSI Logic, which has the ugliest command lines and output you've ever seen. I use the python script below, which I put in