search for: lv0

Displaying 20 results from an estimated 39 matches for "lv0".

2018 Jan 12
5
[PATCH 1/1] appliance: init: Avoid running degraded md devices
...fish <<EOF +# Add 2 empty disks +sparse $disk1 100M +sparse $disk2 100M +run + +# Create a raid1 based on the 2 disks +md-create test "/dev/sda /dev/sdb" level:raid1 + +# Create volume group and logical volume on md device +pvcreate /dev/md127 +vgcreate vg0 /dev/md127 +lvcreate-free lv0 vg0 100 +EOF + +# Ensure list-md-devices now returns the newly created md device +output=$( +guestfish --format=raw -a $disk1 --format=raw -a $disk2 <<EOF +run +list-md-devices +lvs +EOF +) + +expected="/dev/md127 +/dev/vg0/lv0" + +if [ "$output" != "$expected" ]...
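Stripped of the diff markers, the truncated test in this patch corresponds roughly to the guestfish script below. The command sequence and expected output are taken from the snippet itself; the disk image filenames and the error-handling tail of the if-block are filled in as plausible assumptions.

disk1=md-test-1.img
disk2=md-test-2.img

guestfish <<EOF
# Add 2 empty disks
sparse $disk1 100M
sparse $disk2 100M
run

# Create a raid1 based on the 2 disks
md-create test "/dev/sda /dev/sdb" level:raid1

# Create volume group and logical volume on the md device
pvcreate /dev/md127
vgcreate vg0 /dev/md127
lvcreate-free lv0 vg0 100
EOF

# Re-open the disks and check that both the md device and the LV are seen
output=$(
guestfish --format=raw -a $disk1 --format=raw -a $disk2 <<EOF
run
list-md-devices
lvs
EOF
)

expected="/dev/md127
/dev/vg0/lv0"

if [ "$output" != "$expected" ]; then
    echo "error: unexpected output: $output"
    exit 1
fi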
2011 Aug 10
1
fsck hangs in Pass 0a
...-5 (kvm,12322,1):ocfs2_file_buffered_write:2039 ERROR: status = -5 OCFS2: ERROR (device dm-7): ocfs2_check_group_descriptor: Group Descriptor # 0 has bad signature So I ran fsck.ocfs2 -f. But it hangs forever (>12h) with this output: fsck.ocfs2 1.4.4 Checking OCFS2 filesystem in /dev/mapper/lv0: Label: <NONE> UUID: F27D7B8F7127436981A2B5D1C93FB204 Number of blocks: 2684349440 Block size: 4096 Number of clusters: 2684349440 Cluster size: 4096 Number of slots: 16 /dev/mapper/lv0 was run with -f, check forced. Pass 0a: Checki...
2006 Mar 02
3
Advice on setting up Raid and LVM
Hi all, I'm setting up CentOS 4.2 on 2x80GB SATA drives. The partition scheme is like this: /boot = 300MB, / = 9.2GB, /home = 70GB, swap = 500MB. The RAID is RAID 1: md0 = 300MB = /boot, md1 = 9.2GB = LVM, md2 = 70GB = LVM, md3 = 500MB = LVM. Now, the confusing part is: 1. When creating VolGroup00, should I include all PVs (md1, md2, md3), then create the LVs? 2. When setting up RAID 1, should I
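One common way to lay this out, sketched with assumed device names (the sda/sdb partition numbers and LV sizes here are illustrative, not from the thread): mirror each partition pair with mdadm, keep /boot on the plain RAID1, and add the remaining md devices to a single volume group as physical volumes.

# Mirror each partition pair (device names are assumptions for illustration)
mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sda1 /dev/sdb1   # /boot
mdadm --create /dev/md1 --level=1 --raid-devices=2 /dev/sda2 /dev/sdb2   # LVM
mdadm --create /dev/md2 --level=1 --raid-devices=2 /dev/sda3 /dev/sdb3   # LVM
mdadm --create /dev/md3 --level=1 --raid-devices=2 /dev/sda4 /dev/sdb4   # LVM

# One volume group holding all three LVM-backed md devices
pvcreate /dev/md1 /dev/md2 /dev/md3
vgcreate VolGroup00 /dev/md1 /dev/md2 /dev/md3

# Carve the logical volumes out of the volume group
lvcreate -L 9G   -n root VolGroup00    # sizes are illustrative; adjust to the real PV sizes
lvcreate -L 512M -n swap VolGroup00
lvcreate -L 64G  -n home VolGroup00    # leave some free space in the VG for growth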
2008 Feb 25
0
The I/O bandwidth controller: dm-ioband Performance Report
...ol on a per logical volume basis =============================================== Test procedure -------------- o Prepare two partitions sda11 and sdb11. o Create a volume group with the two partitions. o Create two striped logical volumes on the volume group. o Give weights of 20 and 10 to lv0 and lv1 respectively. o Run 128 processes issuing random read/write direct I/O with 4KB data on each ioband device at the same time respectively. o Count up the number of I/Os which have done in 60 seconds. Block diagram ------------- Read/Write process x 128 Read/Write process x...
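The LVM side of that test procedure maps onto standard commands; a sketch follows, with illustrative LV sizes. The final two lines wrap each logical volume in an ioband device with weights 20 and 10, following the table format shown in the dm-ioband documentation, so treat the exact table fields as an approximation for the version under test.

# Two partitions -> one volume group -> two striped logical volumes
pvcreate /dev/sda11 /dev/sdb11
vgcreate vg0 /dev/sda11 /dev/sdb11
lvcreate -i 2 -L 10G -n lv0 vg0    # -i 2 stripes across both PVs; size is illustrative
lvcreate -i 2 -L 10G -n lv1 vg0

# Give weights 20 and 10 to lv0 and lv1 by stacking ioband devices on top.
# Table fields follow the dm-ioband documentation example and may differ by version.
size0=$(blockdev --getsize /dev/mapper/vg0-lv0)
size1=$(blockdev --getsize /dev/mapper/vg0-lv1)
echo "0 $size0 ioband /dev/mapper/vg0-lv0 1 0 0 none weight 0 :20" | dmsetup create ioband0
echo "0 $size1 ioband /dev/mapper/vg0-lv1 1 0 0 none weight 0 :10" | dmsetup create ioband1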
2005 Jan 23
0
e2fsck loops forever with re-allocation
...e super block sparse. Then problems surfaced. I tried both e2fsck v1.34 and v1.35 but they have the same result: Group descriptors look bad... trying backup blocks... Inode table for group 1519 is not in group. (block 7503623) WARNING: SEVERE DATA LOSS POSSIBLE. Relocate? yes /dev/mapper/lvm_vg0-lv0 was not cleanly unmounted, check forced. Pass 1: Checking inodes, blocks, and sizes Relocating group 1519's inode table to 7503623... Restarting e2fsck from the beginning... Group descriptors look bad... trying backup blocks... Inode table for group 1519 is not in group. (block 7503623) WARNIN...
2011 Nov 11
3
[PATCH v2] Add mdadm-create, list-md-devices APIs.
This adds the mdadm-create API for creating RAID devices, and includes various fixes for the other two patches. Rich.
2012 Jun 12
9
[PATCH v2 0/9]
More comprehensive support for virtio-scsi. Passes all the tests. Rich.
2011 Nov 24
1
[PATCH] Rename mdadm_ apis to md_
...etadata for an MD device", "\ diff --git a/regressions/test-list-filesystems.sh b/regressions/test-list-filesystems.sh index 1144286..353cdd0 100755 --- a/regressions/test-list-filesystems.sh +++ b/regressions/test-list-filesystems.sh @@ -50,7 +50,7 @@ vgcreate vg0 /dev/sdb1 lvcreate lv0 vg0 16 # Create an md device from sda2 and sdb2 -mdadm-create test "/dev/sda2 /dev/sdb2" level:raid1 +md-create test "/dev/sda2 /dev/sdb2" level:raid1 # Create filesystems mkfs ext3 /dev/sda1 diff --git a/regressions/test-list-md-devices.sh b/regressions/test-list-md-devi...
2010 Mar 10
9
Error starting stubdom HVM on Xen-3.4.3-rc4-pre
Hi there, Last night I was trying to start an HVM domU via the stubdom-dm device model. Initially I did not receive any error to stdout when I did so with Xen-3.4.2. My Xen-3.4.2 installation works fine with qemu-dm (or regular HVM guests). The stubdom-dm guest I was trying to create did not really operate, as I was unable to connect to the VNC console. The output of xm list showed the DomU was there,
2008 Feb 05
2
[PATCH 0/2] dm-ioband v0.0.3: The I/O bandwidth controller: Introduction
Hi everyone, This is the dm-ioband version 0.0.3 release. Dm-ioband is an I/O bandwidth controller implemented as a device-mapper driver, which gives a specified bandwidth to each job running on the same physical device. Changes since 0.0.2 (23rd January): - Ported to linux-2.6.24. - Rename this device-mapper device to "ioband." - The output format of "dmsetup
2008 May 27
8
How to manage partitions and logical volumes with puppet?
Hi, As someone new to puppet I'm trying to work out the best way to manage different filesystems and logical volumes on different servers. Specifically I would like to be able to define on a series of nodes different LVM logical volumes to create and mount. I'm trying to do this at the moment with a define of the following type: # Manage a partition and create if needed.
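For reference, the operations such a define typically wraps look roughly like the shell below; the VG/LV names, size and mount point are hypothetical, and in puppet each step would normally become an exec (or LVM-module resource) guarded by an appropriate 'unless' check.

# Hypothetical example: create, format and mount a logical volume vg00/data
lvs vg00/data >/dev/null 2>&1        || lvcreate -L 20G -n data vg00
blkid /dev/vg00/data >/dev/null 2>&1 || mkfs.ext3 /dev/vg00/data
mkdir -p /srv/data
mountpoint -q /srv/data              || mount /dev/vg00/data /srv/data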
2008 Apr 24
2
[PATCH 0/2] dm-ioband: I/O bandwidth controller v0.0.4: Introduction
Hi everyone, This is dm-ioband version 0.0.4 release. Dm-ioband is an I/O bandwidth controller implemented as a device-mapper driver, which gives specified bandwidth to each job running on the same physical device. Changes since 0.0.3 (5th February): - Improve the performance when many processes are issuing I/Os simultaneously. - Change the table format to fully support dmsetup
2009 May 12
1
[PATCH 1/1] dm-ioband: I/O bandwidth controller
...[ASCII block diagram from the patch: two ioband groups with weights (80) and (40) sitting on /dev/mapper/lv0 and /dev/mapper/lv1, striped logical volumes in volume group vg0]...