similar to: IRC question - failing tests

Displaying 20 results from an estimated 20000 matches similar to: "IRC question - failing tests"

2018 Jan 12
5
[PATCH 1/1] appliance: init: Avoid running degraded md devices
The '--no-degraded' flag in the first mdadm call inhibits the startup of an array unless all expected drives are present, which prevents arrays from starting in a degraded state. The second mdadm call (after LVM is scanned) scans the devices not yet used and attempts to run all arrays it finds, even if they are degraded. Two new tests are added. This fixes rhbz1527852. Here is boot-benchmark
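A minimal sketch of the two-stage assembly described above, assuming example device names and a simplified init flow rather than the patch's actual code:

    # first pass: assemble incrementally, refusing to start incomplete arrays
    mdadm --incremental --no-degraded /dev/sdb1
    # ... LVM physical volumes are scanned and activated here ...
    # second pass: assemble any still-unused devices and run the resulting
    # arrays even if they are degraded
    mdadm --assemble --scan --run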
2016 Jan 26
1
[PATCH] xfs_admin: do not set lazycounter in tests not checking that
This flag cannot be disabled (yet) on V5 XFS filesystems; since two of the current three xfs_admin tests check results other than that flag, avoid setting it when it is not needed. --- generator/actions.ml | 4 ++-- 1 file changed, 2 insertions(+), 2 deletions(-) diff --git a/generator/actions.ml b/generator/actions.ml index 9ea5736..14902e7 100644 --- a/generator/actions.ml +++
2008 Aug 17
2
mirroring with LVM?
I'm pulling my hair out trying to set up a mirrored logical volume. lvconvert tells me I don't have enough free space, even though I have hundreds of gigabytes free on both physical volumes. Command: lvconvert -m1 /dev/vg1/iscsi_deeds_data Insufficient suitable allocatable extents for logical volume : 10240 more required Any ideas? Thanks! Gordon Here's the output from the
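The message usually points at the mirror log rather than the mirror legs: by default -m1 needs a small extent on a third PV to hold the log. A sketch of the common workarounds, using the LV name from the post (flags assumed available in this LVM version):

    # keep the mirror log in memory instead of on a separate disk
    lvconvert -m1 --mirrorlog core /dev/vg1/iscsi_deeds_data
    # ...or let LVM place the log on one of the mirror legs
    lvconvert -m1 --alloc anywhere /dev/vg1/iscsi_deeds_data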
2011 Feb 17
0
Can't create mirrored LVM: Insufficient suitable allocatable extents for logical volume : 2560 more required
I'm trying to set up an LVM mirror on two iSCSI targets, but can't. I have added both /dev/sda & /dev/sdb to the LVM-RAID VG, and both have 500GB space. [root at HP-DL360 by-path]# pvscan PV /dev/cciss/c0d0p2 VG LVM lvm2 [136.59 GB / 2.69 GB free] PV /dev/sda VG LVM-RAID lvm2 [500.00 GB / 490.00 GB free] PV /dev/sdb VG LVM-RAID lvm2 [502.70 GB /
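When pvscan shows free space but the mirror still fails, it helps to check free extents per PV, since each mirror leg (and the log) must fit on its own PV; a sketch using standard reporting options:

    # free space broken down per PV and per VG
    pvs -o pv_name,vg_name,pv_size,pv_free
    vgs -o +vg_free_count LVM-RAID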
2017 Jul 26
0
[PATCH 2/2] tests: lvm: Make the lvm_set_filter test easier to understand.
No functional change. --- tests/lvm/test-lvm-filtering.sh | 14 +++++++++----- 1 file changed, 9 insertions(+), 5 deletions(-) diff --git a/tests/lvm/test-lvm-filtering.sh b/tests/lvm/test-lvm-filtering.sh index 0c8b8803a..abb88ae6c 100755 --- a/tests/lvm/test-lvm-filtering.sh +++ b/tests/lvm/test-lvm-filtering.sh @@ -40,22 +40,22 @@ pvcreate /dev/sdb1 vgcreate VG1 /dev/sda1 vgcreate VG2
2011 Feb 25
3
can't create large LVM, even though pvscan shows enough space left
I'm trying to create a 500GB LV on a 500GB physical volume, but can't: [root at francois-pc ~]# pvscan PV /dev/sdd VG freenas lvm2 [500.00 GB / 500.00 GB free] PV /dev/sdc VG thecus lvm2 [1010.00 GB / 910.00 GB free] PV /dev/mapper/ddf1_RAIDp2 VG VolGroup00 lvm2 [931.25 GB / 0 free] Total: 3 [2.38 TB] / in use: 3 [2.38 TB]
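A "500GB" request on a "500.00 GB" PV typically fails because -L counts binary gigabytes and a few extents go to metadata; allocating by extents sidesteps the arithmetic. A sketch with an assumed LV name:

    # take every remaining extent instead of guessing a size in GB
    lvcreate -l 100%FREE -n data freenas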
2010 Jul 01
0
GNBD/LVM problem
Hello all: I'm having a strange problem with GNBD and LVM on two fully updated CentOS 5.5 x86_64 systems. On node1, I have exported a GNBD volume: lvcreate -L 500M -n mirrortest_lv01 mirrorvg; gnbd_serv; gnbd_export -d /dev/mirrorvg/mirrortest_lv01 -e node1_lv01. On node2 I have imported the volume: gnbd_import -i node1. Next, on node2 I attempt to create a mirrored LV with the
2015 Jan 13
3
[PATCH] mkfs: add 'label' optional argument
Add the 'label' optional argument to the mkfs action, so it is possible to set a filesystem label directly when creating it. Some filesystems may not support changing the label of an existing filesystem, only setting it at creation time, so this new optarg will help. Implement it for the most common filesystems (ext*, fat, ntfs, btrfs, xfs), giving an error for all the others, just
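With such an optarg, usage from guestfish would look roughly like this (the image name is a placeholder; 'label:' follows guestfish's usual key:value optarg syntax):

    # make an ext4 filesystem with a label in one step, then read it back
    guestfish -a test.img run : mkfs ext4 /dev/sda label:mydata : vfs-label /dev/sda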
2012 Aug 20
1
[PATCH] xfs: add new api xfs_admin
Add new api xfs_admin to change parameters of an XFS filesystem. Signed-off-by: Wanlong Gao <gaowanlong at cn.fujitsu.com> --- daemon/xfs.c | 78 ++++++++++++++++++++++++++++++++++++++++++ generator/generator_actions.ml | 21 ++++++++++++ gobject/Makefile.inc | 6 ++-- guestfs-release-notes.txt | 1 + po/POTFILES | 1 + src/MAX_PROC_NR
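A hedged sketch of calling the resulting API from guestfish, using the lazycounter optarg mentioned in the xfs_admin tests (image and device names assumed):

    # toggle lazy counters on an unmounted XFS filesystem
    guestfish -a test.img run : xfs-admin /dev/sda1 lazycounter:true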
2007 Jan 04
2
Freeing pv space for snapshots
After upgrading my HD, I now wish I had left some space for doing snapshots. Is there a way to free up some space so I can get some free PEs? Right now I have this: # vgdisplay --- Volume group --- VG Name VolGroup00 System ID Format lvm2 Metadata Areas 1 Metadata Sequence No 7 VG Access read/write VG Status resizable
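The usual route to free PEs is to shrink the filesystem first and then the LV beneath it; a sketch assuming an ext3 LV named LogVol00 (all sizes are placeholders, and this should only be done offline with backups):

    e2fsck -f /dev/VolGroup00/LogVol00
    resize2fs /dev/VolGroup00/LogVol00 180G
    # shrink the LV, keeping it larger than the new filesystem
    lvreduce -L 190G /dev/VolGroup00/LogVol00
    vgdisplay VolGroup00    # Free PE should now be non-zero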
2012 Aug 21
1
[PATCH] xfs: add a new api xfs_repair
Add a new api xfs_repair for repairing an XFS filesystem. Signed-off-by: Wanlong Gao <gaowanlong at cn.fujitsu.com> --- daemon/xfs.c | 116 +++++++++++++++++++++++++++++++++++++++++ generator/generator_actions.ml | 23 ++++++++ gobject/Makefile.inc | 6 ++- po/POTFILES | 1 + src/MAX_PROC_NR | 2 +- 5 files changed, 145
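The underlying tool can also be run directly; a sketch of a dry run followed by a real repair (device path assumed, filesystem unmounted):

    # -n reports problems without modifying the filesystem
    xfs_repair -n /dev/vg0/xfslv
    xfs_repair /dev/vg0/xfslv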
2017 Nov 07
0
Re: using LVM thin pool LVs as a storage for libvirt guest
Please don't use LVM thin for VMs. In our hosting in Russia we have 100-150 VPSes on each node with an LVM thin pool on SSD, and we get locks, slowdowns and other bad things because of COW. After switching to qcow2 files on a plain SSD ext4 filesystem we are happy =). 2017-11-04 23:21 GMT+03:00 Jan Hutař <jhutar@redhat.com>: > Hello, > as usual, I'm a few years behind trends so I have learned
2017 Nov 04
3
using LVM thin pool LVs as a storage for libvirt guest
Hello, as usual, I'm a few years behind trends, so I have learned about LVM thin volumes recently and I especially like that your volumes can be "sparse" - that you can have a 1TB thin volume on a 250GB VG/thin pool. Is it somehow possible to use that with libvirt? I have found this post from 2014: https://www.redhat.com/archives/libvirt-users/2014-August/msg00010.html which says
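One way this can work is to hand the thin LV to the guest as a plain block device, since the sparseness lives entirely in LVM; a sketch with assumed domain, VG, and pool names:

    # a 1TB thin volume in a 250GB pool, attached raw to a running guest
    lvcreate --thin -V 1T -n guest_disk vg0/thinpool
    virsh attach-disk myguest /dev/vg0/guest_disk vdb --persistent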
2018 Mar 28
0
Re: [PATCH FOR DISCUSSION ONLY v2] v2v: Add -o kubevirt output mode.
On Wed, Mar 28, 2018 at 01:37:06PM +0200, Piotr Kliczewski wrote: > On Wed, Mar 28, 2018 at 1:01 PM, Richard W.M. Jones <rjones@redhat.com> > wrote: > > > On Wed, Mar 28, 2018 at 12:33:56PM +0200, Piotr Kliczewski wrote: > > > configure: error: Package requirements (jansson >= 2.7) were not met: > > > > You need to install jansson-devel. > > >
2017 Nov 07
1
Re: using LVM thin pool LVs as a storage for libvirt guest
Do you have some comparison of IO performance on a thin pool vs. a qcow2 file on a filesystem? In my case each VM would have its own thin volume. I just want to overcommit disk space. Regards, Jan On 2017-11-07 13:16 +0300, Vasiliy Tolstov wrote: >Please don't use lvm thin for vm. In our hosting in Russia we have >100-150 vps on each node with lvm thin pool on ssd and have locks, >slowdowns
2007 Apr 27
9
can't mount vfat fs on lvm created by winxp guest
Greetings, I've had no success with mounting a vfat file system created by a Windows XP guest on an LVM volume. # mount -t vfat /dev/vg1/win1 /mnt/ mount: wrong fs type, bad option, bad superblock on /dev/vg1/win1, missing codepage or other error In some cases useful info is found in syslog - try dmesg | tail or so # dmesg FAT: invalid media value (0xb9) VFS:
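An "invalid media value" here often means the guest wrote a partition table inside the LV rather than a bare filesystem; a sketch of mapping and mounting the embedded partition (the mapper name kpartx generates is assumed):

    # expose partitions nested inside the LV as device-mapper nodes
    kpartx -av /dev/vg1/win1
    mount -t vfat /dev/mapper/vg1-win1p1 /mnt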
2016 Oct 21
0
VM disk question
> However I wish to change the disk from UUID booting (fstab) to the old > style LABEL. > (so I can export it and use it on another machine). > > however when I run: > e2label /dev/sda1 / > e2label: Bad magic number in super-block while trying to open /dev/sda1 > Couldn't find valid filesystem superblock. > > Three questions: > 1) Am I doing something wrong? >
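The "Bad magic number" suggests /dev/sda1 does not hold an ext2/3/4 filesystem, which is all e2label understands; a sketch of checking what is actually there first:

    # identify the filesystem type and any existing label/UUID
    blkid /dev/sda1
    file -s /dev/sda1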
2011 Feb 13
2
using an lvm for kvm vm
Is there a simple way to directly install a VM on an LV (or probably separate LVs for root and swap)? For example something like: lvcreate -L 10G -n testvm_root vg_myvg; lvcreate -L 1G -n testvm_swap vg_myvg. Then somehow set up the VM to be able to directly install and boot from these LVs. How do I do this? Could you then pause the virtual machine and safely take an LVM snapshot,
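A sketch of one way to do this with virt-install, installing straight onto the LVs (names, sizes, and the ISO path are placeholders):

    lvcreate -L 10G -n testvm_root vg_myvg
    lvcreate -L 1G -n testvm_swap vg_myvg
    virt-install --name testvm --ram 1024 \
        --disk path=/dev/vg_myvg/testvm_root \
        --disk path=/dev/vg_myvg/testvm_swap \
        --cdrom /path/to/install.iso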
2010 Feb 28
3
puzzling md error ?
this has never happened to me before, and I'm somewhat at a loss. Got an email from the cron thing... /etc/cron.weekly/99-raid-check: WARNING: mismatch_cnt is not 0 on /dev/md10 WARNING: mismatch_cnt is not 0 on /dev/md11 OK, md10 and md11 are each RAID1s made from 2 x 72GB SCSI drives, on a Dell 2850 or similar dual single-core 3GHz server. These two mds are in
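The standard follow-up for a non-zero mismatch_cnt on a RAID1 is a repair pass and then a fresh check, via the md sysfs interface (device names from the post):

    echo repair > /sys/block/md10/md/sync_action
    cat /proc/mdstat                      # wait for the resync to finish
    echo check > /sys/block/md10/md/sync_action
    cat /sys/block/md10/md/mismatch_cnt   # should now read 0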
2010 Jan 21
1
/proc/mounts always shows "nobarrier" option for xfs, even when mounted with "barrier"
Ran into a confusing situation today. When I mount an xfs filesystem on a server running centos 5.4 x86_64 with kernel 2.6.18-164.9.1.el5, the barrier/nobarrier mount option as displayed in /proc/mounts is always set to "nobarrier" Here's an example: [root at host ~]# mount -o nobarrier /dev/vg1/homexfs /mnt [root at host ~]# grep xfs /proc/mounts /dev/vg1/homexfs /mnt xfs
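Since /proc/mounts reportedly shows "nobarrier" either way on this kernel, the kernel log is a more reliable witness; a sketch of cross-checking (exact messages vary by kernel):

    mount -o barrier /dev/vg1/homexfs /mnt
    # XFS logs a warning if barriers were requested but had to be disabled
    dmesg | grep -i barrier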