similar to: [PATCH] fish: Increase default size of prepared disks (-N) to 1G.

Displaying 20 results from an estimated 10000 matches similar to: "[PATCH] fish: Increase default size of prepared disks (-N) to 1G."

2016 May 19
0
[PATCH 3/3] fish: generate test-prep.sh with generator
Generate test-prep.sh using the generator, so the prepared disk types tested are the same as the ones configured in prepopts.ml. --- .gitignore | 1 + fish/test-prep.sh | 35 ----------------------------------- generator/fish.ml | 33 +++++++++++++++++++++++++++++++++ generator/fish.mli | 1 + generator/main.ml | 1 + 5 files changed, 36 insertions(+), 35 deletions(-) delete mode
2011 Jun 02
2
increase hard disk space: Failed to suspend LogVol00
Hi, I want to increase my hard disk space and receive the following error # lvextend -l +323 /dev/VolGroup00/LogVol00 Extending logical volume LogVol00 to 48.97 GB device-mapper: reload ioctl failed: Invalid argument Failed to suspend LogVol00 Can you help me please? # fdisk -l Disk /dev/hda: 54.7 GB, 54759997440 bytes 255 heads, 63 sectors/track, 6657 cylinders Units =
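A hedged pre-flight sketch for this kind of failure, using the VG/LV names from the post. The 32 MiB extent size is an assumption (verify with `vgdisplay VolGroup00 | grep 'PE Size'`); the `dmsetup version` check is there because a kernel/userspace device-mapper mismatch is a classic source of "reload ioctl failed".

```shell
# How much space 323 extents would add, assuming 32 MiB physical extents:
extents=323
pe_mib=32
echo "adding $(( extents * pe_mib / 1024 )) GiB"
# On the affected host, before retrying:
# vgdisplay VolGroup00    # confirm Free PE >= 323
# dmsetup version         # kernel vs. userspace device-mapper versions
# lvextend -l +323 /dev/VolGroup00/LogVol00
```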
2016 Sep 27
0
Re: [PATCH] fish: drop leading '/' in nbd paths (RHBZ#1379585)
On Tue, Sep 27, 2016 at 11:20:07AM +0200, Pino Toscano wrote: > When parsing the URI, drop the leading '/' from the path also when the > protocol is 'nbd': in this case, the path represents the export name, > which does not need the '/' coming from the URI format. > > Improve the coverage for nbd in test-add-uri.sh, adding a couple of > tests, and
2014 Oct 05
0
lvcreate error
Hello, I am unable to create a new logical volume; I receive the following error when using lvcreate # lvcreate -L 1g -n system3_root hm device-mapper: resume ioctl on failed: Invalid argument Unable to resume hm-system3_root (253:7) Failed to activate new LV. # pvs PV VG Fmt Attr PSize PFree /dev/sda4 hm lvm2 a-- 998.00g 864.75g # vgs VG #PV #LV #SN Attr VSize
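A hedged diagnostic sketch, assuming a stale device-mapper mapping is blocking activation of the new LV (the names are the ones from the post; this is one plausible cause, not a confirmed diagnosis):

```shell
# Is there a leftover mapping already sitting at 253:7?
dmsetup info hm-system3_root
# If so, clear it and retry the create:
dmsetup remove hm-system3_root
lvcreate -L 1g -n system3_root hm
# Checking "dmesg | tail" right after the failure usually names the real
# cause (e.g. a missing device-mapper target module).
```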
2005 Dec 22
2
ext2online failure
Could someone tell me what could be causing this failure on my system and a way to get around/fix it? Your help is very much appreciated. I'd just finished running lvm lvextend. "lvextend -L+1G /dev/VolGroup00/LogVol00", after adding a new 1G partition (/dev/sda4) to /dev/VolGroup00. [root at ppstest13 ~]# ext2online -d -v /dev/VolGroup00/LogVol00 ext2online v1.1.18 -
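A sketch of the full grow sequence for that era (CentOS 4, ext2online); the device and VG/LV names are the ones from the post:

```shell
pvcreate /dev/sda4
vgextend VolGroup00 /dev/sda4
lvextend -L +1G /dev/VolGroup00/LogVol00
ext2online /dev/VolGroup00/LogVol00
# If ext2online still refuses, the fallback is an offline resize2fs from
# rescue media: online ext3 growth is limited by the resize space that
# was reserved when the filesystem was created.
```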
2016 Sep 27
2
[PATCH] fish: drop leading '/' in nbd paths (RHBZ#1379585)
When parsing the URI, drop the leading '/' from the path also when the protocol is 'nbd': in this case, the path represents the export name, which does not need the '/' coming from the URI format. Improve the coverage for nbd in test-add-uri.sh, adding a couple of tests, and adjusting the result of an existing one. --- fish/test-add-uri.sh | 8 +++++++- fish/uri.c
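The patched behaviour can be illustrated with a hedged usage sketch (the host, port, and export name are hypothetical, not from the patch):

```shell
# After the fix, the URI path is taken verbatim as the NBD export name,
# without the leading '/' that URI syntax forces:
guestfish --ro -a nbd://example.com:10809/export1
# Before the fix, the export name would have been sent as "/export1".
```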
2006 Nov 15
0
System crashed & LVM1
Hi all! I'm new to this list. I wanted to get opinions on the situation I'm facing now (I'm no expert in LVM). Last week, I had to re-install a system which had a system hard drive crash (it was clunking before it definitely died!). It was running SuSE 9.0 before. I just finished installing a fresh copy of CentOS 4.4 on a new hard disk. I want to know if it's possible
2012 Aug 06
0
Problem with mdadm + lvm + drbd + ocfs ( sounds obvious, eh ? :) )
Hi there. First of all, apologies for the lengthy message, but it's been a long weekend. I'm trying to set up a two node cluster with the following configuration: OS: Debian 6.0 amd64 ocfs: 1.4.4-3 ( debian package ) drbd: 8.3.7-2.1 lvm2: 2.02.66-5 kernel: 2.6.32-45 mdadm: 3.1.4-1+8efb9d1+squeeze1 layout: 0- 2 36GB scsi disks in a raid1 array, with mdadm. 1- 1 lvm2 VG above the raid1,
2008 Aug 17
2
mirroring with LVM?
I'm pulling my hair out trying to setup a mirrored logical volume. lvconvert tells me I don't have enough free space, even though I have hundreds of gigabytes free on both physical volumes. Command: lvconvert -m1 /dev/vg1/iscsi_deeds_data Insufficient suitable allocatable extents for logical volume : 10240 more required Any ideas? Thanks!, Gordon Here's the output from the
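One hedged explanation that fits the symptom: with only two PVs, `lvconvert -m1` also needs an extent for the on-disk mirror log, preferably on a PV other than the two data legs. A sketch of the common workaround:

```shell
# Keep the mirror log in memory instead of on disk. This sidesteps the
# need for a third PV, at the cost of a full resync after every reboot:
lvconvert -m1 --mirrorlog core /dev/vg1/iscsi_deeds_data
```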
2017 Apr 10
0
lvm cache + qemu-kvm stops working after about 20GB of writes
Adding Paolo and Miroslav. On Sat, Apr 8, 2017 at 4:49 PM, Richard Landsman - Rimote <richard at rimote.nl> wrote: > Hello, > > I would really appreciate some help/guidance with this problem. First of > all sorry for the long message. I would file a bug, but do not know if it > is my fault, dm-cache, qemu or (probably) a combination of both. And I can > imagine some of
2010 Feb 15
3
My first type/provider - does nothing...
Hi list, I tried to write my first type and provider that should create logical volumes. Seems like I'm missing something, as I get nothing when I use it: no errors and no logical volume :-( type/logicalvolume.rb: ================= Puppet::Type.newtype(:logicalvolume) do @doc = "Manage logical volumes" ensurable newparam(:lvname) do desc "The logical
2015 Aug 27
0
[PATCH v4 2/2] fish: add journal-view command
Lets user view journald log from VM in a similar format as journalctl uses. Fixes RFE: journal reader in guestfish (RHBZ#988100) --- fish/fish.h | 3 +++ generator/Makefile.am | 6 ++++-- generator/actions.ml | 22 ++++++++++++++++++++++ generator/main.ml | 3 +++ 4 files changed, 32 insertions(+), 2 deletions(-) diff --git a/fish/fish.h b/fish/fish.h index df22e34..8ae6454
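A usage sketch of the command added by the patch (the disk image name is hypothetical):

```shell
# -i inspects and mounts the guest's filesystems; journal-view then
# prints the guest's journald log in a journalctl-like format:
guestfish --ro -a fedora-guest.img -i journal-view
```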
2012 Apr 16
2
libvirt responds slowly after defining a pool on an existing VG with other LVs
If I add a VG that has some other LVs to a libvirt pool, then libvirt responds slowly. How can I fix it? <pool type='logical'> <name>LVM_MAIN</name> <uuid>a2713bed-ad4a-fb79-83b5-65a9e8f1094e</uuid> <capacity>0</capacity> <allocation>0</allocation> <available>0</available> <source>
2009 Mar 24
1
Disks do not mount at boot
I have a problem with two entries in my /etc/fstab. When I boot the machine, the disks are not mounted. When I give mount -a, all disks are present without an error. Of course I don't want to manually do that after each reboot. What can be the problem? CentOS 5.2 cat /etc/fstab /dev/vg/centos / ext3 defaults 1 1 LABEL=/boot /boot
2009 Jun 05
1
DRBD+GFS - Logical Volume problem
Hi list. I am dealing with DRBD (+GFS as its DLM). GFS configuration needs a CLVMD configuration. So, after synchronizing my (two) /dev/drbd0 block devices, I start the clvmd service and try to create a clustered logical volume. I get this: On "alice": [root at alice ~]# pvcreate /dev/drbd0 Physical volume "/dev/drbd0" successfully created [root at alice ~]# vgcreate
2015 Feb 19
0
CentOS 7: software RAID 5 array with 4 disks and no spares?
On Wed, Feb 18, 2015 at 9:25 PM, John R Pierce <pierce at hogranch.com> wrote: > disks -> partition(s) -> mdraid devices -> PVs -> VG -> LV -> file system. > phew. You might be a candidate for LVM integrated raid. It uses the md kernel code on the backend, but it's all LVM tools to create, manage and monitor. The raid level is defined per LV, instead of all
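A hedged sketch of the LVM-integrated RAID the reply describes (device and VG/LV names are hypothetical):

```shell
pvcreate /dev/sdb /dev/sdc /dev/sdd /dev/sde
vgcreate vg0 /dev/sdb /dev/sdc /dev/sdd /dev/sde
# The raid level is chosen per LV rather than for the whole array:
lvcreate --type raid5 -i 3 -L 100G -n data vg0
# Monitoring is done with the LVM tools too:
lvs -a -o name,segtype,sync_percent vg0
```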
2017 Apr 12
0
qcow2 --> logical volume
Hello CentOS community members, A hardware vendor provided us with a .qcow2 file, to run on our KVM hypervisor, that will monitor/control said hardware (firewall). I'd like to import this .qcow2 to run as a logical volume (named 'server3') in an existing volume group named 'centos' on our CentOS Linux release 7.1.1503 server. Right now the .qcow2 file is sitting
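A hedged import sketch (the path and the 20G size are placeholders; the LV must be at least as large as the image's *virtual* size, which `qemu-img info` reports):

```shell
qemu-img info /path/to/appliance.qcow2          # note "virtual size"
lvcreate -L 20G -n server3 centos               # size >= the virtual size
# Writing raw data straight onto the LV's block device:
qemu-img convert -O raw /path/to/appliance.qcow2 /dev/centos/server3
```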
2013 Sep 20
1
Creating 38TB ext4 FS
mkfs.ext4 fails to create 38TB file system on CentOS 6.4 64bit with this error: mkfs.ext4: Size of device /dev/vg02/vtapes too big to be expressed in 32 bits using a blocksize of 4096. More details follow: # uname -a Linux tzbackup 2.6.32-358.18.1.el6.x86_64 #1 SMP Wed Aug 28 17:19:38 UTC 2013 x86_64 x86_64 x86_64 GNU/Linux # fdisk -l /dev/sdc Disk /dev/sdc: 41996.7 GB, 41996727091200
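The arithmetic behind the error, plus the era-typical workaround, sketched below (the XFS suggestion is an assumption based on CentOS 6's tooling, not from the post):

```shell
# With 4 KiB blocks, 32-bit block numbers can address at most:
echo "$(( (1 << 32) * 4096 / (1 << 40) )) TiB"
# CentOS 6's e2fsprogs predates usable "-O 64bit" support, so the usual
# answer for a 38 TB volume at the time was XFS:
# mkfs.xfs /dev/vg02/vtapes
```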
2007 Dec 06
0
LVM2: large volume problem?
Hi all, I'm having problems creating/resizing an LV up to 1T (well, I can't even reach 300G); my system is CentOS 5.1 x86_64 on a Dell 2950 with 6x500G SATA (RAID5 to approx. 2.5T) [root at Mugello ~]# fdisk -l Disk /dev/sda: 2497.7 GB, 2497791918080 bytes 255 heads, 63 sectors/track, 303672 cylinders Units = cylinders of 16065 * 512 = 8225280 bytes Device Boot Start
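A hedged guess at the culprit: an msdos (MBR) partition table, which caps any partition well below the 2.5 TB array. The arithmetic and a sketch of the usual remedies:

```shell
# MBR partition tables address 2^32 sectors of 512 bytes, i.e. at most:
echo "$(( (1 << 32) * 512 / (1 << 40) )) TiB"
# Remedies (assumption: the PV sits on an MBR partition of /dev/sda):
# parted /dev/sda mklabel gpt    # use a GPT label before partitioning, or
# pvcreate /dev/sda              # skip partitioning: whole-disk PV
```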
2023 Nov 28
0
possible LVM corruption?
While updating a hypervisor, I'm getting the following errors printed to the terminal during rpm upgrade scripts. The first line is printed 15 times, and GRUB prints a similar error at boot. "vgck" doesn't seem to find any problems. Does anyone have suggestions for diagnosing the issue? error: ../grub-core/disk/diskfilter.c:524:unknown node