similar to: lvm cache + qemu-kvm stops working after about 20GB of writes

Displaying 20 results from an estimated 100 matches similar to: "lvm cache + qemu-kvm stops working after about 20GB of writes"

2017 Apr 10
0
lvm cache + qemu-kvm stops working after about 20GB of writes
Adding Paolo and Miroslav. On Sat, Apr 8, 2017 at 4:49 PM, Richard Landsman - Rimote <richard at rimote.nl> wrote: > Hello, I would really appreciate some help/guidance with this problem. First of all, sorry for the long message. I would file a bug, but do not know if it is my fault, dm-cache, qemu or (probably) a combination of both. And I can imagine some of
2017 Apr 20
2
lvm cache + qemu-kvm stops working after about 20GB of writes
Hello everyone, Has anybody had the chance to test out this setup and reproduce the problem? I assumed it would be something that's used often these days and that a solution would benefit a lot of users. If I can be of any assistance, please contact me. -- Kind regards, Richard Landsman http://rimote.nl T: +31 (0)50 - 763 04 07 (Mon-Fri 9:00 to 18:00) 24/7 for outages: +31 (0)6 - 4388
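For anyone wanting to reproduce this, a minimal sketch of the kind of LVM cache setup under discussion; the volume group, device and LV names here are hypothetical and not taken from the original report:
  lvcreate -n vmdisk -L 500G vg0 /dev/sdb1                          # origin LV on the slow HDD
  lvcreate --type cache-pool -n vmdisk_cache -L 50G vg0 /dev/sda2   # cache pool on the SSD
  lvconvert --type cache --cachepool vg0/vmdisk_cache vg0/vmdisk    # attach the cache to the origin LV
  lvs -a vg0                                                        # verify the cached LV and its pool
The cached LV is then handed to qemu-kvm as the guest's block device, which is where the reported stall after ~20GB of writes appears.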
2017 Aug 19
2
Problem with softwareraid
Hello Gordon, yeah, it is really strange. From one boot to the next, everything is f** up (2 months between). Any idea? [root at quad live]# lsblk NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT sda 8:0 0 1.8T 0 disk └─sda1 8:1 0 1.8T 0 part
2018 Jun 07
2
Matching ConstantFPSDNode tablegen
I'm trying to match a ConstantFPSDNode == 0 in a TableGen DAG pattern but am having some issues. LLVM doesn't seem to accept a floating-point constant literal match like: %v = call <4 x float> @foo(i32 15, float %s, float 0.0, <8 x i32> %rsrc, <4 x i32> %samp, i1 0, i32 0, i32 0) ret <4 x float> %v def : XXXPat<(v4f32 (int_foo i32:$mask, f32:$s, 0,
2017 Aug 18
4
Problem with softwareraid
Hello all, I have already had a discussion on the software raid mailing list and I want to switch to this one :) I am having a really strange problem with my md0 device running CentOS 7. After a restart of my server the md0 was gone. While trying to find the problem I detected the following: booting any installed kernel gives me NO md0 device (ls /dev/md* doesn't give anything). A 'cat
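Not from the original post, but the usual first checks when an md device disappears after a reboot look roughly like this (member partition names are placeholders):
  cat /proc/mdstat                      # is the array assembled at all?
  mdadm --examine /dev/sda1 /dev/sdb1   # do the members still carry md superblocks?
  mdadm --assemble --scan --verbose     # try to assemble from whatever superblocks are found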
2011 Sep 02
0
Copying data failed on distributed replicated volume (ver. 3.1.3)
Hi, I am trying to back up data from a distributed replicated volume. The volume was built from six 2 TB hard disks: gluster> volume info Volume Name: 6TB-Vol Type: Distributed-Replicate Status: Started Number of Bricks: 3 x 2 = 6 Transport-type: tcp Bricks: Brick1: c107:/exp0 Brick2: c108:/exp0 Brick3: c109:/exp0 Brick4: c110:/exp0 Brick5: c111:/exp0 Brick6: c112:/exp0 Options
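As a rough illustration only (the mount point and backup target below are made up), backing up such a volume is normally done through a client mount rather than from the bricks directly:
  mount -t glusterfs c107:/6TB-Vol /mnt/6TB-Vol
  rsync -aHAX --partial --progress /mnt/6TB-Vol/ /backup/6TB-Vol/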
2017 Nov 13
1
Shared storage showing 100% used
Hello list, I recently enabled shared storage on a working cluster with nfs-ganesha and am just storing my ganesha.conf file there so that all 4 nodes can access it (baby steps). It was all working great for a couple of weeks until I was alerted that /run/gluster/shared_storage was full, see below. There was no warning; it went from fine to critical overnight.
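A quick way to see what is actually consuming the space, sketched here using only the path from the post:
  df -h /run/gluster/shared_storage
  du -xsh /run/gluster/shared_storage/* 2>/dev/null | sort -h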
2012 Aug 19
5
fail to mount after first reboot
I created a 1TB RAID1. So far it is just for testing, no important data on there. After a reboot, I tried to mount it again # mount /dev/mapper/vg00-btrfsvol0_0 /mnt/btrfs0 mount: wrong fs type, bad option, bad superblock on /dev/mapper/vg00-btrfsvol0_0, missing codepage or helper program, or other error In some cases useful info is found in syslog - try dmesg | tail or
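Not part of the original message, but for a multi-device btrfs RAID1 that refuses to mount after a reboot, the usual first steps are roughly:
  btrfs device scan                                               # register all member devices with the kernel
  mount /dev/mapper/vg00-btrfsvol0_0 /mnt/btrfs0                  # retry the normal mount
  mount -o degraded,ro /dev/mapper/vg00-btrfsvol0_0 /mnt/btrfs0   # last resort if a member is missing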
2017 Aug 20
0
Problem with softwareraid
On 08/19/2017 12:06 PM, Mr Typo wrote: > sda 8:0 0 1.8T 0 disk > └─sda1 8:1 0 1.8T 0 part > └─WDC_WD20EFRX-68AX9N0_WD-WMC1T2547260 253:3 0 1.8T 0 mpath > └─WDC_WD20EFRX-68AX9N0_WD-WMC1T2547260p1 253:8 0 1.8T 0 part You haven't said anything about multipath hardware yet,
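If multipath really has claimed the local disks, as the lsblk output suggests, a common fix is to blacklist them so md can assemble the members. A sketch only; the wwid is a placeholder to be replaced with the real value from multipath -ll:
  # /etc/multipath.conf
  blacklist {
      wwid "<wwid-of-the-local-WDC-disk>"
  }
  multipath -F    # flush the stale multipath maps
  dracut -f       # rebuild the initramfs so the blacklist applies at boot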
2016 Jul 12
3
Broken output for fdisk -l
Hi, There was some problem with our system so I re-installed the server with CentOS 7. Now, when I try to run the 'fdisk -l' command, it returns broken output. It throws this error: "fdisk: cannot open /dev/sdc: Input/output error". There are valid /dev/sdd and /dev/sde devices which are mounted and accessible, but somehow /dev/sdc is having a problem and
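Not from the thread, but the standard way to see whether /dev/sdc is a dying disk or a cabling/controller issue is roughly:
  dmesg | grep -i sdc       # kernel I/O errors for that device
  smartctl -a /dev/sdc      # SMART health, reallocated/pending sectors (needs smartmontools)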
2013 Feb 12
2
Lost folders after changing MDS
OK, so our old MDS had hardware issues, so I configured a new MGS / MDS on a VM (this is a backup Lustre filesystem and I wanted to separate the MGS / MDS from the OSS of the previous setup), and then did this: For example: mount -t ldiskfs /dev/old /mnt/ost_old mount -t ldiskfs /dev/new /mnt/ost_new rsync -aSv /mnt/ost_old/ /mnt/ost_new # note trailing slash on ost_old/ If you are unable to connect both
2018 Feb 19
5
CentOS 7 1708 won't boot after grub2 update
Hi all, This is the third fresh install of CentOS 7 1708 in the last two months that won't boot after a regular "yum update" run just after the install has finished. I've never had this problem before; it has always just worked. The installs are on OEM machines that previously ran CentOS 6.9 x64. Am I missing something here? -- BW, Sorin
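A typical recovery sketch from the install media's rescue mode, assuming a BIOS/MBR install on /dev/sda (not confirmed by the post):
  chroot /mnt/sysimage
  grub2-install /dev/sda
  grub2-mkconfig -o /boot/grub2/grub.cfg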
2016 Apr 07
2
Suddenly increased my hard disk
Hi John, Ashish, Still no luck. I have tried your commands in the root folder. It shows a maximum of 384 only for the home directory, but if I try df -h it shows 579. Is there any way to find a recycle bin folder? On Thu, Apr 7, 2016 at 2:16 PM, Ashish Yadav <gwalashish at gmail.com> wrote: > Hi Chandran, > On Thu, Apr 7, 2016 at 10:38 AM, Chandran Manikandan <tech2mani at
2016 Aug 11
5
Software RAID and GRUB on CentOS 7
Hi, When I perform a software RAID 1 or RAID 5 installation on a LAN server with several hard disks, I wonder if GRUB already gets installed on each individual MBR, or if I have to do that manually. On CentOS 5.x and 6.x, this had to be done like this: # grub grub> device (hd0) /dev/sda grub> device (hd1) /dev/sdb grub> root (hd0,0) grub> setup (hd0) grub> root (hd1,0) grub>
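On CentOS 7 the GRUB Legacy shell is gone; a sketch of the usual equivalent, assuming a BIOS/MBR setup on sda and sdb, is simply to run grub2-install against each member disk:
  grub2-install /dev/sda
  grub2-install /dev/sdb
  grub2-mkconfig -o /boot/grub2/grub.cfg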
2016 Apr 07
0
Suddenly increased my hard disk
On 4/7/2016 12:04 AM, Chandran Manikandan wrote: > Still no luck. I have tried your commands in the root folder. It shows a maximum of 384 only for the home directory, but if I try df -h it shows 579. Is there any way to find a recycle bin folder? The Linux shell has no such thing as a recycle bin; that's a Windows thing (some Linux graphical desktops might create one in
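Not from the original reply, but to actually hunt down a du/df discrepancy, something like the following usually helps:
  du -xh --max-depth=1 / | sort -h     # biggest directories on the root filesystem
  lsof +L1                             # files deleted but still held open; du will not see these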
2016 Aug 11
0
Software RAID and GRUB on CentOS 7
On 08/11/16 02:33, Nicolas Kovacs wrote: > Hi, > > When I perform a software RAID 1 or RAID 5 installation on a LAN server > with several hard disks, I wonder if GRUB already gets installed on each > individual MBR, or if I have to do that manually. On CentOS 5.x and 6.x, > this had to be done like this: > > # grub > grub> device (hd0) /dev/sda > grub> device
2016 Apr 07
1
Suddenly increased my hard disk
Hi John, Thank you. My system shows the following: df -h Filesystem Size Used Avail Use% Mounted on /dev/sda2 909G 576G 287G 67% / tmpfs 3.9G 0 3.9G 0% /dev/shm /dev/sda1 3.9G 160M 3.5G 5% /boot /dev/sdb1 916G 382G 488G 44% /bkhdd First hard disk: /dev/sda. Second hard disk: /dev/sdb. The problem is that the first hard disk shows the above size but my email
2013 Sep 15
1
grub command line
Hello Everyone, I have a remote CentOS 6.4 server (with KVM access). When I received the server it was running with LVM on a single disk (sda). I managed to remove LVM and install RAID 1 on the sda and sdb disks, and the mirroring is working fine. My only issue now is that every time I reboot the server I get the grub command line and I have to boot manually using the command grub> configfile
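Since this is CentOS 6 with GRUB Legacy, the usual fix is to reinstall stage1 on both RAID members from the grub shell; a sketch, assuming sda/sdb with /boot on the first partition of each:
  grub
  grub> device (hd0) /dev/sda
  grub> root (hd0,0)
  grub> setup (hd0)
  grub> device (hd1) /dev/sdb
  grub> root (hd1,0)
  grub> setup (hd1)
  grub> quit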
2015 Feb 19
3
iostat a partition
Hey guys, I need to use iostat to diagnose a disk latency problem we think we may be having. So if I have this disk partition: [root at uszmpdblp010la mysql]# df -h /mysql Filesystem Size Used Avail Use% Mounted on /dev/mapper/MysqlVG-MysqlVol 9.9G 1.1G 8.4G 11% /mysql And I want to correlate that to the output of fdisk -l, so that I can feed the disk
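A sketch of how to map the LVM path back to a physical device and then watch it with iostat; the /dev/sdb at the end is a placeholder for whatever lvs actually reports:
  lvs -o +devices MysqlVG                # shows which PV (e.g. /dev/sdb2) backs MysqlVol
  lsblk /dev/mapper/MysqlVG-MysqlVol     # the device tree beneath the logical volume
  iostat -dxk 5 /dev/sdb                 # extended per-device stats (await, %util) every 5 seconds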
2008 May 19
1
geom_raid5 + FreeBSD 7.0-STABLE + 5x500Gb (1.8T UFS volume) -- crashes :(
Hello, Arne. I am trying to build a storage server for my home (I have a LOT of media files) with FreeBSD 7, 5x HDD (WD 500 GB) and geom_raid5 (the "simple" version from Perforce, because http://home.tiscali.de/cmdr_faako/geom_raid5.tbz is not patched for FreeBSD 7). The array & filesystem were created with default arguments: # graid5 label storage ad6 ad8 ad10 ad12 ad14 # newfs -O2 -U /dev/raid5/storage