similar to: Behavior of disk caching with qcow2 disks

Displaying 20 results from an estimated 4000 matches similar to: "Behavior of disk caching with qcow2 disks"

2014 Aug 12
0
Re: Behavior of disk caching with qcow2 disks
On Mon, Aug 11, 2014 at 02:06:54PM -0500, Andrew Martin wrote: > Hello, > > I am running several virtualization servers with QEMU 1.4.x and > libvirt 1.0.2 on Ubuntu 12.04 and am working on optimizing the cache= > and aio= options for the virtual machines. These VM images are mostly > qcow2, and are served both from a local ext4 filesystem (with > data=ordered,barrier) and
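In libvirt's domain XML, the cache= and aio= options discussed above map to attributes on the disk `<driver>` element; a minimal sketch with illustrative path and device names (cache='none' bypasses the host page cache, io='native' corresponds to aio=native):

```
<disk type='file' device='disk'>
  <!-- cache and io are set on the driver element -->
  <driver name='qemu' type='qcow2' cache='none' io='native'/>
  <source file='/var/lib/libvirt/images/guest.qcow2'/>
  <target dev='vda' bus='virtio'/>
</disk>
```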
2014 Oct 14
3
Filesystem writes unexpectedly slow (CentOS 6.4)
I have a rather large box (2x8-core Xeon, 96GB RAM) where I have a couple of disk arrays connected on an Areca controller. I just added a new external array, 8 3TB drives in RAID5, and the testing I'm doing right now is on this array, but this seems to be a problem on this machine in general, on all file systems (even, possibly, NFS, but I'm not sure about that one yet). So, if I use
2015 Jan 16
2
kvm guest from zfs dataset
Hi all, I thought I'd post this in case anyone has issues similar to mine. First, my initial email to the list which I didn't send; I'm trying to run a KVM based guest OS off of a mirrored ZFS dataset. It won't run with errors "invalid argument..", but will run when on the root volume, or when that ZFS dataset has been removed in favor of an EXT4 volume. Thanks in advance, PS I

2014 Mar 21
3
OT: DELL PERC H200
Does anyone know if a PERC H200 is a real RAID controller? I'm about to rebuild a box with CentOS 6.5 (it was Windows...) with RAID 6 on Monday, and this PE R610 has this.... I'm familiar with PERC 6 and 7s, but just dunno 'bout this one. mark
2012 Feb 05
4
qcow2 performance
Greets, I have to research performance-issues of a W2003-VM within KVM. Right now it's a qcow2-image-file w/ default settings within libvirt (configured by vmm ...) My question: what caching to use? writeback/writethrough/etc ... what to use for data integrity while not getting ultraslow performance? Found https://www.linuxfoundation.jp/jp_uploads/JLS2009/jls09_hellwig.pdf Is there
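For reference, the cache mode is selected per -drive on the QEMU command line; a sketch with illustrative file name and memory size (writethrough and none keep data safe on a host crash, writeback is fast but relies on the guest issuing flushes):

```shell
# cache=writethrough: guest writes complete only after reaching stable
# storage - safe on host crash, but each write pays full disk latency.
# cache=none (O_DIRECT) is the usual compromise for qcow2 guests.
qemu-system-x86_64 -m 1024 \
    -drive file=w2003.qcow2,format=qcow2,if=virtio,cache=writethrough
```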
2008 Jun 03
6
development machine with xen on gentoo
Hi all. We just put a relatively large server in operation (by our measures ;-) ), a 4x Dual Core / Intel with 16GB RAM and a 3ware 9550SX SATA-RAID with 4 drives. Operating system for the Dom0 is an up-to-date Gentoo Linux. Everything runs really fine; there are 4 DomUs running, 1x Gentoo, 2x Debian and 1x Windows Server 2003. DomUs are running blazing fast in normal
2015 Jan 06
2
Hardware raid LSI Megaraid not working since Centos 6.6
Thank you for your help. On 05/01/2015 19:10, John R Pierce wrote: > works here fine on the 9261, which is an OEM version of the same card > with the connectors in a different orientation... you might check your > LSI firmware revision. My firmware seems to be more up to date. Anyway, I will try to update the firmware. I have to check how to do that. # dmesg |grep LSI scsi4 : LSI SAS
2013 Jun 26
3
introduce a cache options for PV disks
Document a per-disk cache option in the xl config file to allow users to select the cache mode that the backend should use to open the disk file or device. Document backend options that are part of the vbd xenstore interface. The existing "mode" and "device-type" as well as the new "cache". Signed-off-by: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
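Going by the patch description, the proposed option would sit alongside the existing keys in the disk specification of an xl config file; a hypothetical sketch (vdev, target path, and the cache value itself are illustrative, and the option's final spelling depends on the patch as merged):

```
disk = [ 'format=qcow2, vdev=xvda, access=rw, target=/var/lib/xen/images/guest.qcow2, cache=writeback' ]
```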
2015 Jan 27
4
redistribute virtual machines from vps hosting to end users
Hi. I need to get the ability to download backups of KVM virtual machines (raw images) to end users. VirtualBox users can import the OVA images that I can create, but in the case of Linux and virt-manager - how can users import images and simply run them? I think about newbie Linux users, who can install virt-manager under Ubuntu via a graphical package manager and do some easy steps... Can you share any info
2015 Jan 28
2
Re: redistribute virtual machines from vps hosting to end users
2015-01-28 12:47 GMT+03:00 Kashyap Chamarthy <kchamart@redhat.com>: > I don't have immediate steps with `virt-manager` as I don't use it much > in my workflow. But if you have users who're comfortable with CLI, you > can import disk images into libvirt (and which will also be accessible > via `virt-manager`) trivially: > > $ virt-install --name f21vm --ram
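The quoted virt-install line is cut off above; a command of this kind typically continues with a memory size, the disk image path, and --import to skip the install phase. A sketch with illustrative values (RAM size, image path, and os-variant are assumptions, not from the quoted mail):

```shell
# Register an existing raw disk image as a new libvirt guest
# without running an installer (--import)
virt-install --name f21vm --ram 2048 \
    --disk path=/var/lib/libvirt/images/f21vm.img,format=raw \
    --import --noautoconsole
```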
2019 Aug 19
3
[ORC] Removing / replacing JITDylibs
Hi, I'm working on a runtime autotuner that utilizes ORCv2 JIT (I'm closely tracking tip-of-tree), so linking new object files and patching in the new function(s) will happen frequently. One of the concerns my runtime system has is the ability to do one of the following: (1) replacement of the contents of a JITDylib with a new object file [to provide semi-space GC-style reclaiming], (2)
2005 Nov 07
1
More info on 3Ware 9550SX from the field (in case anyone else is interested)
From a buddy of mine that has spent a little time with the new cards. This was in response to my email asking about his impressions of them: It just came out in mid-September. There are a couple reasons I'm not thrilled with it: 1) no 8-port multilane version (only 12) 2) the 8-port non-multilane version is a kludge, connector-wise (a row of 3 double-stacked connectors along the
2013 Mar 06
3
[OT/HW] hardware raid -- comment/experience with 3Ware
Greetings, I am looking for a hardware raid card that supports up to 4 SATA II hard disks with hot swap (compatible raid cage) I have short listed two LSI/3Ware cards: 1. 9750-4i 2. 9650SE-4LPML Both appear to be well supported in Linux. I would appreciate your personal experience with the CLI tools provided by LSI. Can they be configured to send email for disk failures or SMART errors? Is
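Independent of the vendor CLI, smartmontools' smartd can mail on SMART trouble for drives it can see; a sketch of an /etc/smartd.conf line (the address is illustrative, and whether smartd can reach disks behind a given RAID card depends on the controller and driver):

```
# Scan all devices, monitor health status and error/self-test logs,
# mail this address when a problem is detected
DEVICESCAN -H -l error -l selftest -m admin@example.com
```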
2010 Jan 24
5
Centos/Linux Disk Caching, might be OT in some ways
I'm trying to optimize some database app running on a CentOS server and wanted to confirm some things about the disk/file caching mechanism. From what I've read, Linux has a Virtual Filesystem layer that sits between the physical file system and everything else. So no matter what FS is used, applications are still addressing the VFS. Due to this, disk caching is done on an inode/block
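The caching behavior described above can be observed directly on Linux through the /proc interface; a small sketch (Linux-specific):

```shell
# How much file data the kernel currently holds in the page cache,
# plus dirty pages not yet written back to disk
grep -E '^(Cached|Buffers|Dirty):' /proc/meminfo

# Reading a file pulls its blocks into the page cache; a repeat read
# is then served from RAM, which is why second reads are much faster
dd if=/etc/hostname of=/dev/null 2>/dev/null
```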
2007 Aug 30
4
OT: Suggestions for RAID HW for 2 SATA drives in DellPowerEdge SC
On 29 August 2007, "Ross S. W. Walker" <rwalker at medallion.com> wrote: > Message: 39 <snip> > I wouldn't worry too much about the OS HD configuration, you are > always going to want RAID1 for the OS, whether software or hardware. > > Reason I say not to worry too much about the OS HD config is because > you are almost certainly going to put the
2010 Jan 11
5
internal backup power supplies?
With all the recent discussion of SSDs that lack suitable power-failure cache protection, surely there's an opportunity for a separate modular solution? I know there used to be (years and years ago) small internal UPSs that fit in a few 5.25" drive bays. They were designed to power the motherboard and peripherals, with the advantage of simplicity and efficiency
2015 Mar 25
2
Point-in-time snapshots (was: Re: Inspection of disk snapshots)
On Wed, Mar 25, 2015 at 07:38:03PM +0100, Kashyap Chamarthy wrote: > On Mon, Mar 23, 2015 at 10:43:30PM +0000, Richard W.M. Jones wrote: > > [. . .] > > > > This makes a copy of the whole disk image. It's also not a consistent > > > (point in time) copy. > > > > Oh I see that you're copying the _snapshot_ that you created with > >
2007 Mar 21
1
EXT2 vs. EXT3: mount w/sync or fdatasync
My application always needs to sync file data after writing. I don't want anything hanging around in the kernel buffers. I am wondering what is the best method to accomplish this. 1. Do I use EXT2 and use fdatasync() or fsync()? 2. Do I use EXT2 and mount with the "sync" option? 3. Do I use EXT2 and use the O_DIRECT flag on open()? 4. Do I use EXT3 in full journaled mode,
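On the options above: per-write syncing does not require mounting with "sync"; fdatasync semantics can be requested per operation. The difference is easy to see with GNU dd (file paths illustrative):

```shell
# conv=fdatasync: one fdatasync() call after the whole copy finishes
dd if=/dev/zero of=/tmp/testfile bs=4k count=16 conv=fdatasync 2>/dev/null

# oflag=dsync: synchronized-write semantics on every single write()
# call - the same data, but typically far slower
dd if=/dev/zero of=/tmp/testfile2 bs=4k count=16 oflag=dsync 2>/dev/null

ls -l /tmp/testfile /tmp/testfile2
```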
2008 Mar 04
2
7.0-Release and 3ware 9550SXU w/BBU - horrible write performance
Hi, I've got a new server with a 3ware 9550SXU with the Battery. I am using FreeBSD 7.0-Release (tried both 4BSD and ULE) using AMD64 and the 3ware performance for writes is just plain horrible. Something is obviously wrong but I'm not sure what. I've got a 4 disk RAID 10 array. According to 3dm2 the cache is on. I even tried setting the StorSave preference to
2019 Oct 10
2
RAID controller recommendations that are supported by RHEL/CentOS 8?
Hi, I'm currently looking for a RAID controller with BBU/CacheVault, and while LSI MegaRaid controllers worked well in the past, apparently they are no longer supported in RHEL 8: https://access.redhat.com/discussions/3722151 Does anybody have recommendations for hardware controllers with cache that should work in both CentOS 7 and 8 out of the box? Regards, Dennis