On 2/2/2015 8:52 PM, Jatin Davey wrote:
> So, you don't think that any configuration changes like increasing the
> number of volumes or anything else will help in reducing the I/O wait
> time?

Not by much. It might reduce the overhead if you use LVM volumes for
virtual disks instead of using files, but if you're doing too much disk
IO, there's not much that helps other than faster disks (or reducing
the amount of reads by more aggressive caching via having more memory).

-- 
john r pierce                                      37N 122W
somewhere on the middle of the left coast
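[For context, a minimal sketch of the "LVM volumes instead of files"
setup John is describing, on a typical KVM/libvirt host. The volume
group name vg0, the LV name, and the size are placeholders, not details
anyone in the thread gave:]

    # Carve a logical volume out of an existing volume group (vg0 is a
    # placeholder) to serve as the guest's disk:
    lvcreate -L 150G -n vm1-disk vg0

    # Then point the guest at the block device instead of an image
    # file, e.g. in the libvirt domain XML:
    #   <disk type='block' device='disk'>
    #     <driver name='qemu' type='raw'/>
    #     <source dev='/dev/vg0/vm1-disk'/>
    #     <target dev='vda' bus='virtio'/>
    #   </disk>

This skips the host's filesystem and file-level caching layers, which
is where the modest overhead reduction comes from.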
On 2/3/2015 10:44 AM, John R Pierce wrote:
> On 2/2/2015 8:52 PM, Jatin Davey wrote:
>> So, you don't think that any configuration changes like increasing
>> the number of volumes or anything else will help in reducing the I/O
>> wait time?
>
> Not by much. It might reduce the overhead if you use LVM volumes for
> virtual disks instead of using files, but if you're doing too much
> disk IO, there's not much that helps other than faster disks (or
> reducing the amount of reads by more aggressive caching via having
> more memory).

Thanks John

I will test and get the I/O speed results with the following and see
what works best with the given workload:

- Create 5 volumes, each 150 GB in size, for the 5 VMs that I will be
  running on the server
- Create 1 volume, 600 GB in size, for the 5 VMs that I will be
  running on the server
- Try LVM volumes instead of files

I will test and compare the I/O responsiveness in all cases and go
with the one which is acceptable.

Appreciate your responses in this regard. Thanks again.

Regards,
Jatin
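[A hedged sketch of one way to run that comparison: fio can generate a
repeatable mixed random read/write load against each candidate layout.
The device path and every parameter below are illustrative assumptions,
not values from the thread, and the run destroys whatever is on the
target, so it belongs before the VMs are deployed:]

    # WARNING: writes directly to the block device, destroying its
    # contents. Run once per candidate layout and compare the
    # reported IOPS/latency numbers.
    fio --name=randrw --filename=/dev/vg0/vm1-disk --direct=1 \
        --rw=randrw --bs=4k --iodepth=32 --ioengine=libaio \
        --runtime=60 --time_based --group_reporting

Running one such job per VM volume concurrently (5 jobs for the
5x150 GB layout, 5 jobs against the single 600 GB volume) approximates
the contention the 5 VMs would actually generate.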
On Mon, Feb 2, 2015 at 11:37 PM, Jatin Davey <jashokda at cisco.com> wrote:
> I will test and get the I/O speed results with the following and see
> what works best with the given workload:
>
> Create 5 volumes, each 150 GB in size, for the 5 VMs that I will be
> running on the server
> Create 1 volume, 600 GB in size, for the 5 VMs that I will be running
> on the server
> Try LVM volumes instead of files
>
> I will test and compare the I/O responsiveness in all cases and go
> with the one which is acceptable.

Unless you put each VM on its own physical disk or RAID1 mirror, you
aren't really doing anything to isolate the VMs from each other or to
increase the odds that a head will be near the place the next access
needs it to be.

-- 
Les Mikesell
lesmikesell at gmail.com
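[To make Les's suggestion concrete, a minimal sketch of dedicating a
RAID1 mirror to one VM with mdadm. The disk names sdb/sdc, the md
device number, and the volume group name are placeholders:]

    # Mirror two whole disks (placeholders) into one md device:
    mdadm --create /dev/md1 --level=1 --raid-devices=2 /dev/sdb /dev/sdc

    # Use /dev/md1 directly as the guest's disk, or layer LVM on top:
    pvcreate /dev/md1
    vgcreate vg_vm1 /dev/md1
    lvcreate -l 100%FREE -n disk vg_vm1

Because only one VM's I/O hits that spindle pair, its disk heads stay
near that VM's data instead of seeking across five workloads sharing
one array.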