similar to: strange behaviour of domU - i/o performance tests

Displaying 20 results from an estimated 4000 matches similar to: "strange behaviour of domU - i/o performance tests"

2006 Feb 27
1
SLES 9 domU for testing
Hi, I need to do a test with SLES 9 running in a domU very quickly and don't have much time. Does anybody out there have experience installing SLES 9 into a domU? Or, even better, maybe someone has an already-running guest installation for download? Thanks in advance, Michael -- Michael Mey
2005 Dec 07
0
live migration with xen 2.0.7 with fibre channel on Debian - help needed
Hi, I'd like to test the stability of live migration under heavy domU load. Scenario:
- both dom0s and the domU are running Debian Sarge
- a script on dom0 triggers live migration to the other dom0
- the domU is running I/O tests, e.g. bonnie++
- the domU's root (ext3) and swap filesystems are stored on two partitions in a SAN
- the SAN is connected to both dom0s via fibre channel cards
- SAN in dom0
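A minimal sketch of such a trigger script, assuming the Xen 2.x xm toolstack, relocation enabled in xend on both hosts, and hypothetical names testvm, dom0-a and dom0-b:

    #!/bin/sh
    # Ping-pong the guest between the two dom0s while it runs bonnie++.
    GUEST=testvm
    while true; do
        xm migrate --live $GUEST dom0-b                # push to the peer
        sleep 60
        ssh dom0-b xm migrate --live $GUEST dom0-a     # pull it back
        sleep 60
    done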
2005 Dec 07
6
RE: live migration with xen 2.0.7 with fibre channel on Debian - help needed
I had this exact same problem with 2.0.7. I had done a little investigation and found scheduled_work gets called to schedule the shutdown in the user domain kernel, but the shutdown work that gets scheduled never actually gets called. I'm glad someone else is seeing this same problem now :-) Like you, it worked a number of times in a row, then would fail, and it didn't seem to
2005 Nov 22
7
Tutorial : Debian, Xen and CLUSTER / GFS Support
Hello there! I made Debian, Xen 2.0 and CLUSTER/GFS work together :). I wrote this little tutorial to help you set up yours. Any feedback is welcome, except comments about W3C validation of my code :p Note that I am talking about compiling CLUSTER with your XEN kernel; I don't explain how to set up a working cluster. You can find how to set those up in the official RedHat docs. You will see,
2006 Jan 06
37
cow implementation
Has anyone had any success with a cow implementation in xen? I found this somewhere: http://www.atconsultancy.nl/cowloop/ - has anyone tried it out with xen? -- regards, Anand
2005 Dec 15
6
Sharing a partition between dom0 and domU
Hi! I'm currently preparing for a Xen installation. In the system that I'm going to set up, there will be a need to copy large amounts of data from domUs to the dom0 (one way only), and I therefore want to avoid using the network. Mounting a filesystem RO in dom0 while a domU has it mounted RW is probably asking for trouble, so I was thinking about using some remounting protocol similar
2006 Feb 13
1
Managing multiple Dom0s
Are there people currently managing multiple Dom0s and making significant use of migration? I'm looking at a potential setup of 16 or so physical systems with perhaps 64 or so virtual systems (DomUs - I suppose Dom0 is also a virtual system if I understand things correctly, but that's not what I mean here). It seems to me that making heavy use of migration for
2003 Mar 05
1
ACL Support for sftp
Hello, I would like to see getfacl/setfacl added as commands to sftp. Are there any plans on doing this, or whom should I send patches to? Thanks in advance, regards Stefan -- Stefan Völkel stefan.voelkel at millenux.com Millenux GmbH mobile: +49.170.79177.17
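For context, a sketch of the client-side semantics being asked for, using the standard POSIX ACL tools (file and user names are made up):

    setfacl -m u:alice:r-- report.txt   # grant user alice read access
    getfacl report.txt                  # print the file's ACL entries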
2013 Oct 17
2
xentop vbd output
Hi all, I now use xentop to get disk statistics. The output looks like:

NAME      STATE   CPU(sec) CPU(%) MEM(k)  MEM(%) MAXMEM(k) MAXMEM(%) VCPUS NETS NETTX(k) NETRX(k) VBDS VBD_OO VBD_RD VBD_WR VBD_RSECT VBD_WSECT SSID
Domain-0  -----r  96233    0.7    3902464 23.3   no limit  n/a       12    0    0        0        0    0      0      0      0         0         0
slave3
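If useful, xentop can also be sampled non-interactively; a sketch assuming the standard batch-mode flags and the guest name above:

    xentop -b -d 5 -i 3 > xentop.log    # 3 batch samples, 5 seconds apart
    # pick NAME, VBD_RD and VBD_WR (fields 1, 15, 16 per the header above;
    # field positions assume single-token column values)
    awk '/^ *slave3/ {print $1, $15, $16}' xentop.log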
2001 Dec 18
0
openssh, pam and cryptocard's cryptoadmin / easyradius
Hi, this is merely FYI, but I would appreciate any comments or further information on the topic. We were using the following setup:
- cryptocard easyradius with RB-1 hardware tokens (hex or decimal display, synchronous (quicklog) mode)
- f-secure ssh with pam radius authentication
This worked fine until we updated to openssh 2.9p2. Then all authentications where the response
2003 Apr 01
0
Reading ADS volumes
Forgive a possibly naive question. I have a Linux laptop (Mandrake 9) which I would like to use as a client on the network at work. The work LAN is a mixture of W2K and NT4 servers. Some shared volumes are ADS. I can read the non-ADS shares just fine (samba 2.2.7a), but (no surprise here) it chokes on ADS volumes using either the smb protocol in Konqueror or LinNeighborhood to mount
2003 Jul 22
0
Samba 3beta2/3 with ldapsam as PDC doesn't announce itself
Hi, I have a problem with samba3 (with ldapsam). I've set up my samba3 beta2 (also tried beta3) as a PDC, but it seems that samba does not announce its domain correctly. I don't see my JOCHENGROUP domain from any Windows workstation. I can search for the name of my PDC using the 'Search Computer' function and Windows will find it. I can double-click the PDC and log in using
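A minimal smb.conf sketch of the browse/announcement settings that usually matter here; the values are illustrative, not a confirmed fix:

    [global]
        workgroup = JOCHENGROUP
        domain logons = yes
        domain master = yes      # claim the DOMAIN<1B> browse name
        local master = yes
        preferred master = yes   # force a browser election on startup
        os level = 65            # outbid NT/W2K machines in elections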
2004 Jan 07
0
announcing getent passwd database as domain users without winbind?
Hi, does anyone know if it is possible to implement a samba setup using domain users without winbind? I have the following environment:
- an LDAP server holding all user information (single point of administration)
- groups and users (including the mappings and the passwords) are replicated into the ADS domain
- the ADS is needed to get group policies working
- the fileserver is Linux with
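One common approach, sketched under the assumption that nss_ldap is available: let glibc resolve accounts straight from LDAP so smbd sees the users without winbind.

    # /etc/nsswitch.conf -- resolve accounts from LDAP via nss_ldap
    passwd: files ldap
    group:  files ldap
    shadow: files ldap

A quick check is 'getent passwd someuser' for an LDAP-only account.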
2006 Jan 30
12
Error: Device 769 (vbd) could not be connected. Backend device not found.
Hi, I have installed Debian 3.1 on an Intel Pentium D 920 processor (including Vanderpool). The installation worked fine and I'm able to start guest domains. When I try to start the fifth guest domain, I get the following error: Error: Device 769 (vbd) could not be connected. Backend device not found. This problem still exists even if I have shut down or destroyed all guests, after
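One frequently reported cause of this symptom (an assumption here, since the snippet is truncated) is exhausting the loop devices used for file-backed guest disks: the loop driver defaults to 8 devices, i.e. four guests with a root and a swap image each. A sketch of raising the limit:

    rmmod loop
    modprobe loop max_loop=64   # allow up to 64 file-backed devices
    # if loop is compiled into the dom0 kernel, boot with max_loop=64 instead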
2010 Aug 06
0
Re: PATCH 3/6 - direct-io: do not merge logically non-contiguous requests
On Fri, May 21, 2010 at 15:37:45AM -0400, Josef Bacik wrote:
> On Fri, May 21, 2010 at 11:21:11AM -0400, Christoph Hellwig wrote:
>> On Wed, May 19, 2010 at 04:24:51PM -0400, Josef Bacik wrote:
>> > Btrfs cannot handle having logically non-contiguous requests submitted. For
>> > example if you have
>> >
>> > Logical: [0-4095][HOLE][8192-12287]
2011 Dec 08
0
folder with no permissions
Hi Matt, Can you please provide us with more information?
1. What version of glusterfs are you using?
2. Was the iozone run as root or as a user?
   a. If a user, did it have the required permissions?
3. Steps to reproduce the problem.
4. Any other errors related to stripe in the client log?
With regards, Shishir
2007 Nov 19
0
Solaris 8/07 Zfs Raidz NFS dies during iozone test on client host
Hi, I have a freshly built system with ZFS raidz:
- Intel P4 2.4 GHz, 1GB RAM
- Marvell Technology Group Ltd. MV88SX6081 8-port SATA II PCI-X Controller
- (2) Intel dual-port 1Gbit NICs
I have (5) 300GB disks in a raidz1 with ZFS and have created a couple of filesystems on it: /export/downloads, /export/music, /export/musicraw. I've shared these out as well. First with ZFS 'zfs
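The truncated command was presumably the sharing step; a sketch of the pool and share setup with illustrative Solaris device names:

    zpool create tank raidz c0t0d0 c0t1d0 c0t2d0 c0t3d0 c0t4d0
    zfs create -p tank/export/downloads
    zfs set sharenfs=on tank/export/downloads   # export over NFS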
2010 May 25
0
Magic parameter "-ec" of IOZone to increase the write performance of samba
Hi, I am measuring the performance of my newly bought NAS with IOZone. The NAS runs an embedded Linux with samba installed (the CPU is an Intel Atom). IOZone reported write performance of over 1 GB/s while the file size was less than or equal to 1GB. Since the NIC is 1 Gbit/s, the maximum speed should be about 125 MiB/s at most, so the IOZone report is too good to be true. Later I found that if the
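For the record, -ec is two flags: -e folds fsync/flush time into the timing and -c folds in close(), which keeps the client page cache from inflating the result. A sketch of a run that should report realistic numbers (the mount point is a placeholder):

    # sequential write (-i 0) and read (-i 1), 2GB file, 64k records
    iozone -e -c -i 0 -i 1 -s 2g -r 64k -f /mnt/nas/iozone.tmp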
2010 Jul 06
0
[PATCH 0/6 v6][RFC] jbd[2]: enhance fsync performance when using CFQ
Hi Jeff,
On 07/03/2010 03:58 AM, Jeff Moyer wrote:
> Hi,
>
> Running iozone or fs_mark with fsync enabled, the performance of CFQ is
> far worse than that of deadline for enterprise class storage when dealing
> with file sizes of 8MB or less. I used the following command line as a
> representative test case:
>
> fs_mark -S 1 -D 10000 -N 100000 -d /mnt/test/fs_mark -s
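For anyone reproducing the comparison, the scheduler can be swapped per device at runtime; /dev/sdb is a placeholder:

    cat /sys/block/sdb/queue/scheduler               # list available schedulers
    echo deadline > /sys/block/sdb/queue/scheduler   # switch from cfq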
2017 Sep 11
0
3.10.5 vs 3.12.0 huge performance loss
Here are my results.
Summary: I am not able to reproduce the problem; IOW, I get relatively equivalent numbers for sequential IO when going against 3.10.5 or 3.12.0.
Next steps:
- Could you pass along your volfiles? (both the client vol file, from /var/lib/glusterd/vols/<yourvolname>/patchy.tcp-fuse.vol, and a brick vol file from the same place)
- I want to check
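Both files can also be located from the CLI; the volume name patchy is taken from the path above:

    gluster volume info patchy          # confirm the options in effect
    ls /var/lib/glusterd/vols/patchy/   # client and brick .vol files live here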