similar to: [HELP!]GFS2 in the xen 4.1.2 does not work!

Displaying 18 results from an estimated 2000 matches similar to: "[HELP!]GFS2 in the xen 4.1.2 does not work!"

2011 Jun 11
2
mmap in GFS2 on RHEL 6.1
Hello list, we continue our tests using Dovecot on a RHEL 6.1 cluster backend with GFS2; we are also using the Dovecot director for user-to-node persistence. Everything was OK until we started stress testing the solution with imaptest: we hit many deadlocks, cluster filesystem corruption and hangs, especially on the index filesystem. We have configured the backend as if it were on an NFS-like setup
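For context, the NFS-like Dovecot settings usually recommended for shared or cluster filesystems look roughly like the sketch below. This is a minimal sketch only; the settings' placement and exact values are assumptions, not taken from the post.

  # Minimal sketch: the cluster/NFS-like options commonly suggested for Dovecot
  # on shared filesystems (put them in dovecot.conf or a conf.d/ fragment):
  #   mmap_disable     = yes
  #   mail_fsync       = always
  #   mail_nfs_storage = yes
  #   mail_nfs_index   = yes
  #   lock_method      = fcntl
  doveconf -n | grep -E 'mmap_disable|mail_fsync|nfs|lock_method'   # verify what is active
  doveadm reload                                                    # apply after editing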
2009 Nov 03
3
virsh troubling zfs!?
Hi and hello, I have a problem that confuses me; I hope someone can help me with it. I followed a "best practice", I think, using dedicated ZFS filesystems for my virtual machines. Commands (for completeness): zfs create rpool/vms; zfs create rpool/vms/vm1; zfs create -V 10G rpool/vms/vm1/vm1-dsk. This command creates the file system /rpool/vms/vm1/vm1-dsk and the
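A minimal sketch of that layout, using the dataset names from the post; note that zfs create -V makes a zvol (a block device), not a mounted filesystem, which is often the source of this kind of confusion.

  # Sketch only; dataset names follow the post.
  zfs create rpool/vms
  zfs create rpool/vms/vm1
  zfs create -V 10G rpool/vms/vm1/vm1-dsk       # creates a 10 GB zvol, not a filesystem
  zfs list -t volume rpool/vms/vm1/vm1-dsk      # confirm it shows up as a volume
  # The zvol's device node lives under /dev/zvol/ (exact path varies by platform,
  # e.g. /dev/zvol/dsk/rpool/... on Solaris); that device is what a VM disk points at.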
2011 Jun 08
2
Looking for gfs2-kmod SRPM
I'm searching for the SRPM corresponding to this installed RPM. % yum list | grep gfs2 gfs2-kmod-debuginfo.x86_64 1.92-1.1.el5_2.2 It is missing from: http://msync.centos.org/centos-5/5/os/SRPMS/ What I need from the SRPM are the patches. I'm working through some issues using the source code, and the patches in the RedHat SRPM
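One common way to get the corresponding source package and its patches is sketched below; it assumes yum-utils is installed and a source repository is reachable, neither of which is stated in the post.

  # Sketch: fetch the SRPM and unpack it to get at the *.patch files.
  yumdownloader --source gfs2-kmod              # needs yum-utils and an enabled source repo
  rpm2cpio gfs2-kmod-*.src.rpm | cpio -idmv     # extracts the spec, tarball and patches
  ls *.patch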
2011 Feb 27
1
Recover botched DRBD/GFS2 setup.
Hi. The short story... rush job, never done clustered file systems before, and the VLAN didn't support multicast. So I ended up with DRBD working OK between the two servers but cman/GFS2 not working, and what was meant to be a DRBD primary/primary cluster became a primary/secondary cluster until the VLAN could be fixed, with GFS2 mounted on only the one server. I got the single server
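For reference, once cman/GFS2 is healthy again, returning DRBD to dual-primary is roughly the following; the resource name r0 and config path are assumptions.

  # Sketch only: requires "allow-two-primaries;" in the net {} section of the
  # DRBD resource (e.g. /etc/drbd.d/r0.res) on both nodes.
  drbdadm adjust r0          # apply the changed resource configuration
  drbdadm primary r0         # run on the node that is currently Secondary
  cat /proc/drbd             # both nodes should now report ro:Primary/Primary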
2007 Jul 22
11
Many identical managed domains
Hi, when I tested the xm new command repeatedly without a uuid parameter, I ended up with many identical managed domains, as follows. # xm list Name ID Mem VCPUs State Time(s) Domain-0 0 941 2 r----- 51.9 # xm new /xen/vm1.conf Using config file "/xen/vm1.conf". # xm new /xen/vm1.conf Using config file
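A hedged sketch of cleaning up the duplicates: xm delete removes a managed-domain definition, and giving the config an explicit uuid= line should keep repeated xm new runs from multiplying entries (that behaviour is the expectation here, not something verified).

  # Sketch: names and paths follow the post.
  xm list                    # shows running and managed (inactive) domains
  xm delete vm1              # drop a duplicate managed-domain definition by name
  xm new /xen/vm1.conf       # re-register it once; a uuid = '...' line in vm1.conf
                             # should keep further "xm new" calls pointing at the same domain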
2011 Feb 16
4
[PATCH] xen: use freeze/restore/thaw PM events for suspend/resume/chkpt
Use PM_FREEZE, PM_THAW and PM_RESTORE power events for suspend/resume/checkpoint functionality, instead of PM_SUSPEND and PM_RESUME. Use of these pm events fixes the Xen Guest hangup when taking checkpoints. When a suspend event is cancelled (while taking checkpoints once/continuously), we use PM_THAW instead of PM_RESUME. PM_RESTORE is used when suspend is not cancelled. See
2014 Mar 10
1
GFS2 and quotas - system crash
I have tried sending this before, but it did not appear to get through. Hello, When using gfs2 with quotas on a SAN that is providing storage to two clustered systems running CentOS6.5, one of the systems can crash. This crash appears to be caused when a user tries to add something to a SAN disk when they have exceeded their quota on that disk. Sometimes a stack trace is produced in
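For context, a minimal sketch of how GFS2 quotas are typically switched on and set under EL6; the device, mount point and user name below are invented for illustration.

  # Sketch only: quota enforcement is a GFS2 mount option, and on EL6 the limits
  # are managed with the standard quota tools.
  mount -t gfs2 -o quota=on /dev/vg_san/lv_data /mnt/san
  setquota -u someuser 1000000 1200000 0 0 /mnt/san   # block soft/hard limits, no inode limits
  repquota /mnt/san                                    # check what each node reports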
2007 Feb 16
3
[PATCH][XEND] Don't call destroy() on exception in start()
destroy() is being called on exception in both start() and create(). It needs to be called only in create(). Signed-off-by: Aravindh Puthiyaparambil <aravindh.puthiyaparambil@unisys.com>
2017 Jun 30
2
Re: recovering from deleted snapshot
On Fri, Jun 30, 2017 at 09:23:29 -0400, Doug Hughes wrote: > On Jun 30, 2017 6:22 AM, "Peter Krempa" <pkrempa@redhat.com> wrote: > > On Fri, Jun 30, 2017 at 12:05:47 +0200, Peter Krempa wrote: > > > On Thu, Jun 22, 2017 at 11:02:41 -0400, Doug Hughes wrote: [...] > file or directory > > $ virsh blockcommit --active --pivot fedora23 vda > > >
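The command being quoted is the usual way to fold an active overlay back into its base image; a short sketch, assuming the domain and disk names from the thread:

  # Sketch: domain "fedora23" and target "vda" follow the quoted commands.
  virsh domblklist fedora23                         # see which image vda currently uses
  virsh blockcommit fedora23 vda --active --pivot   # commit the top overlay and pivot back to base
  virsh domblklist fedora23                         # vda should now point at the base image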
2011 Jan 18
2
dovecot Digest, Vol 93, Issue 41
> From: Stan Hoeppner <stan at hardwarefreak.com> > Subject: Re: [Dovecot] SSD drives are really fast running Dovecot > > > Yes. Go with a cluster filesystem such as OCFS or GFS2 and an inexpensive SAN > storage unit that supports mixed SSD and spinning storage such as the Nexsan > SATABoy with 2GB cache: http://www.nexsan.com/sataboy.php I can't speak for
2009 Aug 03
1
CTDB+GFS2+CMAN. clean_start="0" or clean_start="1"?
Hi everybody, I have tested CTDB+GFS2+CMAN under Debian. It works well, but I do not understand some points. It is possible to run CTDB by defining it under the services section in cluster.conf, but starting it on the second node shuts down the process on the first one. My CTDB configuration assumes 2 active-active nodes. Does CTDB care whether the node starts with clean_start="0" or
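For reference, clean_start is an attribute of the <fence_daemon> element in cluster.conf; as commonly described, clean_start="1" skips startup fencing (the cluster is assumed to start clean) while clean_start="0" lets fenced fence nodes it cannot account for at startup. The fragment below is illustrative only, not taken from the poster's configuration.

  # Sketch: check what the running config actually says (path is the usual cman default).
  grep -i fence_daemon /etc/cluster/cluster.conf
  #   e.g. <fence_daemon clean_start="0" post_join_delay="30"/>
  xmllint --noout /etc/cluster/cluster.conf    # cheap syntax check after editing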
2012 Nov 27
6
CTDB / Samba / GFS2 - Performance - with Picture Link
Hello, maybe someone can help and answer a question about why I see these network graphs on my CTDB clusters. I have two CTDB clusters, one physical and one in a VMware environment. When I transfer (copy) any files over a Samba share, I get network curves like these, with performance breaks. I don't see the transfer stop, but why is that so? Can I change anything, or does anybody know
2014 Mar 07
5
[PATCH net V2] vhost: net: switch to use data copy if pending DMAs exceed the limit
We used to stop the handling of tx when the number of pending DMAs exceeds VHOST_MAX_PEND. This is used to reduce the memory occupation of both host and guest. But it was too aggressive in some cases, since any delay or blocking of a single packet may delay or block the guest transmission. Consider the following setup: +-----+ +-----+ | VM1 | | VM2 | +--+--+
2011 Jun 20
2
Ubuntu, OCFS2 with cman and CTDB
Hi guys, we're evaluating the available clustering options to get CTDB up and running for a highly available file server. We've set up both Gluster and OCFS2, each on a separate 2-node setup. OCFS2 seems to provide better throughput and IOPS to Samba clients than Gluster does, and that is comparing a single-node server to a CTDB-clustered 2-node server. The problem with OCFS2 is that I've
2010 Mar 27
1
DRBD, GFS2 and GNBD without all the clustered cman stuff
Hi all, where I want to arrive: 1) have two storage servers replicating a partition with DRBD; 2) export the DRBD device, formatted with GFS2, via GNBD from the primary server; 3) import the GNBD on some nodes and mount it with GFS2. Assuming no logical errors in the points above, this is the situation: Server 1: LogVol09, DRBD configured as /dev/drbd0, replicated to Server 2. DRBD seems to work
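A rough sketch of steps 1-3 with the usual GNBD tooling (all names below are assumptions); note that mounting GFS2 from more than one node still needs a working lock manager, which is exactly the cman/DLM piece the post is trying to avoid.

  # On the primary storage server (DRBD already provides /dev/drbd0):
  gnbd_serv                                      # start the GNBD server daemon
  gnbd_export -d /dev/drbd0 -e sharedfs          # export the DRBD device as "sharedfs"
  # On each client node:
  gnbd_import -i server1                         # imports appear under /dev/gnbd/
  # One-time, from a single node:
  mkfs.gfs2 -p lock_dlm -t mycluster:sharedfs -j 4 /dev/gnbd/sharedfs
  mount -t gfs2 /dev/gnbd/sharedfs /mnt/shared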
2008 Sep 24
3
Dovecot performance on GFS clustered filesystem
Hello All, We are using Dovecot 1.1.3 to serve IMAP on a pair of clustered Postfix servers which share a fiber array via the GFS clustered filesystem. This all works very well for the most part, with the exception that certain operations are so inefficient on GFS that they generate significant I/O load and hurt performance. We are using the Maildir format on disk. We're also using
2014 Nov 12
3
Put virbr0 in promiscuous mode
Hi, I have two virtual machines, VM1 and VM2, and I have added eth0 of my VM to the 'default' network. Use case: I want to monitor all traffic on virbr0 (the 'default' network). Steps followed: 1. Add VM1 eth0 to virbr0; 2. Add VM2 eth1 to virbr0; 3. brctl setageing ovsbr0 0 (to put the bridge in promiscuous mode). Now I am running tcpdump on eth1 of VM2 and trying to ping
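A minimal sketch of the monitoring setup being described; the vnet interface name is an assumption (use whichever tap device libvirt attached for VM2), and setting the ageing time to 0 disables MAC learning so the bridge floods every frame to all ports.

  # Sketch only; interface names are assumptions.
  brctl setageing virbr0 0      # ageing 0 = no MAC learning, so virbr0 floods all traffic
  brctl showmacs virbr0         # learned entries should no longer accumulate
  tcpdump -i vnet1 -nn icmp     # vnet1 = the tap device backing VM2's NIC on virbr0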
2014 Mar 13
3
[PATCH net V2] vhost: net: switch to use data copy if pending DMAs exceed the limit
On 03/10/2014 04:03 PM, Michael S. Tsirkin wrote: > On Fri, Mar 07, 2014 at 01:28:27PM +0800, Jason Wang wrote: >> > We used to stop the handling of tx when the number of pending DMAs >> > exceeds VHOST_MAX_PEND. This is used to reduce the memory occupation >> > of both host and guest. But it was too aggressive in some cases, since >> > any delay or blocking