I am wondering if there is a way to solve the following problem. I suppose the usual way is to set up a distributed file system with locking mechanisms, as is possible with GFS and Red Hat Cluster Suite or similar, but I am interested in doing some of this manually and ONLY with raw devices (no file system), or simply in knowing some general principles.

The case: I have a VLUN (on an FC SAN) presented to two servers, but mounted only on one host - to be more precise, used by a Xen HVM guest system as a raw physical phy:// drive. I shut this guest down and bring it up manually on the second host - it can see the changed images and make changes to the presented disks. Then I shut it down there and bring it up again on the first host - BUT THEN this guest (or host) doesn't see the changes made by the second system; it still sees the picture the way it left it.

Or even better: if I bring the HVM guest up on a host, then shut it down, restore its disks on the storage (I am using an HP EVA8400, restoring the original disk from a snapshot - it does have redundant controllers, but their caches must surely be in sync), and then bring it up again - it still sees the disk contents as they were before the restore. But if I _RESTART_ the host, it sees the restored disks correctly.

Now, I am wondering why this is happening, and whether it is possible to resync with the storage somehow without a restart (I wouldn't like that in production! and on our Windows systems this is possible) ... I've tried sync (but that is just flushing the buffer cache), and I didn't try echo 3 > /proc/sys/vm/drop_caches after that (I've just come upon some articles about it), and I am not sure whether that would really invalidate the cache and help me. What is the right way of doing this? Please, help ...

ZP.
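One way to check whether stale dom0 page cache (rather than the array) is what the host keeps showing is to compare a buffered read of the LUN with an O_DIRECT read that bypasses the cache. A rough sketch, with a made-up device path:

    # Made-up device path -- substitute the real VLUN device as seen in dom0.
    DEV=/dev/mapper/vlun01

    # Buffered read: served from the dom0 page cache if the blocks are cached.
    dd if=$DEV bs=1M count=1 2>/dev/null | md5sum

    # O_DIRECT read: bypasses the page cache and goes to the storage array.
    dd if=$DEV bs=1M count=1 iflag=direct 2>/dev/null | md5sum

If the two checksums differ after a restore on the array, dom0 is still serving stale cached blocks.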
2010/2/4 Zoran Popović <shoom013@gmail.com>:
> The case: I have a VLUN (on FC SAN) presented on two servers, but mounted
> only on one host - to be more precise, used by a Xen HVM guest system as a
> raw physical phy:// drive. Then, I put this guest down, and bring it
> manually up on second host - it can see changed images, and make changes to
> the presented disks. Then I put it down there, and bring it up again on the
> first host - BUT THEN, this guest (or host) doesn't see changes made by the
> second system, it still sees the picture as it was the way it left it.

It looks similar to this:
https://bugzilla.redhat.com/show_bug.cgi?id=466681

As a temporary workaround, echo 1 > /proc/sys/vm/drop_caches should do.

--
Fajar
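For reference, the values accepted by drop_caches (documented in the kernel's Documentation/sysctl/vm.txt) only free clean cache, so it makes sense to sync first:

    sync                                # write dirty pages back; drop_caches only frees clean pages
    echo 1 > /proc/sys/vm/drop_caches   # free page cache
    echo 2 > /proc/sys/vm/drop_caches   # free dentries and inodes
    echo 3 > /proc/sys/vm/drop_caches   # free page cache, dentries and inodes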
Pasi Kärkkäinen
2010-Feb-04 11:20 UTC
[Xen-users] Re: [rhelv5-list] shared storage manual remount ...
On Thu, Feb 04, 2010 at 01:40:26AM +0100, Zoran Popović wrote:
> I am wondering if there is a way to solve the following problem: I suppose
> that the usual way is to establish distributed file system with locking
> mechanisms like it is possible with GFS and Red Hat Cluster Suite or
> similar, but I am interested in doing some of this manually and ONLY with
> raw devices (no file system), or simply in knowing some general principles.
> [...]

Exactly what changes in the guest are you talking about? (that are not visible after switching hosts)

There was a pygrub caching bug in Xen in EL5, but that shouldn't affect HVM guests, since they don't use pygrub.

If you use phy: backend for the disks, then there should be no caching in dom0.

Please paste your /etc/xen/hvmguest config file.

-- Pasi
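For readers who don't have one at hand, a minimal xm-style HVM config with a raw phy: disk might look like the sketch below; every name, path and value is made up, and only the disk line matters for this thread:

    # Illustrative /etc/xen/hvmguest -- all values are placeholders.
    name    = "hvmguest"
    builder = "hvm"
    kernel  = "/usr/lib/xen/boot/hvmloader"        # hvmloader path as on EL5; adjust to your distro
    memory  = 2048
    vcpus   = 2
    disk    = [ 'phy:/dev/mapper/vlun01,hda,w' ]   # the raw VLUN handed to the guest
    vif     = [ 'type=ioemu, bridge=xenbr0' ]
    boot    = "c"
    vnc     = 1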
Yes, you're right, I have tested dropping the cache and it works like a charm, thank you!

ZP.

2010/2/4 Fajar A. Nugraha <fajar@fajar.net>
> 2010/2/4 Zoran Popović <shoom013@gmail.com>:
> > The case: I have a VLUN (on FC SAN) presented on two servers, but mounted
> > only on one host [...]
>
> It looks similar to this:
> https://bugzilla.redhat.com/show_bug.cgi?id=466681
> As a temporary workaround, echo 1 > /proc/sys/vm/drop_caches should do.
>
> --
> Fajar
Zavodsky, Daniel (GE Capital)
2010-Feb-08 10:58 UTC
[Xen-users] RE: [rhelv5-list] shared storage manual remount ...
Hello,

I have tried this and it works here... caching is not used for phy: devices, only buffering but it is flushed frequently so it is not a problem. Maybe you should post some more info about your setup?

Regards,
Daniel

________________________________
From: rhelv5-list-bounces@redhat.com [mailto:rhelv5-list-bounces@redhat.com] On Behalf Of Zoran Popović
Sent: Thursday, February 04, 2010 1:40 AM
To: Red Hat Enterprise Linux 5 (Tikanga) discussion mailing-list; xen-users@lists.xensource.com
Subject: [rhelv5-list] shared storage manual remount ...

I am wondering if there is a way to solve the following problem: [...]

ZP.
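If only the one shared LUN is affected, an alternative that is not confirmed in this thread but may be worth testing is invalidating the cached buffers for just that block device instead of dropping every cache on the host:

    # Flush and invalidate the cached buffers for the shared LUN only
    # (BLKFLSBUF ioctl); the device path is made up.
    blockdev --flushbufs /dev/mapper/vlun01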
Zoran Popović
2010-Feb-08 15:46 UTC
[Xen-users] Re: [rhelv5-list] shared storage manual remount ...
Tell me what you would like to know about my environment - I was trying to give all the relevant information, at least concerning this issue.

And, btw, echo 1 > /proc/sys/vm/drop_caches does what I need - if I do this I get the results I need (and, for example, if I don't do it after a snapshot restore on the storage, my HVM Windows guest usually starts chkdsk during boot).

ZP.

2010/2/8 Zavodsky, Daniel (GE Capital) <daniel.zavodsky@ge.com>
> Hello,
> I have tried this and it works here... caching is not used for phy:
> devices, only buffering but it is flushed frequently so it is not a problem.
> Maybe you should post some more info about your setup?
>
> Regards,
> Daniel
>
> [...]
Pasi Kärkkäinen
2010-Feb-09 10:44 UTC
[Xen-users] Re: [rhelv5-list] shared storage manual remount ...
On Mon, Feb 08, 2010 at 04:46:59PM +0100, Zoran Popović wrote:
> Tell me what you would like to know about my environment - I was trying
> to give all the relevant information, at least concerning this issue.
> And, btw, echo 1 > /proc/sys/vm/drop_caches does what I need - if I do
> this I get the results I need (and, for example, if I don't do it after a
> snapshot restore on the storage, my HVM Windows guest usually starts
> chkdsk during boot).

So you're saying you need drop_caches *with* phy: for the other host to see the disk contents?

-- Pasi
Zoran Popović
2010-Feb-09 15:37 UTC
Re: [Xen-users] Re: [rhelv5-list] shared storage manual remount ...
....mmmm, to be exact: I need to drop the caches on the *other* host in order to see the disk contents on that *other* host, if it was already using that *phy* device ...

2010/2/9 Pasi Kärkkäinen <pasik@iki.fi>
> On Mon, Feb 08, 2010 at 04:46:59PM +0100, Zoran Popović wrote:
> > Tell me what you would like to know about my environment - I was trying
> > to give all the relevant information, at least concerning this issue.
> > [...]
>
> So you're saying you need drop_caches *with* phy: for the other host to see
> the disk contents?
> ...
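Putting the thread together, a sketch of the manual move-back sequence; the guest name and config path are illustrative:

    # On the host that has been running the guest:
    xm shutdown -w hvmguest             # wait for the guest to go down cleanly

    # On the host the guest is moving back to (the one holding stale cache):
    sync                                # write back any dirty dom0 pages
    echo 1 > /proc/sys/vm/drop_caches   # drop clean page-cache copies of the LUN
    xm create /etc/xen/hvmguest         # start the guest against the current on-disk state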