Pekka.Panula@sofor.fi
2008-Oct-16 08:04 UTC
[Xen-users] Shared SAN disk LUN between 2 servers and migration problem
Hi,

Dom0 is CentOS 5.2, 64-bit, Xen versions 3.2.1 and 3.2.2. I have a Fibre Channel SAN disk system and 2 servers, and I am making a shared LUN available to both servers. I have the multipath layer running on both Dom0s, so Xen sees the device-mapper device instead of two /dev/sdX devices. I have set up a Xen domU (HVM Windows 2003) using /dev/mapper/sharedLUN as the physical storage for my Windows server.

Do I need one more layer to get migration working? It seems that if I migrate between servers I get filesystem corruption. Perhaps some cache on Linux that has not yet been written to the disk system is causing this: when the guest is migrated, the device state is not the same on the target server's multipath device, e.g. part of a write operation has not yet reached the shared LUN, so the target Xen does not see the complete state.

What is the best way to make sure this cache problem does not occur, i.e. that both multipath devices have the same state on both servers? I do not want to duplicate data or use a separate cluster filesystem, so is there some way to fix this?

Terveisin/Regards,
Pekka Panula, Net Servant Oy
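For readers unfamiliar with this kind of setup, a minimal domU configuration fragment along these lines might look roughly as follows. This is a sketch only; the file paths, guest name, memory size, and device names are illustrative, not taken from the post above.

    # /etc/xen/win2003.cfg -- hypothetical HVM guest using the multipathed LUN
    name    = 'win2003'
    builder = 'hvm'
    kernel  = '/usr/lib/xen/boot/hvmloader'
    device_model = '/usr/lib64/xen/bin/qemu-dm'
    memory  = 2048
    vcpus   = 2
    # Hand the whole device-mapper LUN to the guest as its disk; both dom0s
    # see the same /dev/mapper name, which is what makes migration possible.
    disk    = [ 'phy:/dev/mapper/sharedLUN,hda,w' ]
    vif     = [ 'bridge=xenbr0' ]
    boot    = 'c'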
George Rushby
2008-Oct-16 08:48 UTC
RE: [Xen-users] Shared SAN disk LUN between 2 servers and migration problem
You are connecting to the same volume from at least 2 servers. This will eventually corrupt your filesystem. The array has no knowledge of your operating system; at this level you should think of a volume as a disk drive. If you are not using a cluster, you must restrict access to the volume to only one server. Once you connect to the volume, you share it through that server.

Most people will set up a server with 2 NICs: one connected to the iSCSI VLAN set up for the array, and the other on the public VLAN. This way you separate your traffic and are still able to share the data on the volume.

You should also look into GFS.
Ferenc Wagner
2008-Oct-16 09:24 UTC
Re: [Xen-users] Shared SAN disk LUN between 2 servers and migration problem
Pekka.Panula@sofor.fi writes:

> I have a Fibre Channel SAN disk system and 2 servers, and I am making a
> shared LUN available to both servers. I have the multipath layer running
> on both Dom0s, so Xen sees the device-mapper device instead of two
> /dev/sdX devices. I have set up a Xen domU (HVM Windows 2003) using
> /dev/mapper/sharedLUN as the physical storage for my Windows server.
>
> Do I need one more layer to get migration working? It seems that if I
> migrate between servers I get filesystem corruption.

I run a similar setup but for PV Linux guests. (Live) migration works.
I've never tried it with HVM, but can't see any obvious obstacle.
--
Feri.
Pekka.Panula@sofor.fi
2008-Oct-16 10:51 UTC
RE: [Xen-users] Shared SAN disk LUN between 2 servers and migration problem
> George Rushby <george@viamedia.co.za> wrote:
>
> You are connecting to the same volume from at least 2 servers. This will
> eventually corrupt your filesystem. [...] If you are not using a cluster,
> you must restrict access to the volume to only one server.
> [...]
> You should also look into GFS.

But why would I need GFS if only one server is accessing the shared LUN at a time? Server 2 is running the domU and accessing the multipath device there (reading/writing), but on the other server the device is just sitting there; no server actually accesses it until the domU is migrated over. I run only one access per multipathed LUN at a time, not several nodes against the same LUN. I don't need active/passive failover; I just want to migrate guests when I am doing maintenance, e.g. rebooting a Dom0. Other Dom0s do not touch my multipath device at all. In other words, one multipath device per domU. And this is an FC SAN disk system, so the servers point to the same LUN on the array, but only one server writes to it at any given time.

So am I lost here? Does this not work, i.e. does the Linux/Xen/multipath stack not sync all pending operations to the block device when Xen migrates a guest to another server? Why doesn't Xen tell the OS to sync data to disk when migration happens, or does it?

Anyway, on an HVM Windows 2003 Standard Server I get corruption when I do file access during a migration. I tested this by installing the 7-Zip compression program, having it compress e.g. the Windows directory, migrating the guest to the other server and back to the original, and then verifying the compressed file; 7-Zip reports that many files are corrupted. I have not tested a PV guest, but my current need is to run many Windows servers, so I need to get this working. Of course I can manage without migration for now, by manually shutting down the domU, copying its Xen configuration to the other server, and starting it there...

Terveisin/Regards,
Pekka Panula, Net Servant Oy
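For completeness, the manual workaround described in the last paragraph, and the live migration it is meant to replace, would look roughly like this from a dom0 shell. The guest name and hostnames are placeholders only.

    # Cold move (the manual fallback): stop the guest, copy its config, restart it.
    xm shutdown -w win2003
    scp /etc/xen/win2003.cfg serverB:/etc/xen/
    ssh serverB xm create /etc/xen/win2003.cfg

    # Live migration, which is what should work over the shared LUN
    # (xend-relocation-server must be enabled on the target dom0):
    xm migrate --live win2003 serverB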
John Madden
2008-Oct-16 13:06 UTC
Re: [Xen-users] Shared SAN disk LUN between 2 servers and migration problem
> > Do I need one more layer to get migration working? It seems that if I
> > migrate between servers I get filesystem corruption.
>
> I run a similar setup but for PV Linux guests. (Live) migration works.
> I've never tried it with HVM, but can't see any obvious obstacle.

Agreed. No cluster filesystem or extra locking (DLM, etc.) is needed for live migration itself to work in a shared-disk scenario. I've never tried it with HVM either, though.

John
--
John Madden
Sr. UNIX Systems Engineer
Ivy Tech Community College of Indiana
jmadden@ivytech.edu
Stephan Seitz
2008-Oct-16 13:20 UTC
Re: [Xen-users] Shared SAN disk LUN between 2 servers and migration problem
Pekka.Panula@sofor.fi schrieb:

> But why would I need GFS if only one server is accessing the shared LUN
> at a time? [...] Other Dom0s do not touch my multipath device at all.
>
> So am I lost here? Does this not work, i.e. does the Linux/Xen/multipath
> stack not sync all pending operations to the block device when Xen
> migrates a guest to another server? Why doesn't Xen tell the OS to sync
> data to disk when migration happens, or does it?

You might want to look for "block-iscsi". You'll find it via Google or MarkMail, as it has been posted on this ML as well. This script connects a LUN on domU startup and disconnects it on powerdown. The block scripts in general are aware of domU migration. The only thing you can't do without a cluster filesystem is run a remotely connected machine on more than one dom0.

--
Stephan Seitz
Senior System Administrator

netz-haut e.K.
multimediale kommunikation
zweierweg 22
97074 würzburg
fon: +49 931 2876247
fax: +49 931 2876248
web: www.netz-haut.de
registriergericht: amtsgericht würzburg, hra 5054
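Stephan's block-script suggestion, applied to a guest config, would look roughly like the snippet below: the "iscsi:" prefix tells xend to invoke /etc/xen/scripts/block-iscsi when the device is attached or detached. The IQN is made up, and the actual script he refers to is not reproduced here.

    # Hypothetical disk line using a custom block script. On 'xm create' the
    # script logs the LUN in; on shutdown (or after migration) it logs it out,
    # so only the dom0 currently running the guest holds the device.
    disk = [ 'iscsi:iqn.2008-10.com.example:shared-lun,hda,w' ]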
Simon Talbot
2008-Oct-20 17:37 UTC
RE: [Xen-users] Shared SAN disk LUN between 2 servers and migration problem
We have a 6-server Xen cluster with shared SAN storage (directly presenting LUNs to the host OSs). Live migration works well for both PV and HVM domains; the only times we have observed filesystem corruption were when heartbeat mistakenly brought the same VM up on two machines.

Apart from that, we had a test harness perform around 10,000 live migrations randomly around the 6-server cluster without a single incident; the domain ran perfectly throughout the test.

One scenario I could imagine problems in, however, is if your OS/HBA is caching write operations to the SAN. Say, for example, the domain is started on Server A and heavy I/O is taking place. You live migrate to Server B while dirty data is still in Server A's cache, being written to the SAN. Server B then starts running and continues the heavy writes before Server A is fully flushed; this potentially leads to the two servers effectively fighting over the same device for a short period until Server A's cache is fully flushed.

I must stress that we have **NOT** experienced this problem, and in general the host OS cache is fully flushed as part of the live migration, but it could be a problem in situations where something in the data path is performing caching that the host OS/Xen has no knowledge of.

What servers/SAN/HBAs etc. are you using?

Simon

Simon Talbot MEng, ACGI
(Chief Engineer)
Tel: 020 3161 6001
Fax: 020 3161 6011
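If dom0-side caching is the suspect, one crude way to rule it out is to force a flush on the source host immediately before migrating. This is a sketch only; the device path follows the earlier example and the exact behaviour depends on the kernel version.

    # On the source dom0, just before 'xm migrate --live':
    sync                                          # push dirty pages out of the page cache
    blockdev --flushbufs /dev/mapper/sharedLUN    # invalidate buffered blocks for the LUN
    grep Dirty /proc/meminfo                      # should report (close to) 0 kB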
Pekka.Panula@sofor.fi
2008-Oct-21 08:07 UTC
RE: [Xen-users] Shared SAN disk LUN between 2 servers and migration problem
"Simon Talbot" <simont@nse.co.uk> wrote on 20.10.2008 20:37:57:

> We have a 6-server Xen cluster with shared SAN storage (directly
> presenting LUNs to the host OSs). Live migration works well for both PV
> and HVM domains [...]
>
> I must stress that we have **NOT** experienced this problem, and in
> general the host OS cache is fully flushed as part of the live
> migration, but it could be a problem in situations where something in
> the data path is performing caching that the host OS/Xen has no
> knowledge of.
>
> What servers/SAN/HBAs etc. are you using?

I also tested with a paravirtualized CentOS 5.2 32-bit guest and did not get any corruption with 7-Zip or with zip. I migrated at least 4 times during compression, then verified the compressed files, and they all checked out OK.

I did get corruption with Windows 2003 Standard Server, both with the standard HVM drivers and with the GPLPV drivers. The compression program was the latest 7-Zip beta, I think; maybe 7-Zip just does not like virtualized servers.

Storage is an IBM DS-series array and the Xen servers are BladeCenter H HS21 blades, so the HBA is "QLogic Corp. ISP2422-based 4Gb Fibre Channel to PCI-X HBA (rev 02)", driver 8.02.00-k5-rhel5.2-04.

Is anyone else running Xen on a similar system, and do your HVM Windows domUs migrate without filesystem corruption?

BTW: I also tested Xen 3.3.0, but interestingly you can't migrate between it and Xen 3.2.1; is this a bug or should it work?

Terveisin/Regards,
Pekka Panula, Net Servant Oy
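For anyone wanting to report the same details for their own hosts, these stock commands (nothing here is specific to the setup above) show the HBA, driver version, and multipath topology:

    lspci | grep -i fibre                     # identify the FC HBA
    modinfo qla2xxx | grep '^version'         # QLogic driver version
    cat /sys/class/fc_host/host*/port_name    # HBA WWPNs
    multipath -ll                             # paths behind each device-mapper LUN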
Alain Barthe
2008-Oct-21 09:48 UTC
Re: [Xen-users] Shared SAN disk LUN between 2 servers and migration problem
2008/10/21 <Pekka.Panula@sofor.fi>:

> BTW: I also tested Xen 3.3.0, but interestingly you can't migrate
> between it and Xen 3.2.1; is this a bug or should it work?

I posted a question a few weeks ago about live migration between different hypervisor versions. The answer, as I understood it, is that it should not be expected to work in the general case, but it may work for some version pairs.

I tested with hypervisor versions 3.0.3, 3.1.2 and 3.2.0. No cross-version live migration worked, except from 3.0.3 to 3.1.2 (but not the reverse).

Cheers,

Alain.
Peter Van Biesen
2008-Oct-23 14:36 UTC
Re: [Xen-users] Shared SAN disk LUN between 2 servers and migration problem
Hi,

I'm running a Xen cluster with paravirtualised Windows XP machines. The domU disks are LUNs (no CLVM or such), shared between the cluster members (dom0s). The HBA cards are QLogics, the disk array an HP XP128. The Xen version is 3.2-1, the dom0 kernel is 2.6.18 from Debian stable. The multipath daemon is running.

Live migration works perfectly for non-HVM domUs. Windows domUs, however, do not live migrate in this setup (I don't know why). GPLPV drivers corrupt the filesystem upon shutdown of the domU (as long as it is kept running, I haven't noticed corruption).

So basically, as long as you keep your Windows HVM domU running with the block device it started with, you're OK. Switch (either from VBD to ioemu, or from machine to machine with a live migration) and you're in trouble.

FYI,

Peter.

--
Peter Van Biesen
Sysadmin VAPH
tel: +32 (0) 2 225 85 70
fax: +32 (0) 2 225 85 88
e-mail: peter.vanbiesen@vaph.be
PGP: http://www.vaph.be/pgpkeys