HI !

I have tested the following scenario: I created a ZFS filesystem as part of HAStoragePlus in Sun Cluster 3.2, Solaris 11/06. Currently I have only one FC HBA per server.

1. There is no I/O to the ZFS mountpoint. I disconnected the FC cable. The ZFS filesystem still shows as mounted (because of no I/O to the filesystem). I touch a file. Still OK. I did a "sync", and only then did the node panic and the ZFS filesystem fail over to the other cluster node. However, the file I touched is lost!

2. With ZFS mounted on one cluster node, I created a file and kept updating it every second, then I removed the FC cable. The writes still continued to the filesystem; after 10 seconds I put the FC cable back and my writes continued. No failover of ZFS happened.

It seems that all I/O is going to some cache. Any suggestions on what's going wrong here and what the solution is?

thanks

Ayaz Anjum
Ayaz Anjum wrote:
> HI !
>
> I have tested the following scenario
>
> created a zfs filesystem as part of HAStoragePlus in SunCluster 3.2,
> Solaris 11/06
>
> Currently i am having only one fc hba per server.
>
> 1. There is no IO to the zfs mountpoint. I disconnected the FC cable.
> Filesystem on zfs still shows as mounted (because of no IO to
> filesystem). I touch a file. Still ok. i did a "sync" and only then the
> node panicked and zfs filesystem failed over to other cluster node.
> however my file which i touched is lost !!!!

This is to be expected, I'd say. HAStoragePlus is primarily a wrapper over ZFS that manages the import/export and mount/unmount. It cannot and does not provide for a retry of pending I/Os. The 'touch' would have been part of a ZFS transaction group that never got committed, and it stays lost when the pool is imported on the other node. In other words, it does not provide the same kind of high availability that, say, PxFS provides.

> 2. with zfs mounted on one cluster node, i created a file and keeps it
> updating every second, then i removed the fc cable, the writes are still
> continuing to the file system, after 10 seconds i have put back the fc
> cable and my writes continues, no failover of zfs happens.
>
> seems that all IO are going to some cache. Any suggestions on whts going
> wrong over here and whts the solution to this.

I don't know for sure. But my guess is, if you do an fsync after the writes and wait for the fsync to complete, then you might get some action. fsync should fail. ZFS could panic the node. If it does, you will see a failover.

Hope that helps.

-Manoj
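[For illustration, a minimal C sketch of the write-then-fsync test Manoj describes: write once per second, then fsync and check the return value. The file path under the HAStoragePlus-managed mountpoint is a made-up placeholder, not taken from any cluster configuration.]

    #include <fcntl.h>
    #include <stdio.h>
    #include <unistd.h>

    int main(void)
    {
        /* Hypothetical file under the HAStoragePlus-managed ZFS mountpoint. */
        int fd = open("/global/zpool1/heartbeat",
                      O_WRONLY | O_CREAT | O_APPEND, 0644);
        if (fd < 0) {
            perror("open");
            return 1;
        }
        for (;;) {
            if (write(fd, "tick\n", 5) != 5) {
                perror("write");  /* buffered writes may keep "succeeding"
                                     after the cable pull */
                break;
            }
            if (fsync(fd) != 0) {
                perror("fsync");  /* this is where a lost storage path should
                                     surface -- on the cluster it may instead
                                     panic the node and trigger the failover */
                break;
            }
            sleep(1);
        }
        close(fd);
        return 0;
    }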
Hello Manoj,

Thursday, March 8, 2007, 7:10:57 AM, you wrote:

MJ> Ayaz Anjum wrote:
>> 2. with zfs mounted on one cluster node, i created a file and keeps it
>> updating every second, then i removed the fc cable, the writes are still
>> continuing to the file system, after 10 seconds i have put back the fc
>> cable and my writes continues, no failover of zfs happens.
>>
>> seems that all IO are going to some cache. Any suggestions on whts going
>> wrong over here and whts the solution to this.

MJ> I don't know for sure. But my guess is, if you do a fsync after the
MJ> writes and wait for the fsync to complete, then you might get some
MJ> action. fsync should fail. zfs could panic the node. If it does, you
MJ> will see a failover.

Exactly.
Files must be opened with O_DSYNC, or fsync should be used. If you don't, then writes are expected to be buffered and later put to disk, which in your case has to fail.

If you want to guarantee that when your application writes something it's on stable storage, then use proper semantics as shown above.

--
Best regards,
Robert                          mailto:rmilkowski at task.gda.pl
                                http://milek.blogspot.com
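[A sketch of the O_DSYNC alternative Robert mentions, with an illustrative file name: opening with O_DSYNC makes each write() return only once the data is on stable storage, so a dead path fails the write instead of letting it sit in a cache.]

    #include <fcntl.h>
    #include <stdio.h>
    #include <unistd.h>

    int main(void)
    {
        /* O_DSYNC: every write() is synchronous -- data reaches stable
           storage before the call returns. The path is illustrative. */
        int fd = open("/zpool1/data.log", O_WRONLY | O_CREAT | O_DSYNC, 0644);
        if (fd < 0) {
            perror("open");
            return 1;
        }
        if (write(fd, "record\n", 7) != 7)
            perror("write");  /* with the FC cable pulled, this should fail
                                 (or block) rather than succeed from cache */
        close(fd);
        return 0;
    }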
robert,
this applies only if you have full control of the application, for sure... but how do you do it if you don't own the application? Can you mount zfs with a forcedirectio flag?

selim

On 3/8/07, Robert Milkowski <rmilkowski at task.gda.pl> wrote:
> Hello Manoj,
>
> Thursday, March 8, 2007, 7:10:57 AM, you wrote:
>
> MJ> Ayaz Anjum wrote:
>
> >> 2. with zfs mounted on one cluster node, i created a file and keeps it
> >> updating every second, then i removed the fc cable, the writes are still
> >> continuing to the file system, after 10 seconds i have put back the fc
> >> cable and my writes continues, no failover of zfs happens.
> >>
> >> seems that all IO are going to some cache. Any suggestions on whts going
> >> wrong over here and whts the solution to this.
>
> MJ> I don't know for sure. But my guess is, if you do a fsync after the
> MJ> writes and wait for the fsync to complete, then you might get some
> MJ> action. fsync should fail. zfs could panic the node. If it does, you
> MJ> will see a failover.
>
> Exactly.
> Files must be opened with O_DSYNC, or fsync should be used.
> If you don't then writes are expected to be buffered and later put to
> disks which in your case has to fail.
>
> If you want to guarantee that when your applications writes something
> it's on stable storage then use proper semantics like shown above.
>
> --
> Best regards,
> Robert                          mailto:rmilkowski at task.gda.pl
>                                 http://milek.blogspot.com
Hello Selim,

Thursday, March 8, 2007, 8:08:50 PM, you wrote:

SD> robert,
SD> this applies only if you have full control on the application for sure
SD> ..but how do you do it if you don't own the application ... can you
SD> mount zfs with forcedirectio flag ?

No.

--
Best regards,
Robert                          mailto:rmilkowski at task.gda.pl
                                http://milek.blogspot.com
On 8 Mar 2007, at 20:08, Selim Daoud wrote:

> robert,
> this applies only if you have full control on the application for sure
> ..but how do you do it if you don't own the application ... can you
> mount zfs with forcedirectio flag ?
> selim

ufs directio and O_DSYNC are different things.
Would a forcesync flag be something of interest to the community?

-r
> Would a forcesync flag be something of interest to the community?

Yes.
it's an absolute necessity

On 3/8/07, Roch Bourbonnais <Roch.Bourbonnais at sun.com> wrote:
>
> On 8 Mar 2007, at 20:08, Selim Daoud wrote:
>
> > robert,
> > this applies only if you have full control on the application for sure
> > ..but how do you do it if you don't own the application ... can you
> > mount zfs with forcedirectio flag ?
> > selim
>
> ufs directio and O_DSYNC are different things.
> Would a forcesync flag be something of interest to the community ?
>
> -r
Any details on the use case? Such an option will clearly make any filesystem just crawl on so many common operations. So it's rather interesting to know who/what is ready to sacrifice so much performance, and in exchange for what.

On 8 Mar 2007, at 21:19, Bruce Shaw wrote:

>> Would a forcesync flag be something of interest to the community ?
>
> Yes.
HI !

I have some concerns here. From my experience in the past, touching a file (doing some I/O) would cause a UFS filesystem to fail over, unlike ZFS, where it did not! Why is the behaviour of ZFS different from UFS? Is this not compromising data integrity?

thanks

Ayaz

From: Robert Milkowski <rmilkowski at task.gda.pl>
To: Manoj Joseph <manoj at clusterfs.com>, Ayaz Anjum <anjum at qp.com.qa>, zfs-discuss at opensolaris.org
Subject: Re[2]: [zfs-discuss] writes lost with zfs !
Date: 03/08/2007 02:34:20 PM

Hello Manoj,

Thursday, March 8, 2007, 7:10:57 AM, you wrote:

MJ> Ayaz Anjum wrote:
>> 2. with zfs mounted on one cluster node, i created a file and keeps it
>> updating every second, then i removed the fc cable, the writes are still
>> continuing to the file system, after 10 seconds i have put back the fc
>> cable and my writes continues, no failover of zfs happens.
>>
>> seems that all IO are going to some cache. Any suggestions on whts going
>> wrong over here and whts the solution to this.

MJ> I don't know for sure. But my guess is, if you do a fsync after the
MJ> writes and wait for the fsync to complete, then you might get some
MJ> action. fsync should fail. zfs could panic the node. If it does, you
MJ> will see a failover.

Exactly.
Files must be opened with O_DSYNC, or fsync should be used.
If you don't then writes are expected to be buffered and later put to
disks which in your case has to fail.

If you want to guarantee that when your applications writes something
it's on stable storage then use proper semantics like shown above.

--
Best regards,
Robert                          mailto:rmilkowski at task.gda.pl
                                http://milek.blogspot.com
> I have some concerns here, from my experience in the past, touching a
> file ( doing some IO ) will cause the ufs filesystem to failover, unlike
> zfs where it did not ! Why the behaviour of zfs different than ufs ?

UFS always does synchronous metadata updates. So a 'touch' that creates a file is going to require a metadata write.

ZFS writes may not necessarily hit the disk until a transaction group flush.

> is not this compromising data integrity ?

It should not. Is there a scenario that you are worried about?

--
Darren Dunham                                           ddunham at taos.com
Senior Technical Consultant         TAOS            http://www.taos.com/
Got some Dr Pepper?                           San Francisco, CA bay area
         < This line left intentionally blank to confuse you. >
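[A hedged sketch of why Darren's distinction matters to a 'touch': on a POSIX filesystem, to be sure a newly created file survives a crash you must fsync the file itself, and, portably, the parent directory as well. The paths and function name below are illustrative, not from any of the posters' systems.]

    #include <fcntl.h>
    #include <stdio.h>
    #include <unistd.h>

    /* "Durable touch": create a file and commit the creation before
       returning. Paths are illustrative. */
    int durable_touch(const char *file, const char *dir)
    {
        int fd = open(file, O_WRONLY | O_CREAT, 0644);
        if (fd < 0 || fsync(fd) != 0) {   /* commit the file itself */
            perror(file);
            if (fd >= 0) close(fd);
            return -1;
        }
        close(fd);

        int dfd = open(dir, O_RDONLY);    /* portably, the new directory entry
                                             is durable only after fsync on
                                             the parent directory as well */
        if (dfd < 0 || fsync(dfd) != 0) {
            perror(dir);
            if (dfd >= 0) close(dfd);
            return -1;
        }
        close(dfd);
        return 0;
    }

    int main(void)
    {
        return durable_touch("/zpool1/flag", "/zpool1") ? 1 : 0;
    }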
HI !

Well, as per my actual post, I created a ZFS filesystem as part of Sun Cluster HAStoragePlus and then disconnected the FC cable. Since there was no active I/O, the failure of the disk was not detected. Then I touched a file in the ZFS filesystem and it went fine; only after that, when I did a sync, did the node panic and the ZFS filesystem fail over to the other node. On the other node the file I touched is not there in the same ZFS filesystem, hence I am saying that data is lost. I am planning to deploy ZFS in a production NFS environment with over 2 TB of data where users are constantly updating files. Hence my concerns about data integrity. Please explain.

thanks

Ayaz Anjum

Darren Dunham <ddunham at taos.com>
Sent by: zfs-discuss-bounces at opensolaris.org
03/12/2007 05:45 AM

To: zfs-discuss at opensolaris.org
Subject: Re: Re[2]: [zfs-discuss] writes lost with zfs !

> > I have some concerns here, from my experience in the past, touching a
> > file ( doing some IO ) will cause the ufs filesystem to failover, unlike
> > zfs where it did not ! Why the behaviour of zfs different than ufs ?
>
> UFS always does synchronous metadata updates. So a 'touch' that creates
> a file is going to require a metadata write.
>
> ZFS writes may not necessarily hit the disk until a transaction group
> flush.
>
> > is not this compromising data integrity ?
>
> It should not. Is there a scenario that you are worried about?
On 11-Mar-07, at 11:12 PM, Ayaz Anjum wrote:

> HI !
>
> Well as per my actual post, i created a zfs file as part of Sun
> cluster HAStoragePlus, and then disconned the FC cable, since there
> was no active IO hence the failure of disk was not detected, then i
> touched a file in the zfs filesystem, and it went fine, only after
> that when i did sync then the node panicked and zfs filesystem is
> failed over to other node. On the othernode the file i touched is
> not there in the same zfs file system hence i am saying that data
> is lost. I am planning to deploy zfs in a production NFS
> environment with above 2TB of Data where users are constantly
> updating file. Hence my concerns about data integrity.
>
> Please explain.

I believe Robert and Darren have offered sufficient explanations: You cannot be assured of committed data unless you've synced it. You are only risking data loss if your users and/or applications assume data is committed without seeing a completed sync, which would be a design error. This applies to any filesystem.

--Toby
Heya,

> I believe Robert and Darren have offered sufficient explanations: You
> cannot be assured of committed data unless you've synced it. You are
> only risking data loss if your users and/or applications assume data
> is committed without seeing a completed sync, which would be a design
> error. This applies to any filesystem.

Which, from memory, with NFS would be the 'sync' exports option.

Just my 2c,

Stuart
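[Stuart is presumably thinking of the Linux NFS server's export option; Solaris shares are configured differently, via share(1M). On a Linux server, a hedged /etc/exports entry might look like this, with the path and client range invented for the example:]

    # /etc/exports (Linux NFS server) -- 'sync' forces the server to commit
    # writes to stable storage before replying to the client
    /export/data  192.168.1.0/24(rw,sync,no_subtree_check)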
On 11-Mar-07, at 11:22 PM, Stuart Low wrote:

> Heya,
>
>> I believe Robert and Darren have offered sufficient explanations: You
>> cannot be assured of committed data unless you've synced it. You are
>> only risking data loss if your users and/or applications assume data
>> is committed without seeing a completed sync, which would be a design
>> error. This applies to any filesystem.
>
> Which, from memory with NFS would be the 'sync' exports option.

Or at application discretion with fsync()/fdatasync()? (end of transaction). While I'm not an NFS guru, I'm pretty sure this has been previously discussed on this list.

If I understand the OP's post correctly, his issue is not about ZFS but more about not realising he actually needs commit semantics (as Robert quickly pointed out), coupled with not realising that ALL filesystems will probably lose uncommitted data if you yank the cable.

--Toby
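[A minimal sketch of the application-discretion pattern Toby mentions: call fdatasync (or fsync) at each transaction boundary and treat a failure as "not committed". The helper function name is invented for illustration.]

    #include <stdio.h>
    #include <unistd.h>

    /* Hypothetical commit helper: returns 0 only once the data written to
       fd is on stable storage. fdatasync() flushes the file data (and the
       metadata needed to retrieve it); fsync() additionally flushes
       attributes such as timestamps. */
    int commit_transaction(int fd)
    {
        if (fdatasync(fd) != 0) {
            perror("fdatasync");
            return -1;    /* do NOT acknowledge the transaction */
        }
        return 0;         /* safe to tell the caller the data is durable */
    }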
Ayaz,

What does the panic stack look like? Did you have DPM (Disk Path Monitoring) enabled in both the cases (UFS/ZFS)?

Also, from what I have seen, pulling the FC cable (or a similar fault) to simulate a disk fault has caused ZFS to hang or panic. I don't think such a test is the right way to test disk failures. Experts, please correct me here if needed.

Thanks and regards,
Sanjeev.

Ayaz Anjum wrote:
> HI !
>
> Well as per my actual post, i created a zfs file as part of Sun
> cluster HAStoragePlus, and then disconned the FC cable, since there
> was no active IO hence the failure of disk was not detected, then i
> touched a file in the zfs filesystem, and it went fine, only after
> that when i did sync then the node panicked and zfs filesystem is
> failed over to other node. On the othernode the file i touched is not
> there in the same zfs file system hence i am saying that data is lost.
> I am planning to deploy zfs in a production NFS environment with above
> 2TB of Data where users are constantly updating file. Hence my
> concerns about data integrity.

--
Solaris Revenue Products Engineering,
India Engineering Center,
Sun Microsystems India Pvt Ltd.
Tel: x27521 +91 80 669 27521
Ayaz Anjum and others,

I think once you move into NFS over TCP in a client-server environment, the chance of lost data is significantly higher than from just disconnecting a cable.

Scenario: before a client flushes a delayed write from its volatile DRAM client cache, the client reboots; and/or an asynchronous or delayed write is done with no error on the write, and the error is missed on the close because the programmer didn't perform an fsync on the fd before the close and/or didn't expect that a close might fail; and/or the TCP connection is lost and the data is not transferred.

Thus, I know of very few FSs that can guarantee against data loss. What most modern FSs try to prevent is data corruption and FS corruption.

However, I am surprised that you seem to indicate that no hardware indication is/was present to show that some form of hardware degradation/failure had occurred.

Mitchell Erblich
----------------

On 11-Mar-07, Toby Thain wrote:
> I believe Robert and Darren have offered sufficient explanations: You
> cannot be assured of committed data unless you've synced it. You are
> only risking data loss if your users and/or applications assume data
> is committed without seeing a completed sync, which would be a design
> error. This applies to any filesystem.
>
> --Toby
Ayaz,

Ayaz Anjum wrote:
> HI !
>
> I have some concerns here, from my experience in the past, touching a
> file ( doing some IO ) will cause the ufs filesystem to failover, unlike
> zfs where it did not ! Why the behaviour of zfs different than ufs ? is
> not this compromising data integrity ?

As others have explained, until a sync is done, or unless the file is opened to do 'sync writes', a write is not guaranteed to be on disk. If the node fails before the disk commit, the data can be lost. Applications are written with this in mind.

While ZFS and UFS do lots of things differently, the above applies to both of them and to all POSIX filesystems in general.

Could you tell us more about how the UFS failover happened? Did you see a UFS panic? Did the Sun Cluster disk path monitor cause the failover?

Regards,
Manoj
Did you run "touch" from a client ? ZFS and UFS are different in general but in response to a local "touch" command neither need to generate immediate I/O and in response to a client "touch" both do. -r Ayaz Anjum writes: > HI ! > > Well as per my actual post, i created a zfs file as part of Sun cluster > HAStoragePlus, and then disconned the FC cable, since there was no active > IO hence the failure of disk was not detected, then i touched a file in > the zfs filesystem, and it went fine, only after that when i did sync then > the node panicked and zfs filesystem is failed over to other node. On the > othernode the file i touched is not there in the same zfs file system > hence i am saying that data is lost. I am planning to deploy zfs in a > production NFS environment with above 2TB of Data where users are > constantly updating file. Hence my concerns about data integrity. Please > explain. > > thaks > > Ayaz Anjum > > > > > Darren Dunham <ddunham at taos.com> > Sent by: zfs-discuss-bounces at opensolaris.org > 03/12/2007 05:45 AM > > To > zfs-discuss at opensolaris.org > cc > > Subject > Re: Re[2]: [zfs-discuss] writes lost with zfs ! > > > > > > > > I have some concerns here, from my experience in the past, touching a > > file ( doing some IO ) will cause the ufs filesystem to failover, unlike > > > zfs where it did not ! Why the behaviour of zfs different than ufs ? > > UFS always does synchronous metadata updates. So a ''touch'' that creates > a file is going to require a metadata write. > > ZFS writes may not necessarily hit the disk until a transaction group > flush. > > > is not this compromising data integrity ? > > It should not. Is there a scenario that you are worried about? > > -- > Darren Dunham ddunham at taos.com > Senior Technical Consultant TAOS http://www.taos.com/ > Got some Dr Pepper? San Francisco, CA bay area > < This line left intentionally blank to confuse you. > > _______________________________________________ > zfs-discuss mailing list > zfs-discuss at opensolaris.org > http://mail.opensolaris.org/mailman/listinfo/zfs-discuss > > > > > > > > > > -------------------------------------------------------------------------------------------------- > > Confidentiality Notice : This e-mail and any attachments are > confidential to the addressee and may also be privileged. If you are > not the addressee of this e-mail, you may not copy, forward, disclose or > otherwise use it in any way whatsoever. If you have received this e-mail > by mistake, please e-mail the sender by replying to this message, and > delete the original and any print out thereof. > > <br><font size=3 face="sans-serif">HI !</font> > <br> > <br><font size=3 face="sans-serif">Well as per my actual post, i created > a zfs file as part of Sun cluster HAStoragePlus, and then disconned the > FC cable, since there was no active IO hence the failure of disk was not > detected, then i touched a file in the zfs filesystem, and it went fine, > only after that when i did sync then the node panicked and zfs filesystem > is failed over to other node. On the othernode the file i touched is not > there in the same zfs file system hence i am saying that data is lost. > I am planning to deploy zfs in a production NFS environment with above > 2TB of Data where users are constantly updating file. Hence my concerns > about data integrity. 