I'm interested in ZFS redundancy when vdevs are "remote". The idea, for example, is to use remote vdev mirroring as a cluster FS layer, or for occasional backups.

Has anybody tried to mount an iSCSI target as a ZFS device? Are machine reboots / connectivity problems gracefully managed by ZFS?

Hope Solaris (not Express) will be able to act as an iSCSI target soon :-)

- --
Jesus Cea Avion
jcea at argo.es - http://www.argo.es/~jcea/
jabber / xmpp:jcea at jabber.org
"Things are not so easy"
"My name is Dump, Core Dump"
"El amor es poner tu felicidad en la felicidad de otro" - Leibniz
On Dec 15, 2006, at 7:37 AM, Jesus Cea wrote:

> Has anybody tried to mount an iSCSI target as a ZFS device? Are machine
> reboots / connectivity problems gracefully managed by ZFS?

I use the iSCSI target as a ZFS device quite often these days. I've got several machines which only have a single disk, and a test suite for the iSCSI target which requires access to the complete device. So I run the target on another machine and have the initiator locate that device.

For remote replication there could be an issue, and it would very much depend on the link speed. The target is not aware of the link speed, but it can handle large numbers of outstanding commands. It's quite possible for the initiator to send commands that will time out before the data can be returned. The initiator therefore needs to determine how fast data is being returned and throttle itself so that commands don't time out.
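For anyone wanting to try this layout, a minimal sketch of pointing the Solaris 10 initiator at a remote target and handing the resulting LUN to ZFS might look like the following. This needs a live target to run against, and the IP address, pool name, and device name below are placeholders, not values from this thread:

```shell
# Tell the initiator where to discover targets (address is a placeholder)
iscsiadm add discovery-address 192.168.1.50:3260
iscsiadm modify discovery --sendtargets enable

# List the discovered targets; each LUN shows up as an ordinary disk device
iscsiadm list target -S

# Hand the whole device to ZFS (device name is illustrative)
zpool create tank c2t1d0
```

Once the pool is created, ZFS treats the iSCSI LUN like any other vdev; the interesting failure modes are the ones discussed below, when the link goes away underneath it.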
The timeouts here are related to the SCSI I/O stack and have nothing to do with the network layer.

> Hope Solaris (not Express) will be able to act as an iSCSI target soon :-)
>
> _______________________________________________
> zfs-discuss mailing list
> zfs-discuss at opensolaris.org
> http://mail.opensolaris.org/mailman/listinfo/zfs-discuss

----
Rick McNeal

"If ignorance is bliss, this lesson would appear to be a deliberate attempt on your part to deprive me of happiness, the pursuit of which is my unalienable right according to the Declaration of Independence. I therefore assert my patriotic prerogative not to know this material. I'll be out on the playground." -- Calvin
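The stack-level timeout Rick refers to is the SCSI disk driver's per-command timeout. On Solaris this has historically been adjustable through /etc/system; a hypothetical fragment is shown below (the tunable name and default should be verified against your specific release before use):

```
* /etc/system fragment -- raise the sd driver's per-command I/O timeout
* (in seconds) to give a slow iSCSI link more time before commands
* are declared failed. Takes effect after a reboot.
set sd:sd_io_time = 120
```

Raising the timeout only papers over a slow link; as noted above, the cleaner fix is for the initiator to throttle the number of outstanding commands.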
On Fri, 15 Dec 2006, Jesus Cea wrote:

> Has anybody tried to mount an iSCSI target as a ZFS device? Are machine
> reboots / connectivity problems gracefully managed by ZFS?

I've been using it with Solaris 10 6/06 on a T1000, with a pair of Promise VTrak 500i iSCSI boxes as the targets. I set up the VTraks identically (RAID5E-something, ~4TB each), filled one with about 3.5TB worth of data, then attached the other one as a mirror.

Shortly after the mirror resilvering began, performance dropped considerably ('zpool iostat 1' wrote a line of output once every 5 minutes). It took about 3 days to finish, during which the T1000 was basically unusable. (During that time, sendmail managed to syslog a few messages about how it was skipping the queue run because the load was at 200 :-)

Once the mirror was synced, I disconnected one of the iSCSI boxes (pulled the ethernet plug from one of the VTraks), did some I/O on the volume, and Solaris panicked. After it rebooted, I did a 'zpool scrub' and the T1000 again went into la-la land while the scrubbing occurred.

I've given up on the mirrored iSCSI idea for now, and am just using a single VTrak for backups. It's been stable; not the fastest thing, but I haven't attempted any tuning (no ZIL tweaking or anything like that), so I won't complain too much about speed.

I'd really like the 'panic-on-drive-failure/disappearance' behaviour to change. Now that U3 is out, I'll be giving it another try.

James
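The attach-then-resilver sequence described above can be sketched as follows. These commands need real devices behind them, and the pool and device names are illustrative placeholders, not the actual ones from the T1000 setup:

```shell
# Create the pool on the first iSCSI LUN, then attach the second
# as a mirror; ZFS starts resilvering the existing data immediately
zpool create backup c2t0d0
zpool attach backup c2t0d0 c3t0d0

# Watch resilver progress and throughput while it runs
zpool status backup
zpool iostat backup 5

# After the resilver completes (or after a fault), verify both halves
zpool scrub backup
```

Note that, per the report above, both the initial resilver and a post-fault scrub can monopolize I/O on the host for the duration.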
James W. Abendschan wrote:

> It took about 3 days to finish, during which the T1000 was basically
> unusable. (During that time, sendmail managed to syslog a few messages
> about how it was skipping the queue run because the load was at 200 :-)

Gulp!

> Once the mirror was synced, I disconnected one of the iSCSI boxes
> (pulled the ethernet plug from one of the VTraks), did some I/O
> on the volume, and Solaris panicked.

Ouch.

> I'd really like the 'panic-on-drive-failure/disappearance' behaviour
> to change. Now that U3 is out, I'll be giving it another try.

Please post your results. Thanks in advance.

- --
Jesus Cea Avion
jcea at argo.es - http://www.argo.es/~jcea/
James W. Abendschan wrote:

> Once the mirror was synced, I disconnected one of the iSCSI boxes
> (pulled the ethernet plug from one of the VTraks), did some I/O
> on the volume, and Solaris panicked. After it rebooted, I did a
> 'zpool scrub' and the T1000 again went into la-la land while the
> scrubbing occurred.

What did the backtrace show? Was it ZFS related? iSCSI related? Networking stack?