Our application Canary has approx 750 clients uploading to the server
every 10 mins, that's approx 108,000 gzip tarballs per day writing to
the /upload directory. The parser untars the tarball, which consists of
8 ASCII files, into the /archives directory. /app is our application and
tools (apache, tomcat, etc) directory. We also have batch jobs that run
throughout the day; I would say we read 2 to 3 times more than we write.

Since we have an alternate server, downtime or data loss is somewhat
acceptable. How can we best lay out our filesystems to get the most
performance?

directory info
--------------
/app      - 30G
/upload   - 10G
/archives - 35G

HW info
-------
System Configuration: Sun Microsystems sun4v Sun Fire T200
System clock frequency: 200 MHz
Memory size: 8184 Megabytes
CPU: 32 x 1000 MHz SUNW,UltraSPARC-T1
Disks: 4x68G
   Vendor:   FUJITSU
   Product:  MAV2073RCSUN72G
   Revision: 0301

We plan on using 1 disk for the OS, and the other 3 disks for the canary
filesystems /app, /upload, and /archives. Should I create 3 pools, i.e.

   zpool create canary_app c1t1d0
   zpool create canary_upload c1t2d0
   zpool create canary_archives c1t3d0

--OR--
create 1 pool using a dynamic stripe, i.e.

   zpool create canary c1t1d0 c1t2d0 c1t3d0

--OR--
create a single-parity raid-z pool, i.e.

   zpool create canary raidz c1t1d0 c1t2d0 c1t3d0

Which option gives us the best performance? If there's another method
that's not mentioned, please let me know.

Also, should we enable read/write cache on the OS disk as well as the
other disks?

Is build 9 in S10U2 RR?? If not, please point me to the OS image on
nana.eng.

Thanks,
karen
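For scale, the figures above work out roughly as follows (back-of-envelope
arithmetic on the numbers quoted, not measurements):

    750 clients x 6 uploads/hour x 24 hours    = 108,000 tarballs/day
    108,000 tarballs / 86,400 seconds         ~=  1.25 tarballs/second
    1.25/s x (1 tarball + 8 extracted files)  ~= 11 file creates/second
    reads at 2-3x the write volume             = a few dozen ops/second on top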
On Tue, Jul 25, 2006 at 03:39:11PM -0700, Karen Chau wrote:
> Which option gives us the best performance?  If there's another method
> that's not mentioned, please let me know.

You should create a single pool of a RAID-Z stripe. This will give you
approximately 140G of usable space, and if you turn on compression (on
everything but /upload, since that's already gzipped) you'll get much
more. You'll also have some data redundancy in case one of the disks
fails. Simply create 3 datasets, along the lines of:

   # zpool create canary raidz c1t1d0 c1t2d0 c1t3d0
   # zfs set mountpoint=none canary
   # zfs set compression=on canary
   # zfs create canary/app
   # zfs set mountpoint=/app canary/app
   # zfs create canary/upload
   # zfs set mountpoint=/upload canary/upload
   # zfs set compression=off canary/upload
   # zfs create canary/archives
   # zfs set mountpoint=/archives canary/archives

This will give you reasonable performance. If this isn't enough, then
you should probably do a 3-way mirror (which gives you redundancy but
perhaps not enough space), or a dynamic stripe (which gives you better
performance but no data redundancy). I would try both configurations,
benchmark your app, and see if raidz will actually be a bottleneck (my
guess is it won't be).

> Also, should we enable read/write cache on the OS as well as the other
> disks?

If you give zpool(1M) 'whole disks' (i.e. no 's0' slice number) and let
it label and use the disks, it will automatically turn on the write
cache for you.

- Eric

--
Eric Schrock, Solaris Kernel Development    http://blogs.sun.com/eschrock
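Once such a pool is built, the layout and the benefit of compression can
be sanity-checked with commands along these lines (an illustrative
sketch using standard zpool/zfs subcommands):

   # zpool status canary                        (confirm the raidz vdev is healthy)
   # zfs list -o name,used,available,mountpoint (confirm the three mountpoints)
   # zfs get compression,compressratio canary/archives
                                                (see what compression is buying)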
Given the amount of I/O, wouldn't it make sense to get more drives
involved, or something that has cache on the front end, or both? If
you're really pushing the amount of I/O you're alluding to - hard to
tell without all the details - then you're probably going to hit a
limitation on the drive IOPS. (Even with the cache on.)

Karen Chau wrote:
> Our application Canary has approx 750 clients uploading to the server
> every 10 mins, that's approx 108,000 gzip tarballs per day writing to
> the /upload directory. [...]
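As a rough yardstick (a common rule of thumb, not a measured figure for
these particular Fujitsu drives): a small-form-factor SAS spindle of
that era delivers on the order of 100-200 random IOPS, so three data
disks give perhaps 300-600 IOPS before any caching. The upload rate
itself is modest (about one tarball per second), but each untar turns
into many small writes plus metadata updates, and the batch-job reads
arrive in bursts, so per-spindle %b and asvc_t in iostat are the numbers
to watch.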
Hi Torrey; we are the cobbler's kids. We borrowed this T2000 from
Niagara engineering after we did some performance tests for them. I am
trying to get a Thumper to run this data set; that could take up to 3-4
months. Today we are watching 750 Sun Ray servers and 30,000 employees.
Let's see:

  1) Solaris 10
  2) ZFS version 6
  3) T2000 32x1000 with the poorer performing drives that come with the
     Niagara

We need a short term solution. Niagara engineering has given us two more
of the internal drives so we can max out the Niagara with 4 internal
drives. This is the hardware we need to use this week. When we get a new
box and more drives we will reconfigure.

Our graphs have 5000 data points per month, 140 data points per day. We
can stand to lose data.

My suggestion was one drive as the system volume and the remaining three
drives as one big ZFS volume, probably raidz.

thanks
sean

Torrey McMahon wrote:
> Given the amount of I/O, wouldn't it make sense to get more drives
> involved, or something that has cache on the front end, or both? [...]
Karen and Sean,

You mention ZFS version 6; do you mean that you are running s10u2_06? If
so, then you definitely want to upgrade to the RR version of s10u2,
which is s10u2_09a.

Additionally, I've just putback the latest feature set and bugfixes,
which will be part of s10u3_03. There were some additional performance
fixes which may really benefit you, plus it will provide hot spares
support. Once this build is available I would highly recommend that you
guys take it for a spin (it works great on Thumper).

Thanks,
George

Sean Meighan wrote:
> Hi Torrey; we are the cobbler's kids. We borrowed this T2000 from
> Niagara engineering after we did some performance tests for them. [...]
> My suggestion was one drive as the system volume and the remaining
> three drives as one big ZFS volume, probably raidz.
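For reference, the running build and the on-disk ZFS version can be
checked with the same commands that show up later in this thread:

   # cat /etc/release          (which Solaris build is installed)
   # zpool upgrade             (reports the ZFS on-disk version of all pools)
   # zpool upgrade <pool>      (one-way upgrade of a pool's on-disk format)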
On Tue, Jul 25, 2006 at 03:54:22PM -0700, Eric Schrock wrote:
>
> If you give zpool(1M) 'whole disks' (i.e. no 's0' slice number) and let
> it label and use the disks, it will automatically turn on the write
> cache for you.

What if you can't give ZFS whole disks? I run snv_38 on the Optiplex
GX620 on my desk at work and I run snv_40 on the Latitude D610 that I
carry with me. In both cases the machines only have one disk, so I need
to split it up for UFS for the OS and ZFS for my data. How do I turn on
write cache for partial disks?

-brian
Brian Hechinger wrote On 07/26/06 06:49,:
> What if you can't give ZFS whole disks?  I run snv_38 on the Optiplex
> GX620 on my desk at work and I run snv_40 on the Latitude D610 that I
> carry with me.  In both cases the machines only have one disk, so I need
> to split it up for UFS for the OS and ZFS for my data.  How do I turn on
> write cache for partial disks?
>
> -brian

You can't enable write caching for just part of the disk. We don't
enable it for slices because UFS (and other file systems) doesn't do
write cache flushing, and so could get corruption on power failure. I
suppose if you know the disk only contains zfs slices then write caching
could be manually enabled using "format -e" -> cache -> write_cache ->
enable.

Neil
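For reference, the interactive path Neil describes looks roughly like
this (an illustrative transcript; exact menu wording can vary between
builds):

   # format -e c1t1d0
   format> cache
   cache> write_cache
   write_cache> display
   Write Cache is disabled
   write_cache> enable
   write_cache> quit
   cache> quit
   format> quit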
Jesus Cea
2006-Jul-26 17:06 UTC
Write cache (was: Re: [zfs-discuss] How to best layout our filesystems)
Neil Perrin wrote:
> I suppose if you know
> the disk only contains zfs slices then write caching could be
> manually enabled using "format -e" -> cache -> write_cache -> enable

When will we have write cache control over ATA/SATA drives? :-).

--
Jesus Cea Avion      jcea at argo.es      http://www.argo.es/~jcea/
On Wed, Jul 26, 2006 at 08:38:16AM -0600, Neil Perrin wrote:
> You can't enable write caching for just part of the disk.
> We don't enable it for slices because UFS (and other
> file systems) doesn't do write cache flushing and so
> could get corruption on power failure.  I suppose if you know
> the disk only contains zfs slices then write caching could be
> manually enabled using "format -e" -> cache -> write_cache -> enable

Eh, I guess I'll skip it then. ;)

-brian
Jesus Cea wrote:
> Neil Perrin wrote:
>> I suppose if you know
>> the disk only contains zfs slices then write caching could be
>> manually enabled using "format -e" -> cache -> write_cache -> enable
>
> When will we have write cache control over ATA/SATA drives? :-).

A method of controlling write cache independent of drive type, color or
flavor is being developed.... I'll ping the responsible parties (bcc'd).

- Bart

--
Bart Smaalders          Solaris Kernel Performance
barts at cyber.eng.sun.com    http://blogs.sun.com/barts
There is manual, programmatic and start-up control of write cache on
SATA drives already available. There is no drive-agnostic (i.e. for all
types of drives) control that covers all three ways of cache control;
that was shifted into a lower priority item than other SATA development
stuff. It will be done eventually...

Pawel

Bart Smaalders wrote:
> A method of controlling write cache independent of drive
> type, color or flavor is being developed.... I'll ping
> the responsible parties (bcc'd).

--
Pawel Wojcik
Sun Microsystems      pawel.wojcik at Sun.com      310 341-1133
Hello George,

Wednesday, July 26, 2006, 7:27:04 AM, you wrote:

GW> Additionally, I've just putback the latest feature set and bugfixes
GW> which will be part of s10u3_03. There were some additional performance
GW> fixes which may really benefit plus it will provide hot spares support.
GW> Once this build is available I would highly recommend that you guys take
GW> it for a spin (works great on Thumper).

I guess patches will be released first (or later).
Can you give actual bug IDs, especially those related to performance?

--
Best regards,
Robert    mailto:rmilkowski at task.gda.pl    http://milek.blogspot.com
Robert Milkowski wrote:
> I guess patches will be released first (or later).
> Can you give actual bug IDs, especially those related to performance?

For U3, these are the performance fixes:

6424554 full block re-writes need not read data in
6440499 zil should avoid txg_wait_synced() and use dmu_sync() to issue
        parallel IOs when fsyncing
6447377 ZFS prefetch is inconsistant
6373978 want to take lots of snapshots quickly ('zfs snapshot -r')

you could perhaps include these two as well:

4034947 anon_swap_adjust() should call kmem_reap() if availrmem is low.
6416482 filebench oltp workload hangs in zfs

There won't be anything in U3 that isn't already in nevada...

happy performing,
eric
Hello eric,

Thursday, July 27, 2006, 4:34:16 AM, you wrote:

ek> For U3, these are the performance fixes:
ek> 6424554 full block re-writes need not read data in
ek> 6440499 zil should avoid txg_wait_synced() and use dmu_sync() to issue
ek>         parallel IOs when fsyncing
ek> 6447377 ZFS prefetch is inconsistant
ek> 6373978 want to take lots of snapshots quickly ('zfs snapshot -r')
ek>
ek> you could perhaps include these two as well:
ek> 4034947 anon_swap_adjust() should call kmem_reap() if availrmem is low.
ek> 6416482 filebench oltp workload hangs in zfs

ok, thank you.
Do you know if patches for S10 will be released before U3?

ek> There won't be anything in U3 that isn't already in nevada...

I know that :)

--
Best regards,
Robert    mailto:rmilkowski at task.gda.pl    http://milek.blogspot.com
Eric said:
> For U3, these are the performance fixes:
> 6424554 full block re-writes need not read data in
> 6440499 zil should avoid txg_wait_synced() and use dmu_sync() to issue
>         parallel IOs when fsyncing
> 6447377 ZFS prefetch is inconsistant
> 6373978 want to take lots of snapshots quickly ('zfs snapshot -r')
>
> you could perhaps include these two as well:
> 4034947 anon_swap_adjust() should call kmem_reap() if availrmem is low.
> 6416482 filebench oltp workload hangs in zfs
>
> There won't be anything in U3 that isn't already in nevada...

Hi Eric,

Do S10U2 users have to wait for U3 to get these fixes, or are they going
to be released as patches before then?
I'm presuming that U3 is scheduled for early 2007...

Steve.
For S10U3, RR is 11/13/06 and GA is 11/27/06.

Gary

Bennett, Steve wrote:
> Do S10U2 users have to wait for U3 to get these fixes, or are they going
> to be released as patches before then?
> I'm presuming that U3 is scheduled for early 2007...

--
Gary Combs      Product Architect      Sun Microsystems, Inc.
Gary.Combs at Sun.COM
"The box said 'Windows 2000 Server or better', so I installed Solaris"
Robert,

The patches will be available sometime late September. This may be a
week or so before s10u3 actually releases.

Thanks,
George

Robert Milkowski wrote:
> ok, thank you.
> Do you know if patches for S10 will be released before U3?
Sean Meighan
2006-Jul-31 00:55 UTC
[zfs-discuss] Canary is now running latest code and has a 3 disk raidz ZFS volume
Hi George; life is better for us now.

We upgraded to s10s_u3wos_01 last Friday on itsm-mpk-2.sfbay, the
production Canary server http://canary.sfbay. What do we look like now?

# zpool upgrade
This system is currently running ZFS version 2.

All pools are formatted using this version.

We added two more lower performance disk drives last Friday; we went
from two mirrored drives to four drives. Now we look like this on our
T2000:
   (1) 68 gig drive running unmirrored for the system
   (3) 68 gig drives set up as raidz

# zpool status
  pool: canary
 state: ONLINE
 scrub: none requested
config:

        NAME        STATE     READ WRITE CKSUM
        canary      ONLINE       0     0     0
          raidz     ONLINE       0     0     0
            c1t1d0  ONLINE       0     0     0
            c1t2d0  ONLINE       0     0     0
            c1t3d0  ONLINE       0     0     0

errors: No known data errors

Our 100%-busy disk drive from previous weeks is now three drives. iostat
now shows that no single drive is reaching 100%. Here is an
"iostat -xn 1 99":

                    extended device statistics
   r/s    w/s   kr/s   kw/s wait actv wsvc_t asvc_t  %w  %b device
   4.0    0.0  136.0    0.0  0.0  0.0    0.0    5.3   0   2 c1t0d0
   0.0    0.0    0.0    0.0  0.0  0.0    0.0    0.0   0   0 c0t0d0
   0.0  288.9    0.0  939.3  0.0  7.0    0.0   24.1   1  74 c1t1d0
   0.0  300.9    0.0  940.8  0.0  6.2    0.0   20.7   1  72 c1t2d0
   0.0  323.9    0.0  927.8  0.0  5.3    0.0   16.5   1  63 c1t3d0
   0.0    0.0    0.0    0.0  0.0  0.0    0.0    0.0   0   0 itsm-mpk-2:vold(pid334)
                    extended device statistics
   r/s    w/s   kr/s   kw/s wait actv wsvc_t asvc_t  %w  %b device
   0.0    0.0    0.0    0.0  0.0  0.0    0.0    0.0   0   0 c1t0d0
   0.0    0.0    0.0    0.0  0.0  0.0    0.0    0.0   0   0 c0t0d0
   0.0   70.9    0.0  118.8  0.0  0.5    0.0    7.6   0  28 c1t1d0
   0.0   74.9    0.0  124.3  0.0  0.5    0.0    6.1   0  26 c1t2d0
   0.0   75.8    0.0  120.3  0.0  0.5    0.0    7.2   0  27 c1t3d0
   0.0    0.0    0.0    0.0  0.0  0.0    0.0    0.0   0   0 itsm-mpk-2:vold(pid

Here is our old box:

# more /etc/release
                     Solaris 10 6/06 s10s_u2wos_06 SPARC
         Copyright 2006 Sun Microsystems, Inc.  All Rights Reserved.
                      Use is subject to license terms.
                          Assembled 30 March 2006

# pkginfo -l SUNWzfsr
   PKGINST:  SUNWzfsr
      NAME:  ZFS (Root)
  CATEGORY:  system
      ARCH:  sparc
   VERSION:  11.10.0,REV=2006.03.22.02.15
   BASEDIR:  /
    VENDOR:  Sun Microsystems, Inc.
      DESC:  ZFS root components
    PSTAMP:  on10-patch20060322021857
  INSTDATE:  Apr 04 2006 13:52
   HOTLINE:  Please contact your local service provider
    STATUS:  completely installed
     FILES:       18 installed pathnames
                   5 shared pathnames
                   7 directories
                   4 executables
                1811 blocks used (approx)

Here is the current version:

# more /etc/release
                    Solaris 10 11/06 s10s_u3wos_01 SPARC
         Copyright 2006 Sun Microsystems, Inc.  All Rights Reserved.
                      Use is subject to license terms.
                          Assembled 27 June 2006

# pkginfo -l SUNWzfsr
   PKGINST:  SUNWzfsr
      NAME:  ZFS (Root)
  CATEGORY:  system
      ARCH:  sparc
   VERSION:  11.10.0,REV=2006.05.18.02.15
   BASEDIR:  /
    VENDOR:  Sun Microsystems, Inc.
      DESC:  ZFS root components
    PSTAMP:  on10-patch20060315140831
  INSTDATE:  Jul 27 2006 12:10
   HOTLINE:  Please contact your local service provider
    STATUS:  completely installed
     FILES:       18 installed pathnames
                   5 shared pathnames
                   7 directories
                   4 executables
                1831 blocks used (approx)

In my opinion the 2 1/2" disk drives in the Niagara box were not
designed to receive one million files per day. These two extra drives
(thanks Denis!) have given us acceptable performance. I still want a
Thumper *smile*. It is pretty amazing that we have 800 servers, 30,000
users, and 140 million lines of ASCII per day all fitting in a 2U T2000
box!

thanks
sean

George Wilson wrote:
> Sean,
>
> Sorry for the delay getting back to you.
>
> You can do a 'zpool upgrade' to see what version of the on-disk format
> your pool is currently running.  The latest version is 3.  You can then
> issue a 'zpool upgrade <pool>' to upgrade.  Keep in mind that the
> upgrade is a one-way ticket and can't be rolled backwards.
>
> ZFS can be upgraded by just applying patches.  So if you were running
> Solaris 10 06/06 (a.k.a. u2) you could apply the patches that will come
> out when u3 ships.  Then issue the 'zpool upgrade' command to get the
> functionality you need.
>
> Does this help?  Can you send me the output of 'zpool upgrade' on your
> system?
>
> Thanks,
> George
>
> Sean Meighan wrote:
>> Hi George; we are trying to build our server today.  We should have
>> the four disk drives mounted by this afternoon.
>>
>> Separate question; we were on an old ZFS version, how could we have
>> upgraded to a new version?  Do we really have to re-install Solaris to
>> upgrade ZFS?
>>
>> thanks
>> sean
>>
>> George Wilson wrote:
>>> The gate for s10u3_03 closed yesterday and I think the DVD image
>>> will be available early next week.  I'll keep you posted.  If you want
>>> to try this out before then, what I can provide you are the binaries
>>> to run on top of s10u3_02.
>>>
>>> Thanks,
>>> George
>>>
>>> Sean Meighan wrote:
>>>> George; is there a link to s10u3_03?  My team would be happy to put
>>>> the latest in.
>>>> thanks
>>>> sean
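The same picture is also available from ZFS itself, broken down by pool
and vdev (ops and bandwidth per device, though not service times), which
is handy for checking how evenly the raidz stripe spreads the writes:

   # zpool iostat -v canary 1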
George Wilson
2006-Jul-31 02:25 UTC
[zfs-discuss] Re: Canary is now running latest code and has a 3 disk raidz ZFS volume
Sean,

This is looking better! Once you get to the latest ZFS changes that we
just putback into s10 you will be able to upgrade to ZFS version 3,
which will provide such key features as hot spares, RAID-6, clone
promotion, and fast snapshots. Additionally, there are more performance
gains that will probably help you out.

Thanks,
George

Sean Meighan wrote:
> Hi George; life is better for us now.
>
> We upgraded to s10s_u3wos_01 last Friday on itsm-mpk-2.sfbay, the
> production Canary server http://canary.sfbay.  What do we look like
> now? [...]