Hi guys, sorry in advance if this is somewhat of a lowly question. I've recently built a ZFS test box based on NexentaStor, with 4x Samsung 2TB drives connected via SATA-II in a raidz1 configuration, dedup enabled, compression off, and pool version 23. From running bonnie++ I get the following results:

Version 1.03b       ------Sequential Output------ --Sequential Input- --Random-
                    -Per Chr- --Block-- -Rewrite- -Per Chr- --Block-- --Seeks--
Machine        Size K/sec %CP K/sec %CP K/sec %CP K/sec %CP K/sec %CP  /sec %CP
nexentastor      4G 60582  54 20502   4 12385   3 53901  57 105290 10 429.8   1
                    ------Sequential Create------ --------Random Create--------
                    -Create-- --Read--- -Delete-- -Create-- --Read--- -Delete--
              files  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP
                 16  7181  29 +++++ +++ +++++ +++ 21477  97 +++++ +++ +++++ +++
nexentastor,4G,60582,54,20502,4,12385,3,53901,57,105290,10,429.8,1,16,7181,29,+++++,+++,+++++,+++,21477,97,+++++,+++,+++++,+++

I'd expect more than 105290K/s on a sequential read as a peak for a single drive, let alone a striped set. The system has a relatively decent CPU but only 2GB of memory; do you think increasing this to 4GB would noticeably affect the performance of my zpool? The memory is only DDR1.
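For reference, there's nothing exotic about the pool itself; it was created along these lines (device names approximate):

   zpool create tank raidz1 c0t0d0 c0t1d0 c0t2d0 c0t3d0
   zfs set dedup=on tank

with compression left at its default of off. Thanks in advance.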
On Jan 15, 2011, at 4:21 PM, Michael Armstrong wrote:
> I'd expect more than 105290K/s on a sequential read as a peak for a single drive, let alone a striped set. The system has a relatively decent CPU but only 2GB of memory; do you think increasing this to 4GB would noticeably affect the performance of my zpool? The memory is only DDR1.

2GB or 4GB of RAM + dedup is a recipe for pain. Do yourself a favor: turn off dedup and enable compression.
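That's a one-liner each, substituting your own pool name for "tank" here:

   zfs set dedup=off tank
   zfs set compression=on tank

Keep in mind both properties only apply to newly written data; blocks that were deduped before the change keep their DDT entries until they are rewritten.
 -- richard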
I've since turned off dedup and added another 3 drives, and results have improved to around 148388K/sec on average. Would turning on compression make things more CPU-bound and improve performance further?

On 18 Jan 2011, at 15:07, Richard Elling wrote:
> 2GB or 4GB of RAM + dedup is a recipe for pain. Do yourself a favor: turn off dedup and enable compression.
On 1/18/2011 10:11 AM, Michael Armstrong wrote:
> I've since turned off dedup and added another 3 drives, and results have improved to around 148388K/sec on average. Would turning on compression make things more CPU-bound and improve performance further?

I've seen a lot of cases where enabling compression helps with systems that are disk-bound. If you've got extra CPU... give it a shot.
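A low-risk way to try it is on a scratch dataset rather than flipping the whole pool (the names here are only placeholders):

   zfs create -o compression=on tank/comptest
   cp -r /path/to/representative/data /tank/comptest/
   zfs get compressratio tank/comptest

If compressratio stays near 1.00x and throughput doesn't move, your data isn't compressible enough to be worth the CPU.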
On Tue, 2011-01-18 at 15:11 +0000, Michael Armstrong wrote:
> I've since turned off dedup and added another 3 drives, and results have improved to around 148388K/sec on average. Would turning on compression make things more CPU-bound and improve performance further?

Compression will help speed things up (I/O, that is), presuming that you're not already CPU-bound, which it doesn't seem you are.

If you want dedup, you pretty much are required to buy an SSD for L2ARC *and* get more RAM.

These days, I really don't recommend running ZFS as a fileserver without a bare minimum of 4GB of RAM (8GB for anything other than light use), even with dedup turned off.
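You can check how much of your RAM the ARC is actually using with the standard kstat interface:

   kstat -p zfs:0:arcstats:size
   kstat -p zfs:0:arcstats:c_max

If size sits at c_max under your normal workload, extra RAM will get used.

-- 
Erik Trimble
Java System Support
Mailstop: usca22-317
Phone: x67195
Santa Clara, CA
Timezone: US/Pacific (GMT-0800)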
Thanks everyone for the responses. I think over time I'm gonna update the system to include an SSD for sure; memory may come later, though.

Erik Trimble <erik.trimble at oracle.com> wrote:
> If you want dedup, you pretty much are required to buy an SSD for L2ARC *and* get more RAM.
On Tue, 2011-01-18 at 18:35 +0000, Michael Armstrong wrote:
> Thanks everyone for the responses. I think over time I'm gonna update the system to include an SSD for sure; memory may come later, though.

You can't really do that.

Adding an SSD for L2ARC will help a bit, but L2ARC storage also consumes RAM to maintain a table of what's in the L2ARC. Using 2GB of RAM with an SSD-based L2ARC (even without dedup) likely won't help you much vs not having the SSD.

If you're going to turn on dedup, you need at least 8GB of RAM to go with the SSD.
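Rough numbers, using the ~200 bytes of ARC per L2ARC record that usually gets quoted on this list (treat that constant as approximate):

   100GB L2ARC / 8KB average record  = ~13.1 million records
   13.1M records * ~200 bytes/record = ~2.6GB of RAM

That's headers alone, before the ARC caches any actual data -- more than your whole 2GB machine has.

-Erik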
Ah ok, I won't be using dedup anyway, just wanted to try it. I'll be adding more RAM though; I guess you can't have too much. Thanks.

Erik Trimble <erik.trimble at oracle.com> wrote:
> Adding an SSD for L2ARC will help a bit, but L2ARC storage also consumes RAM to maintain a table of what's in the L2ARC.
On Tue, Jan 18, 2011 at 07:07:50AM -0800, Richard Elling wrote:
> > I'd expect more than 105290K/s on a sequential read as a peak for a single drive, let alone a striped set.
>
> 2GB or 4GB of RAM + dedup is a recipe for pain. Do yourself a favor: turn off dedup and enable compression.

Assuming 4x 3TByte drives, 8GByte RAM, and a lowly dual-core 1.3GHz AMD Neo, should I do the same? Or should I not even bother with compression? The data set is a lot of scanned documents that are already compressed (TIF and PDF). I presume the incidence of identical blocks will be very low under such circumstances.

Oh, and with 4x 3TByte SATA, a mirrored pool is pretty much without alternative, right?

-- 
Eugen* Leitl <a href="http://leitl.org">leitl</a> http://leitl.org
______________________________________________________________
ICBM: 48.07100, 11.36820 http://www.ativel.com http://postbiota.org
8B29F6BE: 099D 78BA 2FD3 B014 B08A 7779 75B0 2443 8B29 F6BE
On Jan 20, 2011, at 11:18 AM, Eugen Leitl wrote:
> Assuming 4x 3TByte drives, 8GByte RAM, and a lowly dual-core 1.3GHz AMD Neo, should I do the same? Or should I not even bother with compression? The data set is a lot of scanned documents that are already compressed (TIF and PDF). I presume the incidence of identical blocks will be very low under such circumstances.

This seems very unlikely to benefit from dedup (unless you cp the individual files to multiple directories); if you are just keeping lots of scans, the odds of a given block being identical to many others are low.

The thing about compression is that it is easy to test, whereas dedup can be painful to back out of when it doesn't work out. So you might as well try compression, but dedup looks like a waste of time here and might well cause a lot of headaches.
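If you're curious what dedup *would* have bought you, zdb can simulate it against an existing pool without enabling anything (substitute your pool name; this can take a while on a big pool):

   zdb -S tank

The summary at the end includes an estimated dedup ratio; anything near 1.00x means don't bother.

Good luck,
Ware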
On Thu, Jan 20, 2011 at 8:18 AM, Eugen Leitl <eugen at leitl.org> wrote:
> Oh, and with 4x 3TByte SATA, a mirrored pool is pretty much without alternative, right?

You can also use raidz2, which has a little more resiliency. With two mirrored pairs, you can lose one disk without data loss, but losing the wrong second disk destroys the pool; with raidz2, you can lose any 2 disks. You pay for it with somewhat lower performance, and with four drives the usable capacity works out the same either way (roughly 6TByte).
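For four disks the two layouts look like this (device names are just examples; check yours with "format"):

   zpool create tank mirror c1t0d0 c1t1d0 mirror c1t2d0 c1t3d0
   zpool create tank raidz2 c1t0d0 c1t1d0 c1t2d0 c1t3d0

Same usable space in this case; the difference is which two-disk failures you survive, and random-I/O performance, where the mirrors win.

-B

-- 
Brandon High : bhigh at freaks.com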