So I have read in the ZFS Wiki:

# The minimum size of a log device is the same as the minimum size of each device
in a pool, which is 64 Mbytes. The amount of in-play data that might be stored on
a log device is relatively small. Log blocks are freed when the log transaction
(system call) is committed.
# The maximum size of a log device should be approximately 1/2 the size of
physical memory, because that is the maximum amount of potential in-play data
that can be stored. For example, if a system has 16 Gbytes of physical memory,
consider a maximum log device size of 8 Gbytes.

What is the downside of an over-large log device?

Let's say I have a 3310 with 10 older 72-gig 10K RPM drives and RAIDZ2 them.
Then I throw an entire 72-gig 15K RPM drive in as slog.

What is behind this maximum size recommendation?
--
This message posted from opensolaris.org
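For reference, the setup described above would be built with something along
these lines; this is only a sketch, and the device names are hypothetical:

    # 10-disk raidz2 pool on the 10K RPM drives
    zpool create tank raidz2 c1t0d0 c1t1d0 c1t2d0 c1t3d0 c1t4d0 \
                             c1t5d0 c1t6d0 c1t7d0 c1t8d0 c1t9d0

    # add the whole 15K RPM drive as a separate intent log (slog) device
    zpool add tank log c2t0d0

    # the log device then shows up under a separate "logs" section
    zpool status tank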
On 02/08/09 11:50, Vincent Fox wrote:
> So I have read in the ZFS Wiki:
>
> # The minimum size of a log device is the same as the minimum size of each device
> in a pool, which is 64 Mbytes. The amount of in-play data that might be stored on
> a log device is relatively small. Log blocks are freed when the log transaction
> (system call) is committed.
> # The maximum size of a log device should be approximately 1/2 the size of
> physical memory, because that is the maximum amount of potential in-play data
> that can be stored. For example, if a system has 16 Gbytes of physical memory,
> consider a maximum log device size of 8 Gbytes.
>
> What is the downside of an over-large log device?

- Wasted disk space.

> Let's say I have a 3310 with 10 older 72-gig 10K RPM drives and RAIDZ2 them.
> Then I throw an entire 72-gig 15K RPM drive in as slog.
>
> What is behind this maximum size recommendation?

- Just guidance on what might be used in the most stressed environment.
Personally I've never seen anything like the maximum used, but it's
theoretically possible.

Neil.
Neil Perrin wrote:
> On 02/08/09 11:50, Vincent Fox wrote:
>
>> So I have read in the ZFS Wiki:
>>
>> # The minimum size of a log device is the same as the minimum size of each device
>> in a pool, which is 64 Mbytes. The amount of in-play data that might be stored on
>> a log device is relatively small. Log blocks are freed when the log transaction
>> (system call) is committed.
>> # The maximum size of a log device should be approximately 1/2 the size of
>> physical memory, because that is the maximum amount of potential in-play data
>> that can be stored. For example, if a system has 16 Gbytes of physical memory,
>> consider a maximum log device size of 8 Gbytes.
>>
>> What is the downside of an over-large log device?
>
> - Wasted disk space.
>
>> Let's say I have a 3310 with 10 older 72-gig 10K RPM drives and RAIDZ2 them.
>> Then I throw an entire 72-gig 15K RPM drive in as slog.
>>
>> What is behind this maximum size recommendation?
>
> - Just guidance on what might be used in the most stressed environment.
> Personally I've never seen anything like the maximum used but it's
> theoretically possible.

Just thinking out loud here, but given such a disk (i.e. one which is bigger
than required), I might be inclined to slice it up, creating a slice for the
log at the outer edge of the disk. The outer edge of the disk has the highest
data rate, and by effectively constraining the head movement to only a portion
of the whole disk, average seek times should be significantly improved (not to
mention fewer seeks due to more data/cylinder at the outer edge). The log can't
be using the write cache, so the normal penalty for not using the write cache
when not giving the whole disk to ZFS is irrelevant in this case. By
allocating, say, a 32GB slice from the outer edge of a 72GB disk, you should
get really good performance. If you turn out not to need anything like 32GB,
then making it smaller will make it even faster (depending how ZFS allocates
space on a log device, which I don't know). Obviously, don't use the rest of
the disk, in order to achieve this performance.

--
Andrew
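A rough sketch of what Andrew describes, assuming a ~32GB slice 0 has been laid
out at the start of the disk (lowest cylinders, i.e. the outer edge) with the
partition menu of format(1M); the device name below is hypothetical:

    # after creating slice 0 starting at cylinder 0 in format(1M):
    zpool add tank log c2t0d0s0

    # leave the remaining slices unused so the heads stay near the outer edge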
On Sun, 8 Feb 2009, Andrew Gabriel wrote:
>
> Just thinking out loud here, but given such a disk (i.e. one which is
> bigger than required), I might be inclined to slice it up, creating a
> slice for the log at the outer edge of the disk. The outer edge of the
> disk has the highest data rate, and by effectively constraining the head
> movement to only a portion of the whole disk, average seek times should
> be significantly improved (not to mention fewer seeks due to more
> data/cylinder at the outer edge). The log can't be using the write

This is good thinking, but it is likely that the zfs implementors were already
aware of this, so zfs likely already does what you want without this extra
work.

Bob
======================================
Bob Friesenhahn
bfriesen at simple.dallas.tx.us, simplesystems.org/users/bfriesen
GraphicsMagick Maintainer,    GraphicsMagick.org
Thanks, I think I get it now.

Do you think having the log on a 15K RPM drive, with the main pool composed of
10K RPM drives, will show worthwhile improvements? Or am I chasing a few
percentage points?

I don't have money for new hardware & SSDs. I'm just recycling some old
components here and there, and there are a few 15K RPM drives on the shelf I
thought I could throw strategically into the mix.

The application will likely be NFS serving. I might use the same setup for a
list-serve system, which does have local storage for archived emails etc.
--
This message posted from opensolaris.org
On Sun, Feb 8, 2009 at 22:12, Vincent Fox <vincent_b_fox at yahoo.com> wrote:
> Thanks, I think I get it now.
>
> Do you think having the log on a 15K RPM drive, with the main pool composed
> of 10K RPM drives, will show worthwhile improvements? Or am I chasing a few
> percentage points?
>
> I don't have money for new hardware & SSDs. I'm just recycling some old
> components here and there, and there are a few 15K RPM drives on the shelf I
> thought I could throw strategically into the mix.
>
> The application will likely be NFS serving. I might use the same setup for a
> list-serve system, which does have local storage for archived emails etc.

The 3310 has a battery-backed write cache, which is faster than any disk. You
might get more from the cache if you use it only for the log. The RPM of the
disks used for the log is not important when you have a RAM write cache in
front of the disk.
On Feb 8, 2009, at 16:12, Vincent Fox wrote:

> Do you think having the log on a 15K RPM drive, with the main pool composed
> of 10K RPM drives, will show worthwhile improvements? Or am I chasing a few
> percentage points?

Another important question is whether it would be sufficient to purchase only
one 15K disk, or whether two should be purchased and mirrored. What would
happen if the device that the ZIL lives on suddenly goes away or starts
returning checksum errors?

While an SSD is theoretically less likely to fail in some respects (no
mechanical parts), what happens if it fails? How important is mirroring on log
devices?

Another question comes to mind: if you have multiple pools, can they all share
one log device? For example, say you have twelve disks in a JBOD, and you
assign disks 1-4 to mypool0, disks 5-8 to mypool1, and disks 9-12 to mypool2.
Can you then have one SSD that is allocated as the log device for all the
pools?
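Should mirroring of the log be wanted, it can be attached as a mirrored pair in
a single step; a sketch, with hypothetical device names:

    # add a mirrored pair of devices as the separate intent log
    zpool add tank log mirror c2t0d0 c2t1d0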
Hello Andrew,

Sunday, February 8, 2009, 8:46:24 PM, you wrote:

AG> Neil Perrin wrote:
>> On 02/08/09 11:50, Vincent Fox wrote:
>>
>>> So I have read in the ZFS Wiki:
>>>
>>> # The minimum size of a log device is the same as the minimum size of each device
>>> in a pool, which is 64 Mbytes. The amount of in-play data that might be stored on
>>> a log device is relatively small. Log blocks are freed when the log transaction
>>> (system call) is committed.
>>> # The maximum size of a log device should be approximately 1/2 the size of
>>> physical memory, because that is the maximum amount of potential in-play data
>>> that can be stored. For example, if a system has 16 Gbytes of physical memory,
>>> consider a maximum log device size of 8 Gbytes.
>>>
>>> What is the downside of an over-large log device?
>>
>> - Wasted disk space.
>>
>>> Let's say I have a 3310 with 10 older 72-gig 10K RPM drives and RAIDZ2 them.
>>> Then I throw an entire 72-gig 15K RPM drive in as slog.
>>>
>>> What is behind this maximum size recommendation?
>>
>> - Just guidance on what might be used in the most stressed environment.
>> Personally I've never seen anything like the maximum used but it's
>> theoretically possible.

AG> Just thinking out loud here, but given such a disk (i.e. one which is
AG> bigger than required), I might be inclined to slice it up, creating a
AG> slice for the log at the outer edge of the disk. The outer edge of the
AG> disk has the highest data rate, and by effectively constraining the head
AG> movement to only a portion of the whole disk, average seek times should
AG> be significantly improved (not to mention fewer seeks due to more
AG> data/cylinder at the outer edge). The log can't be using the write
AG> cache, so the normal penalty for not using the write cache when not
AG> giving the whole disk to ZFS is irrelevant in this case. By allocating,
AG> say, a 32GB slice from the outer edge of a 72GB disk, you should get
AG> really good performance. If you turn out not to need anything like 32GB,
AG> then making it smaller will make it even faster (depending how ZFS
AG> allocates space on a log device, which I don't know). Obviously, don't
AG> use the rest of the disk in order to achieve this performance.

1. ZFS by default will end up utilizing the outer regions of a disk drive, so
   there is no point slicing a LUN in this case.

2. The log definitely can use a cache if it is a non-volatile one. Of course,
   in such a case there is a good question whether one 15K disk behind a 3510
   for several 10K disks makes sense at all.

btw: IIRC on a 3510 you need to disable cache flushes in zfs and make sure that
the disk array will switch to WT (write-through) mode if one of the controllers
or batteries fails.

--
Best regards,
Robert                          mailto:milek at task.gda.pl
                                milek.blogspot.com
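The cache-flush change Robert refers to is usually made with the
zfs_nocacheflush tunable; a sketch for Solaris, added to /etc/system and taking
effect at the next reboot, and only appropriate when the array cache is
genuinely non-volatile (battery-backed):

    * /etc/system: tell ZFS not to issue cache-flush commands to the array
    set zfs:zfs_nocacheflush = 1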
On Feb 8, 2009, at 13:12, Vincent Fox wrote:

> Thanks, I think I get it now.
>
> Do you think having the log on a 15K RPM drive, with the main pool composed
> of 10K RPM drives, will show worthwhile improvements? Or am I chasing a few
> percentage points?

In cases where a logzilla helps, this should help by a factor of 15/10.

> I don't have money for new hardware & SSDs. I'm just recycling some old
> components here and there, and there are a few 15K RPM drives on the shelf I
> thought I could throw strategically into the mix.
>
> The application will likely be NFS serving. I might use the same setup for a
> list-serve system, which does have local storage for archived emails etc.
On Feb 8, 2009, at 13:44, David Magda wrote:

> On Feb 8, 2009, at 16:12, Vincent Fox wrote:
>
>> Do you think having the log on a 15K RPM drive, with the main pool composed
>> of 10K RPM drives, will show worthwhile improvements? Or am I chasing a few
>> percentage points?
>
> Another important question is whether it would be sufficient to purchase only
> one 15K disk, or whether two should be purchased and mirrored.

Worth repeating here... it can be done, but it is not necessary.

> What would happen if the device that the ZIL lives on suddenly goes away or
> starts returning checksum errors?

Nothing; the ZIL is only read in case of a host failure/reboot. Mirroring log
devices helps to survive a double-failure scenario (1 log device + host
failure). If a log device goes away, the system starts to behave as if no
separate log was configured, and the ZIL just uses the main storage pool.

> While an SSD is theoretically less likely to fail in some respects (no
> mechanical parts), what happens if it fails?

Synchronous writes start to be handled at higher latency, nothing else.

> How important is mirroring on log devices?
>
> Another question comes to mind: if you have multiple pools, can they all
> share one log device? For example, say you have twelve disks in a JBOD, and
> you assign disks 1-4 to mypool0, disks 5-8 to mypool1, and disks 9-12 to
> mypool2. Can you then have one SSD that is allocated as the log device for
> all the pools?

I'd think so; you just need to partition it 3-way.

-r
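A sketch of the 3-way split Roch suggests, assuming three slices have already
been created on the shared device with format(1M) (skipping slice 2, the
conventional whole-disk slice); pool and device names are hypothetical:

    # one slice of the same device as the log for each pool
    zpool add mypool0 log c3t0d0s0
    zpool add mypool1 log c3t0d0s1
    zpool add mypool2 log c3t0d0s3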
>>>>> "rb" == Roch Bourbonnais <Roch.Bourbonnais at Sun.COM> writes:rb> If log devices goes away the system starts to behave as if no rb> separate log was configured and the zil just uses the main rb> storage pool. Maybe you can continue running for a while, but if you reboot in this situation, the pool will refuse to import because the log device is missing, and you can''t get at any of the data inside it. (6733267) -------------- next part -------------- A non-text attachment was scrubbed... Name: not available Type: application/pgp-signature Size: 304 bytes Desc: not available URL: <mail.opensolaris.org/pipermail/zfs-discuss/attachments/20090302/ca1c6734/attachment.bin>