Hi list,

I've done some tests and run into a very strange situation.

I created a zvol using "zfs create -V" and initialized a SAM filesystem on
this zvol. After that I restored some test data using a dump from another
system. So far so good.

After some big troubles I found out that releasing files in the SAM
filesystem doesn't free space on the underlying zvol. Staging and releasing
files only work until "zfs list" shows the zvol at 100% usage, although the
SAM filesystem is only filled to 20%. I didn't create any snapshots, and a
scrub didn't show any errors.

Once the zvol was filled up, even a sammkfs couldn't solve the problem. I
had to destroy the zvol (not the zpool). After that I was able to recreate
a new zvol with SAM-FS on top.

Is that a known behaviour, or did I run into a bug?

System:
SAM-FS 4.6.85
Solaris 10 U7 x86
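For reference, the setup described above corresponds roughly to the
following commands. The size and names are taken from later posts in this
thread, and the sammkfs/mount lines assume samfs1 is already defined in
/etc/opt/SUNWsamfs/mcf with the zvol as its device - this is a sketch, not
the exact commands used here:

    zfs create -V 405g sampool/samdev1   # create the zvol
    sammkfs samfs1                       # build the SAM filesystem on it
    mount -F samfs samfs1 /samfs         # mount it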
On Mon, Jul 27, 2009 at 02:14:24PM +0200, Tobias Exner wrote:
> After some big troubles I found out that releasing files in the SAM
> filesystem doesn't free space on the underlying zvol.
> Staging and releasing files only work until "zfs list" shows the zvol at
> 100% usage, although the SAM filesystem is only filled to 20%.
> I didn't create any snapshots, and a scrub didn't show any errors.

This is most likely QFS bug number 6837405.

Dean
Hi Dean,

could you provide more information about that? Are you able to send me a
bug description for a better understanding? Is there a patch available, or
do I have to fall back to a previous SAM-QFS patch level?

Thanks in advance,
Tobias

Dean Roehrich wrote:
> This is most likely QFS bug number 6837405.
>
> Dean
On 27/07/2009, at 10:14 PM, Tobias Exner wrote:
> After some big troubles I found out that releasing files in the SAM
> filesystem doesn't free space on the underlying zvol.
> Staging and releasing files only work until "zfs list" shows the zvol at
> 100% usage, although the SAM filesystem is only filled to 20%.
> [...]
> Once the zvol was filled up, even a sammkfs couldn't solve the problem.
> I had to destroy the zvol (not the zpool).
> After that I was able to recreate a new zvol with SAM-FS on top.

This is a feature of block devices. Once you (or SAM-FS) use a block on the
zvol, there is no mechanism to tell the zvol when it is no longer in use.
SAM-FS simply unreferences the blocks it frees; it doesn't actively go
through them and tell the block layer underneath that they can be
reclaimed. From the zvol's point of view they are still in use, because
they were used at some point in the past.

You might be able to get the space back in the zvol by writing a massive
file full of zeros in the SAM-FS, but you'd have to test that.

> Is that a known behaviour, or did I run into a bug?

It's known.

dlg

David Gwynne
Infrastructure Architect
Engineering, Architecture, and IT
University of Queensland
+61 7 3365 3636
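The zero-fill test suggested above might look like this (the /samfs mount
point and file name are placeholders, and it is untested here; without
compression on the zvol the zeros are stored as ordinary blocks, so on its
own this probably won't shrink the zvol's usage - see the compression
suggestion later in the thread):

    dd if=/dev/zero of=/samfs/zerofill bs=1M   # fill free space with zeros
    sync
    rm /samfs/zerofill                         # then remove the fill file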
On 28 July, 2009 - David Gwynne sent me these 1,9K bytes:

> This is a feature of block devices. Once you (or SAM-FS) use a block on
> the zvol, there is no mechanism to tell the zvol when it is no longer in
> use. [...] From the zvol's point of view they are still in use, because
> they were used at some point in the past.

http://en.wikipedia.org/wiki/TRIM_(SSD_command) should make it possible, I
guess (assuming it's implemented all the way down the chain). It
should/could help in virtualization too.

/Tomas
--
Tomas Ögren, stric at acc.umu.se, http://www.acc.umu.se/~stric/
|- Student at Computing Science, University of Umeå
`- Sysadmin at {cs,acc}.umu.se
On Jul 28, 2009, at 8:53 AM, Tomas Ögren wrote:
> On 28 July, 2009 - David Gwynne sent me these 1,9K bytes:
>
>> This is a feature of block devices. Once you (or SAM-FS) use a block on
>> the zvol, there is no mechanism to tell the zvol when it is no longer
>> in use. [...]
>
> http://en.wikipedia.org/wiki/TRIM_(SSD_command) should make it possible,
> I guess (assuming it's implemented all the way down the chain). It
> should/could help in virtualization too.

Or just enable compression and zero fill.
 -- richard
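A sketch of that suggestion, using the dataset name from this thread: with
compression enabled, ZFS can store all-zero blocks of the zvol as holes, so
a zero fill like the one sketched earlier can actually return space to the
pool. It only affects blocks written after the property is set:

    zfs set compression=on sampool/samdev1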
Hello tobex,

While the original question may have been answered by the posts above, I'm
interested: when you say "according to zfs list the zvol is 100% full",
does it only mean that it uses all 20Gb on the pool (like a non-sparse
uncompressed file), or does it also imply that you can't write into the
SAM-FS although its structures are only 20% used?

If by any chance the latter - I think it would count as a bug. If the
former - see the posts above for explanations and workarounds :)

Thanks in advance for such detail,
Jim
Hi Jim,

first of all, I'm sure this behaviour is a bug or has been changed at some
point in the past, because I've used this configuration many times before.

If I understand you right, it is as you said. Here's an example where you
can see what happened: the SAM-FS is filled to only 6% and the zvol is
full.

archiv1:~ # zfs list
NAME              USED  AVAIL  REFER  MOUNTPOINT
sampool           405G  2.49G    18K  /sampool
sampool/samdev1   405G     0K   405G  -

archiv1:~ # samcmd f
File systems samcmd 4.6.85 11:18:32 Jul 28 2009
samcmd on archiv1
ty  eq  state  device_name                    status       high  low  mountpoint  server
ms   1  on     samfs1                         m----2----d  80%   70%  /samfs
md  11  on     /dev/zvol/dsk/sampool/samdev1

archiv1:~ # samcmd m
Mass storage status samcmd 4.6.85 11:19:09 Jul 28 2009
samcmd on archiv1
ty  eq  status       use  state  ord  capacity  free      ra  part  high  low
ms   1  m----2----d   6%  on          405.000G  380.469G  1M  16    80%   70%
md  11                6%  on     0    405.000G  380.469G

Jim Klimov wrote:
> While the original question may have been answered by the posts above,
> I'm interested: when you say "according to zfs list the zvol is 100%
> full", does it only mean that it uses all 20Gb on the pool (like a
> non-sparse uncompressed file), or does it also imply that you can't
> write into the SAM-FS although its structures are only 20% used?
>
> If by any chance the latter - I think it would count as a bug. If the
> former - see the posts above for explanations and workarounds :)
> If I understand you right, it is as you said. Here's an example where
> you can see what happened: the SAM-FS is filled to only 6% and the zvol
> is full.

I'm afraid I was not clear with my question, so I'll elaborate. It still
stands as: during this situation, can you write new data into the SAM-FS?
That is, can you fill it up from these 6% used, or does the system complain
that it can't write more data?

The way I see this discussion (and maybe I'm wrong), it's thus:

* Your zvol starts sparse (not using much space from the pool, but with a
  "quota" of 405Gb). That is, you don't have a "reservation" for these
  405Gb that would grab the space as soon as you create the zvol and keep
  other datasets from using it.

* Your zvol allocates blocks from the pool to hold the data written by
  SAM-FS, and the disk space consumed from the pool grows until the zvol
  hits the quota (405Gb of allocated blocks = 100% of quota).

* SAM-FS writes data to the zvol and never tells the zvol that you deleted
  some files, so those blocks could be unallocated.

* The zvol could release unused blocks - if it ever knew they were unused.

If this is all true, then your zvol now consumes 405Gb from the pool, and
your SAM-FS thinks it uses 6% of the block device with its 25Gb of saved
files. However (and this is the salt of my question), the situation does
not prevent you from writing the other 380Gb into the SAM-FS without errors
and complaints, and probably without changing the amount of space "used" in
the ZFS pool and in the zvol dataset either. Is this assumption correct?

If it is, then I'd see the situation as a big inconvenience and a way to
improve the interaction between SAM-FS and ZFS as its storage (and/or fix a
regression if this worked better in previous releases), but it's not a bug
per se. However, if you can't write much data into the SAM-FS now, it is
definitely a bad bug.
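One way to check which of these cases applies is to look at the zvol's
properties directly; a sketch using the dataset name from this thread (the
refreservation property may not exist on older releases and can simply be
dropped from the list):

    zfs get volsize,reservation,refreservation,used,referenced sampool/samdev1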
Concerning the reservations, here's a snip from "man zfs":

     The reservation is kept equal to the volume's logical size to
     prevent unexpected behavior for consumers. Without the reservation,
     the volume could run out of space, resulting in undefined behavior
     or data corruption, depending on how the volume is used. These
     effects can also occur when the volume size is changed while it is
     in use (particularly when shrinking the size). Extreme care should
     be used when adjusting the volume size.

     Though not recommended, a "sparse volume" (also known as "thin
     provisioning") can be created by specifying the -s option to the
     zfs create -V command, or by changing the reservation after the
     volume has been created. A "sparse volume" is a volume where the
     reservation is less than the volume size. Consequently, writes to a
     sparse volume can fail with ENOSPC when the pool is low on space.
     For a sparse volume, changes to volsize are not reflected in the
     reservation.

Did you do anything like this?

HTH,
//Jim
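For comparison, the two ways of ending up with a sparse volume that this
man page describes would look roughly like this (the dataset name is reused
from the thread purely as an illustration):

    zfs create -s -V 405g sampool/samdev1      # create the volume sparse from the start
    zfs set reservation=none sampool/samdev1   # or remove the reservation afterwards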
No, just did "zfs create -V". and I didn''t change the size of the zpool or zvol at any time.. regards, Tobias Jim Klimov schrieb:> Concerning the reservations, here''s a snip from "man zfs": > > The reservation is kept equal to the volume''s logical > size to prevent unexpected behavior for consumers. > Without the reservation, the volume could run out of > space, resulting in undefined behavior or data corrup- > tion, depending on how the volume is used. These effects > can also occur when the volume size is changed while it > is in use (particularly when shrinking the size). > Extreme care should be used when adjusting the volume > size. > > Though not recommended, a "sparse volume" (also known as > "thin provisioning") can be created by specifying the -s > option to the zfs create -V command, or by changing the > reservation after the volume has been created. A "sparse > volume" is a volume where the reservation is less then > the volume size. Consequently, writes to a sparse volume > can fail with ENOSPC when the pool is low on space. For > a sparse volume, changes to volsize are not reflected in > the reservation. > > Did you do anything like this? > > HTH, > //Jim >