I've a weird situation: my system has a ZFS root as its file system, and now the root file system is full at 100%.

Code:
# zfs list
NAME                          USED  AVAIL   REFER  MOUNTPOINT
rpool                         134G  65.2M     94K  /rpool
rpool/ROOT                   29.8G  65.2M     18K  legacy
rpool/ROOT/s10s_u6           29.8G  65.2M  22.34G  /
rpool/ROOT/s10s_u6@prepatch   7.7G      -   22.3G  -
rpool/dump                   4.00G  65.2M   4.00G  -
rpool/metadb                   10M  67.2M   8.05M  -
rpool/metadb1                  10M  67.2M   8.05M  -
rpool/swap                    100G   100G     16K  -

Code:
# df -h /
Filesystem           size  used  avail  capacity  Mounted on
rpool/ROOT/s10s_u6   134G   21G    65M      100%  /

Now if I delete something from the / directory it does not free up any space, because the freed-up space is taken by the snapshot. I want to keep my snapshot and decrease the usage too. How should I do that?
--
This message posted from opensolaris.org
On Fri, Nov 6, 2009 at 6:02 PM, Ketan <techieneb at gmail.com> wrote:
> now if i delete something from the / directory it does not free up any
> space as the freed up space is taken by the modified snapshot. I want
> to keep my snapshot and decrease the usage too .. how shuld it do that ?

I don't think you can. What you could probably do:
- remove old snapshots/BEs
- if there's no snapshot left, try removing /var/pkg/download (there's a thread on http://opensolaris.org/jive/thread.jspa?threadID=114274&tstart=0)

--
Fajar
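A sketch of that cleanup (check the `zfs list` output first to confirm which snapshots you actually have; the snapshot name below is the one from the original post):

```shell
# List snapshots in the root pool, oldest first, with the space each
# holds exclusively (the space you get back by destroying it).
zfs list -t snapshot -o name,used -s creation -r rpool

# Destroy an old boot-environment snapshot you no longer need.
zfs destroy rpool/ROOT/s10s_u6@prepatch
```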
> I've a weird situation .. my system has zfs root as its file system and now root file system is
> full at 100%
>
> Code:
> # zfs list
> NAME                          USED  AVAIL   REFER  MOUNTPOINT
> rpool                         134G  65.2M     94K  /rpool
> rpool/ROOT                   29.8G  65.2M     18K  legacy
> rpool/ROOT/s10s_u6           29.8G  65.2M  22.34G  /
> rpool/ROOT/s10s_u6@prepatch   7.7G      -   22.3G  -
> rpool/dump                   4.00G  65.2M   4.00G  -
> rpool/metadb                   10M  67.2M   8.05M  -
> rpool/metadb1                  10M  67.2M   8.05M  -
> rpool/swap                    100G   100G     16K  -

I'm somewhat surprised about the swap volume. How can "rpool/swap" have 100G available? Can you make it smaller?

Casper
Ketan <techieneb at gmail.com> writes:
> if i delete something from the / directory it does not free up any
> space as the freed up space is taken by the modified snapshot. I want
> to keep my snapshot and decrease the usage too .. how should it do
> that?

You can turn the snapshot into a clone, promote it, then remove the snapshot. You can then remove some files from the cloned filesystem and keep the rest.

--
Kjetil T. Homme
Redpill Linpro AS - Changing the game
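A sketch of those steps, using the dataset names from the original post (the clone name is made up; note that as long as both the clone and the original filesystem exist, the snapshot joining them cannot be destroyed):

```shell
# Create a writable clone from the snapshot.
zfs clone rpool/ROOT/s10s_u6@prepatch rpool/ROOT/s10s_u6-clone

# Promote the clone so the @prepatch snapshot (and the space it holds)
# now belongs to the clone rather than the original filesystem.
zfs promote rpool/ROOT/s10s_u6-clone

# Only after switching the system over to the clone and destroying the
# original filesystem can the snapshot itself be removed:
# zfs destroy rpool/ROOT/s10s_u6
# zfs destroy rpool/ROOT/s10s_u6-clone@prepatch
```

For a root filesystem this also means updating the boot configuration to use the clone before destroying the original, so treat this as an outline rather than a recipe.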
But I don't think we can shrink a volume/filesystem in ZFS .. correct me if I'm wrong :-)
Ketan wrote:
> But I don't think we can shrink a volume/filesystem in ZFS .. correct me if I'm wrong :-)

You can change the size of a volume.

# zfs create -V5m dummy/vol
# zfs get volsize dummy/vol
NAME       PROPERTY  VALUE  SOURCE
dummy/vol  volsize   5M     -
# zfs set volsize=1m dummy/vol
# zfs get volsize dummy/vol
NAME       PROPERTY  VALUE  SOURCE
dummy/vol  volsize   1M     -

You will probably need to remove it from use as swap first, then re-add it, eg:

# swap -d /dev/zvol/dsk/rpool/swap
# zfs set volsize=1G rpool/swap
# /sbin/swapadd

ZFS filesystems don't actually have a fixed size; they may have a quota and/or reservation set on them.

--
Darren J Moffat
On Fri, Nov 06, 2009 at 12:47:46PM +0100, Kjetil Torgrim Homme wrote:
> Ketan <techieneb at gmail.com> writes:
>
> > if i delete something from the / directory it does not free up any
> > space as the freed up space is taken by the modified snapshot. I want
> > to keep my snapshot and decrease the usage too .. how should it do
> > that?

The snapshot takes up space. You can keep it and your data, or you can delete it and get the space back. You can't delete part of a snapshot.

> you can turn the snapshot into a clone, promote it, then remove the
> snapshot. you can now remove some files from the cloned filesystem and
> keep the rest.

A clone and the original filesystem are always joined by a snapshot. One of the filesystems has to go before you can delete the snapshot.

--
Darren
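To illustrate the dependency (snapshot name from the original post; exact wording of the error varies by release):

```shell
# While a clone of it exists, the snapshot cannot be destroyed on its own;
# this command fails with an error about dependent clones.
zfs destroy rpool/ROOT/s10s_u6@prepatch

# "zfs destroy -R" would remove the snapshot together with all of its
# clones; otherwise either the clone, or (after a promote) the original
# filesystem, has to be destroyed first.
```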
On Fri, Nov 6, 2009 at 5:02 AM, Ketan <techieneb at gmail.com> wrote:
> I've a weird situation .. my system has zfs root as its file system and
> now root file system is full at 100%
>
> Code:
> # zfs list
> NAME                          USED  AVAIL   REFER  MOUNTPOINT
> rpool                         134G  65.2M     94K  /rpool
> rpool/ROOT                   29.8G  65.2M     18K  legacy
> rpool/ROOT/s10s_u6           29.8G  65.2M  22.34G  /
> rpool/ROOT/s10s_u6@prepatch   7.7G      -   22.3G  -
> rpool/dump                   4.00G  65.2M   4.00G  -
> rpool/metadb                   10M  67.2M   8.05M  -
> rpool/metadb1                  10M  67.2M   8.05M  -
> rpool/swap                    100G   100G     16K  -
>
> Code:
> # df -h /
> Filesystem           size  used  avail  capacity  Mounted on
> rpool/ROOT/s10s_u6   134G   21G    65M      100%  /
>
> now if i delete something from the / directory it does not free up any
> space as the freed up space is taken by the modified snapshot. I want to
> keep my snapshot and decrease the usage too .. how shuld it do that ?

How old is the snapshot? The way you keep the snapshot and clean up space is by deleting files created after the snapshot was taken.

--Tim
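One heuristic sketch for finding such files: the snapshot's frozen contents are visible under the filesystem's .zfs directory (assuming snapdir is accessible; the snapshot name below is from the original post), so anything strictly newer than a file frozen in the snapshot was created or modified after it was taken:

```shell
# Browse the snapshot's read-only contents.
ls /.zfs/snapshot/prepatch

# List large files on / that are newer than a reference file frozen in
# the snapshot (here /etc/motd, an arbitrary reference choice); these
# are candidates whose deletion actually frees space.
find / -xdev -type f -newer /.zfs/snapshot/prepatch/etc/motd -size +10000k
```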
Hello,

I've got a similar problem now - I've got auto snapshots on up to 80% of the file system capacity.

# zfs get quota rpool/export/home/danjagor
NAME                        PROPERTY  VALUE  SOURCE
rpool/export/home/danjagor  quota     60G    local

# df -h /export/home/danjagor
Filesystem                  Size  Used  Avail  Use%  Mounted on
rpool/export/home/danjagor  8.0G  7.9G   106M   99%  /export/home/danjagor

# zfs list | grep danjagor
rpool/export/home/danjagor  59.9G  106M  7.85G  /export/home/danjagor

There are apparently 46 snapshots, totalling 10.3GB. So how is it possible that there is only 106M left?!?

I also understood that once there isn't enough space on the device, it should start deleting the oldest snapshots, if I'm right. Therefore I should never really run out of space unless I really fill it up with files.

Thanks,
Dan
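A sketch for tracking down where the 59.9G is actually going (the usedby* properties require a reasonably recent pool version, so this is an assumption about the system):

```shell
# List every snapshot under the home dataset with the space each holds.
zfs list -t snapshot -r rpool/export/home/danjagor

# Break the dataset's "used" figure down into live data, snapshot data,
# and any refreservation.
zfs get used,usedbydataset,usedbysnapshots,usedbyrefreservation \
    rpool/export/home/danjagor
```

If usedbysnapshots dominates, the auto-snapshot retention policy (not the live files) is what is consuming the quota.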