similar to: zvol space consumption vs ashift, metadata packing

Displaying 20 results from an estimated 400 matches similar to: "zvol space consumption vs ashift, metadata packing"

2011 Jul 29
12
booting from ashift=12 pool..
.. evidently doesn't work. GRUB reboots the machine moments after loading stage2, and doesn't recognise the fstype when examining the disk loaded from an alternate source. This is with SX-151. Here's hoping a future version (with grub2?) resolves this, as well as lets us boot from raidz. Just a note for the archives in case it helps someone else get back the afternoon
2009 Aug 14
16
What's eating my disk space? Missing snapshots?
Please can someone take a look at the attached file which shows the output on my machine of zfs list -r -t filesystem,snapshot -o space rpool/export/home/matt The USEDDS figure of ~2GB is what I would expect, and is the same figure reported by the Disk Usage Analyzer. Where is the remaining 13.8GB USEDSNAP figure coming from? If I total up the list of zfs-auto snapshots it adds up to about 4.8GB,
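A hedged way to chase down a USEDSNAP figure like this one (dataset name taken from the post above; note the per-snapshot USED values won't necessarily add up, because space shared by several snapshots isn't charged to any single one):
  zfs get usedbysnapshots rpool/export/home/matt
  zfs list -r -t snapshot -o name,used,referenced rpool/export/home/matt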
2011 Oct 05
1
Fwd: Re: zvol space consumption vs ashift, metadata packing
Hello, Daniel, Apparently your data is represented by rather small files (thus many small data blocks), so the proportion of metadata is relatively high, and your <4k blocks are now using at least 4k of disk space. For data with small blocks (a 4k volume on an ashift=12 pool) I saw metadata use up most of my drive - becoming equal to the data size. Just for the sake of completeness, I brought up a
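A back-of-the-envelope sketch of why metadata balloons for small blocks on an ashift=12 vdev (pool and zvol names are hypothetical; the 128-byte block pointer is the usual ZFS figure):
  zfs get volblocksize,used,referenced tank/smallvol   # hypothetical zvol created with volblocksize=4K
  echo $(( (1 << 40) / 4096 ))   # a 1 TiB zvol at 4K blocks is ~268 million data blocks, each tracked
                                 # by a 128-byte block pointer; indirect blocks that compress to well
                                 # under 4K still occupy a full 4K allocation when ashift=12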
2010 Nov 23
14
ashift and vdevs
zdb -C shows an ashift value on each vdev in my pool, and I was just wondering if it is vdev specific, or pool wide. Google didn't seem to know. I'm considering a mixed pool with some "advanced format" (4KB sector) drives, and some normal 512B sector drives, and was wondering if the ashift can be set per vdev, or only per pool. Theoretically, this would save me some size on
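ashift is recorded per top-level vdev (zdb prints one value under each vdev entry), so mixing 512B and 4K-sector vdevs in one pool gives mixed ashift values; a quick check, pool name assumed:
  zdb -C tank | grep ashift   # one ashift line per top-level vdev in the cached config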
2012 Jan 11
1
How many "rollback" TXGs in a ring for 4k drives?
Hello all, I found this dialog on the zfs-devel at zfsonlinux.org list, and I'd like someone to confirm-or-reject the discussed statement. Paraphrasing in my words and understanding: "Labels, including Uberblock rings, are fixed 256KB in size each, of which 128KB is the UB ring. Normally there is 1KB of data in one UB, which gives 128 TXGs to rollback to. When ashift=12 is
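The arithmetic behind the question, as a hedged sketch (uberblock slots are sized as the larger of 1 KiB and 1<<ashift, inside the 128 KiB ring):
  echo $(( (128 * 1024) / (1 << 10) ))   # ashift <= 10: 128 uberblock slots to roll back through
  echo $(( (128 * 1024) / (1 << 12) ))   # ashift = 12:   32 slots, so a much shorter TXG history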
2007 Sep 18
5
ZFS panic in space_map.c line 125
One of our Solaris 10 update 3 servers panicked today with the following error: Sep 18 00:34:53 m2000ef savecore: [ID 570001 auth.error] reboot after panic: assertion failed: ss != NULL, file: ../../common/fs/zfs/space_map.c, line: 125 The server saved a core file, and the resulting backtrace is listed below: $ mdb unix.0 vmcore.0 > $c vpanic() 0xfffffffffb9b49f3() space_map_remove+0x239()
2012 Feb 16
3
4k sector support in Solaris 11?
If I want to use a batch of new Seagate 3TB Barracudas with Solaris 11, will zpool let me create a new pool with ashift=12 out of the box or will I need to play around with a patched zpool binary (or the iSCSI loopback)? -- Dave Pooser Manager of Information Services Alford Media http://www.alfordmedia.com
2011 Oct 12
33
weird bug with Seagate 3TB USB3 drive
Banging my head against a Seagate 3TB USB3 drive. Its marketing name is: Seagate Expansion 3 TB USB 3.0 Desktop External Hard Drive STAY3000102 format(1M) shows it identify itself as: Seagate-External-SG11-2.73TB Under both Solaris 10 and Solaris 11x, I receive the evil message: | I/O request is not aligned with 4096 disk sector size. | It is handled through Read Modify Write but the performance
2012 Jun 17
26
Recommendation for home NAS external JBOD
Hi, my oi151 based home NAS is approaching a frightening "drive space" level. Right now the data volume is a 4*1TB Raid-Z1, 3 1/2" local disks individually connected to an 8 port LSI 6Gbit controller. So I can either exchange the disks one by one with autoexpand, use 2-4 TB disks and be happy. This was my original approach. However I am totally unclear about the 512b vs 4Kb issue.
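For the exchange-one-disk-at-a-time route, a hedged outline with hypothetical pool and device names:
  zpool set autoexpand=on tank
  zpool replace tank c2t0d0 c2t6d0   # repeat per disk, waiting for each resilver to finish
  zpool online -e tank c2t6d0        # expand explicitly if autoexpand didn't pick up the new size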
2012 Jul 31
1
FreeBSD 9.1-BETA1 amd64 fails to mount ZFS rootfs with error 2 when system has more than 3584MB of RAM
Dear Everyone, I am running FreeBSD 9.1-BETA1 amd64 on ZFS in KVM on Gentoo Linux on ZFS. The root pool uses ashift=13 and is on a single disk. The kernel fails to mount the root filesystem if the system has more than 3584MB of RAM. I did a manual binary search to try to find the exact upper limit, but stopped when I tried 3648MB. FreeBSD 9.0-RELEASE works perfectly. Yours truly, Richard Yao
2013 Dec 17
2
Setting up a lustre zfs dual mgs/mdt over tcp - help requested
Hi all, Here is the situation: I have 2 nodes MDS1 , MDS2 (10.0.0.22 , 10.0.0.23) I wish to use as failover MGS, active/active MDT with zfs. I have a jbod shelf with 12 disks, seen by both nodes as das (the shelf has 2 sas ports, connected to a sas hba on each node), and I am using lustre 2.4 on centos 6.4 x64 I have created 3 zfs pools: 1. mgs: # zpool
2008 Oct 19
9
My 500-gig ZFS is gone: insufficient replicas, corrupted data
Hi, I'm running FreeBSD 7.1-PRERELEASE with a 500-gig ZFS drive. Recently I've encountered a FreeBSD problem (PR kern/128083) and decided to update the motherboard BIOS. It looked like the update went right but after that I was shocked to see my ZFS destroyed! Rolling the BIOS back did not help. Now it looks like that: # zpool status pool: tank state: UNAVAIL status:
2008 Sep 05
0
raidz pool metadata corrupted nexenta-core->freenas 0.7->nexenta-core
I made a bad judgment and now my raidz pool is corrupted. I have a raidz pool running on OpenSolaris b85. I wanted to try out freenas 0.7 and tried to add my pool to freenas. After adding the zfs disk, vdev and pool, I decided to back out and went back to OpenSolaris. Now my raidz pool will not mount and I got the following errors. Hope some expert can help me recover from this error.
2013 Aug 21
1
Properties list for zfs in FreeBSD
Hi: Where can I find a list of properties (-o/-O property=value) for creating a zpool? I meant something like: # zpool create \ -o ashift=12 \ -O dedup=off -O autoexpand=off -O atime=off \ -O canmount=off \ -O compression=lz4 \ -O normalization=formD \ -O mountpoint=/jail \ tank \ mirror \ /dev/gptid/diskname0 \ /dev/gptid/diskname1 \
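The authoritative lists live in the zpool(8) and zfs(8) man pages; a hedged way to see every property and its current value on an existing pool and dataset (names hypothetical):
  zpool get all tank
  zfs get all tank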
2012 Jan 11
0
Clarifications wanted for ZFS spec
I'm reading the "ZFS On-disk Format" PDF (dated 2006 - are there newer releases?), and have some questions regarding whether it is outdated: 1) On page 16 it has the following phrase (which I think is in general invalid): The value stored in offset is the offset in terms of sectors (512 byte blocks). To find the physical block byte offset from the beginning of a slice,
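For reference, a hedged sketch of the calculation that sentence describes, per the 2006 spec: the DVA offset is stored in 512-byte units, counted past the 4 MiB reserved for the front labels and boot block, independent of ashift (the offset value below is made up):
  dva_offset=0x1234                          # hypothetical value read from a blkptr's DVA
  echo $(( (dva_offset << 9) + 0x400000 ))   # byte offset from the start of the vdev:
                                             # 512-byte sectors, plus the 4 MiB label/boot reserve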
2012 Sep 24
20
cannot replace X with Y: devices have different sector alignment
Well this is a new one.... Illumos/Openindiana let me add a device as a hot spare that evidently has a different sector alignment than all of the other drives in the array. So now I'm at the point that I /need/ a hot spare and it doesn't look like I have it. And, worse, the other spares I have are all the same model as said hot spare. Is there anything I can do with this or
2012 Jul 18
7
Question on 4k sectors
Hi. Is the problem with ZFS supporting 4k sectors or is the problem mixing 512 byte and 4k sector disks in one pool, or something else? I have seen a lot of discussion on the 4k issue but I haven't understood what the actual problem ZFS has with 4k sectors is. It's getting harder and harder to find large disks with 512 byte sectors so what should we do? TIA...
2010 Jun 29
0
Processes hang in /dev/zvol/dsk/poolname
After multiple power outages caused by storms coming through, I can no longer access /dev/zvol/dsk/poolname, which holds the l2arc and slog devices for another pool. I don't think this is related, since the pools are offline pending access to the volumes. I tried running find /dev/zvol/dsk/poolname -type f and here is the stack; hopefully this gives someone a hint at what the issue is. I have
2010 May 07
0
confused about zpool import -f and export
Hi, all, I think I'm missing a concept with import and export. I'm working on installing a Nexenta b134 system under Xen, and I have to run the installer under hvm mode, then I'm trying to get it back up under pv mode. In that process the controller names change, and that's where I'm getting tripped up. I do a successful install, then I boot OK,
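Import matches pools by the GUIDs in the on-disk labels rather than by controller path, so a hedged sequence for the hvm-to-pv move (pool name assumed) would be:
  zpool export tank    # inside the hvm instance, before shutting it down
  zpool import         # under pv: scan /dev/dsk and list importable pools
  zpool import tank    # -f is only needed if the pool was never cleanly exported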
2010 Jan 17
4
Snapshot that won't go away.
I have a Solaris 10 update 6 system with a snapshot I can't remove. zfs destroy -f <snap> reports the device as being busy. fuser doesn't show any process using the filesystem and it isn't shared. I can unmount the filesystem OK. Any clues or suggestions of bigger sticks to hit it with? -- Ian.
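Two usual culprits for a busy snapshot are a dependent clone or (on newer releases) a user hold; a hedged way to check, with a made-up snapshot name:
  zfs list -H -o name,origin -r rpool | grep '@mysnap'   # a clone shows the snapshot as its origin
  zfs holds rpool/fs@mysnap                              # 'zfs holds' may not exist on older Solaris 10
  zfs destroy -R rpool/fs@mysnap                         # -R also removes dependent clones; destructive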