similar to: Change the volblocksize of a ZFS volume

Displaying 20 results from an estimated 10000 matches similar to: "Change the volblocksize of a ZFS volume"

2009 Mar 31
3
Bad SWAP performance from zvol
I've upgraded my system from UFS to ZFS (root pool). By default, it creates a zvol for dump and swap. It's a 4GB Ultra-45 and every late night/morning I run a job which takes around 2GB of memory. With a zvol swap, the system becomes unusable and the Sun Ray client often goes into "26B". So I removed the zvol swap and now I have a standard swap partition. The
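As a minimal sketch of that last step (device names are hypothetical), swap can be moved off the zvol with the standard swap(1M) commands:
# swap -d /dev/zvol/dsk/rpool/swap    # remove the zvol swap device
# swap -a /dev/dsk/c0t0d0s1           # add a plain disk slice as swap
# zfs destroy rpool/swap              # optionally reclaim the zvol
A matching line in /etc/vfstab ("/dev/dsk/c0t0d0s1 - - swap - no -") makes the change persistent across reboots.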
2007 Sep 11
4
ext3 on zvols journal performance pathologies?
I've been seeing read and write performance pathologies with Linux ext3 over iSCSI to zvols, especially with small writes. Does running a journalled filesystem on a zvol turn the block storage into swiss cheese? I am considering serving ext3 journals (and possibly swap too) off a raw, hardware-mirrored device. Before I do (and I'll write up any results) I'd like to know
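For reference, an external ext3 journal of the kind described can be set up roughly like this (a hedged sketch; device names are hypothetical):
# mke2fs -O journal_dev /dev/md0          # format the mirrored device as a journal device
# mkfs.ext3 -J device=/dev/md0 /dev/sda1  # create the filesystem with its journal on that device
Whether this avoids the small-write pathology on the zvol-backed data device is exactly the open question of the post.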
2007 Jan 26
10
UFS on zvol: volblocksize and maxcontig
Hi all! First off, if this has been discussed, please point me in that direction. I have searched high and low and really can't find much info on the subject. We have a large-ish (200GB) UFS file system on a Sun Enterprise 250 that is being shared with Samba (lots of files, mostly random IO). OS is Solaris 10u3. The disk set is 7x36GB 10K SCSI, 4 internal, 3 external. For several
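As an illustration only (pool name, size, and values are hypothetical), the two knobs in the subject line are set like this:
# zfs create -V 200g -o volblocksize=8k tank/ufsvol   # volblocksize is fixed at creation time
# newfs /dev/zvol/rdsk/tank/ufsvol
# tunefs -a 16 /dev/zvol/rdsk/tank/ufsvol             # maxcontig for the UFS filesystem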
2009 Oct 17
3
zvol used apparently greater than volsize for sparse volume
What does it mean when the reported volsize of a zvol is less than the product of used and compressratio? For example:
# zfs get -p all home1/home1mm01
NAME             PROPERTY      VALUE         SOURCE
home1/home1mm01  type          volume        -
home1/home1mm01  creation      1254440045    -
home1/home1mm01  used          14902492672
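A narrower query makes the comparison easier; roughly, used multiplied by compressratio approximates the logical data stored, which for a sparse volume can exceed volsize once metadata and indirect blocks are counted (a hedged reading, not a definitive explanation):
# zfs get -p volsize,used,compressratio,refreservation home1/home1mm01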
2010 Jan 31
5
server hang with compression on, ping timeouts from remote machine
Hello All, I am running NTFS over iSCSI on a ZFS zvol with compression=gzip-9 and blocksize=8K. The server is a 2-core P4 at 3.0 GHz with 5 GB of RAM. Whenever I start copying files from Windows onto the ZFS disk, after about 100-200 MB have been copied the server starts to experience freezes. I have iostat running, which freezes as well. Even pings on both of the network adapters are reporting
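If the gzip-9 CPU cost turns out to be the culprit, a lighter algorithm can be swapped in on the fly; only newly written blocks are affected (dataset name is hypothetical):
# zfs set compression=lzjb tank/ntfsvol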
2005 Nov 30
2
Trying to understand volblocksize ?
Hi, I am trying to understand the use of volblocksize in emulated volumes. If I create a volume in a pool and I want a database engine to read and write, say, 16K blocks, should I then set volblocksize to 16K? Regards, Patrik
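A minimal sketch of that setup (names and sizes are hypothetical); note that volblocksize can only be set when the volume is created, not changed afterwards:
# zfs create -V 10g -o volblocksize=16k tank/dbvol   # match the database's 16K I/O size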
2006 Aug 01
5
ZFS, block device and Xen?
Hi There, I looked at the ZFS admin guide in an attempt to find a way to leverage ZFS capabilities (storage pool, mirroring, dynamic striping, etc.) for Xen domU file systems that are not ZFS. I couldn't find an answer as to whether ZFS could be used only as a "regular" volume manager to create logical volumes for UFS or even a Linux ext2fs, with, ideally, the ability to create
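As a hedged sketch of one way this is commonly done (pool, names, and sizes are hypothetical), create a zvol and hand its block device to the domU, then build whatever filesystem the guest wants on it:
# zfs create -V 8g tank/domu1-root
The resulting /dev/zvol/dsk/tank/domu1-root can then be referenced as a phy: device in the domU configuration, with ext2/ext3 or UFS created from inside the guest.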
2009 Dec 31
6
zvol (slow) vs file (fast) performance snv_130
Hello, I was doing performance testing, validating zvol performance in particular, and found zvol write performance to be slow, ~35-44MB/s at 1MB blocksize writes. I then ran the same test against the underlying ZFS file system and got 121MB/s. Is there any way to fix this? I really would like to have comparable performance between the ZFS filesystem and ZFS zvols. # first test is a
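A rough way to reproduce this kind of comparison (sizes and names are hypothetical, and dd is only a crude sequential-write test):
# zfs create -V 10g tank/testvol
# dd if=/dev/zero of=/dev/zvol/rdsk/tank/testvol bs=1M count=4096   # raw zvol write
# dd if=/dev/zero of=/tank/testfile bs=1M count=4096                # file on the same pool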
2013 Nov 22
1
FreeBSD 10-BETA3 - zfs clone of zvol snapshot is not created
Hi, am I doing something wrong, is this not supported by ZFS, or is it a bug that a zvol clone does not show up under /dev/zvol after being created from another zvol's snapshot?
# zfs list -t all | grep local
local        136G   76.8G   144K   none
local/home   117G   76.8G   117G   /home
local/vm     18.4G  76.8G   144K
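For reference, the sequence being described looks roughly like this (dataset names are hypothetical); on FreeBSD the clone's device node would be expected under /dev/zvol/<pool>/<dataset>:
# zfs snapshot local/vm/vm0@base
# zfs clone local/vm/vm0@base local/vm/vm1
# ls /dev/zvol/local/vm/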
2009 Jun 29
7
ZFS - SWAP and lucreate..
Good morning everybody. I was migrating my UFS root filesystem to a ZFS one, but was a little upset to find that it became bigger (which was clearly because of the swap and dump sizes). Now I am wondering whether it is possible to set the swap and dump size when using the lucreate command (I want to try it again, but with less space). Unfortunately I did not find any advice in the man pages.
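One commonly suggested workaround, sketched here with hypothetical sizes, is to resize the swap and dump zvols after the boot environment exists, rather than through lucreate itself (the devices must not be in use while they are resized):
# zfs set volsize=2G rpool/swap
# zfs set volsize=1G rpool/dump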
2007 Apr 12
10
How to bind the oracle 9i data file to zfs volumes
Experts, I'm installing Oracle 9i on Solaris 10 11/06 (update 3). I created some ZFS volumes which will be used for Oracle data files, as follows:
# zfs create -V 200m ora_pool/controlfile01_200m
# zfs create -V 800m ora_pool/system_800m
...
# ls -l /dev/zvol/rdsk/ora_pool
lrwxrwxrwx 1 root root 39 Apr 11 12:23 controlfile01_200m -> ../../../../devices/pseudo/zfs@0:1c,raw
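As an illustrative variation (not the poster's exact commands), the volume block size can be matched to Oracle's db_block_size at creation time, e.g. 8K:
# zfs create -V 800m -o volblocksize=8k ora_pool/system_800m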
2008 Jun 01
1
capacity query
Hi, My swap is on raidz1. df -k and swap -l are showing almost no usage of swap, while zfs list and zpool list are showing me 96% capacity. Which should I believe? Justin
# df -hk
Filesystem            size   used  avail capacity  Mounted on
/dev/dsk/c3t0d0s1      14G   4.0G    10G    28%    /
/devices                0K     0K     0K     0%    /devices
ctfs
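A hedged explanation: a swap zvol normally carries a refreservation equal to its volsize, so zfs list and zpool list count the whole volume as allocated even when swap -l shows little in use. This can be checked with (pool/volume name is hypothetical):
# zfs get volsize,refreservation,used tank/swap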
2006 Oct 31
3
zfs: zvols minor #'s changing and causing probs w/ volumes
Team, **Please respond to me and my coworker listed in the Cc, since neither one of us is on this alias** QUICK PROBLEM DESCRIPTION: The customer created a dataset which contains all the zvols for a particular zone. The zone is then given access to all the zvols in the dataset using a match statement in the zone config (see the long problem description for details). After the initial boot of the zone
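The match-statement setup being referred to looks roughly like this (zone and pool names are hypothetical):
# zonecfg -z zone1
zonecfg:zone1> add device
zonecfg:zone1:device> set match=/dev/zvol/rdsk/tank/zone1/*
zonecfg:zone1:device> end
zonecfg:zone1> commit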
2012 Jan 07
14
zfs defragmentation via resilvering?
Hello all, I understand that relatively high fragmentation is inherent to ZFS due to its COW and possible intermixing of metadata and data blocks (of which metadata path blocks are likely to expire and get freed relatively quickly). I believe it was sometimes implied on this list that such fragmentation for "static" data can currently be combated only by zfs send-ing existing
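The send-based rewrite alluded to is, in sketch form (dataset names are hypothetical), simply a full send into a new dataset, which lays the blocks out afresh:
# zfs snapshot tank/data@rewrite
# zfs send tank/data@rewrite | zfs recv tank/data_new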
2009 Jul 27
10
sam-fs on zfs-pool
Hi list, I've done some tests and run into a very strange situation. I created a zvol using "zfs create -V" and initialized a SAM filesystem on this zvol. After that I restored some test data using a dump from another system. So far so good. After some big troubles I found out that releasing files in the SAM filesystem doesn't free space on the underlying zvol.
2006 Oct 31
0
6347421 Trying to set volblocksize on existing volume gives an unexpected error
Author: eschrock Repository: /hg/zfs-crypto/gate Revision: 8c1e3b54454c3c995f38d9fbb614dfef27d153e9 Log message: 6347421 Trying to set volblocksize on existing volume gives an unexpected error 6347492 snapshots should not pretend to have real properties Files: update: usr/src/common/zfs/zfs_prop.c
2010 Nov 05
3
ZFS vs mpxio vs cfgadm in Solaris.
Folks, I'm trying to figure out whether we should give ZFS / mpxio a shot on one of our research servers, or simply skip it (as we have previously). In Nov 2009 Cindy responded to a thread concerning ZFS device issues, cfgadm, and mpxio: http://mail.opensolaris.org/pipermail/zfs-discuss/2009-November/033496.html I've got an x2270 with the Sun EZ-SAS HBA and external SATA
2009 Nov 03
3
virsh troubling zfs!?
Hi and hello, I have a problem confusing me. I hope someone can help me with it. I followed a "best practice" - I think - using dedicated ZFS filesystems for my virtual machines. Commands (for completeness):
zfs create rpool/vms
zfs create rpool/vms/vm1
zfs create -V 10G rpool/vms/vm1/vm1-dsk
The last command creates the file system /rpool/vms/vm1/vm1-dsk and the
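For completeness, attaching such a zvol to the guest can be done along these lines (domain name and target device are hypothetical, and the exact /dev/zvol path depends on the platform):
# virsh attach-disk vm1 /dev/zvol/dsk/rpool/vms/vm1/vm1-dsk vdb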
2018 Aug 08
1
Windows Guest I/O performance issues (already using virtio)
List, I have a number of Windows 2016 servers I am deploying, but I’m having some I/O performance issues. I have done all of the obvious things like virtio drivers, but am finding there is more performance to be found with hyper-v extensions, how we virtualize the hardware clock, and iothreads. I’m using ZVOLs to back the VM, and I’m using 4k block sizes, which seems to offer the best 4k random
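The 4k-block ZVOL backing described would be created roughly as follows (pool and volume names, and the size, are hypothetical):
# zfs create -V 100G -o volblocksize=4k tank/win2016-disk0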
2010 Jul 16
1
Making a zvol unavailable to iSCSI trips up ZFS
I've been experimenting with a two-system setup in snv_134 where each system exports a zvol via COMSTAR iSCSI. One system imports both its own zvol and the one from the other system and puts them together in a ZFS mirror. I manually faulted the zvol on one system by physically removing some drives. What I expect to happen is that ZFS will fault the zvol pool and the iSCSI stack will
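For context, exporting a zvol over COMSTAR iSCSI looks roughly like this (names are hypothetical, and <lu-guid> is a placeholder for the GUID printed by create-lu):
# zfs create -V 50g tank/lun0
# stmfadm create-lu /dev/zvol/rdsk/tank/lun0
# stmfadm add-view <lu-guid>
# itadm create-target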