similar to: Home server questions

Displaying 20 results from an estimated 10000 matches similar to: "Home server questions"

2010 Jan 02
27
Pool import with failed ZIL device now possible ?
Hello list, someone (actually Neil Perrin, CC'd) mentioned in this thread: http://mail.opensolaris.org/pipermail/zfs-discuss/2009-December/034340.html that it should be possible to import a pool with failed log devices (with or without data loss?). > Has the following error no consequences? > Bug ID 6538021 > Synopsis: Need a way to force pool startup when
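(A minimal sketch, not taken from the thread: on builds that later gained the feature, a pool with a missing or failed log device can be imported with the -m flag; the pool name here is hypothetical.)
# zpool import -m tank
# zpool status -v tank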
2010 Jan 17
3
I can't seem to get the pool to export...
root at nas:~# zpool export -f raid cannot export 'raid': pool is busy I've disabled all the services I could think of. I don't see anything accessing it. I also don't see any of the filesystems mounted with mount or "zfs mount". What's the deal? This is not the rpool, so I'm not booted off it or anything like that.
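(A few diagnostic commands worth trying in this situation; the pool name comes from the post, the rest is a generic sketch.)
# fuser -cu /raid             # who holds files open under the mountpoint
# zfs list -t volume -r raid  # any zvols that might be in use (swap, dump, iSCSI)
# swap -l
# dumpadm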
2010 Jan 31
5
server hang with compression on, ping timeouts from remote machine
Hello All, I am running NTFS over iSCSI on a ZFS zvol with compression=gzip-9 and a block size of 8K. The server is a 2-core P4 at 3.0 GHz with 5 GB of RAM. Whenever I start copying files from Windows onto the ZFS disk, after about 100-200 MB have been copied the server starts to experience freezes. I have iostat running, which freezes as well. Even pings on both of the network adapters are reporting
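(One hedged suggestion often made for this symptom: gzip-9 compression is very CPU-heavy, so checking and lowering the compression setting is a cheap experiment; the dataset name is hypothetical.)
# zfs get compression,volblocksize tank/ntfsvol
# zfs set compression=lzjb tank/ntfsvol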
2008 Apr 12
5
ZVOL access permissions?
How can I set up a ZVOL that's accessible by non-root users, too? The intent is to use sparse ZVOLs as raw disks in virtualization (reducing overhead compared to file-based virtual volumes). Thanks, -mg
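(A minimal sketch, assuming a zvol named tank/vm01 and a user mg: the device nodes can simply be chown'd, though the change may not survive a reboot.)
# chown mg /dev/zvol/dsk/tank/vm01 /dev/zvol/rdsk/tank/vm01
# chmod 600 /dev/zvol/dsk/tank/vm01 /dev/zvol/rdsk/tank/vm01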
2009 Dec 31
6
zvol (slow) vs file (fast) performance snv_130
Hello, I was doing performance testing, validating zvol performance in particular, and found that zvol write performance was slow, ~35-44 MB/s at 1 MB block-size writes. I then tested the underlying ZFS file system with the same test and got 121 MB/s. Is there any way to fix this? I really would like to have comparable performance between the ZFS filesystem and ZFS zvols. # first test is a
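(A sketch of the kind of comparison being described, with hypothetical names: sequential 1 MB writes to a zvol device versus to a file on the same pool.)
# zfs create -V 10g tank/testvol
# dd if=/dev/zero of=/dev/zvol/rdsk/tank/testvol bs=1024k count=1024
# dd if=/dev/zero of=/tank/fs/testfile bs=1024k count=1024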
2011 Jul 10
3
How create a FAT filesystem on a zvol?
The `lofiadm' man page describes how to export a file as a block device and then use `mkfs -F pcfs' to create a FAT filesystem on it. Can't I do the same thing by first creating a zvol and then creating a FAT filesystem on it? Nothing I've tried seems to work. Isn't the zvol just another block device? -- -Gary Mills- -Unix Group-
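(One workaround sometimes suggested, not verified here: mkfs_pcfs wants fdisk geometry it cannot read from a zvol, so pass nofdisk and an explicit size in 512-byte sectors; names and sizes are examples only.)
# zfs create -V 1g tank/fat
# mkfs -F pcfs -o nofdisk,size=2097152 /dev/zvol/rdsk/tank/fat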
2010 Mar 06
3
Monitoring my disk activity
Recently, I've been benchmarking all kinds of stuff on my systems, and one question I can't intelligently answer is what block size I should use in these tests. I assume there is something that monitors current disk activity, which I could run on my production servers, to give me statistics on the I/O sizes that users are actually generating there.
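(A minimal DTrace sketch that answers exactly this question on Solaris: aggregate the size of every I/O the system issues, let it run on the production box for a while, then read the histogram.)
# dtrace -n 'io:::start { @["I/O size (bytes)"] = quantize(args[0]->b_bcount); }'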
2007 Sep 11
4
ext3 on zvols journal performance pathologies?
I've been seeing read and write performance pathologies with Linux ext3 over iSCSI to zvols, especially with small writes. Does running a journalled filesystem on a zvol turn the block storage into swiss cheese? I am considering serving ext3 journals (and possibly swap too) off a raw, hardware-mirrored device. Before I do (and I'll write up any results) I'd like to know
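(A sketch of the external-journal idea being considered, with hypothetical Linux device names: create a dedicated journal device on the mirrored disk, then build ext3 on the iSCSI/zvol device pointing at it.)
# mke2fs -O journal_dev /dev/sdb1
# mke2fs -j -J device=/dev/sdb1 /dev/sdc1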
2006 Nov 01
56
ZFS/iSCSI target integration
Rick McNeal and I have been working on building support for sharing ZVOLs as iSCSI targets directly into ZFS. Below is the proposal I'll be submitting to PSARC. Comments and suggestions are welcome. Adam ---8<--- iSCSI/ZFS Integration A. Overview The goal of this project is to couple ZFS with the iSCSI target in Solaris specifically to make it as easy to create and export ZVOLs
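(For reference, the interface this work eventually exposed was a simple ZFS property; a hedged example with a hypothetical pool name:)
# zfs create -V 20g tank/iscsivol
# zfs set shareiscsi=on tank/iscsivol
# iscsitadm list target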
2006 Jul 15
2
zvols or files for Oracle?
Hello zfs-discuss, What would you rather propose for ZFS+ORACLE - zvols or just files from the performance standpoint? -- Best regards, Robert mailto:rmilkowski at task.gda.pl http://milek.blogspot.com
2009 Mar 11
6
Export ZFS via iSCSI to Linux - Is it stable for production use now?
Hello, I want to set up an OpenSolaris box as a centralized storage server, using ZFS as the underlying FS on RAID 10 SATA disks. I will export the storage blocks using iSCSI to RHEL 5 (fewer than 10 clients, and I will format the partitions as ext3). I want to ask... 1. Is this setup suitable for mission-critical use now? 2. Can I use LVM with this setup? Currently we are using NFS as the
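(A rough sketch of the RHEL 5 initiator side, assuming open-iscsi and a target at the hypothetical address 192.168.1.10; the resulting device name will differ.)
# iscsiadm -m discovery -t sendtargets -p 192.168.1.10
# iscsiadm -m node --login
# mkfs.ext3 /dev/sdb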
2006 Oct 31
3
zfs: zvols minor #'s changing and causing probs w/ volumes
Team, **Please respond to me and my coworker listed in the Cc, since neither of us is on this alias.** QUICK PROBLEM DESCRIPTION: The customer created a dataset which contains all the zvols for a particular zone. The zone is then given access to all the zvols in the dataset using a match statement in the zone configuration (see the long problem description for details). After the initial boot of the zone
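(A sketch of the kind of match statement being described, with hypothetical zone and dataset names:)
# zonecfg -z webzone
zonecfg:webzone> add device
zonecfg:webzone:device> set match=/dev/zvol/dsk/tank/webzone/*
zonecfg:webzone:device> end
zonecfg:webzone> exit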
2010 May 04
8
iscsitgtd failed request to share on zpool import after upgrade from b104 to b134
Hi, I am posting my question to both storage-discuss and zfs-discuss as I am not quite sure what is causing the messages I am receiving. I have recently migrated my zfs volume from b104 to b134 and upgraded it from zfs version 14 to 22. It consists of two zvols, 'vol01/zvol01' and 'vol01/zvol02'. During zpool import I am getting a non-zero exit code,
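(A few checks commonly suggested for this symptom; the dataset names come from the post, the service FMRI is the standard legacy iSCSI target service.)
# svcs -xv svc:/system/iscsitgt:default
# zfs get shareiscsi vol01/zvol01 vol01/zvol02
# iscsitadm list target -v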
2009 Sep 19
2
[Fwd: Shared Storage in xVM Opensolaris build 122]
Hi, I double-checked the configuration with xm and the "w!" option seems to be configured :-( I also upgraded to build 123, but there was no change in behaviour. So is this a RAC 11.2 problem, or is there a general problem using zvols as shared disks for xVM domains? (device (vbd (protocol x86_64-abi) (uuid 048d282d-da4c-2e0b-b9f4-f3f4cc0811c7)
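(For context, a "w!" disk line in an xVM/Xen domain config looks roughly like this, with hypothetical zvol and device names; w! marks the disk as writable-shared.)
disk = [ 'phy:/dev/zvol/dsk/tank/racdisk1,xvdb,w!' ]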
2009 Oct 17
3
zvol used apparently greater than volsize for sparse volume
What does it mean for the reported value of a zvol volsize to be less than the product of used and compressratio? For example,
# zfs get -p all home1/home1mm01
NAME             PROPERTY   VALUE        SOURCE
home1/home1mm01  type       volume       -
home1/home1mm01  creation   1254440045   -
home1/home1mm01  used       14902492672
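(A narrower query that helps when reasoning about this, using the dataset name from the post; refreservation and metadata overhead both feed into used.)
# zfs get -p volsize,used,referenced,compressratio,refreservation home1/home1mm01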
2009 Mar 31
3
Bad SWAP performance from zvol
I've upgraded my system from ufs to zfs (root pool). By default, it creates a zvol for dump and swap. It's a 4GB Ultra-45 and every late night/morning I run a job which takes around 2GB of memory. With a zvol swap, the system becomes unusable and the Sun Ray client often goes into "26B". So I removed the zvol swap and now I have a standard swap partition. The
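(A sketch of the switch being described, with a hypothetical disk slice; /etc/vfstab would need the matching entry.)
# swap -d /dev/zvol/dsk/rpool/swap
# zfs destroy rpool/swap
# swap -a /dev/dsk/c0t0d0s1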
2010 Mar 29
19
sharing an ssd between rpool and l2arc
Hi, as Richard Elling wrote earlier: "For more background, low-cost SSDs intended for the boot market are perfect candidates. Take a X-25V @ 40GB and use 15-20 GB for root and the rest for an L2ARC. For small form factor machines or machines with max capacity of 8GB of RAM (a typical home system) this can make a pleasant improvement over a HDD-only implementation." For the upcoming
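(A minimal sketch of the L2ARC half of that idea, assuming the SSD has already been sliced and the data pool is called tank:)
# zpool add tank cache c2t1d0s1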
2009 Jun 23
6
recursive snapshot
I thought I recalled reading somewhere that in the situation where you have several zfs filesystems under one top-level directory like this:
rpool
rpool/ROOT/osol-112
rpool/export
rpool/export/home
rpool/export/home/reader
you could do a snapshot encompassing everything below rpool instead of having to do it at each level. (Maybe it was in a dream...)
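(A minimal example of the recursive snapshot option being remembered here; the snapshot name is arbitrary:)
# zfs snapshot -r rpool@nightly
# zfs list -t snapshot -r rpool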
2007 Sep 21
2
Installing Centos5 into Solaris-Xen DomU
I have posted at http://vireso.blogspot.com my notes about installing CentOS 5 (from a DVD image) into a DomU on OpenSolaris Xen (xen-nv66-2007-06-24). Using OpenSolaris for Dom0 gives you advantages such as ZFS zvols for "phy" devices in DomUs, so zfs snapshot & zfs send give you incredibly fast and convenient full/incremental backup/restore of entire DomUs.
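(A sketch of the full/incremental zvol backup pattern being referred to, with hypothetical dataset and file names:)
# zfs snapshot tank/centos5-disk@full
# zfs send tank/centos5-disk@full > /backup/centos5-disk.full
# zfs snapshot tank/centos5-disk@incr1
# zfs send -i @full tank/centos5-disk@incr1 > /backup/centos5-disk.incr1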
2009 Jun 16
3
Adding zvols to a DomU
I'm trying to add extra zvols to a Solaris 10 DomU (snv_113 Dom0). I can use virsh attach-disk <name> <zvol> hdb --device phy to attach the zvol as c0d1. Replacing hdb with hdd gives me c1d1, but that is it. Being able to attach several more zvols would be nice, but even being able to get at c1d0 would be useful. Am I missing something, or can I only attach to hda/hdb/hdd?