similar to: DomU with ZFS root on iSCSI - any tips?

Displaying 20 results from an estimated 3000 matches similar to: "DomU with ZFS root on iSCSI - any tips?"

2009 Jun 19
8
x4500 resilvering spare taking forever?
I've got a Thumper running snv_57 and a large ZFS pool. I recently noticed a drive throwing some read errors, so I did the right thing and replaced it with a spare via zpool replace. Everything went well, but the resilvering process seems to be taking an eternity:

    # zpool status
      pool: bigpool
     state: ONLINE
    status: One or more devices has experienced an unrecoverable error. An attempt was ...
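For reference, the replace-and-monitor sequence on a pool like this is roughly the following; "bigpool" is from the post, but the device names are placeholders:

    # zpool replace bigpool c4t3d0 c5t7d0   # swap the failing disk for the spare
    # zpool status -v bigpool               # resilver percent-done and ETA appear here

zpool status prints a percent-done and an estimated completion time while the resilver runs, which is the quickest way to tell whether it is genuinely stalled or just slow on a large pool.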
2008 Sep 05
3
Snapshots during a scrub
I have a weekly scrub set up, and I've now seen at least once that it says "don't snapshot while scrubbing". Is this a data integrity issue, or will it make one or both of the processes take longer? Thanks
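A simple way to honor that warning is to have the snapshot cron job check for a running scrub first; a minimal sketch, where the pool and snapshot names are hypothetical:

    #!/bin/sh
    # skip this run if a scrub is still in progress on the pool
    if zpool status tank | grep "scrub in progress" > /dev/null; then
        exit 0
    fi
    zfs snapshot tank/data@weekly-`date +%Y%m%d`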
2007 Jul 26
5
FC6 domU install problem
Hello, Just getting my head around Xen. I've been able to successfully install a Solaris dom0 and domU with the latest Solaris Xen code drop. Now I'm moving on to Fedora Core 6 (64-bit), and it's failing. I'm following the instructions posted at http://www.opensolaris.org/os/community/xen/docs/fedora-install.htm But early in the install process I see GUI and shell ...
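For anyone following along, a paravirt Fedora install from a dom0 of that era was typically kicked off with virt-install; a sketch, in which the guest name, sizes, and mirror URL are placeholders rather than anything from the post:

    # virt-install --paravirt --name fc6 --ram 1024 \
        --file /xen/fc6-root.img --file-size 8 --nographics \
        --location http://mirror.example.com/fedora/core/6/x86_64/os/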
2007 Feb 27
16
understanding zfs/thumper "bottlenecks"?
Currently I'm trying to figure out the best zfs layout for a thumper with respect to read AND write performance. I did some simple mkfile 512G tests and found out that on average ~500 MB/s seems to be the maximum one can reach (tried the initial default setup, all 46 HDDs as R0, etc.). According to http://www.amd.com/us-en/assets/content_type/DownloadableAssets/ArchitectureWP_062806.pdf I would ...
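The mkfile test referred to looks something like this; the pool name is hypothetical:

    # ptime mkfile 512g /bigpool/testfile   # wall-clock the sequential write
    # zpool iostat bigpool 5                # watch sustained MB/s while it runs

Sampling zpool iostat every few seconds gives a better picture of sustained throughput than dividing the file size by the total elapsed time, since the latter folds in cache warm-up and txg flushing.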
2007 Jul 26
4
Does iSCSI target support SCSI-3 PGR reservation ?
Does the OpenSolaris iSCSI target support SCSI-3 PGR reservation? My goal is to use the iSCSI LUN created by [1] or [2] as a quorum device for a 3-node Sun Cluster. [1] zfs set shareiscsi=on <storage-pool/zfs volume name> [2] iscsitadm create target ..... Thanks, -- leon
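Spelled out, the two ways of creating the LUN mentioned above look roughly like this; the pool, volume, and target names are hypothetical:

    # zfs create -V 1g tank/quorumvol
    # zfs set shareiscsi=on tank/quorumvol                              # path [1]
    # iscsitadm create target -b /dev/zvol/rdsk/tank/quorumvol quorum   # path [2]

Neither command by itself implies SCSI-3 persistent group reservation support; that depends on the target implementation, which is exactly what the poster is asking about.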
2008 Oct 27
7
Fujitsu Siemens PRIMERGY RX300
Hi all, OpenSolaris works perfectly, but I am not able to boot the xVM kernel on this hardware. I added the -k option in grub, but the system hangs before the hostname line without any debug info. I've tried snv builds from b94 to b99, with the same results. If I install Debian with a Xen kernel I am able to use PV and HVM guests. What can I do? Thanks, Giacomo
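For context, a Solaris xVM GRUB entry with the kernel debugger flag added looks roughly like this; a sketch based on the stock menu.lst layout, so exact paths may differ per build:

    title Solaris xVM (kmdb)
    kernel$ /boot/$ISADIR/xen.gz
    module$ /platform/i86xpv/kernel/$ISADIR/unix /platform/i86xpv/kernel/$ISADIR/unix -k -v
    module$ /platform/i86pc/$ISADIR/boot_archive

Adding -v alongside -k sometimes coaxes out a few more console lines before a hang like this one.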
2009 Dec 17
4
NIS failover
We just updated our configurations to have multiple NIS servers; when we initiated a test of client failover, we were disappointed. It seemed that the only way to get a failover was to run /etc/init.d/ypbind restart. It behaves as indicated in http://bugs.opensolaris.org/bugdatabase/view_bug.do?bug_id=5084845 using ypbind-1.17.2-13 on CentOS 4.5 / Linux xxxxxxxxxxxx 2.6.9-55.0.12.ELsmp #1 SMP Fri Nov ...
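For reference, a multi-server NIS client on a Linux box like that one is usually just multiple server lines in yp.conf; the domain and server names below are hypothetical:

    # /etc/yp.conf
    domain example.com server nis1.example.com
    domain example.com server nis2.example.com

Running ypwhich shows which server ypbind is currently bound to, which makes it easy to verify whether a rebind actually happened after taking the first server down.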
2010 Mar 27
16
zpool split problem?
zpool split is a wonderful feature and it seems to work well, and the choice of which disk got which name was perfect! But there seems to be an odd anomaly (at least with b132).

    Started with c0t1d0s0 running b132 (root pool is called rpool)
    Attached c0t0d0s0 and waited for it to resilver
    Rebooted from c0t0d0s0
    zpool split rpool spool
    Rebooted from c0t0d0s0; both rpool and spool were mounted ...
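The sequence described maps to roughly these commands; the pool and device names are as given in the post:

    # zpool attach rpool c0t1d0s0 c0t0d0s0   # mirror onto the second disk, wait for resilver
    # zpool split rpool spool                # detach the new half as its own pool "spool"
    # zpool import spool                     # normally required before spool is visible

The anomaly the poster describes is that spool showed up mounted after reboot without an explicit import, which is what makes the behavior surprising.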
2009 Oct 17
3
zvol used apparently greater than volsize for sparse volume
What does it mean for the reported volsize of a zvol to be less than the product of used and compressratio? For example:

    # zfs get -p all home1/home1mm01
    NAME             PROPERTY  VALUE        SOURCE
    home1/home1mm01  type      volume       -
    home1/home1mm01  creation  1254440045   -
    home1/home1mm01  used      14902492672  ...
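The comparison behind the question, using a hypothetical compressratio since the excerpt cuts off before that property:

    used x compressratio ~= logical bytes written
    14902492672 x 2.00   ~= 29804985344 (~27.8 GB), versus a smaller volsize

When that product exceeds volsize on a sparse volume, it usually reflects snapshot and metadata overhead being counted in used, rather than more data than the volume can logically hold.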
2008 Sep 16
3
iscsi target problems on snv_97
I've recently upgraded my x4500 to Nevada build 97, and am having problems with the iSCSI target. Background: this box is used to serve NFS underlying a VMware ESX environment (zfs filesystem-type datasets) and presents iSCSI targets (zfs zvol datasets) to a Windows host and as zone roots for Solaris 10 hosts. For optimal random-read performance, I've configured a single ...
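When debugging a setup like this, the target state on the snv_97 side is usually inspected with the legacy iscsitgt tooling; a sketch, where the dataset name is hypothetical:

    # iscsitadm list target -v        # show targets, LUNs, and active connections
    # zfs get shareiscsi tank/winvol  # confirm the zvol is still flagged for sharing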
2008 Sep 05
6
resilver speed.
Is there any way to control the resilver speed? Having attached a third disk to a mirror (so I can replace the other disks with larger ones), the resilver goes at a fraction of the speed of the same operation under DiskSuite. However, it still renders the system pretty much unusable for anything else, so I would like to control the rate of the resilver: either slow it down a lot so that the ...
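There was no supported knob for this at the time; on later OpenSolaris builds the resilver throttle could be poked through kernel tunables with mdb. An unsupported sketch, assuming these tunables exist on your build:

    # echo "zfs_resilver_min_time_ms/W0t1000" | mdb -kw   # less resilver time per txg
    # echo "zfs_resilver_delay/W0t4" | mdb -kw            # back off more when the pool is busy

Values set this way revert on reboot; equivalent /etc/system entries are needed to make them persistent.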
2007 Sep 11
4
ext3 on zvols journal performance pathologies?
I've been seeing read and write performance pathologies with Linux ext3 over iSCSI to zvols, especially with small writes. Does running a journalled filesystem on a zvol turn the block storage into swiss cheese? I am considering serving ext3 journals (and possibly swap too) off a raw, hardware-mirrored device. Before I do (and I'll write up any results) I'd like to know ...
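The external-journal experiment described would look something like this on the Linux side; the device names are hypothetical:

    # mke2fs -O journal_dev /dev/sdb1           # dedicated journal on the mirrored raw device
    # mkfs.ext3 -J device=/dev/sdb1 /dev/sda1   # ext3 data on the iSCSI/zvol LUN (block sizes must match)

Once mounted, all journal writes land on the hardware-mirrored device instead of the zvol, which isolates the small synchronous journal I/O the poster suspects.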
2007 Jun 01
10
SMART
On Solaris x86, does zpool (or anything) support PATA (or SATA) IDE SMART data? With the Predictive Self-Healing feature, I assumed that Solaris would have at least some SMART support, but what I've googled so far has been discouraging. http://prefetch.net/blog/index.php/2006/10/29/solaris-needs-smart-support-please-help/ Bug ID: 4665068 SMART support in IDE driver
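For comparison, smartmontools did build on Solaris in that era, though ATA/SATA support depended heavily on the driver; a hedged sketch, with a hypothetical device path:

    # smartctl -a -d scsi /dev/rdsk/c0t0d0s0   # works for SCSI; IDE/SATA support varies

The bug ID cited above is precisely about the IDE driver lacking the pass-through that smartctl needs.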
2006 Aug 31
3
Find the difference between two snapshots
Hi everyone, Is there an easy way to find out which files have changed between two snapshots? Currently I'm doing a # rsync -arvn <snapshot1> <snapshot2> and it creates a list, but rsync needs to go through the whole filesystem and compare files. It would be nice if zfs had this option built in. Regards, Nickus
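Spelled out against the hidden snapshot directory, the rsync approach looks like this; the dataset and snapshot names are illustrative:

    # rsync -arvn /tank/home/.zfs/snapshot/snap1/ \
                  /tank/home/.zfs/snapshot/snap2/

The -n flag keeps it a dry run, so the output is purely a change list; the cost is that rsync still has to stat and compare every file, which is exactly the overhead the poster wants ZFS to avoid.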
2009 Aug 04
7
Sol10u7: can''t "zpool remove" missing hot spare
I'm using Solaris 10u6 updated to u7 via patches, and I have a pool with a mirrored pair and a (shared) hot spare. We reconfigured disks a while ago and now the controller is c4 instead of c2. The hot spare was originally on c2, and apparently on rebooting it didn't get found. So, I looked up what the new name for the hot spare was, then added it to the pool with "zpool ...
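When a device path no longer resolves, the usual workaround is to address the spare by its vdev GUID instead of its name; a sketch, in which the pool name and GUID are made up, and where GUID acceptance by zpool remove on Sol10u7 is itself the open question:

    # zdb tank | grep -A 4 spares        # find the spare's guid in the config dump
    # zpool remove tank 8123456789012345678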
2010 Jan 28
16
Large scale ZFS deployments out there (>200 disks)
While thinking about ZFS as the next-generation filesystem without limits, I am wondering if the real world is ready for this kind of incredible technology ... I'm actually speaking of hardware :) ZFS can handle a lot of devices. Once the import bug (http://bugs.opensolaris.org/bugdatabase/view_bug.do?bug_id=6761786) is fixed it should be able to handle a lot of disks. I want to ...
2010 Jun 13
3
panic after zfs mount
Dear all, We ran into a nasty problem the other day. One of our mirrored zpools hosts several ZFS filesystems. After a reboot (all filesystems mounted and in use at that time) the machine panicked (console output further down). After detaching one of the mirrors, the pool fortunately imported automatically in a faulted state without mounting the filesystems. Offlining the unplugged device and clearing the fault ...
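The recovery steps described map to roughly this sequence; the pool and device names are hypothetical:

    # zpool status -x         # inspect the faulted state after the automatic import
    # zpool offline tank c1t2d0   # offline the unplugged half of the mirror
    # zpool clear tank            # clear the logged fault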
2009 Aug 23
3
zfs send/receive and compression
Is there a mechanism by which you can perform a zfs send | zfs receive and not have the data uncompressed and recompressed at the other end? I have a gzip-9 compressed filesystem that I want to back up to a remote system and would prefer not to have to recompress everything again at such great computational expense. If this doesn't exist, how would one go about creating an RFE for ...
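At the time the answer was no: zfs send streams the data logically, so the receiving side recompresses it. The usual workaround, with hypothetical pool, dataset, and host names:

    # zfs set compression=gzip-9 backup/fs   # on the receive side, before the stream lands
    # zfs send tank/fs@snap | ssh backuphost zfs receive backup/fs/copy

Compressed send streams (zfs send -c) only arrived in much later OpenZFS releases.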
2006 Jun 12
2
?: zfs mv within pool seems slow
I have just upgraded my jumpstart server to S10 u2 b9a. It is an Ultra 10 with two 120 GB EIDE drives. The second drive (disk1) is new, and has u2b9a installed on a slice, with most of the space in slice 7 for the ZFS pool. I created pool1 on disk1, and created the filesystem pool1/ro (for legacy reasons). I then moved my data from the original disk0 UFS file system to pool1/ro. Initially I ...
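The setup described corresponds to roughly the following; the slice number is as given, the device name is inferred:

    # zpool create pool1 c0t2d0s7
    # zfs create pool1/ro
    # zfs set mountpoint=legacy pool1/ro   # "legacy reasons": mounted via /etc/vfstab

Note that the initial mv from the disk0 UFS filesystem into pool1/ro is a cross-filesystem copy, not a rename, which matters when judging how slow it is.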
2009 Mar 04
5
Oracle database on zfs
Hi, I am wondering if there is a guideline on how to configure ZFS on a server with an Oracle database? We are experiencing some slowness on writes to the ZFS filesystem: it takes about 530 ms to write 2 KB of data. We are running Solaris 10 u5 127127-11 and the back-end storage is a RAID5 EMC EMX. This is a small database with about 18 GB of storage allocated. Are there tunable parameters that we can apply to ...
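The standard first step for Oracle datafiles on ZFS is matching recordsize to the database block size; a sketch, where the dataset name is hypothetical and 8K assumes the default db_block_size:

    # zfs set recordsize=8k pool/oradata   # match Oracle's db_block_size
    # zfs set atime=off pool/oradata       # skip access-time updates on datafiles

That said, 530 ms for a 2 KB write may point at the storage path rather than ZFS defaults alone, so the EMC array's write cache is worth checking as well.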