Displaying 20 results from an estimated 900 matches similar to: "Does iSCSI target support SCSI-3 PGR reservation ?"
2007 Jul 12
2
[AVS] Question concerning reverse synchronization of a zpool
Hi,
I've been struggling for several weeks now to get stable ZFS replication
using Solaris 10 11/06 (with current patches) and AVS 4.0. We tried it on
VMware first and ended up in kernel panics en masse (yes, we read Jim
Dunham's blog articles :-). Now we are trying on the real thing, two X4500
servers. Well, I have no trouble replicating our kernel panics there,
too ... but I think I
2010 May 04
8
iscsitgtd failed request to share on zpool import after upgrade from b104 to b134
Hi,
I am posting my question to both storage-discuss and zfs-discuss as I am not quite sure what is causing the messages I am receiving.
I have recently migrated my zfs volume from b104 to b134 and upgraded it from zfs version 14 to 22. It consists of two zvols, 'vol01/zvol01' and 'vol01/zvol02'.
During zpool import I am getting a non-zero exit code,
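The snippet is cut off here. For what it is worth, a rough sketch of how one might chase this on b134, assuming the zvols still carry the legacy shareiscsi property that the old iscsitgtd honoured (COMSTAR was replacing that daemon around this build); the dataset names come from the post, everything else is an assumption:

    # Check whether the zvols still carry the legacy property
    zfs get shareiscsi vol01/zvol01 vol01/zvol02

    # Option 1: clear the legacy property so import stops calling iscsitgtd
    zfs set shareiscsi=off vol01/zvol01
    zfs set shareiscsi=off vol01/zvol02

    # Option 2: re-export the zvols through COMSTAR instead
    svcadm enable -r svc:/system/stmf:default
    svcadm enable -r svc:/network/iscsi/target:default
    itadm create-target
    sbdadm create-lu /dev/zvol/rdsk/vol01/zvol01
    stmfadm add-view <GUID-from-sbdadm-output>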
2009 Jan 02
3
ZFS iSCSI (For VirtualBox target) and SMB
Hey all,
I'm setting up a ZFS based fileserver to use both as a shared network drive and, separately, to provide an iSCSI target to be used as the "hard disk" of a Windows based VM running on another machine.
I've built the machine, installed the OS, created the RAIDZ pool and now have a couple of questions (I'm pretty much new to Solaris, by the way, but have been
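A rough sketch of the two exports the post describes, assuming a 2008/2009-era release where the sharesmb and shareiscsi dataset properties are available; the pool name "tank" and the sizes are placeholders:

    # SMB share for the network drive (CIFS service must be installed/enabled)
    zfs create tank/share
    zfs set sharesmb=on tank/share

    # Sparse zvol to back the VM's "hard disk", exported over iSCSI
    zfs create -s -V 60G tank/vbox-disk
    zfs set shareiscsi=on tank/vbox-disk
    svcadm enable svc:/system/iscsitgt:default
    iscsitadm list target -v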
2008 Sep 16
3
iscsi target problems on snv_97
I've recently upgraded my x4500 to Nevada build 97, and am having problems with the iscsi target.
Background: this box is used to serve NFS underlying a VMware ESX environment (zfs filesystem-type datasets) and presents iSCSI targets (zfs zvol datasets) for a Windows host and to act as zoneroots for Solaris 10 hosts. For optimal random-read performance, I've configured a single
2009 Feb 11
8
Write caches on X4540
We're using some X4540s, with OpenSolaris 2008.11.
According to my testing, to optimize our systems for our specific
workload, I've determined that we get the best performance with the
write cache disabled on every disk, and with zfs:zfs_nocacheflush=1 set
in /etc/system.
The only issue is setting the write cache permanently, or at least quickly.
Right now, as it is,
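The post is cut off here. A minimal sketch of the two settings in question; the /etc/system line is standard, while the per-disk part goes through format(1M)'s expert-mode cache menu and the disk name below is a placeholder:

    # /etc/system -- persists across reboots, read at boot time
    set zfs:zfs_nocacheflush = 1

    # Per-disk write cache: expert mode, then cache -> write_cache -> disable
    # (interactive; has to be repeated, or scripted, for each of the 48 disks)
    format -e -d c0t0d0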
2012 Mar 25
0
[LLVMdev] upgrading Python on http://bb.pgr.jp bots
Good evening, Eli.
2012/3/25 Bendersky, Eli <eli.bendersky at intel.com>:
> It appears that some of the bots on http://bb.pgr.jp use a really old
> version of Python (2.4)
I know. It is extremely old. IIRC, RHEL5 (and its clones) does not
provide a Python newer than 2.4 by default.
EPEL (extra packages for enterprise linux, by fedora) provides 2.6.
CentOS5, my buildslave, and RHEL5 can
2006 Oct 10
3
Solaris 10 / ZFS file system major/minor number
Hi,
In migrating from **VM to ZFS, am I going to have an issue with major/minor numbers on NFS mounts? Take the following scenario.
1. NFS clients are connected to an active NFS server that has SAN shared storage between the active and standby nodes in a cluster.
2. The NFS clients are using the major/minor numbers on the active node in the cluster to communicate to the NFS active server.
3.
2008 Dec 26
19
separate home "partition"?
(I use the term loosely because I know that ZFS likes whole volumes better)
When installing Ubuntu, I got in the habit of using a separate partition for my home directory so that my data and GNOME settings would all remain intact when I reinstalled or upgraded.
I'm running OpenSolaris 2008.11 on an Ultra 20, which has only two drives. I've got all my data located in my home directory,
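The post is truncated here. A minimal sketch of the usual ZFS answer to the Ubuntu habit described above, assuming the second drive is free for a data pool; pool, dataset and user names are placeholders:

    # Put home on its own pool/dataset so a reinstall of rpool leaves it alone
    zpool create data c1t1d0
    zfs create -o mountpoint=/export/home data/home
    zfs create data/home/alice

    # Before a reinstall: snapshot (and optionally send) the home datasets
    zfs snapshot -r data/home@pre-reinstall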
2006 Nov 16
2
Porting ZFS, trouble with nvpair
Hi. I thought I'd take a stab at the first steps of porting ZFS to Darwin. I realize there are rumors that Apple is already doing this, but my contact at Apple has yet to get back to me to verify this. In the meantime, I wanted to see how hard it would be. I started with libzfs, and promptly ran into issues with libnvpair.
It wants sys/nvpair.h, but I can't find that in the
2010 Mar 02
2
dedup source code
Hello ZFS experts:
I would like to study ZFS de-duplication feature. Can someone please let me know which directory/files I should be looking at?
Thanks in advance.
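A hedged pointer, assuming the onnv-gate source layout of that era: the dedup table (DDT) code lives in the ZFS kernel directory, with the I/O-pipeline hooks in zio.c.

    # Inside a checkout of onnv-gate
    cd usr/src/uts/common/fs/zfs
    ls ddt.c ddt_zap.c sys/ddt.h     # core dedup table implementation
    grep -n ddt_ zio.c | head        # dedup stages in the I/O pipeline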
2008 May 14
2
vdev cache - comments in the source
Hello zfs-code,
http://cvs.opensolaris.org/source/xref/onnv/onnv-gate/usr/src/uts/common/fs/zfs/vdev_cache.c
 * All i/os smaller than zfs_vdev_cache_max will be turned into
 * 1<<zfs_vdev_cache_bshift byte reads by the vdev_cache (aka software
 * track buffer). At most zfs_vdev_cache_size bytes will be kept in each
 * vdev's vdev_cache.
While it
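The quote is cut off here. As a worked reading of those comments, hedged on the historical defaults (zfs_vdev_cache_max = 16 KB, zfs_vdev_cache_bshift = 16, zfs_vdev_cache_size = 10 MB): any read smaller than 16 KB is inflated to a 1<<16 = 64 KB read, and up to 10 MB of such data is kept per vdev. The live values can be checked on a running system:

    # Values noted below are the historical defaults, not guarantees
    echo zfs_vdev_cache_max/D    | mdb -k    # e.g. 16384
    echo zfs_vdev_cache_bshift/D | mdb -k    # e.g. 16 -> 1<<16 = 65536-byte reads
    echo zfs_vdev_cache_size/D   | mdb -k    # e.g. 10485760 bytes per vdev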
2010 Jun 11
9
Are recursive snapshot destroy and rename atomic too?
In another thread, recursive snapshot creation was found to be atomic, so that
it is done quickly and, more importantly, happens all at once or not at all.
Do you know whether recursive destroy and rename of snapshots are atomic too?
Regards
Henrik Heino
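For reference, the three recursive operations the question is about; whether destroy and rename share create's all-or-nothing behaviour is exactly what is being asked, so this only illustrates the commands, with a placeholder pool name:

    zfs snapshot -r tank@backup-20100611     # recursive create (atomic per the earlier thread)
    zfs rename -r tank@backup-20100611 tank@backup-old
    zfs destroy -r tank@backup-old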
2007 Apr 28
4
What tags are supported on a zvol?
I assume that a zvol has a vtoc. What tags are supported?
Thanks,
Brian
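A small hedged illustration of how one might check: a fresh zvol has no label until something writes one, and prtvtoc only reports whatever label exists on the raw zvol device. Pool and volume names are placeholders:

    zfs create -V 10G tank/vol1
    prtvtoc /dev/zvol/rdsk/tank/vol1    # fails until a VTOC/EFI label has been written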
2009 Mar 11
6
Export ZFS via ISCSI to Linux - Is it stable for production use now?
Hello,
I want to set up an OpenSolaris box as a centralized storage server, using
ZFS as the underlying FS on RAID-10 SATA disks.
I will export the storage blocks over iSCSI to RHEL 5 (fewer than 10
clients) and format the partitions as ext3.
I want to ask...
1. Is this setup suitable for mission critical use now?
2. Can I use LVM with this setup?
Currently we are using NFS as the
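The post is truncated here. A rough end-to-end sketch of the setup being asked about, assuming the legacy shareiscsi property on the OpenSolaris side and open-iscsi plus LVM on RHEL 5; addresses, names and sizes are placeholders:

    # OpenSolaris side: carve a zvol out of the mirrored pool and export it
    zfs create -V 200G tank/rhel-lun
    zfs set shareiscsi=on tank/rhel-lun

    # RHEL 5 side: discover, log in, then layer LVM and ext3 on the new disk
    iscsiadm -m discovery -t sendtargets -p 192.168.1.10
    iscsiadm -m node -T <target-iqn> -p 192.168.1.10 --login
    pvcreate /dev/sdb
    vgcreate vg_iscsi /dev/sdb
    lvcreate -L 190G -n lv_data vg_iscsi
    mkfs.ext3 /dev/vg_iscsi/lv_data

This also touches question 2: on the initiator side the iSCSI LUN simply appears as another SCSI disk, so LVM can sit on top of it as usual.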
2009 Feb 18
4
tracing aio syscalls
Hi all,
Is there any documentation, or an example, of how to interpret arg0
.. arg<n> for the aioread, aiowrite and aiowait syscalls? The system call
name for all three seems to be "kaio".
Michael
=== Michael Mueller ==================
Tel. + 49 8171 63600
Fax. + 49 8171 63615
Web: http://www.michael-mueller-it.de
======================================
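A hedged DTrace sketch for the question above: kaio() multiplexes the async-I/O operations, and arg0 carries the opcode (the AIOREAD/AIOWRITE/AIOWAIT values are defined in sys/aio.h), so keying on arg0 at least separates the sub-operations; the pid below is a placeholder:

    # Count kaio() calls per process and opcode
    dtrace -n 'syscall::kaio:entry { @[execname, arg0] = count(); }'

    # Trace the opcode for one process (the remaining args depend on the opcode)
    dtrace -p 12345 -n 'syscall::kaio:entry { printf("opcode %d", arg0); }'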
2007 Jun 19
0
Re: [storage-discuss] Performance expectations of iscsi targets?
Paul,
> While testing iscsi targets exported from thumpers via 10GbE and
> imported over 10GbE on T2000s, I am not seeing the throughput I expect,
> and more importantly there is a tremendous amount of read IO
> happening on a purely sequential write workload. (Note all systems
> have Sun 10GbE cards and are running Nevada b65.)
The read IO activity you are seeing is a direct
2009 Mar 08
2
[Bug 866] ssh(1) is too picky about unknown options in ~/.ssh/config
https://bugzilla.mindrot.org/show_bug.cgi?id=866
--- Comment #17 from Olav Morken <olavmrk at gmail.com> 2009-03-09 06:21:16 ---
Created an attachment (id=1610)
--> (http://bugzilla.mindrot.org/attachment.cgi?id=1610)
Patch which allows OpenSSH to ignore unknown options.
This is a patch which implements alternative 1 from Josh Triplett. This
patch makes ssh ignore all unknown
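The bug-report snippet is cut off here. For context, later OpenSSH releases grew an IgnoreUnknown client option that covers much of what this patch proposed; a hedged example of how it is used on a 6.3-or-newer client (the option names being ignored are just illustrations):

    # ~/.ssh/config
    # Listed early so that later, possibly unknown, directives are skipped
    IgnoreUnknown ControlPersist,ProxyUseFdpass
    Host *
        ControlPersist yes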
2008 Feb 20
12
no luck with Xen....
Perhaps someone has ideas on this topic. A recent attempt to play with Xen was a rather unlucky event; all I will be able to demonstrate on that system is a PV ONNV domU, which
likely will not be very attractive to the audience :(
HW: U40M2, 2 x dual-core AMD revF procs, 8GB MEM, Phoenix BIOS 1.5 (latest)
1 x 200GB internal SATA drive
SW: dom0 ONNV build 82, latest VirtManager from
2010 Aug 15
2
Is the error threshold for a degraded device configurable?
I look after an x4500 for a client and we keep getting drives marked as
degraded with just over 20 checksum errors.
Most of these errors appear to be driver or hardware related and their
frequency increases during a resilver, which can lead to a death
spiral. The increase in errors within a vdev during a resilver (I
recently had three drives in an 8 drive raidz vdev "degraded")
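The post is cut off here. Before tuning any threshold, it can help to see which error reports are actually driving the DEGRADED diagnosis; a hedged diagnostic sketch with placeholder pool/device names:

    zpool status -v tank                             # per-vdev checksum counters
    fmdump -eV | grep -c ereport.fs.zfs.checksum     # raw checksum ereports FMA has seen
    fmstat -m zfs-diagnosis                          # activity of the ZFS diagnosis engine
    fmadm faulty                                     # faults actually diagnosed
    zpool clear tank c5t3d0                          # reset counters once hardware is ruled out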
2012 Jan 03
10
arc_no_grow is set to 1 and never set back to 0
Hello.
I have a Solaris 11/11 x86 box (which I migrated from SolEx 11/10 a couple of weeks ago).
For no obvious reason (at least to me), after an uptime of 1 to 2 days (observed 3 times now), Solaris sets arc_no_grow to 1 and then never sets it back to 0. The ARC shrinks to less than 1 GB -- needless to say, performance is terrible. There is not much load on this system.
Memory
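The post is truncated here. A hedged sketch of how one might watch the situation as it develops; the kstat and mdb invocations are standard, and the variable name follows the post:

    # ARC size and targets, in bytes
    kstat -n arcstats | egrep 'size|c_max|c_min'

    # The flag mentioned in the post (requires root)
    echo arc_no_grow/D | mdb -k

    # Memory pressure that typically drives arc_no_grow
    echo ::memstat | mdb -k
    vmstat 5 5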