search for: nedharvey

Displaying 20 results from an estimated 48 matches for "nedharvey".

2011 Jul 15
22
Zil on multiple usb keys
This might be a stupid question, but here goes... Would adding, say, four 4GB or 8GB USB keys as a ZIL make enough of a difference for writes on an iSCSI shared volume? I am finding reads are not too bad (40ish MB/s over GigE on 2 500GB drives striped) but writes top out at about 10 and drop a lot lower... If I were to add a couple of USB keys for the ZIL, would it make a difference? Thanks. Sent from a
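
For reference, a dedicated log device can be added to, and on recent pool versions removed from, an existing pool. A minimal sketch, assuming a hypothetical pool named tank and placeholder device names for the USB keys:

    # Add the USB keys as a mirrored log (ZIL) device; names are placeholders
    zpool add tank log mirror c8t0d0 c9t0d0

    # If it does not help, remove the log vdev again (use the vdev name shown by
    # 'zpool status'; log removal needs pool version 19 or later)
    zpool remove tank mirror-1
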
2010 Jul 20
16
zfs raidz1 and traditional raid 5 performance comparison
Hi, for zfs raidz1, I know that for random I/O the IOPS of a raidz1 vdev equal the IOPS of one physical disk. Since raidz1 is like RAID 5, does RAID 5 have the same performance as raidz1, i.e. random IOPS equal to one physical disk's IOPS? Regards Victor -- This message posted from opensolaris.org
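
A rough worked comparison, using assumed per-disk numbers (mine, not the poster's):

    # Assumptions: ~100 random IOPS per 7,200 rpm disk, 6-disk group
    echo $(( 100 * 1 ))      # raidz1 vdev: ~100 random IOPS, every disk touches every block
    echo $(( 100 * 6 ))      # RAID 5 random reads: can approach ~600 IOPS across 6 disks
    echo $(( 100 * 6 / 4 ))  # RAID 5 small random writes: ~150 IOPS after the 4-I/O parity penalty
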
2009 Aug 23
23
incremental backup with zfs to file
FULL backup to a file zfs snapshot -r rpool@0908 zfs send -Rv rpool@0908 > /net/remote/rpool/snaps/rpool.0908 INCREMENTAL backup to a file zfs snapshot -i rpool@0908 rpool@090822 zfs send -Rv rpool@090822 > /net/remote/rpool/snaps/rpool.090822 As I understand the latter gives a file with changes between 0908 and 090822. Is this correct? How do I restore those files? I know
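
A hedged sketch of the usual pattern, following the paths in the post: the incremental stream comes from zfs send -i (not from zfs snapshot), and restoring means receiving the full stream first and the incremental second:

    # Take the new snapshot, then send the delta between the two snapshots
    zfs snapshot -r rpool@090822
    zfs send -Rv -i rpool@0908 rpool@090822 > /net/remote/rpool/snaps/rpool.090822

    # Restore into a target pool: full stream first, then the incremental
    # (-u avoids mounting the received file systems over the running system)
    zfs receive -Fdu rpool < /net/remote/rpool/snaps/rpool.0908
    zfs receive -Fdu rpool < /net/remote/rpool/snaps/rpool.090822
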
2011 Jul 22
4
add device to mirror rpool in sol11exp
In my new oracle server, sol11exp, it's using multipath device names... Presently I have two disks attached: (I removed the other 10 disks for now, because these device names are so confusing. This way I can focus on *just* the OS disks.) 0. c0t5000C5003424396Bd0 <SEAGATE-ST32000SSSUN2.0-0514 cyl 3260 alt 2 hd 255 sec 252> /scsi_vhci/disk@g5000c5003424396b
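
For context, the usual recipe is zpool attach plus installing boot blocks on the second disk; a hedged sketch, where the second disk's name is a made-up example in the same multipath style:

    # Attach the new disk's slice 0 as a mirror of the existing rpool device
    zpool attach rpool c0t5000C5003424396Bd0s0 c0t5000C50034243AAAd0s0

    # After the resilver finishes, make the second disk bootable (x86, GRUB-based release)
    installgrub /boot/grub/stage1 /boot/grub/stage2 /dev/rdsk/c0t5000C50034243AAAd0s0
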
2010 Mar 24
21
ZFS on a 11TB HW RAID-5 controller
Hello all, I am a complete newbie to OpenSolaris, and must set up a ZFS NAS. I do have Linux experience, but have never used ZFS. I have tried to install OpenSolaris Developer 134 on an 11TB HW RAID-5 virtual disk, but after the installation I can only use one 2TB disk, and I cannot partition the rest. I realize that the maximum partition size is 2TB, but I guess the rest must be usable. For
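
One hedged pointer: the 2TB ceiling applies to SMI/MBR-labeled disks (and to the root pool); handing the whole virtual disk to zpool, so it gets an EFI label, normally exposes the full capacity for a data pool. The device name below is an example:

    # Create a data pool on the whole 11TB virtual disk (an EFI label is applied automatically)
    zpool create tank c0t1d0
    zpool list tank      # should show the full capacity
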
2011 Jan 29
27
ZFS and TRIM
My google-fu is coming up short on this one... I didn't see that it had been discussed in a while ... What is the status of ZFS support for TRIM? For the pool in general... and... Specifically for the slog and/or cache???
2010 Nov 21
10
Running on Dell hardware?
> From: Edward Ned Harvey [mailto:shill at nedharvey.com] > > I have a Dell R710 which has been flaky for some time. It crashes about once > per week. I have literally replaced every piece of hardware in it, and > reinstalled Sol 10u9 fresh and clean. It has been over 3 weeks now, with no crashes, and me doing everything I can to get...
2011 Jan 05
6
ZFS on top of ZFS iSCSI share
I have a filer running OpenSolaris (snv_111b) and I am presenting an iSCSI share from a RAIDZ pool. I want to run ZFS on the share at the client. Is it necessary to create a mirror or use ditto blocks at the client to ensure ZFS can recover if it detects a failure at the client? Thanks, Bruin
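
A minimal sketch of the ditto-block option on the client, with hypothetical names; copies=2 lets ZFS repair blocks that fail checksum on the single iSCSI vdev, though it cannot protect against losing the LUN itself:

    # Client side: pool on the single iSCSI LUN (device name is an example)
    zpool create clientpool c2t0d0

    # Keep two copies of every data block for self-healing reads
    zfs set copies=2 clientpool
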
2011 Aug 10
9
zfs destroy snapshot takes hours.
Hi, I am facing an issue with zfs destroy: it takes almost 3 hours to delete a snapshot of size 150G. Could you please help me resolve this issue and explain why zfs destroy takes this much time? Taking a snapshot completes within a few seconds. I have tried removing an older snapshot instead, but the problem is the same. =========================== I am using : Release : OpenSolaris
2013 Feb 15
28
zfs-discuss mailing list & opensolaris EOL
So, I hear, in a couple weeks' time, opensolaris.org is shutting down. What does that mean for this mailing list? Should we all be moving over to something at illumos or something? I'm going to encourage somebody in an official capacity at opensolaris to respond... I'm going to discourage unofficial responses, like, illumos enthusiasts etc simply trying to get people
2010 Dec 23
31
SAS/short stroking vs. SSDs for ZIL
Hi, as I have learned from the discussion about which SSD to use as ZIL drives, I stumbled across this article, that discusses short stroking for increasing IOPS on SAS and SATA drives: http://www.tomshardware.com/reviews/short-stroking-hdd,2157.html Now, I am wondering if using a mirror of such 15k SAS drives would be a good-enough fit for a ZIL on a zpool that is mainly used for file
2010 Jan 28
13
ZFS configuration suggestion with 24 drives
Replacing my current media server with another larger capacity media server. Also switching over to solaris/zfs. Anyhow we have 24 drive capacity. These are for large sequential access (large media files) used by no more than 3 or 5 users at a time. I'm inquiring as to what the best configuration for this is for vdevs. I'm considering the following configurations 4 x x6
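
For illustration only, one layout often suggested for 24 drives and mostly sequential access is four 6-disk raidz2 vdevs; device names below are placeholders:

    zpool create media \
        raidz2 c1t0d0 c1t1d0 c1t2d0 c1t3d0 c1t4d0 c1t5d0 \
        raidz2 c2t0d0 c2t1d0 c2t2d0 c2t3d0 c2t4d0 c2t5d0 \
        raidz2 c3t0d0 c3t1d0 c3t2d0 c3t3d0 c3t4d0 c3t5d0 \
        raidz2 c4t0d0 c4t1d0 c4t2d0 c4t3d0 c4t4d0 c4t5d0
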
2012 Sep 28
2
iscsi confusion
I am confused, because I would have expected a 1-to-1 mapping: if you create an iSCSI target on some system, you would have to specify which LUN it connects to. But that is not the case... I read the man pages for sbdadm, stmfadm, itadm, and iscsiadm. I read some online examples, where you first "sbdadm create-lu", which gives you a GUID for a specific device in the system, and then
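
For orientation, the COMSTAR pieces fit together roughly like this: a backing store becomes a logical unit, a view exposes that LU (and assigns the LUN number), and iSCSI targets are created independently of any particular LU. A hedged sketch with hypothetical names:

    # Back an LU with a zvol
    zfs create -V 100g tank/lun0
    sbdadm create-lu /dev/zvol/rdsk/tank/lun0     # prints the new LU's GUID

    # Expose the LU (to all hosts and targets unless host/target groups are given)
    stmfadm add-view 600144f0XXXXXXXXXXXXXXXXXXXXXXXX   # the GUID printed above

    # Create an iSCSI target; initiators logging into it will see the viewed LUs
    itadm create-target
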
2012 Nov 20
6
zvol wrapped in a vmdk by Virtual Box and double writes?
Hi folks, (Long time no post...) Only starting to get into this one, so apologies if I'm light on detail, but... I have a shiny SSD I'm using to help make some VirtualBox stuff I'm doing go fast. I have a 240GB Intel 520 series jobbie. Nice. I chopped it into a few slices - p0 (partition table), p1 128GB, p2 60GB. As part of my work, I have used it both as a RAW
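
For reference (this is my guess at the mechanism described, not the poster's exact commands), the usual way to hand a raw partition to VirtualBox is a raw-disk vmdk wrapper; paths below are examples:

    # Wrap partition p2 of the SSD in a vmdk descriptor file
    VBoxManage internalcommands createrawvmdk \
        -filename /vbox/ssd-p2.vmdk \
        -rawdisk /dev/rdsk/c1t0d0p2
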
2010 Oct 17
10
RaidzN blocksize ... or blocksize in general ... and resilver
The default blocksize is 128K. If you are using mirrors, then each block on disk will be 128K whenever possible. But if you're using raidzN with a capacity of M disks (M disks useful capacity + N disks redundancy) then the block size on each individual disk will be 128K / M. Right? This is one of the reasons the raidzN resilver code is inefficient. Since you end up waiting for the
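
A quick worked example of that division, with assumed disk counts:

    # 128K record on a 6-disk raidz1 (M = 5 data disks + 1 parity)
    echo $(( 131072 / 5 ))    # 26214 bytes, i.e. ~25.6K of data per disk, plus parity
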
2010 Aug 07
13
PowerEdge R510 with PERC H200/H700 with ZFS
Anyone have any experience with an R510 with the PERC H200/H700 controller with ZFS? My perception is that Dell doesn't play well with OpenSolaris. Thanks, Geoff
2009 Dec 04
30
ZFS send | verify | receive
If there were a "zfs send" datastream saved someplace, is there a way to verify the integrity of that datastream without doing a "zfs receive" and occupying all that disk space? I am aware that "zfs send" is not a backup solution, due to vulnerability of even a single bit error, and lack of granularity, and other reasons. However ... There is an attraction to "zfs send" as an augmentation to the
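
One hedged option, if the build includes it: zstreamdump reads a saved send stream and verifies the per-record checksums as it goes, without creating any datasets, so it can at least detect corruption of the stored file. The path below is a hypothetical save location:

    # Summarize and checksum-verify the stream without receiving it
    zstreamdump -v < /backup/rpool.0908 | tail
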
2010 Jun 17
9
Monitoring filessytem access
When somebody is hammering on the system, I want to be able to detect who's doing it, and hopefully even what they're doing. I can't seem to find any way to do that. Any suggestions? Everything I can find ... iostat, nfsstat, etc ... AFAIK, just show me performance statistics and so forth. I'm looking for something more granular. Either *who* the
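
One hedged starting point is DTrace, which can attribute activity to users and processes rather than just counting it; a simple syscall-level example (not NFS-specific):

    # Count read/write system calls by uid and process name until Ctrl-C
    dtrace -n 'syscall::read:entry,syscall::write:entry { @[uid, execname] = count(); }'
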
2010 May 16
9
can you recover a pool if you lose the zil (b134+)
I was messing around with a ramdisk on a pool and I forgot to remove it before I shut down the server. Now I am not able to mount the pool. I am not concerned with the data in this pool, but I would like to try to figure out how to recover it. I am running Nexenta 3.0 NCP (b134+). I have tried a couple of the commands (zpool import -f and zpool import -FX llift) root@
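
For what it's worth, later builds added a zpool import -m option specifically for pools whose separate log device is missing; whether the b134-based Nexenta kernel has it is uncertain, so this is only a hedged suggestion:

    # Import despite the missing log device, if -m is supported on this build
    zpool import -m llift
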
2010 Dec 18
10
a single nfs file system shared out twice with different permissions
I am trying to configure a system where I have two different NFS shares which point to the same directory. The idea is if you come in via one path, you will have read-only access and can't delete any files; if you come in via the 2nd path, then you will have read/write access. For example, create the read/write nfs share: zfs create tank/snapshots zfs set sharenfs=on tank/snapshots root
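
Not the two-path setup being asked about, but one hedged alternative is per-client options on a single share: sharenfs passes share_nfs option strings straight through, so read-only and read-write access can be split by client list (the network below is an example):

    # Read-only for everyone, read-write only for the listed network
    zfs set sharenfs='ro,rw=@192.168.1.0/24' tank/snapshots
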