similar to: SVM - UFS Upgrade

Displaying 20 results from an estimated 11000 matches similar to: "SVM - UFS Upgrade"

2007 Dec 03
2
Help replacing dual identity disk in ZFS raidz and SVM mirror
Hi, We have a number of 4200s set up using a combination of an SVM 4-way mirror and a ZFS raidz stripe. Each disk (of 4) is divided up like this: s0 / 6GB UFS; s1 swap 8GB; s3 /var 6GB UFS; s4 metadb 50MB UFS; s5 /data 48GB ZFS. For SVM we do a 4-way mirror on /, swap, and /var, so we have 3 SVM mirrors: d0=root (submirrors d10, d20, d30, d40), d1=swap (submirrors d11, d21, d31, d41)
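For reference, replacing one of these dual-identity disks would follow roughly this sequence. This is only a sketch; the pool name datapool is made up, and the disk c0t3d0 with submirrors d40/d41 is assumed from the naming scheme above.

# metadb -d c0t3d0s4                      (drop the state database replica on the failing disk)
# metadetach -f d0 d40                    (detach its submirror from each SVM mirror)
# metadetach -f d1 d41
# zpool offline datapool c0t3d0s5         (take the raidz member offline)
(physically replace the disk, then copy the label from a healthy disk)
# prtvtoc /dev/rdsk/c0t0d0s2 | fmthard -s - /dev/rdsk/c0t3d0s2
# metadb -a c0t3d0s4                      (recreate the replica)
# metattach d0 d40                        (resync the SVM submirrors)
# metattach d1 d41
# zpool replace datapool c0t3d0s5         (resilver the raidz member)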
2007 Jul 13
1
do we support zonepath on a UFS-formatted ZFS volume
Hi, ZFS experts, From the ZFS release notes: "Solaris 10 6/06 and Solaris 10 11/06: Do Not Place the Root File System of a Non-Global Zone on ZFS. The zonepath of a non-global zone should not reside on ZFS for this release. This action might result in patching problems and possibly prevent the system from being upgraded to a later Solaris 10 update release." So my question is, do we
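The setup being asked about (a UFS file system layered on a zvol, used as a zonepath) would look roughly like this. A sketch only; the pool name tank, volume name zonevol, and zone name myzone are hypothetical, and the release notes quoted above warn against ZFS-backed zonepaths on these releases.

# zfs create -V 8g tank/zonevol
# newfs /dev/zvol/rdsk/tank/zonevol
# mount /dev/zvol/dsk/tank/zonevol /zones/myzone
# zonecfg -z myzone
zonecfg:myzone> set zonepath=/zones/myzone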
2007 Apr 28
4
What tags are supported on a zvol?
I assume that a zvol has a vtoc. What tags are supported? Thanks, Brian
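One quick way to check is to create a small test zvol and inspect its label directly (a sketch, assuming a pool named tank):

# zfs create -V 1g tank/testvol
# prtvtoc /dev/zvol/rdsk/tank/testvol     (prints the label, including partition tags, if one is present)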
2007 Jun 17
18
6 disk raidz2 or 3 stripe 2 way mirror
I'm playing around with ZFS and want to figure out the best use of my 6x 300GB SATA drives. The purpose of the drives is to store all of my data at home (video, photos, music, etc). I'm debating between: 6x 300GB disks in a single raidz2 pool, or 2x (3x 300GB disks in a pool) mirrored. I've read up a lot on ZFS, but I can't really figure out which is
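For comparison, the two candidate layouts would be created roughly like this (a sketch; the pool name tank and disks c0t0d0 through c0t5d0 are hypothetical). The raidz2 pool survives any two disk failures; the striped mirrors generally give better random I/O but can lose data if both halves of one mirror fail.

# zpool create tank raidz2 c0t0d0 c0t1d0 c0t2d0 c0t3d0 c0t4d0 c0t5d0
(or, as a stripe of three 2-way mirrors:)
# zpool create tank mirror c0t0d0 c0t1d0 mirror c0t2d0 c0t3d0 mirror c0t4d0 c0t5d0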
2007 Apr 30
4
need some explanation
Hi, OS: Solaris 10 11/06. zpool list doesn't reflect pool usage stats instantly. Why?

# ls -l
total 209769330
-rw------T   1 root     root     107374182400 Apr 30 14:28 deleteme
# zpool list
NAME    SIZE    USED    AVAIL   CAP   HEALTH   ALTROOT
wo      136G    100G    36.0G   73%   ONLINE   -
# rm deleteme
# zpool list
NAME    SIZE
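The lag is most likely because ZFS frees blocks asynchronously: the space reappears once the pending transaction group commits, not at the moment rm returns. A way to observe this (a sketch):

# rm deleteme
# zpool list                              (immediately afterwards, space may still show as used)
# sleep 10; zpool list                    (the freed space appears once the transaction group commits)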
2007 Jan 10
2
using veritas dmp with ZFS (but not vxvm)
We have some HDS storage that isn't supported by mpxio, so we have to use Veritas DMP to get multipathing. What's the recommended way to use DMP storage with ZFS? I want to use DMP but get at the multipathed virtual LUNs at as low a level as possible, to avoid using VxVM as much as possible. I figure there's no point in having overhead from two volume managers if we can avoid it. Has anyone
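If DMP exposes its multipathed metanodes as ordinary block devices (typically under /dev/vx/dmp), ZFS should be able to consume them directly by full path, without configuring any VxVM volumes on top. A sketch only; the device name is hypothetical and the path layout is an assumption:

# zpool create tank /dev/vx/dmp/hds0_0
# zpool status tank                       (confirm the pool sees the DMP node, not an individual path)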
2007 Jan 26
10
UFS on zvol: volblocksize and maxcontig
Hi all! First off, if this has been discussed, please point me in that direction. I have searched high and low and really can't find much info on the subject. We have a large-ish (200GB) UFS file system on a Sun Enterprise 250 that is being shared with Samba (lots of files, mostly random IO). OS is Solaris 10u3. Disk set is 7x 36GB 10k SCSI, 4 internal, 3 external. For several
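If the UFS file system were rebuilt on a zvol, the two knobs in the subject line would be set roughly like this (a sketch; the pool name, volume name, and values are illustrative):

# zfs create -V 200g -o volblocksize=8k tank/smbvol   (volblocksize must be set at creation time)
# newfs /dev/zvol/rdsk/tank/smbvol
# tunefs -a 16 /dev/zvol/rdsk/tank/smbvol             (maxcontig can be tuned after the fact with tunefs)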
2008 Mar 12
3
Mixing RAIDZ and RAIDZ2 zvols in the same zpool
I have a customer who has implemented the following layout. As you can see, he has mostly raidz vdevs but has one raidz2 in the same zpool. What are the implications here? Is this a bad thing to do? Please elaborate. Thanks, Scott Gaspard Scott.J.Gaspard at Sun.COM

> NAME          STATE   READ WRITE CKSUM
> chipool1      ONLINE     0     0     0
>
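For what it's worth, zpool itself flags this situation at configuration time: adding a vdev whose parity level differs from the pool's existing vdevs produces a mismatched-replication-level error that must be overridden with -f. A sketch with hypothetical disk names:

# zpool add chipool1 raidz2 c3t0d0 c3t1d0 c3t2d0 c3t3d0
(zpool refuses with a mismatched replication level error unless -f is given)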
2009 Oct 09
22
Does ZFS work with SAN-attached devices?
Hi All, It's been a while since I touched ZFS. Is the below still the case with ZFS and a hardware RAID array? Do we still need to provide two LUNs from the hardware RAID and then have ZFS mirror those two LUNs? http://www.opensolaris.org/os/community/zfs/faq/#hardwareraid Thanks, Shawn
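The FAQ's point is that ZFS can only self-heal when it manages redundancy itself, so the usual recommendation is still to give it two LUNs to mirror (a sketch; the LUN device names are hypothetical):

# zpool create tank mirror c4t0d0 c5t0d0
# zpool status tank                       (checksum errors on one LUN can now be repaired from the other)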
2007 Jun 13
5
drive displayed multiple times
So I just imported an old zpool onto this new system. The problem is that one drive (c4d0) is showing up twice. First it's displayed as ONLINE, then it's displayed as UNAVAIL. This is obviously causing a problem, as the zpool now thinks it's in a degraded state, even though all drives are there and all are online. This pool should have 7 drives total,
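A common first step for a pool showing a stale duplicate device is to export it and re-import it so the device links are rescanned (a sketch, assuming a pool named tank):

# zpool export tank
# zpool import -d /dev/dsk tank           (forces a rescan of the device links during import)
# zpool status tank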
2006 Dec 22
6
Re: Difference between ZFS and UFS with one LUN from a SAN
This may not be the answer you're looking for, but I don't know if it's something you've thought of. If you're pulling a LUN from an expensive array, with multiple HBAs in the system, why not run mpxio? If you ARE running mpxio, there shouldn't be an issue with a path dropping. I have the setup above in my test lab and pull cables
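On Solaris 10, mpxio for supported HBAs can be switched on system-wide with stmsboot. A sketch only; note this involves a reboot and renames the multipathed device paths:

# stmsboot -e                             (enable STMS/mpxio; stmsboot offers to update vfstab and the dump config)
# stmsboot -L                             (after the reboot, list the old-to-new device name mapping)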
2007 Jun 14
44
Best use of 4 drives?
I'm putting together a NexentaOS (b65)-based server that has 4 500GB drives on it. Currently it has two, set up as a ZFS mirror. I'm able to boot Nexenta from it, and it seems to work OK. But, as I've learned, the mirror is not properly redundant, and so I can't just have a drive fail (when I pull one, the OS ends up hanging, and even if I replace it, I have to
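With the two remaining drives, one option is to extend the existing mirror pool into a stripe of two mirrors (a sketch; the pool name tank and device names are hypothetical):

# zpool add tank mirror c0t2d0 c0t3d0     (adds a second mirror vdev; data is striped across both mirrors)
# zpool status tank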
2006 Dec 21
12
Difference between ZFS and UFS with one LUN from a SAN
All, I understand that ZFS gives you more error correction when using two LUNs from a SAN. But does it provide you with fewer features than UFS does on one LUN from a SAN (i.e., is it less stable)? Thanks, Shawn
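Even on a single LUN, ZFS still checksums every block, so it can detect (though not necessarily repair) corruption that UFS would silently pass through. On releases that support the copies property, some self-repair is possible even without a second LUN. A sketch with hypothetical names:

# zpool create tank c2t0d0
# zfs set copies=2 tank                   (stores two copies of each block, at the cost of capacity)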
2006 May 19
3
Oracle on ZFS vs. UFS
Hi, I'm preparing a personal TPC-H benchmark. The goal is not to measure or optimize the database performance, but to compare ZFS to UFS in similar configurations. At the moment I'm preparing the tests at home. The test setup is as follows:
. Solaris snv_37
. 2x AMD Opteron 252
. 4GB RAM
. 2x 80GB ST380817AS
. Oracle 10gR2 (small SGA (320m))
The disks also contain the OS
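For the Oracle side of such a comparison, the commonly cited ZFS settings are matching recordsize to the database block size and disabling atime. A sketch, assuming an 8K Oracle block size and a pool named tank:

# zfs create tank/oradata
# zfs set recordsize=8k tank/oradata      (match the Oracle db_block_size)
# zfs set atime=off tank/oradata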
2007 Jun 20
14
Z-Raid performance with Random reads/writes
Given a 1.6TB ZFS Z-Raid consisting of 6 disks, and a system that does an extreme amount of small (<20K) random reads (more than twice as many reads as writes): 1) What performance gains, if any, does Z-Raid offer over other RAID or large-filesystem configurations? 2) What hindrance, if any, is Z-Raid to this configuration, given the complete randomness and size of these accesses? Would
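One way to ground the answer in measurement rather than theory is to watch per-vdev operations while the workload runs (a sketch, assuming a pool named tank):

# zpool iostat -v tank 5                  (per-vdev read/write operations and bandwidth every 5 seconds)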
2007 Apr 27
2
Scrubbing a zpool built on LUNs
I'm building a system with two Apple RAIDs attached. I have hardware RAID5 configured, so no RAIDZ or RAIDZ2, just a basic zpool pointing at the four LUNs representing the four RAID controllers. For ongoing maintenance, will a zpool scrub be of any benefit? From what I've read, with this layer of abstraction ZFS is only maintaining the metadata and not the actual data on the
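A scrub is cheap to try, and it at least verifies every checksummed block against what the array returns, even if ZFS has no redundancy of its own to repair from (a sketch, assuming a pool named tank):

# zpool scrub tank
# zpool status -v tank                    (shows scrub progress and any checksum errors found)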
2010 Mar 03
6
Question about multiple RAIDZ vdevs using slices on the same disk
Hi all :) I've been wanting to make the switch from XFS over RAID5 to ZFS/RAIDZ2 for some time now, ever since I read about ZFS the first time. Absolutely amazing beast! I've built my own little hobby server at home and have a boatload of disks in different sizes that I've been using together to build a RAID5 array on Linux using mdadm in two layers; the first layer is
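Slices are legal vdev members, so a layout like the one described can be built explicitly, with the caveat that vdevs sharing a spindle do not fail independently. A sketch with hypothetical disk and slice names:

# zpool create tank raidz c0t0d0s0 c0t1d0s0 c0t2d0s0
# zpool add tank raidz c0t0d0s1 c0t1d0s1 c0t2d0s1     (second raidz vdev built from slices on the same disks)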
2006 Mar 23
17
Poor performance on NFS-exported ZFS volumes
I'm seeing some pretty pitiful performance using ZFS on an NFS server, with a ZFS volume exported (only with rw=host.foo.com,root=host.foo.com opts) and mounted on a Linux host running kernel 2.4.31. The Linux kernel I'm working with is limited in that I can only do NFSv2 mounts... regardless of that aspect, I'm sure something's amiss. I mounted the zfs-based
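For reference, export options like those mentioned are typically set through the sharenfs property rather than share(1M) directly (a sketch, assuming a dataset named tank/export):

# zfs set sharenfs='rw=host.foo.com,root=host.foo.com' tank/export
# zfs get sharenfs tank/export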
2006 Aug 11
3
Proposal expand raidz
Greetings. I have used ZFS raidz for a while, and a question arose: is it possible to expand a raidz with additional disks? The answer I got was: the pool, yes, but the raidz "group", no. So here is a very high-level idea for you, which may already be known. I'm not a detail-level expert on ZFS, so there might be "trivial" things here for you. The add operation could be enhanced so that it allows adding additional
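What does work today is growing the pool by striping over an additional raidz group; the existing raidz vdev itself cannot take new disks. A sketch with hypothetical device names:

# zpool add tank raidz c5t0d0 c5t1d0 c5t2d0   (new raidz vdev; the pool stripes across old and new groups)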
2006 Sep 13
10
Snapshots and backing store
Hi, There's something really bizarre in the ZFS snapshot specs: "Uses no separate backing store." Hmm... if I want to dedicate one physical volume somewhere in my SAN as THE snapshot backing store... it becomes impossible to do! Really bad. Is there any chance of having a "backing-store-file" option in a future release? In the same vein, it would be great to
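The "no separate backing store" wording means a snapshot simply keeps referencing the pool's existing blocks; it consumes space from the same pool only as the live data diverges. A sketch, assuming a dataset named tank/fs:

# zfs snapshot tank/fs@before
# zfs list -t snapshot                    (the USED column grows as the live filesystem diverges from the snapshot)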