similar to: Proposal: multiple copies of user data

Displaying 20 results from an estimated 100000 matches similar to: "Proposal: multiple copies of user data"

2006 Oct 16
11
Configuring a 3510 for ZFS
Hi folks, a colleague and I are currently involved in a prototyping exercise to evaluate ZFS against our current filesystem. We are looking at the best way to arrange the disks in a 3510 storage array. We have been testing with the 12 disks on the 3510 exported as "nraid" logical devices. We then configured a single ZFS pool on top of this, using two raid-z arrays. We are getting
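
For reference, a layout matching that description (a single pool over two raid-z vdevs built from the twelve exported LUNs) might look like the sketch below; the device names are placeholders, not the poster's actual 3510 LUNs:

    # Twelve 3510 LUNs arranged as two six-disk raid-z vdevs in one pool.
    zpool create tank \
        raidz c2t0d0 c2t1d0 c2t2d0 c2t3d0 c2t4d0 c2t5d0 \
        raidz c2t6d0 c2t7d0 c2t8d0 c2t9d0 c2t10d0 c2t11d0
    zpool status tank    # confirm both vdevs show ONLINE
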
2012 Jan 15
22
Does raidzN actually protect against bitrot? If yes - how?
"Does raidzN actually protect against bitrot?" That''s a kind of radical, possibly offensive, question formula that I have lately. Reading up on theory of RAID5, I grasped the idea of the write hole (where one of the sectors of the stripe, such as the parity data, doesn''t get written - leading to invalid data upon read). In general, I think the same applies to bitrot of
2011 Feb 05
12
ZFS Newbie question
I've spent a few hours reading through the forums and wiki and honestly my head is spinning. I have been trying to study up on either buying or building a box that would allow me to add drives of varying sizes/speeds/brands (adding more later, etc.) and still be able to use the full space of the drives (minus parity? [not sure if I got the terminology right]) with redundancy. I have found the "all-in
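
A common answer to the "varying sizes, add more later" requirement is a pool of mirrored pairs, grown one pair at a time; a minimal sketch with placeholder device names (within each pair, usable capacity is that of the smaller disk):

    # Start with one mirrored pair of whatever size is on hand...
    zpool create tank mirror c0t0d0 c0t1d0
    # ...and grow the pool later by adding further pairs; sizes may
    # differ from pair to pair without wasting space across pairs.
    zpool add tank mirror c0t2d0 c0t3d0
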
2009 Oct 09
22
Does ZFS work with SAN-attached devices?
Hi All, It's been a while since I touched ZFS. Is the below still the case with ZFS and hardware RAID arrays? Do we still need to provide two LUNs from the hardware RAID and then have ZFS mirror those two LUNs? http://www.opensolaris.org/os/community/zfs/faq/#hardwareraid Thanks, Shawn
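
The FAQ's underlying point is that ZFS can only repair a bad block if it has its own redundancy to reconstruct from, hence the two-LUN mirror advice; a sketch with hypothetical LUN device names:

    # Let the array do its hardware RAID internally, but export two
    # LUNs and mirror them in ZFS so checksum errors are self-healing.
    zpool create sanpool mirror c4t0d0 c5t0d0
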
2006 Apr 28
4
ZFS RAID-Z for Two-Disk Workstation Setup?
After reading the ZFS docs it does appear that RAID-Z can be used on a two-disk system, and I was wondering if the system would basically work like Intel's Matrix RAID for two disks? Intel Matrix RAID info: http://www.intel.com/design/chipsets/matrixstorage_sb.htm http://techreport.com/reviews/2005q1/matrix-raid/index.x?pg=1 My focus with this thread is some
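
For what it's worth, on exactly two disks the usual ZFS recommendation is a plain mirror rather than RAID-Z, since a two-disk raidz stores one data and one parity sector per stripe and so offers mirror-like capacity anyway; a sketch with placeholder device names:

    # Two-disk workstation: a simple ZFS mirror.
    zpool create wspool mirror c1d0 c1d1
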
2011 Jan 05
6
ZFS on top of ZFS iSCSI share
I have a filer running OpenSolaris (snv_111b) and I am presenting an iSCSI share from a RAIDZ pool. I want to run ZFS on the share at the client. Is it necessary to create a mirror or use ditto blocks at the client to ensure ZFS can recover if it detects a failure at the client? Thanks, Bruin
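
Ditto blocks on the client side are controlled by the copies property; a minimal sketch, with a hypothetical pool built on the iSCSI LUN (note that copies=2 only affects blocks written after it is set, and does not help if the whole LUN disappears):

    # Pool on the single iSCSI device presented by the filer.
    zpool create clientpool c3t0d0
    # Keep two copies of every data block so the client can self-heal
    # isolated checksum errors without a second device.
    zfs set copies=2 clientpool
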
2007 May 02
16
ZFS Support for remote mirroring
Does ZFS support any type of remote mirroring? It seems at present my only two options to achieve this would be Sun Cluster or Availability Suite. I thought that this functionality was in the works, but I haven't heard anything lately. Thanks! Aaron Newcomb http://opennewsshow.org http://thesourceshow.org
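
Not full synchronous remote mirroring, but the ZFS-native approach most people use is snapshot replication with zfs send/receive; a hedged sketch (host and dataset names are placeholders):

    # Initial full copy to a remote pool over ssh.
    zfs snapshot tank/data@rep1
    zfs send tank/data@rep1 | ssh backuphost zfs receive backup/data
    # Subsequent runs send only the changes between snapshots.
    zfs snapshot tank/data@rep2
    zfs send -i tank/data@rep1 tank/data@rep2 | ssh backuphost zfs receive backup/data
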
2008 Jun 30
20
Some basic questions about getting the best performance for database usage
I'm new to OpenSolaris and very new to ZFS. In the past we have always used Linux for our database backends. So now we are looking for a new database server to give us a big performance boost, and also the possibility of scalability. Our current database consists mainly of a huge table containing about 230 million records and a few (relatively) smaller tables (something like 13 million
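
One database-specific tuning point that usually comes up in these threads is matching the dataset's recordsize to the database page size before loading data; a sketch, assuming an 8K page size and a hypothetical dataset named tank/db:

    # recordsize only affects files created after it is set, so set it
    # before loading the database files.
    zfs create tank/db
    zfs set recordsize=8K tank/db
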
2007 Apr 17
10
storage type for ZFS
The paragraph below is from the ZFS admin guide, under "Traditional Volume Management": As described in "ZFS Pooled Storage" on page 18, ZFS eliminates the need for a separate volume manager. ZFS operates on raw devices, so it is possible to create a storage pool comprised of logical volumes, either software or hardware. This configuration is not recommended, as ZFS works best when it uses raw physical
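
In practice the guide's recommendation means handing ZFS whole disks rather than SVM/VxVM volumes or hardware-RAID LUNs; an illustrative sketch (device names are placeholders):

    # Recommended: build the pool on raw whole disks...
    zpool create tank mirror c0t1d0 c0t2d0
    # ...rather than on a volume-manager device such as /dev/md/dsk/d10.
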
2007 Mar 06
2
recover user error
ZFS claims that it can recover from user errors, such as accidental deletion of files. How does this work? Does it only work for mirrored or RAID-Z pools? What is the command to perform the task? Also, regarding COW: I understand that during a transaction (while data is being updated), ZFS keeps a copy of the previous data. However, once the transaction is successfully completed, isn't it the case
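
It is worth separating two mechanisms here: recovery of deleted files comes from snapshots, which work on any pool (mirrored, RAID-Z, or single disk), not from redundancy. A minimal sketch, with hypothetical dataset and file names:

    # Take a snapshot (typically on a schedule)...
    zfs snapshot tank/home@monday
    # ...then after an accidental delete, copy the file back out of the
    # hidden .zfs directory, or roll the whole dataset back.
    cp /tank/home/.zfs/snapshot/monday/notes.txt /tank/home/
    zfs rollback tank/home@monday
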
2006 Sep 13
10
Snapshots and backing store
Hi, there's something really bizarre in the ZFS snapshot specs: "Uses no separate backing store." Hmm... if I want to share one physical volume somewhere in my SAN as THE snapshot backing store, it becomes impossible to do! Really bad. Is there any chance of a "backing-store-file" option in a future release? Along the same lines, it would be great to
2006 Aug 24
5
unaccounted for daily growth in ZFS disk space usage
We finally flipped the switch on one of our ZFS-based servers, with approximately 1TB used of 2.8TB (3 stripes of 950GB or so, each of which is a RAID5 volume on the Adaptec card). We have snapshots every 4 hours for the first few days. If you add up the snapshot references it appears somewhat high versus daily use (mostly mailboxes, spam, etc. changing), but say an aggregate of no more than 400+MB a
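
To see exactly where the daily growth sits, per-snapshot space accounting can be listed directly; a sketch, with snapshot names as they exist on the system:

    # USED is space held exclusively by each snapshot, i.e. blocks that
    # were overwritten or freed after the snapshot was taken.
    zfs list -t snapshot -o name,used,referenced -s used
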
2007 Jul 26
8
Read-only (forensic) mounts of ZFS
Hi, I'm looking into forensic aspects of ZFS, in particular ways to use ZFS tools to investigate ZFS file systems without writing to the pools. I'm working on a test suite of file system images within VTOC partitions. At the moment, these only have 1 file system per pool per VTOC partition for simplicity's sake, and I'm using Solaris 10 6/06, which may not
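
For comparison, later ZFS releases added a read-only import mode aimed at exactly this use case; it is not available in the Solaris 10 6/06 the poster is using, so this is a sketch against newer ZFS only (pool name and mount point are placeholders):

    # Import the pool without allowing any writes to it; -R keeps the
    # examining host from recording the pool in its zpool.cache.
    zpool import -o readonly=on -R /mnt/evidence evidencepool
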
2007 Sep 26
9
Rule of Thumb for zfs server sizing with (192) 500 GB SATA disks?
I'm trying to get maybe 200 MB/sec over NFS for large movie files (and need large capacity to hold all of them). Are there any rules of thumb on how much RAM is needed to handle this (probably RAIDZ for all the disks) with ZFS, and how large a server should be used? The throughput required is not so large, so I am thinking an X4100 M2 or X4150 should be plenty.
2007 Apr 19
14
Permanently removing vdevs from a pool
Is it possible to gracefully and permanently remove a vdev from a pool without data loss? The type of pool in question here is a simple pool without redundancy (i.e. JBOD). The documentation mentions offlining, for instance, but without going into the end result of doing that. What I'm looking for is an option to evacuate, for lack of a better word, the data from a specific
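
At the time of this thread, top-level vdevs could not be evacuated at all (only hot spares, cache, and log devices were removable); later OpenZFS releases added device removal for plain-disk and mirror vdevs. A sketch against modern OpenZFS, with placeholder names:

    # Copies the vdev's data onto the pool's remaining vdevs, then
    # removes it; works for single-disk and mirror vdevs, not raidz.
    zpool remove tank c0t3d0
    zpool status tank    # shows the removal/evacuation progress
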
2007 Dec 31
4
Help! ZFS pool is UNAVAILABLE
Hi All, I posted this in a different thread, but it was recommended that I post in this one. Basically, I have a 3-drive raidz array on internal Seagate drives, running Nevada build 64. I purchased 3 additional USB drives with the intention of mirroring and then migrating the data to the new USB drives. I accidentally added the 3 USB drives as a second raidz vdev in my original storage pool, so now I have 2
2009 Sep 08
4
Can ZFS simply concatenate LUNs (eg no RAID0)?
Hi, I have a disk array that is providing striped LUNs to my Solaris box, hence I'd like to simply concatenate those LUNs without adding another layer of striping. Is this possible with ZFS? As far as I understand, if I use zpool create myPool lun-1 lun-2 ... lun-n I will get RAID0-style striping where each data block is split across all "n" LUNs. If that's
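
As the poster suspects, a multi-LUN pool stripes rather than concatenates, though slightly differently from RAID0: each block lands whole on one vdev, and ZFS rotates new writes across the vdevs instead of filling lun-1 first. There is no pure-concatenation mode, so the command stays as written:

    # Each LUN is a top-level vdev; writes are dynamically striped
    # (load-balanced) across them, block by block.
    zpool create myPool lun-1 lun-2 lun-3
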
2011 Aug 14
4
Space usage
I'm just uploading all my data to my server and the space used is much more than what I'm uploading: Documents = 147MB, Videos = 11GB, Software = 1.4GB. By my calculations, that equals about 12.5GB, yet zpool list is showing 21G as being allocated:
NAME   SIZE   ALLOC  FREE   CAP  DEDUP  HEALTH  ALTROOT
dpool  27.2T  21.2G  27.2T  0%   1.00x  ONLINE  -
It doesn't look like
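
A plausible explanation for the gap, assuming dpool is a raidz pool: zpool list reports raw space including parity and metadata overhead, while zfs list reports usable space, so comparing the two views usually accounts for the difference:

    # Usable-space view, without raidz parity overhead.
    zfs list -o space dpool
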
2011 Mar 01
5
btrfs wishlist
Hi all, Having managed ZFS for about two years, I want to post a wishlist.
INCLUDED IN ZFS
- Mirror an existing single-drive filesystem, as in 'zpool attach'
- RAIDz stuff: single and hopefully multiple-parity RAID configurations with block-level checksumming
- Background scrub/fsck
- Pool-like management with multiple RAIDs/mirrors (vdevs)
- Autogrow, as in ZFS autoexpand
NOT
2010 Mar 26
23
RAID10
Hi All, I am looking at ZFS and I get that they call it RAIDZ, which is similar to RAID 5, but what about RAID 10? Isn't a RAID 10 setup better for data protection? So if I have 8 x 1.5TB drives, wouldn't I:
- mirror drives 1 and 5
- mirror drives 2 and 6
- mirror drives 3 and 7
- mirror drives 4 and 8
Then stripe 1,2,3,4 and stripe 5,6,7,8. How does one do this with ZFS?
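
In ZFS terms, "RAID 10" is simply a pool made of mirror vdevs; the striping across the mirrors happens automatically, so there is no separate stripe step. With the poster's eight drives (placeholder device names):

    # Four two-way mirrors; ZFS stripes writes across the four vdevs.
    zpool create tank \
        mirror drive1 drive5 \
        mirror drive2 drive6 \
        mirror drive3 drive7 \
        mirror drive4 drive8
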