Displaying 20 results from an estimated 10000 matches similar to: "How to set ZFS metadata copies=3?"
2007 Apr 11
0
raidz2 another resilver problem
Hello zfs-discuss,
One of the disks started to behave strangely.
Apr 11 16:07:42 thumper-9.srv sata: [ID 801593 kern.notice] NOTICE: /pci@1,0/pci1022,7458@3/pci11ab,11ab@1:
Apr 11 16:07:42 thumper-9.srv port 6: device reset
Apr 11 16:07:42 thumper-9.srv scsi: [ID 107833 kern.warning] WARNING: /pci@1,0/pci1022,7458@3/pci11ab,11ab@1/disk@6,0 (sd27):
Apr 11 16:07:42 thumper-9.srv
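A quick sketch of how a disk like sd27 is usually checked after resets like these; the pool name "tank" and the device names are placeholders, not taken from the post:
# per-device error counters (hard/soft/transport errors)
iostat -En sd27
# does ZFS itself see an unhealthy pool?
zpool status -x
# per-vdev read/write/checksum error counts
zpool status -v tank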
2009 Jun 19
8
x4500 resilvering spare taking forever?
I've got a Thumper running snv_57 and a large ZFS pool. I recently noticed a drive throwing some read errors, so I did the right thing and replaced it with a spare (zpool replace).
Everything went well, but the resilvering process seems to be taking an eternity:
# zpool status
pool: bigpool
state: ONLINE
status: One or more devices has experienced an unrecoverable error. An
attempt was
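For reference, a minimal sketch of pulling in a hot spare and watching the resilver; the pool name follows the post, the device names are made up:
# swap the failing disk for the hot spare and start a resilver
zpool replace bigpool c0t3d0 c5t7d0
# check progress periodically; status shows percent done and an estimate of time to go
zpool status bigpool | grep -i resilver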
2010 Sep 30
0
ZFS Raidz2 problem, detached drive
I have an X4500 thumper box with 48x 500GB drives set up in a pool, split into raidz2 sets of 8-10 drives within the single pool.
I had a failed disk which I cfgadm unconfigured and replaced without a problem, but it wasn't recognised as a Sun drive in format, and unbeknown to me someone else had logged in remotely at the time and issued a zpool replace....
I corrected the system/drive
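For comparison, a sketch of the replacement sequence that was presumably intended; the attachment point sata4/6, the pool name and the disk c4t6d0 are hypothetical:
zpool offline tank c4t6d0          # take the failed disk out of service
cfgadm -c unconfigure sata4/6      # release the SATA port
# ...physically swap the drive, then...
cfgadm -c configure sata4/6        # bring the new drive back online
zpool replace tank c4t6d0          # resilver onto the new disk in the same slot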
2010 Sep 29
10
Resilver making the system unresponsive
This must be resilver day :)
I just had a drive failure. The hot spare kicked in, and access to the pool over NFS was effectively zero for about 45 minutes. Currently the pool is still resilvering, but for some reason I can access the file system now.
Resilver speed has been beaten to death, I know, but is there a way to avoid this? For example, is more enterprisey hardware less susceptible to
2008 Jul 31
17
Can I trust ZFS?
Hey folks,
I guess this is an odd question to be asking here, but I could do with some feedback from anybody who's actually using ZFS in anger.
I'm about to go live with ZFS in our company on a new fileserver, but I have some real concerns about whether I can really trust ZFS to keep my data alive if things go wrong. This is a big step for us; we're a 100% Windows
2010 Apr 24
3
ZFS RAID-Z2 degraded vs RAID-Z1
Had an idea; could someone please tell me why it's wrong? (I feel like it has to be.)
A raidz2 pool with one missing disk offers the same failure resilience as a healthy raidz1 pool (no data loss when one more disk fails). I had initially wanted to do a single-parity raidz pool (5 disks), but after a recent scare decided raidz2 was the way to go. With the help of a sparse file
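The sparse-file trick being alluded to looks roughly like this; the sizes, paths and disk names are made up, and the file only stands in for a disk to be added later:
mkfile -n 1500g /var/tmp/fakedisk            # sparse file, same size as the real disks
zpool create tank raidz2 c1t0d0 c1t1d0 c1t2d0 c1t3d0 /var/tmp/fakedisk
zpool offline tank /var/tmp/fakedisk         # degrade it on purpose; it now tolerates one more failure, like raidz1
# later, when the real disk arrives:
# zpool replace tank /var/tmp/fakedisk c1t4d0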
2010 Dec 05
4
Zfs ignoring spares?
Hi all
I have installed a new server with 77 2TB drives in 11 7-drive raidz2 vdevs, all on WD Black drives. Now, it seems two of these drives were bad: one of them had a bunch of errors, the other was very slow. After offlining them (zpool offline) and then replacing them with the online spares (zpool replace), the resilver ended and I thought it'd be OK. Apparently not. Although the resilver succeeds, the pool status
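What normally completes the job once a spare has finished resilvering, sketched with hypothetical device and pool names (the spare stays marked INUSE until the original device is detached):
zpool detach tank c7t3d0          # drop the failed disk; the spare becomes a permanent member
zpool add tank spare c8t0d0       # put a fresh disk back into the spare slot
zpool set autoreplace=on tank     # optional: auto-replace devices inserted into the same slot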
2009 Nov 13
11
scrub differs in execute time?
I have a raidz2 and did a scrub; it took 8h. Then I reconnected some drives to other SATA ports, and now it takes 15h to scrub??
Why is that?
2009 Jan 15
14
Using ZFS for replication
Fairly new to ZFS. I am looking to replicate data between two thumper boxes.
Found quite a few articles about using zfs incremental snapshot send/receive. Just a cheeky question to see if anyone has anything working in a live environment and is happy to share the scripts, to save me reinventing the wheel. Thanks in advance.
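In case it saves someone some typing, a minimal sketch of the incremental send/receive loop usually used for this; the host, pool and snapshot names are invented and there is no error handling:
#!/usr/bin/ksh
# send only the changes since the last replicated snapshot
NEW=repl-`date +%Y%m%d%H%M`
zfs snapshot tank/data@$NEW
zfs send -i tank/data@repl-last tank/data@$NEW | \
    ssh thumper2 /usr/sbin/zfs receive -F backup/data
# roll the "last replicated" marker forward for the next run
zfs destroy tank/data@repl-last
zfs rename tank/data@$NEW tank/data@repl-last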
2009 Jan 12
1
ZFS size is different ?
Hi all,
I have 2 questions about ZFS.
1. I have created a snapshot in my pool1/data1 and zfs send/recv'd it to pool2/data2, but I found the USED in zfs list is different:
NAME USED AVAIL REFER MOUNTPOINT
pool2/data2 160G 1.44T 159G /pool2/data2
pool1/data 176G 638G 175G /pool1/data1
It keeps about 30,000,000 files.
The content of p_pool/p1 and backup/p_backup
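A sketch of what is worth comparing on both sides when the sizes differ like this; the dataset names follow the post:
# properties that change how much space the same files occupy
zfs get used,referenced,compressratio,compression,copies,recordsize pool1/data1
zfs get used,referenced,compressratio,compression,copies,recordsize pool2/data2
# snapshots present on only one side also inflate USED
zfs list -t snapshot -r pool1/data1 pool2/data2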
2010 Dec 14
3
last thought before switching to ZFS
Hi, I have googled to find some info about ZFS and I found this site here.
I have unRAID now and am really happy with it, but in January I am going to upgrade my CPU to a 45-watt quad core from Intel, since I have begun to use my server to encode my TV show ISOs while it is on.
So now I have begun to learn ZFS in VirtualBox. But now to the question: if I make a pool with 4x 2TB in raidz1, can it then be converted
2009 Apr 27
23
Raidz vdev size... again.
Hi,
I'm new to the list so please bear with me. This isn't an OpenSolaris
related problem but I hope it's still the right list to post to.
I'm on the way to moving a backup server to ZFS-based storage, but I
don't want to spend too many drives on parity (the 16 drives are attached
to a 3ware RAID controller so I could also just use RAID6 there).
I
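For concreteness, the two layouts usually weighed for 16 drives, with placeholder device names; the second costs two extra parity disks but gives two top-level vdevs (and so better random I/O):
# a) one 16-wide raidz2: 14 data + 2 parity
zpool create backup raidz2 \
    c2t0d0 c2t1d0 c2t2d0 c2t3d0 c2t4d0 c2t5d0 c2t6d0 c2t7d0 \
    c2t8d0 c2t9d0 c2t10d0 c2t11d0 c2t12d0 c2t13d0 c2t14d0 c2t15d0
# b) two 8-wide raidz2 vdevs: 12 data + 4 parity
zpool create backup \
    raidz2 c2t0d0 c2t1d0 c2t2d0 c2t3d0 c2t4d0 c2t5d0 c2t6d0 c2t7d0 \
    raidz2 c2t8d0 c2t9d0 c2t10d0 c2t11d0 c2t12d0 c2t13d0 c2t14d0 c2t15d0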
2007 Sep 06
0
Zfs with storedge 6130
On 9/4/07 4:34 PM, "Richard Elling" <Richard.Elling at Sun.COM> wrote:
> Hi Andy,
> my comments below...
> note that I didn't see zfs-discuss at opensolaris.org in the CC for the
> original...
>
> Andy Lubel wrote:
>> Hi All,
>>
>> I have been asked to implement a zfs-based solution using StorEdge 6130 and
>> I'm chasing my own
2011 Feb 05
12
ZFS Newbie question
I've spent a few hours reading through the forums and wiki and honestly my head is spinning. I have been trying to study up on either buying or building a box that would allow me to add drives of varying sizes/speeds/brands (adding more later etc.) and still be able to use the full space of the drives (minus parity? [not sure if I got the terminology right]) with redundancy. I have found the "all in
2007 Sep 14
5
ZFS Space Map optimization
I have a huge problem with space maps on thumper. Space maps take over 3GB
and write operations generate massive read operations.
Before every spa sync phase zfs reads space maps from disk.
I decided to turn on compression for the pool (only for the pool, not the filesystems) and it helps.
Now the space maps, intent log, and spa history are compressed.
Now I'm thinking about disabling checksums. All
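For anyone wanting to reproduce the "compression for the pool only, not the filesystems" part: it is presumably just property inheritance being used, roughly as below (pool and filesystem names are hypothetical); per the post, this is what got the space maps, intent log and spa history compressed:
zfs set compression=on tank        # set at the pool's root dataset
zfs set compression=off tank/fs1   # children inherit, so switch it back off per filesystem
zfs set compression=off tank/fs2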
2007 Feb 27
16
understanding zfs/thumper "bottlenecks"?
Currently I'm trying to figure out the best zfs layout for a thumper wrt. read AND write performance.
I did some simple mkfile 512G tests and found that on average ~500 MB/s seems to be the maximum one can reach (tried the initial default setup, all 46 HDDs as R0, etc.).
According to http://www.amd.com/us-en/assets/content_type/DownloadableAssets/ArchitectureWP_062806.pdf I would
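A sketch of the kind of quick sequential-write test being described, and how to watch the per-vdev bandwidth while it runs; the pool name is a placeholder:
ptime mkfile 512g /tank/bigtestfile   # time a single large sequential write
zpool iostat -v tank 5                # in another terminal: per-vdev throughput every 5s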
2010 Jan 28
13
ZFS configuration suggestion with 24 drives
Replacing my current media server with another larger capacity media server. Also switching over to solaris/zfs.
Anyhow, we have a 24-drive capacity. These are for large sequential access (large media files) used by no more than 3 to 5 users at a time. I'm inquiring as to what the best vdev configuration for this is. I'm considering the following configurations
4 x x6
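One layout commonly suggested for a 24-drive sequential-streaming box, sketched with placeholder device names (four 6-disk raidz2 vdevs, i.e. 16 data disks and 8 parity disks):
zpool create media \
    raidz2 c1t0d0 c1t1d0 c1t2d0 c1t3d0 c1t4d0 c1t5d0 \
    raidz2 c2t0d0 c2t1d0 c2t2d0 c2t3d0 c2t4d0 c2t5d0 \
    raidz2 c3t0d0 c3t1d0 c3t2d0 c3t3d0 c3t4d0 c3t5d0 \
    raidz2 c4t0d0 c4t1d0 c4t2d0 c4t3d0 c4t4d0 c4t5d0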
2006 Jul 17
28
Big JBOD: what would you do?
ZFS fans,
I'm preparing some analyses on RAS for large JBOD systems such as
the Sun Fire X4500 (aka Thumper). Since there are zillions of possible
permutations, I need to limit the analyses to some common or desirable
scenarios. Naturally, I'd like your opinions. I've already got a few
scenarios in analysis, and I don't want to spoil the brainstorming, so
2007 Apr 22
7
slow sync on zfs
Hello zfs-discuss,
Relatively low traffic to the pool but sync takes too long to complete
and other operations are also not that fast.
Disks are on 3510 array. zil_disable=1.
bash-3.00# ptime sync
real 1:21.569
user 0.001
sys 0.027
During sync zpool iostat and vmstat look like:
f3-1 504G 720G 370 859 995K 10.2M
misc 20.6M 52.0G 0 0
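For context, zil_disable was a kernel tunable at the time rather than a per-dataset property; it was typically set in one of two ways (a sketch, and note it changes synchronous-write semantics for every pool on the host):
# permanently, in /etc/system (takes effect after a reboot)
set zfs:zil_disable = 1
# or on a live system
echo zil_disable/W0t1 | mdb -kw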
2009 Mar 28
53
Can this be done?
I currently have a 7x 1.5TB raidz1.
I want to add "phase 2", which is another 7x 1.5TB raidz1.
Can I add the second phase to the first phase and basically have two
raid5s striped (in RAID terms)?
Yes, I probably should upgrade the zpool format too. Currently running
snv_104. Also should upgrade to 110.
If that is possible, would anyone happen to have the simple command
lines to
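The command lines being asked for are roughly the following sketch (the pool and device names are placeholders); since adding a vdev is irreversible, the dry-run flag is worth using first:
# dry run: show the resulting layout without changing the pool
zpool add -n tank raidz1 c3t0d0 c3t1d0 c3t2d0 c3t3d0 c3t4d0 c3t5d0 c3t6d0
# actually add the second 7-disk raidz1 as a new top-level vdev
zpool add tank raidz1 c3t0d0 c3t1d0 c3t2d0 c3t3d0 c3t4d0 c3t5d0 c3t6d0
# then bring the pool format up to the current version
zpool upgrade tank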