Displaying 10 results from an estimated 10 matches for "da7".
2007 Apr 26
7
device name changing
Hi.
If I create a zpool with the following command:
zpool create tank raidz2 da0 da1 da2 da3 da4 da5 da6 da7
and after a reboot the device names have changed for some reason, so that da2
and da5 are swapped, whether by altering the LUN setting on the storage
or by switching cables, swapping disks, etc.?
How will ZFS handle that? Will it simply acknowledge that all devices
are present and the pool is intact and re...
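ZFS records a GUID in each member disk's on-disk label and matches vdevs by that GUID rather than by device name, so a rename of this kind is normally harmless. A minimal sketch of re-reading the labels after the names change, assuming the pool is named tank as above:

# Export and re-import; ZFS scans the on-disk labels and matches the
# raidz2 members by GUID, not by their da0..da7 names.
zpool export tank
zpool import tank
# Verify that all eight members show up as ONLINE.
zpool status tank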
2007 Sep 28
3
[LLVMdev] Crash on accessing deleted MBBs (new backend)
...%D0 = MOVE_dr_dr32 %D1<kill>
RTS %D0<imp-use,kill>
# End machine code for ilog2().
Here's where things go south:
# Machine code for ilog2():
Live Ins: D0 in VR#1030
Live Outs: D0
entry: 01917130, LLVM BB @003F93A0, ID#0:
Live Ins: %D0
MOVE_dr_mem_pd32 %Da5, %Da7
%Da5 = MOVE_dr_dr32 %Da7
%Da7 = ADD_imm_dr32 4, %Da7
%D1 = CLR_dr32
CMP_dr_dr32 %D0, %D1<kill>
%D1<dead> = SEQ
BEQ mbb<entry.bb9_crit_edge,01916610>
Successors according to CFG: 01916610 (#4) 01916530 (#1)
entry.bb5_crit_edge: 0...
2013 Mar 25
2
gptzfsboot: error 4 lba 30
...lba 31
gptzfsboot: error 4 lba 31
gptzfsboot: error 4 lba 31
gptzfsboot: error 4 lba 31
gptzfsboot: error 4 lba 31
(Not shortened, exactly those lines)
The server then manages to boot from a mirrored zpool.
What is the cause of error 4 lba 30/31?
- controller is an HP/Compaq P400 (ciss)
- da0 - da7 are RAID0 volumes (controller not JBOD capable)
- FreeBSD 9.1-REL (same error message with 9-STABLE from 2013-03-24)
- server is zfs-only
# diskinfo -v da0
da0
512 # sectorsize
146778685440 # mediasize in bytes (136G)
286677120 # mediasize in sectors...
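One way to narrow this down is to read those LBAs directly from the running system; a minimal sketch, assuming da0 is one of the affected volumes (note a BIOS-time read by gptzfsboot can still fail even if this succeeds):

# Read LBAs 30 and 31 (512-byte sectors) straight from the device;
# an I/O error here would point at the controller/volume itself.
dd if=/dev/da0 of=/dev/null bs=512 skip=30 count=2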
2010 Mar 26
23
RAID10
Hi All,
I am looking at ZFS and I get that they call it RAIDZ, which is similar to RAID 5, but what about RAID 10? Isn't a RAID 10 setup better for data protection?
So if I have 8 x 1.5 TB drives, wouldn't I:
- mirror drive 1 and 5
- mirror drive 2 and 6
- mirror drive 3 and 7
- mirror drive 4 and 8
Then stripe 1,2,3,4
Then stripe 5,6,7,8
How does one do this with ZFS?
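The ZFS equivalent of RAID 10 is a pool made of mirror vdevs; ZFS stripes writes across the vdevs automatically, so there is no separate stripe step. A minimal sketch, assuming the eight drives appear as da0-da7 (drive 1 = da0, drive 5 = da4, and so on):

# Four two-way mirrors, paired as above; ZFS stripes across them.
zpool create tank \
    mirror da0 da4 \
    mirror da1 da5 \
    mirror da2 da6 \
    mirror da3 da7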
2003 Dec 18
0
Partial deadlock in 4.9p1
...State: up Device /dev/da4a Avail: 3/4094 MB (0%)
D disk04 State: up Device /dev/da5a Avail: 3/4094 MB (0%)
D disk05 State: up Device /dev/da6a Avail: 3/4094 MB (0%)
D disk08 State: up Device /dev/da7a Avail: 3/4094 MB (0%)
D disk09 State: up Device /dev/da8a Avail: 3/4094 MB (0%)
D disk10 State: up Device /dev/da9a Avail: 3/4094 MB (0%)
D disk11 State: up Device /dev/da10a Avail: 3/4094 MB (0%)
D disk12...
2013 Jul 18
0
Seeing data corruption with g_multipath utility
...multipath/newdisk4 ONLINE 0 0 0
multipath/newdisk2 ONLINE 0 0 0
errors: No known data errors
gmultipath status:
Name Status Components
multipath/newdisk2 OPTIMAL da7 (ACTIVE)
da2 (PASSIVE)
multipath/newdisk1 OPTIMAL da6 (ACTIVE)
da1 (PASSIVE)
multipath/newdisk4 OPTIMAL da3 (ACTIVE)
da4 (PASSIVE)
multipath/newdisk...
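For reference, a multipath device like newdisk2 above is normally set up by labeling the two paths to the same LUN; a minimal sketch, assuming da7 and da2 really are two paths to one disk:

# Writes gmultipath metadata to the device's last sector; one path
# serves I/O as ACTIVE while the other stays PASSIVE.
gmultipath label newdisk2 da7 da2
gmultipath status    # should report newdisk2 as OPTIMAL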
2012 Nov 12
1
dovecot lost mail! Cause?
Hi,
After using dovecot for several years now, today something happened which
makes me feel really uncomfortable: a received email was just not
delivered properly, or rather, is lost! The mail (from an external server) was
sent to two local mailboxes, user1 and user2. user1 received the message
but for user2, it *magically* disappeared.
MTA is exim4, which definitely processed the messages and handed
2011 Mar 01
5
btrfs wishlist
Hi all
Having managed ZFS for about two years, I want to post a wishlist.
INCLUDED IN ZFS
- Mirror existing single-drive filesystem, as in 'zpool attach' (see the sketch after this list)
- RAIDz-stuff - single and hopefully multiple-parity RAID configuration with block-level checksumming
- Background scrub/fsck
- Pool-like management with multiple RAIDs/mirrors (VDEVs)
- Autogrow as in ZFS autoexpand
NOT
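For comparison, the mirror-attach operation in the first wishlist item works like this on ZFS; a minimal sketch, assuming a single-disk pool named tank on da0 and a new disk da1:

# Attach da1 as a mirror of the existing da0; ZFS resilvers
# the new disk in the background.
zpool attach tank da0 da1
zpool status tank    # shows resilver progress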
2007 May 23
13
Preparing to compare Solaris/ZFS and FreeBSD/ZFS performance.
Hi.
I'm all set to do a performance comparison between Solaris/ZFS and
FreeBSD/ZFS. I spent the last few weeks on FreeBSD/ZFS optimizations and I
think I'm ready. The machine is a 1x quad-core DELL PowerEdge 1950, 2GB
RAM, 15 x 74GB-FC-10K disks accessed via 2x2Gbit FC links. Unfortunately the
links to the disks are the bottleneck, so I'm going to use no more than 4
disks, probably.
2001 Aug 09
2
Debugging help: BUG: Assertion failure with ext3-0.95 for 2.4.7
...d 1da286b0
Aug 9 17:57:40 boeaet34 kernel: <(transaction.c, 1069):
journal_dirty_metadata: journal_head 1da286b0
Aug 9 17:57:40 boeaet34 kernel: <7dd: orphan inode 1256641 will point to 0
Aug 9 17:57:40 boeaet34 kernel: <r_head 1d9a5a10, force_copy 0
Aug 9 17:57:40 boeaet34 kernel: 5): da7>a: journal_head 1d9a5a10
Aug 9 17:57:40 boeaet34 kernel: <ction.c, 525): do_getd 1d9a5a10,
forction.7>(transaction.c, 1069): ction.cd 1d9a58c0, forction.cforce_copy 0
Aug 9 17:57:40 boeaet34 kernel: 1069):le going live.
Aug 9 17:57:40 boeaet34 kernel: <: ext3_forget: forgetting bh...