Displaying 8 results from an estimated 8 matches for "md8".
2009 Sep 20 (0 replies): Re: reiserfs3/ext4/btrfs RAID read performance
On Sep 20, 11:50 am, wbrana@gmail.com wrote:
> On Sun, Sep 20, 2009 at 3:47 AM, Daniel J Blueman
>
> <daniel.blueman@gmail.com> wrote:
> > On Sep 19, 7:20 pm, wbr...@gmail.com wrote:
>
> >> RAID details:
> >>
> >> md8 : active raid10 sda7[0] sdd7[3] sdc7[2] sdb7[1]
> >> 62925824 blocks 256K chunks 2 far-copies [4/4] [UUUU]
> >>
> >> Ext4:
> >> mkfs.ext4 -E stride=64,stripe-width=128 /dev/md8
> >> mount -t ext4 -o noatime,auto_da_alloc,commit=600 /dev/md8 /mnt/md...
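The stride and stripe-width in the mkfs.ext4 line above follow from the array geometry in the quoted mdstat output. A minimal sketch of the arithmetic, assuming a 4 KB ext4 block size and that a 4-disk RAID10 with 2 far-copies presents 2 data-bearing disks per stripe:

```shell
# Assumptions: 256 KB md chunk (from the mdstat line), 4 KB ext4 blocks,
# 2 data-bearing disks (4-disk RAID10, 2 far-copies stores each block twice).
chunk_kb=256
block_kb=4
data_disks=2
stride=$(( chunk_kb / block_kb ))            # filesystem blocks per chunk
stripe_width=$(( stride * data_disks ))      # blocks per full data stripe
echo "stride=${stride} stripe-width=${stripe_width}"
```

This reproduces the `-E stride=64,stripe-width=128` values used in the post.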
2002 Feb 28 (5 replies): Problems with ext3 fs
...[raid5]
read_ahead 1024 sectors
md0 : active raid1 hdc1[1] hda1[0]
96256 blocks [2/2] [UU]
md5 : active raid1 hdk1[1] hde1[0]
976640 blocks [2/2] [UU]
md6 : active raid1 hdk5[1] hde5[0]
292672 blocks [2/2] [UU]
md7 : active raid1 hdk6[1] hde6[0]
1952896 blocks [2/2] [UU]
md8 : active raid1 hdk7[1] hde7[0]
976640 blocks [2/2] [UU]
md9 : active raid1 hdk8[1] hde8[0]
9765376 blocks [2/2] [UU]
md10 : active raid0 hdk9[1] hde9[0]
12108800 blocks 4k chunks
md12 : active raid5 hdk3[3] hde3[2] hdc2[1] hda2[0]
59978304 blocks level 5, 32k chunk, algor...
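md12 is the only parity array in that listing. As a rough check, RAID5 usable space is (members - 1) times the per-member size; the per-member figure below is inferred from the quoted 59978304-block total, not stated in the post:

```shell
# RAID5 usable capacity = (members - 1) * per-member size.
members=4                  # hdk3, hde3, hdc2, hda2
member_blocks=19992768     # inferred: 59978304 / 3 (1 KB blocks per member)
usable=$(( (members - 1) * member_blocks ))
echo "${usable} blocks usable"
```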
2001 Nov 11 (2 replies): Software RAID and ext3 problem
...e raid array with partitions as shown
below:
Filesystem Size Used Avail Use% Mounted on
/dev/md5 939M 237M 654M 27% /
/dev/md0 91M 22M 65M 25% /boot
/dev/md6 277M 8.1M 254M 4% /tmp
/dev/md7 1.8G 1.3G 595M 69% /usr
/dev/md8 938M 761M 177M 82% /var
/dev/md9 9.2G 2.6G 6.1G 30% /home
/dev/md10 11G 2.1G 8.7G 19% /scratch
/dev/md12 56G 43G 13G 77% /global
The /usr and /var filesystems keep switching to ro mode following the
detection of errors. This has bee...
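The remount-ro flips described above come from the filesystem's error policy (the `errors=` mount option, or the superblock default set with `tune2fs -e`). A sketch that reads the policy out of a /proc/mounts-style line; the sample line is illustrative, not taken from the original post:

```shell
# Sample /proc/mounts entry (illustrative): device, mountpoint, fstype, options.
line='/dev/md8 /var ext3 rw,errors=remount-ro 0 0'
opts=$(echo "$line" | awk '{ print $4 }')
case ",$opts," in
  *,errors=remount-ro,*) echo "policy: remount-ro" ;;
  *)                     echo "policy: superblock default" ;;
esac
```

With `errors=remount-ro` in effect, any detected metadata corruption makes the kernel remount the filesystem read-only, which matches the symptom reported for /usr and /var.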
2005 Feb 03 (2 replies): RAID 1 sync
Is my new 300GB RAID 1 array REALLY going to take 18936 minutes to
sync!!???
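As a back-of-envelope check on the quoted figures (assuming binary units for the 300 GB, which the post does not specify):

```shell
# 300 GB over 18936 minutes -> implied resync rate in KB/s.
size_kb=$(( 300 * 1024 * 1024 ))   # 300 GB in 1 KB blocks (binary GB assumed)
minutes=18936
rate=$(( size_kb / (minutes * 60) ))
echo "${rate} KB/s"
```

That works out to roughly 276 KB/s, which usually means the resync is being throttled while the array handles other I/O; the throttle is controlled by /proc/sys/dev/raid/speed_limit_min and speed_limit_max.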
2001 Nov 11 (0 replies): (no subject)
...e raid array with partitions as shown
below:
Filesystem Size Used Avail Use% Mounted on
/dev/md5 939M 237M 654M 27% /
/dev/md0 91M 22M 65M 25% /boot
/dev/md6 277M 8.1M 254M 4% /tmp
/dev/md7 1.8G 1.3G 595M 69% /usr
/dev/md8 938M 761M 177M 82% /var
/dev/md9 9.2G 2.6G 6.1G 30% /home
/dev/md10 11G 2.1G 8.7G 19% /scratch
/dev/md12 56G 43G 13G 77% /global
The /usr and /var filesystems keep switching to ro mode following the
detection of errors. This has bee...
2012 Jan 17 (2 replies): Transition to CentOS - RAID HELP!
...olGroup00 lvm2 [139.72 GB / 0 free]
PV /dev/md4 VG VolGroup00 lvm2 [139.72 GB / 0 free]
PV /dev/md5 VG VolGroup00 lvm2 [139.72 GB / 0 free]
PV /dev/md6 VG VolGroup00 lvm2 [139.72 GB / 0 free]
PV /dev/md7 VG VolGroup00 lvm2 [139.72 GB / 0 free]
PV /dev/md8 VG VolGroup00 lvm2 [139.72 GB / 0 free]
PV /dev/md9 VG VolGroup00 lvm2 [139.72 GB / 139.72 GB free]
Total: 10 [1.36 TB] / in use: 10 [1.36 TB] / in no VG: 0 [0 ]
(evidently /dev/md9 isn't being used ... emergency spare?)
And from there, they created the logical vol...
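The totals line above can be sanity-checked: ten PVs of 139.72 GB each is 1397.2 GB, which is about 1.36 TB in binary units:

```shell
# 10 physical volumes * 139.72 GB each, expressed in TB (1024 GB per TB).
awk 'BEGIN { printf "%.2f TB\n", 10 * 139.72 / 1024 }'
```

Whether /dev/md9 really carries no logical volumes could be confirmed with `pvdisplay -m /dev/md9`, which lists the LV segments allocated on a physical volume.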
2003 Mar 24 (5 replies): Bootable Kernel
Hello,
Sorry to sound like a broken record, but I'm trying to figure out why I'm
getting "Bad gzip magic numbers" errors when ISOLINUX is trying to
inflate the kernel. Right now I'm clueless as to what's wrong. I've tried
various kernels compiled on a few different platforms. Has anyone ever
seen this before? Anyone know where a better forum to ask this question
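For context on the error in that post: every gzip stream begins with the magic bytes 1f 8b, and a "Bad gzip magic numbers" message generally means the decompressor was handed a corrupted or wrongly-copied image. A quick way to see the magic bytes on any gzip file (the temp path is illustrative):

```shell
# Create a small gzip file and dump its first two bytes.
printf 'test' | gzip -c > /tmp/magic-check.gz
od -An -tx1 -N2 /tmp/magic-check.gz   # gzip magic: 1f 8b
```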
2009 May 13 (1 reply): [PATCH server] Cloud UI V1 (readonly).
[git binary-patch data omitted; the search string "md8" matched inside this base85-encoded block]
diff --git a/src/public/stylesheets/cloud/layout.css b/src/public/stylesheets/cloud/layout.css
new file mode 100644
index 0000000..60b5c91
--- /dev/nul...