Displaying 20 results from an estimated 30 matches for "md7".
2007 Feb 18
4
sysutils/fusefs-ntfs working for anyone?
...n userspace
fusefs-ntfs-0.20070207RC1 Mount NTFS partitions and disk images
I use the sysutils/ntfsprogs port to create an NTFS filesystem. I can
also mount this filesystem using mount.ntfs, yet I fail to get anywhere
with ntfs-3g. What's that darn seekscript about anyway?
# mkfs.ntfs -fF /dev/md7
/dev/md7 is not a block device.
mkntfs forced anyway.
The sector size was not specified for /dev/md7 and it could not be obtained automatically. It has been set to 512 bytes.
The partition start sector was not specified for /dev/md7 and it could not be obtained automatically. It has been set to 0...
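For anyone retracing this on FreeBSD, a minimal sketch of the same experiment is below, assuming a file-backed memory disk and the fuse kernel module from sysutils/fusefs-kmod; the image name, size and mount point are illustrative.
# truncate -s 1g /tmp/ntfs.img                 # backing file for the memory disk
# mdconfig -a -t vnode -f /tmp/ntfs.img -u 7   # attach it as /dev/md7
# kldload fuse                                 # fuse.ko from sysutils/fusefs-kmod
# mkntfs -fF /dev/md7                          # from sysutils/ntfsprogs; -F forces the non-block md device
# ntfs-3g /dev/md7 /mnt                        # mount through fusefs-ntfs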
2004 Jun 27
1
Trouble with rsync inside fcron: buffer overflow in recv_exclude_list
...s: 64-bit files, socketpairs, hard links, symlinks, batchfiles,
IPv6, 64-bit system inums, 64-bit internal inums
<snipped license>
sh-2.05b$ cat /usr/local/sbin/mail_backup
#!/bin/sh
touch /tmp/mail_backup-rsync >& /dev/null
date >> /tmp/mail_backup-rsync
mkdir /md7/mail_backup/`date "+%H"` >& /dev/null
/usr/bin/rsync --delete-after --max-delete=1 -pogtvS /var/spool/mail/* /md7/mail_backup/`date "+%H"`/
----- Forwarded message from fcron <root> -----
To: root@hvs
Subject: Output of fcron job: '/usr/local/sbin/mail_ba...
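The quoted mail_backup script uses '>&' redirection under #!/bin/sh, which only behaves as intended when /bin/sh is bash (as the sh-2.05b prompt suggests it is here). A portable restyling might look like the sketch below; it keeps the quoted paths and rsync flags and does not address the recv_exclude_list overflow itself.
#!/bin/sh
# Hourly snapshot of /var/spool/mail -- POSIX sh sketch of the quoted script.
HOUR=`date "+%H"`
DEST="/md7/mail_backup/$HOUR"
touch /tmp/mail_backup-rsync > /dev/null 2>&1
date >> /tmp/mail_backup-rsync
mkdir -p "$DEST" > /dev/null 2>&1
# Trailing slash copies the directory contents (dotfiles included) instead of relying on a glob.
/usr/bin/rsync --delete-after --max-delete=1 -pogtvS /var/spool/mail/ "$DEST"/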
2007 May 04
3
NFS issue
Hi List,
I must be going mad or something, but I've got a really odd problem with an NFS
mount and a DVD-ROM.
Here is the situation:
/dev/md7 58G 18G 37G 33% /data
which is shared out by NFS (/etc/exports).
This has been working since I installed the OS, CentOS 4.4.
I have a DVD-ROM that is device /dev/scd0, which I can mount anywhere I
like, no problem.
However, the problem comes when I try to mount it
under /data/...
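NFSv3 clients normally do not see a filesystem mounted underneath an export unless the server is told to expose it. A hedged /etc/exports sketch for that setup follows; the client network and the /data/dvd mount point are assumptions (the file is /etc/exports on CentOS, while exportfs is the command that reloads it).
# /etc/exports -- sketch only
/data       192.168.0.0/24(rw,sync,no_subtree_check,crossmnt)
# or export the nested mount point explicitly and mark it nohide:
/data/dvd   192.168.0.0/24(ro,nohide,no_subtree_check)
# exportfs -ra    # re-read /etc/exports without restarting nfsd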
2024 Jan 17
2
Upgrade 10.4 -> 11.1 making problems
...59202 0 Y 2803
Brick glusterpub1:/gluster/md6/workdata 55829 0 Y 4583
Brick glusterpub2:/gluster/md6/workdata 50455 0 Y 3296
Brick glusterpub3:/gluster/md6/workdata 50262 0 Y 3237
Brick glusterpub1:/gluster/md7/workdata 52238 0 Y 5014
Brick glusterpub2:/gluster/md7/workdata 52474 0 Y 3673
Brick glusterpub3:/gluster/md7/workdata 57966 0 Y 3653
Self-heal Daemon on localhost N/A N/A Y 4141
Self-heal Daemon o...
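The snippets in this thread are 'gluster volume status' output compared across the 10.4 -> 11.1 upgrade. A short sketch of the usual post-upgrade checks is below; the volume name 'workdata' is an assumption taken from the brick paths.
# gluster --version
# gluster peer status                          # all peers should be in state 'Peer in Cluster (Connected)'
# gluster volume status workdata               # bricks and self-heal daemons online (Y) with a PID
# gluster volume heal workdata info summary    # pending heal counts per brick
# gluster volume get all cluster.op-version    # confirm the cluster op-version after the upgrade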
2024 Jan 18
2
Upgrade 10.4 -> 11.1 making problems
...03
>> Brick glusterpub1:/gluster/md6/workdata 55829 0 Y 4583
>> Brick glusterpub2:/gluster/md6/workdata 50455 0 Y 3296
>> Brick glusterpub3:/gluster/md6/workdata 50262 0 Y 3237
>> Brick glusterpub1:/gluster/md7/workdata 52238 0 Y 5014
>> Brick glusterpub2:/gluster/md7/workdata 52474 0 Y 3673
>> Brick glusterpub3:/gluster/md7/workdata 57966 0 Y 3653
>> Self-heal Daemon on localhost N/A N/A Y...
2024 Jan 18
1
Upgrade 10.4 -> 11.1 making problems
...Y 2803
> Brick glusterpub1:/gluster/md6/workdata 55829 0 Y 4583
> Brick glusterpub2:/gluster/md6/workdata 50455 0 Y 3296
> Brick glusterpub3:/gluster/md6/workdata 50262 0 Y 3237
> Brick glusterpub1:/gluster/md7/workdata 52238 0 Y 5014
> Brick glusterpub2:/gluster/md7/workdata 52474 0 Y 3673
> Brick glusterpub3:/gluster/md7/workdata 57966 0 Y 3653
> Self-heal Daemon on localhost N/A N/A Y 4141
>...
2024 Jan 18
2
Upgrade 10.4 -> 11.1 making problems
...Y 2803
>> Brick glusterpub1:/gluster/md6/workdata 55829 0 Y 4583
>> Brick glusterpub2:/gluster/md6/workdata 50455 0 Y 3296
>> Brick glusterpub3:/gluster/md6/workdata 50262 0 Y 3237
>> Brick glusterpub1:/gluster/md7/workdata 52238 0 Y 5014
>> Brick glusterpub2:/gluster/md7/workdata 52474 0 Y 3673
>> Brick glusterpub3:/gluster/md7/workdata 57966 0 Y 3653
>> Self-heal Daemon on localhost N/A N/A Y 4141
>...
2024 Jan 18
1
Upgrade 10.4 -> 11.1 making problems
...ck glusterpub1:/gluster/md6/workdata 55829 0 Y 4583
> >> Brick glusterpub2:/gluster/md6/workdata 50455 0 Y 3296
> >> Brick glusterpub3:/gluster/md6/workdata 50262 0 Y 3237
> >> Brick glusterpub1:/gluster/md7/workdata 52238 0 Y 5014
> >> Brick glusterpub2:/gluster/md7/workdata 52474 0 Y 3673
> >> Brick glusterpub3:/gluster/md7/workdata 57966 0 Y 3653
> >> Self-heal Daemon on localhost N/A...
2023 Mar 30
2
Performance: lots of small files, hdd, nvme etc.
...r/md4/workdata
Brick7: glusterpub1:/gluster/md5/workdata
Brick8: glusterpub2:/gluster/md5/workdata
Brick9: glusterpub3:/gluster/md5/workdata
Brick10: glusterpub1:/gluster/md6/workdata
Brick11: glusterpub2:/gluster/md6/workdata
Brick12: glusterpub3:/gluster/md6/workdata
Brick13: glusterpub1:/gluster/md7/workdata
Brick14: glusterpub2:/gluster/md7/workdata
Brick15: glusterpub3:/gluster/md7/workdata
Options Reconfigured:
performance.client-io-threads: off
nfs.disable: on
transport.address-family: inet
performance.cache-invalidation: on
performance.stat-prefetch: on
features.cache-invalidation-timeout...
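The 'Options Reconfigured' block above is truncated; for reference, options of this kind are toggled per volume with 'gluster volume set'. A sketch follows, again assuming the volume is named 'workdata'; the values shown are common small-file/md-cache settings, not the poster's actual configuration.
# gluster volume set workdata performance.stat-prefetch on
# gluster volume set workdata performance.cache-invalidation on
# gluster volume set workdata features.cache-invalidation on
# gluster volume set workdata features.cache-invalidation-timeout 600
# gluster volume set workdata network.inode-lru-limit 200000
# gluster volume get workdata all | grep -E 'cache-invalidation|stat-prefetch'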
2024 Jan 18
1
Upgrade 10.4 -> 11.1 making problems
...rpub1:/gluster/md6/workdata 55829 0 Y 4583
>>>> Brick glusterpub2:/gluster/md6/workdata 50455 0 Y 3296
>>>> Brick glusterpub3:/gluster/md6/workdata 50262 0 Y 3237
>>>> Brick glusterpub1:/gluster/md7/workdata 52238 0 Y 5014
>>>> Brick glusterpub2:/gluster/md7/workdata 52474 0 Y 3673
>>>> Brick glusterpub3:/gluster/md7/workdata 57966 0 Y 3653
>>>> Self-heal Daemon on localhost N...
2007 Mar 06
1
blocks 256k chunks on RAID 1
Hi, I have a RAID 1 (using mdadm) on CentOS Linux and in /proc/mdstat I
see this:
md7 : active raid1 sda2[0] sdb2[1]
26627648 blocks [2/2] [UU] [-->> it's OK]
md1 : active raid1 sdb3[1] sda3[0]
4192896 blocks [2/2] [UU] [-->> it's OK]
md2 : active raid1 sda5[0] sdb5[1]
4192832 blocks [2/2] [UU] [-->> it's OK]
md3 : active raid1 sdb6[1] sda6[0]
419283...
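Chunk size is a striping parameter, so a RAID1 mirror has none; /proc/mdstat only prints an 'NNNk chunks' field for striped levels such as raid0/raid5. A quick way to confirm what each array reports, using device names from the listing above:
# cat /proc/mdstat
# mdadm --detail /dev/md7    # RAID1: no "Chunk Size" line is printed
# mdadm --detail /dev/md2    # same for the other mirrors; only striped arrays report a chunk size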
2024 Jan 19
1
Upgrade 10.4 -> 11.1 making problems
...;> Brick glusterpub1:/gluster/md6/workdata 55829 0 Y 4583
> >> Brick glusterpub2:/gluster/md6/workdata 50455 0 Y 3296
> >> Brick glusterpub3:/gluster/md6/workdata 50262 0 Y 3237
> >> Brick glusterpub1:/gluster/md7/workdata 52238 0 Y 5014
> >> Brick glusterpub2:/gluster/md7/workdata 52474 0 Y 3673
> >> Brick glusterpub3:/gluster/md7/workdata 57966 0 Y 3653
> >> Self-heal Daemon on localhost N/A N/A...
2024 Jan 19
1
Upgrade 10.4 -> 11.1 making problems
...sterpub1:/gluster/md6/workdata 55829 0 Y
> 4583
> > >> Brick glusterpub2:/gluster/md6/workdata 50455 0 Y
> 3296
> > >> Brick glusterpub3:/gluster/md6/workdata 50262 0 Y
> 3237
> > >> Brick glusterpub1:/gluster/md7/workdata 52238 0 Y
> 5014
> > >> Brick glusterpub2:/gluster/md7/workdata 52474 0 Y
> 3673
> > >> Brick glusterpub3:/gluster/md7/workdata 57966 0 Y
> 3653
> > >> Self-heal Daemon on localhost N/A...
2010 Nov 04
1
orphan inodes deleted issue
...anup: deleting unreferenced inode 2009799
ext3_orphan_cleanup: deleting unreferenced inode 2009794
EXT3-fs: md1: 27 orphan inodes deleted
EXT3-fs: recovery complete.
It's my array:
$ df -h
Filesystem Size Used Avail Use% Mounted on
/dev/md1 9.5G 735M 8.3G 8% /
/dev/md7 38G 6.7G 30G 19% /var
/dev/md6 15G 4.5G 9.1G 33% /usr
/dev/md5 103G 45G 54G 46% /backup
/dev/md3 284G 42G 228G 16% /home
/dev/md2 2.0G 214M 1.7G 12% /tmp
/dev/md0 243M 24M 207M 11% /boot
I'...
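Orphan-inode cleanup messages appear while ext3 replays its journal after an unclean shutdown; by themselves they are routine, but if they recur a full offline check is the usual next step. A sketch, using the device names from the df output above and assuming a sysvinit-era CentOS:
# touch /forcefsck && reboot   # have rc.sysinit fsck the root filesystem on the next boot
# e2fsck -f /dev/md1           # or check it from rescue media while / is not mounted
# tune2fs -l /dev/md1 | grep -i 'state\|mount count'   # confirm the filesystem is marked clean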
2024 Jan 20
1
Upgrade 10.4 -> 11.1 making problems
...uster/md6/workdata 55829 0 Y 4583
>> > >> Brick glusterpub2:/gluster/md6/workdata 50455 0 Y 3296
>> > >> Brick glusterpub3:/gluster/md6/workdata 50262 0 Y 3237
>> > >> Brick glusterpub1:/gluster/md7/workdata 52238 0 Y 5014
>> > >> Brick glusterpub2:/gluster/md7/workdata 52474 0 Y 3673
>> > >> Brick glusterpub3:/gluster/md7/workdata 57966 0 Y 3653
>> > >> Self-heal Daemon on localhost...
2024 Jan 24
1
Upgrade 10.4 -> 11.1 making problems
...uster/md6/workdata 55829 0 Y 4583
>> > >> Brick glusterpub2:/gluster/md6/workdata 50455 0 Y 3296
>> > >> Brick glusterpub3:/gluster/md6/workdata 50262 0 Y 3237
>> > >> Brick glusterpub1:/gluster/md7/workdata 52238 0 Y 5014
>> > >> Brick glusterpub2:/gluster/md7/workdata 52474 0 Y 3673
>> > >> Brick glusterpub3:/gluster/md7/workdata 57966 0 Y 3653
>> > >> Self-heal Daemon on localhost ...
2001 Nov 11
2
Software RAID and ext3 problem
...4.13 with the
appropriate ext3 patch and a software RAID array with partitions as shown
below:
Filesystem Size Used Avail Use% Mounted on
/dev/md5 939M 237M 654M 27% /
/dev/md0 91M 22M 65M 25% /boot
/dev/md6 277M 8.1M 254M 4% /tmp
/dev/md7 1.8G 1.3G 595M 69% /usr
/dev/md8 938M 761M 177M 82% /var
/dev/md9 9.2G 2.6G 6.1G 30% /home
/dev/md10 11G 2.1G 8.7G 19% /scratch
/dev/md12 56G 43G 13G 77% /global
The /usr and /var filesystems keep switching to ro mod...
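ext3 drops to read-only when it hits an I/O or metadata error and the errors=remount-ro policy is in effect, so the kernel log and the state of the underlying md arrays are the first things to look at. A hedged sketch, using /dev/md7 (/usr) from the listing above:
# dmesg | grep -iE 'ext3|i/o error|md[0-9]'   # find the error that triggered the remount
# cat /proc/mdstat                            # mirrors should all show [UU]
# umount /usr && e2fsck -f /dev/md7           # full check of the affected filesystem
# mount /usr
# tune2fs -l /dev/md7 | grep -i errors        # error behaviour recorded in the superblock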
2024 Jan 25
1
Upgrade 10.4 -> 11.1 making problems
...ata 55829 0 Y 4583
> >> > >> Brick glusterpub2:/gluster/md6/workdata 50455 0 Y 3296
> >> > >> Brick glusterpub3:/gluster/md6/workdata 50262 0 Y 3237
> >> > >> Brick glusterpub1:/gluster/md7/workdata 52238 0 Y 5014
> >> > >> Brick glusterpub2:/gluster/md7/workdata 52474 0 Y 3673
> >> > >> Brick glusterpub3:/gluster/md7/workdata 57966 0 Y 3653
> >> > >> Self-heal Daemon on l...
2024 Jan 27
1
Upgrade 10.4 -> 11.1 making problems
...ata 55829 0 Y 4583
> >> > >> Brick glusterpub2:/gluster/md6/workdata 50455 0 Y 3296
> >> > >> Brick glusterpub3:/gluster/md6/workdata 50262 0 Y 3237
> >> > >> Brick glusterpub1:/gluster/md7/workdata 52238 0 Y 5014
> >> > >> Brick glusterpub2:/gluster/md7/workdata 52474 0 Y 3673
> >> > >> Brick glusterpub3:/gluster/md7/workdata 57966 0 Y 3653
> >> > >> Self-heal Daemon on l...
2002 Feb 28
5
Problems with ext3 fs
...jinsky:~$ cat /proc/mdstat
Personalities : [linear] [raid0] [raid1] [raid5]
read_ahead 1024 sectors
md0 : active raid1 hdc1[1] hda1[0]
96256 blocks [2/2] [UU]
md5 : active raid1 hdk1[1] hde1[0]
976640 blocks [2/2] [UU]
md6 : active raid1 hdk5[1] hde5[0]
292672 blocks [2/2] [UU]
md7 : active raid1 hdk6[1] hde6[0]
1952896 blocks [2/2] [UU]
md8 : active raid1 hdk7[1] hde7[0]
976640 blocks [2/2] [UU]
md9 : active raid1 hdk8[1] hde8[0]
9765376 blocks [2/2] [UU]
md10 : active raid0 hdk9[1] hde9[0]
12108800 blocks 4k chunks
md12 : active raid5 hdk3[3] hde...
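The arrays in this last listing are built from two PATA disks (hde and hdk), so ext3 trouble is worth cross-checking against the health of the members. A sketch, assuming mdadm and smartmontools are available (raidtools and smartsuite would have filled the same role in that era):
# cat /proc/mdstat                 # [_U] or [U_] instead of [UU] means a degraded mirror
# mdadm --detail /dev/md7          # array state plus failed/spare device counts
# smartctl -a /dev/hde             # SMART health and error log of each member disk
# smartctl -a /dev/hdk
# grep -i 'hd[ek]\|ext3' /var/log/messages   # kernel-reported IDE or filesystem errors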