Displaying 20 results from an estimated 35 matches for "raiddevice".
2011 Aug 17 · 1 · RAID5 suddenly broken
...Update Time : Wed Aug 17 14:47:36 2011
State : active
Active Devices : 2
Working Devices : 2
Failed Devices : 1
Spare Devices : 0
Checksum : ed6d5dcd - correct
Events : 38857
Layout : left-symmetric
Chunk Size : 256K
Number Major Minor RaidDevice State
this 0 8 3 0 active sync /dev/sda3
0 0 8 3 0 active sync /dev/sda3
1 1 0 0 1 faulty removed
2 2 8 51 2 active sync /dev/sdd3
/dev/sdb3:
Magic : a92b4efc...
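In threads like this one, a common first diagnostic is comparing the Events counter in each member's superblock: members whose counter lags behind were ejected earlier and need a re-add and rebuild. A minimal sketch, parsing a captured sample rather than live devices (against real hardware you would run `mdadm --examine /dev/sda3` and so on); the sample value is taken from the excerpt above:

```shell
# Extract the Events counter from mdadm --examine style output.
# A member whose Events value is lower than its peers' was kicked out first.
sample="$(cat <<'EOF'
          State : active
         Events : 38857
EOF
)"
events=$(printf '%s\n' "$sample" | awk -F' : ' '/Events/ {print $2}')
echo "$events"
```

Running this against each member's `--examine` output and comparing the numbers tells you which disk to re-add.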
2009 May 20 · 2 · help with rebuilding md0 (Raid5)
...0
Update Time : Tue May 19 21:40:43 2009
State : active
Active Devices : 2
Working Devices : 2
Failed Devices : 1
Spare Devices : 0
Checksum : cbe14089 - correct
Events : 0.22
Layout : left-asymmetric
Chunk Size : 64K
Number Major Minor RaidDevice State
this 0 3 4 0 active sync /dev/hda4
0 0 3 4 0 active sync /dev/hda4
1 1 22 4 1 faulty /dev/hdc4
2 2 0 0 2 faulty removed
3 3 34 4 3...
2006 Apr 23 · 1 · RAID question
...Persistence : Superblock is persistent
Update Time : Thu Apr 20 15:52:50 2006
State : dirty, no-errors
<--------------------------------------------------- RAID STATE
Active Devices : 2
Working Devices : 2
Failed Devices : 0
Spare Devices : 0
Number Major Minor RaidDevice State
0 33 7 0 active sync /dev/hde7
1 34 7 1 active sync /dev/hdg7
UUID : 232294aa:45bd9dea:c62face1:2fbf7a60
Events : 0.38
..........................................
while in CentOS 3 systems we find:
............
2014 Feb 07 · 3 · Software RAID1 Failure Help
...8 (1389.17 GiB 1491.61 GB)
Raid Devices : 2
Total Devices : 2
Preferred Minor : 2
Persistence : Superblock is persistent
Update Time : Fri Feb 7 15:21:45 2014
State : active, degraded
Active Devices : 1
Working Devices : 1
Failed Devices : 1
Spare Devices : 0
Events : 0.758203
Number Major Minor RaidDevice State
0 8 3 0 active sync /dev/sda3
1 0 0 1 removed
2 8 19 - faulty spare /dev/sdb3
[root@server ~]# mdadm --detail /dev/md1
/dev/md1:
Version : 0.90
Creation Time : Tue Jan 4 05:39:36 2011
Raid Level : raid1
Array Size : 8385856 (8.00 GiB 8.59 GB)
Used Dev Size : 8385856 (8.00 GiB 8.59 GB)
R...
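Before replacing the failed mirror half, it helps to pull the faulty members out of the `mdadm --detail` device table programmatically. A small sketch over the table quoted above (against a live array you would pipe `mdadm --detail /dev/md2` instead of the here-document):

```shell
# List devices flagged faulty in an mdadm --detail device table.
# The table rows are the ones shown in the post above.
detail="$(cat <<'EOF'
   0       8        3        0      active sync   /dev/sda3
   1       0        0        1      removed
   2       8       19        -      faulty spare  /dev/sdb3
EOF
)"
faulty=$(printf '%s\n' "$detail" | awk '/faulty/ {print $NF}')
echo "$faulty"
```

The printed device is the one to `mdadm --remove` and, after replacing or checking the disk, `mdadm --add` back.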
2010 Nov 14 · 3 · RAID Resynch...??
...State : clean, resyncing
Active Devices : 2
Working Devices : 2
Failed Devices : 0
Spare Devices : 0
Layout : far=2
Chunk Size : 512K
Rebuild Status : 57% complete
UUID : f045370a:5be687e9:73e57992:06ea59e5
Events : 0.8
Number Major Minor RaidDevice State
0 8 17 0 active sync /dev/sdb1
1 8 33 1 active sync /dev/sdc1
I've done some googling but I am stumped as to what would have kicked this
off; if anyone has any insight, that would be great...The system has been up
for ~34 days and...
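Resync progress can also be watched in /proc/mdstat rather than via repeated `mdadm --detail` calls. A sketch parsing an illustrative /proc/mdstat sample (the sample lines are invented for the example; the percentage matches the "Rebuild Status : 57% complete" line above):

```shell
# Pull the resync percentage out of /proc/mdstat.
# On a live system: cat /proc/mdstat instead of the here-document.
mdstat="$(cat <<'EOF'
md0 : active raid10 sdc1[1] sdb1[0]
      [===========>.........]  resync = 57.0% (558964/976762) finish=9.2min
EOF
)"
pct=$(printf '%s\n' "$mdstat" | sed -n 's/.*resync = \([0-9.]*\)%.*/\1/p')
echo "$pct"
```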
2010 Feb 28 · 3 · puzzling md error ?
...Persistence : Superblock is persistent
Update Time : Sun Feb 28 04:53:29 2010
State : clean
Active Devices : 2
Working Devices : 2
Failed Devices : 0
Spare Devices : 0
UUID : b6da4dc5:c7372d6e:63f32b9c:49fa95f9
Events : 0.84
Number Major Minor RaidDevice State
0 8 33 0 active sync /dev/sdc1
1 8 49 1 active sync /dev/sdd1
# mdadm --detail /dev/md11
/dev/md11:
Version : 0.90
Creation Time : Wed Oct 8 12:54:57 2008
Raid Level : raid1
Array Size : 143374656 (136.73 G...
2006 Jun 24 · 3 · recover data from linear raid
Hello,
I had a Scientific Linux 3.0.4 system (RHEL compatible), with 3
IDE disks, one for / and two others in linear raid (250 GB and 300 GB
each).
This system was obsoleted, so I moved the raid disks to a new
Scientific Linux 3.0.7 installation. However, the raid array was not
detected (I put the disks on the same channels and the same master/slave
setup as in the previous setup). In fact
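When a moved array is not auto-detected, the usual recovery path is to scan the members' superblocks and assemble explicitly. A dry-run sketch, assuming the array members have persistent superblocks; the device names (/dev/hdb1, /dev/hdd1, /dev/md0) are hypothetical since the post does not give them, so the commands are printed rather than executed:

```shell
# Dry run: print the reassembly commands instead of running them.
# Verify device names with `mdadm --examine --scan` output before
# running anything by hand.
cmds="mdadm --examine --scan
mdadm --assemble /dev/md0 /dev/hdb1 /dev/hdd1
mdadm --detail /dev/md0"
printf '%s\n' "$cmds"
```

`--examine --scan` reports the arrays described in the on-disk superblocks (including the UUID), which is enough to confirm the moved disks still carry their metadata before assembling.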
2010 Jan 05 · 4 · Software RAID1 Disk I/O
I just installed the CentOS 5.4 64-bit release on a 1.9 GHz CPU with 8 GB of
RAM. It has 2 Western Digital 1.5 TB SATA2 drives in RAID1.
[root@server ~]# df -h
Filesystem Size Used Avail Use% Mounted on
/dev/md2 1.4T 1.4G 1.3T 1% /
/dev/md0 99M 19M 76M 20% /boot
tmpfs 4.0G 0 4.0G 0% /dev/shm
[root@server ~]#
It's barebones
2011 Apr 26 · 0 · mdraid woes (missing superblock?)
...Devices : 2
Total Devices : 2
Preferred Minor : 5
Update Time : Tue Apr 26 11:48:35 2011
State : clean
Active Devices : 2
Working Devices : 2
Failed Devices : 0
Spare Devices : 0
Checksum : 657d7a24 - correct
Events : 14
Number Major Minor RaidDevice State
this 0 8 17 0 active sync /dev/sdb1
0 0 8 17 0 active sync /dev/sdb1
1 1 8 49 1 active sync /dev/sdd1
# examine the other partition
/dev/sdd1:
Magic : a92b4efc
Version : 0.9...
2014 Dec 03 · 7 · DegradedArray message
...ate Time : Tue Dec 2 20:02:55 2014
State : active, degraded
Active Devices : 1
Working Devices : 1
Failed Devices : 0
Spare Devices : 0
Name : desk4.localdomain:0
UUID : 29f70093:ae78cf9f:0ab7c1cd:e380f50b
Events : 266241
Number Major Minor RaidDevice State
0 0 0 0 removed
1 253 3 1 active sync /dev/dm-3
[root@desk4 ~]# mdadm --query --detail /dev/md1
/dev/md1:
Version : 1.1
Creation Time : Thu Nov 15 19:24:19 2012
Raid Level : raid1
Array...
2006 Aug 10 · 3 · MD raid tools ... did i missed something?
...2
Persistence : Superblock is persistent
Update Time : Thu Aug 10 07:32:45 2006
State : dirty, degraded
Active Devices : 2
Working Devices : 2
Failed Devices : 0
Spare Devices : 0
Layout : left-symmetric
Chunk Size : 256K
Number Major Minor RaidDevice State
0 0 0 -1 removed
1 8 21 1 active sync /dev/sdb5
2 8 37 2 active sync /dev/sdc5
UUID : 4c77d8a9:3952f00b:876ce47a:a65d5522
Events : 0.12152695
=============================...
2012 Jun 28 · 2 · Strange du/df behaviour.
...te Time : Thu Jun 28 10:17:04 2012
State : active
Active Devices : 4
Working Devices : 4
Failed Devices : 0
Spare Devices : 0
Layout : near=2
Chunk Size : 64K
UUID : 423fd5cf:beedc018:915808f0:8ec673de
Events : 0.845339
Number Major Minor RaidDevice State
0 8 17 0 active sync /dev/sdb1
1 8 33 1 active sync /dev/sdc1
2 8 49 2 active sync /dev/sdd1
3 8 65 3 active sync /dev/sde1
Any clues why du shows wrong and flo...
2012 Jun 07 · 1 · mdadm: failed to write superblock to
...Superblock is persistent
Update Time : Thu Jun 7 08:57:12 2012
State : clean, degraded
Active Devices : 1
Working Devices : 1
Failed Devices : 0
Spare Devices : 0
UUID : 9beaf2eb:4b5c7416:776c2c25:004bd7b2
Events : 0.147
Number Major Minor RaidDevice State
0 0 0 0 removed
1 8 17 1 active sync /dev/sdb1
Thanks Sebastian
2008 Nov 14 · 0 · Still working on a Member Server
...=Computers ;
ldap suffix = dc=GUM,dc=COM ;
ldap user suffix = ou=People ;
idmap backend = ldap://192.168.1.245
idmap uid = 10000-20000 ;
idmap gid = 10000-20000 ;
winbind enum users = Yes
winbind enum groups = Yes
winbind trusted domains only = Yes
[GUMSHARE]
comment = GUMSHARE
path = /RAIDDEVICE/GUMSHARE
username = GUM+user1,@"GUM+Domain Users"
read list = GUM+user1, "@GUM+Domain Users"
write list = "@GUM+Domain Users"
read only = No
create mask = 0774
security mask = 0774
force security mode = 0770
directory mask = 02777
directory security mask = 077...
2023 Mar 30 · 1 · Performance: lots of small files, hdd, nvme etc.
Well, you have *way* more files than we do... :)
On 30/03/2023 11:26, Hu Bert wrote:
> Just an observation: is there a performance difference between a sw
> raid10 (10 disks -> one brick) or 5x raid1 (each raid1 a brick)
Err... RAID10 is not 10 disks unless you stripe 5 mirrors of 2 disks.
> with
> the same disks (10TB hdd)? The heal processes on the 5xraid1-scenario
>
2013 Mar 28 · 1 · Glusterfs gives up with endpoint not connected
...ime : Thu Mar 28 11:13:21 2013
State : clean
Active Devices : 3
Working Devices : 4
Failed Devices : 0
Spare Devices : 1
Layout : left-symmetric
Chunk Size : 64K
UUID : c484e093:018a2517:56e38f5e:1a216491
Events : 0.250
Number Major Minor RaidDevice State
0 8 49 0 active sync /dev/sdd1
1 8 65 1 active sync /dev/sde1
2 8 97 2 active sync /dev/sdg1
3 8 81 - spare /dev/sdf1
[root@tuepdc glusterfs]# tail -f mnt-...
2015 Feb 18 · 5 · CentOS 7: software RAID 5 array with 4 disks and no spares?
...ate : active
Active Devices : 4
Working Devices : 4
Failed Devices : 0
Spare Devices : 0
Layout : left-symmetric
Chunk Size : 512K
Name : localhost:root
UUID : cfc13fe9:8fa811d8:85649402:58c4846e
Events : 4703
Number Major Minor RaidDevice State
0 8 3 0 active sync /dev/sda3
1 8 19 1 active sync /dev/sdb3
2 8 35 2 active sync /dev/sdc3
4 8 51 3 active sync /dev/sdd3
Apparently no spare devices have...
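The device table itself confirms what "Spare Devices : 0" says directly: all four rows are active data members and none is marked spare. A sketch checking that mechanically over the table quoted above:

```shell
# Count spare members in an mdadm --detail device table.
# Zero matches means every listed disk is an active data member.
detail="$(cat <<'EOF'
   0       8        3        0      active sync   /dev/sda3
   1       8       19        1      active sync   /dev/sdb3
   2       8       35        2      active sync   /dev/sdc3
   4       8       51        3      active sync   /dev/sdd3
EOF
)"
spares=$(printf '%s\n' "$detail" | grep -c 'spare' || true)
echo "$spares"
```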
2010 Aug 18 · 3 · Wrong disk size problem.
...Persistence : Superblock is persistent
Update Time : Wed Aug 18 10:01:27 2010
State : active
Active Devices : 2
Working Devices : 2
Failed Devices : 0
Spare Devices : 0
UUID : a0bf9972:b46617a2:bf37d07b:5d34b930
Events : 0.51
Number Major Minor RaidDevice State
0 8 2 0 active sync /dev/sda2
1 8 18 1 active sync /dev/sdb2
2008 Nov 14 · 0 · WG: Still working on a Member Server
...=Computers ;
ldap suffix = dc=GUM,dc=COM ;
ldap user suffix = ou=People ;
idmap backend = ldap://192.168.1.245
idmap uid = 10000-20000 ;
idmap gid = 10000-20000 ;
winbind enum users = Yes
winbind enum groups = Yes
winbind trusted domains only = Yes
[GUMSHARE]
comment = GUMSHARE
path = /RAIDDEVICE/GUMSHARE
username = GUM+user1,@"GUM+Domain Users"
read list = GUM+user1, "@GUM+Domain Users"
write list = "@GUM+Domain Users"
read only = No
create mask = 0774
security mask = 0774
force security mode = 0770
directory mask = 02777
directory security mask = 077...
2015 Feb 18 · 0 · CentOS 7: software RAID 5 array with 4 disks and no spares?
...ate : active
Active Devices : 4
Working Devices : 4
Failed Devices : 0
Spare Devices : 0
Layout : left-symmetric
Chunk Size : 512K
Name : localhost:root
UUID : cfc13fe9:8fa811d8:85649402:58c4846e
Events : 4703
Number Major Minor RaidDevice State
0 8 3 0 active sync /dev/sda3
1 8 19 1 active sync /dev/sdb3
2 8 35 2 active sync /dev/sdc3
4 8 51 3 active sync /dev/sdd3
Apparently no spare devices have...