search for: raiddevices

Displaying 20 results from an estimated 35 matches for "raiddevices".

2011 Aug 17
1
RAID5 suddenly broken
Hello, I have a RAID5 array on my CentOS 5.6 x86_64 workstation which "suddenly" failed to work (actually after the system could not resume from a suspend). I recently had issues after moving the workstation to another office, where one of the disks got accidentally unplugged. But the RAID was working and had reconstructed (as far as I can tell) the data. After I replugged the disk,
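For a RAID5 that stops assembling after an unclean shutdown like this, the usual first steps look roughly as below. This is a sketch only: /dev/md0 and the sd[abc]1 member names are hypothetical, and the commands need root and the real disks, so they are not runnable here.

```shell
# Read-only inspection of each member's superblock (event counts, state):
mdadm --examine /dev/sd[abc]1
# Try a normal reassembly; mdadm matches members by array UUID:
mdadm --assemble --scan
# If one member's event count lags slightly behind the others,
# force-assemble as a last resort (may lose the most recent writes):
mdadm --assemble --force /dev/md0 /dev/sda1 /dev/sdb1 /dev/sdc1
```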
2009 May 20
2
help with rebuilding md0 (Raid5)
Sorry, this is going to be a rather long post... Here's the situation: I have 4 IDE disks from an old SNAP server which fails to mount the raid array. We believe there is a controller error on the SNAP, so we've put them in another box running CentOS 5 and can see the disks OK. hda through hdd look like this: Disk /dev/hdd: 185.2 GB, 185283624960 bytes 255 heads, 63 sectors/track, 22526
2006 Apr 23
1
RAID question
Hello. I have several systems (CentOS 3 and CentOS 4) with software raid and I've observed a difference in the raid state: In CentOS 3 systems we find that mdadm --detail /dev/md* show: .............. Version : 00.90.00 Creation Time : Fri Apr 16 14:59:43 2004 Raid Level : raid1 Array Size : 20289984 (19.35 GiB 20.78 GB) Device Size : 20289984 (19.35 GiB 20.78 GB)
2014 Feb 07
3
Software RAID1 Failure Help
I am running software RAID1 on a somewhat critical server. Today I noticed one drive is giving errors. Good thing I had RAID. I planned on upgrading this server in the next month or so. Just wondering if there is an easy way to fix this to avoid rushing the upgrade? Having a single drive is slowing down reads as well, I think. Thanks. Feb 7 15:28:28 server smartd[2980]: Device: /dev/sdb
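The usual way out of a one-sided RAID1 like this is to fail and remove the erroring member, swap the disk, and re-add. A sketch, assuming /dev/md0 mirrors sda1/sdb1 (names hypothetical; needs root and real hardware, so not runnable here):

```shell
# Mark the failing member faulty and pull it from the mirror:
mdadm /dev/md0 --fail /dev/sdb1 --remove /dev/sdb1
# After physically replacing the disk, copy the partition layout
# from the surviving drive (MBR disks) and re-add the member:
sfdisk -d /dev/sda | sfdisk /dev/sdb
mdadm /dev/md0 --add /dev/sdb1
# Watch the rebuild:
cat /proc/mdstat
```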
2010 Nov 14
3
RAID Resynch...??
Still coming up to speed with mdadm: this morning I noticed one of my servers acting sluggish... so when I looked at the mdadm raid device I saw this: mdadm --detail /dev/md0 /dev/md0: Version : 0.90 Creation Time : Mon Sep 27 22:47:44 2010 Raid Level : raid10 Array Size : 976759808 (931.51 GiB 1000.20 GB) Used Dev Size : 976759808 (931.51 GiB 1000.20 GB) Raid
2010 Feb 28
3
puzzling md error ?
this has never happened to me before, and I'm somewhat at a loss. got an email from the cron thing... /etc/cron.weekly/99-raid-check: WARNING: mismatch_cnt is not 0 on /dev/md10 WARNING: mismatch_cnt is not 0 on /dev/md11 ok, md10 and md11 are each raid1s made from 2 x 72GB scsi drives, on a dell 2850 or something, a dual single-core 3GHz server. these two md's are in
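The weekly raid-check script drives the md machinery through sysfs, and the same knobs can be used by hand. A sketch for md10 (needs root and the real array, so not runnable here); the sysfs paths are the standard md ones:

```shell
# Re-run a consistency check and read the resulting mismatch count:
echo check > /sys/block/md10/md/sync_action
cat /sys/block/md10/md/mismatch_cnt
# On raid1, nonzero counts can be benign (e.g. pages changing
# mid-write under swap); if they persist, rewrite the out-of-sync
# blocks from the first copy:
echo repair > /sys/block/md10/md/sync_action
```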
2006 Jun 24
3
recover data from linear raid
Hello, I had a Scientific Linux 3.0.4 system (RHEL compatible) with 3 IDE disks, one for / and two others in linear raid (250 GB and 300 GB). This system was obsoleted, so I moved the raid disks to a new Scientific Linux 3.0.7 installation. However, the raid array was not detected (I put the disks on the same channels and same master/slave setup as in the previous setup). In fact
2010 Jan 05
4
Software RAID1 Disk I/O
I just installed the CentOS 5.4 64-bit release on a 1.9 GHz CPU with 8 GB of RAM. It has 2 Western Digital 1.5TB SATA2 drives in RAID1. [root at server ~]# df -h Filesystem Size Used Avail Use% Mounted on /dev/md2 1.4T 1.4G 1.3T 1% / /dev/md0 99M 19M 76M 20% /boot tmpfs 4.0G 0 4.0G 0% /dev/shm [root at server ~]# It's barebones
2011 Apr 26
0
mdraid woes (missing superblock?)
I have a raid1 array which is somehow faulty. There is 1.5 TB of stuff on it, and I would not want to lose it (though I have a full backup). The array cannot be mounted on startup (the error message was "missing superblock"). I had to boot from DVD with linux rescue and remove the array from fstab. Here is some info - I am a little dumbfounded. [root at a134-224 log]# cat /proc/mdstat (...) md5 :
2014 Dec 03
7
DegradedArray message
Received the following message in mail to root: Message 257: >From root at desk4.localdomain Tue Oct 28 07:25:37 2014 Return-Path: <root at desk4.localdomain> X-Original-To: root Delivered-To: root at desk4.localdomain From: mdadm monitoring <root at desk4.localdomain> To: root at desk4.localdomain Subject: DegradedArray event on /dev/md0:desk4 Date: Tue, 28 Oct 2014 07:25:27
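These DegradedArray mails come from the mdmonitor service, which reads its mail recipient from mdadm.conf. A minimal sketch (the path /etc/mdadm.conf is the usual CentOS location, assumed here):

```shell
# /etc/mdadm.conf -- mdmonitor mails DegradedArray/Fail events here:
MAILADDR root
# To verify delivery without waiting for a real failure:
#   mdadm --monitor --scan --oneshot --test
```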
2006 Aug 10
3
MD raid tools ... did I miss something?
Hi I have a degraded array /dev/md2 ===================================================================== $ mdadm -D /dev/md2 /dev/md2: Version : 00.90.01 Creation Time : Thu Oct 6 20:31:57 2005 Raid Level : raid5 Array Size : 221953536 (211.67 GiB 227.28 GB) Device Size : 110976768 (105.84 GiB 113.64 GB) Raid Devices : 3 Total Devices : 2 Preferred Minor : 2
2012 Jun 28
2
Strange du/df behaviour.
Hi all. I currently have a server: cat /etc/redhat-release CentOS release 5.7 (Final) uname -a Linux host.domain.com 2.6.18-274.18.1.el5 #1 SMP Thu Feb 9 12:45:44 EST 2012 x86_64 x86_64 x86_64 GNU/Linux I have a filesystem mounted there: /dev/vg0/paczki /home/paczki-workdir ext4 defaults,noatime 0 0 on which df gives strange output: LANG=C df -h
2012 Jun 07
1
mdadm: failed to write superblock to
Hello, I have a little problem. Our server has a broken RAID. # cat /proc/mdstat Personalities : [raid1] md0 : active raid1 sda1[2](F) sdb1[1] 2096064 blocks [2/1] [_U] md2 : active raid1 sda3[2](F) sdb3[1] 1462516672 blocks [2/1] [_U] md1 : active raid1 sda2[0] sdb2[1] 524224 blocks [2/2] [UU] unused devices: <none> I removed the partition: # mdadm --remove
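In mdstat output like the above, each 'U' in the status brackets is a healthy member and each '_' a failed or missing one, so a degraded array is any whose brackets contain an underscore. A small sketch that extracts the degraded arrays from the quoted output with awk (pure text processing, so it runs anywhere):

```shell
# /proc/mdstat content as quoted in the post above:
mdstat='md0 : active raid1 sda1[2](F) sdb1[1]
      2096064 blocks [2/1] [_U]
md2 : active raid1 sda3[2](F) sdb3[1]
      1462516672 blocks [2/1] [_U]
md1 : active raid1 sda2[0] sdb2[1]
      524224 blocks [2/2] [UU]'
# Remember the current array name; print it when a later status line
# has a bracket group of U/_ characters that includes "_":
degraded=$(printf '%s\n' "$mdstat" |
  awk '/^md/ {name=$1} /\[[U_]+\]/ && /_/ {print name}')
echo "$degraded"   # -> md0 and md2, each on its own line
```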
2008 Nov 14
0
Still working on a Member Server
Going through the examples and reading through the wikis, I still have not found exactly what I was looking for on matching UIDs and GIDs. using samba samba3-3.0.32-36 We currently have a domain controller Samba/LDAP PDC. samba-3.0.20b-1 Previous member servers samba-3.0.10-1.4 and I went to add a member server. Now I find that users and groups don't match. So from
2023 Mar 30
1
Performance: lots of small files, hdd, nvme etc.
Well, you have *way* more files than we do... :) On 30/03/2023 at 11:26, Hu Bert wrote: > Just an observation: is there a performance difference between a sw > raid10 (10 disks -> one brick) or 5x raid1 (each raid1 a brick) Err... RAID10 is not 10 disks unless you stripe 5 mirrors of 2 disks. > with > the same disks (10TB hdd)? The heal processes on the 5xraid1-scenario >
2013 Mar 28
1
Glusterfs gives up with endpoint not connected
Dear all, Right out of the blue, glusterfs is no longer working fine; every now and then it stops working, telling me "Endpoint not connected" and writing core files: [root at tuepdc /]# file core.15288 core.15288: ELF 64-bit LSB core file AMD x86-64, version 1 (SYSV), SVR4-style, from 'glusterfs' My Version: [root at tuepdc /]# glusterfs --version glusterfs 3.2.0 built on Apr 22 2011
2015 Feb 18
5
CentOS 7: software RAID 5 array with 4 disks and no spares?
Hi, I just replaced Slackware64 14.1 running on my office's HP Proliant Microserver with a fresh installation of CentOS 7. The server has 4 x 250 GB disks. Every disk is configured like this: * 200 MB /dev/sdX1 for /boot * 4 GB /dev/sdX2 for swap * 248 GB /dev/sdX3 for / There are supposed to be no spare devices. /boot and swap are all supposed to be assembled in RAID level 1 across
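For the sizing question behind this setup: a 4-disk RAID5 with no spares dedicates one member's worth of space to parity, so usable capacity is (disks - 1) times the member size. A quick sketch using the 248 GB sdX3 partitions from the post (the mdadm --create line is illustrative, not taken from the thread):

```shell
disks=4
member_gb=248                          # size of each /dev/sdX3 member
usable_gb=$(( (disks - 1) * member_gb ))
echo "RAID5 usable capacity: ${usable_gb} GB"   # -> 744 GB
# The corresponding array creation would look roughly like:
#   mdadm --create /dev/md2 --level=5 --raid-devices=4 /dev/sd[abcd]3
```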
2010 Aug 18
3
Wrong disk size problem.
Hi, we have a CentOS 5.4 server and a strange problem. Disk size and other information are shown below. Normally, the md2 partition should have 46 GB free, but the available value is zero. Why does it show zero? If you can help me, I will be happy. df -h Filesystem Size Used Avail Use% Mounted on /dev/md1 19G 2.1G 16G 12% / /dev/md2 880G 834G 0
2008 Nov 14
0
WG: Still working on a Member Server
For me, getting a member server to work did not need winbind; just LDAP was sufficient. Did you make the trust account? getent group and passwd must give you all users and groups. You must be able to chown domainuser:domaingroup on your member server. What I noticed is that the member server with samba 3.028 is much too slow. It takes too long if you try to connect over My Network Places. This
2015 Feb 18
0
CentOS 7: software RAID 5 array with 4 disks and no spares?
Hi Niki, md127 apparently only uses 81.95GB per disk. Maybe one of the partitions has the wrong size. What's the output of lsblk? Regards Michael ----- Original message ----- From: "Niki Kovacs" <info at microlinux.fr> To: "CentOS mailing list" <CentOS at centos.org> Sent: Wednesday, 18 February 2015 08:09:13 Subject: [CentOS] CentOS 7: software RAID 5