Displaying 20 results from an estimated 7000 matches similar to: "add on sata card relabeling drives, installation"
2007 May 27
1
dealing with mke2fs -T option
Hi,
I'm not sure whether I'm using the mke2fs -T option the right way.
I formatted two different disks, one with
$ mke2fs -b 4096 -E stride=16 -m 1 -T news /dev/sdd
and the other with
$ mke2fs -b 4096 -E stride=16 -m 1 -T largefile4 /dev/sde
sdd is supposed to hold files between 8 KB and 16 KB.
sde will handle files with a fixed size of 32 MB.
Then I tried this :
$ dd if=/dev/zero of=/mount-sdx/file bs=4k
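A quick way to see what the two -T profiles actually did (a sketch, using the same devices as above) is to compare the resulting inode ratios:
$ dumpe2fs -h /dev/sdd | grep -i -E 'inode count|block count'
$ dumpe2fs -h /dev/sde | grep -i -E 'inode count|block count'
The news profile allocates roughly one inode per 4 KB and largefile4 roughly one per 4 MB; the profile definitions live in /etc/mke2fs.conf, so that file is the place to confirm what each -T name means on your system.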
2016 May 25
1
Slow RAID Check/high %iowait during check after upgrade from CentOS 6.5 -> CentOS 7.2
On 2016-05-25 19:13, Kelly Lesperance wrote:
> Hdparm didn't get far:
>
> [root at r1k1 ~] # hdparm -tT /dev/sda
>
> /dev/sda:
> Timing cached reads: Alarm clock
> [root at r1k1 ~] #
Hi Kelly,
Try running 'iostat -xdmc 1'. Look for a single drive that has
substantially greater await than ~10msec. If all the drives
except one are taking 6-8msec, but one is very
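If one drive does stand out, a quick health check on it (a sketch; /dev/sdX is a placeholder for whichever disk shows the high await) is worthwhile:
# smartctl -x /dev/sdX | grep -i -E 'reallocated|pending|uncorrect|crc'
A disk that is internally retrying reads can drag the whole RAID check down without ever reporting an outright failure.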
2010 Sep 13
3
Proper procedure when device names have changed
I am running zfs-fuse on an Ubuntu 10.04 box. I have a dual mirrored pool:
mirror sdd sde mirror sdf sdg
Recently the device names shifted on my box and the devices are now sdc sdd sde and sdf. The pool is of course very unhappy: the mirrors are no longer matched up and one device is "missing". What is the proper procedure to deal with this?
-brian
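The usual answer for shifted device names (a sketch; 'tank' stands in for the real pool name) is to export the pool and re-import it by persistent identifier, so the kernel's sdX ordering stops mattering:
# zpool export tank
# zpool import -d /dev/disk/by-id tank
Afterwards zpool status should list the by-id paths instead of sdc/sdd/sde/sdf.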
2008 Mar 11
1
Question on SATA DVD using centos 5.1
On my machine I have
SATA0: HD
SATA1: HD (these two drives are set up as RAID 1)
SATA2: HD extra
SATA3: DVD
SATA4: external USB disk
A snip from dmesg shows the ATAPI device being detected:
ata3: SATA max UDMA/133 cmd 0x9e0 ctl 0xbe0 bmdma 0xe400 irq 10
ata4: SATA max UDMA/133 cmd 0x960 ctl 0xb60 bmdma 0xe408 irq 10
ata3: SATA link up 1.5 Gbps (SStatus 113 SControl 300)
ata3.00: ATAPI: PIONEER BD-ROM
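If the drive is detected but nothing can use it, the next thing to check (a sketch; the sr0/scd0 names are an assumption) is whether the kernel created a block device for the ATAPI unit:
# cat /proc/sys/dev/cdrom/info
# ls -l /dev/sr0 /dev/scd0
On CentOS 5 a SATA optical drive normally shows up as /dev/sr0 (often with /dev/scd0 and /dev/dvd symlinks) once the sr_mod module is loaded.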
2016 May 25
6
Slow RAID Check/high %iowait during check after upgrade from CentOS 6.5 -> CentOS 7.2
I've posted this on the forums at https://www.centos.org/forums/viewtopic.php?f=47&t=57926&p=244614#p244614 - posting to the list in the hopes of getting more eyeballs on it.
We have a cluster of 23 HP DL380p Gen8 hosts running Kafka. Basic specs:
2x E5-2650
128 GB RAM
12 x 4 TB 7200 RPM SATA drives connected to an HP H220 HBA
Dual port 10 GB NIC
The drives are configured as one large
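When the weekly raid-check drives %iowait up on an md array, one knob worth checking (a sketch; md0 is a placeholder for the actual array, and this assumes software md rather than hardware RAID) is the resync/check speed limit:
# sysctl dev.raid.speed_limit_min dev.raid.speed_limit_max
# cat /sys/block/md0/md/sync_speed_max
Lowering dev.raid.speed_limit_max (or the per-array sync_speed_max) trades a longer check for less interference with normal I/O.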
2016 May 27
2
Slow RAID Check/high %iowait during check after upgrade from CentOS 6.5 -> CentOS 7.2
All of our Kafka clusters are fairly write-heavy. The cluster in question is our second-heaviest; we haven't yet upgraded the heaviest, due to the issues we've been experiencing in this one.
Here is an iostat example from a host within the same cluster, but without the RAID check running:
[root at r2k1 ~] # iostat -xdmc 1 10
Linux 3.10.0-327.13.1.el7.x86_64 (r2k1) 05/27/16 _x86_64_ (32 CPU)
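To compare drives at a glance instead of eyeballing the full output, a filter like this can help (a sketch; the await field position depends on the sysstat version, so adjust the column number if needed):
# iostat -xdm 1 | awk '/^sd/ {print $1, $10}'
On the sysstat shipped with CentOS 7, field 10 is await in milliseconds, so any disk printing values far above its peers is the one to investigate.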
2010 Sep 26
1
hotplug Backup-hdd
Hi,
I have a system with
/dev/sda - System Hard Drive
/dev/md0 - SoftwareRaid 5 for Data
with
/dev/sdb
/dev/sdc
/dev/sdd
Now I have one more drive in a removable frame for backup:
/dev/sde
/dev/md0 is shared via a Samba domain for data service on the network.
What's the best way to sync the data from /dev/md0 to /dev/sde?
Is hotplugging supported, so that when I plug in /dev/sde,
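For the sync itself, a plain rsync is usually enough (a sketch, assuming the RAID is mounted at /data and the backup disk carries a filesystem on /dev/sde1; all of those names are hypothetical):
# mount /dev/sde1 /mnt/backup
# rsync -aHAX --delete /data/ /mnt/backup/
# umount /mnt/backup
Whether the hotplug part happens automatically depends on udev; at worst you run the mount and rsync by hand, or from a udev rule, after inserting the drive.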
2017 Sep 20
4
xfs not getting it right?
Hi,
xfs is supposed to detect the layout of an md-RAID device when creating the
file system, but it doesn't seem to do that:
# cat /proc/mdstat
Personalities : [raid1]
md10 : active raid1 sde[1] sdd[0]
499976512 blocks super 1.2 [2/2] [UU]
bitmap: 0/4 pages [0KB], 65536KB chunk
# mkfs.xfs /dev/md10p2
meta-data=/dev/md10p2 isize=512 agcount=4, agsize=30199892 blks
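For what it is worth, a two-disk RAID 1 has no stripe geometry for mkfs.xfs to pick up, so sunit/swidth of 0 is expected there; automatic detection only matters for striped levels. If you do want to force a geometry, mkfs.xfs accepts it explicitly (a sketch with made-up values):
# mkfs.xfs -d su=256k,sw=4 /dev/md10p2
where su is the per-disk chunk size and sw the number of data disks in the stripe.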
2009 May 04
2
FW: Oracle 9204 installation on linux x86-64 on ocfs
Hello All,
I have installed Oracle Cluster Manager on Linux x86-64. I am using the ocfs file system for the quorum file, but I am getting the following error. Please see the ocfs configuration below. I would appreciate it if someone could help me understand whether I am doing something wrong. Thanks in advance.
--------------------------------------------------cm.log file ----------------------------
oracm,
2008 Apr 17
2
Question about RAID 5 array rebuild with mdadm
I'm using CentOS 4.5 right now, and I had a RAID 5 array stop because
two drives became unavailable. After adjusting the cables on several
occasions and shutting down and restarting, I was able to see the
drives again. This is when I snatched defeat from the jaws of
victory. Please, someone with vast knowledge of how RAID 5 with mdadm
works, tell me if I have any chance at all
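The usual recovery path (a sketch; the member names /dev/sd[b-e]1 are placeholders) is to check the event counters first and then force-assemble with the freshest members:
# mdadm --examine /dev/sd[b-e]1 | grep -E 'Event|/dev/sd'
# mdadm --assemble --force /dev/md0 /dev/sdb1 /dev/sdc1 /dev/sdd1 /dev/sde1
--force lets mdadm ignore a small event-count mismatch on the member that dropped out first; what you risk losing is whatever was written while the array ran degraded.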
2019 Jun 14
3
zfs
Hi, folks,
testing zfs. I'd created a raidz2 pool and ran a large backup onto it. Then I
pulled one drive (11-drive, one hot spare pool), and it resilvered with
the hot spare. zpool status -x shows me
state: DEGRADED
status: One or more devices could not be used because the label is missing or
invalid. Sufficient replicas exist for the pool to continue
functioning in a degraded state.
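Once the failed disk has been physically replaced, the usual sequence (a sketch; 'tank' and the device names are placeholders) looks like:
# zpool replace tank <failed-device> <new-device>
# zpool detach tank <spare-device>
# zpool clear tank
replace resilvers onto the new disk, detach returns the hot spare to the spare list, and clear drops the stale label/error complaints.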
2010 May 28
2
permanently add md device
Hi All
Currently I'm setting up a 5.4 server and trying to create a third RAID device. When I run:
$mdadm --create /dev/md2 -v --raid-devices=15 --chunk=32 --level=raid6 /dev/sdc /dev/sdd /dev/sde /dev/sdf /dev/sdg /dev/sdh /dev/sdi /dev/sdj /dev/sdk /dev/sdl /dev/sdm /dev/sdn /dev/sdo /dev/sdp /dev/sdq
the device file "md2" is created and the RAID is being configured, but somehow
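To make the new array persist across reboots under the same name (a sketch, assuming the stock CentOS 5 layout), record it in mdadm.conf:
# mdadm --detail --scan | grep md2 >> /etc/mdadm.conf
A matching /etc/fstab entry (by UUID or by /dev/md2) then takes care of mounting it.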
2013 Oct 07
2
Some questions after devices addition to existing raid 1 btrfs filesystem
Hi,
I have added 2x2TB to my existing 2x2TB RAID 1 btrfs filesystem and
then ran a balance:
# btrfs filesystem show
Total devices 4 FS bytes used 1.74TB
devid 3 size 1.82TB used 0.00 path /dev/sdd
devid 4 size 1.82TB used 0.00 path /dev/sde
devid 2 size 1.82TB used 1.75TB path /dev/sdc
devid 1 size 1.82TB used 1.75TB path /dev/sdb
# btrfs
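If the new devices still show 'used 0.00' after the balance, a rebalance with explicit conversion filters (a sketch, assuming the filesystem is mounted at /mnt, which is hypothetical) forces data and metadata to be rewritten as RAID 1 across all four disks:
# btrfs balance start -dconvert=raid1 -mconvert=raid1 /mnt
# btrfs filesystem show
The -dconvert/-mconvert filters also guarantee every chunk ends up in the RAID 1 profile while it is being moved.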
2013 Jan 12
2
selinux + kvm virtualization + smartd problem
Hello,
I'm using an HP home server where the host system runs CentOS 6.3 with KVM
virtualization and SELinux enabled; the guests run the same OS (but
without SELinux, which does not matter here).
The host system is installed on mirrors built from the sda and sdb physical disks.
The sd{c..f} disks are attached to a KVM guest (whole disks, not partitions,
needed to benefit from zfs (zfsonlinux) features). The problem is that the
disks
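With SELinux in the mix, the first thing worth capturing (a sketch using the standard audit tools) is whether smartd is actually being denied access to the guest's disks:
# ausearch -m avc -ts recent -c smartd
# ausearch -m avc -ts recent -c smartd | audit2allow
Whole disks handed to a KVM guest get relabeled for sVirt, so an AVC denial against smartd reading them would explain problems that only show up on those drives.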
2018 Jan 10
2
Issues accessing ZFS-shares on Linux
I just noticed that by running the commands /usr/sbin/smbd -D or
/usr/sbin/smbd -i without systemd's unit, all shares work perfectly, so
the problem must somehow be related to systemd. Let the testing
continue.
I also tested what happens if I comment out everything and just use
ExecStart=/usr/sbin/smbd -D as that command worked on the console. That
did not help.
For the record, this is
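A quick way to compare what the unit actually runs and logs against the working console invocation (a sketch; the unit may be named smbd or smb.service depending on the packaging) is:
# systemctl cat smbd
# journalctl -u smbd -b --no-pager
Checking the unit's ExecStart, environment and any sandboxing directives against the bare '/usr/sbin/smbd -D' that works by hand usually narrows down which systemd setting is in the way.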
2011 Jan 07
3
When are Logwatch errors really errors
Hi All,
I don't know enough about when errors are *really* errors. So I google a lot to read and learn.
I have a few things in my Logwatch that I want to make sure I understand
1. smartd
**Unmatched Entries**
Problem creating device name scan list
Device /dev/sda: using '-d sat' for ATA disk behind SAT layer.
Device /dev/sdb: using '-d sat' for ATA disk behind SAT layer.
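Those '-d sat' lines are informational: smartd is just noting that it talks to the SATA disks through the SCSI-to-ATA translation layer. To stop them showing up as unmatched entries, one option (a sketch for /etc/smartd.conf; the device list is an assumption) is to replace DEVICESCAN with explicit entries:
/dev/sda -d sat -a
/dev/sdb -d sat -a
That makes the device-type choice explicit instead of something smartd announces on every scan.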
2018 Apr 17
5
Getting glusterfs to expand volume size to brick size
pylon:/var/lib/glusterd/vols/dev_apkmirror_data # ack shared-brick-count
dev_apkmirror_data.pylon.mnt-pylon_block3-dev_apkmirror_data.vol
3: option shared-brick-count 3
dev_apkmirror_data.pylon.mnt-pylon_block2-dev_apkmirror_data.vol
3: option shared-brick-count 3
dev_apkmirror_data.pylon.mnt-pylon_block1-dev_apkmirror_data.vol
3: option shared-brick-count 3
Sincerely,
Artem
--
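shared-brick-count is how glusterd tracks whether several bricks sit on the same backing filesystem, and the reported volume size gets divided by it; with it stuck at 3, the volume looks a third of its real size. A first check (a sketch; the brick paths are guessed from the volfile names above and may not match exactly) is whether the bricks really are separate mounts:
# df -h /mnt/pylon_block1 /mnt/pylon_block2 /mnt/pylon_block3
If they all resolve to the same filesystem the accounting is arguably correct; if they are separate mounts, the miscounted value is the bug to chase.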
2009 Nov 20
3
steadily increasing/high loadavg without i/o wait or cpu utilization
Hi all,
I just installed the CentOS 5.4 Xen kernel on an Intel Core i5 machine as dom0.
After some hours of syncing a RAID 10 array (8 SATA disks) I noticed a
steadily increasing loadavg. Without any notable I/O wait or CPU
utilization, I would expect the loadavg on this system to be much lower. If this
loadavg is normal I would be grateful if someone could explain why. The
screenshots below show that there is
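On Linux the load average counts tasks in uninterruptible (D) sleep as well as runnable ones, so a long md resync can push the loadavg up without showing CPU or %iowait anywhere. A quick way to confirm (a sketch) is to look for D-state tasks while the sync runs:
# ps -eo state,pid,comm | awk '$1=="D"'
If the only hits are the md resync and flush kernel threads, the rising loadavg is largely cosmetic rather than a sign of an overloaded dom0.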
2012 Nov 06
2
disk device lvm iscsi questions ....
Hi,
I have an iSCSI storage array which I attached to a new CentOS 6.3 server. I
added logical volumes as usual; the block devices (sdb & sdc) showed up in
dmesg, and I can mount and access the stored files.
Now we did a firmware/software update on that storage (while
unmounted/detached from the file server), and after rebooting the storage
and reattaching the iSCSI nodes I get new devices. (sdd &
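LVM itself does not care which sdX letters the LUNs come back as; it finds its physical volumes by scanning for PV UUIDs. After a reattach like this, a rescan and reactivation is usually all that is needed (a sketch with stock LVM commands):
# pvscan
# vgscan
# vgchange -ay
# lvs -o +devices
The last command shows which block devices each LV now sits on; anything that must be referenced by name is better pointed at /dev/disk/by-id or /dev/mapper paths than at /dev/sdX.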
2024 Oct 19
2
How many disks can fail before a catastrophic failure occurs?
Hi there.
I have 2 servers with this number of disks on each side:
pve01:~# df | grep disco
/dev/sdd 1.0T 9.4G 1015G 1% /disco1TB-0
/dev/sdh 1.0T 9.3G 1015G 1% /disco1TB-3
/dev/sde 1.0T 9.5G 1015G 1% /disco1TB-1
/dev/sdf 1.0T 9.4G 1015G 1% /disco1TB-2
/dev/sdg 2.0T 19G 2.0T 1% /disco2TB-1
/dev/sdc 2.0T 19G 2.0T 1%
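How many disks can fail depends on how the bricks are grouped into replica (or disperse) sets, not on the raw disk count, so the place to start (a sketch; the volume name is a placeholder) is the volume layout:
# gluster volume info <volname>
# gluster volume status <volname>
For example, a 'replica 2' volume spanning the two servers survives losing one brick of any pair, but losing both bricks of the same pair loses that subvolume's data.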