Displaying 20 results from an estimated 2000 matches similar to: "rescheduling sector linux raid ?"
2007 Mar 20
1
centos raid 1 question
Hi,
I'm seeing the following on my screen and in dmesg, and I'm not sure whether it is an
error message. BTW, I'm using CentOS 4.4 with 2 x 200GB PATA drives.
md: md0: sync done.
RAID1 conf printout:
--- wd:2 rd:2
disk 0, wo:0, o:1, dev:hda2
disk 1, wo:0, o:1, dev:hdc2
md: delaying resync of md5 until md3 has finished resync (they share one or
more physical units)
md: syncing RAID array md5
md: minimum _guaranteed_
2010 Feb 28
3
puzzling md error ?
this has never happened to me before, and I'm somewhat at a loss. Got an
email from the weekly cron job...
/etc/cron.weekly/99-raid-check:
WARNING: mismatch_cnt is not 0 on /dev/md10
WARNING: mismatch_cnt is not 0 on /dev/md11
OK, md10 and md11 are each RAID 1s made from 2 x 72GB SCSI drives, on a
Dell 2850 or similar dual single-core 3GHz server.
these two md's are in
2005 Feb 03
2
RAID 1 sync
Is my new 300GB RAID 1 array REALLY going to take 18936 minutes to
sync!!???
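For what it's worth, a quick back-of-the-envelope check (assuming md accounts in KiB and treating the 300GB array as roughly 300 GiB) shows the per-second rate that estimate implies:

```python
# Implied resync rate for a ~300 GiB RAID 1 that md estimates will
# take 18936 minutes to sync (sizes are approximate).
size_kib = 300 * 1024 * 1024          # ~300 GiB expressed in KiB
minutes = 18936
rate_kib_per_s = size_kib / (minutes * 60)
print(round(rate_kib_per_s))          # ~277 KiB/s
```

At roughly 277 KiB/s the resync is crawling well below md's usual default floor of 1000 KB/s (dev.raid.speed_limit_min), which typically points at heavy competing I/O or throttling rather than a normal idle-time resync rate.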
2008 Sep 07
3
USB drive fails at sector 0xFFFFFFF
I'm backing up to a NTFS partition on an external USB drive with dump. I'm
seeing failures in /var/log/messages reading sector 0xFFFFFFF that cause
the verify pass to fail. Are there any known problems in the USB driver?
Kernel via uname -a:
Linux segw2.mpa.lan 2.6.18-92.1.6.el5 #1 SMP Wed Jun 25 13:49:24 EDT 2008
i686 i686 i386 GNU/Linux
Message reported. (Note the number 268435455,
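That number is telling: 0xFFFFFFF is exactly the 28-bit LBA ceiling, which suggests the USB bridge (or the driver's handling of it) is limited to 28-bit addressing, i.e. the classic ~128 GiB boundary. A quick check:

```python
# 0xFFFFFFF is the highest sector a 28-bit LBA interface can address.
sector = 0xFFFFFFF
print(sector)                         # 268435455, the number in the log
print(sector == 2**28 - 1)            # True: the 28-bit ceiling
print((sector + 1) * 512 // 2**30)    # 128 -> the classic 128 GiB limit
```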
2005 Jun 28
1
How to figure out underlying failed disk(parttions) and sector(s) position ???
Hi,
having been exposed to more and more failed hard disk
reports, I've accumulated several questions about the
messages logged in /var/log/messages: how to
identify the failed disks (partitions), where the
exact failed sector(s) are on the disk, and why
badblocks reports OK despite the reported disk failure.
Let me explain the above with the following
examples.
scenario #1, a
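On the general question: the kernel usually logs an absolute LBA, and locating it inside a filesystem means subtracting the partition's start sector and dividing by the filesystem block size. A hypothetical example (the sector, partition start, and block size below are made up for illustration; real values come from the log, `fdisk -lu`, and `tune2fs -l`):

```python
# Map a kernel-reported absolute sector to a filesystem block number.
# All three inputs are illustrative, not from the original reports.
lba_sect = 1820440       # absolute sector from a kernel log line
part_start = 63          # partition's first sector (fdisk -lu)
block_size = 4096        # filesystem block size in bytes (tune2fs -l)

fs_block = (lba_sect - part_start) * 512 // block_size
print(fs_block)          # 227547
```

As for badblocks reporting OK: a read that failed once can succeed on retry (a marginal sector), and a write can trigger the drive's own sector remapping, so a clean badblocks pass does not contradict the earlier logged error.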
2009 Jun 16
1
Xen vs. iSCSI
[previously sent to rhelv5 list, apologies to those on both]
I've got a problem I can reproduce easily enough, but really I fail to
understand what's going wrong.
I've got a 5.3 Dom0 running three guests. One is Fedora 10,
which runs from local flat files and works fine. One is Nexenta 2
(OpenSolaris-based), which runs off physical partitions, and seems
to work
2014 Mar 13
1
[PATCH] pm/fan: drop the fan lock in fan_update() before rescheduling
From: Martin Peres <martin.peres at labri.fr>
This should fix a deadlock that has been reported to us where fan_update()
would hold the fan lock and try to grab the alarm_program_lock to reschedule
an update. On another CPU, the alarm_program_lock would have been taken
before calling fan_update(), leading to a deadlock.
Reported-by: Marcin Slusarz <marcin.slusarz at gmail.com>
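The lock-ordering issue can be modelled with a small sketch (hypothetical names mirroring the description above; the real driver code is C). With the fix, fan_update() never holds the fan lock while waiting for alarm_program_lock, so the two paths can no longer form a cycle:

```python
import threading

fan_lock = threading.Lock()
alarm_program_lock = threading.Lock()
done = []

def fan_update_fixed():
    with fan_lock:
        pass                      # adjust the fan under the fan lock
    # The fix: drop the fan lock *before* taking alarm_program_lock,
    # so this path never holds both locks at once.
    with alarm_program_lock:
        done.append("update")     # reschedule the next update

def alarm_path():
    # The other CPU's path: alarm_program_lock first, then the fan lock.
    with alarm_program_lock:
        with fan_lock:
            done.append("alarm")

t1 = threading.Thread(target=fan_update_fixed)
t2 = threading.Thread(target=alarm_path)
t1.start(); t2.start()
t1.join(timeout=5); t2.join(timeout=5)
print(sorted(done))               # ['alarm', 'update'] -> both finished
```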
2002 Mar 15
7
Is this ext3 or bad drive sectors problem?
Hello Linux gurus,
I am fairly new to Linux, so pardon my ignorance.
I have installed Linux 7.2 on an IBM Netfinity 4000R server. The OS works fine for a few days, then starts giving me this error message all of a sudden:
"kernel: hda: dma_intr: status=0x51 { DriveReady SeekComplete Error }"
"kernel: hda: dma_intr: status=0x40 { UncorrectableError }, LBAsect=4944861,
2005 Jan 01
1
Advice for dealing with bad sectors on /
All,
Trying to figure out how to deal with, I assume, a dying disk that's
unfortunately on / (ext3).
Getting errors similar to:
Dec 31 20:44:30 mybox kernel: hdb: dma_intr: status=0x51 { DriveReady
SeekComplete Error }
Dec 31 20:44:30 mybox kernel: hdb: dma_intr: error=0x40 { UncorrectableError },
LBAsect=163423, high=0, low=163423, sector=163360
Dec 31 20:44:30 mybox kernel: end_request:
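One detail worth noting in logs like this: LBAsect is the absolute sector on the disk, while the sector= value in these old IDE error lines is, if memory serves, relative to the partition, so the difference between the two recovers the partition's start offset:

```python
# LBAsect is the absolute on-disk sector; the sector= value in these
# 2.4/2.6-era IDE error lines is (typically) partition-relative.
lba_sect = 163423    # from the log above
rel_sect = 163360    # from the log above
print(lba_sect - rel_sect)   # 63: the classic start offset of a
                             # first partition beginning on cylinder 1
```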
2009 Jul 29
0
Software RAID-1 partition constantly syncing
I have a partition set up as software RAID-1 on a CentOS 5.3 machine.
Today, the system was rebooted, when it came back up I noticed that it
had started to resync. It completes the sync, then immediately starts again.
From the log:
Jul 29 09:46:02 cbserver kernel: md: syncing RAID array md2
Jul 29 09:46:02 cbserver kernel: md: minimum _guaranteed_ reconstruction
speed: 5000 KB/sec/disc.
Jul
2003 Jun 05
5
Hard Disk Failure
Hi All,
I had to reboot a machine as I lost SSH connectivity to it. I could
still ping it, though. On rebooting, the dmesg buffer showed the following
message:
hdc: dma_intr: status=0x51 { DriveReady SeekComplete Error }
hdc: dma_intr: error=0x40 { UncorrectableError }, CHS=7520/0/155, sector=1820440
end_request: I/O error, dev 16:02 (hdc), sector 1820440
hdc: dma_intr: status=0x51 { DriveReady
2009 Apr 28
1
USB device not connected (CentOS 5.3)
I just tried this with CentOS 5.3 as well, and got exactly the same
symptoms and dmesg output. (As a point of comparison, Ubuntu 8.04 on
my work laptop is able to access the drive.)
Obviously "not detected" is a misapprehension, though I'm puzzled why
"lsusb" doesn't show it. The device is there even though the
partition table can't be read.
---------- Forwarded
2007 Jun 25
1
I/O errors in domU with LVM on DRBD
Hi,
Sorry for the long-winded email; I'm looking for answers
to the following.
I am setting up a Xen PV domU on top of an LVM-partitioned DRBD
device. Everything was going just fine until I tried to test the
filesystems in the domU.
Here is my setup:
Dom0 OS: CentOS release 5 (Final)
Kernel: 2.6.18-8.1.4.el5.centos.plusxen
Xen: xen-3.0.3-25.0.3.el5
DRBD:
2009 Oct 14
1
Bug on barriers for virtio_blk device - end_request: I/O error, dev vda, sector 0
Added CCs for the author of the patch and the people CCed on it.
The patch has been submitted twice (6 Aug, 3 Sept), the first time there
was one concern voiced which seems to have been addressed in replies. The
second time there were no reactions.
The patch seems to work for Massimo...
Massimo Cetra wrote:
> Massimo Cetra wrote:
>> Hello all,
>>
>> I ended up with some
2006 Dec 01
1
[PATCH] Ensure blktap reports I/O errors back to guest
There are a number of flaws in the blktap userspace daemon when dealing
with I/O errors.
- The backends which use AIO check the io_events.res member to determine
whether an I/O error occurred, which is good. But when calling the callback
to signal completion of the I/O, they pass the io_events.res2 member.
Now this seems fine at first glance[1]:
"res is the usual result of an I/O
2011 Sep 29
3
xvda I/O errors in linux 3.1 under XCP 1.0
Good day.
I'm getting this error:
[101017.440858] blkfront: barrier: empty write xvda op failed
[101017.440862] blkfront: xvda: barrier or flush: disabled
[101017.463438] end_request: I/O error, dev xvda, sector 3676376
[101017.463452] end_request: I/O error, dev xvda, sector 3676376
[101017.463459] Buffer I/O error on device xvda1, logical block 459291
[101017.463464] lost page write
2010 Apr 11
7
dom0 crash, require assistance interpreting logs and config suggestions
Hello,
I have experienced a dom0 crash where the system became unreachable via
the network and the console was unresponsive. I would appreciate help
interpreting the logs and any configuration change suggestions.
It is a stock Debian Lenny dom0 running xen 3.2.1 with kernel
2.6.26-2-xen-amd64 and an AMD Athlon II X4 with 4 GB of RAM. It is
running 4 VMs. One VM has two PCI NICs being
2014 Mar 24
4
[PATCH 1/4] pm/fan: drop the fan lock in fan_update() before rescheduling
From: Martin Peres <martin.peres at labri.fr>
This should fix a deadlock that has been reported to us where fan_update()
would hold the fan lock and try to grab the alarm_program_lock to reschedule
an update. On another CPU, the alarm_program_lock would have been taken
before calling fan_update(), leading to a deadlock.
We should Cc: <stable at vger.kernel.org> # 3.9+
Reported-by:
2009 Aug 27
4
Debian lenny, lvm and filesystem xfs
Hi,
I'm running Xen on a Debian box with a Xeon E3110, using the Debian 2.6.26-2-xen kernel.
As the filesystem for my LVM domU partitions I chose XFS.
I get the following error in kern.log of domU when booting a domU:
blkfront: sda2: write barrier op failed
blkfront: sda2: barriers disabled
end_request: I/O error, dev sda2, sector 0
end_request: I/O error, dev sda2, sector 0
Filesystem
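This pattern (write barrier op failed, then barriers disabled, then a one-off sector-0 error) was common in that era: device-mapper/LVM on 2.6.26 did not pass write barriers through, so XFS's mount-time barrier probe fails and the filesystem falls back to running without barriers. The messages are mostly cosmetic, and one commonly suggested workaround was to disable barriers explicitly at mount time, e.g. in /etc/fstab (device and mountpoint here are illustrative):

```
/dev/sda2  /data  xfs  defaults,nobarrier  0  0
```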