Displaying 20 results from an estimated 10000 matches similar to: "swap_free messages in log"
2005 Apr 27
2
Intel motherboard BOXD915GAVL very slow with centos 4
All,
I am installing a BOXD915GAVL system with CentOS 4.
The CPU is a 2.8 GHz Celeron D with 1 GB of memory and a 120 GB Seagate drive.
It is VERY slow. What gives?
Has anyone else experienced something like this?
It installs, but even after installation, booting takes a VERY long
time just to reach the "Checking new hardware" screen.
Thanks,
Jerry
2005 Aug 28
0
Help with kernel crash log file included.
I had a kernel crash last evening...
This is what was in the /var/log/messages file.
What should my next step be?
I am using an Intel P4 2.4 GHz, an Intel motherboard with the 865 chipset, two 120 GB disks in software RAID,
the 2.6.9-11.ELsmp kernel, and 2 Asterisk TDM04B cards.
The system normally works great through the week; the crash was Sunday morning at 4:22 AM.
Thanks,
Jerry
2013 Feb 04
3
Questions about software RAID, LVM.
I am planning to increase the disk space on my desktop system. It is
running CentOS 5.9 w/XEN. I have two 160 GB 2.5" laptop SATA drives
in two slots of a 4-slot hot swap bay configured like this:
Disk /dev/sda: 160.0 GB, 160041885696 bytes
255 heads, 63 sectors/track, 19457 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Device Boot Start End
2007 Sep 24
2
parted - is there a problem
Everyone,
I recently added a 300gig Seagate sata drive on a Centos 5.0 and have a
couple of questions.
The drive was recognized with the device as /dev/sdc. The system came
with some SCSI drives that are labeled as /dev/sda and /dev/sdb. I was
surprised that the SATA drive used sdc. Are SATA drives considered
more like SCSI or IDE drives?
The real problem occurred when I tried to
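For context, partitioning a new SATA drive like this on CentOS 5 is usually done with parted or fdisk; a minimal sketch, assuming one full-disk partition and ext3 (the filesystem choice is an assumption; /dev/sdc is from the post):

  parted -s /dev/sdc mklabel msdos                 # write a fresh msdos label
  parted -s /dev/sdc mkpart primary ext3 0% 100%   # one partition spanning the disk
  mkfs.ext3 /dev/sdc1                              # create the filesystem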
2002 Feb 28
2
Oops kernel (2.4.18) on Sparc32
Hello,
I have recompiled a Linux 2.4.18 kernel on my Sparc, and I get
kernel oopses when rapid disk accesses are performed. For example: I download
files from a local server with ncftp (-TR directory), and ncftp hangs
(with a kernel oops!). I cannot unmount the device and must reboot
the station.
For information, my Sparc is a SparcStation 5 (85 MHz) with 160
MB and 3 SCSI
2008 Feb 08
3
CentOS 5.1 Core 2 Duo Install freezes
Alrighty. I'm having a hell of a time and I need help. I'll try to
give as much information as possible.
----------
HARDWARE
----------
MB:
Supermicro PDSBM-LN2+
Intel 946GZ
ICH7R + Intel® 82573
Memory:
Crucial 512 MB, 1 GB, 2 GB modules matched to the above MB.
Processors:
Intel Celeron 420 Conroe-L, 1.6 GHz, 512 KB, 64-bit
Intel Core 2 Duo E6320, 1.86 GHz, 4 MB, 64-bit
Intel Pentium E2140
2010 Oct 19
3
more software raid questions
hi all!
Back in Aug, several of you assisted me in solving a problem where one
of my drives had dropped out of (or been kicked out of) the RAID1 array.
Something vaguely similar appears to have happened just a few mins ago,
upon rebooting after a small update. I received four emails like this,
one for /dev/md0, one for /dev/md1, one for /dev/md125 and one for
/dev/md126:
Subject: DegradedArray
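When a DegradedArray event like this arrives, the usual first steps are to inspect the array state and, if the ejected member is healthy, re-add it; a minimal sketch, assuming the dropped member was /dev/sdb1 (the actual member is not shown in the excerpt):

  cat /proc/mdstat                    # [U_] marks the missing member
  mdadm --detail /dev/md0             # per-array state and removed devices
  mdadm /dev/md0 --re-add /dev/sdb1   # re-add the dropped member (assumed name)
  watch cat /proc/mdstat              # follow the resync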
2005 Sep 14
3
errors received in logs
Now that I am done ranting, I have a question: I
received this error today (I think only today, so far).
The error is as follows.
hda: drive_cmd: status=0x51 { DriveReady SeekComplete
Error }
hda: drive_cmd: error=0x04Aborted Command
I did some googling and found some stuff... some of it
said to go out and get another drive, as this one is going to
hard-drive heaven very soon... and other
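The usual way to tell whether such a drive really is dying is a SMART check; a minimal sketch with smartctl (the hda device name is from the post):

  smartctl -a /dev/hda            # health status, attributes, and error log
  smartctl -t short /dev/hda      # kick off a short self-test
  smartctl -l selftest /dev/hda   # read the result a few minutes later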
2006 Sep 21
12
Hard drive errors
One of my CentOS boxes has started giving me errors. The box is
CentOS-4.4 (i386) fully updated. It has a pair of SATA drives in a
software raid 1 configuration.
The errors I see are:
ata1: command 0xca timeout, stat 0x50 host_stat 0x24
ata1: status=0x50 { DriveReady SeekComplete }
Info fld=0x1e22b8, Current sda: sense key No Sense
ata2: command 0xca timeout, stat 0x50
2019 Apr 09
2
Kernel panic after removing SW RAID1 partitions, setting up ZFS.
The system is CentOS 6, fully up to date; it previously had two drives in an MD RAID
configuration.
md0: sda1/sdb1, 20 GB, OS / Partition
md1: sda2/sdb2, 1 TB, data mounted as /home
Installed kmod ZFS via yum, rebooted, and zpool works fine. Backed up the /home data
twice, then stopped the array on the sd[ab]2 partitions with:
mdadm --stop /dev/md1;
mdadm --zero-superblock /dev/sd[ab]1;
Removed the /home entry from /etc/fstab. Used
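For reference, the intended migration would normally target the md1 members (sd[ab]2) throughout; note that the commands quoted above zero sd[ab]1 instead. A minimal sketch of the consistent sequence (the pool name "home" is an assumption):

  mdadm --stop /dev/md1
  mdadm --zero-superblock /dev/sd[ab]2           # members of md1, not md0
  zpool create home mirror /dev/sda2 /dev/sdb2   # new ZFS mirror on the same partitions
  zfs set mountpoint=/home home                  # mount the pool at /home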
2008 Sep 28
4
S10 (05/08) vs SNV_98 stubdom install at Xen 3.3 CentOS 5.2 Dom0 (64-bit)
2006 Mar 14
2
Help. Failed event on md1
Hi all,
This morning I received this notification from mdadm:
This is an automatically generated mail message from mdadm
running on server-mail.mydomain.kom
A Fail event had been detected on md device /dev/md1.
Faithfully yours, etc.
In /proc/mdstat I see this:
Personalities : [raid1]
md1 : active raid1 sdb2[2](F) sda2[0]
77842880 blocks [2/1] [U_]
md0 : active raid1 sdb1[1] sda1[0]
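The standard recovery for a member marked (F) is to remove it, check the disk, and add it back so the mirror resyncs; a minimal sketch using the devices from the mdstat output above:

  mdadm /dev/md1 --remove /dev/sdb2   # drop the failed member
  # check the disk (SMART, cabling) before trusting it again, then:
  mdadm /dev/md1 --add /dev/sdb2
  cat /proc/mdstat                    # watch the rebuild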
2018 Dec 05
3
Accidentally nuked my system - any suggestions ?
On 04/12/2018 at 23:50, Stephen John Smoogen wrote:
> In the rescue mode, recreate the partition table which was on the sdb
> by copying over what is on sda
>
>
> sfdisk -d /dev/sda | sfdisk /dev/sdb
>
> This will give the kernel enough information to know it has work to do
> rebuilding the arrays.
Once I made sure I retrieved all my data, I followed your suggestion,
and it looks
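After copying the partition table with sfdisk, the remaining step is normally to put the sdb partitions back into their arrays so they resync; a minimal sketch (the partition-to-array mapping is an assumption):

  mdadm /dev/md0 --add /dev/sdb1   # assuming md0 used sdb1
  mdadm /dev/md1 --add /dev/sdb2   # assuming md1 used sdb2
  cat /proc/mdstat                 # resync progress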
2014 Dec 10
4
CentOS 7 grub.cfg missing on new install
Greetings -
The short story is that I got my new install completed with the partitioning I
wanted, using software RAID, but after a reboot I ended up at a grub
prompt and do not appear to have a grub.cfg file. So here is a little
history of how I got here, because I know in order for anyone to help me
they would subsequently ask for this information. So this post is a little
long, but
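On a BIOS-boot CentOS 7 system, a missing grub.cfg can usually be regenerated from rescue mode once the installed root is mounted and chrooted into; a minimal sketch (disk names are assumptions):

  grub2-mkconfig -o /boot/grub2/grub.cfg   # regenerate the config
  grub2-install /dev/sda                   # reinstall the boot loader
  grub2-install /dev/sdb                   # and on the second RAID disk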
2014 Jun 22
3
mdraid Q on c6...
So, I installed a C6 system offsite today; I had to do it in a hurry.
The box has 2 disks meant to be mirrored... I couldn't figure out how to
get anaconda to build an LVM root on a mirror, so I ended up just
installing /boot and vg_system on sda, planning to RAID it later.
every howto I find for linux says to half-raid the OTHER disk, COPY
everything to it, then boot from it and wipe the first disk
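That half-raid technique builds a degraded mirror with one slot marked "missing"; a minimal sketch (partition names are assumptions):

  # create a degraded RAID1 on the second disk only
  mdadm --create /dev/md0 --level=1 --raid-devices=2 missing /dev/sdb1
  # copy the data over, boot from the new array, then absorb the first disk:
  mdadm /dev/md0 --add /dev/sda1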
2006 Dec 26
14
[PATCH] fix free of event channel in blkfront
Hi All,
We tested the xm block-attach/detach command.
It repeats the block-attach/detach commands for a DomU and for pv-on-hvm on an HVM domain
(block-attach -> block-detach -> block-attach -> block-detach -> ...).
The block-attach command failed after 256 repetitions.
This is because the event channel had not been freed in blkfront,
so it remained in use.
This patch is
2016 Mar 12
4
C7 + UEFI + GPT + RAID1
Hi list,
I'm new to UEFI and GPT.
For several years I've used MBR partition tables. I've installed my
system on software RAID1 (mdadm), using md0 (sda1, sdb1) for swap,
md1 (sda2, sdb2) for /, and md2 (sda3, sdb3) for /home. According to several
how-tos concerning RAID1 installation, I must put each partition on a different
md device. I asked some time ago whether it's more correct to create the
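One wrinkle with UEFI on mdadm RAID1 is the EFI System Partition: if it is mirrored at all, it needs metadata 1.0, which puts the superblock at the end of the partition so the firmware still sees a plain FAT filesystem. A minimal sketch (device names are assumptions, not the poster's final layout):

  # ESP mirror readable by the firmware: superblock at the end
  mdadm --create /dev/md0 --level=1 --metadata=1.0 --raid-devices=2 /dev/sda1 /dev/sdb1
  mkfs.vfat /dev/md0
  # the remaining arrays can use the default 1.2 metadata
  mdadm --create /dev/md1 --level=1 --raid-devices=2 /dev/sda2 /dev/sdb2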
2014 Dec 05
3
CentOS 7 install software Raid on large drives error
----- Original Message -----
From: "Mark Milhollan" <mlm at pixelgate.net>
To: "Jeff Boyce" <jboyce at meridianenv.com>
Sent: Thursday, December 04, 2014 7:18 AM
Subject: Re: [CentOS] CentOS 7 install software Raid on large drives error
> On Wed, 3 Dec 2014, Jeff Boyce wrote:
>
>>I am trying to install CentOS 7 into a new Dell Precision 3610. I have
2007 Sep 04
4
RAID + LVM Addition to CentOS 5 Install
Hi All,
I have what I believe to be a pretty basic LVM & RAID setup on my
CentOS 5 machine:
RAID partitions:
/dev/sda1,sdb1
/dev/sda2,sdb2
/dev/sda3,sdb3
During the install I created a RAID 1 volume md0 out of sda1,sdb1 for
the boot partition and then added sda2,sdb2 to a separate RAID 1
volume as well (md1). I then set up md1 as an LVM physical volume for
volume group 'system'. I
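A layout like this is typically built by creating the md arrays first and then layering LVM on md1; a minimal sketch matching the description (the logical volume name and size are assumptions):

  pvcreate /dev/md1                # md1 becomes the LVM physical volume
  vgcreate system /dev/md1         # volume group 'system' from the post
  lvcreate -L 10G -n root system   # example LV; name and size assumed
  mkfs.ext3 /dev/system/root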
2009 Jul 02
4
Upgrading drives in raid 1
I think I have solved my issue and would like some input from anyone who has
done this: pitfalls, errors, or whether I am just wrong.
CentOS 5.x, software RAID, 250 GB drives.
Two drives in the mirror, one spare, all the same size.
Two md devices on the mirror: one for boot (about 100 MB), and one that fills the rest of
the disk and contains the LVM partitions.
I was thinking of taking out the spare and adding a 500 GB drive.
I
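The usual way to grow a RAID1 like this is to replace members one at a time with larger disks, let each resync, then grow the array and the LVM stack on top; a minimal sketch (device and LV names are assumptions):

  mdadm /dev/md1 --fail /dev/sdc2 --remove /dev/sdc2   # retire one old member
  mdadm /dev/md1 --add /dev/sdd2                       # partition on the 500 GB disk
  # after resync, repeat for the other member, then grow the stack:
  mdadm --grow /dev/md1 --size=max
  pvresize /dev/md1
  lvextend -L +200G /dev/VolGroup/home   # example LV; name and size assumed
  resize2fs /dev/VolGroup/home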