2005 Oct 11
0
AW: Re: xen 3.0 boot problem
> > Well, I'm using the qla2340 here on several boxes. It works
> > with Xen 2.0 but not with Xen 3.0 as part of SUSE Linux 10.0:
>
> Interesting. If the driver really does work flawlessly in
> Xen 2, then I think the culprit has to be interrupt routing.
>
> Under Xen 3, does /proc/interrupts show you're receiving interrupts?
I cannot boot with
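For reference, the interrupt check suggested above comes down to watching the HBA's line in /proc/interrupts; a small sketch (the driver may register as qla2300 or qla2xxx depending on version):

# Watch whether the HBA's interrupt count is increasing:
grep -i qla /proc/interrupts
sleep 5
grep -i qla /proc/interrupts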
2006 Dec 12
1
ZFS Storage Pool advice
This question concerns ZFS. We have a Sun Fire V890 attached to an EMC disk array. Here's our plan to incorporate ZFS:
On our EMC storage array we will create 3 LUNs. Now how would ZFS be used for the best performance?
What I'm trying to ask is: if you have 3 LUNs and you want to create a ZFS storage pool, would it be better to have a storage pool per LUN or combine the 3
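For illustration, the two layouts in question look roughly like this (standard zpool syntax; the Solaris device names are hypothetical):

# One pool striped across all three LUNs:
zpool create tank c2t0d0 c2t1d0 c2t2d0
# Versus one pool per LUN (repeated for each LUN):
zpool create tank1 c2t0d0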
2005 Sep 07
1
mkinitrd
I've compiled Xen without any problems, but now I have to create an initrd file. When I use the command mkinitrd (without any options) I get some errors:
# mkinitrd
Root device: /dev/sda3 (mounted on / as reiserfs)
Module list: ata_piix mptbase mptscsih qla2300 reiserfs
Kernel image: /boot/vmlinuz-2.6.11.12-xen0
Initrd image: /boot/initrd-2.6.11.12-xen0
Shared
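For comparison, the SUSE mkinitrd of that era could be pointed at the Xen kernel explicitly; a sketch using the paths and module list from the output above (assuming the -k/-i/-m flags, so check mkinitrd(8) on the box):

# Build the initrd for the Xen dom0 kernel explicitly:
mkinitrd -k /boot/vmlinuz-2.6.11.12-xen0 \
         -i /boot/initrd-2.6.11.12-xen0 \
         -m "ata_piix mptbase mptscsih qla2300 reiserfs"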
2006 Jul 14
0
qla2xxx driver failed in dom0 - invalid opcode: 0000 [1] SMP
I wanted to post this in case there is a real bug here.
The system this is on is running Debian Etch with the Xen packages from
Debian Unstable (currently 3.0.2+hg9697-1). I am running all packaged
software so this is probably slightly out of date.
This is on an HP DL320 G4 with a Pentium D 930 processor. I tried unloading
and reloading the qla2xxx module, but rmmod reported it in use, however
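For reference, a sketch of checking what holds the module before reloading it (standard module tools, nothing Xen-specific):

# The "Used by" column shows reference counts and dependent modules:
lsmod | grep qla2xxx
# modprobe -r also unloads dependent modules where possible:
modprobe -r qla2xxx && modprobe qla2xxx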
2006 Jul 14
0
RE: qla2xxx driver failed in dom0 - invalid opcode: 0000 [1] SMP
> I wanted to post this in case there is a real bug here.
>
> The system this is on is running Debian Etch with the Xen packages
> from Debian Unstable (currently 3.0.2+hg9697-1). I am running all
> packaged software so this is probably slightly out of date.
It would be interesting to see if this can be repro'ed on a tree built
from a recent xen-unstable.hg.
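A sketch of pulling and building such a tree (assuming the historical xenbits URL and the usual "make world" build of that era):

# Clone and build a current xen-unstable tree:
hg clone http://xenbits.xensource.com/xen-unstable.hg
cd xen-unstable.hg
make world && make install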
2005 Aug 11
1
How to prevent loading qla2300.o module
I apologize in advance as this is not really a CentOS-specific issue,
but I don't know where else to turn. We are configuring some Dell
1850s for a customer, and they have all been configured with a QLogic
dual channel HBA. But no storage has been (or will be in the near
future) attached to these HBAs. During the boot process (both for the
kickstart and after the OS has been
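For reference, a sketch of keeping the module from loading on a CentOS 4 era box (the install-to-/bin/true idiom; check modprobe.conf(5) for your release):

# Keep the qla2300 driver from ever loading:
echo "install qla2300 /bin/true" >> /etc/modprobe.conf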
2009 Mar 25
0
CentOS won't shutdown ... or do anything else
I started to have problems similar to ones described in the past on
this list, but could not find any kind of resolution. I ran
lsmod, ran the mount command, and for fun ran strace on shutdown to
see where it is hanging, and ltrace as well.
Any thoughts?
Module                  Size  Used by
parport_pc             28033  0
lp                     15661  0
parport                38153  2
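For reference, a sketch of an strace invocation that captures where shutdown hangs (the output file name is arbitrary):

# Follow forked children and log all syscalls to a file:
strace -f -o /tmp/shutdown.trace /sbin/shutdown -h now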
2007 Sep 23
3
ext3 file system becoming read only
Hi
In our office environment we have a few servers, mostly database servers, and
yesterday it happened
for one application server (the first time): the partition is becoming "read only".
I was checking the archives and found what may be similar issues in the
July 2007 archives.
If someone can describe how those were solved, that would be really helpful.
In our case, just as the problem started we found
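ext3 typically remounts itself read-only after an I/O or journal error; a sketch of the usual first checks (the device name is hypothetical):

# Find the error that triggered the remount:
dmesg | grep -i -e "ext3" -e "i/o error"
# See how the filesystem is configured to react to errors:
tune2fs -l /dev/sda3 | grep -i "errors behavior"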
2004 Oct 31
1
which kernel should be used to solve sendfile problem on linux?
I have seen some folks on the list with what seems to be a similar issue
to mine. I work in a large FX shop and this has been a problem for us
since the 3.0.7 upgrade. If anyone has a known-working config, could you
share your setup info, please?
Jerry, what kernel and other system versions do you test on?
The system has a 2.4.21 kernel with XFS filesystems. It is a Red Hat 9
machine on
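For anyone hitting this, the usual workaround while debugging is to turn sendfile off; "use sendfile" is a standard Samba parameter (shown under [global], it can also be set per share):

# In smb.conf:
[global]
    use sendfile = no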
2007 May 08
1
kernel: kernel BUG at include/asm/spinlock.h:109!
Hi,
We are running a server as an NFS head connected to a CLARiiON; every
couple of days our box falls over with the following message in
syslog.
Has anyone had this happen to their boxen?
RHEL4 U4
May 8 12:23:52 ruchba kernel: Assertion failure in
log_do_checkpoint() at fs/jbd/checkpoint.c:363: "drop_count != 0 ||
cleanup_ret != 0"
May 8 12:23:52 ruchba kernel: ------------[ cut
2007 Nov 16
0
ZFS mirror and sun STK 2540 FC array
Hi all,
we have just bought a Sun X2200 M2 (4 GB RAM / 2 Opteron 2214 / 2 250 GB
SATA2 disks, Solaris 10 Update 4)
and a Sun STK 2540 FC array (8 SAS disks of 146 GB, 1 RAID controller).
The server is attached to the array with a single 4 Gb Fibre Channel link.
I want to make a mirror using ZFS with this array.
I have created 2 volumes on the array
in RAID0 (stripe of 128 KB) presented to the host
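For illustration, the ZFS side of this is a single two-way mirror once the two volumes are visible to Solaris; a sketch with hypothetical device names:

# Mirror the two array volumes with ZFS:
zpool create tank mirror c4t0d0 c4t1d0
zpool status tank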
2010 Jul 18
3
Proxy IMAP/POP/ManageSieve/SMTP in a large cluster enviroment
Hi to all on the list, we are setting up a test lab for a large-scale
mail system with these requirements:
- Scale to maybe 1 million users (only for testing).
- Server side filters.
- User quotas.
- High concurrency.
- High performance and High Availability.
We plan to test this using RHEL5 and maybe RHEL6.
As storage we are going to use an HP EVA 8400 FC (8 Gb/s)
We defined this
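For the proxy tier, Dovecot forwards logins via passdb extra fields; a minimal sketch in Dovecot 2.x syntax (the backend address is hypothetical, and a real setup would pick the host per user rather than statically):

# dovecot.conf: proxy every login to one fixed backend (illustration only)
passdb {
  driver = static
  args = proxy=y host=10.1.1.10 nopassword=y
}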
2006 May 10
0
Unable to boot Xen (3.0.2) on Dell Poweredge 1855
This is a really long email...my apologies.
Hardware platform: Dell PowerEdge 1855 (blade), dual Xeon 2.8 GHz; 12 MB
RAM; PERC 4/IM; mirrored 73 GB U320 SCSI; QLogic 2312 PCI Fibre Channel
HBAs.
OS: (currently) CentOS 4.3, gcc 3.4.5
Software: Xen 3.0.2-2
Attempting to boot off local SCSI RAID (mirror)...not the SAN.
I am more of a Solaris person, so forgive me if I don't know all
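For comparison, a typical GRUB (legacy) stanza for booting Xen 3.0.x on CentOS 4 looked roughly like this (kernel and initrd file names are hypothetical):

title Xen 3.0.2 / CentOS 4.3
    root (hd0,0)
    kernel /xen.gz dom0_mem=512M
    module /vmlinuz-2.6.16-xen0 root=/dev/sda3 ro
    module /initrd-2.6.16-xen0.img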
2005 Apr 15
3
IBM BladeCenter HS20 blades
Greetings,
We have purchased an IBM BladeCenter and I am in the process of testing
Linux installation on these things (boot off SAN, i.e. the qla2300 driver;
not using internal drives). My distro of choice is Debian; however,
since I'm really not interested in trying to hand-compile all the
drivers, I decided to try CentOS (which I'm so far very impressed with).
On boot, as with the
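If the installer doesn't pick up the HBA on its own, RHEL/CentOS 4 era kickstart had a directive to force a driver; a sketch (syntax as I recall it from the kickstart docs of that release, so verify before relying on it):

# In ks.cfg, force-load the QLogic driver during install:
device scsi qla2300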
2006 May 26
0
Problem booting on SAN xen 3.0.2
I'm trying to get Xen 3.0.2 working on an IBM BladeCenter with a QLogic 2300
Fibre Channel device, booting from a SAN.
I have compiled several versions of the Xen kernel and I'm unable to get an
initrd with the qla2300 or qla2xxx driver discovering any drive. I have tried
several options when compiling Xen, but the result is always the same: the
initrd loads the driver but each
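For reference, a quick way to see whether a loaded qla driver actually found any targets (a sketch; the qla2xxx proc directory layout varied by driver version):

# List the SCSI devices the kernel knows about:
cat /proc/scsi/scsi
# Per-HBA state exposed by the QLogic driver, if present:
ls /proc/scsi/qla2xxx/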
2009 Dec 22
2
Mirror of SAN Boxes with ZFS ? (split site mirror)
Hello,
I'm thinking about a setup that looks like this:
- 2 head nodes with FC connectivity (OpenSolaris)
- 2 backend FC storages (disk shelves with RAID controllers, each presenting a huge 15 TB RAID5)
- 2 data centers (1 km apart, connected by dark fibre)
- one head node and one storage in each data center
(Sorry for this ascii art :)
( Data Center 1) <--1km--> (Data
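With one LUN presented from each site, the split mirror itself is just a two-way ZFS mirror; a sketch (hypothetical device names, one per data center):

# One backend LUN per site, mirrored by ZFS on the active head node:
zpool create sanpool mirror c5t0d0 c6t0d0
# After a site outage, ZFS resilvers the returning side:
zpool status sanpool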
2012 Aug 10
1
virtio-scsi <-> vhost multi lun/adapter performance results with 3.6-rc0
Hi folks,
The following are initial virtio-scsi + target vhost benchmark results
using multiple target LUNs per vhost and multiple virtio PCI adapters to
scale the total number of virtio-scsi LUNs into a single KVM guest.
The test setup is currently using 4x SCSI LUNs per vhost WWPN, with 8x
virtio PCI adapters for a total of 32x 500MB ramdisk LUNs into a single
guest, along with each backend
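For reference, attaching one LUN to a guest through a virtio-scsi adapter looks roughly like this in plain QEMU (a minimal sketch; the setup described above uses vhost target backends instead, and the disk path is hypothetical):

# One virtio-scsi adapter with a single disk behind it:
qemu-system-x86_64 -m 1024 -enable-kvm \
  -device virtio-scsi-pci,id=scsi0 \
  -drive file=/tmp/lun0.img,if=none,id=lun0,format=raw \
  -device scsi-hd,drive=lun0,bus=scsi0.0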