Displaying 20 results from an estimated 500 matches similar to: "About kernel smp and acpi on hardware vm."
2008 May 17
0
xen kernel showing only one processor on SMP machine
Hi there,
I'm running an ibm xseries 345/dual Xeon 2.66GHz, 2.6.18-53.1.14.el5xen
#1 SMP, and cat /proc/cpuinfo shows me only one processor. Apparently
something strange happening with ACPI. What am I doing wrong? Thank you!
(sorry for the cross post, didn't realize there was a virt list first
time around!)
Selected bits from dmesg:
-------------------------
Using ACPI (MADT) for SMP configuration information
SMP alternatives: switching to UP code
ACPI: Core revision
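A few commands narrow this down (a sketch; xm is the Xen 3.x toolstack on el5, and dom0_max_vcpus is the usual hypervisor boot option, not something quoted in this post):
grep -c '^processor' /proc/cpuinfo            # CPUs the dom0 kernel brought up
dmesg | grep -iE 'acpi|smp alternatives'      # did it switch to UP code?
xm vcpu-list Domain-0                         # vcpus the hypervisor gave dom0
grep -i dom0_max_vcpus /boot/grub/grub.conf   # a cap here explains one CPU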
2008 May 18
2
xen kernel showing only one processor on SMP
What if you boot with another kernel?
If you get the same result, maybe a BIOS update is required for your server.
>----------------------------------------------------------------------
>
>Message: 1
>Date: Sat, 17 May 2008 14:51:45 -0700
>From: gen2 <gen2 at planetofidiots.com>
>Subject: [CentOS-virt] xen kernel showing only one processor on SMP
> machine
2008 Mar 03
1
qemu-dm I/O request not ready
I'm using xen to run a real linux system. I've configured a partition for my
system. When I'm installing Debian Etch everything is fine, but when I'm
starting the installed system I get a hang.
My xen disk conf is:
disk = [ 'phy:/dev/root/node1,ioemu:hda,w' ]
device_model =
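When qemu-dm reports "I/O request not ready", the device-model log is the first place to look (a sketch; these are the stock Xen 3.x locations, not paths given in the post):
tail /var/log/xen/qemu-dm-<domain>.log   # per-guest device-model log
xm list                                  # confirm the domain's state
xm log | tail                            # recent xend messages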
2005 Nov 19
1
Bad disk?
Hi, I get the output below from dmesg. The server seems to run fine, but it
does worry me. What should I do? Other than take a backup... :-)
hda: dma_timer_expiry: dma status == 0x21
hda: DMA timeout error
hda: dma timeout error: status=0xd0 { Busy }
ide: failed opcode was: unknown
hda: DMA disabled
ide0: reset: success
hda: dma_timer_expiry: dma status == 0x21
hda: dma_intr: status=0x51 { DriveReady
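A sensible next step is to ask the drive itself via SMART (a sketch; smartmontools is an assumption, the post doesn't mention it):
smartctl -H /dev/hda           # overall health verdict
smartctl -l error /dev/hda     # the drive's internal error log
smartctl -t short /dev/hda     # start a short offline self-test
smartctl -l selftest /dev/hda  # read the result a few minutes later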
2010 Mar 10
3
Logrotate/cron and major I/O contention with KVM.
Is anyone else seeing major I/O peaks due to logrotate or other jobs
running simultaneously across multiple guests? I have one KVM server
running CentOS 5.4 with local disk that is seriously suffering as most
of the guests rotate their syslog at the same time.
Looking at the KVM server I'm seeing
11:00:01 PM       CPU     %user     %nice   %system   %iowait    %steal     %idle
03:40:01 AM
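One common mitigation (a sketch, not something from the thread): put a randomized delay in front of each guest's logrotate job so the guests stop hitting the host disk in the same minute. Assuming the job runs from a cron.d file under bash:
SHELL=/bin/bash
# /etc/cron.d/logrotate-staggered on each guest, replacing the cron.daily entry
02 4 * * * root sleep $((RANDOM % 1800)) && /usr/sbin/logrotate /etc/logrotate.conf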
2007 Mar 27
1
NPTL degraded?
I'm not sure what has happened, but for some reason my CentOS 4 system is
showing me using the legacy LinuxThreads library instead of NPTL??
# getconf GNU_LIBPTHREAD_VERSION
linuxthreads-0.10
# uname -rimpv
2.6.9-42.0.10.EL #1 Tue Feb 27 09:24:42 EST 2007 i686 athlon i386
I could have sworn it used to show:
NPTL 2.3.4
like a couple of other machines. Any ideas what might have changed this, or
how I get it back??? the devel
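The usual culprit on RHEL/CentOS 4 is a stale LD_ASSUME_KERNEL in the environment, which makes glibc fall back to the legacy LinuxThreads library (a sketch; the variable is a known glibc switch, it is not confirmed in this post):
getconf GNU_LIBPTHREAD_VERSION                           # expect: NPTL 2.3.4
env | grep LD_ASSUME_KERNEL                              # anything set here is suspect
LD_ASSUME_KERNEL=2.4.19 getconf GNU_LIBPTHREAD_VERSION   # reproduces: linuxthreads-0.10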
2018 Jun 19
3
Naming clash: -DCLS=n and CLS in code
On Tue, 19 Jun 2018 at 20:46, Bruce Hoult <brucehoult at sifive.com> wrote:
> Furthermore .. in the articles you reference, the -DCLS=$(getconf LEVEL1_DCACHE_LINESIZE) is passed when compiling the user's program -- one doing extensive blocked matrix operations -- and not when building the *compiler*.
It's worse. At least in the first case, the code looks like it
wouldn't even
2018 Jun 19
6
Naming clash: -DCLS=n and CLS in code
Tim Northover wrote on 06/19/2018 08:54 PM:
>>> Why are you passing that argument in the first place? The compiler
>>> completely ignores it.
>>
>> CLS stands for cacheline size. It is an important parameter
>> for optimization of the generated code, at least with gcc/g++.
>> -DCLS=n should have the same importance as, for example, -DNDEBUG.
>
> The
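For context, the pattern being debated compiles user code, not the compiler, with the probed cache-line size (the getconf call is from the thread; the file name blocked_matmul.c is a hypothetical example):
gcc -O2 -DCLS=$(getconf LEVEL1_DCACHE_LINESIZE) blocked_matmul.c -o blocked_matmul
# the macro only matters if the user's source actually reads it, e.g.
#   double block[CLS / sizeof(double)];  /* cache-line-sized blocking */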
2010 Aug 05
0
No subject
[snip]
Aug 21 09:08:49 BUBBLE kernel: [ 32.126698] PGD 2b01067 PUD bc850067 PMD 242d067 PTE fffffffffffff237
Aug 21 09:08:49 BUBBLE kernel: [ 32.126716] CPU 0
Aug 21 09:08:49 BUBBLE kernel: [ 32.126720] Modules linked in: cpufreq_powersave cpufreq_conservative cpufreq_userspace cpufreq_stats parport_pc ppdev lp parport bridge stp xen_evtchn xenfs binfmt_misc uinput fuse ext2 hdap
2008 Sep 19
0
[PATCH] linux: fix processor handling in presence of external control
- avoid leaking stuff in acpi_processor_remove()
- remove a pointless change to native code in acpi_processor_hotplug()
(struct acpi_processor's id field is unsigned)
- don't set up processor_extcntl_ops when nothing is controlled by Xen
(thus processor_cntl_external() will always return false, allowing
ACPI code to retain native behavior)
As usual, written and tested on
2010 Oct 24
0
BUG: soft lockup - CPU#7 stuck for 61s! [udisks-dm-expor:11772]
What does it mean? I've never seen that before.
xen4 on debian squeeze 2.6.32-5-xen-amd64
[22077.208077] BUG: soft lockup - CPU#7 stuck for 61s! [udisks-dm-expor:11772]
[22077.208139] Modules linked in: ext4 jbd2 crc16 xfs exportfs xt_tcpudp nf_conntrack_ipv4 nf_defrag_ipv4 xt_state nf_conntrack xt_physdev bridge stp ip6table_filter ip6_tables iptable_filter ip_tables x_tables xen_evtchn xenfs fuse
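A first-response capture along these lines helps narrow a soft lockup down (a sketch; assumes magic-sysrq is usable on the console):
echo 1 > /proc/sys/kernel/sysrq   # enable magic sysrq if it isn't already
echo l > /proc/sysrq-trigger      # backtrace of all active CPUs
echo w > /proc/sysrq-trigger      # dump tasks in uninterruptible sleep
dmesg | tail -n 100               # collect the resulting traces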
2007 Jan 12
2
CentOS 4.4 on mini-itx
Hi,
I have an EPIA mini-itx board that I have been running Fedora 4 on. I want to
install CentOS 4.4 on it, since FC4 is no longer supported. The install goes fine,
but when I reboot the machine after the install I get errors like the following:
hda:<4>hda: dma_timer_expiry: dma status == 0x21
There are other errors about not being able to access the disk and eventually
the kernel panics.
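VIA EPIA IDE controllers are a classic case where turning DMA off gets the box stable (an assumption based on the symptom, not advice from the thread):
hdparm -d0 /dev/hda   # disable DMA for the drive at runtime
hdparm -d /dev/hda    # confirm: using_dma = 0 (off)
# or permanently, via the kernel boot line: ide=nodma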
2006 Aug 16
1
DMA in HVM guest on x86_64
I've been following unstable day to day with mercurial, but I'm still
having a problem with my HVM testing.
I'm using the i686 Centos + Bluecurve installer and I get the following
error in the guest during disk formatting:
<4>hda: dma_timer_expiry: dma status == 0x21
<4>hda: DMA timeout error
<4>hda: dma timeout error: status=0x58 { DriveReady SeekComplete
2008 Apr 25
1
dying hd on live legacy system...
We have an old 3.x server whose hd is dying (kernel: hda: dma_timer_expiry:
dma status == 0x61) and accessing certain files just crashes the system with
a reboot.
We have moved as many files as we easily could off to an NFS server.
The system has been heavily modified (all using rpms) from baseline.
What is the most practical method to replace the hard drive?
-Jason Pyeron
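For the drive swap itself, a block-level rescue copy is the usual practical route (a sketch; GNU ddrescue and the replacement device /dev/hdb are assumptions, not from the post):
ddrescue -f -n  /dev/hda /dev/hdb /root/rescue.map   # fast pass, skip bad areas
ddrescue -f -r3 /dev/hda /dev/hdb /root/rescue.map   # then retry bad sectors 3 times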
2008 Jan 23
3
asterisk optimization
hi,
i'm testing asterisk 1.4/1.2 in the following scenario:
centos5 / quad Xeon E5335 2.0GHz
- test clients behind NAT
- 1500+ testing instances - reregister option from 1 min to 1 hour
- qualify set to 5000
top shows over 100% cpu, and cpu cores sometimes go to 95%.
with htop i see ~16 threads, but only one child has ~95% cpu
(how can i get info about that thread? what is it doing?)
what is
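To answer the embedded question: per-thread CPU and a stack trace of the hot thread can be captured like this (a sketch; attaching gdb briefly pauses asterisk):
top -H -p $(pidof asterisk)    # -H lists individual threads with their CPU%
gdb -p $(pidof asterisk) -batch -ex 'thread apply all bt'   # stacks of all threads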
2018 Jun 19
2
Naming clash: -DCLS=n and CLS in code
Tim Northover via llvm-dev wrote on 06/19/2018 09:22 PM:
> On Tue, 19 Jun 2018 at 20:12, U.Mutlu <um at mutluit.com> wrote:
>> You can find more examples by searching for "DCLS getconf LEVEL1_DCACHE_LINESIZE".
>
> Frankly, it all looks like cargo-cult optimization flagomancy. I'd
> suggest you drop it, it's not magical I promise.
Hey, I'm working on
2011 Jun 27
4
How much L1/L2 cache does my CPU have?
Hi
Could anybody explain to me how to check how much L1/L2 cache my CPU has?
I'm using CentOS 5.6
cat /proc/cpuinfo | grep CPU
model name : Intel(R) Core(TM)2 Duo CPU T9300 @ 2.50GHz
model name : Intel(R) Core(TM)2 Duo CPU T9300 @ 2.50GHz
[Diagram of a generic dual-core processor, with CPU-local level 1 caches and
a shared, on-die level 2 cache.]
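Several ways to read the sizes directly (a sketch; the sysfs path exists on 2.6 kernels such as the one in CentOS 5.6):
getconf -a | grep -i CACHE                           # glibc's view of the cache sizes
cat /sys/devices/system/cpu/cpu0/cache/index*/size   # per-level sizes from sysfs
grep 'cache size' /proc/cpuinfo                      # the L2 figure cpuinfo reports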
2011 Sep 29
2
[LLVMdev] Building bitcode modules
Hi,
> What compiler are you using to build with? I've made it default to clang with less looking around for llvm-gcc, so there may be an issue there. What is your configure line? What host are you trying to build on?
First I compiled llvm/clang with "gcc (Gentoo 4.5.2 p1.1,
pie-0.4.5) 4.5.2". Then I installed llvm/clang. They are in the
path:
$ clang
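The questions in the quoted text map onto checks like these (a sketch; the --prefix value is a hypothetical example):
which clang && clang --version                        # what 'clang' resolves to on PATH
./configure CC=gcc CXX=g++ --prefix=/usr/local/llvm   # pin the host compiler explicitly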
2011 Sep 29
0
[LLVMdev] Building bitcode modules
On Sep 29, 2011, at 12:53 AM, Speziale Ettore wrote:
> Hi,
>
>> What compiler are you using to build with? I've made it default to clang with less looking around for llvm-gcc, so there may be an issue there. What is your configure line? What host are you trying to build on?
>
> First I compiled llvm/clang with "gcc (Gentoo 4.5.2 p1.1,
> pie-0.4.5)