Displaying 20 results from an estimated 2000 matches similar to: "[CENTOS6] mtrr_cleanup: can not find optimal value - during server startup"
2012 Apr 12
1
6.2 x86_64 "mtrr_cleanup: can not find optimal value"
Hi,
I have a server that has been running 5.x - 5.8 for a few years without issue, and I decided to move it to a fresh install of 6.2. The first thing I noticed is that a good part of the log consists of these mtrr messages, finally ending with
"mtrr_cleanup: can not find optimal value" and "please specify mtrr_gran_size/mtrr_chunk_size". I have been searching around and reading the kernel docs
2012 Nov 03
0
mtrr_gran_size and mtrr_chunk_size
Good Day All,
Today I looked at the dmesg log and noticed the following messages
regarding mtrr_gran_size/mtrr_chunk_size.
I am currently running CentOS 6.3; I also installed CentOS 6.2 and 6.1 and
saw the same errors. When I installed CentOS 5.8 on the same laptop,
I did not see these errors.
$ lsb_release -a
LSB Version:
2013 Jun 13
0
*BAD*gran_size
CentOS 6.4, current, Dell PE R720.
Had an issue today with a bus error, and googling only found two-year-old
references to problems with non-Dell drives (we just added two WD Reds
and RAIDed them with mdadm).
So, looking through dmesg and /var/log/messages, I ran into a *lot* of
gran_size: 128K chunk_size: 256K num_reg: 10 lose cover RAM: 0G
gran_size: 128K chunk_size:
2004 Aug 06
0
Taiwan Hongda International Trade is sincerely seeking partners in your area, selling computer parts, mobile phones, and notebook computers. Contact: Chen Shijie, Tel: 013850704389
Taiwan Hongda International Trade Co. is sincerely seeking partners in your area, selling computers, parts, mobile phones, and notebook computers. (Reference price list below.)
We guarantee that our products are brand-new and original, with payment on delivery. For anything unclear, please enquire in person or by phone:
Contact: Chen Shijie, Tel: 013850704389
1. Motorola (unit: yuan)
T189--400 T190--350 T191--500 C289--550
T2988--350 A388--1600 A6288--1300 V70--2000
V8088--800 V998++ --700 V60+ --1200 V66+GPRS--950
Nokia
3310--400 3330--450
2011 Oct 26
0
PCIe errors handled by OS
Does anybody have experience with
the following CentOS 6 warnings in logwatch, please?
WARNING: Kernel Errors Present
ACPI Error (psargs-0359): [ ...: 1 Time(s)
pci 0000:00:01.0: PCIe errors handled by OS. ...: 1 Time(s)
pci 0000:00:1c.0: PCIe errors handled by OS. ...: 1 Time(s)
pci 0000:00:1c.5: PCIe errors handled by OS. ...: 1 Time(s)
pci 0000:00:1c.6: PCIe errors
2012 Sep 29
2
Doubled up RAM to 32 GB - now how to speed up a LAPP server?
Dear CentOS users,
I run a small Facebook game on a CentOS 6.3 machine
with PostgreSQL 8.4.3 + a few PHP scripts + 1 Perl daemon,
and even though the server worked OK,
I suggested to my users that we double the RAM
to 32 GB, and they collected money for that.
Now my problem is that I don't know which knob
to turn and how to really use the additional memory.
Below is my top output at the
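A common starting point for a machine like this (values illustrative only, not taken from the thread) is to let PostgreSQL actually claim the new RAM in postgresql.conf, then restart the server:

  shared_buffers = 8GB            # ~25% of RAM is a frequent rule of thumb
  effective_cache_size = 24GB     # planner hint: how much the OS page cache holds
  work_mem = 16MB                 # per-sort/per-hash memory, multiplied per backend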
2011 Sep 26
4
Hard I/O lockup with EL6
I'm trying to figure out why 2 machines have a "hard I/O lock" on the HDD when
running EL6.
I have 4 identical machines, all were stable with EL5. 2 work great with EL6,
2 do not. I've checked motherboard BIOS versions and settings, SAS controller
BIOS versions and settings, they are the same between the working and non-
working systems.
When booting a non-working system,
2013 Jun 08
0
[PATCH] Btrfs-progs: elaborate error handling of mkfs
$./mkfs.btrfs -f /dev/sdd -b 2M
[...]
mkfs.btrfs: volumes.c:845: btrfs_alloc_chunk: Assertion `!(ret)' failed.
Aborted (core dumped).
We should return error to userspace instead of the above.
Signed-off-by: Liu Bo <bo.li.liu@oracle.com>
---
mkfs.c | 23 +++++++++++++++--------
volumes.c | 16 +++++++++++-----
2 files changed, 26 insertions(+), 13 deletions(-)
diff --git
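The shape of the fix is the usual C error-propagation pattern; a self-contained sketch (alloc_chunk is a hypothetical stand-in for btrfs_alloc_chunk, and -28 is just an illustrative -ENOSPC):

  #include <stdio.h>

  /* stand-in for btrfs_alloc_chunk(): returns a negative errno on failure */
  static int alloc_chunk(void)
  {
          return -28;   /* -ENOSPC, purely for illustration */
  }

  int main(void)
  {
          int ret = alloc_chunk();

          if (ret < 0) {
                  /* report and return instead of assert(!ret) dumping core */
                  fprintf(stderr, "mkfs.btrfs: unable to allocate chunk: %d\n", ret);
                  return 1;
          }
          return 0;
  }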
2020 May 24
3
[PATCH] file_checksum() optimization
When a whole-file checksum is performed, hashing was done in 64-byte
blocks, causing overhead and limiting performance.
Testing showed the performance improvement climbing quickly from
64 to 512 bytes, with diminishing returns above that; 4096 was where it
seemed to plateau for me. I re-used CHUNK_SIZE (32 kB) as it already
exists and should be fine to use here anyway.
Noticed this because
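The idea reads roughly like this C sketch (digest_update and count_bytes are hypothetical stand-ins, not rsync's actual routines):

  #include <stdio.h>

  #define CHUNK_SIZE (32 * 1024)   /* the existing 32 kB constant re-used by the patch */

  static void file_checksum(FILE *fp,
                            void (*digest_update)(const unsigned char *, size_t))
  {
          unsigned char buf[CHUNK_SIZE];
          size_t n;

          /* one hash-update call per 32 kB block instead of per 64 bytes */
          while ((n = fread(buf, 1, sizeof buf, fp)) > 0)
                  digest_update(buf, n);
  }

  /* demo: a byte counter standing in for the real digest */
  static size_t total;
  static void count_bytes(const unsigned char *p, size_t n) { (void)p; total += n; }

  int main(void)
  {
          FILE *fp = fopen("testfile", "rb");   /* hypothetical input file */
          if (!fp)
                  return 1;
          file_checksum(fp, count_bytes);
          fclose(fp);
          printf("%zu bytes fed to the digest\n", total);
          return 0;
  }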
2018 Apr 27
0
Size of produced binaries when compiling llvm & clang sources
On Fri, Apr 27, 2018 at 6:21 PM, Manuel Yguel via llvm-dev
<llvm-dev at lists.llvm.org> wrote:
> Dear llvm developers,
> I followed the tutorial to build llvm and clang provided here:
> https://clang.llvm.org/get_started.html
>
> The sources are in sync with the subversion repository, and I ended up with more
> than 30GB of binaries in llvm/bin as shown at the end of this
2018 Apr 27
3
Size of produced binaries when compiling llvm & clang sources
Dear llvm developers,
I followed the tutorial to build llvm and clang provided here:
https://clang.llvm.org/get_started.html
The sources are in sync with the subversion repository, and I ended up with
more than 30GB of binaries in llvm/bin as shown at the end of this message.
I assume I did something wrong, but I did not find any entry in the docs
that would help me understand how to reduce the size of
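Binaries of that size usually indicate a Debug configuration (LLVM's default build type); a hedged example using the cmake workflow from that page:

  $ cmake -DCMAKE_BUILD_TYPE=Release ../llvm
  $ make

A Release build is typically an order of magnitude smaller.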
2013 Apr 23
0
Fw: Error with function - USING library(plyr)
Dear R forum,
Please refer to my query regarding "Error with function". I forgot to mention that I am using the "plyr" library.
Sorry for the inconvenience.
Regards
Katherine
--- On Tue, 23/4/13, Katherine Gobin <katherine_gobin@yahoo.com> wrote:
From: Katherine Gobin <katherine_gobin@yahoo.com>
Subject: [R] Error with function
To: r-help@r-project.org
Date:
2013 Apr 23
0
Error with function
Dear R forum,
I have a data.frame as given below:
df = data.frame(tran = c("tran1", "tran2", "tran3", "tran4"), tenor = c("2w", "1m", "7m", "3m"))
Also, I define
libor_tenor_labels = as.character(c("o_n", "1w", "2w",
"1m", "2m", "3m", "4m",
2010 Aug 31
0
istream_read like zlib, but without zlib
Hi Timo!
I made some modifications to stream_read in zlib. I removed all the zlib parts,
because I don't need them, but I need to read an istream in order to change it.
Well, I created a size_t called supersize, which is a substitute for
stream->zs.avail_in.
The trouble is, my debug file has a lot of "READ Plugin\n" lines, and I think
it's because my read has become a loop, and I think it's because
2013 Apr 23
0
Fw: " PROBLEM SOLVED" - Error with function
Dear R forum
Please refer to my query captioned "Error with function".
I had missed a bracket ")" in the return statement, and hence I was getting the error. I had struggled for more than 2 hours to find the problem and only then posted to the forum. I sincerely apologize to all for consuming your valuable time.
Thanks for the efforts at your end.
Regards
Katherine
--- On
2019 May 23
0
[PATCH v2 2/8] s390/cio: introduce DMA pools to cio
From: Halil Pasic <pasic at linux.ibm.com>
To support protected virtualization cio will need to make sure the
memory used for communication with the hypervisor is DMA memory.
Let us introduce one global pool for cio, and some tools for pools seated
at individual devices.
Our DMA pools are implemented as a gen_pool backed with DMA pages. The
idea is to avoid each allocation effectively wasting a
2019 May 29
0
[PATCH v3 2/8] s390/cio: introduce DMA pools to cio
From: Halil Pasic <pasic at linux.ibm.com>
To support protected virtualization cio will need to make sure the
memory used for communication with the hypervisor is DMA memory.
Let us introduce one global pool for cio.
Our DMA pools are implemented as a gen_pool backed with DMA pages. The
idea is to avoid each allocation effectively wasting a page, as we
typically allocate much less than
2009 Apr 15
3
MySQL On ZFS Performance(fsync) Problem?
Hi, all
I did some tests of MySQL's insert performance on ZFS and ran into a big
performance problem; *I'm not sure what the problem is*.
Environment:
2 Intel X5560 (8 cores), 12GB RAM, 7 SLC SSDs (Intel).
A Java client runs 8 threads concurrently inserting into one InnoDB table:
*~600 qps when sync_binlog=1 & innodb_flush_log_at_trx_commit=1
~600 qps when sync_binlog=10
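The usual ZFS-side tuning for an fsync-bound InnoDB load (hedged suggestions, not results from this thread; tank/mysql and <ssd-device> are placeholders) is to match the dataset record size to InnoDB's 16 kB page and give the ZIL a dedicated log device:

  zfs set recordsize=16k tank/mysql
  zpool add tank log <ssd-device>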
2019 May 12
0
[PATCH 05/10] s390/cio: introduce DMA pools to cio
On Fri, 10 May 2019 16:10:13 +0200
Cornelia Huck <cohuck at redhat.com> wrote:
> On Fri, 10 May 2019 00:11:12 +0200
> Halil Pasic <pasic at linux.ibm.com> wrote:
>
> > On Thu, 9 May 2019 12:11:06 +0200
> > Cornelia Huck <cohuck at redhat.com> wrote:
> >
> > > On Wed, 8 May 2019 23:22:10 +0200
> > > Halil Pasic <pasic at
2019 Jun 06
0
[PATCH v4 2/8] s390/cio: introduce DMA pools to cio
To support protected virtualization cio will need to make sure the
memory used for communication with the hypervisor is DMA memory.
Let us introduce one global pool for cio.
Our DMA pools are implemented as a gen_pool backed with DMA pages. The
idea is to avoid each allocation effectively wasting a page, as we
typically allocate much less than PAGE_SIZE.
Signed-off-by: Halil Pasic <pasic at
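A minimal kernel-style C sketch of the gen_pool-backed approach described in these patches (names like cio_dma_pool and dma_pool_init are illustrative, not the actual cio code): carve one DMA page into many small allocations, so a sub-PAGE_SIZE buffer no longer wastes a whole page.

  #include <linux/genalloc.h>
  #include <linux/dma-mapping.h>

  static struct gen_pool *cio_dma_pool;   /* hypothetical pool name */

  static int dma_pool_init(struct device *dev)
  {
          dma_addr_t dma;
          void *cpu;

          cio_dma_pool = gen_pool_create(3, -1);   /* min alloc 2^3 bytes, any node */
          if (!cio_dma_pool)
                  return -ENOMEM;

          cpu = dma_alloc_coherent(dev, PAGE_SIZE, &dma, GFP_KERNEL);
          if (!cpu)
                  return -ENOMEM;

          /* hand the whole DMA page to the pool; subsequent
           * gen_pool_alloc(cio_dma_pool, size) calls carve small
           * pieces out of it instead of burning a page each */
          return gen_pool_add_virt(cio_dma_pool, (unsigned long)cpu,
                                   dma, PAGE_SIZE, -1);
  }

Callers would then use gen_pool_alloc()/gen_pool_free() for their sub-page DMA buffers.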