2012 Jan 15
0
[CENTOS6] mtrr_cleanup: can not find optimal value - during server startup
After a fresh installation of CentOS 6.2 on my server, I get the following errors
in my dmesg output:
-------
MTRR default type: uncachable
MTRR fixed ranges enabled:
00000-9FFFF write-back
A0000-BFFFF uncachable
C0000-D7FFF write-protect
D8000-E7FFF uncachable
E8000-FFFFF write-protect
MTRR variable ranges enabled:
0 base 000000000 mask C00000000 write-back
1 base 400000000 mask
2012 Nov 03
0
mtrr_gran_size and mtrr_chunk_size
Good Day All,
Today I looked at the dmesg log and noticed the following messages
regarding mtrr_gran_size/mtrr_chunk_size.
I am currently running CentOS 6.3; I also installed CentOS 6.2 and 6.1 and
saw the same errors. When I installed CentOS 5.8 on the same laptop,
I did not see these errors.
$ lsb_release -a
LSB Version:
2013 Jun 13
0
*BAD*gran_size
CentOS 6.4, current, Dell PE R720.
Had an issue today with a bus error, and googling only found two-year-old
references to problems with non-Dell drives (we just added two WD Reds
and put them in an mdadm RAID).
So, looking through dmesg and /var/log/messages, I ran into a *lot* of
gran_size: 128K    chunk_size: 256K    num_reg: 10    lose cover RAM: 0G
gran_size: 128K    chunk_size:
2004 Aug 06
0
Taiwan Hongda International Trading sincerely seeks partners in your area, selling computer accessories, mobile phones, and notebook computers. Contact: Chen Shijie  Tel: 013850704389
Taiwan Hongda International Trading Co. sincerely seeks partners in your area, selling computers, accessories, mobile phones, and notebook computers. (Reference price list below.)
Our company guarantees all products are brand new and factory original; payment on delivery. For anything unclear, please visit or call to inquire:
Contact: Chen Shijie  Tel: 013850704389
1. Motorola  (unit: yuan)
T189--400 T190--350 T191--500 C289--550
T2988--350 A388--1600 A6288--1300 V70--2000
V8088--800 V998++ --700 V60+ --1200 V66+GPRS--950
Nokia
3310--400 3330--450
2002 Jun 29
0
Computer parts at shockingly low snap-up prices samba
Taisheng sends you greetings!
Our company has long been engaged in international trade. To tap market potential and expand our scale of operations, we are looking for a trade outlet in your area,
and hereby submit this price list for your unit's reference. We offer first-class quality, first-class service, door-to-door delivery,
payment on delivery, wholesale and retail alike. Friends from all walks of life are welcome to call for inquiries and support. Many thanks!!!
Taisheng Company
China IT Trade Dept.: Qin Longtu
Please do not reply directly; if interested, please call ------0139-59726696
1. Notebook computers (worldwide warranty, three years)
PC card for notebook/mobile-phone internet access----------1450 yuan
A. Sony SONY
SR/27K (4500 yuan)
2002 Jul 23
0
Computer parts at shocking prices samba
Taizheng sends you greetings!
Our company has long been engaged in international trade. To tap market potential and expand our scale of operations, we are looking for a trade outlet in your area,
and hereby submit this price list for your unit's reference. We offer first-class quality, first-class service, door-to-door delivery,
payment on delivery, wholesale and retail alike. Friends from all walks of life are welcome to call for inquiries and support. Many thanks!!!
Taiwan Zhengrong International Trading Company
China IT Trade Dept.: Liu Jinming
Please do not reply directly; if interested, please call ------0138-50738839
1. Computer parts (RMB, yuan):
A: Motherboards:
MSI 845Pro2-LE (Socket, i845, SDRAM, AC97 audio)---530
845Pro
2013 Apr 23
0
Fw: Error with function - USING library(plyr)
Dear R forum,
Please refer to my query regarding "Error with function". I forgot to mention that I am using the "plyr" library.
Sorry for the inconvenience.
Regards
Katherine
--- On Tue, 23/4/13, Katherine Gobin <katherine_gobin@yahoo.com> wrote:
From: Katherine Gobin <katherine_gobin@yahoo.com>
Subject: [R] Error with function
To: r-help@r-project.org
Date:
2013 Apr 23
0
Error with function
Dear R forum,
I have a data.frame as given below:
df = data.frame(tran = c("tran1", "tran2", "tran3", "tran4"), tenor = c("2w", "1m", "7m", "3m"))
Also, I define
libor_tenor_labels = as.character(c("o_n", "1w", "2w",
"1m", "2m", "3m", "4m",
2013 Apr 23
0
Fw: " PROBLEM SOLVED" - Error with function
Dear R forum
Please refer to my query captioned "Error with function".
I had missed a closing bracket ")" in the return statement, and hence I was getting the error. I had struggled for more than 2 hours to find the problem and only then posted to the forum. I sincerely apologize to all for consuming your valuable time.
Thanks for the efforts at your end.
Regards
Katherine
--- On
2020 May 24
3
[PATCH] file_checksum() optimization
When a whole-file checksum is performed, hashing was done in 64-byte
blocks, causing overhead and limiting performance.
Testing showed the performance improvement to rise quickly going from
64 to 512 bytes, with diminishing returns above that; 4096 was where it
seemed to plateau for me. Re-used CHUNK_SIZE (32 kB) as it already
exists and should be fine to use here anyway.
Noticed this because
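The effect the patch describes, per-call overhead when a hash is fed only 64 bytes at a time, is easy to reproduce outside rsync. A minimal sketch, not rsync's actual C code: the 32 kB CHUNK_SIZE is mirrored from the patch description, and MD5 stands in for whatever hash rsync actually negotiates:

```python
import hashlib

CHUNK_SIZE = 32 * 1024  # mirrors rsync's existing CHUNK_SIZE (32 kB)

def file_checksum(path, block_size=CHUNK_SIZE):
    """Hash a whole file, feeding the hash `block_size` bytes at a time.

    The digest is identical for any block_size; only the number of
    update() calls (and hence the fixed per-call overhead) changes.
    """
    h = hashlib.md5()
    with open(path, "rb") as f:
        while True:
            block = f.read(block_size)
            if not block:
                break
            h.update(block)
    return h.hexdigest()
```

Timing the same file at `block_size=64` versus the default shows the overhead the patch removes, while the resulting digest stays the same.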
2013 Jun 08
0
[PATCH] Btrfs-progs: elaborate error handling of mkfs
$./mkfs.btrfs -f /dev/sdd -b 2M
[...]
mkfs.btrfs: volumes.c:845: btrfs_alloc_chunk: Assertion `!(ret)'' failed.
Aborted (core dumped).
We should return an error to userspace instead of the above.
Signed-off-by: Liu Bo <bo.li.liu@oracle.com>
---
mkfs.c | 23 +++++++++++++++--------
volumes.c | 16 +++++++++++-----
2 files changed, 26 insertions(+), 13 deletions(-)
diff --git
2011 Oct 26
0
PCIe errors handled by OS
Does anybody please have any experience with
the following CentOS 6 warnings in logwatch?
WARNING: Kernel Errors Present
ACPI Error (psargs-0359): [ ...: 1 Time(s)
pci 0000:00:01.0: PCIe errors handled by OS. ...: 1 Time(s)
pci 0000:00:1c.0: PCIe errors handled by OS. ...: 1 Time(s)
pci 0000:00:1c.5: PCIe errors handled by OS. ...: 1 Time(s)
pci 0000:00:1c.6: PCIe errors
2010 Aug 31
0
istream_read like zlib, but without zlib
Hy Timo !
I made some modifications to stream_read in the zlib plugin. I removed all
the zlib parts, because I don't need them, but I need to read an istream in
order to modify it.
I created a size_t called supersize, which is a substitute for
stream->zs.avail_in.
The trouble is, my debug file has a lot of "READ Plugin\n" lines, and I think
it's because my read has become a loop, and I think it's because
2019 May 23
0
[PATCH v2 2/8] s390/cio: introduce DMA pools to cio
From: Halil Pasic <pasic at linux.ibm.com>
To support protected virtualization cio will need to make sure the
memory used for communication with the hypervisor is DMA memory.
Let us introduce one global pool for cio, and some tools for pools seated
at individual devices.
Our DMA pools are implemented as a gen_pool backed with DMA pages. The
idea is to avoid each allocation effectively wasting a
2019 May 29
0
[PATCH v3 2/8] s390/cio: introduce DMA pools to cio
From: Halil Pasic <pasic at linux.ibm.com>
To support protected virtualization cio will need to make sure the
memory used for communication with the hypervisor is DMA memory.
Let us introduce one global pool for cio.
Our DMA pools are implemented as a gen_pool backed with DMA pages. The
idea is to avoid each allocation effectively wasting a page, as we
typically allocate much less than
2019 May 12
0
[PATCH 05/10] s390/cio: introduce DMA pools to cio
On Fri, 10 May 2019 16:10:13 +0200
Cornelia Huck <cohuck at redhat.com> wrote:
> On Fri, 10 May 2019 00:11:12 +0200
> Halil Pasic <pasic at linux.ibm.com> wrote:
>
> > On Thu, 9 May 2019 12:11:06 +0200
> > Cornelia Huck <cohuck at redhat.com> wrote:
> >
> > > On Wed, 8 May 2019 23:22:10 +0200
> > > Halil Pasic <pasic at
2019 Jun 06
0
[PATCH v4 2/8] s390/cio: introduce DMA pools to cio
To support protected virtualization cio will need to make sure the
memory used for communication with the hypervisor is DMA memory.
Let us introduce one global pool for cio.
Our DMA pools are implemented as a gen_pool backed with DMA pages. The
idea is to avoid each allocation effectively wasting a page, as we
typically allocate much less than PAGE_SIZE.
Signed-off-by: Halil Pasic <pasic at
2019 Jun 12
0
[PATCH v5 2/8] s390/cio: introduce DMA pools to cio
To support protected virtualization cio will need to make sure the
memory used for communication with the hypervisor is DMA memory.
Let us introduce one global pool for cio.
Our DMA pools are implemented as a gen_pool backed with DMA pages. The
idea is to avoid each allocation effectively wasting a page, as we
typically allocate much less than PAGE_SIZE.
Signed-off-by: Halil Pasic <pasic at
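The page-sharing idea behind these patches, many sub-page allocations drawing from one backing page instead of each wasting a page of its own, can be shown with a toy bump allocator. This is only an illustration of the concept, not the kernel's gen_pool API; every name here is made up:

```python
PAGE_SIZE = 4096

class Pool:
    """Toy bump-pointer pool: small allocations share one backing "page"
    instead of each occupying (and mostly wasting) a page of its own.
    In the patches, the backing memory would be DMA pages behind a
    gen_pool; here it is just a bytearray."""

    def __init__(self):
        self.page = bytearray(PAGE_SIZE)
        self.used = 0

    def alloc(self, n):
        n = (n + 7) & ~7              # round up to 8-byte alignment
        if self.used + n > PAGE_SIZE:
            return None               # a real pool would add another page
        off = self.used
        self.used += n
        return off                    # offset into the shared page

pool = Pool()
served = sum(1 for _ in range(100) if pool.alloc(64) is not None)
print(served, "allocations of 64 bytes served from a single page")
```

Since typical allocations are much smaller than PAGE_SIZE, dozens of them fit in one page; without the pool, each would pin a whole page.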
2020 Aug 20
2
[PATCH 05/28] media/v4l2: remove V4L2-FLAG-MEMORY-NON-CONSISTENT
On Wed, Aug 19, 2020 at 03:07:04PM +0100, Robin Murphy wrote:
>> FWIW, I asked back in time what the plan is for non-coherent
>> allocations and it seemed like DMA_ATTR_NON_CONSISTENT and
>> dma_sync_*() was supposed to be the right thing to go with. [2] The
>> same thread also explains why dma_alloc_pages() isn't suitable for the
>> users of dma_alloc_attrs() and
2003 Mar 23
1
[RFC] dynamic checksum size
Currently rsync has a bit of a problem with very large
files. Dynamic block sizes were introduced to try to handle that
automatically if the user didn't specify a block size.
Unfortunately, that isn't enough: the block size would
need to grow faster than the file. Besides, overly large block
sizes mean large amounts of data need to be copied even for
small changes.
The maths indicate
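The trade-off described, the checksum list growing with the number of blocks versus the cost of resending a whole block for a small change, is why delta-transfer tools commonly grow the block size roughly as the square root of the file length. A sketch of that heuristic; the bounds and the exact formula here are illustrative assumptions, not the proposal in this RFC:

```python
import math

def pick_block_size(file_len, min_block=700, max_block=1 << 17):
    """Choose a delta-transfer block size growing as sqrt(file_len).

    This balances the size of the checksum list (about file_len / block
    entries) against the data copied per changed block (block bytes):
    their product is file_len, and the sum is minimized near sqrt.
    The min/max clamps are illustrative, not rsync's real limits.
    """
    if file_len <= 0:
        return min_block
    blk = int(math.sqrt(file_len))
    return max(min_block, min(blk, max_block))

for size in (1 << 20, 1 << 30, 1 << 40):
    blk = pick_block_size(size)
    print(f"{size:>14} bytes -> block {blk}, ~{size // blk} blocks")
```

With a fixed block size instead, either the checksum list or the per-change copy cost grows linearly with the file, which is the problem the RFC is pointing at.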