Displaying 20 results from an estimated 3000 matches similar to: "ARC, mmap, pagecache..."
2006 Oct 13
24
Self-tuning recordsize
Would it be worthwhile to implement heuristics to auto-tune
'recordsize', or would that not be worth the effort?
--
Regards,
Jeremy
2007 Mar 01
4
pagecache corruption on Tyan S3870
A couple of months ago I reported some problems with a batch of Tyan
K8SSA (S3870) based machines. We are continuing to have an odd problem
with these boxes, and if anyone has seen something similar elsewhere,
I'd appreciate hearing about it.
These boxes are running Centos 4.4 x86_64 with kernel
2.6.9-42.0.3.ELsmp. They are dual Opteron 265s (dual core) with 4x2GB
DIMMs. The
2017 Nov 14
2
dramatic performance slowdown due to THP allocation failure with full pagecache
Hi all,
This is not really a libvirt issue but I'm hoping some of the smart folks
here will know more about this problem...
We have noticed when running some HPC applications on our OpenStack
(libvirt+KVM) cloud that the same application occasionally performs much
worse (4-5x slowdown) than normal. We can reproduce this quite easily by
filling pagecache (i.e. dd-ing a single large file to
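The reproduction step is cut off above; the idea is simply to stream a file
larger than free RAM so the pagecache fills. As a sketch, a minimal C
equivalent of that dd step (the path and size are illustrative assumptions,
not from the post):

/* Fill the pagecache by streaming out a large file, roughly what
 * "dd if=/dev/zero of=bigfile bs=1M count=..." does. The path and
 * total size below are illustrative assumptions. */
#include <stdio.h>
#include <string.h>

int main(void)
{
    FILE *f = fopen("/tmp/bigfile", "w");   /* hypothetical path */
    if (!f) { perror("fopen"); return 1; }

    static char buf[1 << 20];               /* 1 MiB of zeroes */
    memset(buf, 0, sizeof buf);

    /* Write ~32 GiB; choose a size larger than the host's free RAM
     * so the pagecache is full when the HPC job starts. */
    for (long i = 0; i < 32L * 1024; i++)
        if (fwrite(buf, 1, sizeof buf, f) != sizeof buf) {
            perror("fwrite");
            break;
        }

    fclose(f);
    return 0;
}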
2006 Sep 28
13
jbod questions
Folks,
We are in the process of purchasing new SANs for our mail server
(JES3) to run on. We have moved our mailstores to ZFS and continue to
see checksum errors -- they are corrected, which is an improvement over the
UFS inode errors that required a system shutdown and fsck.
So, I am recommending that we buy small JBODs, do raidz2, and let ZFS
handle the raiding of these boxes. As we need more
2010 Jan 12
11
How do separate ZFS filesystems affect performance?
I'm working with a Cyrus IMAP server running on a T2000 box under
Solaris 10 10/09 with current patches. Mailboxes reside on six ZFS
filesystems, each containing about 200 gigabytes of data. These are
part of a single zpool built on four iSCSI devices from our NetApp
filer.
One of these ZFS filesystems contains a number of global and per-user
databases in addition to one sixth of the
2009 Feb 02
8
ZFS core contributor nominations
The time has come to review the current Contributor and Core contributor
grants for ZFS. Since all of the ZFS core contributors grants are set
to expire on 02-24-2009 we need to renew the members that are still
contributing at core contributor levels. We should also add some new
members to both Contributor and Core contributor levels.
First the current list of Core contributors:
Bill
2007 Aug 21
12
Is ZFS efficient for large collections of small files?
Is ZFS efficient at handling huge populations of tiny-to-small files -
for example, 20 million TIFF images in a collection, each between 5 KB
and 500 KB in size?
I am asking because I could have sworn that I read somewhere that it
isn't, but I can't find the reference.
Thanks,
Brian
--
- Brian Gupta
http://opensolaris.org/os/project/nycosug/
2010 Dec 07
9
[PATCH] Btrfs: pwrite blocked when writing from the mmaped buffer of the same page
This problem was found in MeeGo testing:
http://bugs.meego.com/show_bug.cgi?id=6672
A file in btrfs is mmaped and the mmaped buffer is passed to pwrite to write to the same page
of the same file. In btrfs_file_aio_write(), the pages are locked by prepare_pages(), so when
btrfs_copy_from_user() is called, a page fault occurs and the same page needs to be locked again
in filemap_fault(). The fix is to
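The locking cycle is easier to see spelled out. A minimal C sketch of the
triggering pattern (file name and size are assumptions; the fix itself lives
in the kernel, not in user code):

/* Sketch of the pattern described above: mmap a file, then pwrite()
 * from the mapped buffer back into the same page of the same file.
 * On an affected btrfs kernel the copy faults on a page the write
 * path has already locked. File name and size are assumptions. */
#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <sys/mman.h>
#include <unistd.h>

int main(void)
{
    int fd = open("testfile", O_RDWR | O_CREAT, 0644);
    if (fd < 0) { perror("open"); return 1; }
    if (ftruncate(fd, 4096) < 0) { perror("ftruncate"); return 1; }

    char *buf = mmap(NULL, 4096, PROT_READ | PROT_WRITE,
                     MAP_SHARED, fd, 0);
    if (buf == MAP_FAILED) { perror("mmap"); return 1; }
    memset(buf, 'x', 4096);

    /* Writing from the mapping into the same page of the same file:
     * the page fault taken inside pwrite() needs the page lock that
     * btrfs_file_aio_write() is already holding. */
    if (pwrite(fd, buf, 4096, 0) < 0)
        perror("pwrite");

    munmap(buf, 4096);
    close(fd);
    return 0;
}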
2007 Mar 15
20
C'mon ARC, stay small...
Running an mmap-intensive workload on ZFS on an X4500, Solaris 10 11/06
(update 3). All file IO is mmap(file), read memory segment, unmap, close.
Tweaked the ARC size down via mdb to 1GB. I used that value because
c_min was also 1GB, and I was not sure whether c_max could be smaller than
c_min. Anyway, I set c_max to 1GB.
After a workload run....:
> arc::print -tad
{
. . .
ffffffffc02e29e8
2005 Nov 22
2
[LLVMdev] llvm-ranlib: Bus Error in regressions + fix
On Nov 22, 2005, at 17:18, Reid Spencer wrote:
> Your patch uses an operating system call that is not portable. All
> non-portable code needs to be located in the lib/System library.
Yep! I know. That is why I posted it for discussion. I'm not sure if
this is the "right" way to fix the problem, or if there is a different
fix that should be applied (like perhaps copying the
2009 Sep 24
5
Checksum property change does not change pre-existing data - right?
My understanding is that "zfs set checksum=<different>" changes the checksum algorithm for all FUTURE data blocks written, but does not in any way change the checksums of previously written data blocks.
I need to corroborate this understanding. Could someone please point me to a document that states this? I have searched and searched
2006 Sep 26
8
Matching Malloc and Free
I would like to profile heap usage on a per-thread basis in a large application process. To do this I am tracking calls to malloc and free with the attached script. Everything seems to look OK with some simple test programmes; however, when I track the live system, the results suggest that one thread has grown by approximately 1 GB, and I would be surprised if this were true because I ran pmap -x on
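The attached script itself is not part of this digest. As a rough
illustration of the same idea in C (an assumed approach, not the poster's
actual script), an LD_PRELOAD interposer that keeps a per-thread
net-allocation counter; the caveat in the comments is one plausible
explanation for a single thread appearing to grow by ~1 GB:

/* Sketch of per-thread heap accounting via LD_PRELOAD interposition
 * (glibc assumed, for malloc_usable_size). Production code needs a
 * re-entrancy guard, since dlsym() itself may allocate on first use. */
#define _GNU_SOURCE
#include <dlfcn.h>
#include <malloc.h>

static __thread long live_bytes;  /* net bytes allocated by this thread */

long heap_live_bytes(void) { return live_bytes; }

void *malloc(size_t size)
{
    static void *(*real_malloc)(size_t);
    if (!real_malloc)
        real_malloc = dlsym(RTLD_NEXT, "malloc");
    void *p = real_malloc(size);
    if (p)
        live_bytes += (long)malloc_usable_size(p);
    return p;
}

void free(void *p)
{
    static void (*real_free)(void *);
    if (!real_free)
        real_free = dlsym(RTLD_NEXT, "free");
    /* Caveat: a block freed by a different thread than the one that
     * allocated it drives this thread's counter negative and leaves
     * the allocating thread's counter looking like a leak. */
    if (p)
        live_bytes -= (long)malloc_usable_size(p);
    real_free(p);
}

Built with gcc -shared -fPIC -ldl and loaded via LD_PRELOAD, each thread can
then report heap_live_bytes() at points of interest.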
2008 Apr 21
1
Compile libtheora 1.0beta3 with VS2005
Hi all,
I tried to compile the Theora source with VS2005, but it asked for the ogg
library. The error message is as follows:
****************************************************************************
Error 1: fatal error C1083: Cannot open include file: 'ogg/ogg.h': No such file or directory (c:\Documents and Settings\Manoj\Desktop\libtheora-1.0beta3\include\theora\codec.h, line 64)
2013 Feb 10
2
[LLVMdev] llvm-installation
Hello sir,
In my LLVM installation, the ./configure command worked properly, but
when I gave the make -j 4 command on Ubuntu, everything built properly
until the end, when it showed this error:
llvm[4]: Linking Debug+Asserts executable clang
collect2: ld terminated with signal 9 [Killed]
make[4]: *** [/home/manoj/Desktop/LLVM/
build/Debug+Asserts/bin/clang] Error 1
make[4]: Leaving directory
2013 Feb 11
1
[LLVMdev] llvm-installation
Hello sir,
./configure worked properly, but when I gave the make -j 2 command on
Ubuntu, everything built properly until the end, when it showed this
error:
llvm[4]: Linking Debug+Asserts executable clang
collect2: ld terminated with signal 9 [Killed]
make[4]: *** [/home/manoj/Desktop/LLVM/
build/Debug+Asserts/bin/clang] Error 1
make[4]: Leaving directory
2013 Feb 11
1
[LLVMdev] llvm
Hello sir,
./configure worked properly, but when I gave the make -j 2 command on
Ubuntu, everything built properly until the end, when it showed this
error:
llvm[4]: Linking Debug+Asserts executable clang
collect2: ld terminated with signal 9 [Killed]
make[4]: *** [/home/manoj/Desktop/LLVM/build/Debug+Asserts/bin/clang] Error
1
make[4]: Leaving directory `/home/manoj/Desktop/LLVM/
2013 Feb 10
0
[LLVMdev] llvm-installation
You should use make -j with a number lower than 4; it looks like you ran
out of memory when linking.
On 10 February 2013 15:41, Manoj C <manoj.chinthala at gmail.com> wrote:
> Hello sir,
> In my LLVM installation, the ./configure command worked properly, but
> when I gave the make -j 4 command on Ubuntu, everything built properly
> until the end, when it showed this error:
>
>
2005 Nov 23
0
[LLVMdev] llvm-ranlib: Bus Error in regressions + fix
Evan Jones wrote:
> I am pretty certain that this has nothing to do with the C++ library,
> and everything to do with the behaviour of mmap when the file that was
> mmaped is modified. I actually can reproduce this behaviour with the
> attached C test case. The program mmaps a file called 'data,' prints the
> last byte, truncates the file, then tries to read the last
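The test case is attached to the original mail and not reproduced in this
digest; a C sketch of the behaviour it describes (the file name 'data' is
from the post, the rest is reconstruction):

/* Reconstruction of the described test case: read a mapped page that
 * the file no longer backs. Assumes 'data' spans several pages, so
 * its original last byte lies past the new EOF after truncation. */
#include <fcntl.h>
#include <stdio.h>
#include <sys/mman.h>
#include <sys/stat.h>
#include <unistd.h>

int main(void)
{
    int fd = open("data", O_RDWR);
    if (fd < 0) { perror("open"); return 1; }

    struct stat st;
    if (fstat(fd, &st) < 0) { perror("fstat"); return 1; }

    char *p = mmap(NULL, st.st_size, PROT_READ, MAP_SHARED, fd, 0);
    if (p == MAP_FAILED) { perror("mmap"); return 1; }

    printf("last byte: %d\n", p[st.st_size - 1]);

    /* Shrink the file underneath the live mapping... */
    if (ftruncate(fd, st.st_size / 2) < 0) { perror("ftruncate"); return 1; }

    /* ...then touch a page past the new EOF: the access is delivered
     * as SIGBUS, matching the bus error seen in llvm-ranlib. */
    printf("last byte now: %d\n", p[st.st_size - 1]);
    return 0;
}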
2009 May 04
8
CentOS DomU on Opensolaris Dom0 - virt-install fails with error in virDomainCreateLinux()
Hi,
I am trying to install CentOS on an OpenSolaris Dom0. virt-install fails
with an error in virDomainCreateLinux().
Is this a known issue? Am I missing some step?
manoj@mowgli:~$ uname -a
SunOS mowgli 5.11 snv_101b i86pc i386 i86xpv Solaris
manoj@mowgli:~$ pfexec virt-install
What is the name of your virtual machine? centos
How much RAM should be allocated (in megabytes)? 512
What would
2005 Nov 25
28
ZFS and memcntl(..., MC_SYNC, ...)
It wouldn't be proper to start my first post here without congratulations
and thanks to the ZFS team for such an impressive piece of work.
Anyway, on to my query. I've been trying out ZFS, with a particular focus on
reducing latency in a specific application. This application has a fair
amount of random writing going on in the background (which, of course, ZFS
will make
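For readers unfamiliar with the call in the subject line: on Solaris,
memcntl() with MC_SYNC is the general form of msync(). A minimal sketch
(file name and length are assumptions):

/* Illustration of memcntl(..., MC_SYNC, ...) on Solaris. With
 * MS_ASYNC it schedules write-back of a dirty mapped region without
 * blocking, equivalent to msync(p, len, MS_ASYNC). The file name
 * and length are illustrative assumptions. */
#include <sys/types.h>
#include <sys/mman.h>
#include <fcntl.h>
#include <stdio.h>

int main(void)
{
    int fd = open("datafile", O_RDWR);
    if (fd < 0) { perror("open"); return 1; }

    size_t len = 1024 * 1024;
    caddr_t p = mmap(NULL, len, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
    if (p == (caddr_t)MAP_FAILED) { perror("mmap"); return 1; }

    p[0] = 1;  /* dirty a page */

    /* Passing MS_SYNC instead blocks until the data reaches stable
     * storage, which is where write latency becomes visible. */
    if (memcntl(p, len, MC_SYNC, (caddr_t)MS_ASYNC, 0, 0) != 0)
        perror("memcntl");

    return 0;
}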