search for: x5355

Displaying 11 results from an estimated 11 matches for "x5355".

2010 Sep 10
11
Large directory performance
...gle directory and still get lookups at a rate of 5,000 files/sec. That leaves me wondering two things: how can we get 5,000 files/sec for anything, and why is our performance dropping off so suddenly after 20k files? Here is our setup: all IO servers are Dell PowerEdge 2950s, 2 sockets (8 cores total) with X5355 @ 2.66GHz and 16GB of RAM. The data is on DDN S2A 9550s in an 8+2 RAID configuration, connected directly with 4Gb Fibre Channel. They are running RHEL 4.5, Lustre 1.6.7.2-ddn3, kernel 2.6.18-128.7.1.el5.ddn1.l1.6.7.2.ddn3smp. As a side note, the user's code is Parflow, developed at LLNL. The files are S...
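A quick way to sanity-check numbers like these is to reproduce the lookup load directly. The sketch below is not from the thread; the mount point and file counts are placeholder assumptions. It creates files in a single directory and times stat() calls over a random sample; to measure the server rather than the client dentry cache, drop caches between the create and lookup phases.

    #!/usr/bin/env python3
    # Lookup-rate probe (sketch): create N files in one directory, then
    # time stat() over a random sample. Paths and counts are assumptions.
    import os, random, time

    TARGET_DIR = "/mnt/lustre/lookup_test"  # assumed Lustre client mount
    N_FILES = 30000                         # past the ~20k drop-off reported above
    SAMPLE = 5000

    os.makedirs(TARGET_DIR, exist_ok=True)
    for i in range(N_FILES):
        open(os.path.join(TARGET_DIR, f"f{i:07d}"), "w").close()

    # To hit the server rather than the client cache, drop caches here
    # first (as root): echo 3 > /proc/sys/vm/drop_caches
    sample = random.sample(range(N_FILES), SAMPLE)
    start = time.monotonic()
    for i in sample:
        os.stat(os.path.join(TARGET_DIR, f"f{i:07d}"))
    elapsed = time.monotonic() - start
    print(f"{SAMPLE / elapsed:.0f} lookups/sec across {N_FILES} files")

Re-running with N_FILES stepped from 10k to 40k would show whether the drop-off tracks directory size or total file count.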
2010 Dec 15
4
RHEL6 domU migrate issues w/ higher to lower frequency CPUs
...terms of live migration, there seems to be a problem when moving from a higher-frequency (CPU MHz) system to a lower-frequency one -- even if the higher of the two is a much older CPU model. For example, I can reproduce the bug under Xen 3.4.3 with the following: * Migrating from X5450 @ 3.00GHz to X5355 @ 2.66GHz fails, but the opposite (increasing in CPU frequency) succeeds. * Migrating from Xeon(TM) CPU 2.80GHz to E5310 @ 1.60GHz fails, but the opposite (increasing in CPU frequency) succeeds. BTW, when I say "fails", what I really mean is that the migration succeeds but the domU is n...
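One way to catch this class of failure before attempting a migration is to compare the CPU capabilities of the two hosts up front. A minimal sketch, not part of Xen's tooling, assuming passwordless SSH and placeholder host names (xen-src, xen-dst); it flags feature flags missing on the destination and the higher-to-lower frequency direction described above.

    #!/usr/bin/env python3
    # Pre-migration sanity check (sketch): compare CPU MHz and feature
    # flags between two hosts. Host names are placeholders.
    import subprocess

    def cpu_info(host):
        out = subprocess.check_output(["ssh", host, "cat /proc/cpuinfo"],
                                      text=True)
        fields = {}
        for line in out.splitlines():
            key, sep, val = line.partition(":")
            if sep:
                fields.setdefault(key.strip(), val.strip())
        return fields  # first CPU's fields are enough for this check

    src, dst = cpu_info("xen-src"), cpu_info("xen-dst")
    missing = set(src["flags"].split()) - set(dst["flags"].split())
    if missing:
        print("destination lacks CPU flags:", " ".join(sorted(missing)))
    if float(src["cpu MHz"]) > float(dst["cpu MHz"]):
        print("warning: higher-to-lower frequency migration "
              "(the direction reported to fail above)")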
2015 Jun 01
1
GlusterFS 3.7 - slow/poor performances
...rations with custom configurations. If I look at the output of the ifstat command, I can see my IO write processes never exceed 3MB/s... The native EXT4 FS seems to be faster (roughly 15-20%, but no more) than the XFS one. My [test] storage cluster consists of 2 identical servers (dual-CPU Intel Xeon X5355, 8GB of RAM, 2x2TB HDD (no RAID) and Gb Ethernet). My volume settings: single: 1 server, 1 brick; replicated: 2 servers, 1 brick each; distributed: 2 servers, 2 bricks each; dist-repl: 2 bricks on the same server with replica 2. Everything looks OK in the gluster status command output. Do you have an idea wh...
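To put a number on the 3MB/s observation independently of ifstat, a small sequential-write probe against the mounted volume works. In the sketch below the mount path is an assumption; running it against each volume type (single, replicated, distributed, dist-repl) shows where the throughput collapses.

    #!/usr/bin/env python3
    # Sequential-write probe (sketch): stream data to a file on the
    # mounted volume and report MB/s. The mount path is an assumption.
    import os, time

    MOUNT = "/mnt/glusterfs/testvol"  # assumed FUSE mount of the volume
    BLOCK = b"\0" * (1 << 20)         # 1 MiB per write
    TOTAL_MB = 512

    path = os.path.join(MOUNT, "throughput.bin")
    start = time.monotonic()
    with open(path, "wb") as f:
        for _ in range(TOTAL_MB):
            f.write(BLOCK)
        f.flush()
        os.fsync(f.fileno())          # count the flush, not just page cache
    elapsed = time.monotonic() - start
    print(f"{TOTAL_MB / elapsed:.1f} MB/s sequential write")
    os.remove(path)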
2015 Jun 02
2
GlusterFS 3.7 - slow/poor performances
...If I look at the output of the ifstat command, I can see my IO write > processes never exceed 3MB/s... > > The native EXT4 FS seems to be faster (roughly 15-20%, but no more) than > the XFS one > > My [test] storage cluster consists of 2 identical servers > (dual-CPU Intel Xeon X5355, 8GB of RAM, 2x2TB HDD (no RAID) and Gb Ethernet) > > My volume settings: > single: 1 server, 1 brick > replicated: 2 servers, 1 brick each > distributed: 2 servers, 2 bricks each > dist-repl: 2 bricks on the same server with replica 2 > > All seems to be OK in the gluster status com...
2008 Aug 15
0
Xen 3.2.1 - Win 2003/2008 Server 64-bit guests: cygwin bash builtin "test" crashes
...Windows Server 2003 Standard 32-bit Edition, Windows Server 2003 Standard R2 x64 Edition, Windows Server 2008 Standard 32-bit Edition, Windows Server 2008 Standard 64-bit Edition. CPUs: #1 Dual-Core AMD Opteron(tm) Processor 2216 HE, #2 Intel(R) Xeon(R) CPU E5320 @ 1.86GHz, #3 Intel(R) Xeon(R) CPU X5355 @ 2.66GHz, #4 Intel(R) Xeon(R) CPU 5130 @ 2.00GHz. The problem is dependent on the 64-bit OS / hardware combination. I suspect the problem derives from the way Xen virtualizes the 64-bit Xeons for HVM guests. Exactly how, I cannot say (and this could be wrong). Any comment would be welcome. Thank yo...
2008 Dec 02
0
[PATCH] Fix Xen panic with oprofile
...nt in time. The patch was generated against Xen 3.3.0 but will cleanly apply to both xen-3.3-testing and xen-unstable. It has only been tested on x86_32, but the ia64 portion of it should (hopefully) be obvious. Note that I needed to backport the patch from [1] to get samples on my processor (Xeon X5355) but, as the patch doesn't seem to have gone into mainline yet [2], I am holding off on submitting it here. Cheers, Niraj [1] http://lkml.org/lkml/2008/11/11/62 [2] http://lkml.org/lkml/2008/11/17/282 Signed-off-by: Niraj Tolia <niraj.tolia@hp.com> diff -r 18eff064c628 xen/arch...
2007 Oct 08
2
Supermicro X7DVL-E and Xeon L5320 installation problems
Tearing our hair out on this one. Trying to install CentOS 5 x86_64 on a Supermicro X7DVL-E with 2 Xeon L5320 quad core CPUs, 3Ware SATA RAID controller in a mirrored setup and 4 GB of memory. Installation crashes at random places while copying the files. We've run memtest86 for 24 hours without any errors, replaced the RAID controller, motherboard and disks, but still no luck.
2013 Mar 07
0
OpenSSH-6.2 tests
I ran the tests for the latest snapshot (openssh-SNAP-20130307.tar.gz), and all tests reported passed. Full results can be sent, but the zip I originally tried was blocked. My system: Linux rigel 2.6.38-gentoo-r6 #1 SMP Sat Jun 25 13:48:28 CDT 2011 x86_64 Intel(R) Xeon(R) CPU X5355 @ 2.66GHz GenuineIntel GNU/Linux Andy Clements
2009 Jan 27
1
paravirtualized vs HVM disk interference (85% vs 15%)
...e when doing I/O to disk images contained in single files from a paravirtualized domain and from an HVM at the same time. The problem was found in a Xen box with Fedora 8 x86_64 binaries installed (Xen 3.1.0 + dom0 Linux 2.6.21). The test hardware was a rack-mounted server with two 2.66GHz Xeon X5355s (4 cores each, 128KB L1 cache and 8MB L2 cache), 4GB of RAM and one 250GB disk. Both the paravirt and HVM domains have 512MB of RAM and 8 vCPUs, and both run a Fedora 8 x86_64 distro. Stressing the paravirtualized and HVM guests at the same time with disk stressing tools like ...
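The interference can be quantified with the same synthetic load run inside both guests at once. A minimal sketch (the file name is a placeholder): start one copy in the PV domU and one in the HVM domU simultaneously, then compare the reported rates to get the split.

    #!/usr/bin/env python3
    # Disk-interference probe (sketch): identical sequential-write load,
    # one copy per guest, started at the same time.
    import os, time

    TOTAL_MB = 1024
    BLOCK = b"\0" * (1 << 20)  # 1 MiB per write

    start = time.monotonic()
    with open("stressfile.bin", "wb") as f:
        for _ in range(TOTAL_MB):
            f.write(BLOCK)
        os.fsync(f.fileno())
    elapsed = time.monotonic() - start
    print(f"{TOTAL_MB / elapsed:.1f} MB/s")  # compare PV vs HVM outputs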
2008 Jul 11
2
Asterisk PBX How-to Guide for Amazon EC2
I've just added a PREVIEW release of my upcoming how-to guide for Asterisk PBX on EC2. It is based on months of testing and evaluating Asterisk on EC2. It addresses all kinks and showstoppers that many people have experienced over the past year or so. Because this is a preview, it is not the final version of this guide. It is subject to change (format, copy, layout, etc.) To view and download ...
2008 Jun 02
8
xen 3.2.1 + intel quad core + 8gigs of ram + linux
Hi! Anyone have a suggestion for which version of Linux would be the best host? We typically use Ubuntu, but we are flexible since we want the best performance and the least amount of headaches getting it installed and working. Thanks, Jimmy