Displaying 20 results from an estimated 9000 matches similar to: "Realistic mirror performance expectations?"
2005 May 27
1
performance on small files transfers
Hi all,
I'm confused by the transfer speed of small files (no bigger than 50 KB)
through Samba, which is very slow on
my machine. Below is a real case:
-- SuSE Professional 9.2, kernel 2.6.11, Samba 3.0.14a, ReiserFS
-- Dual AMD Opteron, 4 GB RAM, gigabit LAN
-- 2 RAID 5 arrays made up of 16 SATA hard disks
-- readahead set to 1024
I tested the RAID speed using bonnie++ and got 450 MB/s
with 16 GB files
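The 450 MB/s bonnie++ number only covers large sequential I/O, so it says little about small-file behaviour. A rough sketch of how the two sides could be compared, with made-up paths, share name and credentials:

# Local small-file test on the RAID volume: -s 0 skips the large-file phase,
# and -n 16:51200:1024:16 creates 16*1024 files of 1 KB-50 KB across 16 dirs.
bonnie++ -d /data/share -s 0 -n 16:51200:1024:16 -u nobody

# The same kind of files copied through Samba from a client, for comparison.
time smbclient //server/share -U testuser%secret \
    -c 'lcd /tmp/smallfiles; recurse; prompt; mput *'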
2011 Dec 12
0
glusterFS: realistic for 40 hypervisor nodes and 100 TB?
Hello,
I'm working on a cloud/virtualisation platform and I'd like to know
your point of view.
I'm dreaming of a shared file system (or cluster filesystem) for lots
of hypervisors with lots of shared disk space (for HA, hot
migration, ...).
GlusterFS seems very cool: easy configuration, and it can work in a purely
shared-storage way (no need for replication).
Can I use it for 40 nodes and
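For reference, a hypothetical sketch (hostnames and brick paths invented) of what a volume spanning several hypervisors could look like; a plain distributed volume would simply drop the replica argument:

# Probe the other nodes and build a distributed-replicated volume
# (replica 2 keeps two copies of every file, for HA / live migration).
gluster peer probe hv02
gluster peer probe hv03
gluster peer probe hv04
gluster volume create vmstore replica 2 \
    hv01:/export/brick1 hv02:/export/brick1 \
    hv03:/export/brick1 hv04:/export/brick1
gluster volume start vmstore

# Each hypervisor mounts it with the native FUSE client.
mount -t glusterfs hv01:/vmstore /var/lib/libvirt/images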
2005 Jun 13
0
MySQL: max realistic size of extensions table.
Hi,
I'm using the Asterisk CVS HEAD version and read the dialplan from MySQL.
I'm doing A-Z termination to over 4000 different country and city codes. I
have 3 different dialing rules depending on the price level of the dialed
number.
Should my extensions table contain 4000 lines? Is this realistic? Or is
there any other (more clever) way of doing this?
Regards,
Cenk.
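For illustration only, a sketch of the kind of realtime extensions table commonly used for this (the column layout and the macro names are assumptions, not taken from the post); a few thousand rows is not a problem for MySQL, and pointing each prefix row at one of three macros keeps the three dialing rules in one place:

# Hypothetical realtime dialplan table plus two sample prefix rows.
mysql -u asterisk -p asterisk <<'SQL'
CREATE TABLE IF NOT EXISTS extensions (
  id       INT AUTO_INCREMENT PRIMARY KEY,
  context  VARCHAR(40)  NOT NULL,
  exten    VARCHAR(40)  NOT NULL,
  priority INT          NOT NULL,
  app      VARCHAR(40)  NOT NULL,
  appdata  VARCHAR(256) NOT NULL
);
-- One pattern row per dialed prefix; appdata selects one of three macros,
-- so the price-level logic lives in the static dialplan, not in 3 x 4000 rows.
INSERT INTO extensions (context, exten, priority, app, appdata) VALUES
  ('outbound', '_0049X.', 1, 'Macro', 'dial-tier1,${EXTEN}'),
  ('outbound', '_0090X.', 1, 'Macro', 'dial-tier2,${EXTEN}');
SQL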
2009 Jun 24
3
Unexplained reboots in DRBD82 + OCFS2 setup
We're trying to set up a dual-primary DRBD environment with a shared
disk running either OCFS2 or GFS. The environment is CentOS 5.3 with
DRBD82 (but we also tried DRBD83 from testing).
Setting up a single-primary disk and running bonnie++ on it works.
Setting up a dual-primary disk, mounting it on only one node (ext3) and
running bonnie++ works.
When setting up ocfs2 on the /dev/drbd0
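For context, a hedged sketch of the pieces a dual-primary DRBD + OCFS2 stack normally needs (resource name, label and mount point are placeholders):

# The resource's net section in /etc/drbd.conf needs, inside "resource r0":
#   net {
#     allow-two-primaries;
#     after-sb-0pri discard-zero-changes;
#     after-sb-1pri discard-secondary;
#   }
drbdadm adjust r0                         # pick up the changed configuration
drbdadm primary r0                        # run on both nodes for dual-primary
mkfs.ocfs2 -N 2 -L shared /dev/drbd0      # format once, from one node only
mount -t ocfs2 /dev/drbd0 /mnt/shared     # then mount on both nodes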
2008 Sep 25
4
Help with b97 HVM zvol-backed DomU disk performance
Hi Folks,
I was wondering if anyone has any pointers/suggestions on how I might increase the disk performance of an HVM zvol-backed DomU? This is my first DomU, so hopefully it's something obvious.
Running bonnie++ shows the DomU's performance to be 3 orders of magnitude worse than Dom0's, which itself is half as good as when not running xVM at all (see the bottom for the bonnie++ results).
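A minimal sketch of the zvol/guest-disk plumbing being discussed, with invented pool and guest names; note that an HVM guest without paravirtualised drivers goes through qemu-dm's emulated IDE path, which is dramatically slower than a PV disk regardless of how fast the zvol underneath is:

# Backing zvol for the guest.
zfs create -V 20G tank/domu1-disk0

# The block size can only be chosen at creation time (-b); check it with:
zfs get volblocksize tank/domu1-disk0

# Export it to the HVM guest as a phy: device rather than a file:-backed image;
# the disk line in the guest config would look something like:
#   disk = [ 'phy:/dev/zvol/dsk/tank/domu1-disk0,hda,w' ]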
2019 Apr 13
1
[RFC 3/3] RDMA/virtio-rdma: VirtIO rdma driver
On 2019/4/11 19:01, Yuval Shaia wrote:
> Signed-off-by: Yuval Shaia <yuval.shaia at oracle.com>
> ---
> drivers/infiniband/Kconfig | 1 +
> drivers/infiniband/hw/Makefile | 1 +
> drivers/infiniband/hw/virtio/Kconfig | 6 +
> drivers/infiniband/hw/virtio/Makefile | 4 +
>
2019 Apr 11
1
[RFC 3/3] RDMA/virtio-rdma: VirtIO rdma driver
Signed-off-by: Yuval Shaia <yuval.shaia at oracle.com>
---
drivers/infiniband/Kconfig | 1 +
drivers/infiniband/hw/Makefile | 1 +
drivers/infiniband/hw/virtio/Kconfig | 6 +
drivers/infiniband/hw/virtio/Makefile | 4 +
drivers/infiniband/hw/virtio/virtio_rdma.h | 40 +
.../infiniband/hw/virtio/virtio_rdma_device.c | 59 ++
2007 Jan 11
4
Help understanding some benchmark results
G'day, all,
So, I've decided to migrate my home server from Linux+swRAID+LVM to Solaris+ZFS, because it seems to hold much better promise for data integrity, which is my primary concern.
However, naturally, I decided to do some benchmarks in the process, and I don't understand why the results are what they are. I thought I had a reasonable understanding of ZFS, but now
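A hedged sketch (Solaris-style disk names assumed) of the kind of pool layouts typically compared in this situation, with the same benchmark run on each:

# Two-way mirror, or alternatively a raidz across three disks.
zpool create tank mirror c1t0d0 c1t1d0
# zpool create tank raidz c1t0d0 c1t1d0 c1t2d0

# A dedicated dataset per run keeps results separate; the file size should
# comfortably exceed RAM so the ARC cannot hide the disks.
zfs create tank/bench
bonnie++ -d /tank/bench -s 8192 -u nobody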
2006 Oct 04
2
server disk subsystem benchmarks, bonnie++ and/or others?
Greetings
I've searched to no avail so far... there is bound to be something more
intelligible out there...?
I am playing with bonnie++ for the first time...
May I please get some advice, and hear the list's experience, on using this or other disk
subsystem benchmark programs properly, with or without a GUI?
The test system in this case is a Compaq DL360 with 2 to 4 GB DRAM and two
36 GB 10k drives.
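The usual advice, shown as a hedged example (mount point and machine name invented): make the data size at least twice RAM so the page cache cannot absorb the test, point -d at the array under test, and drop root privileges with -u:

bonnie++ -d /mnt/test -s 8192 -n 64 -u nobody -m dl360 -q > bonnie.csv

# -q sends the machine-readable CSV to stdout; the bundled helper turns it
# into an HTML table for easier reading.
bon_csv2html < bonnie.csv > bonnie.html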
2008 May 15
2
missing from Centos51 src tree: ".../drivers/infiniband/hw/amso1100/Makefile"
I'm attempting to rebuild the CentOS 5.1 kernel-xen package.
(fwiw, because pciback has NOT been compiled into the kernel,
http://bugs.centos.org/view.php?id=2767)
after,
yum install kernel-devel kernel-xen-devel
and the usual,
ln -s /usr/src/kernels/`uname -r`-`uname -m` /usr/src/linux
cd /usr/src/linux
cp /boot/config-`uname -r` ./.config
make oldconfig
make menuconfig
...
next,
make rpm
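If the missing amso1100 Makefile is what breaks the build, one possible workaround (assuming the Ammasso RNIC driver isn't actually needed) is to switch that option off before building; this is a guess at a workaround, not the thread's resolution:

# Disable the amso1100 InfiniBand driver in the copied config so the build
# never descends into the directory with the missing Makefile.
sed -i 's/^CONFIG_INFINIBAND_AMSO1100=.*/# CONFIG_INFINIBAND_AMSO1100 is not set/' .config
make oldconfig
make rpm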
2007 Apr 10
0
[Xen-SmartIO] Make Install crashed - Infiniband Drivers
Hi,
I have a problem while doing a "make linux-2.6-xen-install".
First of all, I did a "make linux-2.6-xen-config CONFIGMODE=menuconfig"
and made my choices in the Device Drivers menu -> InfiniBand support.
Whether I choose all the options as modules or built-in, I get the same
result with make install.
This error appears:
2019 Nov 12
0
[PATCH v3 06/14] RDMA/hfi1: Use mmu_interval_notifier_insert for user_exp_rcv
From: Jason Gunthorpe <jgg at mellanox.com>
This converts one of the two users of mmu_notifiers to use the new API.
The conversion is fairly straightforward, however the existing use of
notifiers here seems to be racey.
Tested-by: Dennis Dalessandro <dennis.dalessandro at intel.com>
Signed-off-by: Jason Gunthorpe <jgg at mellanox.com>
---
drivers/infiniband/hw/hfi1/file_ops.c
2009 Jan 10
3
Poor RAID performance new Xeon server?
I have just purchased an HP ProLiant ML110 G5 server and installed
CentOS 5.2 x86_64 on it.
It has the following spec:
Intel(R) Xeon(R) CPU 3065 @ 2.33GHz
4GB ECC memory
4 x 250 GB SATA hard disks running at 1.5 Gb/s
Onboard RAID controller is enabled but at the moment I have used mdadm
to configure the array.
RAID bus controller: Intel Corporation 82801 SATA RAID Controller
For a simple
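For reference, a hedged sketch (partition names assumed) of a typical 4-disk mdadm RAID5, plus the two things most often behind a slow fresh array, the initial resync and a small readahead:

mdadm --create /dev/md0 --level=5 --raid-devices=4 --chunk=256 \
      /dev/sda1 /dev/sdb1 /dev/sdc1 /dev/sdd1

cat /proc/mdstat            # an initial resync in progress eats most of the I/O
blockdev --getra /dev/md0   # readahead, in 512-byte sectors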
2019 Oct 28
1
[PATCH v2 06/15] RDMA/hfi1: Use mmu_range_notifier_insert for user_exp_rcv
From: Jason Gunthorpe <jgg at mellanox.com>
This converts one of the two users of mmu_notifiers to use the new API.
The conversion is fairly straightforward, however the existing use of
notifiers here seems to be racey.
Cc: Mike Marciniszyn <mike.marciniszyn at intel.com>
Cc: Dennis Dalessandro <dennis.dalessandro at intel.com>
Signed-off-by: Jason Gunthorpe <jgg at
2004 Jul 14
3
ext3 performance with hardware RAID5
I'm setting up a new fileserver. It has two RAID controllers, a PERC 3/DI
providing mirrored system disks and a PERC 3/DC providing a 1TB RAID5 volume
consisting of eight 144GB U160 drives. This will serve NFS, Samba and sftp
clients for about 200 users.
The logical drive was created with the following settings (a short mkfs sketch follows the list):
RAID = 5
stripe size = 32kb
write policy = wrback
read policy =
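One thing worth checking with ext3 on a hardware stripe is aligning the filesystem to it; a hedged sketch with the device name assumed: a 32 KB stripe over 4 KB blocks gives a stride of 8, and with seven data disks in the eight-drive RAID5 a stripe-width of 56 (stripe-width needs a reasonably recent e2fsprogs; older mke2fs only had stride):

mkfs.ext3 -E stride=8,stripe-width=56 /dev/sdb1
tune2fs -l /dev/sdb1 | grep -i 'stride\|stripe'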
2006 May 23
0
Re: [Xen-smartio] Problem with infiniband on Xen domU
Hi Lamia,
Currently there is no direct InfiniBand access support in the Official Xen
trees. You might want
to have a look at our Xen-IB implementation. A preliminary version can be
downloaded here:
http://xenbits.xensource.com/ext/xen-smartio.hg
Some information about its design and applications can be found here:
High Performance VMM-Bypass I/O in Virtual Machines (Usenix 06), and
A Case for
2019 Nov 01
0
[PATCH v2 00/15] Consolidate the mmu notifier interval_tree and locking
On 10/28/19 1:10 PM, Jason Gunthorpe wrote:
> From: Jason Gunthorpe <jgg at mellanox.com>
>
> 8 of the mmu_notifier-using drivers (i915_gem, radeon_mn, umem_odp, hfi1,
> scif_dma, vhost, gntdev, hmm) are using a common pattern where
> they only use invalidate_range_start/end and immediately check the
> invalidating range against some driver data structure to tell
2008 Feb 18
5
kernel-2.6.18-8.1.14 + lustre 1.6.4.2 + OFED 1.2
We seem to have hit a stumbling block when building with the above
(supported) versions. Our process (a rough command sketch follows the list)...
1. Start with stock rhel5 2.6.18-8.1.14 source tree
2. Configure InfiniBand support out of the kernel (we will build
OFED separately).
3. Apply the 1.6.4.2 kernel patches to the kernel source.
4. Build the kernel.
5. Build OFED 1.2 against the patched kernel
6. Build Lustre using
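A very rough command-level sketch of that sequence, with paths, versions and installer names assumed rather than taken from the post:

cd /usr/src/linux-2.6.18-8.1.14              # 1. stock RHEL5 source tree
make menuconfig                              # 2. switch InfiniBand support off
quilt push -a                                # 3. apply the Lustre 1.6.4.2 kernel patch series
make && make modules_install install         # 4. build and install the kernel
cd /usr/src/OFED-1.2 && ./install.pl         # 5. OFED's bundled installer, against the patched kernel
cd /usr/src/lustre-1.6.4.2 && ./configure \
    --with-linux=/usr/src/linux-2.6.18-8.1.14 \
    --with-o2ib=/usr/src/ofa_kernel          # 6. build Lustre against kernel + OFED
make rpms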
2007 Nov 08
1
XEN HVMs on LVM over iSCSI - test results, (crashes) and questions
Hi all,
some results from my configuration.
I have this configuration (I'm not interested in raw top performance but
in reliability... so I can accept slow MB/s and prefer to rely on
RAID6, for example); a rough client-side sketch follows the list:
1 Infortrend iSCSI Array A16E-G2130-4 with:
- 1GB DDR cache
- RAID6
- 7 x 500GB sataII Seagate ST3500630NS with 16mb (no budget for SAS)
- one of the logical volumes (about
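A hedged sketch of the Dom0 side of such a setup (target IQN, portal address and volume names are invented): log in to the iSCSI LUN, then carve logical volumes for the HVM guests out of it:

iscsiadm -m discovery -t sendtargets -p 192.168.1.50
iscsiadm -m node -T iqn.2007-11.com.example:raid6-lv0 -p 192.168.1.50 --login

pvcreate /dev/sdc                  # the LUN as it shows up on the Dom0
vgcreate vg_iscsi /dev/sdc
lvcreate -L 20G -n vm01-disk vg_iscsi
# the guest config then uses: phy:/dev/vg_iscsi/vm01-disk,hda,w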