similar to: Hi all

Displaying 20 results from an estimated 100 matches similar to: "Hi all"

2017 Aug 13
0
throughput question
Hi everybody, I have a question about throughput for a glusterfs volume. I have 3 servers for glusterfs, each with one brick and 1GbE for their network, and I have made a distributed replica 3 volume with these 3 bricks. The network between the clients and the servers is also 1GbE. Referring to this link: https://s3.amazonaws.com/aws001/guided_trek/Performance_in_a_Gluster_Systemv6F.pdf I have setup
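For reference, a three-server, one-brick-per-server replica 3 volume like the one described above is typically built along these lines; the hostnames, brick paths, and volume name are placeholders, not taken from the original post:

    # probe the other two servers from server1
    gluster peer probe server2
    gluster peer probe server3
    # create and start a 3-way replicated volume, one brick per server
    gluster volume create gv0 replica 3 server1:/data/brick1/gv0 server2:/data/brick1/gv0 server3:/data/brick1/gv0
    gluster volume start gv0
    # on a client, mount over the 1GbE link
    mount -t glusterfs server1:/gv0 /mnt/gv0

With replica 3, each client write is sent to all three bricks, so write throughput on a 1GbE client link tops out at roughly one third of the line rate.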
2017 Jul 18
1
License for product
Hi all, I've been developing a product using glusterfs within my team. I want to understand the licensing issues when I ship glusterfs as part of a product. Q1. What should I do when I use a version of glusterfs that I have modified myself? Q2. I've developed management and monitoring software for gluster and related tooling for the cluster system. Do I have to open this source code? Thanks in advance.
2007 Oct 08
16
Fileserver performance tests
Hi all, I want to replace a bunch of Apple Xserves with Xraids and HFS+ (brr) with Sun x4200s with SAS JBODs and ZFS. The application will be the Helios UB+ fileserver suite. I installed the latest Solaris 10 on an x4200 with 8 GB of RAM and two Sun SAS controllers, attached two SAS JBODs with 8 SATA HDDs each, and created a ZFS pool as a RAID 10 by doing something like the following: zpool create
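The zpool command is cut off above; a mirrored, RAID-10-style layout over two 8-disk JBODs would usually look roughly like the sketch below, with each mirror pairing one disk from each JBOD. The pool name and device names are illustrative only:

    # striped mirrors ("RAID 10"): each vdev mirrors a disk from JBOD 1 with one from JBOD 2
    zpool create tank \
        mirror c1t0d0 c2t0d0 \
        mirror c1t1d0 c2t1d0 \
        mirror c1t2d0 c2t2d0 \
        mirror c1t3d0 c2t3d0
    zpool status tank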
2010 Mar 02
9
Filebench Performance is weird
Greetings all, I am using the Filebench benchmark in interactive mode to test ZFS performance with a randomread workload. My Filebench settings and run results are as follows: filebench> set $filesize=5g filebench> set $dir=/hdd/fs32k filebench> set $iosize=32k filebench> set
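The excerpt breaks off mid-session; a typical interactive randomread run with those settings would continue roughly as follows. The load step, the remaining variables, and the 60-second run length are assumptions for illustration, not taken from the original post:

    filebench> load randomread
    filebench> set $dir=/hdd/fs32k
    filebench> set $filesize=5g
    filebench> set $iosize=32k
    filebench> set $nthreads=1
    filebench> run 60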
2007 Oct 30
2
[osol-help] Squid Cache on a ZFS file system
On 29/10/2007, Tek Bahadur Limbu <teklimbu at wlink.com.np> wrote: > I created a ZFS file system like the following with /mypool/cache being > the partition for the Squid cache: > > 18:51:27 root at solaris:~$ zfs list > NAME USED AVAIL REFER MOUNTPOINT > mypool 478M 31.0G 10.0M /mypool > mypool/cache 230M 9.78G 230M
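For context, a dedicated Squid cache filesystem like mypool/cache above is normally created and tuned with a handful of zfs commands; the quota and property values below are illustrative guesses (the roughly 9.78G AVAIL shown for mypool/cache suggests a quota of about 10G), not settings confirmed by the thread:

    zfs create mypool/cache
    zfs set quota=10G mypool/cache      # cap the cache filesystem
    zfs set atime=off mypool/cache      # skip access-time updates on every cache hit
    zfs set recordsize=8K mypool/cache  # smaller records suit many small cached objects
    zfs list -r mypool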
2009 Apr 23
1
ZFS SMI vs EFI performance using filebench
I have been testing the performance of ZFS vs. UFS using filebench. The setup is a V240 with 4GB RAM, 2 CPUs at 1503MHz, one 320GB SAN-attached LUN, and a ZFS mirrored root disk. Our SAN is a top-notch NVRAM-based SAN. There are lots of discussions about using ZFS with SAN-based storage, and it seems ZFS is designed to perform best with dumb disks (JBODs). The tests I ran support this observation, and
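For reference, the SMI vs EFI distinction in the subject usually comes down to whether the pool is given a whole disk or a slice; a minimal sketch with placeholder device names:

    # whole disk: ZFS writes an EFI label and can manage the disk write cache itself
    zpool create tank c2t0d0
    # slice on an SMI (VTOC) labelled disk: ZFS uses s0 and leaves the write cache alone
    zpool create tank c2t0d0s0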
2006 Nov 03
2
Filebench, X4200 and Sun Storagetek 6140
Hi there, I'm busy with some tests on the above hardware and will post some scores soon. For those that do _not_ have the above available for tests, I'm open to suggestions on potential configs that I could run for you. Pop me a mail if you want something specific _or_ you have suggestions concerning the filebench (varmail) config setup. Cheers
2014 Aug 21
2
[PATCH] vhost: Add polling mode
"Michael S. Tsirkin" <mst at redhat.com> wrote on 20/08/2014 01:57:10 PM: > > Results: > > > > Netperf, 1 vm: > > The polling patch improved throughput by ~33% (1516 MB/sec -> 2046 MB/sec). > > Number of exits/sec decreased 6x. > > The same improvement was shown when I tested with 3 vms running netperf > > (4086 MB/sec -> 5545
2014 Aug 20
0
[PATCH] vhost: Add polling mode
On 10/08/14 10:30, Razya Ladelsky wrote: > From: Razya Ladelsky <razya at il.ibm.com> > Date: Thu, 31 Jul 2014 09:47:20 +0300 > Subject: [PATCH] vhost: Add polling mode > > When vhost is waiting for buffers from the guest driver (e.g., more packets to > send in vhost-net's transmit queue), it normally goes to sleep and waits for the > guest to "kick" it.
2014 Aug 21
0
[PATCH] vhost: Add polling mode
From: Razya Ladelsky > "Michael S. Tsirkin" <mst at redhat.com> wrote on 20/08/2014 01:57:10 PM: > > > > Results: > > > > > > Netperf, 1 vm: > > > The polling patch improved throughput by ~33% (1516 MB/sec -> 2046 MB/sec). > > > Number of exits/sec decreased 6x. > > > The same improvement was shown when I tested with 3
2017 Oct 24
2
active-active georeplication?
Halo replication [1] could be of interest here. This functionality has been available since 3.11, and the current plan is to have it fully supported in a 4.x release. Note that Halo replication is built on existing synchronous replication in Gluster and differs from the current geo-replication implementation. Kotresh's response is spot on for the current geo-replication implementation. Regards,
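As a rough illustration, Halo replication is switched on per volume with cluster.halo-* options on top of an existing replicated volume; the option names below are the ones introduced with the Halo feature, but the values are arbitrary examples and should be checked against gluster volume set help on your release:

    gluster volume set <volname> cluster.halo-enabled yes
    gluster volume set <volname> cluster.halo-max-latency 10    # ms; bricks slower than this are left out of the halo
    gluster volume set <volname> cluster.halo-min-replicas 2    # minimum number of synchronous copies to keep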
2007 Nov 29
10
ZFS write time performance question
Hi, this is a ZFS performance question in regard to SAN traffic. We are trying to benchmark ZFS vs VxFS file systems and I get the following performance results. Test setup: Solaris 10 11/06, dual-port Qlogic HBA with SFCSM (for ZFS) and DMP (for VxFS), Sun Fire V490 server, LSI RAID 3994 on the backend. ZFS record size: 128KB (default). VxFS block size: 8KB (default). The only thing
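One tuning step that usually matters in this kind of ZFS vs VxFS comparison is aligning the ZFS recordsize with the I/O size or the VxFS block size; a small hedged example, with a placeholder dataset name:

    # match the 8KB VxFS block size used on the other side of the comparison
    zfs set recordsize=8K tank/bench
    zfs get recordsize tank/bench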
2017 Oct 24
2
active-active georeplication?
Hi everybody, has glusterfs released a feature named active-active georeplication? If yes, in which version was it released? If not, is this feature planned?
2014 Aug 10
7
[PATCH] vhost: Add polling mode
From: Razya Ladelsky <razya at il.ibm.com> Date: Thu, 31 Jul 2014 09:47:20 +0300 Subject: [PATCH] vhost: Add polling mode When vhost is waiting for buffers from the guest driver (e.g., more packets to send in vhost-net's transmit queue), it normally goes to sleep and waits for the guest to "kick" it. This kick involves a PIO in the guest, and therefore an exit (and possibly
2014 Aug 10
0
[PATCH] vhost: Add polling mode
On Sun, Aug 10, 2014 at 11:30:35AM +0300, Razya Ladelsky wrote: > From: Razya Ladelsky <razya at il.ibm.com> > Date: Thu, 31 Jul 2014 09:47:20 +0300 > Subject: [PATCH] vhost: Add polling mode > > When vhost is waiting for buffers from the guest driver (e.g., more packets to > send in vhost-net's transmit queue), it normally goes to sleep and waits for the > guest to
2017 Oct 24
0
active-active georeplication?
Thanks for the reply, that was very interesting to me. How can I keep up with news about new glusterfs features? On Tue, Oct 24, 2017 at 5:54 PM, Vijay Bellur <vbellur at redhat.com> wrote: > > Halo replication [1] could be of interest here. This functionality is > available since 3.11 and the current plan is to have it fully supported in > a 4.x release. > > Note that Halo
2014 Aug 20
0
[PATCH] vhost: Add polling mode
On Sun, Aug 10, 2014 at 11:30:35AM +0300, Razya Ladelsky wrote: > From: Razya Ladelsky <razya at il.ibm.com> > Date: Thu, 31 Jul 2014 09:47:20 +0300 > Subject: [PATCH] vhost: Add polling mode > > When vhost is waiting for buffers from the guest driver (e.g., more packets to > send in vhost-net's transmit queue), it normally goes to sleep and waits for the > guest to
2018 Feb 04
2
halo not work as desired!!!
I have 2 data centers in two different regions, each DC has 3 servers, and I have created a glusterfs volume with replica 4. This is the gluster volume info output: Volume Name: test-halo Type: Replicate Status: Started Snapshot Count: 0 Number of Bricks: 1 x 4 = 4 Transport-type: tcp Bricks: Brick1: 10.0.0.1:/mnt/test1 Brick2: 10.0.0.3:/mnt/test2 Brick3: 10.0.0.5:/mnt/test3 Brick4: 10.0.0.6:/mnt/test4