
Displaying 20 results from an estimated 1000 matches similar to: "Performance testing with sysbench..."

2017 Aug 24
3
NFS versus Fuse file locking problem (NFS works, fuse doesn't...)
Hi This is gluster 3.8.4. Volume options are out of the box. Sharding is off (and I don't think enabling it would matter) I haven't done much performance tuning. For one thing, using a simple script that just creates files I can easily flood the network, so I don't expect a performance issue. The problem we see is that after a certain time the fuse clients completely stop accepting
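The "simple script that just creates files" mentioned above isn't shown in the thread; a minimal sketch of that kind of create-throughput flood, assuming a scratch directory (point `target` at a gluster FUSE mount to reproduce the test; here a local temp directory keeps the sketch runnable):

```python
import os
import tempfile
import time

# Hypothetical target: replace with a path on the gluster FUSE mount to
# measure create throughput over the network; a local temp dir is used here
# so the sketch runs anywhere.
target = tempfile.mkdtemp(prefix="create-flood-")
count, size = 200, 64 * 1024
payload = b"x" * size

start = time.monotonic()
for i in range(count):
    with open(os.path.join(target, f"f{i:05d}"), "wb") as f:
        f.write(payload)
elapsed = time.monotonic() - start

print(f"created {count} files ({count * size / 1e6:.1f} MB) in {elapsed:.2f}s")
```

Raising `count` and `size` toward real workload numbers is what lets a script like this saturate a 10G link, as the poster describes.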
2017 Jun 02
1
File locking...
Hi all, A few questions. - Is POSIX locking enabled when using the native client? I would assume yes. - What other settings/tunables exist when it comes to file locking? Krist -- Vriendelijke Groet | Best Regards | Freundliche Grüße | Cordialement ------------------------------ Krist van Besien | Senior Architect | Red Hat EMEA Cloud Practice | RHCE | RHCSA Open Stack @: krist at
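The native (FUSE) client does pass POSIX advisory locks through to the bricks; a quick way to check locking behaviour on any mount is a small fcntl test. This is a minimal sketch, assuming a POSIX system with `fork` (run it with the temp path replaced by a file on the gluster mount to test the real thing):

```python
import fcntl
import os
import tempfile

# Hypothetical path: swap in a file on the gluster FUSE mount to verify
# locking across the client; a local temp file is used so the demo runs.
path = os.path.join(tempfile.mkdtemp(), "glfs-lock-demo")

with open(path, "w") as f:
    fcntl.lockf(f, fcntl.LOCK_EX)          # exclusive advisory POSIX lock
    pid = os.fork()
    if pid == 0:
        # Child process: a non-blocking attempt must fail while the
        # parent still holds the lock (POSIX locks are per-process).
        with open(path, "w") as g:
            try:
                fcntl.lockf(g, fcntl.LOCK_EX | fcntl.LOCK_NB)
                os._exit(1)                # unexpectedly acquired the lock
            except OSError:
                os._exit(0)                # expected: lock held elsewhere
    _, status = os.waitpid(pid, 0)
    fcntl.lockf(f, fcntl.LOCK_UN)

print("contended" if os.WEXITSTATUS(status) == 0 else "not contended")
```

Running the same check from two different clients is what distinguishes a client-side problem from a server-side one.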
2017 May 31
2
"Another Transaction is in progres..."
Hi all, I am trying to do trivial things, like setting quota, or just querying the status, and keep getting "Another transaction is in progress for <some volume>" These messages pop up, then disappear for a while, then pop up again... What do these messages mean? How do I figure out which "transaction" is meant here, and what do I do about it? Krist -- Vriendelijke
2017 Jun 01
0
"Another Transaction is in progres..."
Thanks for the suggestion, this solved it for us, and we probably found the cause as well. We had Performance Co-Pilot running and it was continuously enabling profiling on volumes... We found the reference to the node that had the lock, and restarted glusterd on that node, and all went well from there on. Krist On 31 May 2017 at 15:56, Vijay Bellur <vbellur at redhat.com> wrote: >
2017 Aug 24
0
NFS versus Fuse file locking problem (NFS works, fuse doesn't...)
Hi Krist, What are your volume options on that setup? Have you tried tuning it for the kind of workload and files size you have? I would definitely do some tests with feature.shard=on/off first. If shard is on, try playing with features.shard-block-size. Do you have jumbo frames (MTU=9000) enabled across the switch and nodes? if you have concurrent clients writing/reading, it could be beneficial
2017 Jul 26
0
Heketi and Geo Replication.
Hello, Is it possible to set up a Heketi-managed gluster cluster in one datacenter, and then have geo-replication for all volumes to a second cluster in another datacenter? I've been looking at that, but haven't really found a recipe/solution for this. Ideally, when a volume is created in cluster1, a slave volume would automatically be created in cluster2, and
2017 Jun 01
0
Disconnected gluster node thinks it is still connected...
Hi all, Trying to do some availability testing. We have three nodes: node1, node2, node3. Volumes are all replica 2, across all three nodes. As a test we disconnected node1, by removing the vlan tag for that host on the switch it is connected to. As a result node2 and node3 now show node1 in disconnected status, and show the volumes as degraded. This is expected. However logging in to node1
2017 Oct 19
0
Trying to remove a brick (with heketi) fails...
Hello, I have a gluster cluster with 4 nodes, that is managed using heketi. I want to test the removal of one node. We have several volumes on it, some with rep=2, others with rep=3. I get the following error: [root at CTYI1458 .ssh]# heketi-cli --user admin --secret "******" node remove 749850f8e5fd23cf6a224b7490499659 Error: Failed to remove device, error: Cannot replace brick
2017 Aug 24
2
NFS versus Fuse file locking problem (NFS works, fuse doesn't...)
Hi all, I usually advise clients to use the native client if at all possible, as it is very robust. But I am running into problems here. In this case the gluster system is used to store video streams. Basically the setup is the following: - A gluster cluster of 3 nodes, with ample storage. They export several volumes. - The network is 10GB, switched. - A "recording server" which
2017 Aug 25
0
NFS versus Fuse file locking problem (NFS works, fuse doesn't...)
On Thu, Aug 24, 2017 at 9:01 AM, Krist van Besien <krist at redhat.com> wrote: > Hi > This is gluster 3.8.4. Volume options are out of the box. Sharding is off > (and I don't think enabling it would matter) > > I haven't done much performance tuning. For one thing, using a simple > script that just creates files I can easily flood the network, so I don't >
2017 Jun 14
0
Transport Endpoint Not connected while running sysbench on Gluster Volume
Also, this is the profile output of this Volume: gluster> volume profile mariadb_gluster_volume info cumulative Brick: laeft-dccdb01p.core.epay.us.loc:/export/mariadb_backup/brick ------------------------------------------------------------------- Cumulative Stats: Block Size: 16384b+ 32768b+ 65536b+ No. of Reads: 0 0 0
2017 Jun 13
2
Transport Endpoint Not connected while running sysbench on Gluster Volume
I'm having a hard time trying to get a gluster volume up and running. I have set up other gluster volumes on other systems without many problems, but this one is killing me. The gluster volume was created with the command: gluster volume create mariadb_gluster_volume laeft-dccdb01p:/export/mariadb/brick I had to lower frame-timeout since the system would become unresponsive until the frame failed
2017 Jun 15
1
Transport Endpoint Not connected while running sysbench on Gluster Volume
<re added gluster users, it looks like it was dropped from your email> ----- Original Message ----- > From: "Julio Guevara" <julioguevara150 at gmail.com> > To: "Ben Turner" <bturner at redhat.com> > Sent: Thursday, June 15, 2017 5:52:26 PM > Subject: Re: [Gluster-users] Transport Endpoint Not connected while running sysbench on Gluster Volume
2017 Jul 04
2
I need a sanity check.
2006 Oct 20
4
tcpsnoop problem
Hello, I have the following problem: On a Solaris 10 machine, with 5 "zones", there is a process that is talking to the wrong db server. I need to find out which process this is, so I can analyze this further. I have tried doing this using tcpsnoop from the DTrace toolkit, but without success. This is what I've done. First I started tcpsnoop, dumping its output
2009 Jan 28
0
smp_tlb_shootdown bottleneck?
Hi. Sometimes I see much contention in smp_tlb_shootdown while running sysbench: sysbench --test=fileio --num-threads=8 --file-test-mode=rndrd --file-total-size=3G run kern.smp.cpus: 8 FreeBSD 7.1-R CPU: 0.8% user, 0.0% nice, 93.8% system, 0.0% interrupt, 5.4% idle Mem: 11M Active, 2873M Inact, 282M Wired, 8K Cache, 214M Buf, 765M Free Swap: 4096M Total, 4096M Free PID USERNAME PRI NICE
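The sysbench invocation above (`--file-test-mode=rndrd`, 8 threads) hammers a file set with concurrent random reads. As a rough, scaled-down stand-in for that access pattern, here is a minimal sketch: several threads issuing random block-aligned `pread` calls against one test file (file size, block size, and thread count are placeholders, far smaller than sysbench's `--file-total-size=3G`):

```python
import os
import random
import tempfile
import threading

# Scaled-down analogue of sysbench fileio rndrd: 4 threads, random 16 KiB
# reads over a 1 MiB test file. Real runs would use much larger sizes.
block = 16 * 1024
blocks_in_file = 64
fname = tempfile.mktemp(prefix="rndrd-")
with open(fname, "wb") as f:
    f.write(os.urandom(block * blocks_in_file))

done = []

def worker(n):
    fd = os.open(fname, os.O_RDONLY)
    for _ in range(n):
        # Read one random block; pread keeps each call offset-independent,
        # so threads never interfere with a shared file position.
        os.pread(fd, block, random.randrange(blocks_in_file) * block)
    os.close(fd)
    done.append(n)

threads = [threading.Thread(target=worker, args=(50,)) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(f"completed {sum(done)} random reads")
os.unlink(fname)
```

On a multi-socket box, each thread touching scattered pages like this is exactly the pattern that triggers cross-CPU TLB shootdown traffic.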
2014 Jan 16
2
Your opinion about RHCSA certification
Hello to all, I'm currently studying (and collecting notes here https://github.com/fdicarlo/RHCSA_cs) for RHCSA. My plan is to go RHCSA -> RHCE and then RHCSS. What I want to ask you is: - What do you think about it? - Did you find it useful? - Do you have any advice? Best regards, Fabrizio -- "The intuitive mind is a sacred gift and the rational mind is a faithful servant. We have
2004 Aug 06
0
Re: [Flac-dev] Unified codec interface
That's what UCI is trying to do. I'm hoping for just a simple unified Ogg interface for the audio codecs that Ogg supports. -dwh- On 31 Jan 2003, Csillag Kristóf wrote: > Here is what I imagined (just vague thoughts, nothing polished): > > Let's suppose we have a hypothetical library called > "Free Universal Codec Kit" - ..um...well.. Frunick for short :)
2016 Apr 04
2
Free Redhat Linux (rhel) version 7.2
Yes, this helps at least "single" developers and people that are training for rhce / rhcsa exam.. br, -- Eero 2016-04-04 17:16 GMT+03:00 Mohammed Zeeshan <mohammed.zee1000 at gmail.com>: > On Mon, Apr 4, 2016 at 7:36 PM, Valeri Galtsev <galtsev at kicp.uchicago.edu> > wrote: > > > > > On Mon, April 4, 2016 8:53 am, Johnny Hughes wrote: > > >
2012 Apr 17
2
Kernel bug in BTRFS (kernel 3.3.0)
Hi, Doing some extensive benchmarks on BTRFS, I encountered a kernel bug in BTRFS (as reported in dmesg). Maybe the information below can help you make btrfs better. Situation: Doing an intensive sequential write on a SAS 3TB disk drive (SEAGATE ST33000652SS) with 128 threads with Sysbench. Device is connected through an HBA. Blocksize was 256k; Kernel is 3.3.0 (x86_64); Btrfs is version