Displaying 20 results from an estimated 3000 matches similar to: "dbench"
2018 Apr 22
4
Reconstructing files from shards
On Sun 22 Apr 2018 at 10:46, Alessandro Briosi <ab1 at metalit.com> wrote:
> IMHO the easiest path would be to turn off sharding on the volume and
> simply do a copy of the files (to a different directory, or rename and
> then copy, for instance).
>
> This should simply store the files without sharding.
>
If you turn off sharding on a sharded volume with data in it, all sharded
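For what it's worth, a rough sketch of manual reconstruction from a brick, assuming the usual sharded layout (first block at the file's normal path, later blocks under .shard/ named <GFID>.N); every path and name below is a hypothetical example:
# On one brick of a healthy replica (hypothetical paths)
BASE=/data/brick1/images/disk.img
GFID=6bb6...                       # from: getfattr -n trusted.gfid -e hex "$BASE", re-dashed
cp --sparse=always "$BASE" /tmp/disk-recovered.img
cd /data/brick1/.shard
for shard in $(ls ${GFID}.* | sort -t. -k2 -n); do   # shards in numeric order
    cat "$shard" >> /tmp/disk-recovered.img
done
# Caveat: shard files that were never written correspond to holes in sparse
# files, so a plain concatenation like this only works when every shard exists.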
2017 Jun 30
3
Very slow performance on Sharded GlusterFS
I already tried 512MB, but I retried it just now and the results are the same, both without tuning;
Stripe 2 replica 2: dd performs ~250 MB/s but shard gives 77 MB/s.
I attached two logs (shard and stripe logs).
Note: I also noticed that you said "order". Do you mean that when we create via volume set
we have to put the bricks in a particular order? I thought gluster handles that (and does the math) itself.
Gencer
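For reference, the kind of test being compared here is typically along these lines (the mount point is a hypothetical example; oflag/iflag=direct bypass the client-side page cache so the volume itself is measured):
# Sequential write and read throughput on the FUSE mount (hypothetical path)
dd if=/dev/zero of=/mnt/glustervol/ddtest.bin bs=1M count=4096 oflag=direct
dd if=/mnt/glustervol/ddtest.bin of=/dev/null bs=1M iflag=direct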
2018 Apr 23
1
Reconstructing files from shards
2018-04-23 9:34 GMT+02:00 Alessandro Briosi <ab1 at metalit.com>:
> Is that really so?
Yes, I've opened a bug asking the developers to block removal of sharding
when the volume has data on it, or to write a huge warning message
saying that data loss will happen.
> I thought that sharding was an extended attribute on the files created when
> sharding is enabled.
>
> Turning off
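To see what the shard translator actually records per file, the xattrs can be dumped on a brick; a minimal sketch with a hypothetical brick path (the data blocks themselves live under the brick's .shard/ directory, not in the xattrs):
# Dump all xattrs of a sharded file directly on a brick (hypothetical path)
getfattr -d -m . -e hex /data/brick1/images/disk.img
# Typically shows trusted.gfid plus shard metadata such as
# trusted.glusterfs.shard.block-size and trusted.glusterfs.shard.file-size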
2017 Oct 05
2
data corruption - any update?
On 4 October 2017 at 23:34, WK <wkmail at bneit.com> wrote:
> Just so I know.
>
> Is it correct to assume that this corruption issue is ONLY involved if you
> are doing rebalancing with sharding enabled?
>
> So if I am not doing rebalancing I should be fine?
>
That is correct.
> -bill
> On 10/3/2017 10:30 PM, Krutika Dhananjay wrote:
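For clarity, the "rebalancing" in question is the usual expand-then-rebalance sequence; a minimal sketch with hypothetical volume, host and brick names:
# Add a replica pair to the volume, then redistribute existing data
gluster volume add-brick myvol replica 2 node3:/bricks/b1 node4:/bricks/b1
gluster volume rebalance myvol start
gluster volume rebalance myvol status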
2012 Apr 25
1
dbench & similar - as a valid benchmark
Hi everybody,
would a tool such as dbench be a valid benchmark for gluster?
And, most importantly, is there any formula to estimate the raw-fs-to-gluster
performance ratio for different setups?
For instance:
having a replicated volume, two bricks, FUSE mountpoint to the
volume via a non-congested 1Gbps link,
or even
a volume on a single brick with a FUSE client mountpoint locally:
what percentage/fraction of raw
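One empirical way to get such a ratio is to run the same dbench load against the raw brick filesystem and against the FUSE mount and compare the reported throughput; a sketch with hypothetical paths, 16 clients, 60-second runs:
# Same load on the raw brick fs and on the gluster FUSE mount
dbench -D /data/brick1/benchdir -t 60 16
dbench -D /mnt/glustervol/benchdir -t 60 16
# The ratio of the two throughput figures gives a rough raw-fs:gluster factor
# for that particular setup.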
2018 May 30
2
shard corruption bug
What shard corruption bug? Bugzilla URL? I'm running into some odd behavior
in my lab with shards and RHEV/KVM data, trying to figure out if it's
related.
Thanks.
On Fri, May 4, 2018 at 11:13 AM, Jim Kinney <jim.kinney at gmail.com> wrote:
> I upgraded my ovirt stack to 3.12.9, added a brick to a volume and left it
> to settle. No problems. I am now running replica 4
2018 May 04
2
shard corruption bug
On Fri 4 May 2018 at 14:06, Jim Kinney <jim.kinney at gmail.com> wrote:
> It stopped being an outstanding issue at 3.12.7. I think it's now fixed.
So, is it not possible to extend and rebalance a working cluster with sharded
data?
Can someone confirm this? Maybe the ones that hit the bug in the past
2008 Aug 07
4
Xen performance and Dbench
I saw the presentation "Virtualization of Linux Servers" at
OLS last month and it had some nice comparisons of Xen
performance vs a lot of other virtualization/container
technologies:
http://ols.fedoraproject.org/OLS/Reprints-2008/camargos-reprint.pdf
As always with benchmarks, there are questions to ask and
points to quibble, but overall Xen looks quite good...
except on Dbench. Has
2017 Oct 04
2
data corruption - any update?
On Wed, Oct 4, 2017 at 10:51 AM, Nithya Balachandran <nbalacha at redhat.com>
wrote:
> On 3 October 2017 at 13:27, Gandalf Corvotempesta
> <gandalf.corvotempesta at gmail.com> wrote:
>> Any update about multiple bugs regarding data corruption with
>> sharding enabled?
>> Is 3.12.1 ready to be used in production?
2017 Oct 03
2
data corruption - any update?
Any update about multiple bugs regarding data corruption with
sharding enabled?
Is 3.12.1 ready to be used in production?
2018 May 04
2
shard corruption bug
Hi all,
is the "famous" corruption bug when sharding is enabled fixed, or is it still
a work in progress?
2017 Sep 08
4
GlusterFS as virtual machine storage
Gandalf, SIGKILL (killall -9 glusterfsd) did not stop I/O after a few
minutes. SIGTERM, on the other hand, causes a crash, but this time it is
not a read-only remount, but around 10 IOPS tops and 2 IOPS on average.
-ps
On Fri, Sep 8, 2017 at 1:56 PM, Diego Remolina <dijuremo at gmail.com> wrote:
> I currently only have a Windows 2012 R2 server VM in testing on top of
> the gluster storage,
2018 May 04
0
shard corruption bug
I upgraded my ovirt stack to 3.12.9, added a brick to a volume and left
it to settle. No problems. I am now running replica 4 (preparing to
remove a brick and host to replica 3).
On Fri, 2018-05-04 at 14:24 +0000, Gandalf Corvotempesta wrote:
> On Fri 4 May 2018 at 14:06, Jim Kinney <jim.kinney at gmail.com> wrote:
> > It stopped being an outstanding
2012 Apr 27
1
geo-replication and rsync
Hi,
can someone tell me the difference between geo-replication and plain rsync?
At what frequency are files replicated with geo-replication?
2017 Oct 10
2
small files performance
2017-10-10 8:25 GMT+02:00 Karan Sandha <ksandha at redhat.com>:
> Hi Gandalf,
>
> We have multiple tunings for small files which decrease the time for
> negative lookups: metadata caching and parallel readdir. Bumping the server
> and client event threads will also help you increase small-file
> performance.
>
> gluster v set <vol-name> group
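The settings referred to above map to stock GlusterFS options roughly as follows; the volume name is a hypothetical example and the exact group name in the truncated command above is not shown, so treat this as an illustrative sketch:
# Predefined option groups for metadata caching and negative-lookup caching
gluster volume set myvol group metadata-cache
gluster volume set myvol group nl-cache
# Parallel readdir and more event threads on client and server
gluster volume set myvol performance.parallel-readdir on
gluster volume set myvol client.event-threads 4
gluster volume set myvol server.event-threads 4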
2018 May 30
0
shard corruption bug
https://docs.gluster.org/en/latest/release-notes/3.12.6/
The major issue in 3.12.6 is not present in 3.12.7. The Bugzilla ID is listed in the link.
On May 29, 2018 8:50:56 PM EDT, Dan Lavu <dan at redhat.com> wrote:
>What shard corruption bug? Bugzilla URL? I'm running into some odd
>behavior in my lab with shards and RHEV/KVM data, trying to figure out
>if it's related.
2017 Sep 08
0
GlusterFS as virtual machine storage
2017-09-08 14:11 GMT+02:00 Pavel Szalbot <pavel.szalbot at gmail.com>:
> Gandalf, SIGKILL (killall -9 glusterfsd) did not stop I/O after a few
> minutes. SIGTERM, on the other hand, causes a crash, but this time it is
> not a read-only remount, but around 10 IOPS tops and 2 IOPS on average.
> -ps
So, it seems to be resilient to server crashes but not to server shutdowns :)
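A rough outline of the failure-injection test being described, with the kill commands run on one storage node and a hypothetical I/O probe inside a guest:
# Simulated crash: hard-kill the brick processes on one node (SIGKILL)
killall -9 glusterfsd
# Versus a graceful stop (killall sends SIGTERM by default), which is what
# a normal shutdown delivers
killall glusterfsd
# Meanwhile, keep an I/O probe running inside a VM backed by the volume, e.g.:
dd if=/dev/zero of=/var/tmp/io-probe bs=4k count=100000 oflag=direct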
2017 Jun 29
4
How to shutdown a node properly ?
Doesn't the init.d/systemd script kill gluster automatically on
reboot/shutdown?
On 29 Jun 2017 at 5:16 PM, "Ravishankar N" <ravishankar at redhat.com> wrote:
> On 06/29/2017 08:31 PM, Renaud Fortier wrote:
>
> Hi,
>
> Every time I shut down a node, I lose access (from clients) to the volumes
> for 42 seconds (network.ping-timeout). Is there a special way to
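The two knobs usually mentioned for this situation are lowering the timeout and stopping the brick processes before the node goes down so clients drop the connection immediately; a sketch with a hypothetical volume name (the helper script path depends on the packaging):
# Lower the client-side timeout from its 42-second default
gluster volume set myvol network.ping-timeout 10
# Before rebooting a node, stop all gluster processes explicitly
/usr/share/glusterfs/scripts/stop-all-gluster-processes.sh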
2017 Sep 08
3
GlusterFS as virtual machine storage
2017-09-08 13:44 GMT+02:00 Pavel Szalbot <pavel.szalbot at gmail.com>:
> I did not test SIGKILL because I suppose if graceful exit is bad, SIGKILL
> will be as well. This assumption might be wrong. So I will test it. It would
> be interesting to see the client keep working in case of a crash (SIGKILL) and
> not in case of a graceful exit of glusterfsd.
Exactly. If this happens, probably there
2011 Aug 01
0
dbench strange results
Hi
I'm building a new Samba server (on Debian 6.0, software RAID10 2TB, Xeon
CPU). Generally everything is working fine, so I decided to run
some stress tests. My choice was dbench. The old server is Debian 4.0 (Samba
3.0.24, Athlon 3000+, one ATA 160GB disk). So I ran
dbench 16
on both the old and the new server.
The results are strange:
old server: about 300 MB/s (dbench 3.0)
and below are the first